The types of neurons most commonly used in artificial neural networks convert an input stimulus into an output by means of a so-called activation function. The output signal is then transmitted via a synapse (link) to the following neuron. In this transmission, the signal can be amplified or weakened by a weight (w), and its sign can even be reversed. The activation functions are chosen such that real numbers can be used as input stimuli and that the signal emitted by the neuron is likewise a real number. The activation function normally ensures that the emitted signal falls within a well-defined range.
An example is the frequently used hyperbolic tangent activation function y=a*tanh(b*x), where x represents the input stimulus. The constants a and b define the possible output range (a) as well as the slope (b).
Fig. 1: Hyperbolic tangent activation function: the constants a and b define the possible output range (a) as well as the slope (b). The neuron transforms the stimulus in a nonlinear way, an essential precondition for solving a variety of highly complex problems.
As a rule, these types of activation functions transform the input stimulus into an output signal in a nonlinear way. A real input signal such as a time series can be fed directly into the network as long as it falls within a meaningful range in terms of the activation function. This is achieved by transforming and normalizing the time series. Thus, for example, the time series of the closing prices of a security can first be expressed as day-to-day percentage changes. The oscillator generated in this manner can then be normalized so that all of its values lie between +1.8 and -1.8, which are optimal values for neurons with the tanh activation function with constants a=1.7159 and b=2/3.
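A minimal sketch of this preprocessing, assuming a simple peak-based rescaling (the article does not specify the exact normalization scheme; the price series and the `normalize` helper are invented for illustration, while a = 1.7159 and b = 2/3 are the constants given above):

```python
import numpy as np

def tanh_activation(x, a=1.7159, b=2.0 / 3.0):
    """Hyperbolic tangent activation y = a * tanh(b * x)."""
    return a * np.tanh(b * x)

def normalize(closes, bound=1.8):
    """Express a close series as day-to-day percentage changes and
    rescale so that all values lie within [-bound, +bound]."""
    pct = 100.0 * (closes[1:] - closes[:-1]) / closes[:-1]
    peak = np.max(np.abs(pct))
    return pct if peak == 0 else pct * (bound / peak)

# Invented close prices, purely for illustration.
closes = np.array([100.0, 102.0, 101.0, 104.0, 103.0])
x = normalize(closes)
y = tanh_activation(x)
```

By construction the normalized stimuli stay within ±1.8 and the neuron's output within ±a.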
It is obvious that such artificial neurons provide only an inadequate model of the activities in an organic brain. A neuron in our brain does not perform multiplications and cannot compute a tanh function. A newer type of artificial neuron, however, the spiking neuron, comes much closer to the operation of biological neurons.
While neurons with continuous activation functions transform a real input signal into a real output signal and thus act merely in one dimension, spiking neurons are event-driven transformers. If a stimulus crosses a certain threshold value, the neuron triggers an event: the emission of a so-called spike. This event has several consequences.
- With an optional time lag, the neuron emits an output signal (postsynaptic potential = PSP), which is transmitted to the following neurons via the synapse. Initially, the PSP rises, then reaches a maximum value and finally converges toward zero. Thus, the event has a time-dependent, lasting effect on the following neurons.
- In the event of a spike, the neuron resists the input stimulus for a certain time in order to prevent the immediate occurrence of another spike. Thus the spiking (or firing) of the neuron additionally has a time-dependent influence on the neuron’s own receptivity to stimuli.
Fig. 2: The response pattern of a spiking neuron differs essentially from that of a neuron with a sigmoidal activation function, as represented in Fig. 1. Whether a signal (PSP) is emitted depends on whether the neuron has previously generated a spike. The level of the potential issued is determined by various time constants as well as by the time that has passed since the occurrence of the last spike. Accordingly, a spiking neuron – in contrast to a sigmoidal neuron – does not provide a continuous output.
It is clear that, in contrast to the neurons described at the outset, spiking neurons operate in two dimensions: they transform a stimulus into events (spikes) and event reactions (PSPs) in a process in which time plays a central role. Neurons with the activation functions described at the outset provide a permanent output signal, which in each case is directly and with mathematical accuracy linked to the input signal. A spiking neuron, however, reacts differently: it delivers an output signal (PSP) only if a certain stimulus level has been reached and thus a spike has been triggered. The level of the output signal then depends on the time elapsed since the last spike.
For the sake of simplicity, we will begin by considering a single neuron. Two functions must be used for a mathematical description of the desired behavior.
- Response Kernel
A postsynaptic potential (PSP) issued in reaction to a spike:
For the calculation of the response kernel, the time interval s since the last spike, minus an optional synaptic delay Δ, is required. The time constant τ defines the time span over which the PSP is emitted: the larger τ, the longer the effect of a spike in the form of a PSP on the following neurons. The function H(s) is a step function: it is 1 for s>0, otherwise 0. Thus the neuron generates a PSP as output signal only if
- a spike has occurred and
- the time elapsed since the last spike is greater than the optional synaptic delay.
Fig. 2 shows an example of what a PSP according to this response kernel with τ = 50 would look like.
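The qualitative behavior described above (rise, maximum, decay toward zero, gating by H) can be sketched with an α-function, a common choice for PSP shapes in the spike-response literature. Whether this matches the formula originally shown is an assumption; the names `response_kernel`, `tau` and `delay` are illustrative:

```python
import numpy as np

def H(s):
    """Heaviside step function: 1 for s > 0, otherwise 0."""
    return np.where(s > 0, 1.0, 0.0)

def response_kernel(s, tau=50.0, delay=0.0):
    """Alpha-shaped PSP: rises, peaks at s = delay + tau, then decays
    toward 0. The exact functional form is an assumption; only its
    qualitative behavior is fixed by the text."""
    t = (s - delay) / tau
    return t * np.exp(1.0 - t) * H(s - delay)
```

With τ = 50 the PSP peaks at s = 50; a synaptic delay Δ > 0 simply shifts the whole curve to the right.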
- Resistance Kernel
A neuron’s resistance to the input stimulus in reaction to its own spikes:
Here too, the time span s since the last spike is required for the calculation. The threshold constant ϑ indicates the maximal charge potential at which the neuron generates a spike. The time constant τ_r controls the time span during which the neuron is unable to generate a further spike. In this formula too, H(s) stands for a step function, the value of which is 1 for s>0, otherwise 0. Fig. 2 shows the effect of the resistance kernel on the charge potential of the neuron: immediately following a spike, the potential is reset and drops for a certain time to a negative level (the refractory potential) before decaying back toward 0.
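A sketch of this refractory behavior, assuming an exponential form, which is a common choice; the text fixes only the qualitative behavior (reset, temporary negative refractory potential), so the exact formula and the names `resistance_kernel`, `theta`, `tau_r` are assumptions:

```python
import numpy as np

def H(s):
    """Heaviside step function: 1 for s > 0, otherwise 0."""
    return np.where(s > 0, 1.0, 0.0)

def resistance_kernel(s, theta=1.0, tau_r=20.0):
    """Refractory contribution after a spike: an immediate negative
    deflection of size theta that fades exponentially with tau_r,
    temporarily raising the neuron's effective firing threshold."""
    return -theta * np.exp(-s / tau_r) * H(s)
```

Adding this negative term to the charge potential makes an immediate second spike impossible and a later one progressively easier, as described above.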
Fig. 3: A spiking neuron is optionally able to delay the emission of a PSP. The time lag Δ can be implemented as a constant or as an adjustable and hence learnable quantity. Depending on Δ, the PSP rises with a time lag but is otherwise identical to a PSP at Δ = 0. Since Δ is a delay variable, it must be positive.
The PSP of a spiking neuron is normally transported via a synapse (link) to a following neuron. The PSP is weighted in this process. The weight (which in this case is also called synaptic efficacy) increases or decreases the amplitude of the PSP issued, but not its form. Fig. 4 shows the effect of a positive weight greater than 1 on a PSP.
Fig. 4: The PSP emitted by a neuron is amplified or weakened by the synapse. The weight (also called synaptic efficacy) changes only the amplitude of the PSP, not its duration or the time of its onset. A positive weight w>0 produces an excitatory PSP (= EPSP). A negative weight, by contrast, produces an inhibitory PSP (= IPSP), which has the same form as the EPSP depicted, but is mirrored downward about the zero axis.
While the synaptic weight changes only the amplitude of the PSP, the time constant τ influences the duration of the PSP emitted, as shown in Fig. 5. The larger the chosen τ, the later the PSP reaches its maximum value and the longer it remains within a mathematically relevant range (> 0). (From a mathematical point of view, the function suggested for the response kernel implies that the effect of a spike converges toward 0 over time without ever becoming 0. After a time span somewhat greater than τ, however, the fading reaction falls within a negligible range. In the implementation, τ must be chosen such that it is larger than the time span of the phenomenon to be represented.)
Fig. 5: The response time constant τ affects the duration of the PSP emitted. The larger the chosen τ, the longer the neuron will, after a spike, generate a reaction signal that could trigger a spike in the following neuron. Depending on the problem to be solved, τ must be chosen such that a spike can still occur in the following neuron (exceeding the threshold value ϑ). If the temporal behavior of the input signal is unknown, so that τ cannot be chosen with certainty, so-called reference neurons can be integrated into a network of spiking neurons. At regular intervals timed to τ, these emit PSPs independently of the input signal.
With the help of the response and the resistance kernels, it is now possible to describe the flow of information between spiking neurons. To this end, we use a subscript notation to mark the affiliation of a variable with a certain neuron. For the sake of simplicity, we initially consider only two neurons connected through a synapse, where neuron i is the source neuron and neuron j the target neuron. The variables are to be interpreted accordingly: t_i stands for the point in time at which the last spike occurred at neuron i, Δ_j for the delay of the emission at neuron j, and w_ij for the weight (efficacy) of the synapse that connects neuron i with neuron j.
The charge at neuron j at time t results from the effect of the response kernel on the spikes at neuron i as well as of the resistance kernel in reaction to its own spikes (at neuron j).
In most cases, neuron j is connected to several neurons i, which transmit spike reactions via synapses to neuron j. In these cases, the charge at neuron j results from the sum of all spike reactions of the neurons that lie directly in front of it. Designating the set of all of these directly preceding neurons as Γ_j, we can formulate the following:
In both cases, a spike at neuron j is emitted as soon as its charge reaches the threshold value ϑ.
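This flow of information can be sketched in code. The kernel shapes below (an α-function PSP and an exponential refractory term) are common choices in the spike-response literature and, like the function names, are assumptions, not necessarily the article's exact formulas:

```python
import numpy as np

def H(s):
    return np.where(s > 0, 1.0, 0.0)

def eps(s, tau=50.0, delay=0.0):
    """Response kernel (PSP shape): rise, peak, decay toward 0."""
    t = (s - delay) / tau
    return t * np.exp(1.0 - t) * H(s - delay)

def eta(s, theta=1.0, tau_r=20.0):
    """Resistance (refractory) kernel: negative, fading contribution."""
    return -theta * np.exp(-s / tau_r) * H(s)

def potential(t, pre_spikes, weights, own_spikes, theta=1.0):
    """Charge at neuron j at time t: weighted PSPs from the spikes of
    all directly preceding neurons plus the refractory reaction to
    neuron j's own spikes."""
    u = sum(w * eps(t - tf) for tf, w in zip(pre_spikes, weights))
    u += sum(eta(t - tf, theta) for tf in own_spikes)
    return u

# Two presynaptic spikes at t = 60 and t = 80; neuron j has not fired.
theta = 1.0
u = potential(120.0, pre_spikes=[60.0, 80.0], weights=[0.7, 0.6],
              own_spikes=[], theta=theta)
fires = bool(u >= theta)  # spike condition: charge reaches threshold
```

Repeating the same computation with a recent own spike (e.g. at t = 119) keeps the charge below threshold, illustrating the refractory effect.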
If the concern is raised that the mathematical operations of sigmoidal neurons are far removed from actual neural activity, one may ask whether the considerably more complex formulas for spiking neurons aggravate rather than alleviate the problem. A closer look at the generated PSPs as well as at the resistance kernel makes clear, however, that the exponential processes model the ion emissions and electrical transmissions in an organic neural network much more faithfully. The most relevant approximation of the natural example, however, lies in the interpretation of the input signal as a temporally dynamic process driven by events.
Practical application has shown that networks of spiking neurons have the same processing power as sigmoidal networks – while often requiring a smaller set of neurons. In spite of the theoretical proximity to the natural example and the potential performance, the use of such pulsed neural networks also presents difficulties:
- A different type of receptor is required, or, alternatively, the input signal must be converted into sequences of spikes (spike trains) through suitable preprocessing. Not all time series of continuous inputs are suitable for being fed into a network of spiking neurons.
- A great number of free parameters must be adjusted in a learning process. Apart from the weights (w), the synaptic time delays (Δ) as well as the time constants (τ, τ_r) are candidates for investigation in a learning process.
- Since spiking neurons form the reaction level of the network (output layer), one obtains the signaling of a certain stimulus state rather than a continuous real-valued output. The use of hybrid networks consisting of different types of neurons can remedy this situation, although it results in an increased complexity of a suitable learning algorithm.
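The first difficulty above, converting a continuous input into spike trains, can be illustrated with a simple threshold-crossing encoder: a spike is recorded whenever the signal has moved by at least a fixed step since the last spike. This particular scheme and the name `encode_spike_train` are an illustration, not the article's method:

```python
import numpy as np

def encode_spike_train(series, step=1.0):
    """Emit a spike index whenever the signal has changed by at least
    `step` since the last spike (a simple delta/threshold encoder)."""
    spikes = []
    ref = series[0]
    for i, v in enumerate(series[1:], start=1):
        if abs(v - ref) >= step:
            spikes.append(i)
            ref = v
    return spikes

series = np.array([0.0, 0.4, 1.1, 1.3, 0.2, 0.1])  # invented input
spike_times = encode_spike_train(series, step=1.0)
```

Slowly drifting series produce few or no spikes under such a scheme, which is one reason why not every continuous time series is suitable as input for a pulsed network.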
Nevertheless, we see an enormous potential in pulsed neural networks, especially with regard to the analysis of time series. The fact that the dimension of time is directly taken into consideration in the processes of encoding and transformation solves or defuses many of the problems of the analysis of time series by means of sigmoidal networks.
A compilation of the most important recent scientific contributions on the topic of spiking neurons and pulsed networks can be found in »Pulsed Neural Networks« (edited by Wolfgang Maass/Christopher M. Bishop), available in hardcover as well as in paperback.