The types of neurons most commonly used in artificial neural networks convert an input stimulus into an output by means of a so-called activation function. The output signal is then transmitted via a synapse (link) to the following neuron. During this transmission, the signal is scaled by a weight (w): it can be amplified, attenuated, or even inverted in sign. The activation functions are chosen so that real numbers can serve as input stimuli and the signal emitted by the neuron is likewise a real number. The activation function normally ensures that the emitted signal falls within a well-defined range.
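As a minimal sketch of this idea (all names here are hypothetical, not from the text), a single neuron can be modeled as an input stimulus scaled by a synaptic weight and passed through an activation function:

```python
import math

def neuron_output(x, w, activation):
    """Scale the input stimulus x by the synaptic weight w,
    then apply the activation function to produce the output signal."""
    return activation(w * x)

# A weight with |w| > 1 amplifies the signal, |w| < 1 weakens it,
# and a negative w inverts its sign.
amplified = neuron_output(0.5, 2.0, math.tanh)   # stimulus amplified before activation
inverted = neuron_output(0.5, -0.3, math.tanh)   # stimulus weakened and inverted
```

Because tanh is bounded, the emitted signal stays within (-1, 1) regardless of how strongly the weight scales the stimulus.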

An example is the frequently used hyperbolic tangent activation function y = a*tanh(b*x), where x represents the input stimulus. The constant a sets the possible output range (-a, a), while b controls the slope at the origin.

Fig. 1: Hyperbolic tangent activation function: the constant a defines the possible output range (-a, a), and b the slope at the origin. The neuron transforms the stimulus in a nonlinear way, an essential precondition for solving a variety of highly complex problems.