Neural information processing includes the extraction of information present in the statistics of afferent signals. For this, the afferent synaptic weights $w_j$ are continuously adapted, changing in turn the distribution $p_\theta(y)$ of the post-synaptic neural activity $y$. Here $\theta$ denotes the relevant neural parameters. The functional form of $p_\theta(y)$ will hence continue to evolve as long as learning is ongoing, becoming stationary only when learning is completed. This stationarity principle can be captured by the Fisher information $F_\theta = \int p_\theta(y)\left(\frac{\partial}{\partial\theta}\ln p_\theta(y)\right)^2 dy$, with $\frac{\partial}{\partial\theta} \equiv \sum_j w_j \frac{\partial}{\partial w_j}$, of the neural activity with respect to the afferent synaptic weights $w_j$. It then follows that Hebbian learning rules may be derived by minimizing $F_\theta$. The precise functional form of the learning rules then depends on the shape of the transfer function $y = g(x)$ relating the membrane potential $x$ to the activity $y$. The learning rules derived from the stationarity principle are self-limiting (runaway synaptic growth does not occur) and perform a standard principal component analysis whenever a direction with large variance is present in the space of input activities. Generically, directions of input activities having a negative excess kurtosis are preferred, making the rules suitable for independent component analysis (ICA) (see figure). Moreover, when only the exponential foot of $g$ is considered (the low-activity regime), the standard Hebbian learning rule, without reversal, is recovered.
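To make the notion of a self-limiting Hebbian rule performing principal component analysis concrete, the following is a minimal numerical sketch. It uses Oja's rule as a generic stand-in for a self-limiting Hebbian rule; it is not the Fisher-information-derived rule discussed above, and the input statistics, learning rate, and variable names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a self-limiting Hebbian rule (Oja's rule, used here as a
# generic stand-in, not the Fisher-information-derived rule from the text)
# extracting the direction of largest variance from the input statistics.

rng = np.random.default_rng(0)

# Synthetic afferent activity: 2-dimensional inputs with one high-variance
# direction along (1, 1)/sqrt(2). These statistics are illustrative assumptions.
n_steps = 20000
u = np.array([1.0, 1.0]) / np.sqrt(2.0)
x = rng.normal(size=(n_steps, 2)) * 0.3                 # isotropic background
x += np.outer(rng.normal(scale=2.0, size=n_steps), u)   # large variance along u

w = rng.normal(size=2)   # afferent synaptic weights w_j
eta = 1e-3               # learning rate (assumed)

for xt in x:
    y = w @ xt                    # linear post-synaptic activity
    # Hebbian term y*x_j plus a decay term -y^2*w_j; the decay makes the
    # rule self-limiting: |w| saturates instead of growing without bound.
    w += eta * y * (xt - y * w)

print("learned weight direction:", w / np.linalg.norm(w))
print("principal direction     :", u)
print("|w| =", np.linalg.norm(w))  # stays close to 1 (self-limiting)
```

The decay term $-y^2 w_j$ bounds synaptic growth, illustrating the same self-limiting property attributed above to the rules derived from the stationarity principle.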