DMFT Method of Random Recurrent Neural Networks
We consider a random dynamical system defined by the following equation:

$$\frac{dx_i(t)}{dt} = -x_i(t) + \sum_{j=1}^{N} J_{ij}\,\phi\bigl(x_j(t)\bigr), \qquad i = 1, \dots, N, \tag{1}$$

where $x_i(t)$ is the synaptic current of neuron $i$, $\phi(x) = \tanh(x)$ is the transfer function, and the coupling weights $J_{ij}$ are independent Gaussian random variables with zero mean and variance

$$\langle J_{ij}^2 \rangle = \frac{g^2}{N}.$$
This is a typical random recurrent neural network (RNN) model. Here, we introduce the analysis of this model carried out by H. Sompolinsky et al. in 1988, which is based on dynamical mean-field theory (DMFT).
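Before turning to the mean-field analysis, the model is easy to simulate directly. The following is a minimal sketch (forward Euler, $\phi = \tanh$; the values of $N$, $g$, and the step size are my own illustrative choices, not from the paper):

```python
import numpy as np

# Minimal Euler simulation of the random RNN dx/dt = -x + J @ tanh(x).
# N, g, dt and the number of steps are illustrative choices.
rng = np.random.default_rng(0)
N, g, dt, steps = 500, 1.5, 0.05, 4000

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # J_ij ~ N(0, g^2/N)
x = rng.normal(0.0, 1.0, size=N)                  # random initial state

traj = np.empty((steps, N))
for t in range(steps):
    x = x + dt * (-x + J @ np.tanh(x))            # forward Euler step
    traj[t] = x

# For g > 1 the network settles into ongoing (chaotic) collective activity.
activity = np.mean(traj[steps // 2:] ** 2)
print(f"mean squared activity after transient: {activity:.3f}")
```

For $g < 1$ the same code relaxes to the trivial fixed point $x = 0$; the interesting regime analyzed below is $g > 1$.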
In the large-$N$ limit, the total recurrent input to neuron $i$,

$$\eta_i(t) = \sum_{j=1}^{N} J_{ij}\,\phi\bigl(x_j(t)\bigr),$$

is a sum of a large number of weakly correlated terms, so it can be treated as a zero-mean Gaussian process (see footnote 1). Each neuron then obeys the single-site mean-field equation

$$\frac{dx(t)}{dt} = -x(t) + \eta(t),$$

where the statistics of $\eta(t)$ must be determined self-consistently. Its correlation function is

$$\langle \eta_i(t)\,\eta_i(t') \rangle = \sum_{j,k} \bigl\langle J_{ij} J_{ik}\, \phi\bigl(x_j(t)\bigr)\,\phi\bigl(x_k(t')\bigr) \bigr\rangle \approx \sum_{j,k} \langle J_{ij} J_{ik} \rangle\, \bigl\langle \phi\bigl(x_j(t)\bigr)\,\phi\bigl(x_k(t')\bigr) \bigr\rangle, \tag{3}$$

where at large $N$ the average over the couplings is assumed to decouple from the average over the dynamics.
Using the statistical property of the couplings,

$$\langle J_{ij} J_{ik} \rangle = \frac{g^2}{N}\,\delta_{jk},$$

one can further simplify Eq. (3) to

$$\langle \eta(t)\,\eta(t') \rangle = \frac{g^2}{N} \sum_{j=1}^{N} \bigl\langle \phi\bigl(x_j(t)\bigr)\,\phi\bigl(x_j(t')\bigr) \bigr\rangle = g^2\, C(t' - t),$$

where $C(\tau) = \bigl\langle \phi\bigl(x(t)\bigr)\,\phi\bigl(x(t+\tau)\bigr) \bigr\rangle$ is the autocorrelation function of the firing rate, and stationarity is assumed so that it depends only on the time difference.
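This self-consistency claim can be sanity-checked against a direct simulation: after the transient, the measured $\langle \eta^2 \rangle$ should match $g^2 C(0)$, and stationarity additionally implies $\langle x\,\eta \rangle = \langle x^2 \rangle$ (from $\frac{d}{dt}\langle x^2 \rangle = 2\langle x\dot{x}\rangle = 0$). A sketch with my own illustrative parameters:

```python
import numpy as np

# Check <eta^2> ~= g^2 * C(0) for the recurrent input eta_i = sum_j J_ij phi(x_j),
# and the stationarity identity <x eta> = <x^2>. Parameters are illustrative.
rng = np.random.default_rng(1)
N, g, dt, steps, burn = 500, 1.5, 0.05, 4000, 2000

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
x = rng.normal(0.0, 1.0, size=N)

eta_sq, phi_sq, x_sq, x_eta = [], [], [], []
for t in range(steps):
    phi = np.tanh(x)
    eta = J @ phi                       # recurrent input to each neuron
    if t >= burn:                       # collect statistics after the transient
        eta_sq.append(np.mean(eta**2))
        phi_sq.append(np.mean(phi**2))  # estimate of C(0)
        x_sq.append(np.mean(x**2))
        x_eta.append(np.mean(x * eta))
    x = x + dt * (-x + eta)

print("  <eta^2>  =", np.mean(eta_sq))
print("g^2 * C(0) =", g**2 * np.mean(phi_sq))
print("   <x eta> =", np.mean(x_eta), " vs  <x^2> =", np.mean(x_sq))
```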
On the other hand, we consider the autocorrelation function of the local current (see footnote 2),

$$\Delta(\tau) = \langle x(t)\,x(t+\tau) \rangle.$$
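Under the ergodicity assumption (footnote 2), $\Delta(\tau)$ can be estimated from a single long run by time-averaging, here additionally averaged over neurons. A sketch with my own illustrative parameters:

```python
import numpy as np

# Estimate Delta(tau) = <x(t) x(t+tau)> from one long trajectory by
# time-averaging (ergodicity), averaged over neurons as well.
rng = np.random.default_rng(2)
N, g, dt, steps, burn = 500, 1.5, 0.05, 6000, 2000

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
x = rng.normal(0.0, 1.0, size=N)
traj = np.empty((steps - burn, N))
for t in range(steps):
    x = x + dt * (-x + J @ np.tanh(x))
    if t >= burn:
        traj[t - burn] = x

T = traj.shape[0]
max_lag = 200                          # lags up to tau = max_lag * dt = 10
delta = np.array([np.mean(traj[:T - k] * traj[k:]) for k in range(max_lag)])
taus = dt * np.arange(max_lag)
print("Delta(0) =", delta[0], "  Delta(10) =", delta[-1])
```

For $g > 1$ the estimated $\Delta(\tau)$ decays from $\Delta_0 = \Delta(0)$ toward zero, consistent with the chaotic phase discussed below.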
Perform the Fourier transform on Eq. (1), using the definition

$$\tilde{x}(\omega) = \int_{-\infty}^{\infty} dt\, x(t)\, e^{-i\omega t}$$

and the differentiation property of the Fourier transform,

$$\widetilde{\dot{x}}(\omega) = i\omega\, \tilde{x}(\omega),$$

which turns the mean-field equation into

$$(1 + i\omega)\,\tilde{x}(\omega) = \tilde{\eta}(\omega).$$

Multiplying this equation by its complex conjugate, $(1 - i\omega)\,\tilde{x}^*(\omega) = \tilde{\eta}^*(\omega)$, and taking the average yields

$$(1 + \omega^2)\, \bigl\langle |\tilde{x}(\omega)|^2 \bigr\rangle = \bigl\langle |\tilde{\eta}(\omega)|^2 \bigr\rangle.$$
Taking the inverse Fourier transform, and noting that by the Wiener–Khinchin theorem the power spectrum $\langle |\tilde{x}(\omega)|^2 \rangle$ is (up to an overall factor, the same on both sides) the Fourier transform of $\Delta(\tau)$, the left side of the equation can be calculated as

$$\int \frac{d\omega}{2\pi}\, (1 + \omega^2)\, \bigl\langle |\tilde{x}(\omega)|^2 \bigr\rangle\, e^{i\omega\tau} = \left(1 - \frac{d^2}{d\tau^2}\right) \Delta(\tau),$$

and the right side can be calculated in a similar way,

$$\int \frac{d\omega}{2\pi}\, \bigl\langle |\tilde{\eta}(\omega)|^2 \bigr\rangle\, e^{i\omega\tau} = \langle \eta(t)\,\eta(t+\tau) \rangle = g^2\, C(\tau).$$

Then we obtain a differential equation for $\Delta(\tau)$:

$$\frac{d^2 \Delta(\tau)}{d\tau^2} = \Delta(\tau) - g^2\, C(\tau). \tag{16}$$
In addition, this result can also be obtained by directly calculating the second derivative of $\Delta(\tau)$. The first derivative is

$$\frac{d\Delta(\tau)}{d\tau} = \frac{d}{d\tau} \langle x(t)\, x(t+\tau) \rangle = \langle x(t)\, \dot{x}(t+\tau) \rangle.$$

The second derivative is

$$\frac{d^2\Delta(\tau)}{d\tau^2}
= \frac{d}{d\tau} \langle x(t)\, \dot{x}(t+\tau) \rangle
= \frac{d}{d\tau} \langle x(t-\tau)\, \dot{x}(t) \rangle
= -\langle \dot{x}(t-\tau)\, \dot{x}(t) \rangle
= -\langle \dot{x}(t)\, \dot{x}(t+\tau) \rangle.$$
In the second step, we used the time-translation invariance of the stationary system, $\langle x(t)\,\dot{x}(t+\tau) \rangle = \langle x(t-\tau)\,\dot{x}(t) \rangle$, and in the fourth step, we used the inverse time translation. Substituting the equation of motion $\dot{x} = -x + \eta$ into $-\langle \dot{x}(t)\,\dot{x}(t+\tau) \rangle$ and evaluating the resulting correlators again leads to Eq. (16).
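The chain of identities above holds for any stationary differentiable process, which makes it easy to check on a toy example. For a random-phase oscillator $x(t) = \cos(\omega t + \theta)$ with $\theta$ uniform on $[0, 2\pi)$, we have $\Delta(\tau) = \tfrac{1}{2}\cos(\omega\tau)$, so $d^2\Delta/d\tau^2 = -\langle \dot{x}(t)\,\dot{x}(t+\tau)\rangle$ can be verified by Monte Carlo (the toy process and parameters are my own choices):

```python
import numpy as np

# Sanity check of d^2 Delta / d tau^2 = -<xdot(t) xdot(t+tau)> on a toy
# stationary process: x(t) = cos(w t + theta), theta ~ Uniform[0, 2*pi).
# Here Delta(tau) = cos(w tau)/2, so Delta''(tau) = -w^2 cos(w tau)/2.
rng = np.random.default_rng(3)
w, t, tau = 1.7, 0.4, 0.9                       # arbitrary frequency and times
theta = rng.uniform(0.0, 2 * np.pi, size=1_000_000)

xdot_t = -w * np.sin(w * t + theta)             # xdot(t) for each sample
xdot_ttau = -w * np.sin(w * (t + tau) + theta)  # xdot(t + tau)

second_deriv = -w**2 * np.cos(w * tau) / 2      # analytic Delta''(tau)
vel_corr = np.mean(xdot_t * xdot_ttau)          # <xdot(t) xdot(t+tau)>
print(second_deriv, -vel_corr)
```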
The form of Eq. (16) is similar to the equation of motion of a particle in a potential field, which inspires us to rewrite it as

$$\frac{d^2 \Delta}{d\tau^2} = -\frac{\partial V(\Delta; \Delta_0)}{\partial \Delta},$$

where $\tau$ plays the role of time, $\Delta$ the role of the particle's position, and $\Delta_0 = \Delta(0)$ enters as a parameter.
In order to determine the potential function $V(\Delta; \Delta_0)$, note that $x(t)$ and $x(t+\tau)$ are jointly Gaussian with variance $\Delta_0$ and covariance $\Delta(\tau)$, so $C$ depends on $\tau$ only through $\Delta$ (at fixed $\Delta_0$). And then, using Price's theorem (see footnote 3), we have

$$\frac{\partial}{\partial \Delta} \bigl\langle \Phi\bigl(x(t)\bigr)\, \Phi\bigl(x(t+\tau)\bigr) \bigr\rangle = \bigl\langle \phi\bigl(x(t)\bigr)\, \phi\bigl(x(t+\tau)\bigr) \bigr\rangle = C,$$

where

$$\Phi(x) = \int_0^x \phi(y)\, dy$$

is the antiderivative of $\phi$. Integrating $\partial V / \partial \Delta = g^2 C - \Delta$ then gives, up to an additive constant,

$$V(\Delta; \Delta_0) = -\frac{\Delta^2}{2} + g^2\, \bigl\langle \Phi\bigl(x(t)\bigr)\, \Phi\bigl(x(t+\tau)\bigr) \bigr\rangle.$$
This equation describes the dynamics of the correlation function $\Delta(\tau)$: it evolves like the position of a classical particle moving in the potential $V(\Delta; \Delta_0)$, released at rest from $\Delta = \Delta_0$ at $\tau = 0$ (the initial velocity vanishes because $\Delta$ is an even function of $\tau$). The value of $\Delta_0$ is then fixed self-consistently by the boundary behavior of the solution, e.g. $\Delta(\tau) \to 0$ as $\tau \to \infty$ in the chaotic phase.
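As one concrete use of the particle picture (a reduction I am sketching myself, under the assumptions $\phi = \tanh$, so $\Phi(x) = \ln\cosh x$, and $\Delta(\tau) \to 0$ as $\tau \to \infty$): a particle released at rest from $\Delta_0$ that comes to rest again at $\Delta = 0$ must satisfy the energy condition $V(\Delta_0; \Delta_0) = V(0; \Delta_0)$. Since $\langle \Phi(x_1)\Phi(x_2) \rangle$ equals $\langle \Phi(x)^2 \rangle$ at full correlation and $\langle \Phi(x) \rangle^2$ at zero correlation, this reduces to the scalar condition $\Delta_0^2 / 2 = g^2\,\mathrm{Var}[\Phi(x)]$ with $x \sim \mathcal{N}(0, \Delta_0)$, which can be solved numerically:

```python
import numpy as np

# Self-consistent Delta_0 in the chaotic phase from the energy condition
# V(Delta_0; Delta_0) = V(0; Delta_0). For phi = tanh (Phi = log cosh) this
# reduces to Delta_0^2 / 2 = g^2 * Var[Phi(x)], x ~ N(0, Delta_0).
# The reduction and all numerical parameters are my own working.
z, wts = np.polynomial.hermite.hermgauss(80)   # Gauss-Hermite nodes/weights

def gauss_expect(f, var):
    """E[f(x)] for x ~ N(0, var), via Gauss-Hermite quadrature."""
    x = np.sqrt(2.0 * var) * z
    return np.sum(wts * f(x)) / np.sqrt(np.pi)

def energy_gap(d0, g):
    Phi = lambda x: np.log(np.cosh(x))
    var_Phi = gauss_expect(lambda x: Phi(x) ** 2, d0) - gauss_expect(Phi, d0) ** 2
    return g**2 * var_Phi - d0**2 / 2.0

def solve_delta0(g, lo=1e-3, hi=20.0, iters=60):
    """Bisection: the gap is positive near 0 and negative at large Delta_0 for g > 1."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if energy_gap(mid, g) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d0 = solve_delta0(1.5)
print("Delta_0 at g = 1.5:", d0)
```

For $g \le 1$ the gap is non-positive everywhere near the origin and only the trivial solution $\Delta_0 = 0$ survives, consistent with the loss of chaotic activity below the transition.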
Footnotes
1. The central limit theorem can be applied.

2. According to the ergodic hypothesis, the time average is equal to the ensemble average.

3. One formulation of Price's theorem is the following. Consider two Gaussian random variables $x_1$ and $x_2$ with covariance $c = \langle x_1 x_2 \rangle$, satisfying the joint probability distribution $p(x_1, x_2)$. For any function $f(x_1, x_2)$, define its expectation

   $$\langle f \rangle = \int dx_1\, dx_2\, p(x_1, x_2)\, f(x_1, x_2).$$

   Then we have

   $$\frac{\partial \langle f \rangle}{\partial c} = \left\langle \frac{\partial^2 f}{\partial x_1\, \partial x_2} \right\rangle.$$
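Price's theorem is also easy to check numerically: draw correlated Gaussian pairs, estimate $\partial \langle f \rangle / \partial c$ by a finite difference with common random numbers, and compare with $\langle \partial^2 f / \partial x_1 \partial x_2 \rangle$. The test function $f(x_1, x_2) = \tanh(x_1)\tanh(x_2)$, the unit variances, and all parameters below are my own choices:

```python
import numpy as np

# Monte Carlo check of Price's theorem d<f>/dc = <d^2 f / dx1 dx2>
# for f(x1, x2) = tanh(x1) tanh(x2), unit variances, covariance c.
rng = np.random.default_rng(4)
n, c, h = 1_000_000, 0.5, 0.02

z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)

def expect_f(cov):
    # x1, x2 jointly Gaussian with unit variances and covariance cov,
    # built from the SAME (z1, z2) for variance reduction.
    x1 = z1
    x2 = cov * z1 + np.sqrt(1.0 - cov**2) * z2
    return np.mean(np.tanh(x1) * np.tanh(x2))

# Left side: finite-difference derivative in c (common random numbers).
lhs = (expect_f(c + h) - expect_f(c - h)) / (2 * h)

# Right side: <d^2 f / dx1 dx2> = <sech^2(x1) sech^2(x2)> at covariance c.
x1 = z1
x2 = c * z1 + np.sqrt(1.0 - c**2) * z2
rhs = np.mean(1.0 / np.cosh(x1) ** 2 / np.cosh(x2) ** 2)

print(lhs, rhs)
```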