Wednesday, December 14, 2016

Westworld and the mathematical structure of memories

The dominant conceptual paradigm in mathematical neuroscience is to represent the human mind, and prospective artificial intelligence, as a neural network. The patterns of activity in such a network, whether they're realised by the neuronal cells in a human brain, or by artificial semiconductor circuits, provide the capability to represent the external world and to process information. In particular, the mathematical structures instantiated by neural networks enable us to understand what memories are, and thus to understand the foundation upon which personal identity is built.

Intriguingly, however, there is some latitude in the mathematical definition of what a memory is. To understand the significance of this, let's begin by reviewing some of the basic ideas in the field.

On an abstract level, a neural network consists of a set of nodes, and a set of connections between the nodes. The nodes possess activation levels; the connections between nodes possess weights; and the nodes have numerical rules for calculating their next activation level from a combination of the previous activation level, and the weighted inputs from other nodes. A negative weight transmits an inhibitory signal to the receiving node, while a positive weight transmits an excitatory signal.
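As a minimal sketch of such an update rule (the function name, the tanh non-linearity, and the retention factor are illustrative assumptions, not part of any standard library), in Python:

```python
import numpy as np

def step(x, W, f=np.tanh, alpha=0.5):
    """One synchronous update of every node's activation level.

    x     : vector of current activation levels, one entry per node.
    W     : W[i, j] is the weight of the connection from node j to
            node i; a negative weight is inhibitory, a positive
            weight excitatory.
    f     : the numerical rule mapping summed input to activation.
    alpha : how much of the previous activation level is retained,
            combining it with the weighted inputs from other nodes.
    """
    return alpha * x + (1.0 - alpha) * f(W @ x)

# A two-node example: node 1 excites node 0, node 0 inhibits node 1.
W = np.array([[0.0,  0.8],
              [-1.2, 0.0]])
x = np.array([0.5, 0.1])
x = step(x, W)
```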

The nodes are generally divided into three classes: input nodes, hidden/intermediate nodes, and output nodes. The activation levels of input nodes communicate information from the external world, or from another neural system; output nodes transmit information to the external world or to other neural systems; and the hidden nodes communicate only with other nodes inside the network.

In general, any node can possess a connection with any other node. However, there is a directionality to the network in the sense that patterns of activation propagate through it from the input nodes to the output nodes. In a feedforward network, there is a partial ordering relationship defined on the nodes, which prevents downstream nodes from signalling those upstream. In contrast, such feedback circuits are permitted in a recurrent network. Biological neural networks are recurrent networks.
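One way to make the feedforward condition concrete (a sketch; `is_feedforward` is a hypothetical helper, not a library function): if the nodes are listed in an order compatible with the partial ordering, the weight matrix must be strictly lower triangular, since any entry on or above the diagonal would carry a signal upstream.

```python
import numpy as np

def is_feedforward(W):
    """True if, under the given node ordering, every connection runs
    from an earlier node to a later one: W[i, j] != 0 only for j < i.
    Any non-zero entry on or above the diagonal is a feedback
    connection, making the network recurrent."""
    return np.allclose(np.triu(W), 0.0)
```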

Crucially, the weights in a network are capable of evolving with time. This facilitates learning and memory in both biological and artificial networks. 

The activation levels in a neural network are also referred to as 'firing rates', and in the case of a biological brain, generally correspond to the frequencies of the so-called 'action potentials' which a neuron transmits down its output fibre, the axon. The neurons in a biological brain are joined at synapses, and in this case the weights correspond to the synaptic efficiencies. The latter are dependent upon factors such as the pre-synaptic neurotransmitter release rate, the number and efficacy of post-synaptic receptors, and the availability of enzymes in the synaptic cleft. Whilst a weight in an artificial network can change sign, switching between inhibitory and excitatory, this doesn't appear to be possible for biological synaptic connections, which retain a fixed sign.

Having defined a neural network, the next step is to introduce the apparatus of dynamical systems theory. Here, the possible states of a system are represented by the points of a differential manifold $\mathcal{M}$, and the possible dynamical histories of that system are represented by a particular set of paths in the manifold. Specifically, they are represented by the integral curves of a vector field defined on the manifold by a system of differential equations. This generates a flow $\phi_t$, which is such that for any point $x(0) \in \mathcal{M}$, representing an initial state, the state after a period of time $t$ corresponds to the point $x(t) = \phi_t(x(0))$.  
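As a simple illustration (a standard one-dimensional example, not specific to neural networks), the equation $dx/dt = -x/\tau$ generates the flow

$$
\phi_t(x(0)) = x(0)\,e^{-t/\tau},
$$

under which every initial state decays exponentially towards $x = 0$ on the timescale $\tau$; this is precisely the 'leak' term which appears in the firing-rate model below.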

In the case of a neural network, a state of the system corresponds to a particular combination of activation levels $x_i$ ('firing rates') for all the nodes in the network, $i = 1,\ldots,n$. The possible dynamical histories are then specified by ordinary differential equations for the $x_i$. A nice example of such a 'firing rate model' for a biological brain network is provided by Curto, Degeratu and Itskov:

$$
\frac{dx_i}{dt} = - \frac{1}{\tau_i}x_i + f \left(\sum_{j=1}^{n}W_{ij}x_j + b_i \right), \,  \text{for } \, i = 1,\ldots,n
$$
$W$ is the matrix of weights, with $W_{ij}$ representing the strength of the connection from the $j$-th neuron to the $i$-th neuron; $b_i$ is the external input to the $i$-th neuron; $\tau_i$ defines the timescale over which the $i$-th neuron would return to its resting state in the absence of any inputs; and $f$ is a non-linear function which, amongst other things, precludes the possibility of negative firing rates. 
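A minimal numerical sketch of this model, integrating by the forward-Euler method (the choice $f(y) = \max(y, 0)$ and all parameter values below are illustrative assumptions, not taken from Curto, Degeratu and Itskov):

```python
import numpy as np

def simulate(W, b, tau, x0, dt=0.01, steps=5000):
    """Forward-Euler integration of the firing-rate model
    dx_i/dt = -x_i/tau_i + f(sum_j W_ij x_j + b_i).

    f is taken here to be rectification, max(y, 0), one common
    choice of non-linearity precluding negative firing rates.
    """
    f = lambda y: np.maximum(y, 0.0)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x / tau + f(W @ x + b))
    return x

# Two neurons with mutual inhibition (illustrative values).
W = np.array([[0.0, -0.5],
              [-0.5, 0.0]])
b = np.array([1.0, 0.8])
tau = np.array([1.0, 1.0])
print(simulate(W, b, tau, x0=[0.1, 0.1]))  # settles near a fixed point
```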

In the case of a biological brain, one might have $n=10^{11}$ neurons in the entire network. This entails a state-space of dimension $10^{11}$. Within this manifold are submanifolds corresponding to the activities of subsets of neurons. In a sense to be defined below, memories correspond to stable fixed points within these submanifolds.

In dynamical systems theory, a fixed point $x^*$ is defined to be a point $x^* \in \mathcal{M}$ such that $\phi_t(x^*) = x^*$ for all $t \in \mathbb{R}$. 
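For the firing-rate model above, setting $dx_i/dt = 0$ shows that a fixed point $x^*$ must satisfy

$$
x_i^* = \tau_i \, f \left(\sum_{j=1}^{n}W_{ij}x_j^* + b_i \right), \,  \text{for } \, i = 1,\ldots,n
$$

so the locations of the fixed points are determined jointly by the weights $W_{ij}$ and the external inputs $b_i$.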

The concept of a fixed point in the space of possible firing patterns of a neural network captures the persistence of memory. Memories are stored by changes to the synaptic efficiencies in a subnetwork, and the corresponding matrix of weights $W_{ij}$ permits the existence of a fixed point in the activation levels of that subnetwork. 

However, real physical systems cannot be controlled with infinite precision, and therefore cannot be manoeuvred into isolated fixed points in a continuous state space. Hence memory states are better defined in terms of the properties of neighbourhoods of fixed points. In particular, some concept of stability is required to ensure that the state of the system remains within a neighbourhood of a fixed point, under the inevitable perturbations and errors suffered by a system operating in a real physical environment.

There are two possible definitions of stability in this context (Hirsch and Smale, Differential Equations, Dynamical Systems and Linear Algebra, pp. 185-186):

(i) A fixed point $x^*$ is stable if for every neighbourhood $U$ of $x^*$ there is a sub-neighbourhood $U_1 \subseteq U$ containing $x^*$, such that any initial point $x(0) \in U_1$ remains in $U$, and therefore close to $x^*$, under the action of the flow $\phi_t$.

(ii) A fixed point is asymptotically stable if, in addition, $U_1$ can be chosen so that any initial point $x(0) \in U_1$ satisfies $\lim_{t \rightarrow \infty} x(t) = x^*$.
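In practice, these properties are usually checked by linearising the dynamics at the fixed point: if every eigenvalue of the Jacobian has negative real part, the fixed point is asymptotically stable. A sketch for the firing-rate model above, assuming a fixed point at which the rectification non-linearity is inactive, so that $f$ acts as the identity (all values illustrative):

```python
import numpy as np

def jacobian(W, tau):
    """Jacobian of dx/dt = -x/tau + f(Wx + b) at a fixed point where
    f acts as the identity (its argument stays positive), giving
    J = -diag(1/tau) + W."""
    return -np.diag(1.0 / tau) + W

# Same mutual-inhibition network as before (illustrative values).
W = np.array([[0.0, -0.5],
              [-0.5, 0.0]])
tau = np.array([1.0, 1.0])

eigs = np.linalg.eigvals(jacobian(W, tau))
# All real parts negative => asymptotically stable, definition (ii);
# eigenvalues on the imaginary axis would leave at best the weaker
# notion of stability in definition (i).
print(eigs, np.all(eigs.real < 0))
```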


The first condition seems more consistent with the nature of human memory. Memories are not perfect: they retain some aspects of the original experience, but fluctuate with time, and ultimately become hazy as the synaptic weights drift away from their original values. The second condition is much stricter. In conjunction with an ability to fix the weights of a subnetwork on a long-term basis, it seems consistent with the long-term fidelity of memory. 

At first sight, one might wish to design an artificial intelligence so that its memories are asymptotically stable fixed points in the possible firing rate patterns within an artificial neural network. However, doing so could well entail that those memories become as vivid and realistic to the host systems as their present-day experiences. It might become impossible to distinguish past from present experience. 

And that might not turn out so well...

