Neuro-mathematician Carina Curto has recently published a fascinating paper, 'What can topology tell us about the neural code?' The centrepiece of the paper is a simple and profound exposition of the method by which the neural networks in animal brains can represent the topology of space.
As Curto reports, neuroscientists have discovered that there are so-called place cells in the hippocampus of rodents which "act as position sensors in space. When an animal is exploring a particular environment, a place cell increases its firing rate as the animal passes through its corresponding place field - that is, the localized region to which the neuron preferentially responds." Furthermore, a network of place cells, each representing a different position, is collectively capable of representing the topology of the environment.
Rather than beginning with the full topological structure of an environmental space $X$, the approach of such research is to represent the collection of place fields as an open covering, i.e., a collection of open sets $\mathcal{U} = \{U_1,...,U_n \}$ such that $X = \bigcup_{i=1}^n U_i$. A covering is referred to as a good cover if every non-empty intersection $\bigcap_{i \in \sigma} U_i$, for $\sigma \subseteq \{1,...,n \}$, is contractible, i.e., if it can be continuously deformed to a point.
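To make this concrete, consider a deliberately simple one-dimensional sketch (the environment, the three place fields and the helper function below are my own invented illustration, not taken from Curto's paper). Place fields modelled as open intervals automatically yield a good cover, because any non-empty intersection of intervals is itself an interval, and hence contractible:

```python
# Hypothetical 1-D example: place fields as open intervals covering X = (0, 10).
# Intersections of intervals are intervals, hence contractible, so any
# non-empty intersection here satisfies the good-cover condition.

place_fields = {
    1: (0.0, 4.0),
    2: (3.0, 7.0),
    3: (6.0, 10.0),
}

def intersect(intervals):
    """Intersection of open intervals, or None if the intersection is empty."""
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return (lo, hi) if lo < hi else None

# Check that the fields cover X = (0, 10) at a grid of sample points.
samples = [k / 10 for k in range(1, 100)]
covers_X = all(any(a < x < b for a, b in place_fields.values()) for x in samples)
print("covers X:", covers_X)                                           # True

# Pairwise and triple intersections: non-empty ones are again intervals.
print(intersect([place_fields[1], place_fields[2]]))                   # (3.0, 4.0)
print(intersect([place_fields[1], place_fields[3]]))                   # None
print(intersect([place_fields[1], place_fields[2], place_fields[3]]))  # None
```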
The elements of the covering, and the finite intersections between them, define the so-called 'nerve' $\mathcal{N(U)}$ of the cover (the mathematical terminology is coincidental!):
$\mathcal{N(U)} = \{\sigma \subseteq \{1,...,n \}: \bigcap_{i \in \sigma} U_i \neq \emptyset \}$.
The nerve of a covering satisfies the conditions to be a simplicial complex, with each subset $U_i$ corresponding to a vertex, and each non-empty intersection of $k+1$ subsets defining a $k$-simplex of the complex. A simplicial complex inherits a topological structure from the embedding of its simplices into $\mathbb{R}^n$, hence the covering defines a topology. And crucially, the following lemma applies:
Nerve lemma: Let $\mathcal{U}$ be a good cover of $X$. Then $\mathcal{N(U)}$ is homotopy equivalent to $X$. In particular, $\mathcal{N(U)}$ and $X$ have exactly the same homology groups.
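To see how the nerve is assembled in practice, here is a minimal sketch (the cover data and the function name `nerve` are hypothetical). Each place field is recorded as the finite set of sampled locations it contains, and a subset $\sigma$ is listed as a simplex whenever the corresponding fields share a common point:

```python
from itertools import combinations

# Hypothetical cover: each place field is recorded as the set of sampled
# locations it contains.
cover = {
    1: {"a", "b", "c"},
    2: {"c", "d"},
    3: {"d", "e", "a"},
}

def nerve(cover):
    """Return N(U): all index subsets sigma whose fields have a common point."""
    indices = sorted(cover)
    simplices = []
    for k in range(1, len(indices) + 1):
        for sigma in combinations(indices, k):
            if set.intersection(*(cover[i] for i in sigma)):
                simplices.append(sigma)
    return simplices

print(nerve(cover))   # [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]
```

In this toy example the nerve consists of three vertices and three edges but no 2-simplex: it is the boundary of a triangle, which is topologically a circle.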
The homology (and homotopy) of a topological space provides a group-theoretic means of characterising the topology. Homology, however, provides a weaker, more coarse-grained level of classification than topology as such. Homeomorphic topologies must possess the same homology (thus, spaces with different homology must be topologically distinct), but conversely, a pair of topologies with the same homology need not be homeomorphic: a single point and a solid disc, for example, have the same homology groups, yet are not homeomorphic.
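As an illustration of this level of classification (again my own toy computation with hypothetical helper names, not drawn from the paper), the Betti numbers over the rationals can be read off from the ranks of the boundary matrices of a simplicial complex. The sketch below distinguishes the hollow triangle obtained as the nerve in the previous sketch, which has the homology of a circle, from a filled triangle, which is contractible:

```python
import numpy as np

def betti_numbers(vertices, edges, triangles):
    """Betti numbers b0, b1 over Q of a 2-dimensional simplicial complex,
    obtained from the ranks of its boundary matrices."""
    v_idx = {v: k for k, v in enumerate(vertices)}
    e_idx = {e: k for k, e in enumerate(edges)}

    # Boundary of an edge (i, j): j - i.
    d1 = np.zeros((len(vertices), len(edges)))
    for (i, j), col in e_idx.items():
        d1[v_idx[i], col] = -1
        d1[v_idx[j], col] = 1

    # Boundary of a triangle (i, j, k): (j,k) - (i,k) + (i,j).
    d2 = np.zeros((len(edges), max(len(triangles), 1)))
    for col, (i, j, k) in enumerate(triangles):
        d2[e_idx[(j, k)], col] = 1
        d2[e_idx[(i, k)], col] = -1
        d2[e_idx[(i, j)], col] = 1

    r1 = np.linalg.matrix_rank(d1) if edges else 0
    r2 = np.linalg.matrix_rank(d2) if triangles else 0
    b0 = len(vertices) - r1          # connected components
    b1 = len(edges) - r1 - r2        # independent loops
    return b0, b1

# Hollow triangle (the nerve above): one component, one loop.
print(betti_numbers([1, 2, 3], [(1, 2), (1, 3), (2, 3)], []))           # (1, 1)
# Filled triangle (contractible): one component, no loops.
print(betti_numbers([1, 2, 3], [(1, 2), (1, 3), (2, 3)], [(1, 2, 3)]))  # (1, 0)
```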
Now, different firing patterns of the neurons in a network of hippocampal place cells correspond to different elements of the nerve which represents the corresponding collection of place fields. The simultaneous firing of $k$ neurons, indexed by $\sigma \subseteq \{1,...,n \}$, corresponds to the non-empty intersection $\bigcap_{i \in \sigma} U_i \neq \emptyset$ between the corresponding $k$ elements of the covering. Hence, the homological topology of a region of space is represented by the different possible firing patterns of a collection of neurons.
As Curto explains, "if we were eavesdropping on the activity of a population of place cells as the animal fully explored its environment, then by finding which subsets of neurons co-fire, we could, in principle, estimate $\mathcal{N(U)}$, even if the place fields themselves were unknown. [The nerve lemma] tells us that the homology of the simplicial complex $\mathcal{N(U)}$ precisely matches the homology of the environment X. The place cell code thus naturally reflects the topology of the represented space."
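A sketch of this 'eavesdropping' procedure (the spike raster is invented, and a real analysis would have to bin and threshold noisy firing rates): declare a subset of cells to be a simplex of the estimated nerve whenever all of its members fire together in at least one time bin.

```python
from itertools import combinations

# Hypothetical spike raster: rows are place cells, columns are time bins,
# 1 means the cell fired in that bin.
raster = {
    1: [1, 1, 0, 0, 1, 0],
    2: [0, 1, 1, 0, 1, 0],
    3: [0, 0, 1, 1, 0, 1],
}

def estimated_nerve(raster):
    """Estimate N(U) from co-firing: sigma is a simplex if all cells in sigma
    fire together in at least one time bin."""
    cells = sorted(raster)
    n_bins = len(next(iter(raster.values())))
    simplices = []
    for k in range(1, len(cells) + 1):
        for sigma in combinations(cells, k):
            if any(all(raster[i][t] for i in sigma) for t in range(n_bins)):
                simplices.append(sigma)
    return simplices

print(estimated_nerve(raster))
# [(1,), (2,), (3,), (1, 2), (2, 3)] -- cells 1 and 3 never co-fire
```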
This entails the need to issue a qualification to a subsection of my 2005 paper, 'Universe creation on a computer'. This paper was concerned with computer representations of the physical world, and attempted to place these in context with the following general definition:
A representation is a mapping $f$ which specifies a correspondence between a represented thing and the thing which represents it. An object, or the state of an object, can be represented in two different ways:
$1$. A structured object/state $M$ serves as the domain of a mapping $f: M \rightarrow f(M)$ which defines the representation. The range of the mapping, $f(M)$, is also a structured entity, and the mapping $f$ is a homomorphism with respect to some level of structure possessed by $M$ and $f(M)$.
$2$. An object/state serves as an element $x \in M$ in the domain of a mapping $f: M \rightarrow f(M)$ which defines the representation.
The representation of a Formula One car by a wind-tunnel model is an example of type-$1$ representation: there is an approximate homothetic isomorphism (a transformation which changes only the scale factor) from the exterior surface of the model to the exterior surface of a Formula One car. As an alternative example, the famous map of the London Underground preserves the topology, but not the geometry, of the semi-subterranean public transport network. Hence, in this case, there is a homeomorphic isomorphism.
Type-$2$ representation has two sub-classes: the mapping $f: M \rightarrow f(M)$ can be defined either by ($2$a) an objective, causal physical process, or by ($2$b) the decisions of cognitive systems.
As an example of type-$2$b representation, in computer engineering there are different conventions, such as ASCII and EBCDIC, for representing linguistic characters with the states of the bytes in computer memory. In the ASCII convention, 01000000 represents the symbol '@', whereas in EBCDIC it represents a space ' '. Neither relationship between linguistic characters and the states of computer memory exists objectively. In particular, the relationship does not exist independently of the interpretative decisions made by the operating system of a computer.
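To illustrate the convention-dependence (a minimal sketch using Python's standard codecs, with cp500 standing in for one EBCDIC variant), the very same byte decodes to different characters under the two conventions:

```python
# The same memory state is interpreted differently under the two conventions.
byte = bytes([0x40])  # binary 01000000

print(repr(byte.decode("ascii")))  # '@'  under the ASCII convention
print(repr(byte.decode("cp500")))  # ' '  under this EBCDIC convention
```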
In 2005, I wrote that "the primary example of type-$2$a representation is the representation of the external world by brain states. Taking the example of visual perception, there is no homomorphism between the spatial geometry of an individual's visual field, and the state of the neuronal network in that part of the brain which deals with vision. However, the correspondence between brain states and the external world is not an arbitrary mapping. It is a correspondence defined by a causal physical process involving photons of light, the human eye, the retina, and the human brain. The correspondence exists independently of human decision-making."
The theorems and empirical research expounded in Curto's paper demonstrate very clearly that whilst there might not be a geometrical isometry between the spatial geometry of one's visual field and the state of a subsystem in the brain, there are, at the very least, isomorphisms between the homological topology of regions in one's environment and the state of neural subsystems.
On a cautionary note, this result should be treated as merely illustrative of the representational mechanisms employed by biological brains. One would expect that a cognitive system which has evolved by natural selection will have developed a confusing array of different techniques to represent the geometry and topology of the external world.
Nevertheless, the result is profound because it ultimately explains how you can hold a world inside your own head.