Wednesday, December 14, 2016

Westworld and the mathematical structure of memories

The dominant conceptual paradigm in mathematical neuroscience is to represent the human mind, and prospective artificial intelligence, as a neural network. The patterns of activity in such a network, whether they're realised by the neuronal cells in a human brain, or by artificial semiconductor circuits, provide the capability to represent the external world and to process information. In particular, the mathematical structures instantiated by neural networks enable us to understand what memories are, and thus to understand the foundation upon which personal identity is built.

Intriguingly, however, there is some latitude in the mathematical definition of what a memory is. To understand the significance of this, let's begin by reviewing some of the basic ideas in the field.

On an abstract level, a neural network consists of a set of nodes, and a set of connections between the nodes. The nodes possess activation levels; the connections between nodes possess weights; and the nodes have numerical rules for calculating their next activation level from a combination of the previous activation level, and the weighted inputs from other nodes. A negative weight transmits an inhibitory signal to the receiving node, while a positive weight transmits an excitatory signal.

The nodes are generally divided into three classes: input nodes, hidden/intermediate nodes, and output nodes. The activity levels of input nodes communicate information from the external world, or another neural system; output nodes transmit information to the external world or other neural systems; and the hidden nodes merely communicate with other nodes inside the network. 

In general, any node can possess a connection with any other node. However, there is a directionality to the network in the sense that patterns of activation propagate through it from the input nodes to the output nodes. In a feedforward network, there is a partial ordering relationship defined on the nodes, which prevents downstream nodes from signalling those upstream. In contrast, such feedback circuits are permitted in a recurrent network. Biological neural networks are recurrent networks.

Crucially, the weights in a network are capable of evolving with time. This facilitates learning and memory in both biological and artificial networks. 

The activation levels in a neural network are also referred to as 'firing rates', and in the case of a biological brain, generally correspond to the frequencies of the so-called 'action potentials' which a neuron transmits down its output fibre, the axon. The neurons in a biological brain are joined at synapses, and in this case the weights correspond to the synaptic efficiency. The latter is dependent upon factors such as the pre-synaptic neurotransmitter release rate, the number and efficacy of post-synaptic receptors, and the availability of enzymes in the synaptic cleft. Whilst the weights can vary between inhibitory and excitatory in an artificial network, this doesn't appear to be possible for synaptic connections.

Having defined a neural network, the next step is to introduce the apparatus of dynamical systems theory. Here, the possible states of a system are represented by the points of a differential manifold $\mathcal{M}$, and the possible dynamical histories of that system are represented by a particular set of paths in the manifold. Specifically, they are represented by the integral curves of a vector field defined on the manifold by a system of differential equations. This generates a flow $\phi_t$, which is such that for any point $x(0) \in \mathcal{M}$, representing an initial state, the state after a period of time $t$ corresponds to the point $x(t) = \phi_t(x(0))$.  

In the case of a neural network, a state of the system corresponds to a particular combination of activation levels $x_i$ ('firing rates') for all the nodes in the network, $i = 1,\ldots,n$. The possible dynamical histories are then specified by ordinary differential equations for the $x_i$. A nice example of such a 'firing rate model' for a biological brain network is provided by Curto, Degeratu and Itskov:

$$\frac{dx_i}{dt} = - \frac{1}{\tau_i}x_i + f \left(\sum_{j=1}^{n}W_{ij}x_j + b_i \right), \,  \text{for } \, i = 1,\ldots,n$$
$W$ is the matrix of weights, with $W_{ij}$ representing the strength of the connection from the $j$-th neuron to the $i$-th neuron; $b_i$ is the external input to the $i$-th neuron; $\tau_i$ defines the timescale over which the $i$-th neuron would return to its resting state in the absence of any inputs; and $f$ is a non-linear function which, amongst other things, precludes the possibility of negative firing rates. 
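To make this concrete, the model can be integrated numerically with a simple forward-Euler scheme. The following sketch uses an illustrative 3-neuron network with mutually inhibitory weights, a constant drive, and a rectified-linear choice of $f$; none of these particular values are taken from Curto, Degeratu and Itskov:

```python
import numpy as np

def simulate(W, b, tau, x0, dt=0.01, steps=5000):
    """Forward-Euler integration of the firing-rate model
    dx_i/dt = -(1/tau_i) x_i + f(sum_j W_ij x_j + b_i)."""
    f = lambda u: np.maximum(u, 0.0)   # rectification: no negative firing rates
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x / tau + f(W @ x + b))
    return x

# Illustrative 3-neuron network: mutual inhibition plus a constant drive.
W = -0.5 * (np.ones((3, 3)) - np.eye(3))
b = np.ones(3)
tau = np.ones(3)
x_star = simulate(W, b, tau, x0=[0.2, 0.1, 0.3])
print(x_star)   # settles near the fixed point (0.5, 0.5, 0.5)
```

With these illustrative weights the network settles towards a persistent state, which is precisely the kind of fixed point discussed below.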

In the case of a biological brain, one might have $n=10^{11}$ neurons in the entire network. This entails a state space of dimension $10^{11}$. Within this manifold are submanifolds corresponding to the activities of subsets of neurons. In a sense to be defined below, memories correspond to stable fixed points within these submanifolds.

In dynamical systems theory, a fixed point $x^*$ is defined to be a point $x^* \in \mathcal{M}$ such that $\phi_t(x^*) = x^*$ for all $t \in \mathbb{R}$. 

The concept of a fixed point in the space of possible firing patterns of a neural network captures the persistence of memory. Memories are stored by changes to the synaptic efficiencies in a subnetwork, and the corresponding matrix of weights $W_{ij}$ permits the existence of a fixed point in the activation levels of that subnetwork. 

However, real physical systems cannot be controlled with infinite precision, and therefore cannot be manoeuvred into isolated fixed points in a continuous state space. Hence memory states are better defined in terms of the properties of neighbourhoods of fixed points. In particular, some concept of stability is required to ensure that the state of the system remains within a neighbourhood of a fixed point, under the inevitable perturbations and errors suffered by a system operating in a real physical environment.

There are two possible definitions of stability in this context (Hirsch and Smale, Differential Equations, Dynamical Systems and Linear Algebra, p185-186):

(i) A fixed point $x^*$ is stable if for every neighbourhood $U$ of $x^*$ there is a neighbourhood $U_1 \subseteq U$ such that any initial point $x(0) \in U_1$ remains in $U$, and therefore close to $x^*$, under the action of the flow $\phi_t$.

(ii) A fixed point $x^*$ is asymptotically stable if for every neighbourhood $U$ of $x^*$ there is a neighbourhood $U_1 \subseteq U$ such that any initial point $x(0) \in U_1$ not only remains in $U$, but also satisfies $\lim_{t \rightarrow \infty} x(t) = x^*$.
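For a smooth vector field there is a standard sufficient condition for the second, stronger property: if every eigenvalue of the Jacobian of the vector field at $x^*$ has negative real part, then $x^*$ is asymptotically stable. For the firing-rate model, in a regime where $f$ is linear with unit slope, the Jacobian takes the simple form $J = -\mathrm{diag}(1/\tau_i) + W$. A sketch, with illustrative weights:

```python
import numpy as np

def jacobian(W, tau):
    """Jacobian of dx/dt = -x/tau + f(Wx + b) at a fixed point around
    which f is linear with unit slope: J = -diag(1/tau) + W."""
    return -np.diag(1.0 / np.asarray(tau, dtype=float)) + W

# Illustrative symmetric inhibitory network.
W = -0.5 * (np.ones((3, 3)) - np.eye(3))
tau = np.ones(3)
eigs = np.linalg.eigvals(jacobian(W, tau))
print(sorted(np.real(eigs)))   # all negative real parts, so the fixed
                               # point is asymptotically stable
```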

The first condition seems more consistent with the nature of human memory: memories are not perfect, retaining only some aspects of the original experience, and they fluctuate with time (ultimately becoming hazy as the synaptic weights drift away from their original values). The second condition is much stricter. In conjunction with an ability to fix the weights of a subnetwork on a long-term basis, it seems consistent with the long-term fidelity of memory. 

At first sight, one might wish to design an artificial intelligence so that its memories are asymptotically stable fixed points in the possible firing rate patterns within an artificial neural network. However, doing so could well entail that those memories become as vivid and realistic to the host systems as their present-day experiences. It might become impossible to distinguish past from present experience. 

And that might not turn out so well...

Saturday, November 19, 2016

Neural networks and spatial topology

Neuro-mathematician Carina Curto has recently published a fascinating paper, 'What can topology tell us about the neural code?' The centrepiece of the paper is a simple and profound exposition of the method by which the neural networks in animal brains can represent the topology of space.

As Curto reports, neuroscientists have discovered that there are so-called place cells in the hippocampus of rodents which "act as position sensors in space. When an animal is exploring a particular environment, a place cell increases its firing rate as the animal passes through its corresponding place field - that is, the localized region to which the neuron preferentially responds." Furthermore, a network of place cells, each representing a different position, is collectively capable of representing the topology of the environment.

Rather than beginning with the full topological structure of an environmental space X, the approach of such research is to represent the collection of place fields as an open covering, i.e., a collection of open sets $\mathcal{U} = \{U_1,...,U_n \}$ such that $X = \bigcup_{i=1}^n U_i$. A covering is referred to as a good cover if every non-empty intersection $\bigcap_{i \in \sigma} U_i$ for $\sigma \subseteq \{1,...,n \}$ is contractible, i.e., if it can be continuously deformed to a point.

The elements of the covering, and the finite intersections between them, define the so-called 'nerve' $\mathcal{N(U)}$ of the cover, (the mathematical terminology is coincidental!):

$\mathcal{N(U)} = \{\sigma \subseteq \{1,...,n \}: \bigcap_{i \in \sigma} U_i \neq \emptyset \}$.

The nerve of a covering satisfies the conditions to be a simplicial complex: each element $U_i$ of the covering defines a vertex, and each non-empty intersection of $k+1$ elements defines a $k$-simplex. A simplicial complex inherits a topological structure from the embedding of its simplices into $\mathbb{R}^n$, hence the covering defines a topology. And crucially, the following lemma applies:

Nerve lemma: Let $\mathcal{U}$ be a good cover of X. Then $\mathcal{N(U)}$ is homotopy equivalent to X. In particular, $\mathcal{N(U)}$ and X have exactly the same homology groups.
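The nerve of a finite cover is easy to compute by brute force. As an illustrative example (not taken from Curto's paper), consider three open arcs covering a circle, each pair overlapping, with empty triple intersection. The resulting nerve is the boundary of a triangle, which is homotopy equivalent to the circle, just as the lemma demands:

```python
from itertools import combinations

def nerve(cover):
    """Return the nerve of a cover, given as a list of sets: all index
    subsets sigma with a non-empty common intersection."""
    n = len(cover)
    simplices = []
    for k in range(1, n + 1):
        for sigma in combinations(range(n), k):
            if set.intersection(*(cover[i] for i in sigma)):
                simplices.append(sigma)
    return simplices

# Cover of a discretised circle (positions 0..11) by three overlapping arcs.
U = [set(range(0, 5)),          # arc 1
     set(range(4, 9)),          # arc 2
     set(range(8, 12)) | {0}]   # arc 3, wrapping round to position 0
N = nerve(U)
print(N)   # three vertices and three edges, but no filled triangle

# Euler characteristic: vertices - edges + triangles - ...
chi = sum((-1) ** (len(s) - 1) for s in N)
print(chi)   # 0, the Euler characteristic of a circle
```

The nerve here consists of three vertices and three edges with no 2-simplex, so its Euler characteristic, $\chi = 3 - 3 = 0$, matches that of the circle.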

The homology (and homotopy) of a topological space provides a group-theoretic means of characterising the topology. Homology, however, provides a weaker, more coarse-grained level of classification than topology as such. Homeomorphic topologies must possess the same homology (thus, spaces with different homology must be topologically distinct), but conversely, a pair of topologies with the same homology need not be homeomorphic. 

Now, different firing patterns of the neurons in a network of hippocampal place cells correspond to different elements of the nerve of the corresponding collection of place fields. The simultaneous firing of $k$ neurons, $\sigma \subseteq \{1,...,n \}$, corresponds to the non-empty intersection $\bigcap_{i \in \sigma} U_i \neq \emptyset$ between the corresponding $k$ elements of the covering. Hence, the homological topology of a region of space is represented by the different possible firing patterns of a collection of neurons.

As Curto explains, "if we were eavesdropping on the activity of a population of place cells as the animal fully explored its environment, then by finding which subsets of neurons co-fire, we could, in principle, estimate $\mathcal{N(U)}$, even if the place fields themselves were unknown. [The nerve lemma] tells us that the homology of the simplicial complex $\mathcal{N(U)}$ precisely matches the homology of the environment X. The place cell code thus naturally reflects the topology of the represented space."
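That 'eavesdropping' procedure can be sketched directly. In the toy simulation below (the track, the place fields, and the firing rule are all illustrative assumptions), we record which subsets of cells co-fire as the animal visits every position, and recover the nerve without ever consulting the place fields' locations:

```python
from itertools import combinations

# Illustrative: three place cells on a circular track discretised into
# positions 0..11, with overlapping arc-shaped place fields.
fields = [set(range(0, 5)), set(range(4, 9)), set(range(8, 12)) | {0}]

# The animal visits every position; a cell fires whenever the animal is
# inside its place field. Record which subsets of cells co-fire.
cofiring = set()
for pos in range(12):
    active = tuple(i for i, field in enumerate(fields) if pos in field)
    if active:
        cofiring.add(active)

# Close the co-firing patterns under subsets to obtain a simplicial
# complex: this is the estimated nerve.
estimated_nerve = set()
for sigma in cofiring:
    for k in range(1, len(sigma) + 1):
        estimated_nerve.update(combinations(sigma, k))

print(sorted(estimated_nerve))
# three vertices and three edges, but no filled triangle: the same
# homotopy type as the circular track the animal explored
```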

This entails the need to issue a qualification to a subsection of my 2005 paper, 'Universe creation on a computer'. This paper was concerned with computer representations of the physical world, and attempted to place these in context with the following general definition:

A representation is a mapping $f$ which specifies a correspondence between a represented thing and the thing which represents it. An object, or the state of an object, can be represented in two different ways:

$1$. A structured object/state $M$ serves as the domain of a mapping $f: M \rightarrow f(M)$ which defines the representation. The range of the mapping, $f(M)$, is also a structured entity, and the mapping $f$ is a homomorphism with respect to some level of structure possessed by $M$ and $f(M)$.

$2$. An object/state serves as an element $x \in M$ in the domain of a mapping $f: M \rightarrow f(M)$ which defines the representation. 

The representation of a Formula One car by a wind-tunnel model is an example of type-$1$ representation: there is an approximate homothetic isomorphism, (a transformation which changes only the scale factor), from the exterior surface of the model to the exterior surface of a Formula One car. As an alternative example, the famous map of the London Underground preserves the topology, but not the geometry, of the semi-subterranean public transport network. Hence in this case, there is a homeomorphic isomorphism.

Type-$2$ representation has two sub-classes: the mapping $f: M \rightarrow f(M)$ can be defined by either (2a) an objective, causal physical process, or by (2b) the decisions of cognitive systems.

As an example of type-$2$b representation, in computer engineering there are different conventions, such as ASCII and EBCDIC, for representing linguistic characters with the states of the bytes in computer memory. In the ASCII convention, the byte 01000000 represents the symbol '@', whereas in EBCDIC it represents a space ' '. Neither relationship between linguistic characters and the states of computer memory exists objectively. In particular, the relationship does not exist independently of the interpretative decisions made by the operating system of a computer.
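The point is easily demonstrated: one and the same byte, 01000000 (hexadecimal 0x40), decodes to different characters under the two conventions. (Python's 'cp500' codec is one of several EBCDIC code pages; the choice is illustrative.)

```python
byte = b"\x40"                  # the single byte 01000000
print(byte.decode("ascii"))     # '@' under the ASCII convention
print(byte.decode("cp500"))     # ' ' (a space) under EBCDIC
```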

In 2005, I wrote that "the primary example of type-$2$a representation is the representation of the external world by brain states. Taking the example of visual perception, there is no homomorphism between the spatial geometry of an individual's visual field, and the state of the neuronal network in that part of the brain which deals with vision. However, the correspondence between brain states and the external world is not an arbitrary mapping. It is a correspondence defined by a causal physical process involving photons of light, the human eye, the retina, and the human brain. The correspondence exists independently of human decision-making."

The theorems and empirical research expounded in Curto's paper demonstrate very clearly that whilst there might not be a geometrical isometry between the spatial geometry of one's visual field and the state of a subsystem in the brain, there are, at the very least, isomorphisms between the homological topology of regions in one's environment and the state of neural subsystems.

On a cautionary note, this result should be treated as merely illustrative of the representational mechanisms employed by biological brains. One would expect that a cognitive system which has evolved by natural selection will have developed a confusing array of different techniques to represent the geometry and topology of the external world.

Nevertheless, the result is profound because it ultimately explains how you can hold a world inside your own head.

Monday, November 14, 2016

Trump and Brexit

One of the strangest things about most scientists and academics, and, indeed, most educated middle-class people in developed countries, is their inability to adopt a scientific approach to their own political and ethical beliefs.

Such beliefs are not acquired as a consequence of growing rationality or progress. Rather, they are part of what defines the identity of a particular human tribe. A particular bundle of shared ideas is acquired as a result of chance, operating in tandem with the same positive feedback processes which drive all trends and fashions in human society. Alex Pentland, MIT academic and author of 'Social Physics', concisely summarises the situation as follows:

"A community with members who actively engage with each other creates a group with shared, integrated habits and beliefs...most of our public beliefs and habits are learned by observing the attitudes, actions and outcomes of peers, rather than by logic or argument," (p25, Being Human, New Scientist Collection, 2015).

So it continues to be somewhat surprising that so many scientists and academics, not to mention writers, journalists, and the judiciary, continue to regard their own particular bundle of political and ethical ideas, as in some sense, 'progressive', or objectively true.

Never has this been more apparent than in the response to Britain's decision to leave the European Union, and America's decision to elect Donald Trump. Those who voted in favour of these respective decisions have been variously denigrated as stupid people, working class people, angry white men, racists, and sexists.

To take one example of the genre, John Horgan has written an article on the Scientific American website which details the objective statistical indicators of human progress over hundreds of years. At the conclusion of this article he asserts that Trump's election "reveals that many Americans feel threatened by progress, especially rights for women and minorities."

There are three propositions implicit in Horgan's statement: (i) The political and ethical ideas represented by the US Democratic party are those which can be objectively equated with measurable progress; (ii) Those who voted against such ideas are sexist; (iii) Those who voted against such ideas are racist.

The accusation that those who voted for Trump feel threatened by equal rights for women is especially puzzling. As many political analysts have noted, 42% of those who voted for Trump were female, which, if Horgan is to be believed, was equivalent to turkeys voting for Christmas.

It doesn't say much for Horgan's view of women that he thinks so many millions of them could vote against equal rights for women. Unless, of course, people largely tend to form political beliefs, and vote, according to patterns determined by the social groups to which they belong, rather than on the basis of evidence and reason. A principle which would, unfortunately, fatally undermine Horgan's conviction that one of those bundles of ethical and political beliefs represents an objective form of progress.

In the course of his article, Horgan defines a democracy "as a society in which women can vote," and also, as an indicator of progress, points to the fact that homosexuality was a crime when he was a kid. These are two important points to consider when we turn from the issue of Trump to Brexit, and consider the problem of immigration. The past decades have seen the large-scale migration of people into Britain who are enemies of the open society: these are people who reject equal rights for women, and people who consider homosexuality to be a crime.

So the question is as follows: Do you permit the migration of people into your country who oppose the open society, or do you prohibit it?

If you believe that equal rights for women and the non-persecution of homosexuals are objective indicators of progress, then do you permit or prohibit the migration of people into your country who oppose such progress?

It's a well-defined, straightforward question for the academics, the writers, the journalists, the judiciary, and indeed for all those who believe in objective political and ethical progress. It's a question which requires a decision, not merely an admission of complexity or difficulty.

Now combine that question with the following European Union policy: "Access to the European single market requires the free migration of labour between participating countries."

Hence, Brexit.

What unites Brexit and Trump is that both events are a measure of the current relative size of different tribes, under external perturbations such as immigration. It's not about progress, rationality, reactionary forces, conspiracies or conservatism. Those are merely the delusional stories each tribe spins as part of its attempts to maintain internal cohesion and bolster its size. It's more about gaining and retaining membership of particular social groups, and that requires subscription to a bundle of political and ethical ideas.

However, the thing about democracy is that it doesn't require the academics, the writers, the journalists, the judiciary, and other middle-class elites to understand any of this. They just need to lose.

Sunday, September 18, 2016

Cosmological redshift and recession velocities

In a recent BBC4 documentary, 'The Beginning and End of the Universe', nuclear physicist and broadcaster Jim Al-Khalili visits the Telescopio Nazionale Galileo (TNG). There, he performs some nifty arithmetic to calculate that the redshift $z$ of a selected galaxy is:
$$ z = \frac{\lambda_o - \lambda_e}{\lambda_e} = \frac{\lambda_o}{\lambda_e} - 1 \simeq 0.1\,,
$$ where $\lambda_o$ denotes the observed wavelength of light and $\lambda_e$ denotes the emitted wavelength. He then applies the following formula to calculate the recession velocity of the galaxy:
$$ v = c z = 300,000 \; \text{km s}^{-1} \cdot 0.1 \simeq 30,000 \; \text{km s}^{-1} \,,
$$ where $c$ is the speed of light.

After pausing for a moment to digest this fact, Jim triumphantly concludes with an expostulation normally reserved for use by people under the mental age of 15, and F1 trackside engineers.

It's worth noting, however, that the formula used here to calculate the recession velocity is only an approximation, valid at low redshifts, as Jim undoubtedly explained in a scene which hit the cutting-room floor. So, let's take a deeper look at the concept of cosmological redshift to understand what the real formula should be.

In general relativistic cosmology, the universe is represented by a Friedmann-Robertson-Walker (FRW) spacetime. Geometrically, an FRW model is a $4$-dimensional Lorentzian manifold $\mathcal{M}$ which can be expressed as a 'warped product' (Barrett O'Neill, Semi-Riemannian Geometry with Applications to Relativity, Academic Press, 1983):
$$ I \times_R \Sigma \,.
$$ $I$ is an open interval of the pseudo-Euclidean manifold $\mathbb{R}^1_1$ (the real line equipped with a negative-definite metric), and $\Sigma$ is a complete and connected $3$-dimensional Riemannian manifold. The warping function $R$ is a smooth, real-valued, non-negative function upon the open interval $I$, otherwise known as the 'scale factor'.

If we denote by $t$ the natural coordinate function upon $I$, and if we denote the metric tensor on $\Sigma$ as $\gamma$, then the Lorentzian metric $g$ on $\mathcal{M}$ can be written as
$$ g = -dt \otimes dt + R(t)^2 \gamma \,.
$$ One can consider the open interval $I$ to be the time axis of the warped product cosmology. The $3$-dimensional manifold $\Sigma$ represents the spatial universe, and the scale factor $R(t)$ determines the time evolution of the spatial geometry.

Now, a Riemannian manifold $(\Sigma,\gamma)$ is equipped with a natural metric space structure $(\Sigma,d)$. In other words, there exists a non-negative real-valued function $d:\Sigma \times \Sigma
\rightarrow \mathbb{R}$ which is such that

$$\eqalign{d(p,q) &= d(q,p) \cr
d(p,q) + d(q,r) &\geq d(p,r) \cr
d(p,q) &= 0 \; \text{iff} \; p = q}$$ The metric tensor $\gamma$ determines the Riemannian distance $d(p,q)$ between any pair of points $p,q \in \Sigma$: $\gamma$ defines the length of all curves in the manifold, and the Riemannian distance is the infimum of the lengths of all the piecewise smooth curves between $p$ and $q$.

In the warped product space-time $I \times_R \Sigma$, the spatial distance between $(t,p)$ and $(t,q)$ is $R(t)d(p,q)$. Hence, if one projects onto $\Sigma$, one has a time-dependent distance function on the points of space,
$$ d_t(p,q) = R(t)d(p,q) \,.
$$Each hypersurface $\Sigma_t$ is a Riemannian manifold $(\Sigma_t,R(t)^2\gamma)$, and $R(t)d(p,q)$ is the distance between $(t,p)$ and $(t,q)$ due to the metric space structure $(\Sigma_t,d_t)$.

The rate of change of the distance between a pair of points in space, otherwise known as the 'recession velocity' $v$, is given by
$$\eqalign{v = \frac{d}{dt} (d_t(p,q)) &= \frac{d}{dt} (R(t)d(p,q)) \cr &= R'(t)d(p,q) \cr &=
\frac{R'(t)}{R(t)}R(t)d(p,q) \cr &= H(t)R(t)d(p,q) \cr &=
H(t)d_t(p,q)\,. }
$$ The rate of change of distance between a pair of points is proportional to the spatial separation of those points, and the constant of proportionality is the Hubble parameter $H(t) \equiv R'(t)/R(t)$.

Galaxies are embedded in space, and the distance between galaxies increases as a result of the expansion of space, not as a result of the galaxies moving through space. Where $H_0$ denotes the current value of the Hubble parameter, and $d_0 = R(t_0)d$ denotes the present 'proper' distance between a pair of points, the Hubble law relates recession velocities to proper distance by the simple expression $v = H_0d_0$.

Cosmology texts often introduce what they call 'comoving' spatial coordinates $(\theta,\phi,r)$. In these coordinates, galaxies which are not subject to proper motion due to local inhomogeneities in the distribution of matter retain the same spatial coordinates at all times.

In effect, comoving spatial coordinates are merely coordinates upon $\Sigma$ which are lifted to $I \times \Sigma$ to provide spatial coordinates upon each hypersurface $\Sigma_t$. The radial coordinate $r$ of a point $q \in \Sigma$ is chosen to coincide with the Riemannian distance in the metric space $(\Sigma,d)$ which separates the point at $r=0$ from the point $q$. Hence, assuming the point $p$ lies at the origin of the comoving coordinate system, the distance between $(t,p)$ and $(t,q)$ can be expressed in terms of the comoving coordinate $r(q)$ as $R(t)r(q)$.

If light is emitted from a point $(t_e,p)$ of a warped product space-time and received at a point $(t_0,q)$, then the integral,
$$ d(t_e) = \int^{t_0}_{t_e}\frac{c}{R(t)} \, dt \, ,
$$ expresses the Riemannian distance $d(p,q)$ in $\Sigma$, (equivalent to the comoving coordinate distance), travelled by the light between the point of emission and the point of reception. The distance $d(t_e)$ is a function of the time of emission, $t_e$, a concept which will become important further below.

The present spatial distance between the point of emission and the point of reception is:
$$ R(t_0)d(p,q) = R(t_0) \int^{t_0}_{t_e}\frac{c}{R(t)} \, dt \,.
$$ The distance which separated the point of emission from the point of reception at the time the light was emitted is:
$$ R(t_e)d(p,q) = R(t_e) \int^{t_0}_{t_e}\frac{c}{R(t)} \, dt \,.
$$ The following integral defines the maximum distance in $(\Sigma,\gamma)$ from which one can receive light by the present time $t_0$:
$$ d_{max}(t_0) = \int^{t_0}_{0}\frac{c}{R(t)} \, dt \,.
$$ From this, cosmologists define something called the 'particle horizon':
$$ R(t_0) d_{max}(t_0) = R(t_0) \int^{t_0}_{0}\frac{c}{R(t)} \, dt
$$ We can only receive light from sources which are presently separated from us by, at most, $R(t_0) d_{max}(t_0)$. The size of the particle horizon therefore depends upon the time-dependence of the scale factor, $R(t)$.

Under the FRW model which currently has empirical support, (the 'concordance model', with cold dark matter, a cosmological constant $\Lambda$, and a mass-energy density equal to the critical density), the particle horizon is approximately 46 billion light years. This is the conventional definition of the present radius of the observable universe, before the possible effect of inflationary cosmology is introduced...
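The quoted figure can be checked numerically. Re-expressing the horizon integral in terms of redshift gives $R(t_0) d_{max}(t_0) = c\int_0^\infty dz/H(z)$, and adopting illustrative concordance parameters ($H_0 = 70$ km/s/Mpc, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$, radiation neglected; not the exact values of any particular survey):

```python
import numpy as np

# Illustrative concordance parameters (radiation neglected).
c = 299792.458       # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc
Om, OL = 0.3, 0.7    # matter and dark-energy density parameters

def H(z):
    """Hubble parameter as a function of redshift for flat Lambda-CDM."""
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + OL)

# Particle horizon R(t_0) d_max(t_0) = c * integral_0^infinity dz / H(z),
# approximated by the trapezoidal rule on a truncated grid (the integrand
# falls off like (1+z)^(-3/2), so the truncation error is small).
z = np.concatenate([np.linspace(0.0, 100.0, 200001),
                    np.linspace(100.0, 1.0e5, 200001)[1:]])
f = 1.0 / H(z)
horizon_Mpc = c * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))
horizon_Gly = horizon_Mpc * 3.2616e6 / 1.0e9   # 1 Mpc ~ 3.2616e6 light years
print(round(horizon_Gly, 1))   # roughly 46 billion light years
```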

To obtain an expression which links recession velocity with redshift, let us first return to the Riemannian/comoving distance travelled by the light that we detect now, as a function of the time of emission $t_e$:
$$ d(t_e) = \int^{t_0}_{t_e}\frac{c}{R(t)} \, dt \,.
$$ We need to replace the time parameter here with redshift, and to do this we first note that the redshift can be expressed as the ratio of the scale-factor at the time of reception to the time of emission:
$$ 1 + z = \frac{R(t_0)}{R(t)} \,.
$$ Taking the derivative of this with respect to time (Davis and Lineweaver, p19-20), and re-arranging obtains:
$$ \frac{dt}{R(t)} = \frac{-dz}{R(t_0) H(z)} \,.
$$ Substituting this in and executing a change of variables in which $t_0 \rightarrow z' = 0$ and $t_{e} \rightarrow z' = z$, we obtain an expression for the Riemannian/comoving distance as a function of redshift:
$$ d(z) = \frac{c}{R(t_0)} \int^{z}_{0}\frac{dz'}{H(z')} \, .
$$ From our general definition above of the recession velocity between a pair of points $(p,q)$ separated by a Riemannian/comoving distance $d(p,q)$ we know that:
$$ v = R'(t)d(p,q) \,.
$$ Hence, we obtain the following expression (Davis and Lineweaver Eq. 1) for the recession velocity of a galaxy detected at a redshift of $z$:
$$ v = R'(t) d(z) = \frac{c}{R(t_0)} R'(t) \int^{z}_{0}\frac{dz'}{H(z')} \, .
$$ To obtain the present recession velocity, one merely sets $t = t_0$:
$$ v = R'(t_0) d(z) = \frac{c}{R(t_0)} R'(t_0) \int^{z}_{0}\frac{dz'}{H(z')} \, .
$$ At low redshifts, such as the case of $z \simeq 0.1$, the integral reduces to:
$$ \int^{z}_{0}\frac{dz'}{H(z')} \approx \frac{z}{H(0)} = \frac{z}{H(t_0)} \, .
$$ Hence, recalling that $H(t) \equiv R'(t)/R(t)$, at low redshifts one obtains Jim Al-Khalili's:
$$ v = cz \,.
$$ Boom...mathematics!
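For the sceptical, a quick numerical check with illustrative concordance parameters ($\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$; the value of $H_0$ cancels out of the expression) confirms that $v = cz$ holds to within a few per cent at $z \simeq 0.1$, but fails badly at high redshift:

```python
import numpy as np

# Illustrative concordance parameters.
c = 299792.458       # speed of light, km/s
Om, OL = 0.3, 0.7

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for flat Lambda-CDM."""
    return np.sqrt(Om * (1.0 + z) ** 3 + OL)

def v_recession(z, n=100001):
    """Present recession velocity v = c H(t_0) * integral_0^z dz'/H(z')
    = c * integral_0^z dz'/E(z'), by the trapezoidal rule. Note that
    H0 cancels out of this expression."""
    zs = np.linspace(0.0, z, n)
    f = 1.0 / E(zs)
    return c * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))

for z in (0.1, 1.0, 3.0):
    print(z, round(v_recession(z)), round(c * z))
# at z = 0.1 the exact recession velocity agrees with v = cz to within a
# few per cent; at z = 3 the approximation overestimates it by roughly a
# factor of two (both exceed c, which for recession velocities is no
# contradiction with relativity, as Davis and Lineweaver emphasise)
```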

Monday, May 09, 2016

Brain of Britain

BBC Radio 4 has a general knowledge quiz-show modestly titled 'Brain of Britain'. The 2016 final of 'Brain of Britain' was broadcast this week. The four contestants were:

John, a dentist from Southampton.
Ian, a software developer from North Worcestershire.
Mike, a driver from Brechin.
Jane, a teacher and writer from Edinburgh.

After 7 mins, quiz-master Russell Davies poses the following question:

"In science, what name is given to the product of the mass of a particle and its velocity?"

Bit of a tricky one, eh? Science question. Still, at least it's an elementary science question, the type of question that anyone who didn't leave school at the age of 12 should be able to answer, surely?

In fact, this simple question elicited the following responses, in turn, from the contestants. And remember, these are the four finalists on a show entitled 'Brain of Britain':

John: "vector?"
Russell Davies: "No."
Ian: "acceleration."
Russell Davies: "Not that either, no"
Mike: "Force?"
Russell Davies: "No-o."
Jane: "Is it speed?"
Russell Davies: "It's not speed, it's momentum."

Still, it was Radio 4, so a science question does go somewhat outside the usual diet of politics, GCSE economics, and the arts. 

Sunday, April 17, 2016

Williams FW18/19 vs Ferrari F310/B

Mark Hughes has a useful survey of Ferrari's F1 fortunes from 1996 to the present day in the May edition of Motorsport Magazine. At the beginning of the article, it's noted that "The secret to the speed of the [Williams] FW18 in '96 and the following year's FW19 was exploiting a regulation loophole that allowed Newey to take the diffuser over the top of the plank to get a much bigger exit area - and therefore a more powerful diffuser effect...This arrangement made its debut late in '95 on the FW17B but amazingly Ferrari - and everyone else - had not noticed and thus did not incorporate it into their '96 cars."

So let's take a closer look at precisely what this loophole was.

The images below of the FW18's diffuser and its counterpart on the 1997 Ferrari F310B show that whilst both exploit the greater permitted rearward extension of the central region, they differ in the crucial respect that Newey opened up windows in the vertical walls of the central diffuser. This not only increased the effective exit area of the diffuser, but coupled it to the beam-wing, thereby increasing its mass-flow rate and its capacity to generate downforce.

How glaring was this regulation loophole? Well, let's study the 1997 F1 Technical regulations, which are available, pro bono, at MattSomersF1. The relevant propositions read as follows:

3.10) No bodywork behind the centre line of the rear wheels, and more than 15cm each side of the longitudinal centre line of the car, may be less than 30cm above the reference plane. 

This regulation permitted the central region of the diffuser to be 30cm wide. To give some idea of the relative dimensions here, the central box itself was only 30cm tall. So outside that central region, nothing was permitted to be lower than the roof of the central diffuser.

3.12) Between the rear edge of the complete front wheels and the front edge of the complete rear wheels all sprung parts of the car visible from underneath must form surfaces which lie on one of two parallel planes, the reference plane or the step plane.

This effectively defined the kick-up point of the diffuser to be the leading edge of the rear-wheels. 

The surface formed by all parts lying on the reference plane must extend from the rear edge of the complete front wheels to the centre line of the rear wheels, have minimum and maximum widths of 30cm and 50cm respectively and must be symmetrical about the centre line of the car. 

All parts lying on the reference and step planes, in addition to the transition between the two planes, must produce uniform, solid, hard, continuous, rigid (no degree of freedom in relation to the body/chassis unit), impervious surfaces under all circumstances.

This seems to be the regulation which Ferrari misinterpreted. Whilst 3.12 required all parts of the car visible from underneath to lie on a pair of parallel surfaces, with a continuous and impervious transition between them, this applied only between the trailing edge of the front wheels and the leading edge of the rear wheels. Moreover, although the definition of the reference plane extended to the centreline of the rear wheels, there was nothing whatsoever in the regulations which required a vertical surface behind the rear-wheel centreline to be continuous or impervious.

(Ferrari F310B diffuser. Photo by Alan Johnstone)
As an observation in passing, another part of regulation 3.10 should cause some puzzlement:

Any bodywork behind the rear wheel centre line which is more than 50cm above the reference plane, when projected to a plane perpendicular to the ground and the centre line of the car, must not occupy a surface greater than 70% of the area of a rectangle whose edges are 50cm either side of the car centre line and 50cm and 80cm above the reference plane.

As written, this regulation is somewhat opaque. A plane which is perpendicular to the centreline of the car is a well-defined concept: a vertical, transverse plane. In 3 dimensions such a plane intersects the ground plane along a transverse line, so it can only be 'perpendicular to the ground' in the dihedral sense of the two planes meeting at a right angle. Presumably that transverse vertical plane is the projection plane the regulation intends, but the drafting leaves the reader to infer it...

Saturday, April 09, 2016

Ferrari and thermal tyre modelling

Flavio Farroni, currently a Research Fellow at the University of Naples Federico II, has been developing a suite of tyre-performance models for several years in collaboration with both Ferrari GT and the Ferrari Formula 1 team. Flavio has now published some of his work, and it may be of more than a little interest to those outside Maranello.

The snappily-titled Development of a grip and thermodynamics sensitive procedure for the determination of tyre/road interaction curves based on outdoor test sessions, provides an overview of all three of Farroni's models.

TRICK appears to be a tool for inferring tyre performance characteristics from empirical telemetry data; TRT is a thermal tyre model, specifically designed to calculate bulk tyre-temperature in real-time; GrETA is a grip model which takes the output from TRT and incorporates the influence of tyre compound and road-surface roughness on tyre performance.

Farroni reports that "TRICK and TRT have been successfully employed together, constituting an instrument able to provide tyre thermal analysis, useful to identify the range of temperature in which grip performances are maximized, allowing to define optimal tyres and vehicle setup."

Recent work on the thermal tyre model, published as An Evolved version of Thermo Racing Tyre for Real Time Applications, is worth considering in some detail.

Here, Farroni's model calculates bulk and sidewall tyre temperatures by representing: (i) the heat generated by the rolling deformation of the tyre and the tangential stresses at the contact patch between the tread and road surface; (ii) the heat flux between the sidewalls, carcass, bulk and surface layers; (iii) the heat transfer due to conduction between the tyre and the road; (iv) the convective heat transfer from the gas inside the tyre to the inner surface of the sidewall and the 'inner liner' (aka the 'carcass'); and (v) the convective heat transfer from the surface of the tread and the outer surface of the sidewall to the external atmosphere. Farroni neglects radiation as a heat transfer mechanism.
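As a rough illustration of how such a lumped, one-node-per-layer model fits together, here is a minimal Python sketch. To be clear, this is not Farroni's TRT model: the layer structure and the five heat-transfer mechanisms follow the list above, but every function name, coefficient and value below is an invented placeholder for demonstration only.

```python
# Illustrative lumped-parameter thermal tyre model: one temperature node per
# layer (surface, bulk, carcass/inner liner, sidewall), advanced with forward
# Euler. All parameter names and values are hypothetical placeholders.

def step(T, dt, p):
    """Advance the layer temperatures by one time step dt.

    T: dict of layer temperatures in deg C ('surface', 'bulk', 'carcass', 'sidewall')
    p: dict of (hypothetical) model parameters
    """
    # (i) heat generated at the contact patch (sliding friction) and by
    # cyclic deformation of the rolling tyre (strain-energy heating)
    q_friction = p['mu'] * p['Fz'] * p['v_slide']
    q_rolling = p['k_rolling'] * p['v_car'] ** 2

    dT = {}
    # surface: friction input, (ii) conduction to bulk, (iii) conduction to
    # the road, (v) convection to the external airstream
    dT['surface'] = (q_friction
                     + p['h_layer'] * (T['bulk'] - T['surface'])
                     + p['h_road'] * (p['T_road'] - T['surface'])
                     + p['h_air'] * (p['T_air'] - T['surface'])) * dt / p['C_surface']
    # bulk: rolling deformation input, (ii) conduction to surface and carcass
    dT['bulk'] = (q_rolling
                  + p['h_layer'] * (T['surface'] - T['bulk'])
                  + p['h_layer'] * (T['carcass'] - T['bulk'])) * dt / p['C_bulk']
    # carcass / inner liner: (ii) conduction to bulk, (iv) convection from
    # the inflation gas
    dT['carcass'] = (p['h_layer'] * (T['bulk'] - T['carcass'])
                     + p['h_gas'] * (p['T_gas'] - T['carcass'])) * dt / p['C_carcass']
    # sidewall: (iv) convection from the inflation gas on the inner face,
    # (v) convection to the airstream on the outer face
    dT['sidewall'] = (p['h_gas'] * (p['T_gas'] - T['sidewall'])
                      + p['h_air'] * (p['T_air'] - T['sidewall'])) * dt / p['C_sidewall']
    return {k: T[k] + dT[k] for k in T}
```

With the heat sources switched off, every layer coupled to the ambient terms cools towards them; switch the friction term on and the surface node heats up first, with the bulk and carcass following via conduction, which is qualitatively the behaviour the plots below exhibit.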

This particular paper reports that the measured surface and carcass temperatures can be reproduced despite resort to a simple model in which the bulk, carcass and sidewalls are replaced by single nodes rather than a full-blown mesh. This simplification enables the model to run in real-time, and Farroni reproduces some interesting graphs (below).

There are four graphs here, one for each corner of the car. The horizontal axes represent time, and the vertical axes represent temperatures, which "are dimensionless because of confidentiality agreements."

Those sufficiently cursed to spend their working lives staring at telemetry in ATLAS will recognise the fluctuating signature of the surface tyre-temperatures, which suffer transient peaks under cornering. The peak surface temps exceed the bulk and carcass temps, but on average the surface is cooler than both. One can see that the outer sidewall temps are lower than the inner sidewall temps. Also possibly of interest is the fact that the bulk temps are lower than the inner liner temps, which implies there is a net heat flux from the inner liner into the bulk of the tyre.

Now, it's something of a pity that the vertical axes on those diagrams are "dimensionless because of confidentiality agreements." Happily, however, Farroni's PhD thesis is somewhat more forthcoming, printing a pair of fully-dimensionalised temperature plots on p98-99, (below).

The first diagram here plots the measured carcass ('inner liner') temps, the simulated carcass temps, and the calculated bulk temps. Once again, the calculated bulk temps are lower than the carcass temps throughout. The delta seems to be about 10 degrees at the outset, and increases over the course of what appears to be a stint. At 850 seconds long, the segment reproduced covers roughly 10 laps of data.

Farroni points out that "Proper time ranges have been selected to highlight thermal dynamics characteristic of each layer; in particular, as concerns bulk and inner liner, temperature decreasing trend is due to a vehicle slowdown before a pit stop." 

This is the drop in carcass and bulk temperatures which occurs as a tyre loses its ability to generate and/or retain heat over the course of a stint, due to physical wear and/or irreversible thermal degradation. All four corners suffer this temperature reduction, but the effect appears most marked on the left-front and left-rear. The left-rear drops from ~130 degrees to ~110 degrees, while the left-front drops from ~120 degrees to ~100.

All four corners begin in the range 115-130 degrees, so perhaps this was a set of Softs?

The second diagram (above) is "with reference to a different circuit," and once more displays simulated bulk temperatures lower than the carcass temps. In each case, the bulk temp seems to match the carcass temp at the outset, and then swiftly declines. Both front tyre carcass temps start at 100 degrees, whilst the rear carcass temps start at only 80 degrees.

The left-front carcass temp increases to about 110 degrees, the right-front remains fairly constant, the left-rear increases by almost 20 degrees, whilst the right-rear increases by about 10 degrees. All of which might suggest a set of Mediums?

As a final flourish, Farroni also studies the rather alarming effect that exhaust blown diffusers had on tyre temps (below), suggesting that rear bulk temps could have reached ~200 degrees in some regions.

Farroni suggests that this would "bring the tyre to a too fast degradation and to average temperatures not able to maximize the grip." Quite.

Friday, March 25, 2016

The polarization of gravitational waves

In general relativity, a plane gravitational wave, such as that apparently detected by the LIGO apparatus in September 2015, is a type of transverse shear wave in the geometry of space.

To understand this, first consider the concept of a transverse wave in general relativity.

Recall that observers in general relativity are represented by timelike curves, and instantaneous observers correspond to particular points along timelike curves.

For an instantaneous observer, represented by the tangent vector $Z$ to a timelike curve at a point $z$, there is a local version of Euclidean space, dubbed the local rest-space $R = Z^\bot$, and defined as the set of (spacelike) vectors orthogonal to $Z$.

A plane gravitational wave travels in a spatial direction specified by a propagation vector $k \in R = Z^\bot$, and distorts the geometry of space in the two-dimensional plane $T$ orthogonal to $k$ in the observer's local rest-space $R$. It is in this sense that a gravitational wave is a transverse wave.

In particular, a plane gravitational wave is also a shear wave, and understanding this requires an explanation of the polarization of gravitational waves.

In the simplest case, a linearly-polarized gravitational wave alternately stretches space in one direction $e_x \in T$, and compresses it in a direction $e_y \in T$ at right-angles to $e_x$, in a manner which distorts circles into ellipses, but preserves spatial areas.

However, linearly polarized plane gravitational waves are nothing more than very special cases, and the purpose of this post is largely to put linear polarization into context.

But before digging a little deeper, it's worthwhile first to recall the characteristics of an electromagnetic plane wave, and its possible polarizations.

Just like a gravitational wave, an electromagnetic plane wave has a direction of propagation $k$. The electric field $E$ and the magnetic field $B$ are then defined by perpendicular vectors of oscillating magnitude in a plane which is orthogonal to the propagation vector $k$. However, it is the direction in which the electric field vector points which defines the plane of polarization.

In the case of linear polarization, the plane of the electric field vector is constant. The electric field merely oscillates back-and-forth within this plane.

However, the most general case of an electromagnetic plane wave is one which is elliptically polarized. This is a superposition of two perpendicular plane waves, which may differ in either phase or amplitude. The polarization direction of one is separated by 90 degrees from the polarization direction of the other. The net effect is that the tip of the resultant electric field vector will sweep out an ellipse in the plane orthogonal to the direction of propagation.

If the relative phases of the component waves differ by 90 degrees, and the amplitudes of the two components are the same, then this reduces to the special case of circular polarization. In this event, the tip of the resultant electric field vector will sweep out a circle in the plane orthogonal to the direction of propagation.
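These polarization states are easy to make concrete numerically. In the sketch below (a generic illustration, with symbols $A_1$, $A_2$, $\delta$ of my own choosing rather than drawn from any particular text), the field tip is built from two perpendicular components: equal amplitudes with a 90-degree phase lag keep the tip at a constant radius, i.e. circular polarization, while a zero phase difference pins the tip to a fixed line, i.e. linear polarization.

```python
import math

# The tip of the electric-field vector of a plane EM wave, written as a
# superposition of two perpendicular components with amplitudes A1, A2 and
# relative phase delta (all values illustrative).

def e_field(t, A1, A2, delta, omega=1.0):
    """Components of E in the plane orthogonal to the propagation vector k."""
    Ex = A1 * math.cos(omega * t)
    Ey = A2 * math.cos(omega * t - delta)
    return Ex, Ey

# Circular polarization: equal amplitudes, 90-degree phase lag, so the tip
# stays at a constant distance from the origin as the phase advances.
radii = [math.hypot(*e_field(t * 0.1, 1.0, 1.0, math.pi / 2))
         for t in range(100)]

# Linear polarization: zero phase difference, so Ey/Ex is constant and the
# tip merely oscillates back and forth along a fixed line.
```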

One important distinction between gravitational waves and electromagnetic waves is that, whilst the most general case of an electromagnetic wave is defined as a linear combination of two components oriented at 90 degrees to each other, the most general case of a plane gravitational wave is defined as a linear combination of two components oriented at 45 degrees to each other.

To understand this, first note that the wave-fronts of a plane gravitational wave are represented by a foliation of space-time into a 1-parameter family of null hypersurfaces, each of which $\mathscr{W}$ is defined by a particular value of the function $\phi = t - z$.

This assumes that the z-coordinate is aligned with the direction of propagation of the wave. In general, one might be interested in surfaces with a constant value of $\omega (t - k \cdot x)$, with $\omega$ being the wave frequency and $k$ being the propagation vector.

Tangent to these null hypersurfaces $\mathscr{W}$ is a null vector field $Y$ which defines the space-time propagation vector of the gravitational wave (Sachs and Wu, General relativity for mathematicians, 1977, p244). The projection of the null vector field $Y$ into an observer's local rest-space at a point provides the spatial propagation vector $k$.

If one imagines space-time as a 2-dimensional plane, with the time axis $t$ as the vertical axis, and the spatial direction $z$ as the horizontal axis, then the null hypersurfaces of constant $\phi$ correspond to diagonal lines running from the bottom left to the top-right. These represent a gravitational wave passing from the left to the right of the diagram. An observer corresponds to a timelike curve, tracing a path from the bottom to the top of the diagram.

In Christian Reisswig's diagram below, (taken from a different application), the null hypersurfaces are those labelled as $u$=constant, and the worldline of an observer corresponds to that labelled as $R_\Gamma$.

As the proper time of the observer elapses, the observer's worldline intersects a sequence of the null hypersurfaces. This corresponds to the different phases of the wave passing through the observer's point-in-space. Hence $\phi$ can be thought of as defining the phase of a plane gravitational wave.

In terms of the metric tensor, a gravitational wave is typically represented as a perturbation $h_{\mu\nu}$ on a background space-time geometry $\bar{g}_{\mu\nu}$: $$ g_{\mu\nu} = \bar{g}_{\mu\nu} + h_{\mu\nu} $$ The perturbation is represented as follows: $$ h_{\mu\nu} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & h_+(\phi) & h_\times(\phi) & 0 \\ 0 & h_\times(\phi) & -h_+(\phi) & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \; . $$ The two components, or polarizations, of the wave are denoted as $h_+(\phi)$ and $h_\times(\phi)$. They form a net polarization tensor $h(\phi)$, which can be extracted from the metric tensor above, and written as follows: $$ h(\phi) = h_+(\phi)(e_x \otimes e_x - e_y \otimes e_y) + h_\times(\phi)(e_x \otimes e_y + e_y \otimes e_x) $$

Now, suppose that the source of a gravitational wave is a gravitationally bound system consisting of two compact objects (i.e., black holes or neutron stars). The plane of that orbital system will be inclined at an angle $\iota$ between 0 and 90 degrees to the line-of-sight of the observer. The case $\iota = 0$ corresponds to a system which is face-on to the observer, and the case $\iota = \pi/2$ corresponds to a system which is edge-on to the observer.

The time-variation of a plane gravitational wave emitted by such a compact binary system, passing through a distant observer's point-of-view, is effectively specified by the phase-dependence of the two components of the wave: $$ h_+(\phi) = A(1+ \cos^2\iota) \cos (\phi) \\ h_\times(\phi) = -2A \cos \iota \sin \phi $$ $A$ determines the amplitude of the wave.

This is the general case, corresponding to elliptical polarization. The orbital paths of the stars or black holes in the binary system will appear as ellipses. In terms of the basis vectors in which the metric tensor perturbation is expressed, $e_x$ is determined by the long axis of the ellipse, and $e_y$ is perpendicular to $e_x$ in the plane orthogonal to the line-of-sight.

There are two special cases: when the system is face-on, the gravitational wave exhibits circular polarization; and when the system is edge-on, the wave exhibits linear polarization.

To make this explicit, consider first the case where the source of the wave is edge-on to the observer. $\iota = \pi/2$, hence $\cos^2 \iota = \cos \iota = 0$, and it follows that: $$ h_+(\phi) = A(1+ \cos^2\iota) \cos (\phi) = A \cos \phi \\ h_\times(\phi) = -2A \cos \iota \sin \phi = 0 $$ One of the polarization components has vanished altogether, hence from the perspective of the distant observer, space alternately stretches and contracts along a fixed pair of perpendicular axes. One of these axes, $e_x$, is determined by the orientation of the orbital plane of the source system, seen edge-on, and the other, $e_y$, is the axis perpendicular to $e_x$ in the plane orthogonal to the line-of-sight. The polarization tensor reduces to: $$\eqalign{ h(\phi) &= h_+(\phi)(e_x \otimes e_x - e_y \otimes e_y) \cr &= A \cos \phi(e_x \otimes e_x - e_y \otimes e_y)} $$ The negative sign associated with $e_y \otimes e_y$ entails that as space is stretching in direction $e_x$, it is contracting in direction $e_y$. This linear polarization is the simplest special case of a plane gravitational wave, as beautifully demonstrated in the animation below from Markus Possel:

In the other special case, the case of a face-on system, $\iota$ = 0. It follows that $\cos^2 \iota = \cos \iota = 1$, hence: $$ h_+(\phi) = A(1+ \cos^2\iota) \cos (\phi) = A \cos \phi + A \cos \phi = 2A \cos \phi \\ h_\times(\phi) = -2A \cos \iota \sin \phi = -2A \sin \phi $$ In this case, then, the two components have equal amplitude, $2A$, and differ by virtue of the fact that the $h_\times$ component lags 90 degrees behind the $h_+$ component. This is the case of circular polarization. As seen in the Markus Possel animation below, the net effect is to produce a rotation of the shear axes.
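These two limiting cases can be checked numerically. The sketch below simply encodes the $h_+(\phi)$ and $h_\times(\phi)$ expressions given earlier, with $A$ an arbitrary overall amplitude, and confirms that the cross polarization vanishes edge-on, while face-on the two components combine to a constant squared amplitude, as befits circular polarization:

```python
import math

# The two polarization components of a plane gravitational wave from a
# compact binary, as functions of phase phi and inclination iota.

def h_plus(phi, iota, A=1.0):
    return A * (1.0 + math.cos(iota) ** 2) * math.cos(phi)

def h_cross(phi, iota, A=1.0):
    return -2.0 * A * math.cos(iota) * math.sin(phi)

# Edge-on (iota = pi/2): cos(iota) = 0, so h_cross vanishes and only
# h_plus = A*cos(phi) survives -> linear polarization.
# Face-on (iota = 0): both components have amplitude 2A with a 90-degree
# lag, so h_plus^2 + h_cross^2 = 4A^2 at every phase -> circular polarization.
```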

Sunday, February 28, 2016

Formula One and relativity

It would be not inaccurate to say that relativity theory has something of a low profile in Formula One. The recent announcement that gravitational waves have been detected for the first time aroused little more than a grudging blip of interest within the region of the autistic spectrum occupied by F1 vehicle dynamicists, strategists, and aerodynamicists.

It's worth noting, however, that modern F1 operations are heavily dependent upon relativity theory. F1 utilises GPS for its timing systems, and almost all teams use GPS for their trajectory analysis; and GPS, of course, is crucially dependent upon relativity theory.

To accurately establish the position of a car on the surface of the Earth, a GPS receiver must compare the time-stamps on signals it receives from multiple satellites, each of which orbits the Earth at about 14,000 km/hr. To maintain the desired positional accuracy, the time on each such satellite must be known to within an accuracy of 20-30 nanoseconds.

However, there are two famous relativistic effects which have to be compensated for to maintain such accuracy: (i) special relativistic time dilation; (ii) general relativistic time dilation inside a gravitational well.

Because the satellites are in motion at high speed relative to the reference frame of a car on the surface of the Earth, special relativity slows their clocks by about 7 microseconds per day. Conversely, because a car lies deeper inside a gravitational well than the satellites, its clock-ticks will slow down by about 45 microseconds per day relative to theirs. The net effect is that the clocks on-board the satellites tick faster than those on-board an Earth-bound GPS receiver by about 38 microseconds per day.
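A back-of-envelope check of these figures, using the relativistic drift values commonly cited for GPS (e.g. by Pogge), takes only a few lines of Python:

```python
# Net relativistic clock drift of a GPS satellite, and the ranging error it
# would cause if left uncorrected. A timing error dt corresponds to a
# pseudo-range error of roughly c * dt.

C = 299_792_458.0          # speed of light, m/s

sr_us_per_day = -7.0       # special-relativistic time dilation (clocks slow)
gr_us_per_day = +45.0      # weaker gravity in orbit (clocks fast)
net_us_per_day = sr_us_per_day + gr_us_per_day   # net drift, microseconds/day

# Accumulated ranging error per day if the drift were uncorrected:
range_error_km_per_day = C * net_us_per_day * 1e-6 / 1000.0

# And the 20-30 nanosecond clock accuracy quoted above corresponds to a
# positional accuracy of order:
position_accuracy_m = C * 30e-9
```

The net drift comes out at +38 microseconds per day, which corresponds to roughly 11 km of accumulated pseudo-range error per day, consistent with Pogge's "about 10 kilometers each day"; and 30 ns of clock error corresponds to roughly 9 metres of position error.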

As Richard W. Pogge points out, "This sounds small, but the high-precision required of the GPS system requires nanosecond accuracy, and 38 microseconds is 38,000 nanoseconds. If these effects were not properly taken into account, a navigational fix based on the GPS constellation would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day! The whole system would be utterly worthless for navigation in a very short time. This kind of accumulated error is akin to measuring my location while standing on my front porch in Columbus, Ohio one day, and then making the same measurement a week later and having my GPS receiver tell me that my porch and I are currently somewhere in the air kilometers away."

Which is worth recalling the next time GPS reveals that Dudley Duoflush is repeatedly missing the apex in Turn 4, or overtook under yellow-flag conditions between Turns 7 and 8.

Saturday, February 27, 2016

Red Bull's T-tray wing

Red Bull appeared at the first pre-season Formula 1 test this week with an interesting wing perched atop the T-tray splitter beneath the chassis. As Craig Scarborough points out, the tips of this wing act as vortex generators. Craig also points out that the idea has been tried before, on the Brawn 001 in 2009.

The interesting thing about such a device is that it's profiled in the manner of an aircraft wing, generating low-pressure above and high pressure below. The consequence of this is that it generates vortices rotating in the same sense as the Y250 vortex on each side of the chassis.

So, for example, looking from a perspective in front of the car, and focusing on the right-hand-side of the chassis, both the Y250 vortex and the T-tray wing vortex rotate in an anticlockwise direction. On the left-hand-side, they both rotate in a clockwise direction.

Now, this is in contrast with the influence provided by a J-vane vortex. As alluded to in Jonathan Pegrum's 2006 academic work, when a vortex spinning around an axis pointing in the direction of the freestream flow passes close to a solid surface, it tends to pull a counter-rotating vortex off the boundary layer of that surface. Hence, when the Y250 vortex passes the J-vanes hanging from the underside of the raised nose on a Formula 1 car, it creates a pair of counter-rotating vortices on each side of the chassis.

For vortices sharing approximately the same rotation axis, it is a general rule that counter-rotating vortices tend to repel each other, whereas co-rotating vortices tend to attract each other. In fact, for a time, co-rotating vortices will orbit a common center of vorticity. This situation will persist so long as they are separated by a distance large compared to their vortex-core radii. Eventually, however, viscous diffusion will enlarge their respective cores, and they will begin to deform each other, eject arms of vorticity, and finally merge into a single, larger vortex.
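This behaviour can be illustrated with the simplest possible model: two inviscid 2-D point vortices advecting each other via the 2-D Biot-Savart law. All strengths and positions below are arbitrary illustrative values, and point vortices have no cores, so the final viscous-merger stage lies outside this sketch's scope:

```python
import math

# Two-point-vortex sketch: co-rotating vortices orbit their common centre of
# vorticity at roughly constant separation; a counter-rotating pair instead
# translates as a unit.

def induced_velocity(pos, vortex_pos, gamma):
    """Velocity at pos induced by a point vortex of circulation gamma."""
    dx, dy = pos[0] - vortex_pos[0], pos[1] - vortex_pos[1]
    r2 = dx * dx + dy * dy
    # 2-D Biot-Savart: speed gamma/(2*pi*r), perpendicular to the separation
    return (-gamma * dy / (2 * math.pi * r2),
            gamma * dx / (2 * math.pi * r2))

def advance(p1, p2, g1, g2, dt, steps):
    """Integrate the mutual advection of two vortices with forward Euler."""
    for _ in range(steps):
        v1 = induced_velocity(p1, p2, g2)
        v2 = induced_velocity(p2, p1, g1)
        p1 = (p1[0] + v1[0] * dt, p1[1] + v1[1] * dt)
        p2 = (p2[0] + v2[0] * dt, p2[1] + v2[1] * dt)
    return p1, p2

# Co-rotating pair (equal circulations): each vortex circles the midpoint.
p1, p2 = advance((-0.5, 0.0), (0.5, 0.0), 1.0, 1.0, 1e-3, 2000)

# Counter-rotating pair (opposite circulations): the pair translates
# together, like the trailing vortex pair behind a wing.
q1, q2 = advance((-0.5, 0.0), (0.5, 0.0), 1.0, -1.0, 1e-3, 2000)
```

In the co-rotating case the separation stays essentially constant while both vortices rotate about the (stationary) centroid; in the counter-rotating case both vortices drift off in the same direction at the same speed, with their separation exactly preserved.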

Because the J-vane vortex rotates in the opposite sense to the Y250, it tends to repel it. Hence, the J-vane can be used to push the Y250 into the optimal position to fulfil its ultimate purpose, which is to push the front-wheel wake further outboard. 

However, the J-vane vortex can only push the Y250. Fitting a T-tray wing, which presumably generates vortices with the same sense of rotation as the Y250 itself, conceivably provides Red Bull with the ability to push and pull the position of the Y250, from two different downstream locations. That possibly improves their ability to fine-tune the position of the Y250 in both a vertical and lateral direction. Alternatively, of course, it may just be designed to interact with the vorticity generated by the bargeboards et al.

Whilst Brawn tried the same concept in 2009, note that the Brawn wasn't fitted with J-vanes, and the presence of a double-diffuser might have reduced the sensitivity of the diffuser to ingress of the front-wheel wake anyway.