Thursday, July 19, 2018

Understanding tyre compound deltas

Pirelli revealed at the beginning of the 2018 F1 season that it was using new software to help it choose the three tyre compounds available at each race. Pirelli's Racing Manager Mario Isola commented:

"It's important that we collect the delta lap times between compounds to decide the selection. If we confirm the numbers that we have seen in Abu Dhabi [testing in November] - between soft and supersoft we had 0.6s, and supersoft to ultrasoft was 0.4s - depending on that, we can fine tune the selection and try to choose the best combination."

Getting the tyre compound deltas correct is indeed a crucial part of F1 race strategy, so let's review some of the fundamental facts about these numbers. The first point to note is that tyres are a performance multiplier, rather than a performance additive.

To understand this in the simplest possible terms, consider the following equation:
$$F_y = \mu F_z $$ This states that the lateral force $F_y$ generated by a tyre is a product of the coefficient of friction $\mu$, and the vertical load $F_z$. All other things being equal, the greater the lateral force generated by a car in the corners, the faster the laptime. (Note, however, that in many circumstances one would wish to work with lateral acceleration rather than lateral force, given the influence of car-mass on lateral acceleration).

Now, suppose we have a base compound. Let's call it the Prime, and let's denote its coefficient of friction as $\mu_P$. Let's consider a fixed car running the Prime tyre with: (i) a light fuel-load, and (ii) a heavy fuel-load. 

Let's really simplify things by supposing that the performance of the car, and its laptime, can be reduced to a single vertical load due to downforce alone, and a single lateral force number. When the car is running a heavy fuel load, it will generate a downforce $F_z$, but when it's running a light fuel load it will be cornering faster, so the vertical load due to downforce will be greater, $F_z + \delta F_z$. (Recall that the contribution of greater fuel weight to vertical load results in a net loss of lateral acceleration due to weight transfer). The lateral forces will be as follows:

Prime tyre. High fuel-load

$\mu_P  F_z $

Prime tyre. Low fuel-load

$\mu_P (F_z + \delta F_z) = \mu_P F_z + \mu_P\delta F_z$

Now, let's suppose that there is a softer tyre compound available. Call it the Option. Its coefficient of friction $\mu_O$ will be greater than that of the Prime, $\mu_O = \mu_P + \delta \mu$. 

Consider the performance of the same car on the softer compound, again running a light fuel-load and a heavy fuel-load:

Option tyre. High fuel-load

$\mu_O  F_z = ( \mu_P +\delta \mu )  F_z $

Option tyre. Low fuel-load

$\mu_O (F_z + \delta F_z) = ( \mu_P +\delta \mu )(F_z + \delta F_z) $

So far, so good. Now let's consider the performance deltas between the Option and the Prime, once again using lateral force as our proxy for laptime. 

High-fuel Option-Prime delta

$( \mu_P +\delta \mu )  F_z-\mu_P  F_z = \delta \mu F_z$

Low-fuel Option-Prime delta

$( \mu_P +\delta \mu )(F_z + \delta F_z)-\mu_P (F_z + \delta F_z)=\delta \mu (F_z + \delta F_z)$

Notice that sneaky extra term, $\delta \mu \delta F_z$, in the expression for the low-fuel compound delta? As a consequence of that extra term, the Option-Prime delta is greater on a low fuel load than a heavy fuel-load. As promised, tyre-grip is a performance multiplier.
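The arithmetic can be made concrete with a minimal numerical sketch. All of the figures below are invented purely for illustration; they are not real tyre or load data:

```python
# Illustrative figures only: the grip and load numbers are invented for this sketch.
mu_prime = 1.80        # coefficient of friction, Prime compound
d_mu = 0.05            # grip increment of the softer Option compound
Fz = 5000.0            # vertical load due to downforce (N), high fuel-load
d_Fz = 500.0           # extra load generated by faster low-fuel cornering

# Option-Prime lateral-force deltas at each fuel load
delta_high_fuel = d_mu * Fz              # delta_mu * F_z
delta_low_fuel = d_mu * (Fz + d_Fz)      # delta_mu * (F_z + delta F_z)

# The low-fuel delta exceeds the high-fuel delta by the cross-term delta_mu * delta F_z
print(delta_high_fuel, delta_low_fuel, d_mu * d_Fz)
```

Whatever numbers are substituted, the gap between the two deltas is always exactly the cross-term $\delta \mu \delta F_z$.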

If you scrutinise the compound deltas in each FP2 session, you'll see that the low-fuel compound deltas from the beginning of the session are indeed greater than those from the high-fuel running later in the session. 

The compound deltas input into race-strategy software need to be high-fuel deltas, so one could make quite a mistake by using the low-fuel deltas instead. In fact, parties using low-fuel deltas might be surprised to see more 1-stop races than they were expecting.

There is another important consequence of the fact that tyres are performance multipliers: the pace gap between faster cars and slower cars increases when softer tyres are supplied. The faster cars have more downforce, and therefore more vertical load $F_z$ than the slower cars, at any equivalent fuel-weight. The delta in vertical load is multiplied by the delta in the coefficient of friction, and all things being equal, the faster cars duly benefit from that extra $\delta \mu \delta F_z$.  

Of course, that qualification about 'all things being equal', hides some complex issues. For example, softer tyres have a lower 'cornering stiffness', (i.e., the gradient of lateral force against slip-angle). A softer tyre therefore generates peak grip at a higher slip-angle than a harder tyre. If the aerodynamics of a car are particularly susceptible to the steering angle of the front wheels, then such a car might struggle to gain proportionately from the greater grip theoretically afforded by a softer tyre. Such a car would also appear to gain, relative to its opposition, towards the end of a stint, when the tyres are worn and their cornering stiffness increases.

Notwithstanding such qualifications, the following problem presents itself: the softer the tyres supplied to the teams in an attempt to enhance the level of strategic variety, the greater the pace-gaps become, and the less effect that strategic variety has...

Tuesday, May 15, 2018

Front-wing in yaw

Armchair aerodynamicists might be interested in a 2015 paper, 'Aerodynamic characteristics of a wing-and-flap in ground effect and yaw'. 

The quartet of authors from Cranfield University analyse a simple raised-nose and front-wing assembly, consisting of a main-plane and a pair of flaps, equipped with rectangular endplates. On each side of the wing, three vortices are created: an upper endplate vortex, a lower endplate vortex, and a vortex at the inboard edge of the flap. (The latter is essentially a weaker version of the Y250 which plays such an important role in contemporary F1 aerodynamics). 

The authors assess their front-wing in yaw, using both CFD and the wind-tunnel, and make the following observations:

1) In yaw, vortices generated by a lateral movement of air in the same direction as the free-stream, increase in strength, whereas those which form due to air moving in the opposite direction are weakened.

2) The leeward side of the wing generates more downforce than the windward side. This is due to an increase in pressure on the leeward pressure surface and a decrease in suction on the windward suction surface. The stagnation pressure is increased on the inner side of the leeward endplate, and the windward endplate partially blocks the flow from entering the region below the wing.

3) A region of flow separation occurs on the windward flap suction surface.

4) Trailing edge separation occurs in the central region of the wing. This is explained by the following: (i) The aluminium wing surface was milled in the longitudinal direction, hence there is increased surface roughness, due to the material grain, for air flowing spanwise across the surface; (ii) There is a reduction in the mass flow-rate underneath the wing; (iii) The effective chord-length has increased in yaw.

5) The vortices follow the free-stream direction. Hence, for example, the windward flap-edge vortex is drawn further towards the centreline when the wing is in yaw.

One comment of my own concerns the following statement:

"The yaw rate for a racing car can be high, up to 50°/sec, but is only significant aerodynamically during quick change of direction events, such as initial turn-in to the corner. The yaw angle, however, is felt throughout the corner and is usually in the vicinity of 3-5°. Although the yaw angle changes throughout the corner the yaw rate is not sufficiently high, other than for the initial turn-in event, to warrant any more than quasi-static analysis."

This is true, but it's vital to point out that the stability of a car in the dynamic corner-entry condition determines how much speed a driver can take into a corner. If the car is unstable at the point of corner-entry, the downforce available in a quasi-static state of yaw will not be consistently accessible.

Aerodynamicists have an understandable tendency to weight conditions by their 'residency time', i.e., the fraction of the grip-limited portion of a lap occupied by that condition. The fact that the high yaw-rate corner-entry condition lasts for only a fraction of a second is deceptive. Minimum corner speed depends not only on the downforce available in a quasi-static state of yaw, but also on whether the driver can control the transition from the straight-ahead condition to the quasi-static state of yaw.

Sunday, April 29, 2018

Local cosmography and the Wiener filter

The local cosmic neighbourhood has recently been mapped in spectacular fashion by the Cosmicflows research programme, yielding papers in Nature, and an article in Scientific American. The images generated are stunning.
Perspective view of the X-Y equatorial plane of our cosmic neighbourhood in supergalactic coordinates. The density of matter is represented by colour contours, deep blue regions indicating voids, red indicating zones of high density. The velocity streams are represented in white, with individual galaxies as spheres.

There's also an interesting mathematical back-story here because the work has been underpinned by techniques developed over 20 years ago by Yehuda Hoffman and colleagues. Drawing upon the exposition provided by Hoffman, let's take a look at these methods, beginning with Gaussian fields.

Gaussian fields

A random field is a field in which the value at each point is sampled from a univariate probability distribution, and the values at multiple points are sampled from a multivariate distribution. Typically, the sampled values at different points are spatially correlated, with a degree of correlation that declines with distance.

A Gaussian field is a type of random field in which the distribution at each point is Gaussian, and the relationship between the values at different points is given by a multivariate Gaussian distribution. The properties of a Gaussian field are specified by its covariance matrix. Assuming a Gaussian field $\mathbf{s}$ of zero mean, this is denoted as:
$$\mathbf{S} = \langle \mathbf{s} \mathbf{s}^T\rangle$$ Treating $\mathbf{s}$ as a vector of random variables, this expression is understood as the 'outer-product' of the column vector $\mathbf{s}$ with its transpose row vector $\mathbf{s}^T$:
$$S_{ij} = \langle s_i s_j \rangle$$
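As a numerical sketch of this relation, one can draw samples from an assumed covariance and check that the empirical average of the outer products approaches it. The squared-exponential correlation below is merely an invented example, not a physical power spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed covariance: correlation decaying with distance between points on a
# 1-D grid, standing in for the field's auto-correlation. The small diagonal
# jitter keeps the matrix numerically positive-definite.
x = np.linspace(0.0, 10.0, 40)
S = np.exp(-0.5 * (x[:, None] - x[None, :])**2) + 1e-9 * np.eye(len(x))

# Draw zero-mean realizations s ~ N(0, S); the empirical average of the
# outer products s s^T approaches the covariance matrix S = <s s^T>.
samples = rng.multivariate_normal(np.zeros(len(x)), S, size=20000)
S_hat = samples.T @ samples / samples.shape[0]

print(np.max(np.abs(S_hat - S)))  # sampling error, shrinking as size grows
```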

Given the Gaussian field $\mathbf{s}$, the generation of the measured data-set $\mathbf{d}$  is represented as follows:
$$\mathbf{d} = \mathbf{Rs}  + \mathbf{\epsilon}$$ $\mathbf{R}$ represents the characteristics of the measuring instrument, and $\mathbf{\epsilon}$ represents measurement noise. The covariance matrix of the noise is denoted as follows:
$$\mathbf{N} = \langle \mathbf{\epsilon} \mathbf{\epsilon}^T\rangle$$ The transformation $\mathbf{R}$ is typically a convolution. In astronomical measurements of the sky it is often referred to as the Point Spread Function. It specifies how the energy attributed to a particular pixel of the sky is actually a weighted sum of the energy belonging to a range of adjacent pixels. Similarly, in spectroscopy it specifies how the energy detected at each particular wavelength is actually a weighted sum of the energy belonging to adjacent wavelengths, thereby smearing out the energy spectrum.

Given the true spectrum $f(\lambda)$, and the convolution kernel $g(\lambda - \lambda_1)$, the measured spectrum $I(\lambda)$ is defined by the convolution:
$$I(\lambda) = f * g = \int f(\lambda_1) g(\lambda - \lambda_1) d\lambda_1$$
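A discretized sketch of this convolution, with an assumed narrow spectral line and an assumed Gaussian kernel (neither taken from any real instrument): the line is smeared out, but its integrated energy is preserved.

```python
import numpy as np

# Wavelength grid and an assumed narrow 'true' spectral line f(lambda)
lam = np.linspace(-5.0, 5.0, 1001)
dlam = lam[1] - lam[0]
f = np.where(np.abs(lam) < 0.05, 1.0, 0.0)

# Assumed Gaussian instrument kernel g, normalised so its integral is 1
g = np.exp(-0.5 * (lam / 0.5)**2)
g /= g.sum() * dlam

# Measured spectrum I = f * g: smeared line, same integrated energy
I = np.convolve(f, g, mode='same') * dlam
print(f.sum() * dlam, I.sum() * dlam)
```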
Wiener filter

Stochastic filtering provides a collection of techniques for constructing estimates of the true values of a quantity or field from sparse and noisy measurement data. The Wiener filter is one such technique. It provides a transformation $\mathbf{F}$ that maps a measured dataset $\mathbf{d}$ into an estimate of the true field:
$$\mathbf{s}^{WF} = \mathbf{F} \mathbf{d}$$ The discrepancy between the true field and the estimated field is called the residual:
$$\mathbf{r}  = \mathbf{s} - \mathbf{s}^{WF}$$ The residual possesses its own covariance matrix $\langle \mathbf{r}\mathbf{r}^T \rangle$. The Wiener filter is defined so that it minimizes the covariance of the residual.

The Wiener filter possesses another property which makes it the natural choice of filter for cosmography: Given the prior distribution of the true field $\mathbf{s}$; given the relationship between the true field and the measured values; and given the measured values, a posterior Bayesian distribution $p(\mathbf{s}|\mathbf{d})$ can be obtained over the true field. In the case where the true field is a Gaussian field, and the noise is also Gaussian, the Wiener filter picks out the mean value of the posterior distribution.

The Wiener filter is given by the expression:
$$\mathbf{F} = \mathbf{SR}^T (\mathbf{RSR}^T + \mathbf{N})^{-1}$$
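A minimal numpy sketch of this expression, assuming for simplicity an identity response matrix $\mathbf{R}$, an invented squared-exponential prior covariance, and white noise; none of the numbers correspond to any real survey:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
x = np.arange(n)

# Assumed prior covariance S (squared-exponential correlation), identity
# instrument response R, and white measurement noise of standard deviation 0.5
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0)**2) + 1e-9 * np.eye(n)
R = np.eye(n)
N = 0.5**2 * np.eye(n)

# The Wiener filter F = S R^T (R S R^T + N)^{-1}
F = S @ R.T @ np.linalg.inv(R @ S @ R.T + N)

# Apply it to noisy data d = R s + eps, drawn from a hidden truth s
s = rng.multivariate_normal(np.zeros(n), S)
d = R @ s + rng.normal(0.0, 0.5, size=n)
s_wf = F @ d

# Compare the filtered and unfiltered errors against the hidden truth
print(np.mean((s_wf - s)**2), np.mean((d - s)**2))
```

The filtered estimate tracks the smooth underlying field, suppressing the noise at the cost of some smoothing of the truth.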
Constrained realizations

The method of constrained realizations goes back to a paper by Yehuda Hoffman and Erez Ribak in 1991 ('Constrained realizations of Gaussian fields - A simple algorithm', Ap. J. Lett., vol 380, pL5-L8). It's based upon a simple but crucial point: the application of the Wiener filter to a measured dataset will produce an estimated field which falls short of the true field by the residual: $$\mathbf{s}^{WF} = \mathbf{s} - \mathbf{r}$$
Hence, the estimated field will be smoother than the true field. The idea proposed by Hoffman and Ribak is simple: to generate a realistic realization of the true field, you need to add a sampled realization from the residual field.

Their method works as follows:

(i) Generate a random realization $\tilde{\mathbf{s}}$ of the true field. (The tilde here indicates a realization).

(ii) Generate a realization of the measurement noise, and apply the measurement transformation to $\tilde{\mathbf{s}}$ to yield a random dataset realization: $$\mathbf{\tilde{d}} = \mathbf{R\tilde{s}}  + \mathbf{\tilde{\epsilon}}$$
(iii) Apply the Wiener filter to $\mathbf{\tilde{d}}$ to create an estimate of the true field realization: $\mathbf{F} \mathbf{\tilde{d}} $.

(iv) Generate a realization of the residual:
$$\mathbf{\tilde{r}} = \tilde{\mathbf{s}} - \mathbf{F} \mathbf{\tilde{d}} $$
(v) Add the realization of the residual to the estimated field which has been obtained by applying the Wiener filter to the actual measured dataset: 
$$\eqalign{\mathbf{s}^{CR} &=  \mathbf{\tilde{r}} + \mathbf{Fd} \cr &=\tilde{\mathbf{s}}+ \mathbf{F} (\mathbf{d-\tilde{d}})}$$
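Steps (i)-(v) can be sketched in numpy as follows, using the same kind of invented ingredients as before (assumed squared-exponential prior covariance, identity response, white noise); the final assertion checks the closed-form identity above:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
x = np.arange(n)

# Assumed ingredients: prior covariance S, identity response R, white noise N
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0)**2) + 1e-9 * np.eye(n)
R = np.eye(n)
sigma = 0.5
N = sigma**2 * np.eye(n)
F = S @ R.T @ np.linalg.inv(R @ S @ R.T + N)    # Wiener filter

# The actual measured dataset d (synthesised here from a hidden truth)
s_true = rng.multivariate_normal(np.zeros(n), S)
d = R @ s_true + rng.normal(0.0, sigma, size=n)

s_tilde = rng.multivariate_normal(np.zeros(n), S)       # (i) field realization
d_tilde = R @ s_tilde + rng.normal(0.0, sigma, size=n)  # (ii) mock dataset
r_tilde = s_tilde - F @ d_tilde                         # (iii)+(iv) residual realization
s_cr = r_tilde + F @ d                                  # (v) constrained realization

# Equivalent closed form: s_cr = s_tilde + F (d - d_tilde)
assert np.allclose(s_cr, s_tilde + F @ (d - d_tilde))
```

The constrained realization agrees with the Wiener estimate where the data constrain it, but carries realistic small-scale structure where they do not.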
Reconstruction of continuous fields from finite data

Here the Gaussian field $\mathbf{s}$ is assumed to be a function $f(\mathbf{r})$, so that the covariance matrix becomes the auto-correlation function $\xi$:

$$\mathbf{S} = \langle f(\mathbf{r}_i) f(\mathbf{r}_j)\rangle =  \xi(\mathbf{r}_i,\mathbf{r}_j)$$ Assume that measurements are made of this field at a finite number of points $\{\mathbf{r}_i\}$, yielding the values $\{F_i\}$. If there is no convolution operation, so that $\mathbf{R} = \mathbf{I}$, then in this case the combined application of the Wiener filter and constrained realization yields the following expression:
$$f(\mathbf{r})^{CR} = \tilde{f}(\mathbf{r})+ \xi(\mathbf{r}_i,\mathbf{r}) ( \xi(\mathbf{r}_i,\mathbf{r}_j) + \mathbf{N}_{ij} )^{-1}(F_j-\tilde{f}(\mathbf{r}_j))$$ Repeated indices are summed over here.


The Cosmicflows research programme applied these techniques to a finite collection of sparse and noisy galaxy recession velocities to reconstruct the full 3-dimensional peculiar velocity field in our local cosmic neighbourhood. (The latter is defined to be a length-scale of the order 100 Mpc). 'Peculiar' velocities in this context are those which remain after the recession velocity due to cosmic expansion has been subtracted.

Our cosmic neighbourhood, within a cube 200 Mpc on each side. The Milky Way is located at the origin of coordinates. There are three isodensity colour contours. The velocity flows are depicted as black lines.
Assuming that cosmic structure formation has been seeded by perturbations sampled from a Gaussian random field, and assuming that the expansion of the universe followed the Lambda-Cold Dark Matter ($\Lambda$CDM) model, the combination of the Wiener filter and Constrained Realization was applied to a sample of galaxy recession velocities along the line-of-sight. The density field in our neighbourhood was then inferred from the reconstructed velocity field.

Some of the most striking images produced by the Cosmicflows team are those which depict the streamlines of the peculiar velocity field in our cosmic neighbourhood. These reveal regions in which the streamlines converge, due to concentrations in the density of matter such as the Great Attractor, and regions in which the streamlines diverge from cosmic voids. In particular, the Cosmicflows team have attempted to identify the surface of divergent points that surrounds us. They define the volume within that watershed as our local galaxy Supercluster, which they name 'Laniakea'.

The Cosmicflows programme assumes only linear deviations from homogeneity and isotropy. This assumption preserves the Gaussian nature of the peculiar velocity field and the density perturbation field. But it does have the consequence that the velocity field is irrotational, i.e., it possesses zero vorticity. Hence, the images of cosmic streamlines contain watersheds, but no whirlpools. Whilst the linear theory should prevail on large scales, non-linearity should dominate on smaller scales. The Cosmicflows team claim that the non-linearity should only dominate on the Mpc length-scale, and that their approach is therefore valid.

A slice through the equatorial plane in the supergalactic coordinate system. The boundary of the Laniakea supercluster is depicted as an orange line. The diagram is shaded with density contours, deep blue regions indicating voids, and red regions indicating zones of high density. The velocity streams within the Laniakea basin of attraction are represented in white.

Sunday, December 03, 2017

Neural networks and the neutron transport equation

The Monte Carlo simulation technique was conceived at Los Alamos in the late 1940s by Stanislaw Ulam and John von Neumann with a specific application in mind: neutron transport and nuclear fission chain-reactions. Perhaps, however, other simulation techniques are now available. I'd like to propose one such alternative below, but first we need to understand what the neutron transport problem is.

The neutron transport equation is a special case of the Boltzmann transport equation, the primary equation of non-equilibrium statistical mechanics. In the general case of the Boltzmann equation, the quantity of interest is a distribution function on phase-space, which specifies the expected number of particles per unit of phase-space volume. (At low particle-number densities, one might refer to the probability of particle occupancy per unit of phase-space volume).

In the case of the neutron transport equation, the quantity of interest can be taken to be $n(\mathbf{r},\mathbf{v},t)$, the expected number of neutrons per unit of physical space about $\mathbf{r}$, per unit of velocity space about $\mathbf{v}$, at time $t$. (Whilst, strictly speaking, phase space deals with position and momentum, it is often interchangeable with position and velocity).

The Boltzmann equation is of some philosophical interest because it can be used to describe the approach of a system towards equilibrium. Certainly, a population of high-energy neutrons in a region of space occupied by non-fissile atomic nuclei at room temperature, would transfer energy to the atomic nuclei by means of elastic and inelastic scattering interactions, and approach a state of equilibrium. The neutrons would 'thermalise', their energy spectrum approaching that of a Maxwell-Boltzmann distribution, with a temperature equal to that of the background nuclei.

However, the neutron transport equation can be deployed to describe the evolution of the neutron population in a nuclear reactor, and such a system remains, for a reasonable period of time, and with the appropriate control systems, in a stable state far-from-equilibrium.

A fissile chain reaction occurs when neutrons are absorbed by heavy atomic nuclei, which fission into smaller, more stable nuclei, and emit high-energy neutrons in the process. By means of this fissile chain reaction, the neutron temperature, flux and number density can be maintained at a constant level, despite intensive interactions with a background population of atomic nuclei at a much lower temperature. 

There are two populations of particles here, which remain at different temperatures despite being in contact with each other. The system therefore remains out of thermal equilibrium. Nevertheless, the entropy of the system is still increasing because the fission reactions release the free energy of the heavy fissile isotopes, and transform it into heat. A nuclear reactor is, of course, far from being a closed system, and stability far-from-equilibrium is only possible in this case if the heat generated by the fissile reactions is transported away, typically by conduction into a liquid or gas, which convectively transports the heat to a location outside the reactor vessel.

Before we define the neutron transport equation, let's begin with some definitions. Let's start with the neutron flux, $\phi(\mathbf{r},\mathbf{v},t) =  v \;n(\mathbf{r},\mathbf{v},t)$. For each neutron energy level $E = 1/2 m_n v^2$, (where $m_n$ is the rest-mass of a neutron), this is the number of neutrons at a point $\mathbf{r}$ passing through a unit surface area, per unit time. For each energy level $E$, it is equivalent to the total path length travelled by the energy-$E$ neutrons per unit volume at $\mathbf{r}$, in a unit of time.

To calculate the rate at which neutron reactions take place, the flux is multiplied by a 'macroscopic cross-section' $\Sigma_j(\mathbf{r},\mathbf{v},t)$. The macroscopic cross-sections are the products of the microscopic cross-sections with the number densities of the target nuclei: $\Sigma_j(\mathbf{r},\mathbf{v},t) = \sum_i \sigma_{ij}(\mathbf{r},\mathbf{v},t) N_i(\mathbf{r},t)$. 

The microscopic cross-sections $\sigma_{ij}(\mathbf{r},\mathbf{v},t)$ express the probability of a reaction of type $j$ between a neutron and a target nucleus of species $i$. There are microscopic cross-sections for neutron absorption, fission, elastic scattering and inelastic scattering. The microscopic cross-sections have units of area per atom, and depend upon the energy (velocity) of the incoming neutron, as well as the temperature of the target nuclei.

The macroscopic cross-sections have units of inverse distance. As such, they define the probability of a reaction per unit of neutron path-length. When multiplied with a neutron flux (equivalent to the total path-length travelled by the neutrons per unit time per unit volume), this yields the reaction rate per unit volume.

One other definition to note is the distinction between 'prompt' and 'delayed' neutrons. The prompt neutrons are released in the fission event itself, whilst the delayed neutrons are emitted after the beta decay of certain fission product fragments, and typically occur some time after the fission event. 

With those definitions in place, let's proceed to the neutron transport equation itself. (The equations which follow are adapted from Mathematical Methods in Nuclear Reactor Dynamics, Z.Akcasu, G.S.Lellouche, and L.M.Shotkin, Academic Press, 1971). We will assume for simplicity that there is a single type of fissile isotope. 

The time evolution of the neutron distribution function is governed by the following equation:

$$\eqalign{\partial n(\mathbf{r},&\mathbf{v},t)/\partial t = - \mathbf{v} \cdot \nabla n(\mathbf{r},\mathbf{v},t) -  \Sigma(\mathbf{r},\mathbf{v},t) \;v \;n(\mathbf{r},\mathbf{v},t) \cr &+ f_0(\mathbf{v}) (1-\beta)\int \nu(\mathbf{r},\mathbf{v}',t) \Sigma_f(\mathbf{r},\mathbf{v}',t) \;v' n(\mathbf{r},\mathbf{v}',t) d^3 \mathbf{v}' \cr &+ \sum_{i=1}^{6} f_i(\mathbf{v}) \lambda_i C_i(\mathbf{r},t)\cr &+ \int \Sigma_s(\mathbf{r},\mathbf{v}' \rightarrow \mathbf{v},t) \;v' n(\mathbf{r},\mathbf{v}',t)d^3 \mathbf{v}'}$$ Let's consider the various terms and factors in this equation one-by-one.

$\mathbf{v} \cdot \nabla n(\mathbf{r},\mathbf{v},t)$ is the loss of neutrons of velocity $\mathbf{v}$ from the volume about a point $\mathbf{r}$ due to the flow of neutrons down a spatial concentration gradient $\nabla n(\mathbf{r},\mathbf{v},t)$. The loss is only non-zero if the concentration gradient has a non-zero component in the direction of the velocity vector $\mathbf{v}$.

$\Sigma(\mathbf{r},\mathbf{v},t) \;v \;n(\mathbf{r},\mathbf{v},t)$ is the loss of neutrons of velocity $\mathbf{v}$ from the volume about a point $\mathbf{r}$ due to any reaction with the atomic nuclei in that volume. $\Sigma$ is the sum of the macroscopic cross-sections for neutron capture $\Sigma_c$, fission $\Sigma_f$, and scattering $\Sigma_s$.

$\Sigma_f(\mathbf{r},\mathbf{v}',t) \;v' n(\mathbf{r},\mathbf{v}',t)$ is the rate at which fission events are triggered by incoming neutrons of velocity $\mathbf{v}'$. 

$\nu(\mathbf{r},\mathbf{v}',t)$ is the mean number of neutrons output from a fission event triggered by an incoming neutron of velocity $\mathbf{v}'$. 

$\beta$ is the fraction of fission neutrons which are delayed, hence $1-\beta$ is the fraction of fission neutrons which are prompt.

$f_0(v)$ is the probability density function for prompt fission neutron speeds, i.e., it specifies the probability that a prompt fission neutron will have a speed $v$ (and a kinetic energy $E = 1/2 m_n v^2$). It is equivalent to the energy spectrum of prompt fission neutrons.

Assuming the outgoing prompt fission neutrons are emitted isotropically, the probability of a prompt fission neutron having a velocity $\mathbf{v}$ is $f_0(\mathbf{v}) = f_0(v)/4 \pi$.

Hence $f_0(\mathbf{v})(1-\beta) \int \nu(\mathbf{r},\mathbf{v}',t) \Sigma_f(\mathbf{r},\mathbf{v}',t) \;v' n(\mathbf{r},\mathbf{v}',t) d^3 \mathbf{v}'$ is the rate at which prompt neutrons of velocity $\mathbf{v}$ are created at position $\mathbf{r}$ in the reactor, by incoming neutrons of any velocity $\mathbf{v}'$.

$C_i(\mathbf{r},t)$ is the concentration of species $i$ delayed neutron precursors, and $\lambda_i$ is the decay constant of that species, hence $\lambda_i C_i(\mathbf{r},t)$ is the decay-rate of the $i$-th delayed neutron precursor. This is equivalent to the production-rate of delayed neutrons from the $i$-th precursor. 

$f_i(\mathbf{v})$ is the probability density function over velocity for delayed neutrons produced by the $i$-th precursor species, hence $\sum_{i=1}^{6} f_i(\mathbf{v}) \lambda_i C_i(\mathbf{r},t)$ specifies the rate at which delayed neutrons of velocity $\mathbf{v}$ are created at position $\mathbf{r}$ in the reactor.

$\Sigma_s(\mathbf{r},\mathbf{v}' \rightarrow \mathbf{v},t)$ is the macroscopic cross-section for elastic or inelastic scattering events in which an incoming neutron of velocity $\mathbf{v}'$ transitions to an outgoing neutron of velocity $\mathbf{v}$ by colliding with a target nucleus. Hence $\int \Sigma_s(\mathbf{r},\mathbf{v}' \rightarrow \mathbf{v},t) \;v' n(\mathbf{r},\mathbf{v}',t) d^3 \mathbf{v}'$ specifies the rate at which neutrons of velocity $\mathbf{v}$ are created at position $\mathbf{r}$ in the reactor by incoming neutrons of any velocity $\mathbf{v}'$ scattering with target nuclei.

The concentration of each delayed neutron precursor satisfies the following equation:
$$\eqalign{\partial C_i(\mathbf{r},t)/\partial t &= \beta_i \int \nu(\mathbf{r},\mathbf{v}',t) \Sigma_f(\mathbf{r},\mathbf{v}',t) \;v' n(\mathbf{r},\mathbf{v}',t) d^3 \mathbf{v}' \cr &- \lambda_i C_i(\mathbf{r},t)} $$ $\beta_i$ is the fraction of fission neutrons produced by the delayed neutron precursor of species $i$. Hence, $\beta = \sum_i \beta_i$.

The rate at which delayed neutron precursors of species $i$ are produced by fission events is given by $\beta_i \int \nu(\mathbf{r},\mathbf{v}',t) \Sigma_f(\mathbf{r},\mathbf{v}',t) \;v' n(\mathbf{r},\mathbf{v}',t) d^3 \mathbf{v}'$, and the rate at which they decay is given by $\lambda_i C_i(\mathbf{r},t)$.  

Note that, unlike the general Boltzmann equation, there is no term for neutron-neutron interactions. Neutron densities in a reactor are only of the order of $10^9/cm^3$, compared to the number density of atomic nuclei, which is of the order $10^{22}-10^{23}/cm^3$. Hence, neutron-neutron interactions can be neglected.

Now, the interesting point about the neutron transport equation is that the most important terms have the form of integral transforms:
$$(Tf)(x) = \int K(x,y)f(y) dy \, $$ where $K(x,y)$ is the 'kernel' of the transform. In discrete form, this becomes:
$$ (Tf)(x_i) = \sum_{j} K(x_i,y_j)f(y_j)  \,.$$
The neutron transport equation takes neutron fluxes as the input, and calculates reaction rates, and thence heat production, by integrating those fluxes with macroscopic cross-sections. The macroscopic cross-sections provide the kernels of the integral transforms. In general schematic terms:
$$\text{Output}(\mathbf{r},t) = g\left(\int \Sigma(\mathbf{r},\mathbf{v'},t) \;\text{Input}(\mathbf{r},\mathbf{v}',t)d^3 \mathbf{v}' \right)\, ,$$ where $g$ is some function.
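As a numerical sketch of such a transform, reduced to a 1-D integral over neutron speed for a single spatial cell: the cross-section and flux spectrum below are assumed purely for illustration (a $1/v$ macroscopic cross-section and an exponential density spectrum), not real nuclear data.

```python
import numpy as np

# Assumed 1-D speed grid and neutron density spectrum n(v) for one spatial cell
v = np.linspace(2.2e3, 2.2e5, 200)          # neutron speeds (m/s)
dv = v[1] - v[0]
n_v = 1.0e12 * np.exp(-v / 5.0e4)           # assumed density spectrum (a.u.)

# Assumed 1/v macroscopic cross-section, so that Sigma(v) * v is constant
Sigma = 5.0e2 / v                           # units of 1/m

# Discrete form of Output = integral of Sigma(v') v' n(v') dv': a reaction rate
reaction_rate = np.sum(Sigma * v * n_v) * dv
print(reaction_rate)
```

With a $1/v$ cross-section the integrand $\Sigma(v)\,v\,n(v)$ collapses to a constant times $n(v)$, which is why thermal reaction rates are often insensitive to the details of the thermal spectrum.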

For example, given the heat produced per fission event, $W_f$, the rate of fission-product heat-production throughout the reactor is given by: $$H_f(\mathbf{r},t) = W_f \int \Sigma_f(\mathbf{r},\mathbf{v}',t) \;v' n(\mathbf{r},\mathbf{v}',t) d^3 \mathbf{v}'\, .$$

Neural networks which implement convolutions have become immensely powerful tools for pattern recognition in recent years. Convolutions are a particular type of integral transform, so let's briefly recall how such neural networks are defined.

On an abstract level, a neural network consists of a set of nodes, and a set of connections between the nodes. The nodes possess activation levels; the connections between nodes possess weights; and the nodes have numerical rules for calculating their next activation level from a combination of the previous activation level, and the weighted inputs from other nodes.

The nodes are generally divided into three classes: input nodes, hidden/intermediate nodes, and output nodes. There is a directionality to a neural network in the sense that patterns of activation propagate through it from the input nodes to the output nodes, and in a feedforward network there is a partial ordering relationship defined on the nodes, which prevents downstream nodes from signalling those upstream.

For the implementation of integral transforms, feedforward neural networks with strictly defined layers are used. The activation levels of the input layer of nodes represent the values of the input function at discrete positions; the weights represent the values of the discretized kernel; and the values of the nodes in the next layer represent the discretized version of the transformed function.

Thus, the activation levels $x^l_i$ of the neurons in layer $l$ are given by weighted sums over the activation levels of the neurons in layer $l-1$:
$$x^l_i =  f \left(\sum_{j=1}^{n}W^l_{ij}x^{l-1}_j \right), \,  \text{for } \, i = 1,\ldots,n$$ $W^l$ is the matrix of weights connecting layer $l-1$ to layer $l$, with $W^l_{ij}$ representing the strength of the connection from the $j$-th neuron in layer $l-1$ to the $i$-th neuron in layer $l$. $f$ is a non-linear threshold function.
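To make the correspondence explicit, here is a small numpy sketch in which the kernel and input function are arbitrary assumptions: with the identity as the threshold function $f$, a single layer reproduces the discrete integral transform exactly.

```python
import numpy as np

# Grid over which the input 'function' is sampled
n = 100
y = np.linspace(0.0, 1.0, n)
dy = y[1] - y[0]

# Assumed kernel K(x, y); the weight matrix is the discretized kernel times dy
K = np.exp(-np.abs(y[:, None] - y[None, :]) / 0.1)
W = K * dy

def layer(x_prev, W, f=np.tanh):
    """One fully-connected layer: threshold function applied to weighted sums."""
    return f(W @ x_prev)

x0 = np.sin(2.0 * np.pi * y)            # input activations: sampled function
x1 = layer(x0, W, f=lambda z: z)        # identity f: a pure integral transform

# Check against the Riemann sum (Tf)(x_i) = sum_j K(x_i, y_j) f(y_j) dy
i = 50
assert np.isclose(x1[i], np.sum(K[i] * x0) * dy)
```

Training such a network adjusts $W$, which in this reading amounts to adjusting the discretized kernel itself.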

Now, the empirical cross-section data required for nuclear reactor kinetics is far from complete. The cross-sections are functions of neutron energy, and can oscillate wildly in some energy ranges, (see diagram below). The cross-section curves are therefore not defined to arbitrary levels of resolution.

Perhaps, however, neural networks can offer a solution to this problem. If the input layer of a neural network is used to represent the neutron flux at various positions throughout a reactor, and the output layer is used to represent, say, the temperature levels measured throughout the reactor, such a network could be trained to predict the correct temperature distribution for any given pattern of neutron flux. It would do so by adjusting its weights, and because the weights would represent the nuclear reaction cross-sections, this would provide a means of filling in the gaps in the nuclear physics datasets.

Questions one might pose immediately include the following: (i) are the neutron flux measurements in a reactor sufficiently accurate to facilitate this technique; and (ii) how many neurons would one need in a layer of the neural network to provide the necessary degree of spectral resolution?

These are questions I don't yet know the answer to...

Wednesday, November 01, 2017

The problem with Fred Pearce

Environmental journalist Fred Pearce published an article on William Penney (‘Atomic Briton who brought home the bomb’) in NewScientist magazine on 14th October 2017, (p42-43). The article concludes with some brazen distortion of the facts.

Pearce claims that the Orange Herald device, a large fission bomb detonated as part of the Grapple operation in May 1957, misled “US legislators in Congress…Congress amended the McMahon Act, believing they would be sharing science with a fellow H-bomb nation.” Pearce refers to this as "Penney's nuclear bluff."

The definitive reference work on the history of British H-bomb development is Lorna Arnold’s ‘Britain and the H-bomb’ (2001). Here she reports that “weapon debris, radioflash data and microbarograph readings…showed that [Grapple] had only been partially successful. A ‘thermonuclear bluff’ had never been seriously contemplated; the Americans had regularly been assisted to take measurements and collect data at several British trials, including Grapple,” (p151).

The British went on to conduct a successful 3 megaton H-bomb test, Grapple Y, in April of 1958. Strangely, this fact is absent from Pearce’s article. The amendment of the McMahon Act was passed by Congress some months later, on June 30th 1958, and the US-UK Mutual Defence Agreement was signed on 3rd July 1958.

These dates are also omitted from Pearce's article. It's easy to see why, because their inclusion destroys Pearce's argument.

Sadly, then, the only bluff here comes from Mr Pearce. If NewScientist magazine wishes to mislead its readers, publishing Mr Pearce's work is certainly the most effective way of so doing.

Saturday, October 21, 2017

Diffusers and rear-wheel wakes

There is a persistent notion amongst some Formula One technical analysts that the low pressure wake behind the rear wheels can be connected to the lateral extremities of the diffuser airflow, thereby enhancing the flow capacity of the underbody, and its downforce-generating potential. In particular, the notion has been repeatedly promoted by Autosport Technical Consultant Gary Anderson:

"Mercedes has worked very hard in making the low pressure area behind the rear tyres connect up to the trailing edge of the diffuser. In effect this gives the diffuser more extraction capacity," (The key technical developments from Australia, 25th March 2017).

Now, it's certainly true that if the diffuser is expanded in a lateral direction without causing separation of the boundary layer, then the expansion ratio of the diffuser will be increased, and it'll generate more downforce. It's also true that the wakes shed by the wheels are areas of low pressure, situated as they are behind rotating bluff bodies. So surely, one might think, there will be a pressure gradient directed towards those rear-wheel wakes, and surely the airflow exiting the diffuser can be connected to them, thereby increasing its effective expansion ratio?

Unfortunately, whilst the wakes behind bluff bodies do indeed tend to be regions of low pressure, they are also regions of high turbulence, and the airflow 'sees' a region of turbulence as an obstruction. Directing the lateral extremities of diffuser airflow towards the rear wheel wakes does not therefore offer a straightforward boost in the power of the diffuser, and could even promote diffuser separation.

One illuminating way to understand this is to look at the Reynolds-averaged Navier-Stokes (RANS) equations for a flow-field containing turbulence. A solution of these equations represents the mean velocity flow field $\overline{u}$ and the mean pressure field $\overline{p}$ in a region of space. For a time-independent incompressible flow, each component $\overline{u}_i$ of the mean velocity vector field is required to satisfy the equation
$$\rho (\mathbf{\overline{u}} \cdot \nabla) \overline{u}_i  = - \frac{\partial \overline{p}}{\partial x_i} +\frac{\partial \overline{\tau}_{ij}}{\partial x_j} - \rho \frac{\partial \overline{u'_i u'_j}}{\partial x_j} \;.$$ This equation is simply a version of Newton's second law, $F=ma$, albeit with the accelerative term on the left-hand side, and the force terms on the right-hand side.

In the case of a continuous medium, the density $\rho$ is substituted in place of the mass, and $(\mathbf{\overline{u}} \cdot \nabla) \overline{u}_i$ represents the acceleration experienced by parcels of air as the velocity field changes from one spatial position to another. 

Each term on the right-hand side of the equation represents a different type of force. The first term $- \partial \overline{p}/\partial x_i$ is the familiar pressure gradient. The negative sign indicates that the force points in the opposite direction to the gradient: the fluid will be pushed away from high pressure, and sucked toward low pressure.

Pressure, however, is only the isotropic component of stress. When the isotropic component has been subtracted from the total stress, what remains is called the 'deviatoric' stress $\tau_{ij}$. This represents the stresses which occur due to viscosity $\nu$. These are the forces which occur within a continuous medium when there are shear motions. In the case of a Newtonian fluid such as air, the deviatoric stress is a function of the viscosity and the velocity shear:
$$ \tau_{ij} = \rho \nu \bigg[ \frac{\partial u_i}{\partial x_j}+ \frac{\partial u_j}{\partial x_i} \bigg] $$In general, forces are generated by spatial gradients of the stress, and the second term on the right-hand side of the RANS equation represents the force due to the spatial gradient in the mean deviatoric stress. These 'tangential' forces are crucial inside the boundary layer of a fluid, but more generally they play a role wherever one layer of fluid runs parallel to another layer travelling at a different speed. Here, the viscosity entails that momentum is transferred from the higher velocity layer to the lower velocity layer, helping to pull it along. This is a source of acceleration in the flow-field which cannot be explained by pressure gradients alone.

The third term on the right-hand side of the RANS equation represents the effective force due to spatial gradients in the turbulence. In a turbulent flow-field, the velocity at a point is decomposed into a sum $u_i = \overline{u}_i + u'_i$ of the mean-flow $\overline{u}_i $ and the turbulent fluctuations, $u'_i$. The expression $\overline{u'_i u'_j}$ represents a type of turbulent stress, hence its spatial gradient provides another source of acceleration in the mean flow-field.
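The decomposition can be illustrated numerically. In the sketch below (synthetic velocity samples with made-up means and fluctuation amplitudes, purely for illustration), the time-average splits each velocity component into a mean and a fluctuation, and averaging the products of the fluctuations yields components of the turbulent stress:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic velocity time series at one point: mean flow plus fluctuations.
n_samples = 100_000
u = 10.0 + rng.normal(0.0, 1.5, n_samples)  # streamwise component u_1
v = 0.0 + rng.normal(0.0, 1.0, n_samples)   # lateral component u_2

# Reynolds decomposition: u_i = mean(u_i) + u'_i
u_mean, v_mean = u.mean(), v.mean()
u_prime, v_prime = u - u_mean, v - v_mean

# Components of the turbulent stress rho * <u'_i u'_j> (rho = 1.2 kg/m^3)
rho = 1.2
tau_uu = rho * np.mean(u_prime * u_prime)
tau_uv = rho * np.mean(u_prime * v_prime)
```

In a real turbulent flow the fluctuations are correlated, so the off-diagonal component `tau_uv` would generally be non-zero; it is the spatial gradient of these quantities which enters the RANS equation as an effective force.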

This third term is crucial to understand why the rear wheel wakes behave like obstructions in the flow-field. Note the negative sign associated with the turbulent-stress term. That entails that the force vector points away from a region of turbulence. Airflow exiting a region of low turbulent intensity will effectively experience a repulsion force as it approaches a region of high turbulence.  

Hence, trying to join the diffuser-flow to the rear wheel wake is not necessarily a good idea. A better idea is to create vortices from the edges of the diffuser which push the rear wheel wake further outboard. This might enable one to increase the expansion ratio of the diffuser without provoking separation.

Sunday, October 08, 2017

Why nuclear disarmament is wrong

The 2017 Nobel Peace Prize was awarded this week to the International Campaign to Abolish Nuclear Weapons (ICAN), a coalition of 468 non-governmental organisations across 101 countries. Berit Reiss-Andersen, the chair of the Nobel committee, stated that the award recognised ICAN's work “to draw attention to the catastrophic humanitarian consequences of any use of nuclear weapons and for its groundbreaking efforts to achieve a treaty-based prohibition of such weapons”. According to the BBC, ICAN's supporters “include actor Michael Sheen.”

Now, whilst one contradicts a B-list actor at one's peril, it is nevertheless a good juncture to review exactly why organisations such as ICAN are wrong, and why nuclear disarmament would be a bad thing. Let's begin with those “catastrophic humanitarian consequences of any use of nuclear weapons”, by returning to 1945 and the use of nuclear weapons to end the Second World War.

The image of the mushroom cloud, and the destruction inflicted on Hiroshima and Nagasaki dominates modern media coverage of these events. Rarely, however, does the media also recall the incendiary bombing campaign conducted by the Americans prior to the use of nuclear weapons.

Between March and June of 1945, Japan's six largest industrial centres, Tokyo, Nagoya, Kobe, Osaka, Yokohama and Kawasaki, were devastated. As military historian John Keegan wrote, “Japan's flimsy wood-and-paper cities burned far more easily than European stone and brick. By mid-June...260,000 people had been killed, 2 million buildings destroyed and between 9 and 13 million people made homeless...By July 60 per cent of the ground area of the country's sixty larger cities and towns had been burnt out,” (The Second World War, 1989, p481).

Unfortunately, this mass bombing campaign, conducted with conventional chemical munitions, and inflicted upon civilians and military alike, did not stop the war. Only the bombing of Hiroshima and Nagasaki stopped the war.

In terms of the number of deaths, “reported numbers vary, but it has been estimated that by the end of 1945, 90 000 to 120 000 out of a civilian population of about 330 000 in Hiroshima, and 60 000 to 80 000 out of 280 000 in Nagasaki, would be dead as a result of exposure to the intense heat, physical force, and ionizing radiations emitted by the bombs,” (Long-term Radiation-Related Health Effects in a Unique Human Population: Lessons Learned from the Atomic Bomb Survivors of Hiroshima and Nagasaki).

So, the first conclusion to draw from this is that conventional munitions killed more people and didn't stop the war, while nuclear weapons killed fewer people and did stop the war. In terms of “humanitarian consequences”, being burnt alive by incendiary weapons rather than killed by the blast wave, thermal radiation or ionising radiation of a nuclear detonation seems scant consolation.

In the decades since the Second World War, the presence of nuclear weapon stockpiles has been justified on the basis of deterrence: as long as the use of nuclear weapons by one side will result in a retaliatory strike that guarantees its own destruction, a nuclear war is unwinnable, hence there is no incentive to use nuclear weapons.

Despite the logic of deterrence, many continue to argue that nuclear weapons should now be abolished by means of multi-lateral disarmament. A recent article in NewScientist by Debora Mackenzie argued that deterrence is unstable:

“The growth in US missile defence systems...undermine deterrence by, in theory, allowing a country to launch a first attack safe in the knowledge that it can intercept any retaliatory strikes...deterrence is only ever a temporary stand-off, lasting just until the enemy finds a way to neutralise your deterrent. Ultimately, the technological capacity to see, hear and otherwise detect and destroy other countries' weapons could become so good that first strikes will become winnable, and deterrence will no longer work...What else will keep the nuclear peace? Optimists are promoting a UN treaty to ban all nuclear weapons,” (Accidental Armageddon, 23rd September 2017).

Which brings us back to ICAN, who promoted the 'Nuclear Weapons Ban Treaty'. The nine recognised nuclear powers refused to sign this at the United Nations in July. And they were right not to do so, for the following reason:

A world without nuclear weapons is a world in which a nuclear war is winnable. As demonstrated in the 1940s, it requires only one nation to secretly begin the production of nuclear weapons, (breaking whatever treaty it may have signed), to gain a head-start on its enemies, whereupon it will be able to use nuclear weapons without fear of reprisal. A world without nuclear weapons is a world in which there is an incentive to use nuclear weapons. Multi-lateral nuclear disarmament would therefore take us into the most unstable and dangerous state of all.

Once nuclear weapons have been invented, there is no going back to a world without them. It's not a question of optimism or pessimism, it's a question of logic.

Wednesday, September 13, 2017

F1 1980 - Separation and curvature

As noted in the previous post, the airflow in the aft section of a venturi duct has a propensity to separate. Whilst the primary cause of boundary layer separation is the severity of the adverse pressure gradient experienced during pressure recovery, curvature upstream of the pressure recovery region can also exert a significant influence. In this context, a useful rule-of-thumb to remember is that the thicker the boundary layer at the start of the pressure recovery region, the earlier separation will occur. The rate at which the thickness of the boundary layer on a flat surface increases with distance from the leading edge is generally used as a baseline, with respect to which the effects of curvature can be compared.

To understand the influence of curvature, let’s first introduce a distinction between 2-dimensional and 3-dimensional boundary layers. In a 2-dimensional boundary layer, the velocity profile and thickness of the boundary layer vary only in a longitudinal direction, along the direction of streamwise flow. The boundary-layer velocity is a function only of height above the solid surface and longitudinal distance; it is therefore 2-dimensional. In contrast, in a 3-dimensional boundary layer the velocity profile and thickness vary in both a longitudinal and a lateral direction. 

Consider first a 2-dimensional boundary layer on a surface with either convex or concave curvature. Concave curvature increases the rate at which a boundary layer thickens (compared to a flat surface), whilst convex curvature either thins a boundary layer, or reduces the rate at which the thickness would otherwise increase.

One way to understand this is in terms of radial pressure gradients. For a flowfield to negotiate a curve, a pressure gradient develops which is directed towards the centre of the radius of curvature, balancing the centrifugal force associated with the curved flow.
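In symbols: if $u$ is the local flow speed, $r$ the radius of curvature of the streamlines, and $\rho$ the air density, radial equilibrium requires $$\frac{\partial p}{\partial r} = \frac{\rho u^2}{r} \;,$$ so the pressure always increases with distance from the centre of curvature.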

A flowfield bounded by a concave curve is such that the centre of curvature is located inside the fluid itself, hence a pressure gradient develops which points upwards from the solid surface into the fluid, effectively trying to peel the boundary layer off the surface.

In contrast, a flowfield bounded by a convex surface is such that the centre of curvature is located the ‘other side’ of the solid surface, hence a pressure gradient develops which points downwards onto the surface, effectively pushing the boundary layer onto it.

Hence, concave curvature is liable to trigger boundary layer separation, while convex curvature promotes boundary layer adhesion.

So much for the influence of curvature on a 2-dimensional boundary layer. Most actual flowfields tend to possess ‘crossflow’ velocity components in addition to streamwise components. Crossflow components point in a lateral direction. In the context of wings, this is often referred to as ‘spanwise flow’. The representation of separation under these circumstances requires the introduction of the aforementioned 3-dimensional boundary layers.

The crossflow velocity components correspond to the existence of crossflow pressure gradients. These pressure gradients will induce streamline curvature both inside the boundary layer attached to the solid surface, and in the adjacent outer-flow streamlines. The streamline curvature, however, will be greater inside the boundary layer. Hence, the skin-friction lines on the solid surface (otherwise known as the shear stress at the wall), have greater curvature than the streamlines just outside the boundary layer. (Understanding Aerodynamics, Doug McLean, Wiley, 2013, p88).

Inserting a bend or kink into the wall of a venturi tunnel will generate a radial crossflow pressure gradient, pointing towards the centre of the radius of curvature. The outer-flow streamlines will turn the corner due to this radial pressure gradient. The skin-friction lines on the ceiling of the tunnel, however, will turn the corner at a tighter angle.

The curvature of a surface will itself generate streamline curvature, but this effect is distinct from the streamline curvature generated by a crossflow pressure gradient. If an outer-flow streamline is projected onto a curve in the solid surface, the curvature at each point of that curve can be decomposed into a component which is parallel to the tangent plane of the surface at that point, and a component which is perpendicular to the tangent plane. The perpendicular component represents the part of the curvature which is due to the streamline simply following the extrinsic curvature of the surface in 3-dimensional space. In contrast, the parallel component represents the intrinsic curvature of the projected streamline due to a crossflow pressure gradient. If there is no crossflow, then the projected streamlines are geodesics of the surface, with zero intrinsic curvature. (McLean, p306-307).   

A similar but distinct type of curvature effect occurs when a solid is bounded by an axisymmetric surface, whose radius varies in a longitudinal direction. If the lateral extent of a surface tapers in a longitudinal direction, then successive lateral slices through the surface possess an increasingly smaller diameter. For example, in the special case of a cone-shaped surface, oriented with the tip of the cone pointing downstream, successive lateral slices through the surface of the cone have a smaller diameter. A boundary layer attached to such a surface will thicken at a faster rate than it would over a flat surface with the same streamwise pressure gradient, (McLean p124). This occurs as a consequence of the preservation of mass and the relative incompressibility of the air: the boundary layer air is forced to thicken as its lateral dimensions contract. This makes such a boundary layer more liable to detach.

Conversely, consider a surface which flares outwards with longitudinal direction, an extreme case of which would be a cone-shaped surface with its tip pointing upstream. The boundary layer on such a surface will either get thinner as the lateral extent of the surface increases, or its thickness will increase at a slower rate than it would on a flat surface in the same streamwise pressure gradient. Hence, a surface which spreads outwards promotes boundary layer adhesion.

In both cases the outer-flow streamlines are following longitudinal geodesics of the surface, and there is no pressure-driven crossflow, (ibid). A Formula 1 car, however, is rarely equipped with axisymmetric appendages. Rather, it exhibits reflection symmetry in a longitudinal plane, and as a consequence the flow around the nose and engine cover are special cases of ‘plane of symmetry’ flows (ibid., p125-126). In such flows, the boundary layer along the plane of symmetry resembles a 2-dimensional boundary layer, with no crossflow component, but either side of the symmetry plane there are crossflow components which either induce divergence or convergence.

In the case of a Formula 1 car, the flow over the nose will be a divergent plane-of-symmetry flow, and that over the engine cover will tend to be a convergent plane-of-symmetry flow. 

So, equipped with this understanding of the effects of curvature, let’s consider an example of its impact on F1 ground-effect aerodynamics. In 1980, some of the teams created vertical surfaces at the rear of the sidepods to partially seal the venturi tunnels from the effects of the rotating rear wheel. The motive for this may have been twofold: to enhance underbody performance, and also to reduce rear wheel lift and drag. However, these plates, when considered in horizontal cross-section, traced a sinuous curve which started with concave curvature, passed through a point of inflection, and ended with convex curvature. Hence, whilst such plates may have prevented the flow in the venturi tunnels from directly interacting with the rotating wheel, the geometrical restriction imposed by the presence of the wheel was in no way eliminated.

If a venturi tunnel entered a constriction towards the rear of the sidepod, then the reduced cross-sectional area would have a tendency to thicken the boundary layer. Moreover, at just this point, the initial concave curvature on the outer wall of the tunnel would also contribute towards thickening the boundary layer. Exacerbating matters yet further, the turbulent jet from the inner contact patch of the rotating rear wheel would be injected into this region of the underbody. All three factors, in conjunction, would have tended to promote boundary layer separation in this part of the underbody. The only mitigation here is that the cross-sectional constriction would have weakened the adverse pressure gradient.

As a specific example of the challenges in this region of the underbody, the Williams FW07B MKIV underwing, as specified in a design drawing from April 1980, contained a dashed outline of an alternative profile for the sinuous section of the outer wall as it passes inside the rear wheel. The rationale behind this is alluded to in a briefing note written by Patrick Head, dated 1st April 1980, (just in advance of the introduction of the MKIV underwing at the Belgian Grand Prix). Here, he notes that Williams would be “running the wide rear track with new rear plates and engine fairings plus a wheel fairing which will reduce leakage into the rear of the side wing and increase the velocities. A new side wing profile is also to be made with an altered profile in the defuser (sic) section to reduce proneness to separation.”

The alternative profile reduced the concave curvature, but it did so at the expense of beginning the transition further upstream, therefore sacrificing channel width. Hence, there was a trade-off here: concave curvature or convergence; both would have thickened the boundary layer.

Frank Dernie has since testified that “most people’s diffusers stopped at the rear suspension. It was very difficult to keep the flow attached any further back…I am told the Brabham BT49 never had attached flow rearward of the chassis because they never found a solution to keeping the flow attached after the sudden change of section.” (Motorsport Magazine, November 2004, X-ray Spec: Williams FW07, p77).

In fact, the initial underbody profile on the Williams FW07B in 1980 did attempt to extend the diffuser tunnels beyond the leading edge of the rear suspension. These gearbox enclosures and sidepod extensions appeared on the car during practice in Argentina, but serious porpoising problems were experienced, and the sidepods and underbodies were returned to 1979 MKIII specification for the race. The porpoising was attributed to the skirts jamming, hence the extensions were tried again in conjunction with the MKIII sidepods and underwing during practice in South Africa. They were, however, notable by their absence when the MKIV underwing made its debut in Belgium. 

FW07B venturi extensions, as seen at Kyalami. (Grand Prix International magazine)

F1 1980 - Nozzles and streamtubes

Let’s delve a little more deeply into the nature of ground-effect downforce. The underbody of a ground-effect car can be treated as a type of (subsonic) converging-diverging nozzle. Such a nozzle consists of a mouth, a throat, and a diffuser. The mouth consists of a duct with a contracting cross-section, which accelerates air into the narrowest section, the throat. In accordance with the Bernoulli effect, the pressure is at its lowest in the throat, and the airflow velocity is at its highest. The air then flows from the throat into the diffuser, a duct with an expanding cross-section, which decelerates the air, and thereby returns it towards the freestream pressure, a process referred to as ‘pressure recovery’.

To give an illustration of the relative proportions here, the MKIV underbody on the Williams FW07B had a throat about 30 inches (762mm) in length, compared with a mouth only about 10 inches (254mm) long. The diffuser was about 45 inches (1143mm) in longitudinal extent.

Pressure recovery is a delicate process because it creates an ‘adverse pressure gradient’. The pressure increases in the direction of flow, hence there is a force pushing against the flow in the diffuser. Such an adverse pressure gradient tends to promote separation of the boundary layer. When separation occurs, the boundary layer is released into the interior of the fluid, where it breaks up into turbulence. This reduces the effective cross-sectional area and flow capacity of the diffuser, which in turn reduces the low pressure upstream at the throat. Separation also transforms a portion of the mean-flow kinetic energy into turbulent kinetic energy, which eventually dissipates as heat energy. To avoid separation, the diffuser tends to be much longer than the mouth and throat, with a more gradual slope than that between mouth and throat.

At a fixed freestream velocity (determined by the car-speed), the steady-state mass-flow rate through this nozzle is determined by the area of the diffuser outlet (assuming there is no separation), and by the ‘base pressure’* at the diffuser exit. The latter will be lower than the freestream pressure due largely to the low pressure created by the suction surface of the rear-wing, but also due to the low-pressure wake behind the car.

To understand this further, it’s useful to introduce the concept of a ‘streamtube’. This is defined by taking a closed loop in the flowfield, identifying the streamline which passes through each point of the loop, and extruding the loop along those streamlines. This defines the surface of the streamtube. By definition, because the surface of a streamtube is constructed from streamlines, the velocity field is tangent to the surface of the tube, hence no mass can flow through the surface. Moreover, in a steady flow the mass flow-rate is the same through any cross-section of the streamtube.

Now, whilst the underbody of a ground-effect car has a solid mouth, (defined in 1980 by the geometry of the sidepod inlets), the flow upstream of the mouth is not confined by solid walls. Instead, it is defined by the streamtube of the flow which enters each venturi tunnel.

At a fixed car-speed, the greater the exit area of the diffuser, and/or the lower the base pressure created by the rear-wing, the greater the cross-sectional area of the streamtubes feeding the sidepod inlets. The greater the cross-sectional area of the streamtube feeding the mouth of each venturi tunnel, the greater the contraction as the air enters the throat of the tunnel, hence the greater the acceleration of the air and the greater the pressure drop. Therefore, “the degree of expansion of the air in the diffuser rather than the physical dimensions of the mouth determines the effective contraction of air into the throat, hence the maximum airspeed that will be obtained,” (Ian Bamsey, The Anatomy and Development of the Sports Prototype Racing Car, Haynes, 1991, p63).
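The contraction argument can be made concrete with a one-dimensional continuity-plus-Bernoulli sketch. All areas and speeds below are illustrative round numbers, not measured FW07B values:

```python
# Incompressible continuity + Bernoulli for a venturi underbody (1-D sketch).
rho = 1.225       # air density, kg/m^3
v_stream = 50.0   # freestream (car) speed, m/s
A_stream = 0.30   # cross-section of the streamtube feeding the mouth, m^2
A_throat = 0.10   # throat cross-section, m^2

# Continuity: the mass flow rate is the same through every cross-section
# of the streamtube, so the air accelerates as the area contracts.
v_throat = v_stream * A_stream / A_throat

# Bernoulli: static pressure drop in the throat relative to freestream.
dp = 0.5 * rho * (v_throat**2 - v_stream**2)
```

A 3:1 contraction triples the throat velocity, and because the pressure drop scales with the square of the velocity, enlarging the feeding streamtube (via a bigger diffuser exit or a lower base pressure) pays off disproportionately in throat suction.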

A principal concern in the design of the underbody mouth is the avoidance of separation. Depending upon the car-speed and the base-pressure, the stream-tubes entering the venturi tunnels may either expand or contract as they approach the mouth. There will be a stagnation line somewhere around the upper-lip of each mouth: flow below this line will enter the venturi duct, while flow above it will pass over the top of the sidepod. If the stream-tubes expand approaching the mouth of each tunnel, (as they might do at high car speeds), then the stagnation line might lie just inside the upper lip of the tunnel, and the external flow might separate as it accelerates over and around the upper lip. Conversely, if the stream-tubes contract approaching each mouth, the stagnation line might exist just outside the upper lip, and the flow might separate as it accelerates under that lip into the tunnel. The latter condition would inject turbulence into the throat of the underbody tunnel, leading to a significant loss of downforce.

*Note that whilst the ‘base pressure’ is lower than the static pressure of the freestream, it is not the point of lowest pressure, the latter being located in the throat of the venturi. The air doesn't flow towards the rear of the car because of a pressure gradient; it flows to the rear because the car is in motion with respect to the air!

Friday, August 11, 2017

Curved flow and the Arrows A3

After something of a sustained gestation period, the publication of F1 Retro 1980 is imminent, so it's a good opportunity to take a look at one of the more interesting aerodynamic experiments seen that season: the underbody venturi extensions on the Arrows A3 at Brands Hatch. 

This was the latest in a series of attempts to improve upon the original F1 ground-effect concept. In 1979, the Lotus 80 and the Arrows A2 had both attempted to extend the area of the underbody, but both had failed to reap the expected benefits.

The Lotus 80, in its initial configuration, featured skirts under the nose, and separate skirts extending all the way from the leading edge of the sidepods, inside the rear wheels, to the back of the car. The failure of the Lotus 80 is commonly attributed both to an ineffective skirt system, and an insufficiently rigid chassis.  

The Arrows A2 featured an engine and gearbox inclined at 2.5 degrees in an attempt to exploit the full width of the rear underbody. In its original configuration the A2 also dispensed with a conventional rear-wing, replacing it with a flap mounted across the rear-deck. The sidepod skirts were complemented by a parallel pair of skirts running inside the width of the rear wheels to the back of the car. Unfortunately, the higher CoG at the back entailed the car had to be run with a stiff rear anti-roll bar, detracting from the handling, (Tony Southgate - From Drawing Board to Chequered Flag, MRP 2010, p108).

The 1980 Arrows A3 was a more conventional car, with the engine and gearbox returned to a horizontal inclination. However, at Brands Hatch in 1980, Arrows experimented, like the initial Lotus 80, with skirts under the nose. Developed in the Imperial College wind-tunnel, the Arrows version of the idea had skirts suspended from sponsons attached to the lower edges of the monocoque, running back beneath the lower front wishbones to the leading edge of the sidepods. At the same event, the team also tried extending the rear underbody all the way to the trailing edge of the rear suspension, with bulbous fairings either side of the gearbox fairing. This was done with the avowed intention of sealing the underbody from the detrimental effects of rear wheel turbulence.

Sadly, although the nose-skirts were intended to cure understeer, it was reported that they actually exacerbated the understeer.

Now, many aerodynamic difficulties encountered in this era of Formula One were actually just a manifestation of inadequate stiffness in the chassis or suspension. However, for the sake of argument, let's pursue an aerodynamic hypothesis to explain why the nose-skirts on the A3 worsened its understeer characteristic.

The nose skirts on the Lotus 80 and Arrows A3 would have suffered from the fact that a Formula 1 car has to generate its downforce in a state of yaw. Thus, in a cornering condition, a car is subjected to a curved flow-field. This is difficult to replicate in a wind-tunnel, hence a venturi tunnel design which worked well in a straight-ahead wind-tunnel condition could have failed dramatically under curved flow conditions. To understand this better, a short digression on curved flow and yaw angles is in order.

The first point to note is that a car follows a curved trajectory through a corner, hence if we switch to a reference frame in which the car is fixed but the air is moving, then the air has to follow a curved trajectory. If we freeze the relative motion mid-corner, with the car pointing at a tangent to the curve, then the air at the front of the car will be coming from approximately the direction of the inside front-wheel, while the air at the back of the car will be coming from an outer direction.

That's the simplest way of thinking about it, but there's a further subtlety. To negotiate a corner, a car generates: (i) a lateral force towards the centre of the corner's radius of curvature; and (ii) a yaw moment about its vertical axis.

Imagine the two extremes of motion in which only one of these effects occurs. In the first case, the car would continue pointing straight ahead, but would follow a curved path around the corner, exiting at right-angles to its direction of travel. In the second case, it would spin around its vertical axis while its centre-of-mass continued to travel in a straight line.

In the first case, the lateral component of the car's velocity vector corresponds to a lateral component in the airflow over the car. The angle which the airflow vector subtends to the longitudinal axis of the car is the same along the length of the vehicle.

In the second case, the spinning motion also induces an additional component to the airflow over the car. The car is a solid body spinning about its centre of mass with a fixed angular velocity, and the tangential velocity of that spin induces an additional component in the airflow velocity along the length of the car. The further a point is from the axis of rotation, the greater its tangential velocity: such points have to sweep out circles of greater circumference, in the same time, as points closer to the centre of mass.

Curved-flow, side-slip and yaw-angle. (From 'Development methodologies for Formula One aerodynamics', Ogawa et al, Honda R&D Technical Review 2009).

Now imagine the two types of motion combined. The result is depicted above, in the left part of the diagram. The white arrows depict the component of the airflow due to 'side-slip': the car's instantaneous velocity vector subtends a small angle to the direction in which its longitudinal axis is pointing. In the reference frame in which the car is fixed, this corresponds to a lateral component in the direction of the airflow which is constant along the length of the car.

When the yaw moment of the car is included (indicated by the curved blue arrow about the centre-of-mass), it induces an additional airflow component, indicated by the green arrows. Two things should be noted: (i) the green arrows at the front of the car point in the opposite direction from the green arrows at the rear; and (ii) the magnitude of the green arrows increases with distance from the centre of mass. The front of the car is rotating towards the inside of the corner, while the rear of the car is rotating away, hence the difference in the direction of the green arrows. And, as we explained above, the tangential velocity increases with distance from the axis of rotation, hence the increase in the magnitude of the green arrows.

The net result, indicated by the red arrows, is that the yaw-angle of the airflow has a different sign at the front and rear of the car, and the magnitude of the yaw angle increases with distance from the centre-of-mass. (The red arrows in the diagram are pointing in the direction in which the car is travelling; the airflow direction is obtained by reversing these arrows).
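This superposition is easy to sketch numerically. The function below combines a constant side-slip angle with the yaw-rate contribution, which grows with distance from the centre of mass; all the input values are hypothetical mid-corner numbers, chosen only to illustrate the sign flip between nose and tail described above.

```python
import math

def local_flow_yaw(x, beta, omega, v):
    """Approximate yaw angle (radians) of the onset flow at a point
    x metres ahead (+) or behind (-) of the centre of mass.
    beta: side-slip angle (rad); omega: yaw rate (rad/s); v: car speed (m/s).
    The side-slip term is constant along the car; the rotational term
    atan(omega*x / v) grows with distance from the axis of rotation."""
    return beta + math.atan2(omega * x, v)

# Hypothetical values: 1 deg side-slip, 0.5 rad/s yaw rate, 40 m/s.
beta, omega, v = math.radians(1.0), 0.5, 40.0
for x in (-2.0, 0.0, 2.0):   # tail, centre of mass, nose
    deg = math.degrees(local_flow_yaw(x, beta, omega, v))
    print(f"x = {x:+.1f} m : local flow yaw = {deg:+.2f} deg")
```

With these numbers the net yaw angle is positive at the nose and negative at the tail, reproducing the sign change of the red arrows in the diagram.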

So, to return to 1980, the Arrows A3 design trialled at Brands Hatch moved the mouth of the venturi tunnel forward to the nose of the car. The further forward the mouth, the greater the angle of the curved onset flow to the longitudinal axis of the car, and the further it is from the straight-ahead condition. Hence, the curved flow might well have separated from the leading edge of the skirt on the side of the car facing the inside of the corner, injecting a turbulent wake directly down the centre of the underbody. In this respect, the conventional location of the venturi inlets on a 1980 F1 car (i.e., behind the front wheel centreline) would have reduced yaw sensitivity.

Front-wings and rear-wings certainly have to operate in a state of yaw, and do so with a relatively high level of success. However, such devices have a larger aspect-ratio than an underbody venturi, which has to keep its boundary layer attached for a much longer distance.

It should also be noted that the flow through the underbody tunnels, like that through any type of duct, suffers from ‘losses’ which induce drag. The energy budget of a flow-field can be partitioned into kinetic energy, pressure-energy, and ‘internal’ heat energy. Viscous friction in the boundary layers, and any turbulence which follows from separation in the duct, creates heat energy, and irreversibly reduces the sum of the mean-flow kinetic energy and the pressure energy.

These energy losses are proportional to the length of the duct, increase with the average flow velocity through the duct (roughly with its square, in turbulent flow), and are inversely proportional to the effective cross-sectional diameter of the duct. Due to such losses, it is not possible for full pressure recovery to be attained in the diffuser and its wake, and this will contribute to the total drag of the car. Hence, whilst underbody downforce comes with less of a drag penalty than that associated with inverted wings in freestream flow, it is nevertheless true that the longer the venturi tunnels, and the greater the average velocity of the underbody flow, the greater the drag of the car.

Moreover, the longer the mouth and throat of a venturi tunnel, the thicker the boundary layer at the start of the pressure-recovery region, and the more prone it will be to separation in that adverse pressure gradient. All of which militates against a quick and easy gain from extending the area of the underbody.
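The scaling of these duct losses can be illustrated with the standard Darcy-Weisbach relation from pipe-flow theory; this is a generic estimate, not Arrows data, and the friction factor, dimensions and flow speed below are hypothetical.

```python
def duct_pressure_loss(f, length, diameter, rho, v):
    """Darcy-Weisbach estimate of pressure loss in a duct:
    dp = f * (L/D) * (rho * v^2 / 2), in pascals.
    f: dimensionless friction factor; length, diameter: metres;
    rho: fluid density (kg/m^3); v: mean flow speed (m/s)."""
    return f * (length / diameter) * 0.5 * rho * v**2

# Hypothetical underbody-tunnel numbers. Doubling the tunnel length
# doubles the loss, for a fixed mean flow speed and cross-section.
base   = duct_pressure_loss(f=0.02, length=2.0, diameter=0.15, rho=1.2, v=50.0)
longer = duct_pressure_loss(f=0.02, length=4.0, diameter=0.15, rho=1.2, v=50.0)
print(base, longer)  # longer is exactly twice base
```

The quadratic dependence on mean flow speed is why faster underbody flow, despite generating more downforce, also extracts a growing drag penalty.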

Monday, August 07, 2017

Driverless cars and cities

Driverless cars are somewhat in the news this year, with Ford investing $1bn to meet their objective of launching a fleet of autonomous cars in 2021. Coincidentally, the July 2017 issue of 'Scientific American' features an article extolling the virtues of a driverless future in modern cities. The article is written by Carlo Ratti and Assaf Biderman, who work for something called the 'Senseable City Lab' at the Massachusetts Institute of Technology.

A number of the claims made in the article are worth reviewing. Let's start with the following:

"On average, cars sit idle 96 percent of the time. That makes them ideal candidates for the sharing economy...The potential to reduce congestion is enormous...'Your' car could give you a lift to work in the morning and then, rather than sitting in a parking lot, give a lift to someone else in your family - or to anyone else in your neighbourhood or social media community...a city might get by with just 20 percent the number of cars now in use...fewer cars might also mean shorter travel times, less congestion and a smaller environmental impact."

A number of thoughts occur in response to these claims:

1) Ride-sharing would reduce the number of cars, not the number of journeys. Every journey which currently takes place would still take place, but in addition would be all the journeys made when a car needs to travel from the point where one passenger disembarks to the point where the next embarks. At present, each journey contains a passenger; with the proposed ride-sharing of driverless cars, there would be additional journeys in which the cars contain no passengers at all. All other things being equal, that would increase congestion and pollution, not reduce it. 

2) The modern technological world, including the GPUs and artificial neural networks which have created the possibility of driverless vehicles, has been built upon the wealth of a capitalist economy. Such an economy is driven by, amongst other things, the incentivization of private ownership. In particular, people like owning their own cars. It's not clear why a technological development alone, such as that of the driverless car, will prompt society to adopt a new model of shared ownership.

3) Not everyone lives in cities. Universities tend to be located in cities, hence many academics fall into the habit of thinking that everyone lives and works in cities. Many people live outside cities, and drive into them to their places of work. They drive into the cities from different places at the same time each morning. For such people, there needs to be a one-to-one correspondence between cars and passengers.

4) People like the convenience and efficacy of having a car parked adjacent to their home or place of work. If you're a parent, and your child falls ill at home, or there's an accident at school, you want to drive there immediately, not wait for a shared resource to arrive.

5) If cars are constantly in use, their components will degrade in a shorter period of time, so maintenance costs will be greater, and the environmental impact of manufacturing new tyres, batteries etc. will be greater.
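Point (1) can be made concrete with a minimal accounting sketch. The journey distances below are entirely hypothetical; the point is only that the shared fleet drives every passenger kilometre the private fleet does, plus the empty repositioning legs.

```python
def total_vehicle_km(passenger_trips_km, repositioning_km):
    """Total distance driven by a fleet over some period:
    every passenger journey still occurs, plus any empty legs
    driven between one drop-off and the next pick-up."""
    return sum(passenger_trips_km) + sum(repositioning_km)

# Hypothetical day: the same four passenger journeys in both scenarios.
trips = [10.0, 8.0, 12.0, 6.0]
private = total_vehicle_km(trips, [])               # each owner's car waits where it parked
shared  = total_vehicle_km(trips, [3.0, 5.0, 2.0])  # one shared car repositions between riders
print(private, shared)  # 36.0 vs 46.0: fewer cars, but more kilometres driven
```

Unless every repositioning leg has zero length, total vehicle-distance (and hence congestion and emissions, other things being equal) can only go up.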

So that's just for starters. What else do our MIT friends have to say? Well, they next claim that "vacant parking lots could be converted to offer shared public amenities such as playgrounds, cafes, fitness trails and bike lanes."

Unfortunately, most car parks are privately owned, either by retail outlets or employers. If they become redundant, then those private companies will either extend their existing office space or floor space, or sell to the highest bidder. Car-parks are unlikely to become playgrounds.

The authors then claim that current traffic-light controlled intersections could be managed in the style of air traffic control systems: 

"On approaching an intersection, a vehicle would automatically contact a traffic-management system to request access. It would then be assigned an individualized time, or 'slot', to pass through the intersection.

"Slot-based intersections could significantly reduce queues and delays...Analyses show that systems assigning slots in real time could allow twice as many vehicles to cross an intersection in the same amount of time as traffic lights usually do...Travel and waiting times would drop; fuel-consumption would go down; and less stop-and-go traffic would mean less air pollution."
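The slot-assignment idea itself is simple to sketch. Assuming a single conflict point and a fixed crossing headway (both simplifications of what a real traffic-management system would do), a greedy scheduler might look like this; the arrival times and headway are hypothetical.

```python
def assign_slots(arrival_times, headway):
    """Greedy slot assignment for a single conflict point:
    each vehicle receives the earliest slot no earlier than its
    arrival time, with consecutive slots separated by at least
    `headway` seconds. Returns slots in arrival order."""
    slots = []
    next_free = float("-inf")
    for t in sorted(arrival_times):
        slot = max(t, next_free)   # wait if the conflict point is busy
        slots.append(slot)
        next_free = slot + headway
    return slots

# Hypothetical arrivals (seconds), with a 2-second crossing headway.
print(assign_slots([0.0, 0.5, 1.0, 7.0], 2.0))  # [0.0, 2.0, 4.0, 7.0]
```

Note that the scheduler can only raise throughput at the intersection itself; it does nothing about capacity elsewhere on the network, which is the subject of the objection below.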

Sadly, this is a concept which seems to imagine that cities consist of grids of multi-lane highways. Most cities in the world don't. And in every city, the following 'min-max' principle of road-capacity applies:

For a sequence of interconnected roads, given the capacity (i.e., the maximum flow-rate) in each component, the capacity of the entire sequence is the minimum of those individual capacities. 

Hence, even if the capacity of every multi-lane intersection in a city is doubled, the capacity of a linked sequence is determined by the component with the lowest capacity. In many cities, multi-lane highways taper into single-lane roads, and it is the single-lane roads which limit the overall capacity. Doubling the capacity of intersections would merely change the spatial distribution of the queues.
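The bottleneck principle amounts to a one-line computation. The capacities below are hypothetical vehicles-per-hour figures for a corridor of three linked components.

```python
def route_capacity(link_capacities):
    """Capacity (maximum flow-rate) of a sequence of interconnected
    roads: the component with the lowest capacity sets the limit."""
    return min(link_capacities)

# Hypothetical corridor: [multi-lane road, intersection, single-lane road].
# Doubling the intersection's capacity leaves the route capacity
# pinned at the single-lane road.
before = route_capacity([1800, 3600, 900])
after  = route_capacity([1800, 7200, 900])
print(before, after)  # 900 900
```

Improving any non-bottleneck component merely relocates the queue; only raising the minimum raises the throughput of the whole sequence.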

So, all in all, not a positive advert for driverless cars.