Streamwise vortices occur when fluid spirals around an axis which points in the same direction as the overall direction of fluid flow. In particular, streamwise vortices are generated by aircraft wing-tips, and by the front-wing of a Formula 1 car at the inboard transition between the neutral central section and the inner tip of the main-plane and flap(s). The latter is the so-called Y250 vortex. Surprisingly, the method by which such streamwise vorticity is generated also plays a crucial role in the generation of atmospheric tornados.
Let's begin with the meteorology. A tornado is a funnel of concentrated vertical vorticity in the atmosphere. Most tornados are generated within supercell thunderstorms when the updraft of the storm combines with the horizontal vorticity generated by vertical wind shear. The updraft tilts the horizontal vorticity into vertical vorticity, generating a rotating updraft.
However, there are two distinct types of vertical wind shear: Unidirectional and directional. The former generates crosswise vorticity, whilst the latter generates streamwise vorticity.
When the wind shear associated with a storm is unidirectional, the updraft acquires no net rotation. The updraft raises the crosswise vorticity into a hairpin shape, with one cyclonically rotating leg, on the right as one looks downstream, and an anticyclonic leg on the left. Updrafts only acquire net cyclonic rotation when the horizontal vorticity has a streamwise component. (Diagrams above and below from St Andrews University Climate and Weather Systems website).
Specifically, cyclonic tornado formation requires that the wind veers with height (meaning that its direction rotates in a clockwise sense as altitude increases).
In effect, the flow of air through the updraft becomes analogous to flow over a hill (personal communication with Robert Davies-Jones): the flow into the updraft has cyclonic vorticity, and the flow velocity there reinforces the vertical velocity of the updraft; the downward flow on the other side, where the anticyclonic vorticity exists, partially cancels the vertical velocity of the updraft. Hence, the cyclonic part of the updraft becomes dominant.
Before we turn to consider wing-tip vortices, we need to recall the mathematical definition of vorticity, and the vorticity transport equation.
Let's start with some notation. In what follows, we shall denote the streamwise direction as x, the lateral (aka 'spanwise' or 'crosswise') direction as y, and the vertical direction as z. The velocity vector field U has components in these directions, denoted respectively as Ux, Uy, and Uz. There is also a vorticity vector field, whose components will be denoted as ωx, ωy, and ωz.
The vorticity vector field ω is defined as the curl of the velocity vector field:
ω = (ωx , ωy, ωz)
= (∂Uz/∂y − ∂Uy/∂z , ∂Ux/∂z − ∂Uz/∂x , ∂Uy/∂x − ∂Ux/∂y)
We're also interested here in the Vorticity Transport Equation (VTE) for ωx, the streamwise component of vorticity. In this context we can simplify the VTE by omitting turbulent, viscous and baroclinic terms to obtain:
Dωx/Dt = ωx(∂Ux/∂x) + ωy(∂Ux/∂y) + ωz(∂Ux/∂z)
The left-hand side here, Dωx/Dt, is the material derivative of the x-component of vorticity; it denotes the change of ωx in material fluid elements convected downstream by the flow.
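To make these definitions concrete, here's a minimal numerical sketch (Python with NumPy; my own illustration, not part of the original argument). It evaluates the curl components of a velocity field sampled on a uniform grid, for a made-up flow in which the lateral velocity increases with height, and confirms that the resulting streamwise vorticity is simply −∂Uy/∂z:

```python
import numpy as np

# Minimal sketch: compute the vorticity components of a sampled velocity
# field on a uniform grid, using the curl definition given above.
# The velocity field is purely illustrative: a lateral (y) velocity which
# increases with height z, i.e. dUy/dz != 0, which should produce
# streamwise vorticity omega_x = dUz/dy - dUy/dz = -dUy/dz.

n = 64
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')   # axes: 0 = x, 1 = y, 2 = z

Ux = np.ones_like(X)      # uniform streamwise flow
Uy = 0.5 * Z              # lateral velocity increasing with height
Uz = np.zeros_like(X)     # no vertical velocity

dx = x[1] - x[0]
dUz_dy = np.gradient(Uz, dx, axis=1)
dUy_dz = np.gradient(Uy, dx, axis=2)
dUx_dz = np.gradient(Ux, dx, axis=2)
dUz_dx = np.gradient(Uz, dx, axis=0)
dUy_dx = np.gradient(Uy, dx, axis=0)
dUx_dy = np.gradient(Ux, dx, axis=1)

omega_x = dUz_dy - dUy_dz    # streamwise vorticity
omega_y = dUx_dz - dUz_dx    # lateral vorticity
omega_z = dUy_dx - dUx_dy    # vertical vorticity

print(omega_x.mean())        # ~ -0.5, i.e. -dUy/dz, as expected
```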
Now, for a racecar, streamwise vorticity can be created by at least two distinct front-wing mechanisms:
1) A combination of initial lateral vorticity ωy, and a lateral gradient in streamwise velocity, ∂Ux/∂y ≠ 0.
2) A vertical gradient in the lateral component of velocity, ∂Uy/∂z ≠ 0, (corresponding to directional vertical wind shear in meteorology).
In the case of the first mechanism, one can vary the chord, camber, or angle of attack possessed by sections of the wing to create a lateral gradient in the streamwise velocity ∂Ux/∂y ≠ 0. Given that ωy ≠ 0 in the boundary layer of the wing, combining this with ∂Ux/∂y ≠ 0 entails that the second term on the right-hand side in the VTE is non-zero, which entails that Dωx/Dt ≠ 0. Thus, the creation of the spanwise-gradient in the streamwise velocity skews the initially spanwise vortex lines until they possess a significant component ωx in a streamwise direction.
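As a toy illustration of that tilting term (with entirely made-up numbers, purely to show the bookkeeping), one can step the simplified VTE forward in time for a single material element which starts with purely spanwise vorticity:

```python
# Hypothetical numbers, chosen only to illustrate the tilting term in the VTE:
# a material element starts with purely spanwise vorticity omega_y, and is
# convected through a region with a lateral gradient in streamwise velocity.

omega_x = 0.0     # initial streamwise vorticity [1/s]
omega_y = 50.0    # initial spanwise vorticity [1/s]
dUx_dy = 20.0     # lateral gradient of streamwise velocity [1/s]
dt = 0.001        # time step [s]

for step in range(100):                 # 0.1 s of convection downstream
    # D(omega_x)/Dt = omega_y * dUx/dy   (the other terms assumed zero here)
    omega_x += omega_y * dUx_dy * dt

print(omega_x)    # ~100 1/s of streamwise vorticity tilted out of omega_y
```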
However, it is perhaps the second mechanism which provides the best insight into the formation of wing-tip vortices. As the diagram above illustrates for the case of an aircraft wing (G.A.Tokaty, A History and Philosophy of Fluid Mechanics), the spanwise component of the flow varies above and below the wing. This corresponds to a non-zero value of ∂Uy/∂z, and such a non-zero value plugs straight into the definition of the curl of the velocity vector field, yielding a non-zero value for the streamwise vorticity ωx:
ωx = ∂Uz/∂y − ∂Uy/∂z
Putting this in meteorological terms, looking from the front of a Formula 1 car (with inverted wing-sections, remember), the left-hand-side of the front-wing has a veering flow-field at the junction between the flap/main-plane and the neutral section. The streamlines are, in meteorological terms, South-Easterlies under the wing, veering to South-Westerlies above. This produces streamwise vorticity of positive sign.
On the right-hand side, the flow-field is backing with increasing vertical height z. The streamlines are South-Westerlies under the wing, backing to South-Easterlies above. This produces streamwise vorticity with a negative sign.
Thus, we have demonstrated that the generation of the Y250 vortex employs the same mechanism for streamwise vorticity formation as that required for tornadogenesis.
Wednesday, December 23, 2015
Monday, December 21, 2015
The open-tailed box effect
The modern understanding of racecar aerodynamics holds that copious amounts of downforce can be produced by accelerating the airflow under the car, in effect turning the region between the underbody and ground plane into a mobile nozzle.
The Lotus 78 of 1977 famously introduced venturi profiles beneath the car, and sliding skirts to seal the low pressure area thereby created. However, it is less well-known that underbody skirts had fitfully appeared on various cars earlier in the decade. Moreover, it is slightly disconcerting to hear the explanations proffered by several F1 designers from the middle 1970s for the function of these devices.
Gordon Murray introduced inch-deep skirts on the underside of the 1975 Brabham BT44 in conjunction with an overall 'upturned saucer' design, and explains his thinking as follows:
"With any moving form you have a stagnation point where air meets it and decides how much is going to flow over, below or around it...I decided, instead of presenting some sort of parabolic-shaped bluff body to the air, I wouldn't give the air a chance." He sketches a triangular shape. "That way the stagnation point was there," he says, pointing to the leading edge of the triangle's base, which is very low to the ground. "So all the air had to go over the top and you had the minimum coming under the car," (F1 Magazine, May 2001, p140-141).
Gordon Coppuck, however, had already experimented with skirts on the McLaren M23:
"In 1974 at Dijon-Prenois, vertical plastic skirts around the under-periphery of the car were tried, but they quickly wore away on contact with the track. The idea was to exclude air from underneath the car and so minimise lift," (p49, McLaren M23, Ian Wagstaff, Haynes 2013). The skirts were fitted again to the M23 at some races in early 1976, this time provoking complaints from competitors such as Colin Chapman (!) and Ken Tyrrell.
Talk of minimising lift by forcing air over the top of the car seems misguided because the upper surface of a racecar is generally convex, and the air will tend to be accelerated by a convex surface, producing low pressure on the upper surfaces, somewhat counter to the overall objective.
Nevertheless, it seems that there actually was a beneficial effect to be had from partially excluding air from the underbody, and this is clearly explained by Ian Bamsey in his fantastic book The Anatomy and Development of the Sports Prototype Racing Car (Haynes, 1991):
"The [Shadow] DN8 had conventional wings and a flat bottom and, following the fashion of 1976, it was fitted with skirts along the side of its monocoque, these joined in a vee under the nose. Under certain conditions the skirts rubbed on the track and their general effect was to sweep the air aside, in snowplough fashion. Thus, the overall effect was not one of spatial acceleration of the underbody air, it was one of exclusion. The flow blockage allowed the forward migration of the naturally low pressure air at the back of the car into the skirt's exclusion zone. This was the principle of the so-called open tailed box. A box with the road forming its bottom and only its tail open will experience a pressure reduction within as it progresses along the track," (p59).
So, although the effect may be quite weak, it is possible to generate downforce by excluding air from the underbody.
Sunday, December 20, 2015
The L'Oreal Women in Science initiative
"Much remains to be done with regard to gender balance in science. Most tellingly, women account for only 30% of the world’s researchers. There are still great barriers that discourage women from entering the profession and obstacles continue to block progress for those already in the field."
So complains the L'Oreal-UNESCO 'For Women in Science' initiative. Since 1998 this programme has "strived to support and recognize accomplished women researchers, to encourage more young women to enter the profession and to assist them once their careers are in progress," by means of awards, fellowships, and advertising campaigns declaring that 'Science Needs Women'.
In comparison, the plight of men employed in the nursing profession has received little attention. To place this in the type of quantitative context which should appeal to 'Women in Science', the UK Office for National Statistics compiles an Annual Survey of Hours and Earnings (ASHE), based upon a sample taken from HM Revenue and Customs' Pay As You Earn (PAYE) records. Amongst other information, this reveals the number of men and women employed in different professions. The 2015 results estimate that the number of men and women employed in nursing are as follows:
Women in nursing: 673,000
Men in nursing: 109,000
Hence, only 14% of nurses in the UK are men, a figure somewhat lower than the 30% of 'Women in Science' worldwide. This shocking gender imbalance suggests that men are systematically discouraged from entering the nursing profession, are discriminated against within the profession, and have their progress blocked within the field.
Now, some people might argue that this is only natural because men have a tendency to be more aggressive and competitive than women, a characteristic which makes women rather more suited to the caring professions.
This, however, is merely one of the phony arguments used by the nursing matriarchy to preserve the pre-eminent status of women within the profession. Men have evolved by sexual selection to be more aggressive and competitive in order to make themselves more attractive to women, and thereby enhance their prospects of being chosen for mating. It is therefore women and their mating criteria which are ultimately responsible for the aggressive and competitive nature of men.
Hence, it is about time that L'Oreal expanded its concerns over professional gender imbalance, and initiated a range of awards and fellowships to assist the cause of Men in Nursing (MIN). If possible, the assistance of the BBC should be sought to promulgate a range of positive Male Nursing stereotypes within its programming; for example, all hospital scenes should feature male nurses in prominent roles, leading and directing their female colleagues.
But hold on: what's this on the L'Oreal website?
"More women scientists should also be able to obtain positions of responsibility, just like their male counterparts, so that future generations will have role models to inspire them. The current situation, however, indicates that, well into the third millennium, a considerable discrepancy exists between what society professes to believe and what we actually do."
The third millennium? The third millennium since what, exactly? The 'Women in Science' will be able to tell you that the genus Homo has been around for approximately 1.8 million years, so that's about one thousand eight hundred millennia. Not three.
Perhaps we should only consider the period of time which has elapsed since Homo Sapiens made the transition from the hunter-gatherer lifestyle to agriculture and settlement. But that would still be about 12,000 years, four times the number of millennia that L'Oreal are willing to acknowledge.
It's the type of error one would expect of a cosmetics-oriented organisation, rather than a scientifically-oriented one. Perhaps, then, we shall have to cast our net more widely to find a suitable sponsor for MIN...
Tuesday, September 15, 2015
Tyrrell 008 and Thunderbird 2
Patrick Depailler at the entry to Mirabeau Bas, Monaco 1978
The conventional wisdom on such wings is that they induce a span-wise component to the wing-flow, directed towards the roots of the wing. This has two consequences:
(i) The strength of the wing-tip vortices is reduced, decreasing vortex-drag.
(ii) Yaw instability is increased. As the vehicle begins to yaw, the effective forward-sweep is increased on the outer wing, and the effective sweep is reduced on the inner wing. This further reduces the drag on the outer wing and increases the drag on the inner wing, and this differential drag creates a torque which further increases the yaw angle.
So perhaps the Tyrrell 008 front-wing was designed to improve turn-in response.
Detailed information on the aerodynamic performance of Thunderbird 2 appears to have been lost when Century 21 Productions closed its studio on the Slough Trading Estate in late 1970.
Sunday, September 13, 2015
Tyre friction and self-affine surfaces
The friction generated by an automobile tyre is crucially dependent upon the roughness of the road surface over which the tyre is moving. The theoretical representation of this phenomenon developed by the academic community over the past 20 years has been largely predicated on the assumption that the road can be represented as a statistically self-affine fractal surface. The purpose of this article is to explain what this means, but also to question whether this assumption is in need of some generalisation.
To begin, we need to understand two concepts: the correlation function and the power spectrum of a surface.
Surfaces in the real world are not perfectly smooth, they're rough. Such surfaces are mathematically represented as realisations of a random field. This means that the height of the surface at each point is effectively sampled from a statistical distribution. Each realisation of a random field is unique, but one can classify surface types by the properties of the random field from which their realisations are drawn. For example, each sheet of titanium manufactured by a certain process will share the same statistical properties, even though the precise surface morphology of each particular sheet is unique.
Let us denote the height of a surface at a point x as h(x). The height function will have a mean <h(x)> and a variance. (Here and below, we use angular brackets to denote the mean value of the variable within the brackets). The variance measures the amount of dispersion either side of the mean. Typically, the variance is calculated as:
Var = <h(x)²> − <h(x)>²
Mathematically, the height at any pair of points, x and x+r, could be totally independent. In this event, the following equation would hold:
<h(x)h(x+r)> = <h(x)>²
The magnitude of the difference between <h(x)h(x+r)> and <h(x)>² therefore indicates the level of correlation between the height at points x and x+r. This information is encapsulated in the height auto-correlation function:
ξ(r) = <h(x)h(x+r)> − <h(x)>²
Now the auto-correlation function has an alter-ego called the power spectrum. This is the Fourier transform of the auto-correlation function. It contains the same information as the auto-correlation function, but enables you to view the correlation function as a superposition of waves with different amplitudes and wavelengths. Each of the component waves is called a mode, and if the power spectrum has a peak at a particular mode, it shows that the height of the surface has a degree of correlation at certain regular intervals.
Related to the auto-correlation function is the height-difference correlation function:
C(r) = <(h(x+r)−h(x))²>
This is essentially the variance of the height-difference, as a function of the distance r from an arbitrary point x. It is a useful function to plot because it equals twice the difference between the overall variance and the auto-correlation function:
C(r) = 2(Var−ξ(r))
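These quantities are straightforward to estimate from a sampled profile. Here's a short sketch (Python/NumPy, my own illustration using a synthetic trace rather than measured road data) which estimates Var, ξ(r) and C(r) from a correlated 1-D height profile, and checks the identity C(r) = 2(Var − ξ(r)):

```python
import numpy as np

# Sketch: estimate the variance, auto-correlation xi(r) and height-difference
# correlation C(r) of a synthetic 1-D height trace, and check the identity
# C(r) = 2*(Var - xi(r)) quoted above.

rng = np.random.default_rng(0)
n = 4096
# A correlated, roughly stationary toy profile: white noise smoothed over a
# correlation length of ~20 samples.
h = np.convolve(rng.normal(size=n), np.ones(20) / 20.0, mode='same')

var = np.mean(h**2) - np.mean(h)**2            # Var = <h^2> - <h>^2

def xi(r):
    """Estimate <h(x) h(x+r)> - <h(x)>^2 over the profile."""
    return np.mean(h[:-r] * h[r:]) - np.mean(h)**2

def C(r):
    """Estimate the height-difference correlation <(h(x+r) - h(x))^2>."""
    return np.mean((h[r:] - h[:-r])**2)

for r in (1, 5, 20, 100):
    # The two columns should agree to within sampling error.
    print(r, round(C(r), 4), round(2.0 * (var - xi(r)), 4))
```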
Which brings us to self-affine fractal surfaces. For such a surface, a typical height-difference correlation function is plotted below, (Evaluation of self-affine surfaces and their implications for frictional dynamics as indicated by a Rouse material, G.Heinrich, M.Kluppel, T.A.Vilgis, Computational and Theoretical Polymer Science 10 (2000), pp53-61).
Points only a small distance away from an arbitrary starting point x can be expected to have a height closely correlated with the height at x, hence C(r) is small to begin with. However, as r increases, so C(r) also increases, until at a critical distance ξ|| it saturates: above ξ||, ξ(r) tends to zero, and C(r) tends to a constant value of twice the variance found across the entire surface. ξ|| can be dubbed the lateral correlation length. In road surfaces, it corresponds to the average diameter of the aggregate stones.
To understand what a self-affine fractal surface is, first recall that a self-similar fractal surface is a surface which is invariant under magnification. In other words, the application of a scale factor x → a⋅x leaves the surface unchanged.
In contrast, a self-affine surface is invariant if a separate scale factor is applied to the horizontal and vertical directions. Specifically, the scale factor applied in the vertical direction must be suppressed by a power between 0 and 1. If x represents the horizontal components of a point in 3-dimensional space, and z represents the vertical component, then it is mapped by a self-affine transformation to x → a⋅x and z → a^H⋅z, where H is the Hurst exponent. In the height-difference correlation function plotted above, the initial slope is equal to 2H, twice the value of the Hurst exponent.
Note, however, that road surfaces are considered to be statistically self-affine surfaces, which is not the same thing as being exactly self-affine. If you zoomed in on such a surface with the specified horizontal and vertical scale-factors, the magnified subset would not coincide exactly with the parent surface. It would, however, be drawn from a random field possessing the same properties as the parent surface, hence such a surface is said to be statistically self-affine.
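One standard way to manufacture a statistically self-affine profile is spectral synthesis: generate Fourier modes with random phases and power-law amplitudes. The sketch below (my own illustration, not taken from the cited papers) does this in one dimension for a chosen Hurst exponent, and then recovers H from the small-r slope of the height-difference correlation function, which scales as r^2H:

```python
import numpy as np

# Sketch: build a statistically self-affine 1-D profile by spectral synthesis
# (power spectral density ~ k^-(2H+1)), then recover the Hurst exponent H
# from the small-r behaviour C(r) ~ r^(2H).

rng = np.random.default_rng(1)
n = 2**14
H_target = 0.8

k = np.fft.rfftfreq(n)                            # spatial frequencies
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-(2 * H_target + 1) / 2.0)    # amplitude ~ k^-(2H+1)/2
phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
h = np.fft.irfft(amp * np.exp(1j * phase), n=n)   # the synthetic profile

r = np.arange(1, 200)
C = np.array([np.mean((h[s:] - h[:-s])**2) for s in r])

# Fit the log-log slope at small r; the slope should be close to 2H.
slope, _ = np.polyfit(np.log(r[:30]), np.log(C[:30]), 1)
print("estimated Hurst exponent:", slope / 2.0)   # roughly 0.8
```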
A yet further adaptation is necessary to make the self-affine model applicable to road surfaces. Roads are known to be characterised by two distinct length-scales: the macroscopic one determined by the size of aggregate stones, and the microscopic one determined by the surface properties of those stones, (see diagram below).
One attempt to adapt the self-affine model to road surfaces introduces two distinct Hurst exponents, one for the micro-roughness and one (purportedly) for the macro-roughness, as shown below, (Investigation and modelling of rubber stationary friction on rough surfaces, A.Le Gal and M.Kluppel, Journal of Physics: Condensed Matter 20 (2008)):
This, however, doesn't seem quite right. The macro-roughness of a road surface is defined by the morphology of the largest asperities in the road, the stone aggregate. Yet, as Le Gal and Kluppel state, a road surface only displays self-affine behaviour "within a defined wave length interval. The upper cut-off length is identified with the largest surface corrugations: for road surfaces, this corresponds to the limit of macrotexture, e.g. the aggregate size."
It's not totally clear, then, whether the macro-roughness of a road surface falls within the limits of self-affine behaviour, or whether it actually defines the upper limit of this behaviour.
So whilst the notion that a road surface is statistically self-affine appears, at first sight, to have been empirically verified by the correlation functions and power spectra taken of road surfaces, perhaps there's still some elbow-room to suggest a generalisation of this concept.
For example, consider mounded surfaces. These are surfaces in which there are asperities at fairly regular intervals. In the case of road surfaces, this corresponds to the presence of aggregate stones at regular intervals. Such a surface resembles a self-affine surface in the sense that it has a lateral correlation length ξ||. However, there is an additional length-scale λ defining the typical spacing between the asperities, as represented in the diagram below, (Evolution of thin film morphology: Modelling and Simulations, M.Pelliccione and T-M.Lu, 2008, p50).
In terms of a road surface, whilst ξ|| characterizes the average size of the aggregate stones, λ characterizes the average distance between the stones.
In terms of the height-difference correlation function C(r), a mounded surface resembles a self-affine surface below the lateral correlation length, r < ξ||. However, above ξ||, where the self-affine surface has a constant profile for C(r), the profile for a mounded surface is oscillatory (see example plot below, ibid. p51). Correspondingly, the power spectrum for a mounded surface has a peak at wavelength λ, where no peak exists for a self-affine surface.
The difference between a mounded surface and a genuinely self-affine surface is something which will only manifest itself empirically by taking multiple samples from the surface. Individual samples from a self-affine surface will show oscillations in the height-difference correlation function above the lateral correlation length, but the oscillations will randomly vary from one sample to another. In contrast, the samples from a mounded surface will have oscillations of a similar wavelength, (see plots below, from Characterization of crystalline and amorphous rough surface, Y.Zhao, G.C.Wang, T.M.Lu, Academic Press, 2000, p101).
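This spectral signature is easy to demonstrate with a synthetic example. The sketch below (illustrative only) builds a 'mounded' profile from regularly spaced asperities plus some short-range roughness, and shows that its power spectrum peaks at the mound spacing λ, which is precisely the feature a purely self-affine profile lacks:

```python
import numpy as np

# Sketch: a 'mounded' profile, with asperities spaced at roughly regular
# intervals lam, shows a peak in its power spectrum at the corresponding
# wavelength, whereas a purely self-affine profile does not.

rng = np.random.default_rng(2)
n = 8192
x = np.arange(n)
lam = 64                                      # mound spacing, in samples

mounds = np.cos(2.0 * np.pi * x / lam)        # regularly spaced asperities
roughness = np.convolve(rng.normal(size=n), np.ones(8) / 8.0, mode='same')
h = mounds + 0.5 * roughness
h -= h.mean()

psd = np.abs(np.fft.rfft(h))**2 / n           # one-sided power spectrum
freqs = np.fft.rfftfreq(n)                    # cycles per sample

peak = freqs[np.argmax(psd[1:]) + 1]          # skip the zero-frequency bin
print("peak wavelength ~", 1.0 / peak)        # expect roughly lam = 64 samples
```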
Conceptually, what's particularly interesting about mounded surfaces is that they're generalisations of the self-affine surfaces normally assumed in tyre friction studies. Below the lateral correlation length-scale ξ||, a mounded surface is self-affine (M.Pelliccione and T-M.Lu, p52). One can say that a mounded surface is locally self-affine, but not globally self-affine. Note that whilst every globally self-affine surface is locally self-affine, not every locally self-affine surface is globally self-affine.
A self-affine road surface will have aggregate stones of various sizes and separations, whilst a mounded road surface will have aggregate stones of similar size and regular separation.
In fact, one might hypothesise that many actual road surfaces in the world are indeed locally self-affine, but not globally self-affine. For this to be true, it is merely necessary for there to be some regularity in the separation of aggregate within the asphalt. If the distance between aggregate stones is random, then a road surface can indeed be represented as globally self-affine. However, if there is any regularity to the separation of aggregate, then the surface will merely be locally self-affine. If true, then existing academic studies of tyre friction have fixated on a special case which is a good first approximation, but which does not in general obtain.
Thursday, September 03, 2015
BMW's F1 'rocket fuel' and aromatic hydrocarbons
The story of BMW's turbo 'rocket fuel' has long since passed into Formula 1 legend, but there's a longer and deeper story here, involving the German war effort, some organic chemistry, and the history of oil refining techniques. But let's begin with the legend, and the breakthrough which enabled the Brabham-BMW of Nelson Piquet to win the 1983 Drivers' Championship:
[BMW motorsport technical director, Paul] Rosche telephoned a contact at chemicals giant BASF and asked if a different fuel formulation might do the trick. After a little research, a fuel mix was unearthed that had been developed for Luftwaffe fighters during World War II, when Germany had been short of lead. Rosche asked for a 200-litre drum of the fuel for testing and, when it arrived, he took it straight to the dyno.
"Suddenly the detonation was gone. We could increase the boost pressure, and the power, without problems. The maximum boost pressure we saw on the dyno was 5.6 bar absolute, at which the engine was developing more than 1400 horsepower. It was maybe 1420 or 1450 horsepower, we really don't know because we couldn't measure it — our dyno only went up to 1400." ('Generating the Power', MotorsportMagazine, January 2001, p37).
An aromatic hydrocarbon called toluene is commonly held to have been the magic compound in this fuel brew, but erstwhile Brabham chief mechanic Charlie Whiting goes further:
"There were some interesting ingredients in it, and toluene has been mentioned. But it would have had far more exciting things in it, I think, than toluene. I suspect – well, I know – that it was something the BMW engineers had dug out of the cupboard from the Second World War. Almost literally rocket fuel," ('Poacher Turned Gamekeeper', MotorsportMagazine, December 2013, p74).
Before we delve into the chemistry of fuels, let's establish some context here. The current F1 turbo engine regulations require detonation-resistant fuels with a high calorific value per unit mass. Detonation resistance enables one to increase the compression ratio, and thereby increase the work done on each piston-stroke, while the limits on total fuel mass and fuel mass-flow rate require fuel with a high energy content per unit mass.
In contrast, in the 1980s the regulations required detonation-resistant fuels with a high calorific value per unit volume. From 1984, the amount of fuel permitted was limited, but the limitation was defined in terms of fuel volume rather than mass, hence fuel with a high mass-density became advantageous. By this time, the teams had already followed BMW's lead and settled upon fuels with a high proportion of aromatic hydrocarbons.
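To put rough numbers on this (using my own approximate, rounded handbook-style figures, not values from the sources quoted here), compare a representative aromatic with a representative paraffin:

```python
# Back-of-envelope sketch with approximate, rounded figures (my own
# illustration) showing why volume-limited and mass-limited fuel regulations
# favour different hydrocarbons.

fuels = {
    # name: (density in kg per litre, lower heating value in MJ per kg)
    "toluene (aromatic)":    (0.87, 40.6),
    "iso-octane (paraffin)": (0.69, 44.3),
}

for name, (density, lhv_per_kg) in fuels.items():
    lhv_per_litre = density * lhv_per_kg
    print(f"{name}: {lhv_per_kg:.1f} MJ/kg, {lhv_per_litre:.1f} MJ/litre")

# The paraffin carries more energy per kilogram (favoured by today's
# mass-limited rules), while the denser aromatic carries more energy per
# litre (favoured by the volume-limited rules of the 1980s).
```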
To understand the significance of this, we need to start with the fact that there are four types of hydrocarbon:
(i) Paraffins (sometimes called alkanes)
(ii) Naphthenes (sometimes called cycloalkanes)
(iii) Aromatics (sometimes called arenes)
(iv) Olefins (sometimes called alkenes)
Methane, ethane and propane. Each larger disk represents a carbon atom; each white disk represents a hydrogen atom; and each black disk represents a covalent bond.
Each hydrocarbon molecule contains hydrogen and carbon atoms, bound together by covalent bonds. The hydrocarbon types differ from each other by the number of bonds between adjacent atoms, and by the overall topology by which the atoms are connected together. So let's briefly digress to consider the nature of covalent bonding.
The electrons in an atom are stacked in so-called 'shells', each of which can contain a maximum number of members. The first shell can contain only two electrons, while the second can contain eight. If the outermost electron shell possessed by an atom is incomplete, then the atom will be disposed to interact or bond with other atoms.
A neutral hydrogen atom has one electron, so its one and only shell needs one further electron to complete it. A neutral carbon atom has six electrons, two of which fill the lowermost shell, leaving only four in the next shell. Hence, another four electrons are required to complete the second shell of the carbon atom.
In covalent bonding, an electron from one atom is shared with an adjacent atom, and the adjacent atom reciprocates by sharing one of its electrons. This sharing of electron pairs enables groups of atoms to complete their electron shells, and thereby reside in a more stable configuration. In particular, a carbon atom, lacking four electrons in its outermost shell, has a propensity to covalently bind with four other neighbours, while a hydrogen atom has a propensity to bind with just one neighbour. By this means, chains of hydrocarbons are built.
Methane, for example, (see diagram above) consists of a single carbon atom, bound to four hydrogen atoms. The four shared electrons from the hydrogen atoms complete the outermost shell around the carbon atom, and each hydrogen atom has its one and only shell completed by virtue of sharing one of the carbon atom's electrons.
If there is a single covalent bond between each pair of carbon atoms, then the hydrocarbon is said to be saturated. In contrast, if there is more than one covalent bond between a pair of carbon atoms, the molecule is said to be unsaturated.
Aromatic compounds possess a higher carbon-to-hydrogen ratio than paraffinic compounds, and because the carbon atom is of greater mass than a hydrogen atom, this entails that aromatic compounds permit a greater mass density. This characteristic was perfect for the turbo engine regulations in the 1980s, and toluene was the most popular aromatic hydrocarbon which combined detonation-resistance and high mass density.
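A quick back-of-the-envelope comparison of toluene (C7H8) with iso-octane (C8H18), using rounded atomic masses (my own illustration), puts numbers on the carbon-to-hydrogen point:

```python
# Sketch: compare the carbon-to-hydrogen ratio of an aromatic (toluene, C7H8)
# with a paraffin (iso-octane, C8H18), using rounded atomic masses.

C_MASS, H_MASS = 12.011, 1.008   # atomic masses in g/mol

def describe(name, n_carbon, n_hydrogen):
    molar_mass = n_carbon * C_MASS + n_hydrogen * H_MASS
    ch_ratio = n_carbon / n_hydrogen
    carbon_fraction = n_carbon * C_MASS / molar_mass
    print(f"{name}: C/H = {ch_ratio:.2f}, molar mass = {molar_mass:.1f} g/mol, "
          f"carbon mass fraction = {carbon_fraction:.2f}")

describe("toluene (C7H8)", 7, 8)        # C/H = 0.88, carbon fraction ~ 0.91
describe("iso-octane (C8H18)", 8, 18)   # C/H = 0.44, carbon fraction ~ 0.84
```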
To put toluene into context, we need to begin with the best-known aromatic hydrocarbon, benzene. This is a hexagonal ring of six carbon atoms, each one of which is bound to a single hydrogen atom. Toluene is a variant of this configuration in which one of those hydrogen atoms is replaced by a methyl group. The latter is one of the primary building blocks of hydrocarbon chemistry, a single carbon atom bound to three hydrogen atoms. The carbon atom in a methyl group naturally binds to another carbon atom, in this case one of the carbon atoms in the hexagonal ring. Hence toluene is also called methyl-benzene.
Closely related to toluene is xylene, another variant of benzene, but one in which two of the hydrogen atoms are replaced by methyl groups. (Hence xylene is also called dimethyl-benzene). If the two methyl groups are bound to adjacent carbon atoms in the ring, the compound is dubbed o-xylene; if the docking sites of the two methyl groups are separated from each other by two steps, then the result is dubbed m-xylene; and if the docking sites are on opposite sides of the ring, the compound is called p-xylene.
Most teams seem to have settled on the use of toluene and xylene. By mid-season 1987, for example, Honda "reached an 84% level of toluene," (Ian Bamsey, McLaren Honda Turbo - A Technical Appraisal, p32).
With respect to the Cosworth turbo used by Benetton in 1987, Pat Symonds recalls that "the problem was the engine had been developed around BP fuel, and we had a Mobil contract. Fuels then weren’t petrol, they were a chemical mix of benzene, toluene and xylene. We kept detonating pistons, and it wasn’t until mid-season that we got it right," (Lunch with Pat Symonds, MotorsportMagazine, September 2012). In fact, Pat attests that the Cosworth fuel was an equal mix of benzene, toluene and xylene, (private communication).
At Ferrari, AGIP later recalled that their toluene and xylene based fuel reached density values of up to 0.85, in some contrast with the paraffinic fuels of the subsequent normally-aspirated era, with density values of 0.71 or 0.73. "Given the ignition delays of heavy products, we had to add more volatile components that would facilitate that ignition," (Luciano Nicastro, Head of R&D at AGIP Petroli, 'Ferrari Formula 1 Annual 1990', Enrico Benzing, p185).
Renault, in contrast, claim to have used mesitylene, as Elf's Jean-Claude Fayard explains:
"We found a new family of hydrocarbons which...contained a strong proportion of mesitylene [trimethyl-benzene] and they had a boiling point of 150C, but with a combustion capability even higher than that of toluene," (Alpine and Renault, Roy Smith, p142).
Mesitylene is a variant of benzene in which three methyl groups are docked at equal intervals around the hexagonal carbon ring, (naturally, mesitylene is also called trimethyl-benzene).
Now, the fact that Paul Rosche grabbed a barrel of aviation fuel used by the Luftwaffe is significant because German WWII aviation fuel differed substantially from that used by the allies. Faced with limited access to crude oil, and a poorly developed refining industry, the Germans developed war-time aviation fuels with a high aromatic content.
Courtesy of the alkylation process, the original version of which was developed by BP in 1936, the allies could synthesise iso-octane from a reaction involving shorter-chain paraffins, such as iso-butane, and olefins such as butene or iso-butene. By definition, iso-octane has an octane rating of 100, defining the standard for detonation-resistance. Using 100-octane fuel synthesised by the alkylation process, the British were able to defeat the Luftwaffe in the 1940 Battle of Britain.
In contrast, German aviation fuel was largely obtained from coal by applying hydrogenation processes. With limited capacity to produce paraffinic components, the initial B-4 grade of aviation fuel used by the Germans had an octane range of only 87-89, a level which itself was only obtained with the addition of the anti-detonation agent, tetra-ethyl lead. A superior C-3 specification of aviation fuel was subsequently produced, with an octane rating of 95-97, but only by substantially increasing the proportion of aromatic hydrocarbons:
"The B-4 grade...contained normally 10 to 15 percent volume aromatics, 45 percent volume naphthenes, and the remainder paraffins...The C-3 grade was a mixture of 10 to 15 percent volume of synthetic isoparaffins (alkylates and isooctanes)...[and] not more than 45 percent volume aromatics," (US Navy, Technical Report No. 145-45. Manufacture of Aviation Gasoline in Germany, Composition and Specifications).
The Germans, however, also included some interesting additives:
"The Bf 109E-8's DB601N engine used the GM-1 nitrous oxide injection system...Injected into the supercharger inlet, the gas provided additional oxygen for combustion at high altitude and acted as an anti-detonant, cooling the air-fuel mixture," ('The Decisive Duel: Spitfire vs 109', David Isby).
"Additional power came from water-methanol and nitrous-oxide injection," ('To Command the Sky: The Battle for Air Superiority over Germany, 1942-44', Stephen L.McFarland and Wesley Phillips, p58).
At which point, one might recall Charlie Whiting's suggestion that the 1983 BMW fuel brew "had far more exciting things in it" than toluene. This, despite regulations which explicitly stated that fuel should be 97% hydrocarbons, and should not contain "alcohols, nitrocompounds or other power boosting additives." Still, there's breaking the rules, and then there's getting caught breaking the rules. Perhaps BMW were a little naughty in 1983, before settling down with an 80% toluene brew.
The current turbo regulations, however, require a much lower aromatic content, stipulating the following maxima:
Aromatics: max 40 wt%
Olefins: max 17 wt%
Total di-olefins: max 1.0 wt%
Total styrene and alkyl derivatives: max 1.0 wt%
Which entails, in a curious twist, that the current maximum aromatic content almost matches that of the C-3 aviation fuel developed in war-time Germany...
[BMW motorsport technical director, Paul] Rosche telephoned a contact at chemicals giant BASF and asked if a different fuel formulation might do the trick. After a little research, a fuel mix was unearthed that had been developed for Luftwaffe fighters during World War II, when Germany had been short of lead. Rosche asked for a 200-litre drum of the fuel for testing and, when it arrived, he took it straight to the dyno.
"Suddenly the detonation was gone. We could increase the boost pressure, and the power, without problems. The maximum boost pressure we saw on the dyno was 5.6 bar absolute, at which the engine was developing more than 1400 horsepower. It was maybe 1420 or 1450 horsepower, we really don't know because we couldn't measure it — our dyno only went up to 1400." ('Generating the Power', MotorsportMagazine, January 2001, p37).
An aromatic hydrocarbon called toluene is commonly held to have been the magic compound in this fuel brew, but erstwhile Brabham chief mechanic Charlie Whiting goes further:
"There were some interesting ingredients in it, and toluene has been mentioned. But it would have had far more exciting things in it, I think, than toluene. I suspect – well, I know – that it was something the BMW engineers had dug out of the cupboard from the Second World War. Almost literally rocket fuel," ('Poacher Turned Gamekeeper', MotorsportMagazine, December 2013, p74).
Before we delve into the chemistry of fuels, let's establish some context here. The current F1 turbo engine regulations require detonation-resistant fuels with a high calorific value per unit mass. Detonation resistance enables one to increase the compression ratio, and thereby increase the work done on each piston-stroke, while the limits on total fuel mass and fuel mass-flow rate require fuel with a high energy content per unit mass.
In contrast, in the 1980s the regulations required detonation-resistant fuels with a high calorific value per unit volume. From 1984, the amount of fuel permitted was limited, but the limitation was defined in terms of fuel volume rather than mass, hence fuel with a high mass-density became advantageous. By this time, the teams had already followed BMW's lead and settled upon fuels with a high proportion of aromatic hydrocarbons.
To understand the significance of this, we need to start with the fact that there are four types of hydrocarbon:
(i) Paraffins (sometimes called alkanes)
(ii) Naphthenes (sometimes called cycloalkanes)
(iii) Aromatics (sometimes called arenes)
(iv) Olefins (sometimes called alkenes)
Methane, ethane and propane. Each larger disk represents a carbon atom; each white disk represents a hydrogen atom; and each black disk represents a covalent bond. |
The electrons in an atom are stacked in so-called 'shells', each of which can contain a maximum number of members. The first shell can contain only two electrons, while the second can contain eight. If the outermost electron shell possessed by an atom is incomplete, then the atom will be disposed to interact or bond with other atoms.
A neutral hydrogen atom has one electron, so its one and only shell needs one further electron to complete it. A neutral carbon atom has six electrons, two of which fill the lowermost shell, leaving only four in the next shell. Hence, another four electrons are required to complete the second shell of the carbon atom.
In covalent bonding, an electron from one atom is shared with an adjacent atom, and the adjacent atom reciprocates by sharing one of its electrons. This sharing of electron pairs enables groups of atoms to complete their electron shells, and thereby reside in a more stable configuration. In particular, a carbon atom, lacking four electrons in its outermost shell, has a propensity to covalently bind with four other neighbours, while a hydrogen atom has a propensity to bind with just one neighbour. By this means, chains of hydrocarbons are built.
Methane, for example, (see diagram above) consists of a single carbon atom, bound to four hydrogen atoms. The four shared electrons from the hydrogen atoms complete the outermost shell around the carbon atom, and each hydrogen atom has its one and only shell completed by virtue of sharing one of the carbon atom's electrons.
If there is a single covalent bond between each pair of carbon atoms, then the hydrocarbon is said to be saturated. In contrast, if there are more than one covalent bond between a pair carbon atoms, the molecule is said to be unsaturated.
Aromatic compounds possess a higher carbon-to-hydrogen ratio than paraffinic compounds, and because the carbon atom is of greater mass than a hydrogen atom, this entails that aromatic compounds permit a greater mass density. This characteristic was perfect for the turbo engine regulations in the 1980s, and toluene was the most popular aromatic hydrocarbon which combined detonation-resistance and high mass density.
To put toluene into context, we need to begin with the best-known aromatic hydrocarbon, benzene. This is a hexagonal ring of six carbon atoms, each one of which is bound to a single hydrogen atom. Toluene is a variant of this configuration in which one of those hydrogen atoms is replaced by a methyl group. The latter is one of the primary building blocks of hydrocarbon chemistry, a single carbon atom bound to three hydrogen atoms. The carbon atom in a methyl group naturally binds to another carbon atom, in this case one of the carbon atoms in the hexagonal ring. Hence toluene is also called methyl-benzene.
Closely related to toluene is xylene, another variant of benzene, but one in which two of the hydrogen atoms are replaced by methyl groups. (Hence xylene is also called dimethyl-benzene). If the two methyl groups are bound to adjacent carbon atoms in the ring, the compound is dubbed o-xylene; if the docking sites of the two methyl groups are separated from each other by two steps, then the result is dubbed m-xylene; and if the docking sites are on opposite sides of the ring, the compound is called p-xylene.
Most teams seem to have settled on the use of toluene and xylene. By mid-season 1987, for example, Honda "reached an 84% level of toluene," (Ian Bamsey, McLaren Honda Turbo - A Technical Appraisal, p32).
With respect to the Cosworth turbo used by Benetton in 1987, Pat Symonds recalls that "the problem was the engine had been developed around BP fuel, and we had a Mobil contract. Fuels then weren’t petrol, they were a chemical mix of benzene, toluene and xylene. We kept detonating pistons, and it wasn’t until mid-season that we got it right," (Lunch with Pat Symonds, Motorsport Magazine, September 2012). In fact, Pat attests that the Cosworth fuel was an equal mix of benzene, toluene and xylene, (private communication).
At Ferrari, AGIP later recalled that their toluene and xylene based fuel reached density values of up to 0.85, in some contrast with the paraffinic fuels of the subsequent normally-aspirated era, with density values of 0.71 or 0.73. "Given the ignition delays of heavy products, we had to add more volatile components that would facilitate that ignition," (Luciano Nicastro, Head of R&D at AGIP Petroli, 'Ferrari Formula 1 Annual 1990', Enrico Benzing, p185).
Renault, in contrast, claim to have used mesitylene, as Elf's Jean-Claude Fayard explains:
"We found a new family of hydrocarbons which...contained a strong proportion of mesitylene [trimethyl-benzene] and they had a boiling point of 150C, but with a combustion capability even higher than that of toluene," (Alpine and Renault, Roy Smith, p142).
Mesitylene is a variant of benzene in which three methyl groups are docked at equal intervals around the hexagonal carbon ring, (naturally, mesitylene is also called trimethyl-benzene).
Now, the fact that Paul Rosche grabbed a barrel of aviation fuel used by the Luftwaffe is significant because German WWII aviation fuel differed substantially from that used by the allies. Faced with limited access to crude oil, and a poorly developed refining industry, the Germans developed war-time aviation fuels with a high aromatic content.
Courtesy of the alkylation process, the original version of which was developed by BP in 1936, the allies could synthesise iso-octane from a reaction involving shorter-chain paraffins, such as iso-butane, and olefins such as butene or iso-butene. By definition, iso-octane has an octane rating of 100, defining the standard for detonation-resistance. Using 100-octane fuel synthesised by the alkylation process, the British were able to defeat the Luftwaffe in the 1940 Battle of Britain.
In contrast, German aviation fuel was largely obtained from coal by applying hydrogenation processes. With limited capacity to produce paraffinic components, the initial B-4 grade of aviation fuel used by the Germans had an octane range of only 87-89, a level which itself was only obtained with the addition of the anti-detonation agent, tetra-ethyl lead. A superior C-3 specification of aviation fuel was subsequently produced, with an octane rating of 95-97, but only by substantially increasing the proportion of aromatic hydrocarbons:
"The B-4 grade...contained normally 10 to 15 percent volume aromatics, 45 percent volume naphthenes, and the remainder paraffins...The C-3 grade was a mixture of 10 to 15 percent volume of synthetic isoparaffins (alkylates and isooctanes)...[and] not more than 45 percent volume aromatics," (US Navy, Technical Report No. 145-45. Manufacture of Aviation Gasoline in Germany, Composition and Specifications).
The Germans, however, also included some interesting additives:
"The Bf 109E-8's DB601N engine used the GM-1 nitrous oxide injection system...Injected into the supercharger inlet, the gas provided additional oxygen for combustion at high altitude and acted as an anti-detonant, cooling the air-fuel mixture," ('The Decisive Duel: Spitfire vs 109', David Isby).
"Additional power came from water-methanol and nitrous-oxide injection," ('To Command the Sky: The Battle for Air Superiority over Germany, 1942-44', Stephen L.McFarland and Wesley Phillips, p58).
At which point, one might recall Charlie Whiting's suggestion that the 1983 BMW fuel brew "had far more exciting things in it" than toluene. This, despite regulations which explicitly stated that fuel should be 97% hydrocarbons, and should not contain "alcohols, nitrocompounds or other power boosting additives." Still, there's breaking the rules, and then there's getting caught breaking the rules. Perhaps BMW were a little naughty in 1983, before settling down with an 80% toluene brew.
The current turbo regulations, however, require a much lower aromatic content, stipulating the following maxima:
Aromatics: 40 wt%
Olefins: 17 wt%
Total di-olefins: 1.0 wt%
Total styrene and alkyl derivatives: 1.0 wt%
Which entails, in a curious twist, that the current maximum aromatic content almost matches that of the C-3 aviation fuel developed in war-time Germany...
Tuesday, August 04, 2015
Ferrari, cheating, and pop-off valves
The September 2015 issue of Motorsport Magazine contains an interesting interview with erstwhile McLaren and Ferrari engineer, Gordon Kimball. Together with some revealing anecdotes about Senna and Berger, Kimball also concedes the following:
"In 1988 I was engineering Gerhard Berger in the F187/88C. That was the year McLaren dominated with Honda and Bernie did all he could to help us. It was the era of turbos and pop-off valves and we had a low-pressure passage that went past the pop-off valve and would pull it open, so we could run more boost. We kept pushing that further and further, waiting to get caught, but we never were. I guess Bernie wanted somebody to try to beat McLaren, so he helped us."
Now, the first point to make here is that it is actually fairly well-known that engine manufacturers were flouting the pop-off valve regulations in the late 1980s. The pop-off valve was first introduced in 1987, when it was intended to restrict turbo boost pressure to 4.0 bar. The valve was supplied by the governing body, FISA, and attached to the plenum chamber, upstream of the inlet runners to each cylinder. A new design pop-off valve was then introduced for 1988, which was intended to restrict boost pressure to 2.5 bar.
Ian Bamsey noted the following in his monumental 1988 work, The 1000bhp Grand Prix cars, "In 1987 some engines were coaxed to run at more than 4.0 bar. With a carefully located single pop off valve merely an irritating leak in a heavily boosted system as much as 4.4 bar could be felt in the manifold. The key was in the location of the valve. It was possible to position it over a venturi in the charge plumbing system. Air gained speed through the venturi losing pressure. Either side of the venturi the flow was correct and the pressure was higher," (p29).
In fact, there appears to have been at least two distinct methods of flouting the 4.0 bar limit. If one attached the pop-off valve over a venturi, then one could keep the valve closed (contra Kimball's explanation) even if the effective boost pressure was greater than 4.0 bar. A second method simply involved inducting compressed air into the plenum chamber at a greater mass-flow rate than the open pop-off valve could vent it:
"Turbo boost was theoretically restricted to four bar via popoff valves, but there was a way around this on self-contained V6s like the Honda. They required just one pop-off valve (as opposed to those like the Porsche and Ford which effectively ran as two separate three-cylinder units and so needed two pop-off valves) by overboosting, forcing the pop-off to open and then controlling it against boost. It meant 900bhp in races, 1050bhp in qualifying," (Mark Hughes, Motorsport Magazine, January 2007, page 92).
Indeed, the general suggestion at the time is that it was Honda, rather than Ferrari, which first identified these loophole(s). Bamsey makes this point in his superb 1990 work, McLaren Honda Turbo - A Technical Appraisal: "By mid-season [1987]...Ferrari is believed to have achieved levels of 4.1/4.2 bar through careful location of the pop off valve, a technique Honda is alleged to have pioneered," (p92).
The next question, however, concerns what happened in 1988, when the more stringent 2.5 bar limit was imposed, and a new design of pop-off valve was supplied to the teams. This valve (perhaps by deliberate design) was somewhat tardy in closing once it had been opened:
"The new pop off opened in a different manner and once opened pressure tumbled to 2.0 bar and still the valve didn't close properly...on overrun the effect of a shut throttle and a still spinning compressor (the turbine not instantly stopping, of course) could cause pressure in the plenum to overshoot 2.5 bar. In blowing the pop off open, that adversely affected the next acceleration...The answer to the problem was in the form of the so called XE2 [specification engine]...run by all four Honda cars in the San Marino Grand Prix.
"The XE2 changed the throttle position, removing the separate butterfly for each inlet tract and instead putting a butterfly in each bank's charge plumbing just ahead of the plenum inlet and thus ahead of the pop off," (ibid 1990, p91-92).
No questions of dubious legality there. However, Bamsey also explains that an XE3 version of the engine was developed by Honda, purportedly for exclusive use in the high-altitude conditions of Mexico City: "The Mexican air is thin - the pressure is around a quarter bar - so the turbine has to work harder. Back pressure [in the exhaust manifold] becomes a potential problem, affecting volumetric efficiency and hence torque. Power is a function of torque and engine speed: Honda sought higher revs to compensate. Thus the XE3 employed an 82mm bore size [compared to 79mm on the XE2] and it was apparently tuned for a higher peak power speed. It was a complete success and on occasion was tried for qualifying elsewhere thereafter (in particular, at Monza)," (ibid. 1990, p92).
What's interesting here is that the XE3 seems to have caused some scrutineering difficulties at Mexico. Road and Track magazine reported that there was "a claim that Honda had built vortex generators into its system - which would allow it to use more than 2.5 bar - and FISA scrutineers spent an unusual amount of time examining the McLarens in Mexico," (Road and Track, volume 40, p85).
Generating a vortex would offer an alternative means of keeping the pop-off valve closed. Even with a constant diameter pipe, the pressure could be lowered by transforming some of the pressure energy into the rotational energy of a vortex. One would presumably need an expanding section downstream to burst the vortex in a controlled manner, but it does offer a method of reducing the pressure without using a venturi. It's intriguing to read that an engine ostensibly developed for high-altitude conditions was used in qualifying for the rest of the season...
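To get a feel for the magnitudes involved, here is a small illustrative calculation (all the numbers are invented, and incompressible Bernoulli is only a rough approximation for warm, compressed charge air): it estimates the static pressure seen by a valve sitting over a venturi throat, and the pressure deficit on the axis of an idealised Rankine vortex, relative to an effective manifold pressure of 4.4 bar.

```python
# Illustrative estimate of how local static pressure can be held below the
# effective boost pressure, either with a venturi or with a vortex. All
# numbers are invented, and incompressible Bernoulli is only a rough
# approximation for warm, compressed charge air.

rho = 4.5           # charge-air density, kg/m^3 (order of magnitude at ~4 bar)
p_manifold = 4.4e5  # effective manifold pressure, Pa (the '4.4 bar' scenario)

# Venturi: by continuity, the throat velocity scales with the area ratio.
v_duct = 60.0                   # assumed bulk velocity in the charge plumbing, m/s
area_ratio = 2.5                # duct area divided by throat area
v_throat = v_duct * area_ratio  # velocity at the throat, m/s

p_throat = p_manifold - 0.5 * rho * (v_throat**2 - v_duct**2)
print(f"static pressure at venturi throat:  {p_throat / 1e5:.2f} bar")

# Rankine vortex: the static pressure deficit between the far field and the
# vortex axis is approximately rho * v_max^2, where v_max is the swirl
# velocity at the edge of the viscous core.
v_max = 100.0
p_axis = p_manifold - rho * v_max**2
print(f"static pressure on the vortex axis: {p_axis / 1e5:.2f} bar")
```

On those made-up numbers, a valve sitting over the venturi throat, or over a vortex core, would see a shade under 4.0 bar even though the manifold is effectively at 4.4 bar.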
So perhaps it would be wrong to cast Ferrari here in their stereotypical role as regulatory bandits. Although Kimball does also suggest that their fuel-tanks carried somewhat more than the mandatory 150 litres of fuel when they won the Italian Grand Prix that year!
"In 1988 I was engineering Gerhard Berger in the F187/88C. That was the year McLaren dominated with Honda and Bernie did all he could to help us. It was the era of turbos and pop-off valves and we had a low-pressure passage that went past the pop-off valve and would pull it open, so we could run more boost. We kept pushing that further and further, waiting to get caught, but we never were. I guess Bernie wanted somebody to try to beat McLaren, so he helped us."
FISA Pop-off valve (drawing by Bent Sorenson, reproduced from 'The Anatomy and Development of the Formula One Racing Car from 1975', Sal Incandela, p200)
Tuesday, May 05, 2015
A history of porpoising
Skirted ground-effect Formula 1 cars of the late 1970s and early 1980s were occasionally afflicted by a type of instability referred to as 'porpoising'. Many cars suffered, but the phenomenon is nicely described by Peter Wright in relation to the development of the Lotus T80:
"The car was so sensitive that, above a certain critical speed, it became aerodynamically unstable in pitch. One test day at Silverstone, Mario Andretti coined the term 'porpoising' to describe the phenomenon when he observed daylight under the front wheels while at speed on the straight.
"Since 1977 I had been working with David Williams, Head of the Flight Instrumentation Department at the Cranfield College of Aeronautics. He had designed and built a digital data system for use on the T78 when it had become apparent that it would be absolutely essential to gather data from the chassis in order to progress with the development of ground effect. When the T80 porpoising started, I discussed the phenomenon with him, and he offered to model it and validate the results with the data we had. He established that it was an aero-elasticity problem, akin to flutter in an aircraft wing. The changing aerodynamic loads, as the car bounced and pitched, excited the pitch and heave modes of the sprung mass on its springs and tires." (Formula 1 Technology, p36 and p308.)
However, pace Wright, the same phenomenon had already been identified and named at least as early as the 1940s, albeit in the field of seaplane hydrodynamics; specifically, during the take-off and landing of such craft. A Wartime Report issued by NACA in June 1943 begins:
"Porpoising is a self-sustaining oscillatory motion in the vertical longitudinal plane...Observations of porpoising show that there are two principal oscillatory motions (1) a vertical oscillation of the center of gravity and (2) an angular oscillation about the center of gravity. These two motions are seen to have the same period but to differ in phase." (Some systematic model experiments on the porpoising characteristics of flying-boat hulls, Kenneth S.M. Davidson and F.W.S. Locke Jr, p7).
The British were also heavily involved in the early study of porpoising, an Aeronautical Research Council report in 1954 defining the phenomenon as follows:
"Porpoising, basically, consists of a combination of oscillations in pitch and heave. It includes both stable and unstable oscillations, a stable oscillation being one which damps out. (A review of porpoising instability of seaplanes, A.G.Smith and H.G.White, p5).
All of which is an important reminder that ground-effect was of crucial importance to hydroplanes long before Formula 1 happened upon the phenomenon.
Saturday, April 04, 2015
Optimal control theory and Ferrari's turbo-electric hybrid
The Department of Engineering Science at the University of Oxford published an interesting paper in 2014 which appears to shed some light on the deployment of energy-recovery systems in contemporary Formula One.
Entitled Optimal control of Formula One car energy recovery systems, (a free version can be downloaded here), the paper considers the most efficient use of the kinetic motor-generator unit (ERS-K), and the thermal motor-generator unit (ERS-H), to minimise lap-time, given the various regulatory constraints. (Recall that the primary constraints are: 100kg fuel capacity, 100kg/hr maximum fuel flow, 4MJ Energy Store capacity, 2MJ per lap maximum energy flow from ERS-K to the Energy Store, and 4MJ per lap maximum energy flow from the Energy Store to the ERS-K). The paper outlines a mathematical approach to this Optimal Control problem, and concludes with results obtained for the Barcelona track.
In the course of the paper, a number of specific figures are quoted for engine power. For example, the power of the internal combustion (IC) engine under the maximum fuel-flow rate, with the turbo wastegate closed, is quoted as 440kW (590bhp); it is claimed that by having the turbo wastegate open, the power of the IC engine can be boosted by 20kW (~27bhp), but in the process the ERS-H has to use 60kW of power from the Energy Store to power the compressor; and with the wastegate closed, the 20kW reduction in IC power is compensated by the 40kW generated by the ERS-H. (Opening the wastegate boosts IC power because the back-pressure in the exhaust system is reduced).
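A crude way to see why the closed-wastegate mode wins under racing conditions is simply to tally the propulsive power and the net drain on the Energy Store in each configuration, using the figures quoted in the paper. The sketch below does exactly that arithmetic; the 120kW MGU-K deployment figure is the regulatory maximum rather than a number taken from the paper, and losses in the electrical path are ignored:

```python
# Quick energy accounting for the two wastegate modes, using the power figures
# quoted in the Oxford paper. The 120 kW MGU-K deployment figure is the
# regulatory maximum, not a number from the paper, and electrical losses are
# ignored.

MGU_K_DEPLOY = 120.0   # kW, regulatory maximum deployment to the crankshaft

def mode(name, ic_power, mguh_power):
    """mguh_power > 0 means the MGU-H is generating; < 0 means it is drawing
    energy (from the store) to drive the compressor."""
    propulsive = ic_power + MGU_K_DEPLOY
    store_drain = MGU_K_DEPLOY - mguh_power   # net rate at which the store empties
    print(f"{name:18s} propulsive {propulsive:5.0f} kW, "
          f"store drained at {store_drain:5.0f} kW")

mode("wastegate closed", ic_power=440.0, mguh_power=+40.0)  # MGU-H recovers 40 kW
mode("wastegate open",   ic_power=460.0, mguh_power=-60.0)  # MGU-H draws 60 kW
```

On those numbers, opening the wastegate buys about 20kW of extra propulsive power, but drains the Energy Store roughly 100kW faster, which is why it only makes sense when the store can be emptied over a single qualifying lap.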
Running with the wastegate closed is therefore considered to be the most efficient solution for racing conditions. However, the paper also considers qualifying conditions, where the Energy Store can be depleted over the course of a lap without any detrimental consequences:
"In its qualifying configuration the engine is run with the waste gate open for sustained periods of time when maximum engine power is needed. During these periods of time the energy store will be supplying both the MGU-K and the MGU-H, with the latter used to drive the engine boost compressor...In contrast to the racing lap, the waste gate is typically open when the engine is being fully fuelled. On the entry to turns 1, 4, 7 and 10 the waste gate is being closed a little before simultaneously cutting the fuel and the MGU-K."
Professor of Control Engineering David Limebeer delivered a presentation of the work at a Matlab conference the same year (video here). Another version of the work, Faster, Higher and Greener, featuring Spa rather than Barcelona, was published in the April 2015 edition of the IEEE Control Systems Magazine. In his Matlab presentation, Professor Limebeer also credits Peter Fussey, Mehdi Masouleh, Matteo Massaro, Giacomo Perantoni, Mark Pullin, and Ingrid Salisbury.
After reading their work, I e-mailed Professor Limebeer, and asked if he'd considered collaborating with a Formula One team. I received a slightly odd response. After a further internet search, I found out why. In the November 2014 issue of Vehicle Electronics, David reports "We have done this work with one of the Formula One teams, but we can’t tell you which one."
Which is totally understandable. University departments have to protect the confidentiality of their work with Formula One teams. Unfortunately, however, the University of Oxford, Department of Engineering Science Newsletter 2013-2014, proudly reveals:
The Ferrari F1 Connection.
Mr Stefano Domenicali, Scuderia Ferrari Team Principal, visited the Department in May 2013 to deliver the annual Maurice Lubbock Memorial Lecture. During this lecture he announced the evolving research partnership between the University and Ferrari.
DPhil Engineering Science students Chris Lim, Giacomo Perantoni and Ingrid Salisbury are working with Ferrari on novel ways to improve Formula One performance. Chris Lim said: “I’m very excited that I’ll be the first student working with Ferrari in the Department’s Southwell Laboratory, under the supervision of Professor Peter Ireland, the Department’s Professor of Turbomachinery. It’s a privilege to work with a prestigious manufacturer such as Ferrari in an industry like Formula One where the application of thermo-fluids has such a large impact”.
Pictured from left to right are: Chris Lim (postgraduate), Ingrid Salisbury (postgraduate), Mr Stefano Domenicali, Giacomo Perantoni (postgraduate) and Professor David Limebeer (supervisor to Giacomo Perantoni and Ingrid Salisbury).
In light of this, then, the figures quoted in these papers can be interpreted as pertaining to Ferrari's turbo-electric hybrid. The first paper was submitted for publication in late 2013, and the assumptions used there are the same as those used in the 2015 paper, so it appears that Ferrari development data from no later than 2013 was used throughout.
Monday, March 09, 2015
Adrian Newey and the bar-headed goose
The April edition of Motorsport Magazine contains a fabulous F1 season preview from Mark Hughes, which includes the news that Adrian Newey has recently been taking a break in the Himalayas.
Now, whilst it's likely that the principal purpose of this expedition was to enlighten the Dalai Lama on the importance of using large-eddy simulation to understand the interaction of brake-duct winglets with the spat vortex, it's also possible that Adrian was drawn by the legendary bi-annual migration of the bar-headed goose.
These birds are amongst the highest-flying in the world, and travel across the Himalayas in a single day. William Bryant Logan claims in Air: Restless Shaper of the World (2012), that "the bar-headed goose has been recorded at altitudes of over thirty-three thousand feet. This is the altitude where your pilot remarks that the outside temperature is 40 degrees below zero, where the great fast-flowing rivers of the jet streams set weather systems spinning. The air here contains only one-fifth of the oxygen near sea-level, where the goose winters in lowland India wetlands and marshes. Yet in the space of a few hours the bird can fly from the wetlands to the top of the high peaks and then out onto the world's largest high plateau. There are lower passes through the mountains, but the goose does not take them. It may even preferentially go higher."
However, it seems that some of the claims made for the bar-headed goose lack empirical support. Research led by Bangor University tracked the bar-headed geese with GPS as they migrated over the Himalayas, and reached the following conclusion in 2011:
"Data reveal that they do not normally fly higher than 6,300 m
elevation, flying through the Himalayan passes rather than over the
peaks of the mountains...It has also been long believed that bar-headed geese use jet stream
tail winds to facilitate their flight across the Himalaya.
Surprisingly, latest research has shown that despite the prevalence of
predictable tail winds that blow up the Himalayas (in the same
direction of travel as the geese), bar-headed geese spurn the winds,
waiting for them to die down overnight, when they then undertake the
greatest rates of climbing flight ever recorded for a bird, and sustain
these climbs rates for hours on end."
A more recent iteration of the research, The roller-coaster flight strategy of bar-headed geese conserves energy during Himalayan migration, (Science, 2015), suggests that the geese "opt repeatedly to shed hard-won altitude only subsequently to regain height later in the same flight. An example of this tactic can be seen in a 15.2-hour section of a 17-hour flight in which, after an initial climb to 3200 m, the goose followed an undulating profile involving a total ascent of 6340 m with a total descent of 4950 m for a net altitude gain of only 1390 m. Revealingly, calculations show that steadily ascending in a straight line would have increased the journey cost by around 8%. As even horizontal flapping flight is relatively expensive, the increase in energy consumption due to occasional climbs is not as important as the effect of reducing the general costs of flying by seeking higher-density air at lower altitudes.
"When traversing mountainous areas, a terrain tracking strategy or flying in the cool of the night can reduce the cost of flight in bar-headed geese through exposure to higher air density. Ground-hugging flight may also confer additional advantages including maximizing the potential of any available updrafts of air, reduced exposure to crosswinds and headwinds, greater safety through improved ground visibility, and increased landing opportunities. The atmospheric challenges encountered at the very highest altitudes, coupled with the need for near-maximal physical performance in such conditions, likely explains why bar-headed geese rarely fly close to their altitude ceiling, typically remaining below 6000 m."
Tuesday, March 03, 2015
Driver core-skin temperature gradients and blackouts
Whilst it is highly beneficial to reduce the surface-to-bulk temperature gradient of a racing-tyre, the same cannot be said for the cognitive organisms controlling the slip-angles and slip-ratios of those tyres.
A 2014 paper in the Journal of Thermal Biology, Physiological strain of stock car drivers during competitive racing, revealed that not only does the core body temperature increase during a motor-race, (if we do indeed count a stock-car race as such), but the skin temperature can also rise to such a degree that the core-to-skin temperature delta decreases from ~2 degrees to ~1.3 degrees.
The authors suggest that a reduced core-to-skin temperature gradient increases the cardiovascular stress "by reducing central blood volume." Citing a 1972 study of military pilots, they also suggest that when such conditions are combined with G-forces, the grayout (sic) threshold is reduced.
Intriguingly, in the wake of Fernando Alonso's alien abduction incident at Barcelona last week, they also assert that "A consequence of this combination may possibly result in a lower blackout tolerance."
Monday, March 02, 2015
McLaren front-wing vortices, circa 2003
Academic dissertations conducted in association with Formula 1 teams tend to be subject to multi-year embargoes. Hence, Jonathan Pegrum's 2006 work, Experimental Study of the Vortex System Generated by a Formula 1 Front Wing, is somewhat outdated, but might still be of some interest to budding aerodynamicists.
Pegrum, currently an Aerodynamics Team Leader at McLaren, concentrated his study on a front-wing configuration not dissimilar to that on an MP4-18/19 (2003-2004).
A constellation of four co-rotating vortices was created: (i) a main bottom edge vortex, generated by the pressure difference across the endplate due to the low pressure under the wing; (ii) a top edge vortex, generated by the pressure difference across the endplate due to the high pressure above the wing; (iii) a canard vortex, a leading edge vortex generated by the semi-delta wing ('canard') attached to the outer surface of the endplate; and (iv) a footplate vortex, generated by the pressure-difference across the footplate operating in ground-effect.
Pegrum shows (in the absence of a wheel, below) that the strongest vortices are the bottom-edge and top-edge vortices, but all four mutually interact in the manner of unequal, co-rotating vortices, undergoing the early stages of a merger.
Now, whilst co-rotating vortices have a tendency to merge, counter-rotating vortices have a tendency to repel. Pegrum highlights the 1971 work of Harvey and Perry, Flowfield Produced by Trailing Vortices in the Vicinity of the Ground, which demonstrated that when a vortex spinning around an axis in the direction of the freestream passes close to a solid surface, it tends to pull a counter-rotating vortex off the boundary layer of the solid surface, (as illustrated below by Puel and de Saint Victor, Interaction of Wake Vortices with the Ground, 2000).
The interaction between these counter-rotating vortices is such that the primary vortex is repelled away from the solid surface. This phenomenon, of course, is still very much of interest when it comes to the Y250 vortex and its cousins.
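As a minimal illustration of that repulsion mechanism, here is a two-dimensional point-vortex sketch (with invented positions and circulations, not Pegrum's or Harvey and Perry's data): the ground plane is represented by image vortices, and a weaker counter-rotating 'secondary' vortex, of the kind pulled off the boundary layer, induces an upward velocity on the primary vortex:

```python
import math

# Two-dimensional point-vortex sketch of the 'vortex rebound' mechanism.
# Positions and circulations are invented; the ground plane at y = 0 is
# represented by image vortices of opposite sign.

def induced_velocity(z, vortices):
    """Velocity (u, v) induced at complex position z = x + iy by a list of
    (position, circulation) point vortices, skipping any vortex located at z."""
    w = 0.0 + 0.0j
    for z0, gamma in vortices:
        if z0 == z:
            continue
        w += -1j * gamma / (2.0 * math.pi * (z - z0))   # complex velocity u - iv
    return w.real, -w.imag

GAMMA = -1.0              # primary (clockwise) circulation, arbitrary units
z_primary = 0.0 + 1.0j    # primary vortex, one unit above the ground

# Case (a): primary vortex and its image alone. The image induces a purely
# lateral drift, parallel to the ground.
case_a = [(z_primary, GAMMA), (z_primary.conjugate(), -GAMMA)]
print("image only:     u, v =", induced_velocity(z_primary, case_a))

# Case (b): add a weaker counter-rotating secondary vortex, of the kind pulled
# off the boundary layer on the drift side of the primary, plus its own image.
z_secondary = -0.4 + 0.3j
case_b = case_a + [(z_secondary, -0.4 * GAMMA),
                   (z_secondary.conjugate(), 0.4 * GAMMA)]
print("with secondary: u, v =", induced_velocity(z_primary, case_b))
```

With only its image present, the primary vortex simply drifts parallel to the ground; once the counter-rotating secondary vortex appears beside it, the induced velocity acquires an upward component, which is the repulsion away from the surface described above.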
Thursday, February 19, 2015
Proof that Formula 1 was better in the past
If you're a long-time Formula 1 fan, then the chances are that you believe the sport was better in the past. However, the chances are that you will have also read arguments from younger journalists and fans, to the effect that Formula 1 in the modern era is better than it was in the past.
Fortunately, there is an objective means to resolve this dispute: churn.
In sport, churn provides a straightforward measure of the uncertainty of outcome. Churn is simply the average difference between the relative rankings of the competitors at two different measurement points. One can measure the churn at an individual race by comparing finishing positions to grid positions; one can measure the churn from one race to another within a season by comparing the finishing positions in each race; and one can measure the inter-seasonal churn by comparing the championship positions from one year to another.
The latter measure provides an objective means of tracking the level of seasonal uncertainty in Formula 1, and F1 Data Junkie Tony Hirst has recently compiled precisely these statistics, for both the drivers' championship and the constructors' championship, (see figures below). In each case, Hirst compiled the churn and the 'adjusted churn'. The latter is the better measure because it normalises the statistics using the maximum possible value of the churn in each year. The maximum can change as the number of competitors changes.
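Hirst's own code is not reproduced here, but churn as described above is easy to compute. The sketch below takes churn to be the mean absolute change in ranking between two classifications, and normalises the adjusted version by the churn of a complete reversal of the order, which is the worst case for this measure; treat it as an illustration of the idea rather than Hirst's definition verbatim:

```python
# Churn as described above: the average difference between competitors'
# rankings at two measurement points. 'Adjusted churn' normalises by the
# maximum possible value, here taken to be the churn of a complete reversal.

def churn(ranks_before, ranks_after):
    """Mean absolute change in ranking between two classifications.
    Both arguments map competitor -> integer rank (1 = first)."""
    common = ranks_before.keys() & ranks_after.keys()
    return sum(abs(ranks_before[c] - ranks_after[c]) for c in common) / len(common)

def adjusted_churn(ranks_before, ranks_after):
    """Churn normalised by the worst case: a complete reversal of the order."""
    n = len(ranks_before.keys() & ranks_after.keys())
    reversal = sum(abs((i + 1) - (n - i)) for i in range(n)) / n
    return churn(ranks_before, ranks_after) / reversal

# Toy example: championship positions in two consecutive seasons.
year_1 = {"A": 1, "B": 2, "C": 3, "D": 4}
year_2 = {"A": 3, "B": 1, "C": 2, "D": 4}
print(churn(year_1, year_2), adjusted_churn(year_1, year_2))
```

On the toy example the churn is 1.0 position per competitor, and the adjusted churn is 0.5 of the theoretical maximum.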
The results for the drivers' championship indicate that churn peaked in 1980. Given that the interest of many, if not most, spectators is dominated by the outcome of the drivers' championship, this suggests that Formula 1 peaked circa 1980.
The results for the constructors' championship are slightly different, suggesting that uncertainty peaked in the late 1960s, (although the best-fit line peaks in the mid-1970s).
One could, of course, make the alternative proposal that the churn within individual races is more important to spectators' interest, but at the very least we now have an objective statistical measure which provides good reason for believing that Formula 1 was better in the 1970s and early 1980s.
Monday, February 16, 2015
Lovelock and emergentism
In James Lovelock's 2006 work, The Revenge of Gaia, he concludes the chapter entitled What is Gaia? with a description of the regulator in James Watt's steam engine, and the following argument:
"Simple working regulators, the physiological systems in our bodies that regulate our temperature, blood pressure and chemical composition...are all outside the sharply-defined boundary of Cartesian cause-and-effect thinking. Whenever an engineer like Watt 'closes the loop' linking the parts of his regulator and sets the engine running, there is no linear way to explain its working. The logic becomes circular; more importantly, the whole thing has become more than the sum of its parts. From the collection of elements now in operation, a new property, self-regulation, emerges - a property shared by all living things, mechanisms like thermostats, automatic pilots, and the Earth itself.
"The philosopher Mary Midgley in her pellucid writing reminds us that the twentieth century was the time when Cartesian science triumphed...Life, the universe, consciousness, and even simpler things like riding a bicycle, are inexplicable in words. We are only just beginning to tackle these emergent phenomena, and in Gaia they are as difficult as the near magic of the quantum physics of entanglement."
Now Lovelock is an elegant and fascinating author, but here his thought is lazy, sloganistic and poorly-informed. There are multiple confusions here, and such confusions are endemic amongst a number of writers and journalists who take an interest in science, so let's try and clear them up.
Firstly, we encounter the slogan that a system can be 'more than the sum of its parts'. Unfortunately, the authors who make this statement never seem to conjoin the assertion with a definition of what they mean by the phrase 'sum of its parts'. Most scientists would say that the sum of the parts of a system comprises the parts of the system, their properties, and all the relationships and interactions between the parts. If you think that there is more to a whole system than its parts, their properties and the relationships between the parts, then that amounts to a modern form of vitalism and/or dualism, the notion that living things and/or conscious things depend upon non-physical elements. Calling it 'emergentism' is simply a way of trying to dress up a disreputable idea in different language, rather in the manner that creationism was re-marketed as 'intelligent design'.
Assertions that a system can be more than the sum of its parts are frequently combined with attacks on so-called 'reductionistic' science. Anti-reductionistic authors can often be found pointing out that whole systems possess properties which are not possessed by any of the parts of which that system is composed. However, if such authors think this is somehow anti-reductionistic, then they have profoundly mis-understood what reductionistic science does. Scientists understand that whole systems possess properties which are not possessed by any of the parts; that's precisely because the parts engage in various relationships and interactions. A primary objective of reductionistic science is to try and understand the properties of a whole system in terms of its parts, and the relationships between the parts: diamond and graphite, for example, are both composed of the same parts, (carbon atoms), but what gives diamond and graphite their different properties are the different arrangements of the carbon atoms. Explaining the different properties of diamond and graphite in terms of the different relationships between the parts of which they are composed is a triumph of so-called 'reductionistic' science.
The next confusion we find in Lovelock's argument is the notion that twentieth-century science was somehow linear, or Cartesian, and non-linear systems with feedback somehow lie outside the domain of this world-view. Given the huge body of twentieth-century science devoted to non-linear systems, this will come as something of a surprise to many scientists. For example, in General Relativity, (that exemplar of twentieth-century science), the field equations are non-linear. Lovelock might even have heard the phrase 'matter tells space how to curve, and space tells matter how to move'; a feedback cycle, in other words! Yet General Relativity is also a prime exemplar of determinism: the state of the universe at one moment in time uniquely determines its state at all other moments in time. There is clearly no reason to accept the implication that cause-and-effect must be confined to linear chains; non-linear systems with feedback are causal systems just as much as linear systems.
It's amusing to note that Lovelock concludes his attack on so-called 'Cartesian' science with an allusion to quantum entanglement. Clearly, quantum entanglement is a product of quantum physics, that other exemplar of twentieth-century science. So, in one and the same breath, twentieth-century science is accused of being incapable of dealing with emergent phenomena, yet also somehow yields the primary example of them.
Authors such as Lovelock, Midgley, and their journalistic brethren, are culpable here of insufficient curiosity and insufficient understanding. The arguments they raise against twentieth-century science merely indicate that they have failed to fully understand twentieth-century science and physics.
"Simple working regulators, the physiological systems in our bodies that regulate our temperature, blood pressure and chemical composition...are all outside the sharply-defined boundary of Cartesian cause-and-effect thinking. Whenever an engineer like Watt 'closes the loop' linking the parts of his regulator and sets the engine running, there is no linear way to explain its working. The logic becomes circular; more importantly, the whole thing has become more than the sum of its parts. From the collection of elements now in operation, a new property, self-regulation, emerges - a property shared by all living things, mechanisms like thermostats, automatic pilots, and the Earth itself.
"The philosopher Mary Midgley in her pellucid writing reminds us that the twentieth century was the time when Cartesian science triumphed...Life, the universe, consciousness, and even simpler things like riding a bicycle, are inexplicable in words. We are only just beginning to tackle these emergent phenomena, and in Gaia they are as difficult as the near magic of the quantum physics of entanglement."
Now Lovelock is an elegant and fascinating author, but here his thought is lazy, sloganistic and poorly-informed. There are multiple confusions here, and such confusions are endemic amongst a number of writers and journalists who take an interest in science, so let's try and clear them up.
Firstly, we encounter the slogan that a system can be 'more than the sum of its parts'. Unfortunately, the authors who make this statement never seem to conjoin the assertion with a definition of what they mean by the phrase 'sum of its parts'. Most scientists would say that the sum of the parts of a system comprises the parts of the system, their properties, and all the relationships and interactions between the parts. If you think that there is more to a whole system than its parts, their properties and the relationships between the parts, then that amounts to a modern form of vitalism and/or dualism, the notion that living things and/or conscious things depend upon non-physical elements. Calling it 'emergentism' is simply a way of trying to dress up a disreputable idea in different language, rather in the manner than creationism was re-marketed as 'intelligent design'.
Assertions that a system can be more than the sum of its parts are frequently combined with attacks on so-called 'reductionistic' science. Anti-reductionistic authors can often be found pointing out that whole systems possess properties which are not possessed by any of the parts of which that system is composed. However, if such authors think this is somehow anti-reductionistic, then they have profoundly mis-understood what reductionistic science does. Scientists understand that whole systems possess properties which are not possessed by any of the parts; that's precisely because the parts engage in various relationships and interactions. A primary objective of reductionistic science is to try and understand the properties of a whole system in terms of its parts, and the relationships between the parts: diamond and graphite, for example, are both composed of the same parts, (carbon atoms), but what gives diamond and graphite their different properties are the different arrangements of the carbon atoms. Explaining the different properties of carbon and diamond in terms of the different relationships between the parts of which they are composed is a triumph of so-called 'reductionistic' science.
The next confusion we find in Lovelock's argument is the notion that twentieth-century science was somehow linear, or Cartesian, and non-linear systems with feedback somehow lie outside the domain of this world-view. Given the huge body of twentieth-century science devoted to non-linear systems, this will come as something of surprise to many scientists. For example, in General Relativity, (that exemplar of twentieth-century science), the field equations are non-linear. Lovelock might even have heard the phrase 'matter tells space how to curve, and space tells matter how to move'; a feedback cycle, in other words! Yet General Relativity is also a prime exemplar of determinism: the state of the universe at one moment in time uniquely determines its state at all other moments in time. There is clearly no reason to accept the implication that cause-and-effect must be confined to linear chains; non-linear systems with feedback are causal systems just as much as linear systems.
It's amusing to note that Lovelock concludes his attack on so-called 'Cartesian' science with an allusion to quantum entanglement. Clearly, quantum entanglement is a product of quantum physics, that other exemplar of twentieth century physics. So, in one and same breath, twentieth century science is accused of being incapable of dealing with emergentism, yet also somehow yields the primary example of emergentism.
Authors such as Lovelock, Midgley, and their journalistic brethren, are culpable here of insufficient curiosity and insufficient understanding. The arguments they raise against twentieth-century science merely indicate that they have failed to fully understand twentieth-century science and physics.