Wednesday, August 31, 2011

Alonso vs Webber and Hamilton at Eau Rouge

It's difficult to find a precedent for Mark Webber's frightening pass on Fernando Alonso last Sunday, but there is an interesting contrast.



On the first lap of the 2007 Belgian Grand Prix, McLaren team-mates Alonso and Hamilton raced wheel-to-wheel down to Eau Rouge, with Hamilton on the inside for the left-hand entry.

On that occasion, however, Fernando was able to take more speed into the corner, and claim the position into the right-handed uphill element. Here's Lewis's account of it at the time:

"At Eau Rouge it was just common sense to ease off a fraction. Fernando had the momentum and was going quicker into it. It would have been stupid of me to keep it flat, but I was tempted. That worked in a Formula 3 car in the wet, but I'm not sure it would in a Formula 1 car..."

The two situations are not entirely comparable, because Webber was able to use the slipstream on Sunday and gain extra momentum over Alonso. Nevertheless, the fact that Hamilton failed to make the move stick from the inside against the same adversary provides a vivid demonstration of just how much commitment Webber needed.

Monday, August 22, 2011

Wittgenstein's aircraft engine

Ludwig Wittgenstein's Tractatus Logico-Philosophicus (1921) consists of numbered paragraphs, the first of which reads, 'The world is everything that is the case', and the last of which states, 'Whereof one cannot speak, thereof one must be silent.'

As Anthony Quinton explained in discussion with Bryan Magee, Wittgenstein "detested...the idea of philosophy as a trade, a 9-to-5 occupation, which you do with a part of yourself, and then go off and lead the rest of your life in a detached and unrelated way. He was a man of the utmost moral intensity. He took himself and his work with very great seriousness. When his work wasn't going well he got into a desperate and agonized condition. The result of this displays itself in his manner of writing. You feel that his whole idea of himself is behind everything that he says...[He] doesn't want to make the thing too easy - he doesn't want to express himself in a way that people can pick up by simply running their eyes over the pages. His philosophy is an instrument for changing the whole intellectual aspect of its readers' lives, and therefore the way to it is made difficult," (Talking Philosophy, p83).

Wittgenstein, however, came to philosophy having started out as an aeronautical engineer at Manchester University between 1908 and 1910. Here he devised and patented a new design of aircraft engine, but became interested in the mathematics used to describe it. The questions Wittgenstein began asking himself about the nature of mathematics brought him to Bertrand Russell's Principles of Mathematics, and after discussing these questions with Frege in Germany, Wittgenstein abandoned his aeronautical career and went to Cambridge to study logic under Russell.

Wittgenstein's engine design is rather interesting, and a couple of recent papers have explained his concept in detail. Ian Lemco outlined Wittgenstein's aeronautical research in a 2007 paper, and co-wrote an exposition of his combustion chamber design with John Cater in 2009.

Ludwig, it seems, was inspired by an idea proposed by Hero of Alexandria in the 1st century AD: drive a propeller by emitting jets of gas from nozzles placed at the tips of the rotor blades. In particular, Wittgenstein proposed that the tips of the rotors contain combustion chambers, and that the centrifugal force of the rotating propeller alone should be responsible for compressing the mixture of air and fuel; no need for pistons, in other words.

In modern terms, Wittgenstein proposed a tip-jet engine design. Such engines subdivide into cold-tip jets and hot-tip jets: the former are driven by, say, compressed air, created by a remote compressor, while the latter are driven by the direct exhaust jet flow of combustion. The Sud-Ouest Djinn helicopter, for example, employs cold-tip jets, while the Hiller YH-32 Hornet uses hot-tip jets.

All of which sounds not totally dissimilar to the distinction between hot-blown and cold-blown diffusers in modern-day Formula One...

Sunday, August 21, 2011

Weak polygyny and Formula One

Weak asymmetries are responsible for just about everything we experience.

Most of the universe we observe, all the galaxies and the stars and the planets, is composed of matter rather than anti-matter, yet the universe should have started with equal amounts of the two. If all the processes in particle physics were exactly symmetric, then most of the matter and anti-matter should have mutually annihilated, yielding a universe containing almost nothing but photon radiation.

What we actually observe is approximately two billion photons for every proton or neutron of matter, and in effect, this figure quantifies the asymmetry between matter and anti-matter. It's thought that, as a result of a small asymmetry in certain high-energy processes, the early universe developed slightly more quarks than anti-quarks. To be more precise, there were a billion-and-one quarks for every billion anti-quarks. Two photons were produced in each annihilation event between a quark and an anti-quark, and the surviving quarks were bound into protons and neutrons, hence the current universe possesses approximately two billion photons for every proton or neutron of matter.
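As a back-of-the-envelope check on that counting argument, here is the arithmetic made explicit (a toy calculation using the round numbers quoted above, nothing more):

```python
# Toy arithmetic for the asymmetry described above: a billion-and-one quarks
# for every billion anti-quarks, and two photons per annihilation event.
quarks = 1_000_000_001
anti_quarks = 1_000_000_000

annihilations = min(quarks, anti_quarks)     # every anti-quark finds a partner
photons = 2 * annihilations                  # two photons per annihilation
surviving_quarks = quarks - annihilations    # the leftover matter

print(f"photons per surviving quark: {photons / surviving_quarks:.1e}")
# -> 2.0e+09, i.e. of order two billion photons per surviving particle of matter
```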

So the weak asymmetry between quarks and anti-quarks is necessary to explain the existence of all the stars and planets. But what about human culture and civilization, all its cities and technologies and literature? How do these emerge from evolutionary biology?

One suggestion is that the weak polygyny of human society is a necessary condition. Polygyny is a sexual asymmetry in which some of the males in a species possess stable reproductive relationships with multiple females in so-called harems, leaving the remaining males as bachelors. This leads to varying forms of intense competition between the males, which often manifests itself in sexual dimorphism, the existence of different male/female sizes or capacities.

Human polygyny is weaker than that of gorillas, where there is correspondingly a large difference in size between males and females, but stronger than that of gibbons, which are monogamous and whose males and females are of comparable size.

The evidence for human polygyny is rather strong. G. P. Murdock's Ethnographic Atlas, for example, lists 849 human societies, and finds that 83% are polygynous. And as Richard Dawkins points out in The Ancestor's Tale, research conducted by Laura Betzig indicates that "overtly monogamous societies like ancient Rome and medieval Europe were really polygynous under the surface. A rich nobleman, or Lord of the Manor, may have had only one legal wife but he had a de facto harem of female slaves, or housemaids and tenants' wives and daughters."

This weak polygyny is reflected in human sexual dimorphism, but because humans are an intelligent species, it has a physical and a cultural component. Men are, on average, larger and stronger than women, but men also seek to gain access to harems, not by direct competition, but by seeking power, wealth and status. As a by-product of this, virtually all of human culture, the philosophy, the politics, the science, the technology, the art, the business, and the sport, has been produced by men.

And what other activity combines sport, business, politics and technology in such a tightly integrated package as Formula One? In essence, then, Formula One is a by-product of the human male desire to gain access to female harems. Small asymmetries matter.

Thursday, August 18, 2011

Front-wing ground effect

Red Bull, McLaren and Ferrari currently appear to be converging on the same aerodynamic solution: a high-rake, nose-down stance to maximise the ground effect component of front-wing downforce, (with the use of exhaust-blown diffusers to retain rear downforce). Front-wing ground effect has always had a role to play, but the current emphasis is perhaps a consequence of the new technical regulations introduced for the 2009 season, which permitted the front-wing to be much closer to the ground.

To understand front-wing ground effect, it's worth revisiting some research performed by Zhang, Zerihan, Ruhrmann and Deviese in the early noughties, Tip Vortices Generated By A Wing In Ground Effect. This examined a single-element wing in isolation from rotating wheels and other downstream appendages, but the results are still very relevant.

The principal point is that front-wing ground effect depends upon two mechanisms: firstly, as the wing gets closer to the ground, a type of venturi effect occurs, accelerating the air between the ground and the wing to generate greater downforce. But in addition, a vortex forms underneath the end of the wing, close to the junction between the wing and the endplate, and this both produces downforce and keeps the boundary layer of the wing attached at a higher angle of attack.
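A crude way to picture the first of these mechanisms is to treat the gap between the wing and the ground as a venturi, combining one-dimensional continuity with Bernoulli's equation. The speed, gap heights and air density below are illustrative assumptions, not figures from the paper:

```python
# Toy venturi model of front-wing ground effect: as the gap under the wing
# shrinks, continuity (A1*v1 = A2*v2) forces the air to speed up, and
# Bernoulli's equation then gives a lower static pressure under the wing.
rho = 1.225            # air density, kg/m^3
v_freestream = 70.0    # m/s, roughly 250 km/h (illustrative)
h_reference = 0.100    # m, reference gap under the wing (illustrative)

for h in (0.100, 0.075, 0.050):
    v_gap = v_freestream * (h_reference / h)                 # 1-D continuity
    suction = 0.5 * rho * (v_gap**2 - v_freestream**2)       # Bernoulli pressure drop
    print(f"gap {h*1000:5.1f} mm: underwing speed {v_gap:6.1f} m/s, suction {suction/1000:5.2f} kPa")
# The suction grows rapidly as the gap closes, which is why downforce rises so
# steeply as the wing approaches the ground (until viscous effects intervene).
```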

The diagrams above show how this underwing vortex intensifies as the wing gets closer to the ground. In this regime, the downforce increases exponentially as the height of the wing is reduced. Beneath a certain critical height, however, the strength of the vortex reduces. Beneath this height, the downforce will continue to increase due to the venturi effect, but the rate of increase will be more linear. Eventually, at a very low height above the ground, the vortex bursts, the boundary layer separates from the suction surface, and the downforce actually reduces.

So, for a wing in isolation, the ground effect is fairly well understood. One imagines, however, that the presence of a rotating wheel immediately behind the wing makes things a little more difficult!

The diagram here, from the seminal work in the 1970s by Fackrell and Harvey, demonstrates that the rotating wheel creates a high pressure region in front of it, (zero degrees is the horizontal forward-pointing direction, and 90 degrees corresponds to the contact patch beneath the tyre). Placing a high-pressure area immediately behind a wing will presumably steepen the adverse pressure gradient on the suction surface of the wing, causing premature detachment of the boundary layer. Hence, when the wings were widened in the new regulations, most designers immediately directed the endplates of the wings outwards, seeking to direct the flow away from those high-pressure areas.

Wednesday, August 17, 2011

Peridynamics

Q: So what exactly is peridynamics?
A: Well, it's a new formulation of solid mechanics, which in turn, is part of continuum mechanics. Continuum mechanics represents those parts of the macroscopic world which can be idealised as continuous, extended entities. If you've got a gas or a liquid, you can represent it using fluid mechanics. Fluids, however, don't have strength, whereas solids do. To represent a solid, you need to use solid mechanics.

Q: So why the need for a new formulation?
A: Well, it's basically all about fracture. The trouble with fracture is that, by definition, it constitutes a discontinuity in a solid, and given that solid mechanics is predicated upon the continuity of things, the conventional formulation struggles to deal with fracture.

Q: And what does peridynamics postulate to resolve the problem?
A: Cauchy's momentum equation, the governing equation of continuum mechanics, defines the force at a point by the divergence of the stress tensor. The divergence is, of course, a differential operator, and if your equations are based upon derivatives, then your equations will fail in the presence of a discontinuity. Peridynamics attempts to get around this by replacing the spatial derivatives of the stress tensor at each point with the integral of a force density function centred at that point. This, then, is a radical approach, which attempts to generalise from Cauchy's conception of the internal stresses in a solid. The field equations in this formulation, it is claimed, can be applied to discontinuities such as cracks.
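To make this concrete, the governing equation of the simplest (bond-based) formulation, as introduced by Silling, takes the following schematic form, in which H(x) is a finite neighbourhood of the point x (the 'horizon'), f is a pairwise force density, u is the displacement field, and b is the external body force:

ρ(x) ü(x,t) = ∫H(x) f( u(x′,t) − u(x,t), x′ − x ) dV′ + b(x,t)

Only differences of displacement appear inside the integral, rather than spatial derivatives of the displacement field, which is why the equation remains meaningful across a crack.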

Q: Are there any philosophical implications?
A: Definitely, yes. On smaller length scales, where fluids and solids are discrete, people use something called Molecular Dynamics to represent substances. And the equations of Molecular Dynamics are intrinsically non-local: the net force on each particle is determined by the joint effect of all the inter-atomic forces due to other particles, not just those immediately adjacent to the particle in question. Finding the force on a particle by adding all the contributions from particles in a neighbourhood of that particle is a discrete version of an integral. Conventional solid mechanics, however, is distinctly local. This means that the inter-theoretic relationship between Molecular Dynamics and conventional solid mechanics is very unsatisfactory. By using the non-local reformulation provided by peridynamics, however, the inter-theoretic relationship becomes far more satisfactory.
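As a minimal sketch of that structural point (an illustrative toy, not a validated material model), here is the force on a point in a one-dimensional chain computed as a sum over a finite neighbourhood, which is exactly the discrete counterpart of the peridynamic integral above:

```python
import numpy as np

# A 1-D chain of material points. The force on point i is a sum of pairwise
# contributions from every point within a finite "horizon" -- structurally the
# same bookkeeping as a short-ranged molecular-dynamics force loop, and the
# discrete counterpart of the peridynamic integral above. Illustrative toy only.

n = 100
dx = 1.0e-3                   # spacing between material points (m)
horizon = 3 * dx              # interaction radius (illustrative)
c = 1.0e9                     # bond stiffness constant (illustrative)

x = np.arange(n) * dx                                  # reference positions
u = 1.0e-6 * np.sin(2 * np.pi * x / (n * dx))          # an imposed displacement field

def internal_force(i):
    """Sum the bond forces on point i from every neighbour inside the horizon."""
    f = 0.0
    for j in range(n):
        xi = x[j] - x[i]                               # reference separation
        if j == i or abs(xi) > horizon:
            continue
        eta = u[j] - u[i]                              # relative displacement
        stretch = (abs(xi + eta) - abs(xi)) / abs(xi)  # bond strain
        f += c * stretch * np.sign(xi + eta) * dx      # force density x nodal volume
    return f

print(f"net internal force on the mid-point: {internal_force(n // 2):+.3e} N")
```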

It's an interesting case, which demonstrates that macroscopic theories sometimes need to be reformulated using concepts and structures taken from the microscopic theory.

Friday, August 12, 2011

Multi-element wings and DRS

So why are the wings on aircraft and racing cars broken up into multiple elements, with slots in-between? Well, it was found reasonably early in the history of aerodynamics that this technique enabled the total wing to continue generating lift at an angle of attack at which it would have stalled, were it to have been fashioned as a single element. The lift/downforce generated by a wing increases as the angle of attack increases, hence multiple element wings are a means of increasing peak lift/downforce. (In the case of aircraft, they are also a means of maintaining lift at the lower airspeeds associated with landing and taking-off).

But how does the introduction of slots achieve this effect? Well, A.M.O. Smith identified five distinct mechanisms in his 1974 paper, High-Lift Aerodynamics: slat effect, circulation effect, dumping effect, off-surface pressure recovery, and fresh-boundary layer effect.

So let's attempt to understand what these effects are. To start off, however, we need to recall some fundamental facts about how a wing works.

A wing generates lift/downforce because it generates a circulatory component to the airflow. The circulation only exists because of a thin layer of airflow adjacent to the wing called the boundary layer. Viscous effects operate in the boundary layer, but outside the boundary layer the airflow can be idealised as being inviscid.

When people speak of the velocity and pressure of the airflow above and below a wing, they are implicitly speaking of the velocity and pressure on the dividing line which separates the boundary layer from the inviscid airflow. Here, Bernoulli's law applies: if the airflow is accelerated, the pressure decreases, whilst if the airflow decelerates, the pressure increases.

The low pressure surface of a wing initially accelerates the airflow, and then decelerates it towards the trailing edge. Hence, there is higher pressure at the trailing edge than at the point of maximum velocity, and this corresponds to an adverse pressure gradient along the latter part of the boundary layer.
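To put a number on what an adverse pressure gradient looks like, one can prescribe a decelerating edge velocity along the rear part of the surface and convert it into pressure with Bernoulli's law. The velocity distribution below is invented purely for illustration:

```python
import numpy as np

# Toy illustration of an adverse pressure gradient: prescribe an edge velocity
# that decelerates from the suction peak towards the trailing edge, and use
# Bernoulli's law to convert it into static pressure. The velocity profile is
# invented; it is not a real aerofoil distribution.
rho = 1.225                           # air density, kg/m^3
x = np.linspace(0.0, 1.0, 6)          # fraction of the distance from suction peak to trailing edge
v = 90.0 - 35.0 * x                   # edge velocity falling from 90 m/s to 55 m/s
p = -0.5 * rho * v**2                 # static pressure relative to stagnation (Bernoulli)

dp_dx = np.gradient(p, x)             # positive values = adverse pressure gradient
for xi, vi, gi in zip(x, v, dp_dx):
    print(f"x = {xi:.1f}: v = {vi:5.1f} m/s, dp/dx = {gi:+8.0f} Pa per unit length")
# dp/dx is positive all along this stretch: the pressure rises in the flow
# direction, and it is this rise that the boundary layer has to fight against.
```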

The circulation around a wing is crucially dependent upon the boundary layer remaining attached to the surface of the wing. If the adverse pressure gradient is too steep, reverse flow ensues, the boundary layer detaches, and the wing stalls. This will happen as one attempts to increase the amount of lift/downforce by increasing the angle of attack.

Ok, so that's some of the fundamentals of wing aerodynamics. Now, if the boundary layer detaches when the adverse pressure gradient becomes too steep, it follows that reducing the severity of the adverse pressure gradient at a fixed angle of attack will keep the boundary layer attached. And this is exactly what a multi-element wing does.

Imagine for a moment a three-element racecar wing. The small leading element is called a slat, and the element behind the main plane is called the flap. Imagine the airflow coming from left to right. There will be an anti-clockwise circulatory component to the airflow around each element. One effect of this will be to reduce the acceleration of the airflow at the leading edge of the main element, and to thereby reduce the low pressure peak at that point. In simplistic terms, the circulatory component to the flow at the trailing edge of the slat is in an opposite direction to that at the leading edge of the main plane, hence the slot gap reduces the velocity of the airflow here. By reducing the low pressure peak at the leading edge of the main plane, the adverse pressure gradient along the main plane will be reduced, thereby helping the main plane to hang onto its boundary layer. This is the slat effect.

Meanwhile, the flap will have its own circulation, and as a consequence, at the point where the trailing edge of the main plane discharges its boundary layer, the airflow velocity will be greater than it would be in the absence of a flap. Thus, the high pressure at the trailing edge of the main plane is reduced, once again reducing the adverse pressure gradient along the main plane, helping to keep the boundary layer attached. This is the dumping effect.

Now, according to Smith, the circulation of the flap enhances the circulation of the main plane, and in the presence of a slat, the circulation of the main plane enhances the circulation of the slat. As yet I can't intuitively see why this is the case. Smith claims, however, that this circulation effect is closely related to the dumping effect, and asserts that the downstream element induces cross-flow on the trailing edge of the upstream element, which enhances its circulation.

The off-surface pressure recovery effect, meanwhile, is a consequence of the dumping effect. A downstream element reduces the deceleration towards the trailing edge of an upstream element, keeping the boundary layer attached, and releasing the boundary layer from the trailing edge of the surface, where it completes its deceleration in a manner which doesn't cause reverse flow. The boundary layer of the main plane, for example, will discharge into the region outside the boundary layer of the flap, and continue to decelerate until it reaches the trailing edge of the entire wing system, (see the diagram here from Zhang and Zerihan, Aerodynamics of a double-element wing in ground effect, 2003).

The final effect, the fresh boundary layer effect, means that each element acquires its very own boundary layer, fed by the freestream velocity. This keeps the boundary layer of each element thinner than the boundary layer on a single wing of the same length, and thinner boundary layers are able to withstand greater adverse pressure gradients.

So it's all about increasing circulation and mitigating the causes and effects of adverse pressure gradients.

Note, of course, that the function of a DRS rear-wing in modern Formula 1 is dependent upon these aerodynamic effects. The rear wing is designed so that the main plane is at an angle of attack which would cause the boundary layer to detach in the absence of the flap. With the flap in place, the severity of the adverse pressure gradient is reduced by the acceleration of the airflow around the leading edge of the flap. Open the flap, and the main plane is suddenly dumping its boundary layer into freestream airflow, as a result of which the adverse pressure gradient steepens, and the boundary layer detaches, causing the main plane to stall.

Sunday, August 07, 2011

Renormalization in quantum field theory

So what exactly is renormalization in quantum field theory? Well, quantum field theory makes experimentally verified predictions about collisions between particles. In particular, it makes predictions about the probability of going from a particular incoming state to a particular outgoing state, and these are called transition probabilities:

P(Ψi → Ψf) = |⟨Ψf|S|Ψi⟩|²

An incoming particle is represented by a quantum state Ψi, the interaction process is represented by a scattering operator S, and the potential outgoing state is represented by the quantum state Ψf.

In many physically relevant situations, the incoming state has a specific energy Ei and momentum ki, and each possible outgoing state also has a specific energy Ef and momentum kf. An outgoing state with a specific momentum kf, also has a specific direction Ω associated with it.

These transition probabilities can be used to construct cross-section data. The cross-section for a reaction is effectively an expression of its probability. In practice, cross-sections provide an economical way of bundling the transition probabilities between entire classes of quantum states. For example, the differential cross-section σ(E,Ω) is proportional to the probability of a transition from any incoming state Ψi of energy E to any outgoing state Ψf in which the momentum vector kf points in the direction of Ω. Integrating a differential cross-section over all possible directions then gives a total scattering cross-section σ(E).
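As a small numerical illustration of the relation between the two (using a made-up angular distribution, chosen purely for the arithmetic), a differential cross-section can be integrated over all directions to recover the total cross-section:

```python
import numpy as np

# Recover a total cross-section by integrating a differential cross-section
# over all solid angles: sigma(E) = integral of sigma(E, Omega) dOmega.
# The angular distribution below is invented purely to illustrate the bookkeeping.

def dsigma_dOmega(theta):
    """Toy, axially symmetric differential cross-section (barns per steradian)."""
    return 0.5 * (1.0 + np.cos(theta) ** 2)

n = 200_000
theta = (np.arange(n) + 0.5) * (np.pi / n)      # midpoint rule over [0, pi]
# dOmega = sin(theta) dtheta dphi; axial symmetry turns the phi integral into 2*pi.
sigma_total = 2.0 * np.pi * np.sum(dsigma_dOmega(theta) * np.sin(theta)) * (np.pi / n)

print(f"total cross-section: {sigma_total:.4f} barns (analytic value 8*pi/3 = {8*np.pi/3:.4f})")
```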

So, what about the scattering operator S? Well, this contains the information that specifies the nature of the interaction. The nature of the interaction is specified using objects from classical physics, either the interaction Hamiltonian or the interaction Lagrangian. The interaction Lagrangian will contain values for the masses and charges (aka coupling constants) of the interacting fields. The scattering operator can be expressed in terms of the interaction Hamiltonian density operator HI(x), which in turn, can be obtained from the interaction Lagrangian density. To be specific, the scattering operator can be expressed as the following Dyson perturbation series:

S = Σ (n=0 to ∞) [(−i)^n / n!] ∫ d4x1 ⋯ ∫ d4xn T[HI(x1),...,HI(xn)]

T[HI(x1),...,HI(xn)] simply denotes the time-ordered product of the interaction Hamiltonian density operators.

Inserting the expression for the scattering operator into the expression for a transition probability, yields an infinite series, and the trouble is that every term in this series transpires to be a divergent integral. Renormalization involves taking only the first few terms in such a series, and then manipulating the integrals in those terms to obtain finite results.

The most sophisticated account of renormalization goes as follows. The troublesome integrals tend to be integrals over an infinite energy range, and they diverge as the energy goes to infinity. So begin by introducing a cut-off Λ0 at a large but finite energy. Correlate this cut-off with a particular conventional interaction Lagrangian, with conventional values for the masses and coupling constants. Now stipulate that the masses and coupling constants are functions of the cut-off energy Λ. Thus, as the upper limit of the integral is permitted to go to infinity, Λ → ∞, the masses and coupling constants become running masses and coupling constants, m(Λ) and g(Λ), and the Lagrangian also acquires evolving counter-terms which incorporate those running masses and coupling constants. The functional forms of m(Λ) and g(Λ) are chosen to ensure that the integrals remain finite as the limit Λ → ∞ is taken.

Thus, for example, in the case of quantum electrodynamics, the Lagrangian is modified as follows:




The charge and mass have the following running values (c0 and its tilde-counterpart being proportional to ln(Λ/Λ0)):





This is called the Renormalization Group (RG) approach. It basically amounts to saying that there is a flow in the space of Lagrangians under energy-scale transformations. Changing the cut-off in divergent integrals is then seen to be equivalent to adding/subtracting extra terms in the Lagrangian, which in turn is equivalent to changing the values of the masses and coupling constants. There are, of course, numerous qualifications, exceptions and counter-examples, but that is the basic idea.
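As a toy numerical illustration of what a 'running' coupling means in practice, here is the standard leading-order, one-electron-loop form of the QED coupling evaluated at a few scales. This is textbook material, not anything specific to the papers cited below:

```python
import math

# Leading-order (one-loop, single electron) running of the QED coupling:
#   alpha(Q) = alpha(Q0) / (1 - (alpha(Q0) / (3*pi)) * ln(Q^2 / Q0^2))
# Thresholds for heavier fermions and higher loops are ignored, so the numbers
# differ from the full QED values; the point is only that the coupling runs.

alpha_0 = 1.0 / 137.036        # coupling at the reference scale
Q0 = 0.000511                  # reference scale ~ electron mass, in GeV

for Q in (1.0, 91.2, 1000.0):  # 1 GeV, roughly the Z mass, 1 TeV
    alpha_Q = alpha_0 / (1.0 - (alpha_0 / (3.0 * math.pi)) * math.log(Q**2 / Q0**2))
    print(f"Q = {Q:7.1f} GeV:  1/alpha ~ {1.0 / alpha_Q:6.2f}")
# 1/alpha creeps downward (the coupling grows) as the energy scale increases:
# the Lagrangian's parameters are scale-dependent, which is the RG flow in action.
```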

At a classical level in mathematical physics, the equations of a theory can be economically specified by a Lagrangian, hence it is typical in physics to identify a theory with its Lagrangian. Thus, a flow in the space of Lagrangians is also a flow in the space of theories; the RG approach is saying that different theories are appropriate at different energy scales.

I'm indebted here to the material in the following couple of papers, which also constitute excellent further reading for the enquiring mind:

Hartmann, S. (2001). Effective field theories, reductionism and scientific explanation, Studies in History and Philosophy of Modern Physics, 32, pp. 267-304.

Huggett, N. and Weingard, R. (1995). The Renormalisation Group and Effective Field Theories, Synthese, 102 (1), pp. 171-194.

Saturday, August 06, 2011

The Hamilton duels

There were two great wheel-to-wheel battles in the Hungarian Grand Prix, both, predictably, featuring Lewis Hamilton. First off was the Hamilton-Vettel duel between laps 1 and 5, and then there was the equally thrilling Hamilton-Button contest between laps 47 and 52.

The first lap saw Hamilton and Button side-by-side, scrabbling for grip coming out of the first corner on their intermediate tyres, Hamilton taking second place down the outside into turn 2 as Button backed out of it. Lewis then set off after Vettel, the McLaren spectacularly sideways accelerating out of turn 2 on the second lap.

Once again, the McLarens were the only leading cars generating strong wing-tip vortices down the main straight, and Lewis clearly had a grip advantage over Vettel in these early laps on a damp track. Vettel, however, provided a robust defence.

On lap 3, Lewis decided to try the outside of Vettel into turn 2, briefly putting his outside wheels onto the grass as he did so. It was remarkably similar to the moment in Canada this year when Lewis was attempting to overtake Schumacher into the hairpin, although on that occasion Lewis was badly squeezed by the Mercedes driver making a second move under braking. This time round, Lewis was able to take a run around the outside of turn 2, but Vettel anticipated the move and simply ran Lewis out to the edge of the track, forcing him to back off and drop in behind the Red Bull.

On lap 4, Lewis again got a run on the Red Bull into turn 2, but this time decided to try the inside. Yet again, however, Vettel had an answer, and simply carried enough speed around the outside to retain his place into turn 3. Vettel was demonstrating all the racecraft which some have accused him of lacking, but on lap 5 he finally over-egged it into turn 2, running wide and letting Lewis into the lead.

The later Hamilton-Button duel was triggered, of course, when Lewis spun at the chicane on lap 47, Jenson taking the lead. Being on softer tyres, Lewis was potentially at an advantage in the battle which ensued, but Lewis's tyres were also wearing badly, to the extent that he was forced to pit at the end of lap 52. It's possible, therefore, that the two drivers actually had comparable levels of grip.

By lap 49, Button was extending the gap to Lewis, demonstrating he had superior grip on a mostly dry track surface. On lap 50, however, the rain began to fall again, and by the exit of the chicane, Lewis was back in the wheel-tracks of the other McLaren. Into turn 2 on lap 51, Jenson's famed ability to magically sense the levels of grip available momentarily deserted him, and he ran wide, letting Lewis back into the lead.

Lewis immediately gained a 2 second gap over Button, but struggled badly with grip over the remainder of the lap, and coming onto the main straight to start lap 52, Button was right behind him. With the advantage of DRS, Jenson overtook his compatriot into turn 1, a quartet of wing-tip vortices briefly streaming in their joint wake.

Down they went into turn 2, and Jenson turned into the corner a little defensively on a tighter line than normal, and missed the apex, Lewis cutting underneath to re-take the lead. Great stuff!

Battle was then suspended over the remainder of the lap as both drivers attempted to absorb the information and instructions the McLaren team were communicating vis-a-vis the potential requirement to fit intermediate tyres. Lewis was able to receive messages from the team, but unable to make himself heard in response, whilst Jenson was at one stage invited to queue behind Lewis as both cars were fitted with intermediates.

Ultimately, of course, Lewis's race-winning prospects were already done for, and the vital decision, the race-winning decision, was Jenson's choice not to pit.

Thursday, August 04, 2011

A way to subvert the blown diffuser ban?

Exhaust-blown diffusers will effectively be banned in Formula 1 from next year, but there may be other ways of blowing the diffuser, and generating the side-edge vortices which appear to be crucial to maximising diffuser downforce.

For example, from 2014, Formula 1's engine formula will change from a normally aspirated 2.4 litre V8 to a 1.6 litre turbocharged V6. The compressor in such an engine, driven by the exhaust turbine, constantly generates compressed air. Moreover, the inlet manifold of a turbo engine has a blow-off valve, specifically designed to release pressure when the driver lifts off the throttle and the throttle closes. The blow-off valve could be vented down to the sides of the diffuser, providing vital extra downforce just when a driver comes off the throttle turning into a corner.

From 2012, the regulations will prohibit exhaust-blown diffusers by stipulating that the exhausts are moved to a location in which they cannot influence the diffuser. These new regulations, however, will say nothing (as far as I'm aware) about blowing the diffuser with compressed air from the inlet manifold of a 2014 turbo engine!

Unfortunately, there is at least one potential snag: the Wikipedia entry on blow-off valves claims that "Motor sports governed by the FIA have made it illegal to vent unmuffled blowoff valves to the atmosphere." There is no citation, however, so it's difficult to ascertain if this is true, or even if it will apply to the 2014 F1 engine regulations. In fact, this is presumably something yet to be determined. Worth keeping an eye, then, on how those regulations are finally worded...

In the meantime, the teams could use compressed air cylinders to blow the diffusers, perhaps just for a qualifying lap. The primary declared purpose of these cylinders would be to supply the pneumatic valve system in the engine, of course, but as a safety measure, it might be necessary to vent excessive pressure. For safety. And cooling.

Tuesday, August 02, 2011

Diffusers and rake

A recent column by Mark Hughes (Autosport, July 21, p21), and a subsequent explanation Mark elicited from McLaren technical director Paddy Lowe (Autosport, July 28, p41), provide some extra illumination on the overall aerodynamic concept pursued in Formula 1 by Red Bull since 2010, and followed to some extent by other teams this year.

Both articles explain that the basic idea has been to run a car with a significant degree of rake, so that the front ride-height is lower than the rear. The effect of this is twofold: the front-wing generates greater downforce due to ground effect, and the rear diffuser also acquires the potential to generate greater downforce.

Maximising the downforce of the diffuser is, however, a subtle issue. The downforce generated by a diffuser is a function of two variables: (i) the angle of the diffuser, and (ii) the height above the ground. Generally speaking, the peak downforce of the diffuser increases with the angle of the diffuser. Then, for a fixed diffuser angle, the downforce generated will increase according to an exponential curve as the height reduces, until a first critical point is reached (see diagram above, taken from Ground Effect Aerodynamics of Race Cars, Zhang, Toet and Zerihan, Applied Mechanics Reviews, January 2006, Vol 59, pp33-49). As the height is reduced further, the downforce will increase again, but according to a linear slope, until a second critical point is reached, after which the downforce falls off a cliff.
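As a purely qualitative sketch of that height dependence (every number below is invented; the shape of the curve is the only point being illustrated), the three regimes can be caricatured with a piecewise function:

```python
import math

# Toy, qualitative caricature of diffuser downforce versus ride height:
# a steep, exponential-like rise as the car is lowered, a first critical height
# where the edge vortices weaken and the rise becomes roughly linear, and a
# second critical height below which the vortices break down and the downforce
# collapses. All numbers are invented.

h1, h2 = 0.060, 0.030   # first and second critical heights, in metres (invented)

def downforce(h):
    if h > h1:                                    # vortex-dominated regime
        return 2000.0 * math.exp((h1 - h) / 0.03) + 1000.0
    elif h > h2:                                  # vortices weakening: roughly linear rise
        return 3000.0 + 20000.0 * (h1 - h)
    else:                                         # vortex breakdown: downforce collapses
        return 1500.0

for h in (0.120, 0.090, 0.060, 0.045, 0.031, 0.025):
    print(f"ride height {h*1000:5.1f} mm -> downforce ~ {downforce(h):6.0f} N (toy numbers)")
```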

Without running any rake, the diffuser is limited by regulation to a shallower angle than seen in years gone by. By increasing the rake, the effective angle of the diffuser is increased, thereby increasing the potential peak downforce. However, increasing the rake also has the effect of increasing the height of the diffuser.

So, how does one combat the detrimental effect of increasing the height of the diffuser? Well, the key, I think, is to understand exactly how a reduction in height increases the downforce generated by a diffuser. The crucial point is that the edges of the diffuser generate a pair of counter-rotating vortices, and the magnitude of the downforce generated is determined by the strength of these vortices. The downforce increases exponentially as the height is reduced, because the strength of these vortices is increasing. The first critical point corresponds to the height at which the vortex strength begins to decrease, and the second critical point corresponds to the height at which the vortices break down.

So, to pose the question again, how do we mitigate the downforce-reducing effect of an increase in diffuser height? Simple, one merely uses the exhaust gases to boost the strength of the side-edge vortices to levels otherwise seen at lower heights.

In fact, this is to simplify the issue, because the exhaust gases playing on the sides of the diffuser have two effects: (i) to strengthen the side-edge vortices inside the diffuser, and (ii) to act as air curtains, preventing the ingress of turbulent air created by the rotating rear wheels.

So, with exhaust-blown diffusers to be banned from next year, the trick will be to find other ways of boosting the strength of those side-edge vortices. Do so, and you'll still be able to run your car with a significant degree of rake.