Friday, December 30, 2011
This year's Autocourse is printed on gold-leaf infused paper, using inks derived from the pituitary gland of the Himalayan gazelle.
At least, that's the only justification one can imagine for a price-tag of £50.
There was a time when one could look forward to the stunning photography in Autocourse each year, but those days are long gone. This year's edition contains only one memorable image, a two-page spread of Lewis Hamilton and Sebastian Vettel, wheel-to-wheel into Turn 3 at the Hungaroring, Hamilton's outside wheels skirting the grass-verge. Unfortunately, most people will have already seen this image, and the photography elsewhere never rises above the mediocre.
In fact, the quality of the photographic reproduction in Autocourse has become remarkably dark, virtually every image dominated by the sheer quantity of black ink. As an indication of this, it's almost impossible to find an image of a car, taken from the front, in which the undernose splitter can actually be discerned. By way of contrast (excuse the pun), if you happen to have the £4.40 Autosport Formula 1 review at hand, compare the picture on p32-33, taken from Ste Devote at Monaco, with the picture on p153 of Autocourse, taken from exactly the same vantage point. The difference is almost literally night and day.
It's difficult to know whether this is determined by the combination of inks and glossy paper used by Autocourse, or whether there's some artistic motivation behind it. There's almost a photophobic, crepuscular mood running through the annual: an article on Pirelli opens with a two-page spread of Hamilton and Webber in the gloaming at Korea; the team-by-team review begins with a two-page spread of the F1 paddock in semi-darkness; the race reports are prefaced with a two-page spread of a Williams passing through a silhouetted Eau Rouge; there's a two-page spread of Lewis Hamilton beneath leaden skies at the Nurburgring; there's a two-page spread of Singapore, in the darkness; and there's a two-page spread of Jenson celebrating his Japanese victory...in the darkness. It's like a book directed by David Fincher.
Formula 1 should be bright and colourful. Autocourse makes it look like an activity which takes place at 7pm on a damp October day in Macclesfield.
So do you get anything for your £50? Well, yes, you get Mark Hughes's team-by-team analysis, which is reliably superb. Mark, of course, also does something similar in Autosport's Formula 1 review, but the Autocourse version is more detailed in places, and contains extended explanation from each team's technical director. Paddy Lowe and Pat Fry, in particular, are fascinating this year as they explain where things went awry.
So there's something good here, but not £50-worth.
Thursday, December 22, 2011
Red Bull and Immersed Boundary Methods
Autosport's recent 2011 Formula 1 review pointed out that whilst Red Bull were the first team to appear with exhausts blowing the outer extremities of the diffuser, "others, notably Renault and Ferrari, had tried the layout in their tunnels before the Red Bull appeared and couldn't make it work, and Newey later confirmed that it actually took months of simulation work to maximise."
So what is it that Red Bull were able to do that other teams weren't? Was it mere persistence in the wind-tunnel with a flow regime that transpired to be extremely sensitive to the exact position and geometry of the exhaust outlet? Or were Red Bull able to apply some form of computer simulation not currently utilised by other teams?
Perhaps the former is the most likely answer, but let's pursue the alternative explanation, and see if we can join up the dots. And let's start with the fact that Red Bull use Ansys Fluent as their CFD package. In this promotional video from late 2010, it's acknowledged that Red Bull use Fluent to model their exhaust flow, (although this obviously doesn't entail that it's their only simulation tool for doing so).
Speaking recently about the High Performance Computing solutions provided by Ansys, Nathan Sykes, CFD Team Leader at Red Bull Racing, pointed out that "To retain freedom to innovate and adapt the car quickly, we rely on a robust modeling process. This puts new designs on the track quickly. To accomplish our goal, we continually need to leverage technologies that help us introduce and evaluate new ideas. With a significant reduction in process times over the last three years, ANSYS HPC solutions have continued to be the tool of choice for us."
Now, the normal aerodynamic optimisation cycle involves shaping a part in CAD, importing it into CFD, meshing it in CFD, running the CFD solver, and then post-processing the results. Meshing, in particular, can be very time-consuming. There is, however, a means of short-circuiting the cycle, called the Immersed Boundary Method, and in an environment such as Formula 1, where aerodynamic turnover is paramount, any team able to successfully implement this method could gain a significant advantage.
Immersed Boundary Methods provide a means for dealing with geometries which may be complex, or in a state of motion. They enable one to mimic the effect that an appendage has on the fluid flow in terms of something called a 'body force'. For example, if a fixed solid object is introduced into a region previously occupied by fluid flow, then the no-slip boundary condition must be imposed on the new surface, (i.e., the velocity there must be zero). In effect, this requires the application of a force which reduces the pre-existing velocity to zero. To calculate the necessary body force, one could in principle insert the necessary acceleration into the (Reynolds-Averaged) Navier-Stokes equations, as below, with u_desired = 0 in this case:
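The equation itself appears to have been lost with the original image. A standard 'direct forcing' form, consistent with the description above (my reconstruction, rather than a quotation from any specific source), would be:

```latex
% Incompressible momentum equation with an immersed-boundary body force f,
% and the direct-forcing choice of f that drives u to u_desired in one step:
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\mathbf{f} = \frac{\mathbf{u}_{\mathrm{desired}} - \mathbf{u}}{\Delta t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  + \frac{1}{\rho}\nabla p - \nu\,\nabla^{2}\mathbf{u}
```

Setting u_desired = 0 at the immersed surface then yields exactly the force required to bring the pre-existing velocity to zero there.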
Coincidentally, Ansys Fluent 12.0, released in 2009, has an Immersed Boundary module, developed with Cascade Technologies Inc. This is what Ansys said at the time:
A conventional fluid dynamics simulation starts with the transfer of CAD data to a grid-generation package, in which a surface mesh and then a volume mesh are generated before the simulation can be set up and the solution run. The effort and time required for such pre-processing tasks can be significant. For example, in cases with complex or dirty geometry that require CAD cleanup, this part of the process may take 50 percent to 90 percent of the total time required for the simulation. The Immersed Boundary module addresses such issues by providing a rapid, automated, preliminary design approach.
Fluid flow simulations using the Immersed Boundary module for ANSYS FLUENT 12.0 software start with the surface data of the simulation geometry in the STL file format, which is commonly used in rapid prototyping and computer-aided manufacturing. This CAD geometry does not need to be clean, does not require smooth surfaces or geometry connectivity, and may contain overlapping surfaces, small holes and even missing parts. The simulation geometry is meshed automatically. Mesh refinement also is carried out automatically after specifying the desired resolution on the boundaries, ensuring the accuracy required for preliminary design evaluation. Using the immersed boundary meshing technique greatly reduces the amount of time spent preparing the geometry for meshing and creating the mesh.
At first sight, Immersed Boundary Methods do not appear to be available in Star-CCM+, one of Fluent's main competitors. Star-CCM+ does, however, provide a Surface Wrapper, a type of shrink-wrapper, which fixes gaps and overlaps in complex CAD geometries. Nevertheless, in Star-CCM+ it appears to be necessary to create a body-fitted mesh: a surface mesh must be created on the surface imported from CAD, and then a volume mesh is grown outwards from the surface mesh.
Immersed Boundary Methods have become increasingly popular over the past decade, and knowledge of such techniques will have been carried into the world of Formula 1 by many recent PhDs. Nevertheless, it's interesting to speculate whether Red Bull have stolen another march on the opposition here...
Tuesday, December 13, 2011
Unleashing radiation in a wind-tunnel
There are currently two primary methods of wind-tunnel flow visualisation: Particle Image Velocimetry (PIV) and Laser Doppler Anemometry (LDA). Both techniques seed the airflow with tracer particles, and use lasers, optical detectors and cameras to generate and record a pattern of scattered light. This poses a problem: the wheels, wings and diffusers of interest to the aerodynamicist are normally opaque to the passage of optical radiation. Hence, PIV and LDA experiments typically require the construction of transparent wings and aerodynamic appendages.
There is, however, a possible solution to this problem: Why not use radioactive isotopes to obtain quantitative flow data from wind-tunnel testing? One could inject a harmless radioactive tracer into the flow, such as one of those used in the medical imaging industry; technetium-99m-labelled DTPA (diethylene triamine pentaacetic acid) would be an obvious candidate here. One could then use gamma (ray) cameras to image the flow in a similar way that optical cameras are currently used in PIV and LDA.
There would, of course, be the need for some additional precautions. However, an isotope such as technetium-99m is considered sufficiently harmless to be injected into medical patients, and has a half-life of only 6 hours, so a wind-tunnel would not need to be decontaminated by the Nuclear Decommissioning Authority!
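To put a number on that, here's a minimal sketch (my own, not from any cited source) of how quickly the activity decays:

```python
# Exponential decay of Tc-99m activity, illustrating why any tunnel
# contamination would be short-lived.
import math

HALF_LIFE_HOURS = 6.0  # Tc-99m

def activity_fraction(t_hours: float) -> float:
    """Fraction of initial activity remaining after t_hours."""
    return math.exp(-math.log(2) * t_hours / HALF_LIFE_HOURS)

for t in (6, 24, 48, 60):
    print(f"after {t:>2} h: {activity_fraction(t):.4%} of initial activity")
# After 60 hours (ten half-lives) less than 0.1% of the activity remains.
```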
In fact, taking a closer look reveals that there are already significant areas of shared technology between wind-tunnel flow visualisation and lung scintigraphy, the use of gamma cameras to record 2-dimensional images formed by the emission of gamma rays from inhaled radioisotopes:
“99mTc labelled aerosols, 0.5-3 [microns] in size, are used routinely in lung ventilation studies. Radiolabelled aerosols are produced by nebulizing 99mTc-DTPA (or other appropriate 99mTc-products) in commercially available nebulizers,” (p276, Fundamentals of Nuclear Pharmacy, 2010, Saha).
When such aerosols are inhaled for lung scintigraphy, droplet sizes must be small enough to permit diffusion deep into the lungs; specifically, diameters smaller than 2 microns are preferred. In the case of wind-tunnel flow visualisation, the tracer particles must follow the flow. Given that the ratio of the tracer particle density to the flow density is typically of the order 10³ in gas flows, it is necessary to use tracer particles of diameter between 0.5 and 5 [microns], (p288, Springer Handbook of Experimental Fluid Mechanics, Tropea, Yarin and Foss, 2007). The method by which such tracer particles are injected into the airflow suggests close reciprocities with lung scintigraphy:
"By far the most common method of seeding gas flows is through liquid atomization. Of the many atomizer types available the common nebulizer used in inhalation devices is the most suitable...The droplet size depends primarily on the atomizing airflow rate and on the liquid used. Typical mean particle sizes range from 0.2 [microns] using DEHS...to 4-5 [microns] with water...For many applications, the common inhalation or medication nebulizer offers an economical solution and can be obtained through medical suppliers," (ibid., p293).
Thus, the medical and wind-tunnel industries already use the same nebulizing technology, and comparable droplet diameters. In particular, technetium-labelled DTPA has a comparable density, in solution, to the DEHS (di-ethyl-hexyl-sebacat) widely used for seeding airflows in PIV experiments.
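As a rough check that droplets of this size would faithfully follow the airflow, one can compute the Stokes response time (a standard estimate; the density and viscosity values below are my own assumptions):

```python
# Stokes response time of a tracer droplet: the usual test of whether
# seeding particles of a given diameter will track an air flow.
RHO_DEHS = 912.0   # kg/m^3, approximate DEHS droplet density
MU_AIR   = 1.8e-5  # Pa.s, dynamic viscosity of air at ~20 C

def stokes_response_time(diameter_m: float, rho_p: float = RHO_DEHS) -> float:
    """Particle response time tau_p = rho_p * d^2 / (18 * mu), Stokes drag."""
    return rho_p * diameter_m**2 / (18.0 * MU_AIR)

for d_um in (0.5, 1.0, 5.0):
    tau = stokes_response_time(d_um * 1e-6)
    print(f"d = {d_um} um: tau_p = {tau:.2e} s")
# A 1-micron droplet responds in ~3 microseconds -- fast compared with the
# flow timescales around a wind-tunnel model, so it follows the flow well.
```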
One potential limiting factor, however, may be the current resolution of gamma-ray cameras. A gamma camera consists of a scintillation crystal, which converts gamma rays into optical-wavelength light, detected by photomultiplier tubes behind the crystal. However, despite a recent breakthrough which demonstrates that gamma rays can be focused, there is currently no equivalent of an optical lens. Instead, a collimator, consisting of an array of tiny pin-holes, is used. The collimator absorbs some of the radiation, limiting the sensitivity of a gamma camera, and also places a limit on the spatial resolution. Typical current resolution is 7-12mm at a distance of 10cm, (p96, Nuclear Medicine Instrumentation, Prekeges, 2009).
Despite such problems, the possibilities for development abound.
Saturday, December 03, 2011
How Red Bull create streamwise vorticity
Red Bull arrived in Singapore this year with interesting little mini-arches in their front-wing, where the inner end of each main plane meets the 50cm-wide neutral central section. Craig Scarborough suggested at the time that "this shape is to create a vortex along the Y250 axis." As Craig explains elsewhere, "flow structures along this axis [250mm from the centreline] drive airflow under the floor towards the diffuser and around the sidepod undercuts."
So how does such a shape create streamwise vorticity? Well, the answer lies in a subfield of fluid mechanics called 'secondary flows', (with thanks to Professor Gary Coleman of Southampton University, for pointing me in the direction of this field). Such flows typically involve a primary flow - with the streamlines oriented in a particular direction, and a vorticity field perpendicular to the primary flow - in which there is also some type of differential convection relative to the primary flow. ('Convection' here simply means the transport of fluid by bulk motion, sometimes referred to as advection if there is any confusion with thermal convection). This differential convection tilts and stretches the vorticity lines, increasing the magnitude of the vorticity, and re-directing it in a streamwise orientation. The streamlines corresponding to this vorticity constitute the secondary motion, superimposed upon the primary streamlines.
This type of secondary flow is exactly what Red Bull are using to create separated streamwise vortices from the boundary layer on their front-wing. But before proceeding further, let's establish some notation. In what follows, we shall denote the streamwise direction as x, the direction normal to the wing as y, and the spanwise direction as z. We also have three components for the velocity vector field, which will be denoted as U, V and W, respectively. There is also a vorticity vector field, whose components will be denoted as ωx, ωy, and ωz.
On the underside of the front-wing is a boundary layer, and like all boundary layers, there is a velocity gradient ∂U/∂y in a direction normal to the wing, given that the velocity is zero at the solid surface. This entails that the boundary layer possesses vorticity in a spanwise direction ωz. The vortex lines in this boundary layer are perpendicular to the streamwise direction of flow. The trick is then to convert some of this spanwise vorticity into streamwise vorticity ωx. It transpires that the way to do this is to create a lateral pressure gradient ∂p/∂z.
Now, the front-wing operates in ground effect, so the pressure in an elevated mini-arch will be less than it is underneath the adjacent portion of the main plane, creating just such a pressure gradient. The crucial point is that this lateral pressure gradient corresponds to the creation of a spanwise-gradient in the streamwise velocity ∂U/∂z > 0. To see why this is crucial, however, we need to look at the Vorticity Transport Equation (VTE) for ωx, the streamwise component of vorticity. The effect in question can be seen by studying incompressible, inviscid, laminar flow, so we can simplify the VTE by omitting the turbulent and viscous terms to obtain:
Dωx/Dt = ωx(∂U/∂x) + ωy(∂U/∂y) + ωz(∂U/∂z)
The left-hand side here, Dωx/Dt, is the material derivative of the x-component of vorticity; it denotes the change of ωx in material fluid elements convected downstream by the flow. Now, we started with ωz > 0 in the boundary layer, and by virtue of creating a lateral pressure gradient, we also have ∂U/∂z > 0. This means that the third term on the right-hand side in the equation above is positive, which (assuming the other pair of terms are non-negative) entails that Dωx/Dt > 0.
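To make this concrete, here's a minimal numerical sketch (my own construction, with illustrative numbers rather than anything measured from a real front-wing) showing that the ωz(∂U/∂z) source term is non-zero precisely where the spanwise velocity gradient exists:

```python
# The vortex-tilting term in the VTE: spanwise vorticity omega_z, combined
# with a spanwise gradient dU/dz in the streamwise velocity, feeds omega_x.
import numpy as np

ny, nz = 32, 64
y = np.linspace(0.0, 0.01, ny)      # wall-normal coordinate, m
z = np.linspace(-0.05, 0.05, nz)    # spanwise coordinate, m

# Model boundary-layer profile with a spanwise-varying edge velocity U_e(z),
# mimicking the lower pressure (higher speed) under an elevated mini-arch.
U_e = 50.0 + 10.0 * np.exp(-(z / 0.02) ** 2)   # m/s
U = np.outer(y / y[-1], U_e)                   # U(y, z), zero at the wall

dU_dy = np.gradient(U, y, axis=0)
dU_dz = np.gradient(U, z, axis=1)

omega_z = dU_dy            # spanwise vorticity of the shear layer (up to sign)
tilting = omega_z * dU_dz  # the omega_z * (dU/dz) source term for D(omega_x)/Dt

print("peak |omega_z * dU/dz| =", np.abs(tilting).max(), "1/s^2")
# Non-zero only where dU/dz != 0: the spanwise velocity gradient skews the
# initially spanwise vortex lines into the streamwise direction.
```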
Thus, the creation of the spanwise-gradient in the streamwise velocity ∂U/∂z, skews the initially spanwise vortex lines ωz until they possess a significant component ωx in a streamwise direction. The lateral pressure gradient has created streamwise vorticity.
As Peter Bradshaw writes, "if the lateral deflection that produces longitudinal vorticity extends for only a small spanwise direction, then the longitudinal vorticity becomes concentrated into a vortex," (Turbulent secondary flows, Ann. Rev. Fluid Mechanics 1987, p64). Which is exactly what Red Bull, and for that matter, many other Formula 1 teams, do when they incorporate mini-arches into their front-wings.
As a final aside, note that there is an interesting duality at the heart of fluid mechanics, namely that between a description which uses the velocity and pressure fields, and a description which uses the vorticity field instead. The vorticity has been described as "the sinews and muscles of fluid mechanics," (Kuchemann 1965, Report on the IUTAM symposium on concentrated vortex motions in fluids, J. Fluid Mech. 21). P.A. Davidson points out that in the case of incompressible flow, because pressure waves can travel infinitely fast, the velocity vector field is a non-local field; the vorticity field, in contrast, is local. "While linear momentum can be instantaneously redistributed throughout space by the pressure field, vorticity can only spread through a fluid in an incremental fashion, either by diffusion or else by material transport (advection). Without doubt, it is the vorticity field, and not [the velocity field], which is the more fundamental," (Turbulence, 2004, p39).
An aerodynamicist with an especially strong visual imagination, perhaps someone who had been stimulated to develop such mental capabilities to compensate for dyslexia, might be able to develop a better understanding of the fluid flow around a Formula 1 car by thinking in terms of vorticity, or by developing the ability to mentally switch back and forth between the vorticity and velocity representations. Such an individual might even reject tools such as CAD and CFD, preferring instead to work on a drawing board...
Saturday, November 12, 2011
Linking Red Bull's fuel loads to McLaren's rear-wing
One of the oddities of the 2011 Formula 1 season has been the contrast between the alacrity with which McLaren responded to the failure of their experimental 'bagpipe' exhaust system in pre-season testing, and their belated, late-season introduction of a Red Bull-style rear-wing, featuring a more powerful DRS effect.
Whilst a new and highly effective exhaust-blown diffuser was available on the McLaren from the first race, the new rear-wing combination, with smaller flap and larger main plane, only began to make sporadic appearances in practice from the middle of the season, and the system was only definitively installed for the Japanese Grand Prix at Suzuka.
Now, it's well-understood that a major aerodynamic component cannot be changed independently of the other primary aerodynamic components on a car, so McLaren presumably needed to make changes to the airflow feeding the rear-wing, or deal with the consequent change to the centre-of-pressure, before they could introduce the smaller flap design. Nevertheless, given the clear benefits of a powerful DRS system, as demonstrated by Red Bull from day one, McLaren do seem to have been a little tardy in this respect.
There is, however, a possible exculpatory explanation. Mark Hughes has recently drawn attention to the fact that during Friday practice this year, Red Bull have apparently used a second stint fuel load in the long run phase of these sessions, whilst McLaren have used a first stint fuel load. Conversely, Red Bull have tended to fuel more heavily on the short-run practice laps. This fuel-load combination has disguised Red Bull's real qualifying pace, relative to McLaren, but exaggerated their prospective race pace.
Perhaps, then, in the early stages of the season, McLaren came away from the races believing that their potential qualifying performance was stronger than their potential race performance, relative to Red Bull, and they therefore needed to continue optimising their car for race performance. This, in turn, meant retaining a larger rear-wing flap with a less powerful DRS stall.
Monday, October 31, 2011
Airbox spillage and fluidics
A couple of weeks ago, the FIA issued a Technical Directive to the Formula One teams, announcing that off-throttle blowing of the exhausts will be severely curtailed in 2012 by engine mapping restrictions.
In combination with stringent requirements on the position and angle of the exhaust exits, this is intended to minimise the exploitation of exhaust flow for aerodynamic purposes. It will, however, have a secondary consequence. As Gary Anderson recently explained, off-throttle exhaust flow also serves to reduce spillage from the airbox:
"In the past when the driver closed the throttle to slow for a corner, the airbox spillage became a lot worse. If the airflow attachment on the sides of the engine cover was not good, the performance of the rear wing would be compromised – not something the driver wants under braking or on corner entry.
"Step forward the blown diffuser. Hot or cold blowing allows the engine to work like an air pump, moving this airflow through and out of the exhausts. This reduces the potential turbulent airflow creating negative performance on the rear wing.
If off-throttle blowing of the exhausts is genuinely to be prohibited next year by means of engine mapping restrictions, this will presumably re-create the problem of airflow spilling out of the airbox when the driver lifts off the throttle on turn-in to a corner.
So here's an idea: Why not introduce a fluidic switch which, under certain circumstances, re-routes the airbox airflow through the chassis to the lower leading edge of the sidepods? This could have the joint benefit of boosting the velocity of the underbody flow, and improving airflow to the rear wing, just at the time when the driver most needs it, when the car is in pitch under braking and turn-in.
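To get a sense of the airflow involved, here's a rough estimate (my own figures, not Gary Anderson's) of what a naturally-aspirated F1 V8 swallows, and hence how much ram air must spill when the driver lifts:

```python
# Approximate intake mass flow for a four-stroke engine: it ingests its
# displacement once every two revolutions.
RHO_AIR = 1.2          # kg/m^3
DISPLACEMENT = 2.4e-3  # m^3 (2.4-litre V8)

def engine_mass_flow(rpm: float, volumetric_efficiency: float = 1.0) -> float:
    """Intake mass flow in kg/s."""
    intakes_per_second = rpm / 60.0 / 2.0
    return RHO_AIR * DISPLACEMENT * intakes_per_second * volumetric_efficiency

print(f"at 18,000 rpm: {engine_mass_flow(18000):.2f} kg/s")
print(f"at  4,000 rpm, throttled (assumed VE 0.3): {engine_mass_flow(4000, 0.3):.2f} kg/s")
# Roughly 0.4 kg/s at full throttle; off-throttle the engine pumps far
# less, and the excess ram air must spill around the airbox and engine cover.
```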
Friday, October 28, 2011
Exhaust-blown diffusers in 2012?
The 2012 Formula One regulations are intended to prohibit the use of exhaust-blown diffusers: stringent requirements have been placed on the location of the exhaust exit, and a recent announcement from the FIA suggests that engine mapping restrictions will be imposed to eliminate off-throttle pumping of the exhaust jet.
Craig Scarborough has produced a fantastic analysis of the exact restrictions to be placed on the location and orientation of the exhaust exit. In short, these move the exhaust exit to at least 500mm in front of the rear axle line, and 250mm above the reference plane underneath the car. The exhaust exit must also be angled upwards by at least 10 degrees. Hence, it will no longer be possible to blow the exhaust directly between the outer edge of the diffuser and inner face of the rotating rear wheel. Moreover, it will be illegal to place any sprung bodywork in a cone-shaped region, aligned with the exhaust exit, diverging at 3 degrees, and terminating at the rear axle line.
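As an aside, the excluded region is easy to express computationally. Here's a toy checker (my own reading of the summary above, not the FIA's wording, with illustrative coordinates):

```python
# Does a bodywork point fall inside the excluded region: a cone diverging
# at 3 degrees from the exhaust exit, terminating at the rear axle line?
import math
import numpy as np

def in_excluded_cone(point, exit_pos, exit_dir, axle_x, half_angle_deg=3.0):
    """point, exit_pos: xyz (x positive rearwards); exit_dir: unit vector."""
    p = np.asarray(point, dtype=float)
    v = p - np.asarray(exit_pos, dtype=float)
    if p[0] > axle_x:                   # region terminates at the rear axle
        return False
    axial = float(np.dot(v, exit_dir))  # distance along the cone axis
    if axial <= 0.0:
        return False
    radial = math.sqrt(max(float(np.dot(v, v)) - axial**2, 0.0))
    return radial <= axial * math.tan(math.radians(half_angle_deg))

# Exhaust exit 500 mm ahead of the axle (axle at x = 0.5 m), 250 mm up,
# angled 10 degrees upwards, per the summary above:
exit_pos = np.array([0.0, 0.25, 0.0])
exit_dir = np.array([math.cos(math.radians(10)), math.sin(math.radians(10)), 0.0])
print(in_excluded_cone([0.3, 0.31, 0.0], exit_pos, exit_dir, axle_x=0.5))  # True
```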
So will this be sufficient to eliminate exhaust-blown diffusers? Well, the first thing to note is that whilst it will be impossible to point the exhaust exit down at the diffuser, this won't necessarily prevent the exhaust jet itself from playing in that direction. When an exhaust jet exits into a cross-stream, the jet almost behaves like a deformable solid, as emphasised by F. L. Parra and K. Kontis in their 2006 paper, Aerodynamic effectiveness of the flow of exhaust gases in a generic formula one car configuration, from which the illustration here is taken.
If the exhaust exit is placed flush in the rearward face of sidepods sweeping downwards at a fairly steep angle, then the freestream airflow could deflect the exhaust jet towards the diffuser. The degree to which the jet is deflected is determined by the ratio between the velocity of the jet and the velocity of the cross-stream flow. The smaller the ratio, the more the jet is deflected.
Hence, there is something of a trade-off necessary here. To allow the exhaust jet to be deflected down towards the diffuser requires a lower exhaust jet velocity, yet for the exhaust jet to be effective in that region, requires higher jet velocities. There may be a compromise solution available here, an optimum exhaust velocity, which permits the jet to be directed towards the outer edge of the diffuser with sufficient velocity to have an effect, but that's something which only CFD and wind-tunnel experimentation will be able to determine...
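The role of the velocity ratio can be illustrated with the classic Pratte & Baines (1967) power-law for the trajectory of a round jet in crossflow, z/(rd) = A(x/(rd))^m, with A ≈ 2.05 and m ≈ 0.28. The numbers below are my own illustrative choices, not measured F1 exhaust parameters:

```python
# Jet-in-crossflow penetration: how far the jet travels before the
# crossflow bends it over, as a function of the velocity ratio r.
import numpy as np

A, M = 2.05, 0.28
D = 0.075  # m, assumed exhaust pipe diameter

def jet_penetration(x: np.ndarray, u_jet: float, u_cross: float) -> np.ndarray:
    """Centreline displacement z(x) of the jet from its exit, Pratte-Baines form."""
    r = u_jet / u_cross
    return r * D * A * (x / (r * D)) ** M

x = np.array([0.25, 0.5, 1.0])  # m downstream of the exit
for u_jet in (100.0, 300.0):    # low vs high exhaust-jet velocity
    z = jet_penetration(x, u_jet, u_cross=60.0)
    print(f"u_jet = {u_jet:>5.0f} m/s: z =", np.round(z, 3), "m")
# The slower jet stays much closer to the bodywork: the deflection described
# above, which could carry the jet down towards the diffuser.
```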
Wednesday, September 21, 2011
Turbulence in Singapore
The fourth Grand Prix of Singapore will take place this weekend, and whilst the city-state forms an impressive backdrop for a race, overtaking has been notoriously difficult in previous years here. As difficult, in fact, as it is at Valencia, another street-circuit situated at sea level.
Which is intriguing, because the atmospheric density is greater at sea level, and this has certain aerodynamic ramifications. Assuming the pressure at Singapore is 101 kPa (the standard sea level pressure), a temperature of 20 degrees Celsius corresponds to an air density of 1.196 kg/m³.
Singapore, however, is also notoriously humid, and because water vapour is lighter than dry air, humid air is less dense than dry air. Assuming a relative humidity of 80%, and a temperature of 20 degrees, the air density at Singapore is about 1.190 kg/m³. (The tables here are taken from Soil mechanics for unsaturated soils, Fredlund and Rahardjo, p23-24).
In contrast, at a circuit such as Spa Francorchamps, which lies at an altitude of about 400m, the standard atmospheric pressure is about 95 kPa, and at a temperature of 20 degrees the air density is only 1.124 kg/m³.
Now, greater air density increases downforce and drag, but it also changes the Reynolds number:
Re = (airspeed x length x air density)/viscosity of air
The viscosity of air increases as a function of temperature, (as tabulated on the left here), but is largely independent of pressure. Hence, the increased air density entails that the Reynolds number of the airflow at Singapore (and Valencia) will be slightly greater than it is at venues such as Spa.
How much greater? Well, by a factor of 1.190/1.124 = 1.059. In other words, the Reynolds number at Singapore is about 6% greater than it is at Spa.
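For what it's worth, the figures can be checked from first principles (my own arithmetic, using the standard partial-pressure formula for humid air rather than the quoted tables, so the numbers land close to, but not exactly on, the table values):

```python
# Humid-air density via partial pressures, and the resulting Reynolds ratio.
import math

R_DRY, R_VAP = 287.05, 461.50  # J/(kg K), specific gas constants

def saturation_vapour_pressure(t_celsius: float) -> float:
    """Magnus approximation, Pa."""
    return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def air_density(p_pa: float, t_celsius: float, rel_humidity: float) -> float:
    t_k = t_celsius + 273.15
    p_v = rel_humidity * saturation_vapour_pressure(t_celsius)
    return (p_pa - p_v) / (R_DRY * t_k) + p_v / (R_VAP * t_k)

rho_singapore = air_density(101_000, 20.0, 0.80)
rho_spa       = air_density( 95_000, 20.0, 0.0)
print(f"Singapore: {rho_singapore:.3f} kg/m^3, Spa: {rho_spa:.3f} kg/m^3")
print(f"Reynolds-number ratio: {rho_singapore / rho_spa:.3f}")
# Density enters the Reynolds number linearly (viscosity being essentially
# pressure-independent), so the ratio of densities is the ratio of Re.
```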
Now, the Reynolds number specifies the ratio of the inertial forces to the viscous forces, and this is important for quantifying the effect of turbulence. The greater the Reynolds number, the more turbulent the flow. In particular, as a rule-of-thumb, viscous dissipation of turbulent energy only kicks in when the Reynolds number of the turbulent eddies approaches unity. Thus, if the air density at Singapore is 6% greater than that at Spa, the viscous dissipation of turbulence doesn't kick in until the turbulent eddies reach a size about 6% smaller than at Spa.
If the premises here are correct, and the reasoning is valid, then the cars will have a slightly longer turbulent wake at Singapore (and Valencia), than they have at venues such as Spa. That would help to explain why it's so difficult to overtake at Singapore (and Valencia).
Nevertheless, circuit design is still by far the most important factor. Zandvoort, after all, was situated amongst the sand dunes bordering the North Sea, yet the racing there was amongst the best you could care to see.
Sunday, September 18, 2011
The Miles-Phillips Mechanism
Two distinct mechanisms have been proposed to explain the means by which the wind is capable of generating waves and perturbations on the surface of lakes and oceans: Kelvin-Helmholtz instability (KHI), and the Miles-Phillips Mechanism.
Now, KHI reputedly requires a minimum wind speed of 6 m/s to make waves grow against the competing effects of gravity and surface tension. Thus, whilst KHI is relevant to the generation of long-wavelength perturbations, it is the Miles-Phillips Mechanism which is relevant to low wind speeds and short-wavelength perturbations. In particular, the Miles-Phillips Mechanism involves a resonant interaction between the surface of the water and turbulent fluctuations in the air.
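The ~6 m/s figure can be recovered from the classical stability analysis (a standard textbook calculation, sketched here with my own round numbers):

```python
# Kelvin-Helmholtz threshold for an air-water interface with gravity and
# surface tension. Instability requires
#   rho_a*rho_w/(rho_a+rho_w) * U^2 * k  >  (rho_w - rho_a)*g + sigma*k^2,
# and minimising the right-hand side over wavenumber k gives the threshold.
import math

RHO_W, RHO_A = 1000.0, 1.2  # kg/m^3, water and air
G, SIGMA = 9.81, 0.073      # m/s^2; N/m, surface tension of water

k_min = math.sqrt((RHO_W - RHO_A) * G / SIGMA)   # most unstable wavenumber
u_crit = math.sqrt((RHO_A + RHO_W) / (RHO_A * RHO_W)
                   * 2.0 * math.sqrt((RHO_W - RHO_A) * G * SIGMA))

print(f"first unstable wavelength: {2 * math.pi / k_min * 100:.1f} cm")
print(f"critical wind speed: {u_crit:.1f} m/s")
# ~1.7 cm ripples at a threshold of ~6.7 m/s: consistent with the ~6 m/s
# quoted above, and why gentler breezes need the Miles-Phillips route.
```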
So, in the interests of science, I wandered down acorn-strewn paths to my local lake, to see if I could identify the Miles-Phillips Mechanism in action. What I observed over the course of several days was a complex sequence of meta-stable and transient patterns. All the photos here were taken at the same time of day, around 2pm.
The first couple of pictures are from Friday 16th September. There was a light breeze blowing from left-to-right here, and this appeared to maintain a band of short wavelength perturbations in the middle of the lake. There is a clearly-defined transition, however, towards the margins of the lake, where the ripples were of a visibly longer wavelength. The shorter modes completely disrupt the reflective properties of the lake, but you can still see distorted images of the surrounding trees in the areas with the longer wavelength disturbances.
This is in sharp contrast with the pattern exhibited two days previously, when a stable pattern of short-wavelength perturbations covered most of the lake. Note the absence of any reflective images at all.
On Sunday 18th September, the breeze was light, but rapidly fluctuating, and bands of short-wavelength perturbations would arise, and then dissipate, over a timescale of just a few minutes. In the first photo here, virtually the entire surface is smooth and reflective...
But within little more than five minutes, a band of short-wavelength ripples had covered the middle of the lake.
Such patterns would rise and fall, and drift back and forth across the lake as the local wind shifted and fluctuated. The wind variation was imperceptible from the viewpoint of the observer, and the patterns became as inexplicable and mesmerising as a mere screen-saver.
Saturday, September 17, 2011
The cost of motorsport books
Here's a rather stark illustration of US/UK pricing differentials.
The Autocourse 60 Years of Grand Prix Motor Racing is available on Amazon.co.uk for a whopping £44.96. Exactly the same book is also available on Amazon.com for the rather more affordable sum of $41.97, which equals £26.58 at current exchange rates.
There is, it seems, a somewhat different pricing strategy in different markets...
Monday, September 05, 2011
Formula 1 aerodynamics in the 1970s
For most of the 1970s, there seems to have been a fundamental schism in the front-end aerodynamic concept of Formula 1 cars. Some of the cars, such as the McLarens, Lotuses and Ferraris, continued to run with front wings, but another group appeared to abandon that concept for most of the decade, running instead a front spoiler/airdam/splitter. This latter group included luminaries such as March, Brabham and Tyrrell, with Jackie Stewart winning the 1971 and 1973 World Championships in Tyrrell designs sporting just such a front-end.
So what was the idea? Well, part of the motivation was presumably to reduce the lift, drag and turbulence created by the front wheels. The front spoilers were much wider than front wings, and partially shrouded the front wheels, diverting airflow down the sides of the car.
So that was part of the idea. The other possible motive is perhaps more interesting, because it involves ground-effect. A spoiler/airdam provides a vertical barrier which (i) maximises the high pressure stagnation point at the front of the car, and (ii) accelerates the airflow through the restricted gap between the spoiler/airdam and the ground surface. A horizontal splitter projecting from the bottom of the spoiler/airdam then takes advantage of the high pressure of the stagnation point to generate some extra downforce.
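To get a feel for the magnitudes involved, here's a toy Bernoulli estimate (my own illustration, with assumed numbers) of the suction generated by accelerating flow through the gap beneath an airdam:

```python
# Quasi-1D continuity plus Bernoulli between the freestream and the gap
# under the airdam (inviscid, incompressible).
RHO = 1.2  # kg/m^3

def gap_pressure_drop(v_freestream: float, contraction_ratio: float) -> float:
    """Static pressure drop (Pa) when the flow area contracts by the given
    ratio, so the velocity rises to v * ratio."""
    v_gap = v_freestream * contraction_ratio
    return 0.5 * RHO * (v_gap**2 - v_freestream**2)

v = 50.0  # m/s, roughly 180 km/h
for ratio in (1.5, 2.0, 3.0):
    dp = gap_pressure_drop(v, ratio)
    print(f"contraction x{ratio}: suction ~ {dp:.0f} Pa under the car")
# Even a modest contraction at 180 km/h yields thousands of pascals of
# suction, which, acting over the splitter and underbody area, amounts to
# useful downforce.
```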
A front airdam/spoiler is partially, then, a ground-effect device, which perhaps explains why cars such as the Brabhams and Tyrrells were still able to win Grands Prix against those utilising conventional front-wing arrangements. The photo here shows a Tyrrell running quite a degree of rake, which would serve to accentuate the ground-effect of the front spoiler.
So, perhaps surprisingly, ground-effect in Formula 1 actually predates the underbody venturi tunnels and skirts used on the Lotus 78/79. And in fact, Gordon Murray began experimenting with ground-effect on the Brabham BT44 back in 1974, arriving at "an inch-deep underbody vee, something like a front airdam, but halfway down the car." (Vacuum Clean-Up, Adam Cooper, Motorsport, May 1998, pp64-69).
The introduction of underbody venturis and skirts presumably spelt the death-knell for front spoilers, as the emphasis then shifted to feeding the underbody with as much airflow as possible. Still, it would be interesting to hear from those involved what the initial impetus was for adopting those spoilers, and how effective they really were.
Saturday, September 03, 2011
Suspension camber in Grand Prix racing
Formula One's latest cause célèbre revolves around Red Bull's decision to race at Spa with a greater degree of negative front-wheel camber than Pirelli recommended.
Negative camber simply means that both wheels are inclined inwards at the top. The benefit of this is that the outer wheel generates greater lateral force on the entry to a corner (so-called camber thrust, similar to the way a motorbike rider generates lateral force by leaning the bike over), but the disadvantage is that the inner shoulders of both front tyres suffer greater stress when the car runs in a straight line, and at Spa this caused both Red Bull drivers to suffer tyre blisters.
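As an aside, camber thrust is commonly folded into the so-called linear tyre model, Fy = Cα·α + Cγ·γ. The sketch below uses invented stiffness values, purely to show the relative sizes of the slip-angle and camber contributions; real figures are measured on tyre test rigs.

```python
# A minimal sketch of the linear tyre model; all stiffness values are
# invented for illustration.

C_ALPHA = 1500.0   # cornering stiffness, N per degree of slip angle (assumed)
C_GAMMA = 150.0    # camber stiffness, N per degree of lean (assumed; for car
                   # tyres it is typically an order of magnitude below C_ALPHA)

alpha = 4.0        # slip angle of the loaded outer tyre, degrees
for lean in (0.0, 2.0, 4.0):   # inclination of the tyre towards the corner, degrees
    fy = C_ALPHA * alpha + C_GAMMA * lean
    thrust = C_GAMMA * lean
    print(f"lean {lean:.0f} deg -> lateral force {fy:.0f} N ({thrust:.0f} N camber thrust)")
```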
It's interesting to recall, however, that in the pre-war era of Grand Prix racing, the cars were actually set up with visible levels of positive front-end camber. In other words, the front wheels were inclined outwards at the top.
So why was this? Well, there seem to be at least two distinct reasons. The first was relevant prior to the mid-1930s, when cars employed what now look like rather primitive beam axle front suspension systems. Under the extra load generated by braking, the front axle would sag, and pull the front wheels inward at the top, as illustrated in this diagram taken from Matt Joseph's excellent 'Collector Car Restoration Bible: Practical Techniques for Professional Results'. Thus, a degree of positive static camber was necessary to offset this effect.
The eventual transition to independent, double-wishbone, ball-joint suspension, meant that wheel camber was no longer affected by the loads generated under straightline braking (or acceleration). However, even after the adoption of more modern suspension in the mid-1930s, the Mercedes and Auto Union Grand Prix cars continued to run with appreciable levels of positive camber. The primary reason for this appears to involve a concept called the scrub radius.
Now, when the front wheels of a car are steered, the wheels pivot around some axis. Originally, this steering axis was implemented with a physical rod called a king-pin, which was attached to each end of the beam axle. With independent, double-wishbone suspension, this king-pin is replaced by the line drawn between the upper and lower ball-joints at the outer end of the wishbones. This axis is also the line along which the weight of the car is projected down to the ground. The distance between the point where this line intersects the ground and the contact patch of the tyre, is called the scrub radius.
As Joseph explains (p261), a non-zero scrub radius causes several problems: it puts large forces into the king-pins; it acts like a lever, thereby putting large shocks into the steering; and it makes it harder to steer a car. Positive camber was the common solution devised for minimising the scrub radius. If the wheels are inclined outwards at the top, then the contact patches will be placed directly under, or at least closer to, the point where the steering axis intersects the road surface.
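A simple front-view geometry sketch shows how positive camber pulls the contact patch in towards the steering axis's ground intercept; all the dimensions below are invented for illustration.

```python
import math

# 2-D front-view sketch of the scrub-radius geometry described above. The
# x axis points inboard from the wheel centre-plane (x = 0); the steering
# axis runs through the two ball joints. All dimensions are invented.

r_tyre = 0.35                  # tyre radius, m
x_lo, z_lo = 0.10, 0.20        # lower ball joint: inboard offset, height (m)
x_up, z_up = 0.16, 0.50        # upper ball joint, further inboard (kingpin inclination)

slope = (x_up - x_lo) / (z_up - z_lo)
x_axis = x_lo - slope * z_lo   # where the steering axis meets the ground

for camber in (0.0, 2.0, 4.0): # positive camber = wheel top leaning outboard
    # tilting the wheel about its centre moves the contact point inboard
    # by roughly r * tan(camber)
    x_contact = r_tyre * math.tan(math.radians(camber))
    scrub = x_axis - x_contact # gap between axis intercept and contact point
    print(f"camber {camber:.0f} deg -> scrub radius ~ {scrub*1000:.0f} mm")
```

Running it, the scrub radius shrinks from around 60 mm to around 36 mm as static camber goes from zero to four degrees, which is exactly the effect Joseph describes.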
There's just one more complication to consider. Under the chassis roll generated by cornering, a double-wishbone suspension system will experience a positive camber increment on the more heavily loaded outer wheel, and a negative camber change on the lightly-loaded inner wheel. By setting a car up with a degree of positive static camber, this will result in the outer wheel acquiring an even greater degree of positive camber during cornering, while the inner wheel reaches a more vertical inclination, as nicely demonstrated in the photo of the Mercedes above.
Thursday, September 01, 2011
Spot the difference
This is Vittorio Brambilla, otherwise known as the Monza Gorilla, and best remembered for crashing immediately after winning the 1975 Austrian Grand Prix.
Not to be confused with...
Michela Vittoria Brambilla, Italian beauty queen, philosophy-graduate, businesswoman, and erstwhile Minister of Tourism in Silvio Berlusconi's government.
Wednesday, August 31, 2011
Alonso vs Webber and Hamilton at Eau Rouge
It's difficult to find a precedent for Mark Webber's frightening pass on Fernando Alonso last Sunday, but there is an interesting contrast.
On the first lap of the 2007 Belgian Grand Prix, McLaren team-mates Alonso and Hamilton raced wheel-to-wheel down to Eau Rouge, with Hamilton on the inside for the left-hand entry.
On that occasion, however, Fernando was able to take more speed into the corner, and claim the position into the right-handed uphill element. Here's Lewis's account of it at the time:
"At Eau Rouge it was just common sense to ease off a fraction. Fernando had the momentum and was going quicker into it. It would have been stupid of me to keep it flat, but I was tempted. That worked in a Formula 3 car in the wet, but I'm not sure it would in a Formula 1 car..."
The two situations are not completely similar, because Webber was able to use the slipstream on Sunday, and gain extra momentum over Alonso. Nevertheless, the fact that Hamilton failed to make the move stick from the inside against the same adversary, provides a vivid demonstration of just how much commitment Webber needed.
Monday, August 22, 2011
Wittgenstein's aircraft engine
Ludwig Wittgenstein's Tractatus Logico-Philosophicus (1921) consists of numbered paragraphs, the first of which reads, 'The world is everything that is the case', and the last of which states, 'Whereof one cannot speak, thereof one must be silent.'
As Anthony Quinton explained in discussion with Bryan Magee, Wittgenstein "detested...the idea of philosophy as a trade, a 9-to-5 occupation, which you do with a part of yourself, and then go off and lead the rest of your life in a detached and unrelated way. He was a man of the utmost moral intensity. He took himself and his work with very great seriousness. When his work wasn't going well he got into a desperate and agonized condition. The result of this displays itself in his manner of writing. You feel that his whole idea of himself is behind everything that he says...[He] doesn't want to make the thing too easy - he doesn't want to express himself in a way that people can pick up by simply running their eyes over the pages. His philosophy is an instrument for changing the whole intellectual aspect of its readers' lives, and therefore the way to it is made difficult," (Talking Philosophy, p83).
Wittgenstein, however, came to philosophy by starting off as an aeronautical engineer at Manchester University between 1908 and 1910. Here, he devised and patented a new design of aircraft engine, but became interested in the mathematics used to describe his engine. The questions Wittgenstein began asking himself about the nature of mathematics then brought him to Bertrand Russell's Principles of Mathematics. After discussing this with Frege in Germany, Wittgenstein abandoned his aeronautical career, and went to Cambridge to study logic under Russell.
Wittgenstein's engine design is rather interesting, and a couple of recent papers have explained his concept in detail. Ian Lemco outlined Wittgenstein's aeronautical research in a 2007 paper, and co-wrote an exposition of his combustion chamber design with John Cater in 2009.
Ludwig, it seems, was inspired by an idea attributed to Hero of Alexandria in the 1st century AD: driving a propeller by emitting jets of gas from nozzles placed in the tips of the rotor-blades. In particular, Wittgenstein proposed that the tips of the rotors contain combustion chambers, and that the centrifugal force of the rotating propeller alone should be responsible for compressing the mixture of air and fuel; no need for pistons, in other words.
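As a back-of-envelope check on whether centrifugal compression alone is plausible, one can treat the air in the blade duct as an isothermal ideal-gas column in the rotating frame; the tip speeds below are invented but plausible figures.

```python
import math

# Isothermal ideal-gas column in the rotating frame:
#   dp/dr = rho * omega^2 * r, with rho = p/(R*T), integrates to
#   p(tip)/p(hub) = exp(omega^2 * r_tip^2 / (2*R*T)) = exp(v_tip^2 / (2*R*T)).
# Temperature and tip speeds are assumed values, for illustration only.

R = 287.0            # gas constant for air, J/(kg*K)
T = 300.0            # assumed duct temperature, K
for v_tip in (200.0, 300.0, 340.0):   # blade tip speed, m/s
    ratio = math.exp(v_tip**2 / (2 * R * T))
    print(f"tip speed {v_tip:.0f} m/s -> pressure ratio ~ {ratio:.2f}")
```

A pressure ratio of around two at a near-sonic tip speed is modest next to a piston engine's compression stroke, which perhaps hints at why the scheme remained a curiosity.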
In modern terms, Wittgenstein proposed a tip-jet engine design. Such engines subdivide into cold-tip jets and hot-tip jets: the former are driven by, say, compressed air, created by a remote compressor, while the latter are driven by the direct exhaust jet flow of combustion. The Sud-Ouest Djinn helicopter, for example, employs cold-tip jets, while the Hiller YH-32 Hornet uses hot-tip jets.
All of which sounds not totally dissimilar to the distinction between hot-blown and cold-blown diffusers in modern-day Formula One...
Sunday, August 21, 2011
Weak polygyny and Formula One
Weak asymmetries are responsible for just about everything we experience.
Most of the universe we observe, all the galaxies and the stars and the planets, is composed of matter rather than anti-matter, yet the universe should have started with equal amounts of the two. If all the processes in particle physics were exactly symmetric, then most of the matter and anti-matter should have mutually annihilated, yielding a universe containing almost nothing but photon radiation.
What we actually observe is approximately two billion photons for every proton or neutron of matter, and in effect, this figure expresses the exact asymmetry between matter and anti-matter. It's thought that as a result of a small asymmetry in certain high-energy processes, the early universe developed slightly more quarks than anti-quarks. To be more precise, there were a billion-and-one quarks for every billion anti-quarks. Two photons were produced for each annihilation event between a quark and an anti-quark, and the remaining quarks were bound into protons and neutrons, hence the current universe possesses approximately two billion photons for every proton or neutron of matter.
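That bookkeeping is simple enough to mirror in a few lines, keeping the deliberate simplification above of one surviving quark per proton or neutron:

```python
# Mirrors the bookkeeping above; the one-surviving-quark-per-nucleon
# simplification is kept exactly as stated in the text.
antiquarks = 10**9
quarks = antiquarks + 1              # the one-in-a-billion asymmetry
annihilations = antiquarks           # every anti-quark finds a partner
photons = 2 * annihilations          # two photons per annihilation event
survivors = quarks - annihilations   # the matter left over
print(f"{photons // survivors:,} photons per survivor")  # 2,000,000,000
```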
So the weak asymmetry between quarks and anti-quarks is necessary to explain the existence of all the stars and planets. But what about human culture and civilization, all its cities and technologies and literature? How do these emerge from evolutionary biology?
One suggestion is that the weak polygyny of human society is a necessary condition. Polygyny is a sexual asymmetry in which some of the males in a species possess stable reproductive relationships with multiple females in so-called harems, leaving the remaining males as bachelors. This leads to varying forms of intense competition between the males, which often manifests itself in sexual dimorphism, the existence of different male/female sizes or capacities.
Human polygyny is less than that of gorillas, where there is correspondingly a large difference between the size of the males and females, but greater than that of gibbons, who are monogamous, and where the males and females are duly of comparable size.
The evidence for human polygyny is rather strong. G. P. Murdock's Ethnographic Atlas, for example, lists 849 human societies, and finds that 83% are polygynous. And as Richard Dawkins points out in The Ancestor's Tale, research conducted by Laura Betzig indicates that "overtly monogamous societies like ancient Rome and medieval Europe were really polygynous under the surface. A rich nobleman, or Lord of the Manor, may have had only one legal wife but he had a de facto harem of female slaves, or housemaids and tenants' wives and daughters."
This weak polygyny is reflected in human sexual dimorphism, but because humans are an intelligent species, it has a physical and a cultural component. Men are, on average, larger and stronger than women, but men also seek to gain access to harems, not by direct competition, but by seeking power, wealth and status. As a by-product of this, virtually all of human culture, the philosophy, the politics, the science, the technology, the art, the business, and the sport, has been produced by men.
And what other activity in the world combines sport, business, politics and technology in such a tightly integrated package as Formula One? In essence, then, Formula One is a by-product of the human male desire to gain access to female harems. Small asymmetries matter.
Thursday, August 18, 2011
Front-wing ground effect
Red Bull, McLaren and Ferrari currently appear to be converging on the same aerodynamic solution: a high-rake, nose-down stance to maximise the ground effect component of front-wing downforce, (with the use of exhaust-blown diffusers to retain rear downforce). Front-wing ground effect has always had a role to play, but the current emphasis is perhaps a consequence of the new technical regulations introduced for the 2009 season, which permitted the front-wing to be much closer to the ground.
To understand front-wing ground effect, it's worth revisiting some research performed by Zhang, Zerihan, Ruhrmann and Deviese in the early noughties, Tip Vortices Generated By A Wing In Ground Effect. This examined a single-element wing in isolation from rotating wheels and other downstream appendages, but the results are still very relevant.
The principal point is that front-wing ground-effect depends upon two mechanisms: firstly, as the wing gets closer to the ground, a type of venturi effect occurs, accelerating the air between the ground and the wing to generate greater downforce. But in addition, a vortex forms underneath the end of the wing, close to the junction between the wing and the endplate, and this both produces downforce and keeps the boundary layer of the wing attached at a higher angle-of-attack.
The diagrams above show how this underwing vortex intensifies as the wing gets closer to the ground. In this regime, the downforce increases exponentially as the height of the wing is reduced. Beneath a certain critical height, however, the strength of the vortex reduces. Beneath this height, the downforce will continue to increase due to the venturi effect, but the rate of increase will be more linear. Eventually, at a very low height above the ground, the vortex bursts, the boundary layer separates from the suction surface, and the downforce actually reduces.
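Purely as a cartoon of those three regimes (the functional forms and thresholds below are made up to mimic the described shape, and are not fitted to Zhang et al.'s data), something like this reproduces the qualitative behaviour:

```python
import math

# A purely qualitative cartoon of the three ride-height regimes described
# above: made-up functional forms and numbers, not fitted to any data.

H_VORTEX_FADE = 0.15    # below this height/chord the vortex stops growing (assumed)
H_BURST = 0.05          # below this the vortex bursts (assumed)

def toy_downforce(h):
    if h > H_VORTEX_FADE:   # vortex + venturi: rapid, exponential-like growth
        return math.exp(6.0 * (0.30 - h))
    if h > H_BURST:         # vortex fading: growth continues, but more linearly
        return toy_downforce(H_VORTEX_FADE + 1e-9) + 8.0 * (H_VORTEX_FADE - h)
    # vortex burst: boundary layer separates and downforce falls away
    return toy_downforce(H_BURST + 1e-9) * math.sqrt(h / H_BURST)

for h in (0.30, 0.20, 0.15, 0.10, 0.06, 0.04, 0.02):
    print(f"h/c = {h:.2f} -> downforce (arbitrary units) = {toy_downforce(h):.2f}")
```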
So, for a wing in isolation, the ground effect is fairly well understood. One imagines, however, that the presence of a rotating wheel immediately behind the wing makes things a little more difficult!
The diagram here, from the seminal work in the 1970s by Fackrell and Harvey, demonstrates that the rotating wheel creates a high pressure region in front of it, (zero degrees is the horizontal forward-pointing direction, and 90 degrees corresponds to the contact patch beneath the tyre). Placing a high-pressure area immediately behind a wing will presumably steepen the adverse pressure gradient on the suction surface of the wing, causing premature detachment of the boundary layer. Hence, when the wings were widened in the new regulations, most designers immediately directed the endplates of the wings outwards, seeking to direct the flow away from those high-pressure areas.
Wednesday, August 17, 2011
Peridynamics
Q: So what exactly is peridynamics?
A: Well, it's a new formulation of solid mechanics, which in turn, is part of continuum mechanics. Continuum mechanics represents those parts of the macroscopic world which can be idealised as continuous, extended entities. If you've got a gas or a liquid, you can represent it using fluid mechanics. Fluids, however, don't have strength, whereas solids do. To represent a solid, you need to use solid mechanics.
Q: So why the need for a new formulation?
A: Well, it's basically all about fracture. The trouble with fracture is that, by definition, it constitutes a discontinuity in a solid, and given that solid mechanics is predicated upon the continuity of things, the conventional formulation struggles to deal with fracture.
Q: And what does peridynamics postulate to resolve the problem?
A: Cauchy's momentum equation, the governing equation of continuum mechanics, defines the force at a point by the divergence of the stress tensor. The divergence is, of course, a differential operator, and if your equations are based upon derivatives, then your equations will fail in the presence of a discontinuity. Peridynamics attempts to get around this by replacing the spatial derivatives of the stress tensor at each point with the integral of a force density function centred at that point. This, then, is a radical approach, which attempts to generalise from Cauchy's conception of the internal stresses in a solid. The field equations in this formulation, it is claimed, can be applied to discontinuities such as cracks.
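To make that concrete, the bond-based field equation takes the following standard form (reconstructed here from the peridynamics literature, not quoted from the original post), where $\mathcal{H}_x$ is the neighbourhood, or horizon, of the point $x$, $f$ is the pairwise force density, and $b$ is the external body force:

$$\rho(x)\,\ddot{u}(x,t) \;=\; \int_{\mathcal{H}_x} f\big(u(x',t)-u(x,t),\; x'-x\big)\,dV_{x'} \;+\; b(x,t)$$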
Q: Are there any philosophical implications?
A: Definitely, yes. On smaller length scales, where fluids and solids are discrete, people use something called Molecular Dynamics to represent substances. And the equations of Molecular Dynamics are intrinsically non-local; the net force on each particle is determined by the joint effect of all the inter-atomic forces due to other particles, not just those immediately adjacent to the particle in question. Finding the force on a particle by adding all the contributions from particles in a neighbourhood of that particle, is a discrete version of an integral. Conventional solid mechanics, however, is distinctly local. This means that the inter-theoretic relationship between Molecular Dynamics and conventional solid mechanics is very unsatisfactory. However, by using the non-local reformulation provided by peridynamics, the inter-theoretic relationship is far more satisfactory.
It's an interesting case, which demonstrates that macroscopic theories sometimes need to be reformulated using concepts and structures taken from the microscopic theory.
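As a toy illustration of that inter-theoretic point — my own sketch, not anything from the peridynamics literature beyond the standard bond-based form — the following shows the non-local force density collapsing onto the local elasticity term E·u″(x) as the horizon shrinks:

```python
import numpy as np

# Toy 1D bond-based peridynamics: the non-local force density
#   L(u)(x) = c0 * Integral_{-delta}^{+delta} [ u(x+xi) - u(x) ] dxi
# Taylor-expands to c0 * (delta^3 / 3) * u''(x) + O(delta^5), so choosing the
# micromodulus c0 = 3E/delta^3 makes it collapse onto the local elasticity
# term E * u''(x) as the horizon delta shrinks. All numbers are arbitrary.

E = 1.0                              # Young's modulus (arbitrary units)
x = np.linspace(0.0, 2 * np.pi, 2001)
dx = x[1] - x[0]
u = np.sin(x)                        # smooth test field, with u''(x) = -sin(x)

for m in (160, 80, 32):              # horizon expressed in grid spacings
    delta = m * dx
    c0 = 3.0 * E / delta**3
    inner = slice(m, len(x) - m)     # points whose full horizon fits the grid
    L = np.zeros(len(x) - 2 * m)
    for j in range(1, m + 1):        # trapezoidal sum over the bonds
        w = dx if j < m else 0.5 * dx
        L += c0 * w * (u[m + j: len(x) - m + j] - u[inner])
        L += c0 * w * (u[m - j: len(x) - m - j] - u[inner])
    err = np.max(np.abs(L - E * (-np.sin(x[inner]))))
    print(f"horizon = {delta:.3f}  max |non-local - local| = {err:.1e}")
```

The printed discrepancy falls away as the horizon shrinks, which is a one-dimensional shadow of the convergence relationship described above.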
Friday, August 12, 2011
Multi-element wings and DRS
So why are the wings on aircraft and racing cars broken up into multiple elements, with slots in-between? Well, it was found reasonably early in the history of aerodynamics that this technique enabled the total wing to continue generating lift at an angle of attack at which it would have stalled, had it been fashioned as a single element. The lift/downforce generated by a wing increases as the angle of attack increases, hence multiple element wings are a means of increasing peak lift/downforce. (In the case of aircraft, they are also a means of maintaining lift at the lower airspeeds associated with landing and taking-off).
But how does the introduction of slots achieve this effect? Well, A.M.O. Smith identified five distinct mechanisms in his 1974 paper, High-Lift Aerodynamics: slat effect, circulation effect, dumping effect, off-surface pressure recovery, and fresh-boundary layer effect.
So let's attempt to understand what these effects are. To start off, however, we need to recall some fundamental facts about how a wing works.
A wing generates lift/downforce because it generates a circulatory component to the airflow. The circulation only exists because of a thin layer of airflow adjacent to the wing called the boundary layer. Viscous effects operate in the boundary layer, but outside the boundary layer the airflow can be idealised as being inviscid.
When people speak of the velocity and pressure of the airflow above and below a wing, they are implicitly speaking of the velocity and pressure on the dividing line which separates the boundary layer from the inviscid airflow. Here, Bernoulli's law applies: if the airflow is accelerated, the pressure decreases, whilst if the airflow decelerates, the pressure increases.
The low pressure surface of a wing initially accelerates the airflow, and then decelerates it towards the trailing edge. Hence, there is higher pressure at the trailing edge than at the point of maximum velocity, and this corresponds to an adverse pressure gradient along the latter part of the boundary layer.
The circulation around a wing is crucially dependent upon the boundary layer remaining attached to the surface of the wing. If the adverse pressure gradient is too steep, reverse flow ensues, the boundary layer detaches, and the wing stalls. This will happen as one attempts to increase the amount of lift/downforce by increasing the angle of attack.
Ok, so that's some of the fundamentals of wing aerodynamics. Now, if the boundary layer detaches when the adverse pressure gradient becomes too steep, it follows that reducing the severity of the adverse pressure gradient at a fixed angle of attack will keep the boundary layer attached. And this is exactly what a multi-element wing does.
Imagine for a moment a three-element racecar wing. The small leading element is called a slat, and the element behind the main plane is called the flap. Imagine the airflow coming from left to right. There will be an anti-clockwise circulatory component to the airflow around each element. One effect of this will be to reduce the acceleration of the airflow at the leading edge of the main element, and to thereby reduce the low pressure peak at that point. In simplistic terms, the circulatory component to the flow at the trailing edge of the slat is in an opposite direction to that at the leading edge of the main plane, hence the slot gap reduces the velocity of the airflow here. By reducing the low pressure peak at the leading edge of the main plane, the adverse pressure gradient along the main plane will be reduced, thereby helping the main plane to hang onto its boundary layer. This is the slat effect.
Meanwhile, the flap will have its own circulation, and as a consequence, at the point where the trailing edge of the main plane discharges its boundary layer, the airflow velocity will be greater than it would in the absence of a flap. Thus, the high pressure at the trailing edge of the main plane is reduced, once again reducing the adverse pressure gradient along the main plane, helping to keep the boundary layer attached. This is the dumping effect.
Now, according to Smith, the circulation of the flap enhances the circulation of the main plane, and in the presence of a slat, the circulation of the main plane enhances the circulation of the slat. As yet I can't intuitively see why this is the case. Smith claims, however, that this circulation effect is closely related to the dumping effect, and asserts that the downstream element induces cross-flow on the trailing edge of the upstream element, which enhances its circulation.
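The circulation effect can at least be verified numerically. Below is a minimal sketch of the classical lumped-vortex method for thin multi-element aerofoils (after Katz & Plotkin's Low-Speed Aerodynamics); the geometry, angles and panel counts are arbitrary illustrative choices, not any particular racecar wing.

```python
import numpy as np

# Lumped-vortex model of a thin two-element wing: each flat-plate element is
# split into panels; a point vortex sits at each panel quarter-chord and a
# collocation point at the three-quarter chord, where flow tangency is imposed.

def make_element(le, chord, incidence_deg, n_panels):
    """Panel endpoints for a flat plate; incidence is nose-up to the freestream."""
    th = np.radians(incidence_deg)
    direction = np.array([np.cos(th), -np.sin(th)])
    s = np.linspace(0.0, chord, n_panels + 1)
    return np.array(le) + np.outer(s, direction)

def solve(elements, u_inf=1.0):
    """Return the circulation carried by each element."""
    vortex, colloc, normal, owner = [], [], [], []
    for k, pts in enumerate(elements):
        for a, b in zip(pts[:-1], pts[1:]):
            t = (b - a) / np.linalg.norm(b - a)
            vortex.append(a + 0.25 * (b - a))      # vortex at quarter-chord
            colloc.append(a + 0.75 * (b - a))      # collocation at 3/4 chord
            normal.append(np.array([-t[1], t[0]])) # 90 deg anticlockwise from tangent
            owner.append(k)
    vortex, colloc, normal = map(np.array, (vortex, colloc, normal))
    n = len(vortex)
    A = np.zeros((n, n))
    for i in range(n):
        dx = colloc[i, 0] - vortex[:, 0]
        dz = colloc[i, 1] - vortex[:, 1]
        r2 = dx**2 + dz**2
        u = dz / (2 * np.pi * r2)        # velocity of a unit clockwise vortex
        w = -dx / (2 * np.pi * r2)
        A[i, :] = u * normal[i, 0] + w * normal[i, 1]
    rhs = -u_inf * normal[:, 0]          # freestream along +x
    gamma = np.linalg.solve(A, rhs)
    owner = np.array(owner)
    return [gamma[owner == k].sum() for k in range(len(elements))]

main = make_element(le=(0.0, 0.0), chord=1.0, incidence_deg=10.0, n_panels=40)
flap = make_element(le=(0.97, -0.21), chord=0.35, incidence_deg=35.0, n_panels=20)

g_alone = solve([main])[0]
g_main, g_flap = solve([main, flap])
print(f"main-plane circulation alone:     {g_alone:.4f}")
print(f"main-plane circulation with flap: {g_main:.4f}")
print(f"flap circulation:                 {g_flap:.4f}")
# Sanity check: thin-airfoil theory gives Gamma = pi * c * U * alpha for an
# isolated flat plate.
print(f"thin-airfoil prediction (alone):  {np.pi * 1.0 * 1.0 * np.radians(10.0):.4f}")
```

Running it shows the main plane carrying visibly more circulation with the flap present than alone. And since removing the flap is a crude stand-in for opening a DRS flap, it also gives the potential-flow shadow of the stall mechanism described at the end of this post, though an inviscid model cannot, of course, capture the boundary-layer detachment itself.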
The off-surface pressure recovery effect, meanwhile, is a consequence of the dumping effect. A downstream element reduces the deceleration towards the trailing edge of an upstream element, keeping the boundary layer attached, and releasing the boundary layer from the trailing edge of the surface, where it completes its deceleration in a manner which doesn't cause reverse flow. The boundary layer of the main plane, for example, will discharge into the region outside the boundary layer of the flap, and continue to decelerate until it reaches the trailing edge of the entire wing system, (see the diagram here from Zhang and Zerihan, Aerodynamics of a double-element wing in ground effect, 2003).
The final effect, the fresh boundary layer effect, means that each element acquires its very own boundary layer, fed by the freestream velocity. This keeps the boundary layer of each element thinner than the boundary layer on a single wing of the same length, and thinner boundary layers are able to withstand greater adverse pressure gradients.
So it's all about increasing circulation and mitigating the causes and effects of adverse pressure gradients.
Note, of course, that the function of a DRS rear-wing in modern Formula 1 is dependent upon these aerodynamic effects. The rear wing is designed so that the main plane is at an angle of attack which would cause the boundary layer to detach in the absence of the flap. With the flap in place, the severity of the adverse pressure gradient is reduced by the acceleration of the airflow around the leading edge of the flap. Open the flap, and the main plane is suddenly dumping its boundary layer into freestream airflow, as a result of which the adverse pressure gradient steepens, and the boundary layer detaches, causing the main plane to stall.
Sunday, August 07, 2011
Renormalization in quantum field theory
So what exactly is renormalization in quantum field theory? Well, quantum field theory makes experimentally verified predictions about collisions between particles. In particular, it makes predictions about the probability of going from a particular incoming state to a particular outgoing state, and these are called transition probabilities:
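(The equation image from the original post hasn't survived, but the standard expression it presumably displayed is:

$$P(\Psi_i \rightarrow \Psi_f) \;=\; \big|\langle \Psi_f |\, S \,| \Psi_i \rangle\big|^2 \;)$$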
An incoming particle is represented by a quantum state Ψi, the interaction process is represented by a scattering operator S, and the potential outgoing state is represented by the quantum state Ψf.
In many physically relevant situations, the incoming state has a specific energy Ei and momentum ki, and each possible outgoing state also has a specific energy Ef and momentum kf. An outgoing state with a specific momentum kf, also has a specific direction Ω associated with it.
These transition probabilities can be used to construct cross-section data. The cross-section for a reaction is effectively an expression of its probability. In practice, cross-sections provide an economical way of bundling the transition probabilities between entire classes of quantum states. For example, the differential cross-section σ(E,Ω) is proportional to the probability of a transition from any incoming state Ψi of energy E to any outgoing state Ψf in which the momentum vector kf points in the direction of Ω. Integrating a differential cross-section over all possible directions then gives a total scattering cross-section σ(E).
So, what about the scattering operator S? Well, this contains the information that specifies the nature of the interaction. The nature of the interaction is specified using objects from classical physics, either the interaction Hamiltonian or the interaction Lagrangian. The interaction Lagrangian will contain values for the masses and charges (aka coupling constants) of the interacting fields. The scattering operator can be expressed in terms of the interaction Hamiltonian density operator HI(x), which in turn, can be obtained from the interaction Lagrangian density. To be specific, the scattering operator can be expressed as the following Dyson perturbation series:
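(Again, the original equation image is missing; the Dyson series referred to is the standard one:

$$S \;=\; \sum_{n=0}^{\infty} \frac{(-i)^n}{n!} \int d^4x_1 \cdots \int d^4x_n \; T\big[\mathcal{H}_I(x_1)\cdots\mathcal{H}_I(x_n)\big] \;)$$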
T[HI(x1),...,HI(xn)] simply denotes the time-ordered product of the interaction Hamiltonian density operators.
Inserting the expression for the scattering operator into the expression for a transition probability, yields an infinite series, and the trouble is that every term in this series transpires to be a divergent integral. Renormalization involves taking only the first few terms in such a series, and then manipulating the integrals in those terms to obtain finite results.
The most sophisticated account of renormalization goes as follows. The troublesome integrals tend to be integrals over an infinite energy range, and the integrals go to infinity as the energy goes to positive infinity. So begin by introducing a cut-off Λ0 at a large, but finite energy. Correlate this cut-off with a particular conventional interaction Lagrangian, with conventional values for the masses and coupling constants. Now stipulate that the masses and coupling constants are functions of the cut-off energy Λ. Thus, as the upper limit of the integral is permitted to go to infinity, Λ → ∞, the masses and coupling constants become running masses and coupling constants, m(Λ) and g(Λ), and the Lagrangian acquires evolving counter-terms which incorporate those running masses and coupling constants. The functional forms of m(Λ) and g(Λ) are chosen to ensure that the integrals remain finite as the limit Λ → ∞ is taken.
Thus, for example, in the case of quantum electrodynamics, the Lagrangian is modified as follows:
The charge and mass have the following running values (c0 and its tilde-counterpart being proportional to ln(Λ/Λ0)):
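The images showing the modified Lagrangian and the running values are also missing here, so what follows is a hedged stand-in rather than a reconstruction: the standard one-loop running of the QED coupling from the electron loop alone, which carries exactly the ln(Λ/Λ0) dependence mentioned above.

```python
import math

# One-loop running of the QED coupling (electron loop only):
#   alpha(L) = alpha0 / (1 - (2*alpha0/(3*pi)) * ln(L/L0))
# Real QED running also gets contributions from muons, taus and quarks,
# pushing alpha at the Z mass to roughly 1/128; this sketch stays with the
# single-lepton textbook formula.

alpha0 = 1 / 137.036           # fine-structure constant at the electron scale
L0 = 0.511e-3                  # reference scale: the electron mass, in GeV

for L in (1.0, 91.2, 1000.0):  # probe scale in GeV (91.2 GeV ~ the Z mass)
    alpha = alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(L / L0))
    print(f"Lambda = {L:7.1f} GeV -> alpha ~ 1/{1/alpha:.1f}")
```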
This is called the Renormalization Group (RG) approach. It basically amounts to saying that there is a flow in the space of Lagrangians under energy-scale transformations. Changing the cut-off in divergent integrals is then seen to be equivalent to adding/subtracting extra terms in the Lagrangian, which in turn is equivalent to changing the values of the masses and coupling constants. There are, of course, numerous qualifications, exceptions and counter-examples, but that is the basic idea.
At a classical level in mathematical physics, the equations of a theory can be economically specified by a Lagrangian, hence it is typical in physics to identify a theory with its Lagrangian. Thus, a flow in the space of Lagrangians is also a flow in the space of theories; the RG approach is saying that different theories are appropriate at different energy scales.
I'm indebted here to the material in the following couple of papers, which also constitute excellent further reading for the enquiring mind:
Hartmann, S. (2001). Effective field theories, reductionism and scientific explanation, Studies in the History and Philosophy of Modern Physics, 32, pp267-304.
Huggett, N. and Weingard, R. (1995). The Renormalisation Group and Effective Field Theories, Synthese, Vol. 102, No. 1, pp. 171-194.
An incoming particle is represented by a quantum state Ψi, the interaction process is represented by a scattering operator S, and the potential outgoing state is represented by the quantum state Ψf.
In many physically relevant situations, the incoming state has a specific energy Ei and momentum ki, and each possible outgoing state also has a specific energy Ef and momentum kf. An outgoing state with a specific momentum kf, also has a specific direction Ω associated with it.
These transition probabilities can be used to construct cross-section data. The cross-section for a reaction is effectively an expression of its probability. In practice, cross-sections provide an economical way of bundling the transition probabilities between entire classes of quantum states. For example, the differential cross-section σ(E,Ω) is proportional to the probability of a transition from any incoming state Ψi of energy E to any outgoing state Ψf in which the momentum vector kf points in the direction of Ω. Integrating a differential cross-section over all possible directions then gives a total scattering cross-section σ(E).
So, what about the scattering operator S? Well, this contains the information that specifies the nature of the interaction. The nature of the interaction is specified using objects from classical physics, either the interaction Hamiltonian or the interaction Lagrangian. The interaction Lagrangian will contain values for the masses and charges (aka coupling constants) of the interacting fields. The scattering operator can be expressed in terms of the interaction Hamiltonian density operator HI(x), which in turn, can be obtained from the interaction Lagrangian density. To be specific, the scattering operator can be expressed as the following Dyson perturbation series:
T[HI(x1),...,HI(xn)] simply denotes a time-ordered permutation of the interaction Hamiltonian density operators.
Inserting the expression for the scattering operator into the expression for a transition probability, yields an infinite series, and the trouble is that every term in this series transpires to be a divergent integral. Renormalization involves taking only the first few terms in such a series, and then manipulating the integrals in those terms to obtain finite results.
The most sophisticated account of renormalization goes as follows. The troublesome integrals tend to be integrals over an infinite energy range, and the integrals go to infinity as the energy goes to positive infinity. So begin by introducing a cut-off Λ0 at a large, but finite energy. Correlate this cut-off with a particular conventional interaction Lagrangian, with conventional values for the masses and coupling constants. Now stipulate that the masses and coupling constants are functions of the cut-off energy Λ. Thus, as the upper limit of the integral is now permitted to go to infinity, Λ → ∞, the masses and coupling constants becoming running masses and coupling constants, m(Λ) and g(Λ), and the Lagrangian also acquires evolving counter-terms which incorporate those running masses and coupling constants. The functional forms of m(Λ) and g(Λ) are chosen to ensure that the integrals are now finite as the limit Λ → ∞ is taken.
Thus, for example, in the case of quantum electrodynamics, the Lagrangian is modified as follows:
The charge and mass have the following running values (c0 and its tilde-counterpart being proportional to ln (Λ/Λ0):
This is called the Renormalization Group (RG) approach. It basically amounts to saying that there is a flow in the space of Lagrangians under energy-scale transformations. Changing the cut-off in divergent integrals is then seen to be equivalent to adding/subtracting extra terms in the Lagrangian, which in turn is equivalent to changing the values of the masses and coupling constants. There are, of course, numerous qualifications, exceptions and counter-examples, but that is the basic idea.
At a classical level in mathematical physics, the equations of a theory can be economically specified by a Lagrangian, hence it is typical in physics to identify a theory with its Lagrangian. Thus, a flow in the space of Lagrangians is also a flow in the space of theories; the RG approach is saying that different theories are appropriate at different energy scales.
I'm indebted here to the material in the following couple of papers, which also constitute excellent further reading for the enquiring mind:
Hartmann, S. (2001). Effective field theories, reductionism and scientific explanation, Studies in History and Philosophy of Modern Physics, Vol. 32, pp. 267-304.
Huggett, N. and Weingard, R. (1995). The Renormalisation Group and Effective Field Theories, Synthese, Vol. 102, No. 1, pp. 171-194.
Saturday, August 06, 2011
The Hamilton duels
There were two great wheel-to-wheel battles in the Hungarian Grand Prix, both, predictably, featuring Lewis Hamilton. First off was the Hamilton-Vettel duel between laps 1 and 5, and then there was the equally thrilling Hamilton-Button contest between laps 47 and 52.
The first lap saw Hamilton and Button side-by-side, scrabbling for grip coming out of the first corner on their intermediate tyres, Hamilton taking second place down the outside into turn 2 as Button backed out of it. Lewis then set off after Vettel, the McLaren spectacularly sideways as it accelerated out of turn 2 on the second lap.
Once again, the McLarens were the only leading cars generating strong wing-tip vortices down the main straight, and Lewis clearly had a grip advantage over Vettel in these early laps on a damp track. Vettel, however, provided a robust defence.
On lap 3, Lewis decided to try the outside of Vettel into turn 2, briefly putting his outside wheels onto the grass as he did so. It was remarkably similar to the moment in Canada this year when Lewis was attempting to overtake Schumacher into the hairpin, although on that occasion Lewis was badly squeezed by the Mercedes driver making a second move under braking. This time round, Lewis was able to take a run around the outside of turn 2, but Vettel anticipated the move and simply ran Lewis out to the edge of the track, forcing him to back off and drop in behind the Red Bull.
On lap 4, Lewis again got a run on the Red Bull into turn 2, but this time decided to try the inside. Yet again, however, Vettel had an answer, and simply carried enough speed around the outside to retain his place into turn 3. Vettel was demonstrating all the racecraft which some have accused him of lacking, but on lap 5 he finally over-egged it into turn 2, running wide and letting Lewis into the lead.
The later Hamilton-Button duel was triggered, of course, when Lewis spun at the chicane on lap 47, Jenson taking the lead. Being on softer tyres, Lewis was potentially at an advantage in the battle which ensued, but Lewis's tyres were also wearing badly, to the extent that he was forced to pit at the end of lap 52. It's possible, therefore, that the two drivers actually had comparable levels of grip.
By lap 49, Button was extending the gap to Lewis, demonstrating he had superior grip on a mostly dry track surface. On lap 50, however, the rain began to fall again, and by the exit of the chicane, Lewis was back in the wheel-tracks of the other McLaren. Into turn 2 on lap 51, Jenson's famed ability to magically sense the levels of grip available momentarily deserted him, and he ran wide, letting Lewis back into the lead.
Lewis immediately gained a 2 second gap over Button, but struggled badly with grip over the remainder of the lap, and coming onto the main straight to start lap 52, Button was right behind him. With the advantage of DRS, Jenson overtook his compatriot into turn 1, a quartet of wing-tip vortices briefly streaming in their joint wake.
Down they went into turn 2, and Jenson turned into the corner a little defensively on a tighter line than normal, and missed the apex, Lewis cutting underneath to re-take the lead. Great stuff!
Battle was then suspended over the remainder of the lap as both drivers attempted to absorb the information and instructions the McLaren team were communicating vis-a-vis the potential requirement to fit intermediate tyres. Lewis was able to receive messages from the team, but unable to make himself heard in response, whilst Jenson was at one stage invited to queue behind Lewis as both cars were fitted with intermediates.
Ultimately, of course, Lewis's race-winning prospects were already done for, and the vital decision, the race-winning decision, was Jenson's choice not to pit.
Thursday, August 04, 2011
A way to subvert the blown diffuser ban?
Exhaust-blown diffusers will effectively be banned in Formula 1 from next year, but there may be other ways of blowing the diffuser and generating the side-edge vortices which appear to be crucial to maximising diffuser downforce.
For example, from 2014, Formula 1's engine formula will change from a normally aspirated 2.4 litre V8 to a 1.6 litre turbo-charged V6. The compressor in such an engine, driven by the exhaust turbine, constantly generates compressed air. Moreover, the inlet manifold of a turbo engine has a blow-off valve, specifically designed to release pressure when the driver lifts off the throttle or the throttle is closed. The blow-off valve could be vented down to the sides of the diffuser, providing vital extra downforce when a driver comes off the throttle turning into a corner.
From 2012, the regulations will prohibit exhaust-blown diffusers by stipulating that the exhausts are moved to a location in which they cannot influence the diffuser. These new regulations, however, will say nothing (as far as I'm aware) about blowing the diffuser with compressed air from the inlet manifold of a 2014 turbo engine!
Unfortunately, there is at least one potential snag: the Wikipedia entry on blow-off valves claims that "Motor sports governed by the FIA have made it illegal to vent unmuffled blowoff valves to the atmosphere." There is no citation, however, so it's difficult to ascertain if this is true, or even if it will apply to the 2014 F1 engine regulations. In fact, this is presumably something yet to be determined. Worth keeping an eye, then, on how those regulations are finally worded...
In the meantime, the teams could use compressed air cylinders to blow the diffusers, perhaps just for a qualifying lap. The primary declared purpose of these cylinders would be to supply the pneumatic valve system in the engine, of course, but as a safety measure, it might be necessary to vent excessive pressure. For safety. And cooling.
Tuesday, August 02, 2011
Diffusers and rake
A recent column by Mark Hughes (Autosport, July 21, p21), and a subsequent explanation Mark elicited from McLaren technical director Paddy Lowe (Autosport, July 28, p41), provide some extra illumination on the overall aerodynamic concept pursued in Formula 1 by Red Bull since 2010, and followed to some extent by other teams this year.
Both articles explain that the basic idea has been to run a car with a significant degree of rake, so that the front ride-height is lower than the rear. The effect of this is twofold: the front-wing generates greater downforce due to ground effect, and the rear diffuser also acquires the potential to generate greater downforce.
Maximising the downforce of the diffuser is, however, a subtle issue. The downforce generated by a diffuser is a function of two variables: (i) the angle of the diffuser, and (ii) the height above the ground. Generally speaking, the peak downforce of the diffuser increases with the angle of the diffuser. Then, for a fixed diffuser angle, the downforce generated will increase according to an exponential curve as the height reduces, until a first critical point is reached (see diagram above, taken from Ground Effect Aerodynamics of Race Cars, Zhang, Toet and Zerihan, Applied Mechanics Reviews, Vol. 59, January 2006, pp. 33-49). As the height is reduced further, the downforce will increase again, but according to a linear slope, until a second critical point is reached, after which the downforce falls off a cliff.
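To make the shape of that curve concrete, here is a purely illustrative toy parameterisation in Python. The functional forms and the constants are my own assumptions, chosen only to reproduce the three qualitative regimes described above; they are not fitted to the Zhang, Toet and Zerihan data:

```python
import math

def diffuser_downforce(h, h1=0.10, h2=0.04, k=25.0, s=8.0):
    """Toy downforce coefficient vs. non-dimensional ride height h.

    Three regimes, mirroring the description in the text:
      h >= h1      -- downforce rises exponentially as the height falls;
      h2 <= h < h1 -- past the first critical point the rise continues,
                      but only on a linear slope;
      h < h2       -- past the second critical point the vortices break
                      down and the downforce falls off a cliff.
    The branches are chosen to join continuously at h1 and h2.
    """
    if h >= h1:
        return math.exp(k * (h1 - h))      # exponential rise, reaching 1.0 at h1
    if h >= h2:
        return 1.0 + s * (h1 - h)          # linear continuation above the cliff
    d_max = 1.0 + s * (h1 - h2)            # value at the second critical point
    return d_max * (h / h2) ** 3           # vortex breakdown: rapid collapse

for h in (0.20, 0.12, 0.10, 0.07, 0.04, 0.02):
    print(f"h = {h:.2f}  ->  downforce = {diffuser_downforce(h):.3f}")
```

Running it shows the downforce climbing as the height falls, kinking onto a shallower slope at the first critical point, then collapsing below the second, which is exactly the behaviour a raked car must stay on the right side of.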
Without running any rake, the diffuser is limited by regulation to a shallower angle than seen in years gone by. By increasing the rake, the effective angle of the diffuser is increased, thereby increasing the potential peak downforce. However, increasing the rake also has the effect of increasing the height of the diffuser.
So, how does one combat the detrimental effect of increasing the height of the diffuser? Well, the key, I think, is to understand exactly how a reduction in height increases the downforce generated by a diffuser. The crucial point is that the edges of the diffuser generate a pair of counter-rotating vortices, and the magnitude of the downforce generated is determined by the strength of these vortices. The downforce increases exponentially as the height is reduced, because the strength of these vortices is increasing. The first critical point corresponds to the height at which the vortex strength begins to decrease, and the second critical point corresponds to the height at which the vortices break down.
So, to pose the question again, how do we mitigate the downforce-reducing effect of an increase in diffuser height? Simple: one merely uses the exhaust gases to boost the strength of the side-edge vortices to levels otherwise seen at lower heights.
In fact, this is to simplify the issue, because the exhaust gases playing on the sides of the diffuser have two effects: (i) to strengthen the side-edge vortices inside the diffuser, and (ii) to act as air curtains, preventing the ingress of turbulent air created by the rotating rear wheels.
So, with exhaust-blown diffusers to be banned from next year, the trick will be to find other ways of boosting the strength of those side-edge vortices. Do so, and you'll still be able to run your car with a significant degree of rake.
Sunday, July 31, 2011
Lewis Hamilton's Garden of Forking Paths
It's often said that there are more ways to lose a Grand Prix than to win one, and the diagram here makes that explicit.
Lewis Hamilton lost the Hungarian Grand Prix on Sunday primarily as a result of the tyre-choice made at the third pit-stop. Leading the race from Jenson Button and Sebastian Vettel, Lewis took his third set of options, while Jenson and Sebastian took a set of the harder, prime tyres. With thirty laps to the end of the race, Hamilton would require another pit-stop, whereas Button and Vettel wouldn't. That decision alone restricted Hamilton to third place, at best.
The subsequent drive-through penalty and stop for intermediate tyres merely reduced Lewis's highest possible finishing position to fourth, which he duly achieved after passing Webber.
The diagram here demonstrates twelve possible paths through the last thirty laps of the race. Branches to the left constitute errors. The branch furthermost to the left is the one actually followed, while the branch furthermost to the right is the one for which victory would have been most likely. All the other branches, with one exception, lead to an eventual 3rd or 4th place.
The one exceptional case corresponds to the scenario in which Lewis took primes at the third pit-stop, but still spun, and lost the lead to Jenson on the damp track-surface. If Lewis had avoided a drive-through penalty, he would then have finished either 2nd behind Jenson, or perhaps even have retaken the lead. Whether Lewis could have made a set of primes last thirty laps, however, is unknown.