Friday, September 25, 2009
It would be easy to think that the existence of night is solely a consequence of the rotation of the Earth and its location relative to the Sun. But it is not. It is a consequence of the expansion of the Universe. If the Universe were not expanding then, wherever we looked into space, our line of sight would end at a star. The result would be like looking into a forest of trees. In a universe that didn't expand, the whole sky would resemble the surface of a star; we would be illuminated by perpetual starlight. What saves us from this everlasting light is the expansion of the Universe. It degrades the intensity of the light from distant stars and galaxies, and it leaves the night sky dark. (John Barrow, The Artful Universe, p45.)
As the Sun sets in Singapore, and European visitors fight the disruption to their circadian cycles, the aesthetics of Formula 1 undergoes a phase transition. A silver strip of metal halide light runs between the colonial palm trees, beneath the bejewelled post-modernist towers, underneath the concrete stanchions of the flyovers, alongside the armadillo-contoured concert hall and theatre, and beside the neoclassical and Palladian civic architecture. Specular reflections shimmer from the compound surfaces of the cars; the Ferraris become molten lava, and the McLarens dissolve into liquid metal.
Permitting this extravagant display of light and pattern is the dark night sky, a phenomenon whose very existence requires a cosmological explanation, as recognized by the 19th century German physician and amateur astronomer, Heinrich Olbers. Olbers realised that if our universe were a static universe, infinite in space and time, and homogeneously populated with stars (or galaxies or galaxy clusters etc.), then the sky should be bright throughout the day and night. The fact that the night sky is dark therefore constitutes Olbers's paradox. Modern physics resolves the paradox by virtue of the fact that light travels at a finite speed, and by virtue of the fact that the Friedmann-Robertson-Walker models of relativistic cosmology represent our universe as an expanding universe of finite age.
In a universe of finite age, in which light travels at a finite speed, there will be a finite cosmological horizon around every astronomically observant species; the universe is approximately 14 billion years old, hence the light from stars more than 14 billion light years away has not had time to reach us. Moreover, in an expanding universe, the expansion red-shifts distant starlight towards energies invisible to the naked eye, and reduces the brightness of the light.
There is a twist, however, for "it seems that the background of the sky is bright, even at night. Of course, it is not as bright as the surface of the Sun, nor does it shine at the same wavelengths. However, according to models of the big bang, the entire universe was so hot around 14 billion years ago that each of its points was as luminous as the surface of the Sun. Each direction leaving from our eye reaches a point of this past Universe. And by the same reasoning as that of Olbers, even in the absence of every star, we should be surrounded by this enormous bright object, the early Universe...We indeed receive this radiation, but it is shifted towards long wavelengths and weakened...Since it is very old, the shift is very strong: redshifted by a factor greater than 1000, it has transformed the light into microwaves. This electromagnetic fossil radiation, a vestige of the primitive epoch, was detected for the first time in 1964. Today it is being exhaustively observed under the name of cosmological background radiation." (Jean-Pierre Luminet, The Wraparound Universe, p159-160).
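To attach a rough number to the weakening that Luminet describes (using standard textbook figures, rather than anything specific to his book): the temperature of blackbody radiation simply scales down with the cosmological redshift factor,

T_observed = T_emitted / (1 + z) ≈ 3000 K / 1100 ≈ 2.7 K ,

taking approximately 3000 K for the temperature of the universe when it became transparent, and a redshift factor of a little over a thousand. The result is the few-kelvin microwave background we observe today.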
This weekend then, the Formula 1 cars will race at night through the streets of Singapore, and will do so oblivious to the omnipresent background radiation, and the cosmological expansion which permits this visual and kinetic cornucopia.
Tuesday, September 22, 2009
Singapore and interstellar gas clouds
Sitting at a country pub on the banks of the canal, basking in the Sun beside a weeping willow, the barges chugging languorously past, I find that Singapore seems more than a world away. As I swat away an over-ambitious wasp, the skyline of this extraterrestrial plutopolis rises in the imagination like numerous glass, steel and concrete stalagmites, slowly precipitating from the steady drip of money over economic aeons.
Four and a half billion years ago, a cloud of interstellar gas and 'dust' (tiny grains of solid matter) contracted under the force of its own gravity, started to spin, and formed a rotating disk. The ball of material at the centre of the disk reached ever higher temperatures and pressures, until nuclear fusion ignited inside, and a star, our Sun, was born. The residual material in the surrounding disk then coalesced into an array of planets, our solar system.
The most abundant elements in the contracting cloud were hydrogen H, helium He, oxygen O, carbon C, nitrogen N, neon Ne, magnesium Mg, silicon Si, iron Fe and sulphur S. The silicon combined with oxygen to make silicates, and further combined with iron and magnesium to make what is colloquially known as rock. The remaining oxygen combined with hydrogen to make water. Within the outer reaches of the protoplanetary disk, the water was frozen, and the accretion of such icy masses gave the more distant planetesimals a head-start over the rocky masses which formed closer to the Sun. The more distant objects acquired sufficient mass to attract hydrogen, helium, and compounds such as methane CH4 and ammonia NH3, thereby creating the gas giants: Jupiter, Saturn, Uranus and Neptune.
On the surface of the third planet from the Sun, a rocky planet, oceans of liquid water formed. The oceans were populated by countless microscopic photosynthesizing organisms. Many had shells or skeletons made of calcium carbonate, and these shells and skeletons were continuously returned to the ocean floor, where they accumulated over the ages as layers of chalk or limestone. Over those same timespans, grains of silica SiO2, or sand, were created from the weathering of silicate rock on the surface of the planetary crust. Then, eventually, an intelligent species emerged from the biosphere of the planet, delved into the planetary crust, and devised construction materials such as glass (silicon dioxide which has been cooled sufficiently rapidly that the molecules are unable to form a regular crystal lattice), steel (iron judiciously doped with carbon atoms), and concrete (a coarse aggregate of limestone or gravel, combined with cement, water and sand, the cement itself a product of the heating and grinding of limestone and clay).
Four and a half billion years after the contraction of that cloud of interstellar gas and dust, Singapore rose as a glass, steel and concrete monument to the growth of complexity.
Sunday, September 20, 2009
Branching and interfering parallel universes
"This universe is constantly splitting into a stupendous number of branches, all resulting from the measurementlike interactions between its myriads of components. Moreover, every quantum transition taking place on every star, in every galaxy, in every corner of the universe is splitting our local world into myriads of copies of itself." (Bryce De Witt, 1970).
The many-worlds interpretation of quantum mechanics famously holds that if a physical system is prepared in a state which is a superposition with respect to the possible values of a quantity A, then when the value of A is measured, the universe splits into multiple branches, each of which realises one of the different possible definite values of quantity A. The superposed state Ψ is a sum
Ψ = c1 ψ1 + ⋅ ⋅ ⋅ + cn ψn ,
where each ψi is a quantum state in which quantity A possesses a definite value. The many-worlds interpretation proposes that when the universe branches, it branches into all the different states ψi from the superposition.
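As a toy numerical illustration of this formalism (not of any particular physical system), a superposed state can be represented by a normalised vector of complex coefficients c1,...,cn, with the squared modulus of each coefficient giving the weight of the corresponding branch:

import numpy as np

# Hypothetical superposition over three definite-value states psi_1, psi_2, psi_3,
# represented here purely by their complex coefficients c_1, c_2, c_3.
c = np.array([0.6, 0.3 + 0.4j, 0.2])
c = c / np.linalg.norm(c)            # normalise so that the |c_i|^2 sum to one

branch_weights = np.abs(c) ** 2      # weight assigned to each branch psi_i
print(branch_weights, branch_weights.sum())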
Extending this interpretation to the macroscopic world of human experience, it is postulated that the universe branches every time a choice is made, and that all the different possible lives we may have led if only our choices had been different, do in fact exist as different branches of the quantum universe.
The naive manner of picturing this process is to represent the different histories at each branching point as if they were the leaves of a book, radiating outwards from the spine. The spine in this analogy corresponds to a spacelike three-dimensional hypersurface, and the leaves radiating from it correspond to different four-dimensional space-time histories. John Earman illustrates this concept in the first diagram reproduced here, for the simple case where only two histories radiate from each branching point.
However, the notion that the entire universe branches in this style every time there is a measurement-like interaction renders such branching a highly non-local process, and tacitly supposes that there is a unique global time coordinate for the universe. Treating a measurement-like interaction as a point event in space-time, there will be many spacelike hypersurfaces which pass through that point; the selection of only one of these as the branching hypersurface requires one to accept that there is a preferred time coordinate for the universe.
To avoid these difficulties, one can suggest that the universe only branches locally as the result of a measurement-like interaction. To be specific, one can suggest that the future light-cone of the interaction event has multiple branches, one for each possible outcome of the interaction. If one imagines such a universe as a two-dimensional sheet, then the image is one in which there are numerous pockets in the sheet, formed by the multiple branches of the future light cones. Roger Penrose drew just such an image of a branching universe in 1979, reproduced as the second diagram here.
Returning to the many-worlds interpretation, it is important to note that the different branches ψi of the overall wave-function Ψ, do not themselves correspond to different classical universes. Whilst the states ψi do indeed bestow definite values upon the quantity A, they are still quantum states in their own right, and as such, they fail to assign definite values to all the quantities possessed by the physical system under consideration.
This is crucial, because advocates of the many-worlds interpretation can often be found claiming that a quantum universe is a universe whose basic ontological fabric consists of interfering classical universes. One could conceivably subscribe to the many-worlds interpretation of quantum measurement without endorsing this stronger ontological claim. The many-worlds interpretation of quantum measurement requires one to accept that the universe is continually branching into components of the quantum wave-function, whilst the stronger ontological claim requires one to accept that the entire quantum wave-function and its branches, consist of bifurcating, interfering and merging classical histories.
The stronger claim seems to be fuelled by the 'sum-over-histories', or path-integral formulation of quantum mechanics, in which each branch of the quantum wave-function corresponds to a different set of interfering classical histories, and in which the different branches of the wave-function interfere when there is interference between these different sets of classical histories.
For example, in the famous double-slit experiment, there are two possible sets of classical histories. If we label the slits as Slit A and Slit B, then one set consists of all the possible trajectories through Slit A, and the other consists of all the possible trajectories through Slit B. When both slits are open, the paths through the different slits duly interfere with each other to produce the overall wave-function on the distant screen.
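A minimal numerical sketch of this idea is given below; it is a Fraunhofer-style toy model with invented slit dimensions, not a simulation of any real apparatus. The complex amplitudes for the two families of paths, one per slit, are simply added, and the squared modulus of the sum gives the interference pattern on the screen.

import numpy as np

# Toy double-slit model: two point-like slits separated by d, a screen at distance L,
# and monochromatic light of wavelength lam. All of the numbers are illustrative only.
lam = 600e-9                               # wavelength (m)
d = 50e-6                                  # slit separation (m)
L = 1.0                                    # slit-to-screen distance (m)
k = 2 * np.pi / lam                        # wavenumber

x = np.linspace(-0.02, 0.02, 1001)         # positions on the screen (m)
r_a = np.sqrt(L**2 + (x - d / 2) ** 2)     # path length via Slit A
r_b = np.sqrt(L**2 + (x + d / 2) ** 2)     # path length via Slit B

amplitude = np.exp(1j * k * r_a) + np.exp(1j * k * r_b)   # sum over the two sets of paths
intensity = np.abs(amplitude) ** 2                        # interference fringes on the screen

Blocking either slit (dropping one of the two terms) removes the cross-term, and with it the fringes.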
(The interference between the different branches of the wave-function is purportedly removed by decoherence on macroscopic scales, thereby explaining why we never observe quantum superpositions on such length scales.)
In this sense, the classical histories form the warp and weft of the quantum fabric of the universe. Whilst this is a stronger ontological claim than the basic many-worlds interpretation, in many ways this is a more coherent picture than one in which an overall quantum state branches by fiat into its component quantum states.
Sunday, September 13, 2009
Genetic algorithms and Formula 1
The status quo has been rather turned on its head in Formula 1 this year, with the sport's two great leviathans, McLaren and Ferrari, struggling to adapt to the most radical change in technical regulations since 1983. The men and women from Woking and Maranello are generally considered to have the most sophisticated simulation technology in the business, yet it was an open question before the season began whether such a radical change in regulations would be most adequately dealt with by the intuitive and creative mind of an Adrian Newey, or the number-crunching power of inscrutable computer simulation.
The answer, of course, is that on this occasion the programmable calculators were comprehensively blown into the weeds. It appears that the simulation technology which was so successful when the regulations were stable and the cars evolved in a gradual, incremental manner, was decidedly leaden-footed when a change of regulations opened a new space of design possibilities. Which raises a rather interesting question about exactly what type of simulation technology McLaren and Ferrari have been using.
To explain, it is necessary to travel back to 1993, when an engineer called Adrian Thompson devised an experiment to test the result of applying Darwinian natural selection principles to the design of electronic circuits. Thompson selected a simple problem: find a circuit design which maps a 1 kHz input signal to an output signal of zero volts, and a 10 kHz input signal to five volts output. The possible circuit designs would be implemented by a Field Programmable Gate Array (FPGA), a microchip containing 100 logic gates, whose connections can be modified by the software code loaded into the memory of the chip.
Thompson randomly generated an initial population of 50 instruction sets, and then set about selecting and cross-breeding the fittest over numerous generations. For each instruction set, a computer fed the two input signals into the FPGA and recorded the output. Those instruction sets which came closest to satisfying the required map between input and output signals were selected as the fittest, and were then cross-bred, with some random mutations added, to produce the next generation. The process was then iterated.
The eventual solution, obtained after 4,000 or so generations, was one which no human circuit designer would have conceived:
The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest – with no pathways that would allow them to influence the output – yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones...It seems that evolution had not merely selected the best code for the task, it had also advocated those programs which took advantage of the electromagnetic quirks of that specific microchip environment. The five separate logic cells were clearly crucial to the chip’s operation, but they were interacting with the main circuitry through some unorthodox method – most likely via the subtle magnetic fields that are created when electrons flow through circuitry, an effect known as magnetic flux. There was also evidence that the circuit was not relying solely on the transistors’ absolute ON and OFF positions like a typical chip; it was capitalizing upon analogue shades of gray along with the digital black and white.
As Thompson himself admitted, "really, I don't have the faintest idea how it works!"
A process which obtains an optimal solution to an engineering problem by such natural-selection principles is called a genetic algorithm. Hence, returning to the subject of Formula 1, it is interesting to raise the possibility that during the recent period of comparative stability in the technical regulations, teams such as McLaren and Ferrari have been evolving their cars using genetic algorithms. Genetic algorithms have been around for a while, and one presumes that the experts who work in Formula 1 simulation technology are well aware of them. Thus, either they are already using them, or else perhaps they simply don't work yet, because the CFD technology isn't capable of selecting the fittest designs with sufficient speed to make the process practicable.
Applied to Formula 1 aerodynamic design, a genetic algorithm might work as follows: Take an initial design, and then randomly generate an array of variations upon it; feed this array of designs into a CFD simulation, and select a subset which is the fittest, aerodynamically speaking; cross-breed the fittest designs, and add some random mutations, to obtain the next generation; iterate indefinitely.
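A minimal sketch of such a loop, in code, is given below. The crucial caveat is that the fitness function here is a stand-in: in the scenario described above it would be a full CFD evaluation of a candidate geometry, which is precisely the expensive step. Representing a 'design' as a flat list of numerical parameters is likewise an assumption made purely for illustration.

import random

# Toy genetic algorithm over a design represented as a list of numerical parameters.
# fitness() is a placeholder for an expensive evaluation (e.g. a CFD run); here it
# merely rewards designs close to an arbitrary target vector.
TARGET = [0.3, -1.2, 4.0, 0.7]

def fitness(design):
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, rate=0.1, scale=0.2):
    return [d + random.gauss(0, scale) if random.random() < rate else d for d in design]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(initial_design, population_size=50, generations=200, elite=10):
    # Start from random variations on the initial design.
    population = [mutate(initial_design, rate=1.0) for _ in range(population_size)]
    for _ in range(generations):
        # Select the fittest subset...
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        # ...then cross-breed the parents, with random mutations, to form the next generation.
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(population_size)]
    return max(population, key=fitness)

best = evolve([0.0, 0.0, 0.0, 0.0])
print(best, fitness(best))

The point of the sketch is merely the shape of the loop: generate, evaluate, select, recombine, mutate, repeat.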
Such a process might well be very good at gradual development of aerodynamic designs, under a stable regulatory environment, but if there is a radical change in the regulations, it might need a degree of human creativity to orient the genetic algorithm, and pick the correct initial design for the genetic algorithm to begin evolving. Which might just explain the success of Red Bull and Team Brawn this year, and might ultimately explain why Lewis Hamilton crashed on the last lap of Sunday's Italian Grand Prix!
Thursday, September 10, 2009
Omega Centauri
With a start, I wake in the middle of the night. A cool breeze caresses my face, and I hear the gentle ripple of water lapping at a shingle beach. I gather my senses, and as I focus my eyes, I find myself gazing upwards at thousands of stars speckled and daubed across the sable surface of the celestial sphere. Vivid reds, blues and oranges mix with the cold silver of ordinary starlight, and I feel a distinct sense that I have seen this stellar extravaganza somewhere before.
Wednesday, September 09, 2009
The Pomeroy Index
The Pomeroy Index is the primary means of measuring the relative speed of Formula 1 cars which not only raced in different years, but in different eras of the sport. Remarkably, it is capable of comparing the relative speeds of cars which never even raced on the same circuits. To achieve this, it uses a daisychaining technique, similar to the manner in which dendrochronology uses the overlap between tree rings from different eras to extend its dating technique all the way back from the present to prehistoric times (see The Greatest Show on Earth, Richard Dawkins, pp88-91). In both cases, it is the overlap principle which is vital. In the case of Formula 1, the daisychaining is achieved by identifying the circuits which are common to successive years of Grand Prix racing. Speed differences between successive years are averaged over these overlapping circuits, and the speed differences can then be daisychained all the way from the inception of Grand Prix racing in 1906 to the present day.
The index was invented by engineer and motoring journalist Laurence Pomeroy, and updated by Leonard (L.J.K.) Setright in 1966. (Another motoring journalist, Setright was hard to miss, "with his long, wispy beard, wide-brimmed hat, cape and black leather gloves, he looked like 'an Old Testament prophet suddenly arriving at a Hell's Angels meeting'." (On Roads, Joe Moran, p172)).
The index was resurrected and updated again more recently by Mark Hughes. The algorithm for calculating the index is as follows:
1) Identify the fastest car from each year by averaging the qualifying performance of all the cars over all the races.
2) For each pair of successive years, identify the overlapping circuits in the respective calendars. In other words, identify the circuits which were used in both years, in unaltered form.
3) Take the fastest car from the first year of Grand Prix racing, Ferenc Szisz's 1906 Renault, and assign it a Pomeroy Index of 100.
4) For year t+1, calculate the speed difference between the fastest car that year and the fastest car from year t, averaged over the overlapping circuits (and eliminating spurious cases where speed differentials were skewed by rain conditions). Express this speed difference as a percentage, and add it to the Pomeroy Index of the fastest car in year t to find the Pomeroy Index of the fastest car in year t+1. For example, if the fastest car in year t+1 is 2% faster than the fastest car from year t, and the fastest car in year t had a Pomeroy Index of 150, then the fastest car in year t+1 had a Pomeroy Index of 152.
5) Repeat step 4 until one reaches the current year.
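A minimal sketch of the daisychaining calculation is given below. The speed figures are invented purely to show the shape of the computation: a table of average qualifying speeds, per circuit, for the fastest car of each year, from which the year-on-year percentage gains over the overlapping circuits are averaged and accumulated into the index.

# Hypothetical average qualifying speeds (km/h) for the fastest car of each year,
# keyed by circuit. The figures are invented; only the method matters.
speeds = {
    1906: {"Le Mans": 101.0},
    1907: {"Le Mans": 103.5, "Dieppe": 113.0},
    1908: {"Dieppe": 116.4},
}

def pomeroy_index(speed_table, base_year=1906):
    index = {base_year: 100.0}
    years = sorted(speed_table)
    for prev, curr in zip(years, years[1:]):
        # Circuits common to both years (assumed here to be in unaltered form).
        overlap = set(speed_table[prev]) & set(speed_table[curr])
        # Year-on-year percentage gain, averaged over the overlapping circuits.
        gains = [100.0 * (speed_table[curr][c] - speed_table[prev][c]) / speed_table[prev][c]
                 for c in overlap]
        average_gain = sum(gains) / len(gains)
        # Step 4: add the percentage-point gain to the previous year's index.
        index[curr] = index[prev] + average_gain
    return index

print(pomeroy_index(speeds))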
An on-line version of the index from 1906 to 1966 exists for perusal, and Hughes's updated version in Autosport magazine obtained a value of 234.7 for Michael Schumacher's 2004 Ferrari. (Speeds have since fallen due to the imposition of smaller engines, rev-limits, a control-tyre formula, and a generally more restrictive set of technical regulations).
This doesn't mean, however, that the 2004 Ferrari was 2.347 times faster than the 1906 Renault. This would be to underestimate the speed difference between Herr Schumacher and Ur Szisz's respective steeds. Perhaps the crucial point to digest here is that average speeds in Formula 1 have historically increased, not in a linear fashion, and not even according to a power law; rather, average speeds in Formula 1 increase exponentially. Hence, the percentage speed increments tallied in the Pomeroy Index are akin to the yearly interest rates of a compound interest account. The 1935 Mercedes-Benz was 3% faster than the 1934 Auto Union, and the 1936 Auto Union was 5% faster than the 1935 Mercedes-Benz, but the 1936 Auto Union was more than 8% faster than the 1934 Auto Union because the 5% increase was 5% of a speed greater than the speed of the 1934 Auto Union.
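To make the compounding in that example explicit:

1.03 × 1.05 = 1.0815 ,

i.e. an increase of 8.15% over the 1934 car, rather than the 8% obtained by simply adding the two yearly percentages.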
Such an exponential increase in speed can be represented by the compound-growth formula:

Q(t) = Q(0) (1 + r(1)) (1 + r(2)) ⋯ (1 + r(t)) ,

where Q(t) is the speed in year t, Q(0) is the speed in year 0, t is the discrete year number, and r(i) is the 'interest rate' in year i (the fractional speed increase), expressed as a decimal. Thus, for example, if the year-on-year increase in speed were a constant 2%, then speeds would increase exponentially according to the formula:

Q(t) = Q(0) (1 + 0.02)^t.
Tuesday, September 08, 2009
Is cheating part of the F1 memeplex?
Predictably, the hyperbole is beginning to flow, and suggestions are being made that the allegations against the Renault F1 team constitute the "biggest cheating crisis in the history of Formula One."
Perhaps a bit of perspective is required here. In fact, an understanding of cultural evolution may be beneficial. Whilst the concept of a replicating unit of cultural information, the meme, has received an extremely mixed reception amongst social scientists, and the similarities between memes and biological genes are limited, the concept still provides a fascinating way of looking at certain cultural phenomena.
A culture is essentially a system of minds interacting with each other and an external environment. The manifestations of a culture are the behaviours, information, and physical objects generated by that system of interacting minds. Minds store information both internally, in their own memories, and externally, in the form of written documents, and more recently, in the form of computer memories. Any system capable of storing information will become a potential host to replicating units of information. Human minds have become the hosts to replicating ideas and beliefs (memes), whilst networks of computers have become the hosts to replicating sets of computer instructions (computer viruses). Memes are often found in combination, and these replicating sets of related memes are called memeplexes.
Each commercial company has an evolving culture, which is at least partially independent of the employees working in that company at any one time. The culture of the company is a type of memeplex, defined by the specified policies, regulations, structures and processes of that company. Each successive 'generation' of employees will inherit the policies, regulations, structures and processes of their predecessors, but with some degree of variation, large or small. This is a form of cultural inheritance and evolution.
The memeplex of a commercial company will be modified by two factors: Firstly, each generation of employees will possess their own memes as a result of their parental and scholastic upbringing, and their experiences in other companies, and they will inject these memes into the company's overall memeplex, thereby modifying the culture of the company to a greater or lesser degree. Secondly, a commercial company exists in a competitive environment, and its own survival is dependent upon how well it adapts to, and shapes that environment. The memeplex of each individual company will therefore be modified by the behaviour of other companies and organisations, which in turn possess their own distinct memeplexes. The commercial world is a battlefield of interacting memeplexes.
It is important to note that this type of cultural evolution does not generate the unique lineages that can be found in biological, genetic evolution. For a start, whilst the genetic evolution of non-microbial life forms primarily involves vertical transmission of replicating entities, from parent to offspring, horizontal transmission of information can be just as important as vertical transmission in cultural evolution. For example, in a commercial context, horizontal transmission takes place when employees join a company from other companies, and bring different memes with them that they acquired from those other companies. Moreover, although the creation of subsidiary companies is analogous to the biological generation of progeny, companies are also frequently subject to mergers and acquisitions, a process for which there is no (non-microbial) biological analogue. The existence of mergers and acquisitions in the past history of a company is inconsistent with the existence of a unique cultural lineage. The memes contained in a company's memeplex at any one time are therefore unlikely to have a uniquely traceable origin.
As another example, each sporting team or club has a cultural identity which is often independent of the team members who play for that team at any one time. In this case, the memes consist of skills, standards, strategies, tactics, and codes of conduct. In the case of football culture in particular, there is an excellent example of the horizontal transmission of memes. In South American and Southern European footballing cultures, there has traditionally been a much weaker taboo against diving than that found in the North European footballing code of conduct. The transfer of Southern European players to English football has consequently increased the prevalence of the diving meme in the English game.
Which brings us to Formula 1 and 'cheating'. There is a persistent meme in the culture of all Formula 1 teams, that finding loopholes or ambiguities in the sporting and technical regulations, is part of the game. As such, it has traditionally not even been considered to be cheating. Cars which generate downforce from cooling fans; underbody skirts which are hydraulically-raised prior to ground-clearance measurements; water-cooled brakes whose water tanks are replenished prior to car-weight measurements; 'rocket' fuels; cars which are underweight in qualifying; wings which deform under load to reduce drag down the straight; hidden launch control and traction control software at times when such electronic driver aids are banned; mass-dampers; teams that utilise design information stolen from other teams; double-diffusers; drivers who crash into other drivers to win championships; drivers who fake injuries to get races red-flagged; drivers who drag damaged cars onto the track to get races red-flagged; drivers who deliberately crash to bring out the safety car, etc etc. It's all part of the game, and has been virtually since the inception of the sport.
There are, however, other memeplexes which may threaten the survival of Formula 1 teams who continue to harbour this particular meme. These other memeplexes include beliefs such as the necessity and value of applying ever-increasing levels of surveillance in sport; the necessity and value of the absolutely rigorous enforcement of sporting regulations; and the necessity and value of ever-higher levels of safety in society as a whole. Even wielded impartially, such a memeplex is quite capable of threatening the survival of Formula 1 teams that continue to believe in the value of exploiting loopholes in the regulations. Wielded by agents that harbour vendettas, or seek to control the financial and political shape of the sport by eliminating other protagonists, it is a powerful memeplex indeed.
Monday, September 07, 2009
Star formation and entropy
If you take the statements about entropy in almost every elementary textbook, and indeed most advanced ones, they are contradicted when the gravitational field is turned on and is significant. For example, in the famous case of the gas container split into two halves by a barrier, with all the gas initially on one side, the standard statement is that the gas then spreads out to uniformly fill the whole container when the barrier is removed, with the entropy correspondingly increasing. But when gravitation is turned on, the final state is with all the matter clumped in a blob somewhere in the container, rather than being uniformly spread out...The question then is whether there is a definition of entropy for the gravitational field itself (as distinct from the matter filling space-time), and if so if the second law of thermodynamics applies to the system when this gravitational entropy is taken into account
...This issue remains one of the most significant unsolved problems in classical gravitational theory, for as explained above, even though this is not usually made explicit, it underlies the spontaneous formation of structure in the universe - the ability of the universe to act as a 'self-organizing' system where ever more complex structures evolve by natural processes, starting off with structure formed by the action of the gravitational field. (George Ellis, Cosmology and local physics).
As cosmologist George Ellis notes, for systems in which gravity plays no role, the states of maximum entropy are indeed the states in which the parts of the system are uniformly distributed in space. By virtue of the fact that gravity transforms initially uniform distributions of gas into localised agglomerations, such as stars, it is often concluded, perhaps complacently, that gravitational processes are capable of lowering the entropy of matter. As the quote from Ellis demonstrates, this thought is then often conjoined with the proposal that the second law of thermodynamics can only be preserved by taking into account the entropy of the gravitational field associated with such concentrations of matter.
However, Oxford philosopher of physics David Wallace has brilliantly dispelled misconceptions such as these in his recent paper, Gravity, entropy and cosmology: in search of clarity, and we shall attempt here to provide, appropriately enough, a condensed version of Wallace's exposition.
Begin by recalling that entropy is a property possessed by the states of physical systems. Classical physics conceives of each physical system as possessing a huge, multi-dimensional space of possible states, called the phase space of the system. The phase space is partitioned into regions called macrostates, consisting of states which share macroscopically indistinguishable properties. Where there is any ambiguity, the exact states (the points of the phase space) are referred to as microstates.
The entropy of a state is then defined to be a measure of the size of the macrostate volume in phase space to which that state belongs. Thus, the entropy of a state is a measure of how typical that state is within the entire space of possible states. The remorseless increase of entropy, enshrined in the second law of thermodynamics, is simply a reflection of the fact that the state of a closed system will, most probably, move into the regions of phase space which possess the greatest volume. (In this context, a closed system is a system in which there are no flows of matter or energy into or out of the system).
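Stated symbolically, this is just the Boltzmann entropy: if a microstate x belongs to the macrostate M, whose phase-space volume is |Γ_M|, then

S(x) = k_B log |Γ_M| ,

so larger macrostates correspond to higher entropy, and the drift of a closed system into ever larger macrostates is the content of the second law.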
Turning to the question of gravitational contraction, it is vital at the outset to correct the following potential misconception. One might assume that, given a small initial perturbation in an otherwise uniform distribution of interstellar gas, a small region in which the density of matter becomes higher than the average, the gravity of that region will then attract more matter from the surroundings, and a positive feedback process will then ensue. One might think that as the excess density of matter increases, a greater force is exerted on the surrounding matter, thus increasing the agglomeration yet further, in a looped process, until eventually a star is formed. In fact, most interstellar gas is extremely reluctant to contract, simply due to its thermal pressure, which balances any attractive gravitational force.
This, however, is not because the contraction of a system necessarily reduces its entropy. As Wallace emphasises, whilst the contraction of a system to a smaller volume of space has an entropy-decreasing effect, the contraction will also raise the temperature of the system, and this has an entropy-increasing effect. One has to do the sums to work out whether the net effect is to increase or decrease the entropy.
It transpires that if the initial total energy of a system is positive (i.e., if the sum of the gravitational potential energy U and the kinetic energy K is greater than zero, E = U + K > 0), then the entropy is maximised by the expansion of the system. In such a system, the typical velocity of the constituent particles exceeds the gravitational escape velocity of the system, hence the expansion is easy to understand from this perspective. If, however, E = U + K < 0, then the system is said to be gravitationally bound, and in this case it is actually contraction which maximises the entropy. This immediately demonstrates the complacency of the assumption that gravitational contraction is an entropy-lowering process.
Nevertheless, such contraction will only take a system to at most half of its initial radius, and fails to explain the formation of stars from interstellar gas clouds. Instead, the formation of stars is dependent upon the existence of a mechanism for removing heat (and therefore energy) from the contracting system.
Suppose the initial state is one in which thermal pressure balances the gravitational attraction, but suppose that there is then some heat flow out of the system. The kinetic energy of the system K, and therefore its thermal pressure, will reduce, as will the total energy E. As a result of the reduced thermal pressure, the system will contract, reducing the gravitational potential energy U of the system, and converting it into a kinetic energy K which is greater than the initial kinetic energy. Thus, the removal of heat from such a gravitationally bound system will actually increase its temperature. (Such a system is said to possess a negative heat capacity). This, in turn, will create a greater temperature gradient between the system and its surroundings, leading to more heat flow out of the system, and to further contraction. This process will continue until the pressure becomes sufficiently great at the centre of the contracting mass that nuclear fusion is triggered. As long as the heat produced by nuclear fusion is able to balance the heat flow out of the system, the thermal pressure will balance the gravitational force, and the contraction will cease.
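The negative heat capacity can be made quantitative with a textbook estimate (not a calculation drawn from Wallace's paper): for a virialised, self-gravitating cloud treated as a monatomic ideal gas, the virial theorem gives 2K + U = 0, hence

$$ E = K + U = -K = -\tfrac{3}{2} N k_B T, \qquad C = \frac{dE}{dT} = -\tfrac{3}{2} N k_B < 0. $$

Removing energy (dE < 0) therefore forces the temperature up (dT > 0), which is precisely the runaway contraction described above.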
Whilst this process successfully explains star formation, it is a mechanism which requires heat flow out of the system, and this is an entropy-decreasing effect. Given that the reduction in spatial volume also constitutes an entropy-decreasing effect, it can be safely concluded that the entropy of the matter in a gravitationally contracting system decreases. However, contrary to the suggestions made by Ellis and others, the entropy of the gravitational field does not need to be invoked and defined in order to reconcile this fact with the second law of thermodynamics.
As a gravitationally-bound system contracts, the frequency of the collisions between the constituent particles increases, and a certain fraction of those interactions will be so-called inelastic collisions, in which the atoms or molecules are raised into excited energy states. Those excited states decay via the emission of photons, and this electromagnetic radiation is then lost to the surroundings. It is this radiative emission which is the most effective means by which heat is transferred from the contracting body to its lower temperature surroundings. And crucially, the entropy of this radiation is sufficiently huge that it easily compensates, and then some, for the lower entropy of the contracting matter. The total entropy of a contracting gravitational system therefore increases, as long as one counts the contribution from the electromagnetic radiation.
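The bookkeeping can be sketched in Clausius's terms (again as a schematic illustration rather than Wallace's own presentation): if an amount of heat $\delta Q$ leaves the contracting gas at temperature $T_{\mathrm{gas}}$ and is ultimately absorbed, as radiation, by surroundings at the much lower temperature $T_{\mathrm{surr}}$, then the net entropy change is of order

$$ \Delta S \;\gtrsim\; \frac{\delta Q}{T_{\mathrm{surr}}} - \frac{\delta Q}{T_{\mathrm{gas}}} \;>\; 0, $$

so the cold surroundings gain far more entropy from the radiation than the hot gas loses by emitting it.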
Sunday, September 06, 2009
Clothoid curves, motorways, and Hermann Tilke
Between the wars...road engineers became obsessed with working out the perfect transition curve - a mathematical method of shifting smoothly between a straight and an arc so that centripetal force builds up gradually, not suddenly like on a fairground ride. The roadbuilders swapped various mathematical formulae until, in 1937, the county surveyor of Devon, Henry Criswell, produced a set of labour-saving tables for plotting beautifully fluid lines that were so user-friendly they knocked rival systems into touch. Criswell became the undisputed king of the 'clothoid curve' - a graceful arc with a slowly increasing curvature that kept motorists permanently on their toes...The M4, designed by computer from the early 1960s onwards, is a gentle series of transition curves from London to south Wales. (Joe Moran, On Roads, p34-35).
Discussion centred on the rate at which centripetal acceleration should be permitted to change. Acceleration had units of feet per second squared, so that the rate of change of acceleration was measured in feet per second cubed (ft/sec³). Henry Criswell's tables were devised for 1 ft/sec³, but in later work he also produced tables for 2 ft/sec³; this latter figure was the same as that used in the USA, while Australia used 3 ft/sec³.
In the 1940s John Leeming conducted a series of experiments to measure the rate of change of acceleration actually experienced by cars on the road...The speed chosen by drivers led to rates of change of centripetal acceleration up to 10 ft/sec³, which was very much higher than anticipated without any apparent discomfort to the driver or risk to safety. [Leeming's work] seems not to have been put to any widespread use despite the implication that transitions could be much shorter. (John Porter, The Motorway Achievement: Frontiers of knowledge and practice, p129).
This description of clothoid curves immediately reminds one of Hermann Tilke's Formula 1 circuit design ethos. In particular, turns 1 and 2 at Shanghai, pictured here, seem to have been lifted from the higher curvature parts of a clothoid spiral. Which almost tells you everything you need to know about modern F1 circuit design: the corners are drawn from the same palette of curves used by motorway architects.
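To get a feel for the figures quoted by Porter, here is a minimal numerical sketch in Python (the design speed and lateral acceleration are illustrative assumptions, not values taken from his text). On a clothoid driven at constant speed, lateral acceleration builds up linearly with time, so the transition length is simply L = v·a/c, where v is the speed, a the target lateral acceleration, and c the permitted rate of change of centripetal acceleration.

def transition_length(speed_fps, lateral_accel_fps2, jerk_limit_fps3):
    """Length in feet of a constant-speed clothoid transition: L = v * a / c."""
    return speed_fps * lateral_accel_fps2 / jerk_limit_fps3

v = 70.0 * 5280.0 / 3600.0   # assumed design speed: 70 mph expressed in ft/sec
a = 0.25 * 32.2              # assumed target lateral acceleration: a quarter of a g, in ft/sec^2

# 1, 2 and 3 ft/sec^3 are the Criswell / USA / Australia figures quoted above;
# 10 ft/sec^3 is the rate Leeming observed drivers tolerating without discomfort.
for c in (1.0, 2.0, 3.0, 10.0):
    print(f"c = {c:4.1f} ft/sec^3  ->  transition length ~ {transition_length(v, a, c):5.0f} ft")

At 70 mph, the 1 ft/sec³ standard implies a transition over 800 feet long, whereas Leeming's observed 10 ft/sec³ would permit one of roughly 80 feet, which is the sense in which 'transitions could be much shorter'.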
Saturday, September 05, 2009
Sapphire, Steel, and Uranium-238
It was an unusual blend of the supernatural chiller and space-time claustrophobia. As the voice-over to the introduction explained:
All irregularities will be handled by the forces controlling each dimension. Transuranic, heavy elements may not be used where there is life. Medium atomic weights are available: Gold, Lead, Copper, Jet, Diamond, Radium, Sapphire, Silver and Steel. Sapphire and Steel have been assigned.
Which seems rather unfair to the transuranic elements. A fairly haphazard collection of minerals, alloys, gemstones, and chemical elements seems to have been bundled together here. Moreover, lead is nothing if not a heavy element, and radium has a somewhat tarnished reputation when it comes to its life-enhancing properties.
What is splendid about the opening sequence, however, is the way the nuclei of the chemical elements are represented as if they're also miniature gemstones, with hundreds of sparkling, colourful facets. It's a notion which modern mass spectrometers do little to promote, yet this diagram of the uranium-238 radioactive decay chain does seem to owe a little something to the idea.
Thursday, September 03, 2009
Ross Brawn and Harwell
Ross Brawn's reputation as the greatest technical director in modern Formula 1 has been thoroughly cemented this year by the extraordinary success of his eponymous team. The roots of a successful career can be traced to various factors, such as underlying personality and opportunity, but also in some cases to early career experiences. In Ross's case, it's interesting to note that, perhaps uniquely amongst motorsport engineers, his career began in the nuclear industry.
Ross spent five years in the early 1970s as a trainee at what was then dubbed the Atomic Energy Research Establishment (AERE) Harwell, Oxfordshire. As Ross explains in his own words, "I did a mechanical engineering apprenticeship at Harwell and then I went on to start an HNC, still funded by Harwell. My parents lived in Reading and I found an advertisement for Frank Williams Grand Prix, which were based in Reading at that time. I went along and was interviewed by Patrick Head. They were looking for a machinist which was one of the things I’d done at Harwell."
It's interesting to speculate on what Ross actually did at Harwell during these years. The early 1970s were a period in which Harwell was attempting to diversify into sectors outside atomic energy, and it's quite possible that Ross was involved in such projects. Whilst Ross himself says that he did a mechanical engineering apprenticeship and did some work as a machinist, elsewhere it is claimed that he studied instrumentation. Both may be true.
Nicholas Hance's excellent 2006 book, Harwell: The Enigma Revealed, offers an intriguing insight into some of the work that Brawn might have been doing there:
Metallurgy Division had the task of solving how to fabricate metals such as uranium, plutonium and thorium. It was soon apparent that increasing the efficiency of a nuclear power reactor meant operating it at the highest possible temperatures...The behaviour of such new materials at these higher temperatures needed to be understood. It had to solve the problems of corrosion in the very hostile environment of a reactor core. Nuclear fuel rods needed to be encapsulated inside special alloys. Furthermore, the intense radiation inside a reactor had a dramatic effect on the physical properties of metals. Ordinary steel, for example, suffered severe 'creep' when irradiated by neutrons and lost its strength as it became plastic. Other materials would become more brittle.
...Much of the early practical work of Metallurgy Division was carried out in B35. The former RAF workshop building was adapted with an air extraction system so that it could be used to machine uranium on a lathe. One of the first tasks was to machine the fuel rods for GLEEP, after which the young metallurgists turned their hand to designing and fabricating 18 tons of fuel rods for BEPO...the division's work on canning fuels moved in alongside the casting and the machining of graphite. Heavy and novel machinery was installed, some of which was still in use in the early 1990s. There were melting and casting furnaces, capable of operating at 1600 degrees C, and a 750-ton extrusion press. A device in the 1950s, known as 'Harry's Bomb' was installed in the cellar under B35. It made use of the breech of a 6-inch gun barrel from a battleship. Harry Lloyd developed a method of compressing uranium carbide in the breech, using hot argon gas at 50,000 pounds per square inch pressure. Compression raised the temperature to over 1,000 degrees C and his 'bomb' device was to become the forerunner of hot isostatic pressing.
Brian Hudson, who worked in B35, remembered the pioneering days of Metallurgy Division well. "We had to be alert to the pyrophoric hazards of working with uranium powders and it was prudent to use copious flows of coolant during machining work as uranium had a habit of bursting into flames!" (p233-236).
Ross arrived on the scene after these exciting days of pyrophoric fires, but it would nevertheless be interesting to know if he began his career machining chunks of uranium!
Incidentally, those with an extreme interest in Formula 1 technical trivia might be intrigued by the following account in Hance's text:
Harwell invented the nuclear technique of ion implantation, the room temperature process that could be applied to precision engineering components, giving them greater wear resistance. The equipment needed was an accelerator to produce a beam of ionized nitrogen atoms, and a large vacuum chamber in which were placed the items requiring treatment...The strictures of commercial secrecy do not permit identification of the F1 racing car team which benefited from ion implantation, improving the wear resistance of its engine crankshafts...The improved crankshafts helped win races. Not so shy was Williams Grand Prix Engineering Ltd, based at Didcot, who asked Harwell Analytical Science Centre to determine the nature of, and chemically remove, a layer which had built up on aluminium alloy components inside the engine. (Harwell Bulletin, 25/86, 11th July 1986).
At this time, Williams were using Honda engines...