The December issue of Scientific American contains a feature on the life of Hugh Everett III, the inventor of the many-worlds interpretation of quantum theory. Everett left academia in a state of some bitterness at the initial lack of enthusiasm for his proposal. After a period working for the Pentagon on the mathematics of nuclear warfare, he established Lambda, a private defence research company. Under contract to the Pentagon, Lambda used Bayesian methods to develop a system for tracking ballistic missiles. Most intriguingly, though, the article suggests that Everett may have subsequently hoodwinked the American bank J.P. Morgan:
John Y. Barry, a former colleague of Everett's...questioned his ethics. In the mid-1970s Barry convinced his employers at J.P. Morgan to hire Everett to develop a Bayesian method of predicting movement in the stock market. By several accounts, Everett succeeded - and then refused to turn the product over to J.P. Morgan. "He used us," Barry recalls. "[He was] a brilliant, innovative, slippery, untrustworthy, probably alcoholic individual."
This information appears to have been taken from an on-line biography of Everett, which quotes Barry as follows:
"In the middle 1970s I was in the basic research group of J. P. Morgan and hired Lambda Corporation to develop...the Bayesian stock market timer. He refused to give us the computer code and insisted that Lambda be paid for market forecasts. Morgan could have sued Lambda for the code under the legal precedent of 'work for hire'. Rather than do so, we decided to have nothing more to do with Lambda because they, Hugh, were so unethical. We found that he later used the work developed with Morgan money as a basis for systems sold to the Federal Government. He used us...In brief a brilliant, innovative, slippery, untrustworthy, probably alcoholic, individual."
What I don't understand here is how this recollection is supposed to substantiate the claim that Everett had dubious ethics. Within an economy, there are some companies which provide the fundamental level of production: companies which invent things, design things, develop things, discover things, and make things; and then there are parasitic companies, such as banks and firms of lawyers, which produce nothing, and merely drain their wealth from the activities of others. Taking money from a large American bank to subsidise a research project, and then preventing that bank from harvesting the fruits of that project, is the very paragon of ethical behaviour.
Wednesday, November 28, 2007
Bear Grylls and Coleridge
Last night on the Discovery Channel, there was a gripping two-part 'Born Survivor' special, as Bear Grylls demonstrated how to survive in the Saharan desert. At one stage, Grylls was presented with a dead camel from a local tribe. To provide a blanket against the cold of night, he stripped the skin from the animal, then he cut into its side, and scooped out some water-like liquid to re-hydrate himself. He then cut into what I think was the camel's stomach, the inside of which contained a mass of yellowish manure. Grylls scooped some out, held it above his head, and squeezed the yellow liquid from the manure into his mouth! "It's better than nothing," he claimed.
This was all very impressive, but I couldn't help remembering the recent revelations that Andy Serkis actually performs all the survival stunts on the show, and Grylls's face is simply CGI-ed on in post-production. And at one stage, when Grylls, shot from an aerial perspective, stood atop a high escarpment and wondered aloud "How do I get down from this?", I did blurt out: "Use the helicopter!".
Seriously, though, I noticed that the programme was now making the presence of the camera crew explicit, with Grylls talking to them at times, and taking a hand-held camera from them on one occasion. Along with the rider displayed at the beginning of the programme, which emphasises that some situations are set up in advance for Grylls to demonstrate survival techniques, this seems to be a reaction to the accusations of fakery which were levelled at this programme, amongst several others. Similarly, I noticed on a recent 'Top Gear' that, when the lads were driving across Botswana, the presence of the film crew and supporting mechanics was made quite overt.
There was a time when TV documentaries, like films, would seek to transport the mind of the viewer to another place, and, to sustain this illusion, no reference would be made to the presence of the camera, or to the production process as a whole. There was no attempt in this to deceive the viewer; rather, the viewer and programme-maker were conspiring together in the willing suspension of disbelief (© Sam Coleridge), with the ultimate purpose of enhancing the viewing experience.
The modern trend to make explicit the presence of the cameraman and sound recordist, and to refer within a programme to the programme-making process, is seen as lending a type of authenticity to a programme. The upshot, however, is to dissolve the possible suspension of disbelief in the viewer, and ultimately, therefore, to reduce the potential pleasure which a viewer can gain from a TV programme. This is not a positive trend.
Monday, November 26, 2007
Is Benitez going to be sacked?
Astonishingly, it seems that Liverpool manager Rafael Benitez could be on the verge of leaving the club. Benitez reacted petulantly last week when the club's American co-owners, George Gillett and Tom Hicks, postponed any decisions about expenditure in the January transfer window. It appears that Gillett and Hicks's tolerance threshold for insubordination is rather low, for they are now seeking to rid themselves of Benitez by the end of the season. Either that, or they are seeking to control Benitez with dismissal threats, which doesn't sound like a rosy way to proceed either.
Now, I've never been an unconditional fan of Benitez. I think he's an excellent manager for the European game, but he's never really sussed out how to win the Premiership. Even in his first season with Liverpool, Benitez made a succession of slightly odd team selections and tactical decisions, which restricted the team to only 5th place at season's end.
Benitez's greatest triumph, of course, was taking Liverpool to victory in the Champions' League that same year. But even there, in the final, he made an astonishing misjudgement in selecting Harry Kewell to start the game, and by leaving Dietmar Hamann out of the team until the second half he allowed AC Milan to rampage to a three-goal lead. Liverpool's come-back was remarkable, but hugely fortunate, and was driven more by Gerrard, Carragher and Hamann than by Benitez.
Benitez's erratic decision-making has continued into a fourth season with Liverpool, and a stunningly inexplicable 'rotation policy' has again restricted the team to its current 5th place in the Premiership table.
But should Benitez go? On balance, I don't think so. Whilst he's an odd fellow, he's also clearly got something about him as a manager, and I don't know who could replace him at the moment and be more effective. Nevertheless, I've felt for some time that Liverpool won't win the Premiership under Benitez, and it seems that I will be proven correct even earlier than I imagined...
Saturday, November 24, 2007
Has mankind reduced the life-expectancy of the universe?
Hot on the heels of my post concerning the destruction of the universe, cosmologist Lawrence Krauss has suggested that mankind's detection of dark energy in 1998 may reduce the lifetime of our universe!
The idea, once again, is that the false vacuum energy of the scalar field responsible for inflation (the hypothetical exponential expansion of the very early universe) may not have decayed to zero, and, since the time of inflation, may have been residing in another false vacuum state of much lower, but non-zero, energy. Krauss and James Dent suggest that the dark energy detected by cosmologists in the past decade may simply be this residual false vacuum energy.
A false vacuum state is 'metastable' in the sense that it is a state of temporary stability, but one which is prone to decay, much like the nucleus of a radioactive atom. Krauss and Dent point out that in quantum mechanics, the survival probability of a metastable state will decrease exponentially until a critical cut-off time, at which point the survival probability will only decrease according to a power law. Given that a false vacuum expands exponentially, there will be a net increase in the volume of space in a false vacuum state after this cut-off time. Krauss argues that whilst the metastable residual false vacuum of our universe may have reached this critical cut-off, the act of observing the dark energy may have reset the decay according to the 'quantum Zeno effect'. The consequence is that mankind's actions may have significantly increased the probability that the residual false vacuum will decay to the true vacuum, destroying all the structure in our universe.
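To see the shape of this argument numerically, here is a minimal sketch (my own toy illustration, not Krauss and Dent's calculation), assuming an exponential survival probability that crosses over to a power-law tail at a cut-off time; the parameter values are purely illustrative.

```python
import numpy as np

# Illustrative parameters (not taken from the Krauss & Dent paper)
tau = 1.0      # characteristic decay time of the metastable state
t_c = 20.0     # cut-off time at which power-law behaviour takes over
alpha = 2.0    # power-law exponent

def survival_probability(t):
    """Exponential decay before t_c, a slower power-law tail after it.

    The power-law branch is scaled so the function is continuous at t_c.
    """
    p_c = np.exp(-t_c / tau)                      # survival probability at the cut-off
    return np.where(t < t_c,
                    np.exp(-t / tau),             # fast exponential regime
                    p_c * (t / t_c) ** (-alpha))  # slow power-law regime

t = np.array([1.0, 10.0, 20.0, 40.0, 80.0])
for ti, p in zip(t, survival_probability(t)):
    print(f"t = {ti:5.1f}   P(survival) = {p:.3e}")

# 'Resetting the clock' (the quantum Zeno-style worry) would mean restarting on the
# fast exponential branch rather than remaining on the slow power-law tail, which is
# why, on this picture, an observation could lower the expected survival probability.
```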
With an irony that may have been lost on some readers, The Daily Telegraph's Roger Highfield refers to these as "damaging allegations."
There are also at least two reasons why the claims shouldn't be taken seriously. Firstly, as Max Tegmark points out in New Scientist, the quantum zeno effect does not require humans to make observations: "Galaxies have 'observed' the dark energy long before we evolved. When we humans in turn observe the light from these galaxies, it changes nothing except our own knowledge." Secondly, Krauss and Dent's assumption that dark energy can be equated with the energy density of a scalar field is inconsistent with the current observational evidence, which suggests that the dark energy is a cosmological constant. The energy density due to a cosmological constant is a property of space, not a property of any matter field in space. The energy density due to a cosmological constant is literally constant in space and time, unlike that attributable to the scalar fields postulated to explain dark energy.
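For reference, the distinction drawn in the second point can be made quantitative (this is standard cosmology, not anything specific to Krauss and Dent). A cosmological constant contributes a fixed effective energy density and has an equation of state parameter of exactly minus one:

$$ \rho_\Lambda = \frac{\Lambda c^2}{8\pi G}, \qquad w \equiv \frac{p}{\rho c^2} = -1, $$

whereas a homogeneous scalar field (in units where c = 1) has

$$ \rho_\phi = \tfrac{1}{2}\dot{\phi}^2 + V(\phi), \qquad p_\phi = \tfrac{1}{2}\dot{\phi}^2 - V(\phi), $$

so its equation of state equals minus one only in the limit of a static field, and its energy density can in general vary in time.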
The economics of quality
It is a platitude of economics that competition drives the price of a product down towards the cost of producing it. However, it is also a truism that the cost of producing something is variable. Hence, a lower price or greater profit margin can be generated if production costs are reduced.
There are at least two ways in which costs can be reduced: (i) the efficiency of the production process can be increased; or (ii) the quality of the product can be reduced.
In television, it seems that competition has resulted in a reduction in the quality of the product. I would also argue that in the world of academic publishing, the physical quality of the books published in the past decade or so does not match the quality of the books published in the 1970s and 1980s. I am not thinking here of the quality of the intellectual contents, but the quality of the book itself, as an extended physical object.
Back in the 1970s and 1980s, Academic Press published a series of monographs and textbooks entitled Pure and Applied Mathematics, edited by Samuel Eilenberg and Hyman Bass. These were beautiful books. The paper was of the highest quality; the choice of fonts and typeface was perfect, the text bejewelled with calligraphically sculpted fraktur and script characters; and the books were covered in a type of green leather hide, bearing their titles in gold leaf lettering. These books even smelt good when you opened them. There is nothing comparable in academic publishing today.
Saturday, November 17, 2007
Analytic Metaphysics
On Thursday this week I made the trip to Oxford to see James Ladyman deliver his talk, The Bankruptcy of Analytic Metaphysics. And most entertaining it was too.
The type of methodology which I take James to be attacking is nicely defined in A Companion to Metaphysics (Jaegwon Kim and Ernest Sosa (eds.), Blackwell, 1995). In Felicia Ackerman's entry on 'analysis', we are asked to "consider the following proposition.
(1) To be an instance of knowledge is to be an instance of justified true belief not essentially grounded in any falsehood.
(1) exemplifies a central sort of philosophical analysis. Analyses of this sort can be characterized as follows:
(a) The analysans and analysandum are necessarily coextensive, i.e. every instance of one is an instance of the other.
(b) The analysans and analysandum are knowable a priori to be coextensive.
(c) The analysandum is simpler than the analysans...
(d) The analysans does not have the analysandum as a constituent.
(e) A proposition that gives a correct analysis can be justified by the philosophical example-and-counter-example method, i.e. by generalizing from intuitions about the correct answers to questions about a varied and wide-ranging series of simple described hypothetical test cases, such as 'If such-and-such were the case, would you call this a case of knowledge?' Thus, such an analysis is a philosophical discovery, rather than something that must be obvious to ordinary users of the terms in question."
But what, exactly, is the criterion to be applied in these test cases? These are not empirical tests, where we can compare the predictions of theory with the results of measurement and observation. Neither are these tests comparable to the tests devised by mathematicians, to support or reject a mathematical hypothesis. In analytic metaphysics, an attempt is being made to define the meaning of one of the terms of discourse (in the example given here, the term is 'knowledge'); in the mathematical case, all the terms of discourse have been stipulatively defined at the outset.
As Ladyman has emphasized, the problem with this methodology is that it ultimately appeals to intuition, which is not only culturally dependent, but varies from one individual to another within a culture.
Ackerman acknowledges that "It can...be objected that it is virtually impossible to produce an example of an analysis that is both philosophically interesting and generally accepted as true. But virtually all propositions philosophers put forth suffer from this problem...The hypothetical example-and-counterexample method the sort of analysis (1) exemplifies is fundamental in philosophical enquiry, even if philosophers cannot reach agreement on analyses."
It seems to be acknowledged, then, that the results of relying upon intuition are inconsistent. If the results of a methodology are inconsistent, then, in most disciplines, that entails that the methodology is unreliable, which, in most cases, is a sufficient condition for the methodology to be rejected as a deficient methodology. Apparently, however, "all propositions philosophers put forth suffer from this problem," so the methodology continues to be employed in metaphysics, and, for that matter, in epistemology too. Remarkable.
Wednesday, November 14, 2007
An exceptionally simple theory of everything
Surfer dude Garrett Lisi has produced a fabulous theory of everything, which, at the classical level at least, unifies the structure of the standard model of particle physics with the structure of general relativity.
The basic idea is that the gauge group of the entire universe is E8, a group which is classified as an exceptional simple Lie group. The gauge field of the entire universe would be represented, at a classical level, by a superconnection upon the total space of an E8-principal fibre bundle over a 4-dimensional space-time. This gauge field subsumes not only the gravitational field, the electroweak field, and the strong field, but all the matter fields as well, including the quark and lepton fields, and the Higgs field.
The diagram here represents the roots of the Lie algebra of E8, each of which purportedly defines a possible type of elementary particle. Every Lie algebra has a maximal commuting subalgebra, called the Cartan subalgebra. In each representation of a Lie algebra, the simultaneous eigenvectors of the elements from the Cartan subalgebra are called the weight vectors of the representation, and their simultaneous eigenvalues are called the weights of the representation. In the special case of the adjoint representation (a representation of a Lie algebra upon itself), the weight vectors are called the root vectors, and the weights are called the roots.
In the case of E8 the Cartan subalgebra is 8-dimensional; since the Lie algebra itself is 248-dimensional, E8 therefore has 240 roots, and each of these roots is defined by 8 numbers, the eigenvalues of the 8 linearly-independent vectors which are chosen as a basis for the Cartan subalgebra. These 8 numbers are the 'quantum numbers' which define each type of elementary particle.
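As a concrete check on these numbers, the 240 roots of E8 can be written down explicitly in a standard coordinate system: 112 of the form ±e_i ± e_j with i < j, and 128 of the form (±1/2, ..., ±1/2) with an even number of minus signs. The short sketch below (my own illustration, independent of Lisi's paper) enumerates them and confirms the count.

```python
from itertools import combinations, product

def e8_roots():
    """Enumerate the 240 roots of E8 in the standard even coordinate system."""
    roots = []
    # 112 roots of the form +/- e_i +/- e_j, for i < j
    for i, j in combinations(range(8), 2):
        for si, sj in product((1, -1), repeat=2):
            root = [0] * 8
            root[i], root[j] = si, sj
            roots.append(tuple(root))
    # 128 roots with every coordinate +/- 1/2 and an even number of minus signs
    for signs in product((0.5, -0.5), repeat=8):
        if sum(1 for s in signs if s < 0) % 2 == 0:
            roots.append(signs)
    return roots

roots = e8_roots()
print(len(roots))                                        # 240
print(all(sum(x * x for x in r) == 2 for r in roots))    # every root has squared length 2
```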
It's a remarkable paper, which I shall retire to consider at some length.
Philosophy 2.0
Philosophy 2.0 is, I propose, the new version of philosophy which will return the subject to its former prestigious role as the fundamental, over-arching, unifying, synthesising discipline, fully integrated with science.
In this vein, Dr James Ladyman, of the University of Bristol, will deliver a talk entitled The Bankruptcy of Analytic Metaphysics, in the Philosophy department at Oxford tomorrow (Thursday 15th November, 4.30, Lecture Room, 10 Merton St.). I reproduce the abstract here:
Analytic metaphysics is becoming increasingly dominant in contemporary philosophy but its status and influence is undeserved and pernicious. The methodology of analytic metaphysics with its reliance on intuition and explanation by posit has no epistemological justification and its results have little or no epistemic value. Unless checked it threatens to discredit philosophy among non-philosophers and waste the talents of a host of graduate students as well as exerting a pernicious influence on other areas of philosophy.
I will argue the case for the above claims with reference to recent debates about composition, gunk versus atoms, mental causation and Humean supervenience. I will argue for a naturalized metaphysics that engages with science.
James has also just published a survey of structural realism, which can be found here.
Tuesday, November 13, 2007
All things fair and fowl
It seems that many of the birds in Norfolk and Suffolk have been forced to stay indoors for the next day or so. I imagine, therefore, that they will currently be sitting at home, flicking absent-mindedly through the channels on Freeview, or randomly surfing the net in the hope of finding something that plucks their interest.
In an attempt to satisfy my avian visitors, may I point them in the direction of this interesting research, which attempts to explain, by means of computer simulation, why flocks of birds fly in V-formations, or even W-formations. It seems that these formations offer the optimum combination of collective aerodynamic efficiency and visibility.
Our fine-feathered friends are, of course, notoriously fond of the odd worm-snack, and will therefore be most interested in the latest proposal for wormholes in physics. This particular proposal appears to be an extension of the idea that invisibility cloaks can be designed using materials with a non-uniform refractive index. The tubular materials proposed here are, it seems, deemed wormholes on the basis that "light entering the tube at one end would emerge at the other with no visible tunnel in-between." I shall resist the judgement that such research is progressing on a wing and a prayer.
Monday, November 12, 2007
How to destroy the universe
Schemes for the possible creation of a universe in a laboratory have received a decent amount of publicity in recent years. In comparison, laboratory-based schemes for the destruction of our universe have been sadly neglected. In an effort, then, to redress this inequality, let me explain how our universe may be destroyed.
The idea depends upon the concept of a false vacuum, introduced by inflationary cosmology. Inflation suggests that there is a scalar field, the 'inflaton', whose 'equation of state' is such that a positive energy density corresponds to a negative pressure. In general relativity, a matter field with negative pressure generates a repulsive gravitational effect. Inflationary cosmology suggests that at some time in the early universe, the energy density of the universe came to be dominated by the non-zero energy density of the inflaton field. A region of the universe in this so-called false vacuum state would undergo exponential expansion until the inflaton field dropped into a lower energy state. This lower energy state is conventionally considered to be the 'true vacuum' state, the lowest energy state of the inflaton field. However, inflation works just as effectively if the transition is from one positive energy state to another, lower, positive energy state. And it is this possibility which opens up the doomsday scenario.
It was originally proposed that the false vacuum state was a local minimum of the potential energy function for the inflaton field, and that inflation ended when the state of the inflaton quantum-mechanically tunnelled through the potential barrier from the initial minimum to another, lower, minimum of the potential energy function, possibly the global minimum (true vacuum) state (see diagram). It was suggested that inflation was ended locally by this quantum tunnelling, and a bubble of the lower-energy vacuum formed, surrounded by a region of the higher-energy vacuum. The walls of the bubble then expanded outwards at the speed of light, destroying all in their path. It was subsequently thought that such 'bubble nucleation' could not explain the observed homogeneity of our own universe, and the original inflationary proposal was duly superseded by the 'new' inflationary proposal, and the 'chaotic' inflationary proposal, which both suggested that inflation could occur without the need for the false vacuum to be a local minimum of the potential, and inflation could therefore end without quantum tunnelling and bubble nucleation.
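As a toy illustration of the kind of potential being described (an arbitrary quartic chosen for the purpose, not the actual inflaton potential, whose form is unknown), the sketch below defines a one-dimensional potential with a higher local minimum separated by a barrier from a lower global minimum, and locates the two minima numerically.

```python
import numpy as np

# A toy tilted double-well potential, in arbitrary units. The linear term tilts
# the wells so that one minimum (the 'false vacuum') lies above the other
# (the 'true vacuum').
def V(phi):
    return 0.25 * phi**4 - 0.5 * phi**2 + 0.1 * phi

phi = np.linspace(-2.0, 2.0, 100001)
v = V(phi)

# Find interior local minima by a simple discrete test on neighbouring points.
is_min = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
for m in phi[1:-1][is_min]:
    print(f"local minimum at phi = {m:+.3f},  V(phi) = {V(m):+.4f}")

# The higher-lying minimum plays the role of the false vacuum; a transition to the
# lower minimum, by tunnelling through the barrier between them, is the 'bubble
# nucleation' described above.
```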
This, however, does not mean that the bubble nucleation of a lower-energy vacuum is physically impossible; it merely entails that such a process was not involved in inflation. If the current state of the inflaton field is still not the lowest energy state of that field, and if the current state is a local minimum, it may be that the current state will be ended by quantum tunnelling and bubble nucleation. Moreover, it may be that particle accelerators or laser-fusion devices of the future will generate sufficient energy density to perturb the current state of the inflaton field out of its local minimum, and over the potential barrier into a lower-energy state. A bubble of lower-energy vacuum could thereby form in the laboratory, and propagate outwards at the speed of light, destroying all in its path.
Sunday, November 11, 2007
The philosophy of music
Modern scientists tend to possess an almost complete ignorance of philosophy, which renders many of their more general claims naive and parochial. Conversely, however, much of 20th-century philosophy was driven by a careerist desire to build an academic discipline which could stand independently of scientific discovery and understanding. Many 20th-century philosophers received an arts-based education just as narrow as that of modern scientists, and, lacking an understanding of science, strove to build an 'analytic philosophy' centred around the analysis of natural language. These philosophers sought to believe that their discipline was logically independent of science because their careers were dependent upon such a claim.
This is not what philosophy should be. Philosophy should be the great over-arching, unifying, synthesising discipline. In an age of deepening academic specialisation, philosophy should be reacting against this trend. Philosophers should be competent in the arts and the sciences. In particular, a lack of mathematical and scientific competency in a philosopher should be considered a crippling disability, comparable to a lack of logical competency.
A perfect example of the lack of ambition and imagination in modern philosophy is provided by Andrew Kania's survey of the philosophy of music. The key phrase in this account can be found in the preamble, where Kania states that the work considered is "in an analytic vein." This, amongst other things, is philosophic code for "there will be no discussion of scientific research here." The ensuing discussion therefore makes no reference either to neuroscience or biological evolution. Vital issues such as the purported universality of musical appreciation, or the emotions evoked by some music, can only be fully understood by integrating conceptual discussion with evidence from neuroscience and cognitive evolution theory.
Which parts of the brain are activated during the production and appreciation of music? What types of interaction occur between the cerebrum (the part of the brain responsible for conscious thought), the cerebellum (the part of the brain responsible for unconscious, 'second-nature' behaviour), and the amygdala (the 'levers' of the emotions)? When music is emotionally ambiguous or neutral, but nevertheless evokes an aesthetic appreciation in the listener, which parts of the brain are then activated? How do such patterns of brain activity help us to understand music, if at all? All the scientific theory and evidence here needs to be incorporated into the philosophical discussion if the philosophy is to be of any real interest or relevance.
What role, if any, does music play from the perspective of biological evolution? What light, if any, can evolution throw upon the ontology of music, and the emotions sometimes expressed by music? Consider the following argument by John Barrow: "neither musical appreciation, nor any dexterous facility for musical performance, is shared by people so widely, or at a high level of competence, in the way that linguistic abilities are. In such circumstance it is hard to believe that musical abilities are genetically programmed into the brain in the way that linguistic abilities appear to be. The variations in our ability to produce and respond to music are far too great for musical ability to be an essential evolutionary adaptation. Such diversity is more likely to arise if musical appreciation is a by-product of mental abilities that were adaptively evolved primarily for other purposes. Unlike language, music is something that our ancestors could live without," (The Artful Universe, p196). Is this true? Kania's survey does not enable one to access such discussions, because such discussions are not within its remit.
There is a need for broadly-educated philosophers, with sufficient will and courage to look beyond their careerist aspirations, and to write genuinely unifying, all-embracing philosophy; work which cannot be published because it doesn't fall into the narrow domain of any particular journal; work which doesn't, therefore, enable those philosophers to gain promotion by increasing their citations ranking.
Am I asking too much?
Thursday, November 08, 2007
Can lobsters feel pain?
Robert Elwood, of Queen's University, Belfast, has announced research which, he argues, demonstrates that prawns, and other crustaceans such as lobsters, can feel pain. Surprisingly, this research, to be published in Animal Behaviour, didn't involve a detailed analysis of the neurology of crustaceans, but, rather, involved daubing an irritant, acetic acid, on to one of the two antennae of each prawn in a collection of 144 prawns. Immediately, the creatures began grooming and rubbing the affected antenna for up to 5 minutes. Elwood argues that "the prolonged, specifically directed rubbing and grooming is consistent with an interpretation of pain experience."
Elwood, however, is conflating a controlled response to a potentially damaging stimulus, with the conscious experience or feeling of pain.
Elwood's use of the phrase "consistent with an interpretation of pain experience" is crucial here. This is a much weaker assertion than the claim that something has been observed which provides evidence in favour of a pain experience. A controlled response to a potentially damaging stimulus does not entail that pain is experienced. Even a single cell can respond to a potentially damaging stimulus, hence the observations made by Elwood and his colleagues are also consistent with the absence of experienced pain. If the observed behaviour is consistent with both the presence and absence of experienced pain, then it cannot constitute evidence to support the hypothesis that crustaceans are capable of experiencing pain.
Tuesday, November 06, 2007
What is a theory in physics?
In terms of mathematical logic, a theory is a set of sentences, in some language, which is 'closed' under logical entailment. In other words, any sentence which is entailed by a subset of sentences from the theory, is itself already an element of the theory. Now, physicists mean something slightly different from this when they refer to something as a theory; a theory in physics is more akin to a class of 'models'.
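In symbols (standard model-theoretic notation, added here for clarity), a set of sentences T in a language L is a theory when it coincides with its own set of logical consequences:

$$ T = \mathrm{Cn}(T) = \{ \varphi \in \mathrm{Sent}(L) : T \vDash \varphi \}. $$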
In this context, a model for a set of sentences is an 'interpretation' of the language in which those sentences are expressed, which renders each sentence as true. An interpretation of a language identifies the domain over which the variables in the language range; it identifies the elements in the domain which correspond to the constants in the language; it identifies which elements in the domain possess the predicates in the language, which n-tuples of elements are related by the n-ary relations in the language, and which elements in the domain result from performing n-ary operations upon n-tuples in the domain.
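As a deliberately trivial illustration of what an interpretation supplies, the sketch below (an invented example, with a made-up domain and symbols) interprets a language containing one constant symbol, one binary operation symbol and one binary relation symbol over a four-element domain, and checks two simple sentences against it.

```python
# A toy interpretation of a language with a constant 'e', a binary operation '*'
# and a binary relation 'R'. Domain, assignments and sentences are invented for
# illustration only.
domain = {0, 1, 2, 3}                              # what the variables range over
const_e = 0                                        # element assigned to the constant 'e'
op_star = lambda a, b: (a + b) % 4                 # function assigned to the operation '*'
rel_R = {(a, b) for a in domain for b in domain    # pairs standing in the relation 'R'
         if (b - a) % 4 in (0, 1)}

# Sentence 1: "for all x there exists y such that x * y = e"
sentence_1 = all(any(op_star(x, y) == const_e for y in domain) for x in domain)

# Sentence 2: "for all x, R(x, x)"
sentence_2 = all((x, x) in rel_R for x in domain)

# Both print True: this interpretation is a model of both sentences.
print(sentence_1, sentence_2)
```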
Each theory in mathematical physics has a class of models associated with it. As the philosopher of physics John Earman puts it, "a practitioner of mathematical physics is concerned with a certain mathematical structure and an associated set M of models with this structure. The...laws L of physics pick out a distinguished sub-class of models,...the models satisfying the laws L (or in more colorful, if misleading, language, the models that 'obey' the laws L)."
The laws which define a class of mathematical models therefore define a theory as far as physicists are concerned. If one retains the same general class of mathematical structure, but one changes the laws imposed upon it, then one obtains a different theory. Thus, for example, whilst general relativity represents space-time as a 4-dimensional Lorentzian manifold, if one changes the laws imposed by general relativity upon a Lorentzian manifold (the Einstein field equations), then one obtains a different theory.
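For reference, the laws in question are the Einstein field equations, which in their standard form (with cosmological constant Λ) read

$$ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}; $$

a theory with the same Lorentzian-manifold structure but a modified set of field equations would, in the sense just described, be a different theory.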
Physicists find that, at a classical level, the equations of a theory can be economically specified by something called a Lagrangian, hence physicists tend to identify a theory with its Lagrangian. In superstring theory, there are five candidate theories precisely because there are five candidate Lagrangians. This point is particularly crucial because it also explains why physicists associate different theories with different 'vacua'.
The Lagrangians of particle physics typically contain scalar fields, such as the Higgs field postulated to exist by the unified electroweak theory. These scalar fields appear in certain terms of the Lagrangian. The scalar fields have certain values which constitute minima of their respective potential energy functions, and such minima are called vacuum states (or ground states). If one assumes that in the current universe such scalar fields reside in a vacuum state (as the consequence of a process called symmetry breaking), then the form of the Lagrangian changes to specify this special case. After symmetry breaking, the Lagrangian is not the Lagrangian of the fundamental theory, but an 'effective' Lagrangian. Hence, the selection of a vacuum state changes the form of the Lagrangian, and because a Lagrangian defines a theory, the selection of a vacuum state for a scalar field is seen to define the selection of a theory. Physicists therefore tend to talk, interchangeably, about the number of possible vacua, and the number of possible theories in string theory.
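The textbook Higgs-type potential (a standard illustration, not anything specific to string theory) makes this concrete. A complex scalar field with potential

$$ V(\phi) = \mu^2 |\phi|^2 + \lambda |\phi|^4, \qquad \mu^2 < 0, \ \lambda > 0, $$

has its minima not at φ = 0 but at |φ|² = −μ²/(2λ); expanding the Lagrangian about one of these vacuum values, rather than about φ = 0, produces the 'effective' Lagrangian referred to above, with a different form, and hence, in the physicists' sense, a different theory.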
Monday, November 05, 2007
How to solve global warming
Anthropogenic global warming is caused by the emission of greenhouse gases such as carbon dioxide and methane. The Earth re-radiates energy from the Sun as infrared radiation, and greenhouse gases such as carbon dioxide and methane absorb infrared radiation, hence the temperature of the atmosphere will increase if the atmospheric concentration of carbon dioxide and methane increases.
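The quantitative reason the Earth's emission lies in the infrared, while the Sun's is largely visible, is Wien's displacement law: a black body at temperature T emits most strongly near

$$ \lambda_{\max} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}, $$

so the Sun (T ≈ 5800 K) peaks near 0.5 μm, in the visible, whereas the Earth's surface (T ≈ 288 K) peaks near 10 μm, in the thermal infrared, which is precisely where gases such as carbon dioxide and methane absorb.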
Most proposed solutions to global warming suggest either a reduction in the anthropogenic emission of greenhouse gases, or various technological schemes for the removal of greenhouse gases from the atmosphere.
This, however, is not the correct way to approach the problem. A clue to the correct approach can be found by looking at another solution, which proposes increasing the reflectivity ('albedo') of the Earth's surface by making as much of it white as possible. This proposal works because incoming radiation at visible wavelengths is reflected back into space at the same visible wavelengths, thereby avoiding absorption by greenhouse gases.
I propose, then, that rather than looking at greenhouse gases such as carbon dioxide as the problem, it is the production of infrared radiation by the Earth which is the problem to be solved. If one could release a compound en masse, either into the atmosphere or deposited upon the surface of the Earth, which absorbs infrared radiation and re-emits it at visible wavelengths, then the radiation emitted by the Earth would pass unhindered through the greenhouse gases into space.
This requires a so-called 'Anti-Stokes' material: "When a phosphor or other luminescent material emits light, in general, it emits light according to Stokes' Law, which provides that the wavelength of the fluorescent or emitted light is always greater than the wavelength of the exciting radiation...Anti-Stokes materials typically absorb infrared radiation in the range of about 700 to about 1300 nm, and emit in the visible spectrum." A variety of Anti-Stokes phosphors, based on yttrium, exist for the conversion of infrared radiation into visible radiation.
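One constraint worth noting (a simple energy-conservation estimate on my part, not a claim from the quoted source): converting infrared photons into visible photons raises the energy per photon, so an anti-Stokes material must pool the energy of two or more absorbed infrared photons, or draw on thermal energy, for each visible photon it emits. Using

$$ E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{\lambda}, $$

a 1000 nm infrared photon carries about 1.24 eV, whereas a 550 nm visible photon carries about 2.25 eV, so at least two infrared photons are required per visible photon in the quoted wavelength range.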
Intriguingly, lanthanum hexaboride is already being used on a trial basis in office windows to absorb all but 5% of the incident infrared radiation...
Thursday, November 01, 2007
Infinite suffering
Here's a nice snappy discussion of the prospects, and consequences, of universe creation 'in a laboratory'. The (anonymous) author quotes my own paper on the subject (always a wise move), and argues that
There is a non-trivial probability that humans or their descendants will create infinitely many new universes in a laboratory. Under plausible assumptions, this would, with probability one, entail the creation of infinitely many sentient organisms...My main concern is that the potential creators of lab universes would give insufficient consideration to the suffering that they would cause. They might think of the project as "cool" or "exciting" without thinking hard about the consequences for real organisms. I fear that, because potential universe creators would have lived generally happy lives--never having been brutally tortured, eaten alive, or slaughtered while conscious--they would be less sensitive to how bad pain can really be.