Tuesday, October 30, 2007

The definition of death

Here's a superb account, by David DeGrazia, of the issues pertaining to the definition of death. DeGrazia sets the scene as follows:

According to the whole-brain standard, human death is the irreversible cessation of functioning of the entire brain, including the brainstem. This standard is generally associated with an organismic definition of death. Unlike the older cardiopulmonary standard, the whole-brain standard assigns significance to the difference between assisted and unassisted respiration. A mechanical respirator can enable breathing, and thereby circulation, in a brain-dead patient—a patient whose entire brain is irreversibly nonfunctional. But such a patient necessarily lacks the capacity for unassisted respiration. On the old view, such a patient counted as alive so long as respiration of any sort (assisted or unassisted) occurred. But on the whole-brain account, such a patient is dead. The present approach also maintains that someone in a permanent (irreversible) vegetative state is alive because a functioning brainstem enables spontaneous respiration and circulation as well as certain primitive reflexes.

We may think of the brain as comprising two major portions: (1) the “higher brain,” consisting of both the cerebrum, the primary vehicle of conscious awareness, and the cerebellum, which is involved in the coordination and control of voluntary muscle movements; and (2) the “lower brain” or brainstem. The brainstem includes the medulla, which controls spontaneous respiration, the ascending reticular activating system, a sort of on/off switch that enables consciousness without affecting its contents (the latter job belonging to the cerebrum), as well as the midbrain and pons.

Whole-brain death involves the destruction of the entire brain, both the higher brain and the brainstem. By contrast, in a permanent vegetative state (PVS), while the higher brain is extensively damaged, causing irretrievable loss of consciousness, the brainstem is largely intact. Thus, as noted earlier, a patient in a PVS is alive according to the whole-brain standard. Retaining brainstem functions, PVS patients exhibit some or all of the following: unassisted respiration and heartbeat; wake and sleep cycles (made possible by an intact ascending reticular activating system, though destruction of the cerebrum precludes consciousness); pupillary reaction to light and eye movements; and such reflexes as swallowing, gagging, and coughing. A rare form of unconsciousness that is distinct from PVS and tends to lead fairly quickly to death is permanent coma. This state, in which patients never appear to be awake, involves partial brainstem functioning. Permanently comatose patients, like PVS patients, can maintain breathing and heartbeat without mechanical assistance.

The whole-brain approach clearly enjoys advantages. First, whether or not the whole-brain standard really incorporates, rather than replacing, the traditional cardiopulmonary standard, the former is at least fairly continuous with traditional practices and understandings concerning human death. Indeed, current law in the American states incorporates both standards into disjunctive form, most states adopting the Uniform Determination of Death Act (UDDA) while others have embraced similar language. The UDDA states that “… an individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem, is dead.”

As mentioned, every American state has legally adopted the whole-brain standard alongside the cardiopulmonary standard as in the UDDA. It is worth noting, however, that a close cousin to the whole-brain standard, the brainstem standard, was adopted by the United Kingdom and various other nations (including several former British colonies). According to the brainstem standard—which has the practical advantage of requiring fewer clinical tests—human death occurs at the irreversible cessation of brainstem function. One might wonder whether a person's cerebrum could function—enabling consciousness—while this standard is met, but the answer is no. Since the brainstem includes the ascending reticular activating system, the on/off switch that makes consciousness possible (without affecting its contents), brainstem death entails irreversible loss not only of unassisted respiration and circulation but also of the capacity for consciousness. Importantly, outside the English-speaking world, many or most nations, including virtually all developed countries, have legally adopted either whole-brain or brainstem criteria for the determination of death. Moreover, most of the public, to the extent that it is aware of the relevant laws, appears to accept such criteria for death. Opponents commonly fall within one of two main groups. One group consists of religious conservatives who favor the cardiopulmonary standard, according to which one can be brain-dead yet alive if (assisted) cardiopulmonary function persists. The other group consists of those liberal intellectuals who favor the higher-brain standard, which, notably, has not been adopted by any jurisdiction.

Monday, October 29, 2007

Inventions week - Day 6

Shooting the intellectual rapids with an ideational canoe, and pole-vaulting no-go theorems with Sergei-Bubka agility, here are today's ideas:


  • Piezo-electric sausage rolls: It's a fact that whilst the sausage rolls at buffet lunches are often delicious, they're also, typically, a bit cold. Fortunately, piezoelectric devices can generate electric currents through the application of internal stress forces. Thus, with merely a small twist, a piezo-electric sausage roll is re-heated, returning to its optimally edible state once more.

  • Idea detectors: It's a little-known fact that when people have a good idea, the thought ripples across the neural network of the brain faster than the speed of light in the cerebrum. This creates a type of shock wave called Cerenkov radiation. If one were to carefully remove the skull of a really smart person, then one would see a blue glow emanating from the myriad ideas generated by that individual. Cerenkov radiation can be detected by photomultiplier tubes (known as PMTs to those in the business), hence by embedding PMTs inside the head of a smart person we could learn more about the nature of genius.

  • Ignorance flux: In semiconductor physics, just as a free electron can move through an ionic metal lattice, so the absence of an electron can move through the lattice as a so-called 'hole'. Similarly, just as one can have information flows in telecommunications and computer science, one can also have ignorance flows consisting of propagating gaps in knowledge.

  • An ironing-board cover which doesn't rumple-up.

Happyslapped by a jellyfish

I've never really understood gay magazines. Why do gay fellas need pictures of nobs when they've got one of their own to look at?

This week I'm reading 'Happyslapped by a jellyfish', a collection of short travelogues by the hapless Karl Pilkington. I'm quite enjoying this. As a number of commentators have remarked, Pilkington isn't really that funny. He makes some nice observations though, and I particularly enjoy his simple working-class perspective upon things.

I do like fruit, but I find that some of it involves too much messing about to get into. I mainly buy fruit that I can have as a snack when I'm out for the day. Apples are good for this. Bananas are good. Plums are fine. Pineapples are too much hassle. That's why you never see anyone buying pineapples in supermarkets. People should stop growing them.

There's too much fruit knocking about nowadays, and I think this is why we're told to eat five pieces a day - it's to get rid of it all. Once I was drinking some orange cordial and thought "this tastes a bit weird" and looked at the label. It wasn't just orange, they'd gone and slipped in some pineapple that they couldn't get rid of. This is happening more and more.

Saturday, October 27, 2007

The neuroscience of sporting confidence

It's a sporting cliche to explain a loss of form by saying that some team or individual have "lost their confidence." The concept can, however, be justified and understood in neuroscientific terms. When a team or individual perform with confidence, then their actions are under the control of the cerebellum, part of the hindbrain (the most ancient part of the brain in evolutionary terms); when confidence is lost, then their actions tend to fall under the control of the cerebrum, part of the forebrain. The distinction between the two types of control is beautifully explained by Roger Penrose:

The cerebellum...is responsible for precise coordination and control of the body - its timing, balance, and delicacy of movement. Imagine the flowing artistry of a dancer, the easy accuracy of a professional tennis player, the lightning control of a racing driver, and the sure movement of a painter's or musician's hands...Without the cerebellum, such precision would not be possible, and all movement would become fumbling and clumsy. It seems that, when one is learning a new skill, be it walking or driving a car, initially one must think through each action in detail, and the cerebrum is in control; but when the skill has been mastered - and has become 'second nature' - it is the cerebellum that takes over. Moreover, it is a familiar experience that if one thinks about one's actions in a skill that has been so mastered, then one's easy control may be temporarily lost. Thinking about it seems to involve the reintroduction of cerebral control and, although a consequent flexibility of activity is thereby introduced, the flowing and precise cerebellar action is lost. (The Emperor's New Mind, p490).

If a team or individual lose sporting confidence for whatever reason, perhaps even just because of some random mistakes, then this tends to provoke a conscious analysis of technique, which requires the intervention of the cerebrum.

There are, however, other causes for the intervention of the cerebrum. A change of conditions or circumstances can also be sufficient to trigger cerebral control. For example, at the beginning of the year, Grand Prix drivers Kimi Raikkonen and Fernando Alonso both struggled to adapt to the unfamiliar characteristics of their newly adopted Bridgestone tyres; as they attempted to change their driving styles, their form declined because their normal cerebellar control was being overridden by the cerebrum. However, with the passage of time, control returned to the cerebellum, and both Raikkonen and Alonso finished the season strongly.

Tuesday, October 23, 2007

Karl Pilkington

There's not much going on in my head most of the time. In fact, I like nothing better than dozing, or resting, or even just daydreaming. Perhaps, when I'm feeling energetic, I'll have a cup of tea and a cookie, and read a good magazine, but that's about it really.

Karl Pilkington, therefore, is something of a hero of mine. Last night, Karl featured in a documentary on Channel 4, 'Satisfied Fool'. Karl wondered if he'd be any happier today if he'd worked harder at school, or if he was smarter. The basic premise of the programme, therefore, was that Karl would meet four smart people, in order to find out. Sadly, Channel 4 could only find three, so Karl initially had to meet Germaine Greer. Now, such an engagement would fill me with the enthusiasm I normally reserve for having an improvised explosive device inserted into the scroticular groove between my testicles. Karl, however, seemed unperturbed by Greer's shrill anti-logic, and even managed to impress her by recounting the time he tried to cook some sausages in the toaster.

Next on the agenda was Heinz Wolff. Karl pointed out that aliens must be more intelligent than us because they have better, faster space-ships. Professor Wolff was both non-plussed and angered by this, so Karl moved on to meet David Icke, who, as Karl pointed out, had once claimed that he was the son of God, "and that the Queen was a lizard, or something."

Icke, in fact, proved to be a top bloke. He asked Karl if there are times when he feels he'd rather be smart, and Karl explained that when he tells a story in the pub, about, say, a girl who eats only mud, people respond by saying it's rubbish, and he often has to back down. "Socrates," said David, "once said that wisdom is knowing how little we know," to which Karl retorted, "You see, that's the sort of clever thing I want to be able to quote in the pub!"

So, finally, Karl met Will Self. After taking Karl up what appeared to be eight flights of book-festooned stairs, Self interrogated Karl, and, when Karl finally admitted he'd rather be happy than smart and unhappy, Self told him to "Fuck off then". Self escorted him back to the front door, and remarked that he didn't know whether to "kick you all the way down the street, or strip you naked, and bathe you in palmolive oil." Karl retorted, "Both of those sound bad to me."

Sunday, October 21, 2007

What is a magnetic monopole?

All the known magnetic fields in nature are produced by moving electric charges. When a voltage is applied to the ends of a wire, the free electrons flowing through the conductor produce a magnetic field around the wire. Similarly, individual atoms and molecules possess magnetic fields due to the orbital motion and intrinsic spin of their bound electrons. These magnetic fields can be aligned to produce an aggregate magnetic field in something like a bar magnet.

Whilst all known magnetic fields are produced by electric charges, the Maxwell equations for an electromagnetic field can be easily and naturally modified to incorporate magnetic charges and magnetic currents. Moreover, even if one restricts attention to the standard Maxwell equations, there are solutions which describe a purely magnetic field. These solutions seem to require that space, and space-time, possess a non-trivial topology. Starting with Minkowski space-time, R4, it is necessary to remove a timelike curve to obtain a manifold with topology S2 × R2. The curve removed is considered to be the worldline which would be occupied by the magnetic charge.
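
For illustration, in units where c = 1 (a choice of convention on my part, not one made in the text), the extended equations can be written with electric sources (ρe, Je) and magnetic sources (ρm, Jm) appearing symmetrically:

\nabla \cdot \mathbf{E} = \rho_e, \qquad \nabla \times \mathbf{B} - \partial_t \mathbf{E} = \mathbf{J}_e,
\nabla \cdot \mathbf{B} = \rho_m, \qquad -\nabla \times \mathbf{E} - \partial_t \mathbf{B} = \mathbf{J}_m.

Setting ρm and Jm to zero recovers the standard Maxwell equations, in which the divergence of the magnetic field always vanishes.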

In the gauge field formulation of electromagnetism, an electromagnetic field is specified by an object called a 'connection one-form' ω upon a U(1)-principal fibre bundle over space-time M. A magnetic monopole field is specified by a connection one-form ω upon a U(1)-principal fibre bundle over S2 × R2. To this connection there corresponds a 'curvature two-form', which induces an electromagnetic field strength tensor Fμν upon space-time M. The following integral

\frac{1}{2\pi} \int_{S_r} F

over S_r, a sphere of radius r around the monopole, specifies the charge of the monopole.

Selecting different U(1)-principal fibre bundles over S2 × R2 enables one to define connections corresponding to different magnetic charges. If P is the U(1)-principal fibre bundle corresponding to a magnetic charge of 1, then for any positive integer n, the cyclic group Zn enables one to construct another U(1)-principal fibre bundle Pn, over which P provides an n-fold cover via the n-fold covering map U(1) → U(1)/Zn. The bundle Pn can then be equipped with connections corresponding to a magnetic charge of n.

In each case, the closed 2-form (1/2π)F represents the '1st Chern class' of the principal fibre bundle. This is an equivalence class of closed 2-forms on the space-time manifold, two forms being equivalent if they differ by an exact form. The Chern class is independent of the choice of connection; it is a topological invariant of the principal fibre bundle. The set of such equivalence classes has the structure of a group, called the 2nd (de Rham) cohomology group of the space-time manifold M. Magnetic monopoles of different charges thereby correspond to different elements of the 2nd cohomology group, by virtue of the fact that they correspond to inequivalent U(1)-principal fibre bundles over space-time M.
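
Schematically, and suppressing convention-dependent factors of the electric coupling, the relationship can be written as follows:

c_1(P) = \left[ \tfrac{1}{2\pi} F \right] \in H^2(M;\mathbb{R}), \qquad \frac{1}{2\pi} \int_{S_r} F = n \in \mathbb{Z},

so the admissible monopole charges are labelled by the integers, with different values of n corresponding to the inequivalent bundles described above.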

Magnetic monopoles thereby require a space-time topology with non-trivial cohomology, and this rules out Minkowski space-time, R4. Note carefully, however, that whilst S2 × R2 has a non-trivial cohomology, it remains simply connected. The existence of magnetic monopoles does not, therefore, require space-time to have a non-simply connected topology.

Tuesday, October 16, 2007

Braneworld Cosmology

Superstring theory initially proposed that the universe is a 10-dimensional space-time, with the topology of a product manifold Y × M, where Y is a compact 6-dimensional space, and M is the flat, 4-dimensional Minkowski space-time of special relativity. It was suggested that the additional 6 spatial dimensions are not seen or detected because they are of very small diameter. The additional dimensions were said to be 'compactified'.

Whereas general relativistic cosmology proposed a variety of curved 4-dimensional geometries and topologies to represent the entire universe, superstring theory seemed incapable of embracing such cosmological scenarios. Braneworld cosmology attempted to rectify this.

Strings can be open or closed, and it is now proposed that the ends of open strings must be confined to the surfaces of p-dimensional submanifolds in the 9 dimensions of space. These submanifolds are called D-branes, or Dirichlet-branes. Braneworld cosmology proposes that our universe may be a 3-dimensional D-brane (or part of such a brane), sweeping out a 4-dimensional submanifold in the 10-dimensional 'bulk' of space-time, and the bulk may contain many such D-branes. It is even proposed that these D-branes could dynamically interact with each other, perhaps triggering events such as the 'Big Bang' in our universe. So, rather than our universe having 9 spatial dimensions, 6 of which are compactified, it is now suggested that our 3-dimensional spatial universe simply doesn't extend into the extra 6 dimensions. Closed strings, whose excitations include the graviton, are not confined to D-branes, but can interact with D-branes. Hence, whilst our universe does not extend into the extra dimensions, it is influenced by those extra dimensions.
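
In the simplest flat-space description (a standard textbook statement of the boundary conditions, rather than anything specific to the article mentioned below), an open string ending on a Dp-brane satisfies Neumann boundary conditions along the brane and Dirichlet conditions transverse to it:

\partial_\sigma X^\mu \big|_{\sigma=0,\pi} = 0 \quad (\mu = 0,\dots,p), \qquad X^i \big|_{\sigma=0,\pi} = x_0^i \quad (i = p+1,\dots,9).

It is the Dirichlet conditions, pinning the string endpoints to the hypersurface X^i = x_0^i, which give D-branes their name; for the braneworld scenario just described, p = 3.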

As this article in November's Scientific American attests, it is now suggested that the characteristics of the 'inflaton' scalar field (the field held responsible for inflation, the hypothetical period of exponential expansion early in the history of our universe) may be explained either by some of the scalar fields proposed in relation to dynamical branes, or by the so-called moduli fields which define the shape and size of the extra 6 spatial dimensions.

Wednesday, October 10, 2007

Is time one-dimensional?

Pseudo-Riemannian geometry permits one to define a geometry with an arbitrary number of spatial dimensions and an arbitrary number of temporal dimensions. The pair (n,m) specifying the number n of spatial dimensions, and the number m of temporal dimensions, is called the signature of the geometry. General Relativity plunders the mathematical resources provided by pseudo-Riemannian geometry, and suggests, in particular, that our universe has a Lorentzian geometry, a geometry of signature (n,1). Hence, if our universe is Lorentzian, then there is only one dimension of time.
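
As a concrete illustration (with the sign convention, itself just a convention, that temporal directions contribute negatively), a flat metric of Lorentzian signature (3,1) and a flat metric with two time dimensions, signature (3,2), take the respective forms:

ds^2 = dx^2 + dy^2 + dz^2 - dt^2, \qquad ds^2 = dx^2 + dy^2 + dz^2 - dt_1^2 - dt_2^2.

In the second case, a 'moment in time' is no longer specified by a single value of t, but by a pair (t1, t2), which is what generates the puzzle for endurantism discussed below.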

However, The Daily Telegraph's Roger Highfield, and this week's New Scientist, both report on Itzhak Bars's suggestion that time may have two dimensions. If true, this requires a modification to what philosophers call the 'endurantist' notion of the persistence of an object through time. The endurantist position holds that the object which possesses a property at one time, is the same whole object which does or does not possess that property at another time. If the specification of a particular time requires the specification of two time coordinates, then the properties of an object could vary as one time coordinate is held fixed, but the other is varied. Is this the same 'whole' object which is varying or not?

There is an alternative to endurantism, dubbed the 'perdurantist' view, which holds that an object has temporal parts, and different temporal parts can possess different properties. On the perdurantist view, the persistence of an object through time is analogous to the extension of an object in space, and the different temporal parts can possess different properties just as much as the different spatial parts of an object can possess different properties. Introducing a second time dimension poses no problems, then, to perdurantism. An object's temporal extension simply extends into more than one temporal dimension.

Tuesday, October 09, 2007

Michael Disney and the case against cosmology

Iconoclasts in cosmology seem to be rather thin on the ground at the moment. Michael J. Disney, however, Emeritus Professor in the School of Physics and Astronomy at Cardiff University, attempts to damage the credibility of the Magical Kingdom in the current issue of American Scientist.

Disney claims that the number of independent observations in cosmology is much smaller than the number of independent free parameters in current cosmological theories, and argues from this that "modern cosmology has at best very flimsy observational support." Seven years ago, Disney wrote 'The Case Against Cosmology', along rather similar lines, making the notable point en route that only "one per cent of the light in the night sky comes from beyond our Galaxy."

However, I don't think Disney has really done anything here other than: (i) re-discover that all scientific theory is under-determined by data; (ii) re-discover that cosmology is a data-starved science; and (iii) re-discover that cosmological theories have to make some rather special assumptions in order to reach any conclusions at all about the universe as a whole. From the emotive language employed, one can detect significant professional jealousy in Disney's diatribe. For example, in his year 2000 paper, Disney wrote:

Much of cosmology is unhealthily self-referencing and it seems to an outsider like myself that cosmological fashions and reputations are made more by acclamation than by genuine scientific debate.

A scientific discipline in which the participants only read and reference each other's papers. Imagine! And in this context, as is oft the case with a scientist writing on philosophical issues, it is noticeable that Disney has made no effort to read or reference the relevant literature on the philosophy of cosmology. That, quite simply, is poor scholarship.

Whither, and from whence, the weather?

The weather today was grim; in fact, it was grimmer than the grimmest spell in a grimoire written by the Brothers Grimm.

The jet-stream moved Southwards this year, and as a consequence, the UK's Summer weather was dominated by a succession of North Atlantic depressions. If the jet-stream remains so over the next few months, a particularly cold Winter is predicted.

It is appropriate, then, to recall Lewis Fry Richardson, the man who invented the modern weather forecast as a numerical solution of a coupled set of differential equations. Richardson's astonishing efforts are recalled in Peter Lynch's recent book, The Emergence of Numerical Weather Prediction: Richardson's Dream, reviewed in the current issue of American Scientist. Lynch points out that:

[Richardson] set out to calculate the weather. He built a working mathematical model of the Earth's atmosphere, based on straightforward physical rules. For example, one rule says that if regions differ in barometric pressure, then air will start to flow along the gradient toward the lower-pressure area. Richardson filled in initial values of pressure, wind velocity and so on, and then traced the model's evolution over time.

Models based on essentially the same principles now run on supercomputers capable of trillions of operations per second. Richardson, however, worked entirely with pencil and paper, using a sheaf of forms he had printed up to guide the computations; his only aids to calculation were a slide rule and a table of logarithms. Furthermore, the circumstances in which he did all this arithmetic have made the project legendary. The time was World War I. Richardson, a pacifist, was serving as a volunteer with the Friends Ambulance Unit in the north of France. He performed his calculations between calls to carry wounded from the front. "My office," he reported, "was a heap of hay in a cold rest billet."
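
As a crude illustration of the kind of calculation involved, here is a toy, one-dimensional sketch of pressure-gradient-driven flow. It is emphatically not Richardson's scheme, and every numerical value in it is an assumption chosen purely for illustration:

# Toy 1-D sketch of pressure-gradient-driven flow, in the spirit of (but far
# simpler than) Richardson's finite-difference weather model.
import numpy as np

nx, dx, dt = 100, 100e3, 60.0       # 100 cells, 100 km spacing, 60 s time step
rho, c = 1.2, 340.0                 # air density (kg/m^3) and sound speed (m/s)
x = np.arange(nx) * dx
p = 101325.0 + 500.0 * np.exp(-((x - x.mean()) / 5e5) ** 2)   # pressure bump (Pa)
u = np.zeros(nx)                    # wind, initially calm

for step in range(60):              # integrate one hour
    # air accelerates down the pressure gradient, toward lower pressure
    u -= dt * np.gradient(p, dx) / rho
    # converging wind raises the pressure, diverging wind lowers it
    p -= dt * rho * c**2 * np.gradient(u, dx)

print(f"peak wind after one hour: {u.max():.1f} m/s")

Richardson did essentially this sort of arithmetic, for a far richer set of equations and variables, by hand.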

Saturday, October 06, 2007

Andy Green and Thrust SSC

It's now 10 years since Andy Green broke the world land-speed record, and the sound barrier, in Richard Noble's Thrust SSC at Black Rock Desert, Nevada. Erstwhile fighter-pilot Green is a remarkable individual, and this month's Motorsport magazine carries a gripping account of the difficulties (those are 'challenges', for younger readers) he faced.

We went back to Jordan in May 1997...We started to use some quite high-end acceleration, and on the first full afterburner run I'm taking it nice and steady, 200, 250, 300, max burner, and suddenly it's going all over the place. I just cannot keep it in a straight line.

Driving a normal car, if it diverges, you respond to correct it. But I was getting into what in an aircraft you'd call a pilot-induced oscillation. The vehicle diverges, you put in a response through the steering wheel, it responds, you put in another, but at that point the vehicle is reversing its response, so you get a double input. You and the vehicle are working out of phase. Every time there's the slightest input, say from irregularity in the desert surface or a crosswind, you've got to correct it. But if you start steering at the frequency of the car it gets worse, not better. I got out of the car that day and I said to myself, I think this project has just finished. The car's unstable. I think we're stuffed.

I didn't feel I could talk to Richard or anyone else about it, so I spent the night thinking it through. And I came up with the conviction that I was simply going to have to bully the car into being stable, go very high frequency with my steering movements to keep it under control, coupled with lower frequency inputs to steer it. I'd have to steer it at two separate frequencies, simultaneously. So that's what I did, and it worked. I'd found a way to drive the thing. The downside was that I was never going to enjoy it. Imagine taking all the hairiest bits out of a three-hour sortie in a Tornado, and doing them all in two minutes. It was the hardest thing I've ever done: like trying to balance the point of a pencil on the end of your finger.
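
What Green describes is, in control-theory terms, a delayed-feedback instability. The following toy simulation (not a model of Thrust SSC; the lag, gain, and damping figures are invented purely for illustration) shows how corrections applied with a lag can amplify a small disturbance rather than damp it:

# Toy delayed-feedback steering loop.  The driver steers against the heading
# error, but the vehicle only feels each input after a fixed lag, so driver
# and vehicle end up working out of phase and the disturbance grows.
dt, lag_steps, gain, damping = 0.01, 25, 8.0, 0.5   # 0.01 s steps, 0.25 s lag
heading, rate = 0.05, 0.0                           # small initial disturbance
pending = [0.0] * lag_steps                         # steering inputs in transit
trace = []

for step in range(1500):                            # simulate 15 seconds
    pending.append(-gain * heading)                 # correction issued now...
    felt = pending.pop(0)                           # ...but felt only after the lag
    rate += dt * (felt - damping * rate)            # lagged input drives the yaw rate
    heading += dt * rate
    trace.append(abs(heading))

# With lag_steps = 0 the same loop slowly damps the disturbance instead.
print(f"disturbance grew from 0.05 to roughly {max(trace[-300:]):.1f}")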


The team moved to Black Rock Desert in September 1997, and on October 15th set a record of 763.035 mph, Mach 1.02.

We had active suspension because every time the aerodynamic neutral point moves you have to alter the pitch of the car to compensate. And plus or minus one degree of pitch on the car equals plus or minus 10 tons of load on the front wheels. Running the active fully up through the trans-sonic region increased the load on the front wheels by almost 10 tons. The wheels started to plough into the desert.

We were generating some very powerful shock waves in front of the car. We found the wheels were actually going round less fast than the ground speed: we were ploughing the surface in front of the car. The shock wave was digging up the desert, so the four individual wheel tracks disappeared, and we started to get a single ploughed trace 12 feet wide.

Things change so much between Mach 0.8 and Mach 1. When the shock waves start to form, depending on the shape of your vehicle, you've got a mixture of sub-sonic and supersonic flow. If you're doing Mach 0.95 and the air accelerates locally by another five percent, over the cockpit or the curve of a wheel arch, it will form its own little shock wave there. In the trans-sonic period it's doing that in various places around and underneath the car. The pressure difference where the air goes from sub-sonic to supersonic is so great that if it happens underneath the car it can lift it off the ground.

And the car may be fractionally different in shape from one side to the other. It's a hand-built car. We measured it as accurately as we could, but the tiniest difference, the thickness of a few coats of paint, can make the shock waves form earlier on one side. It happens with aircraft when you take them supersonic, but tiny corrections with the controls can fix that. With the car, it's the wheels that have to take the differences in load, and you start to realise the magnitude of the forces involved when a tiny difference can translate to an extra ton of load on one of the front wheels. Once you get well over Mach 1, life gets much easier. It's getting there that's the challenge.

Thursday, October 04, 2007

Evidence for an evaporating black hole?

Quantum field theory in curved space-time predicts that black holes have a finite lifetime, proportional to the cube of the mass of the black hole: the larger the black hole, the longer it takes to evaporate. This prediction is widely believed because of the theoretical plausibility of the connection it makes between thermodynamics and curved space-time, but it is, nevertheless, an unverified speculation. However, a team of astronomers analysing archive data from the Parkes radio telescope in Australia have discovered a very short, but very powerful, burst of radio-wavelength radiation, estimated to have originated from a source 3 billion light years away. The signature of this radio burst has never previously been seen, and, indeed, the sensitivity of many radio telescopes is insufficient to detect such short bursts.
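
For reference, the standard semiclassical expressions for an uncharged, non-rotating black hole of mass M (textbook results, not figures quoted in the papers concerned) are:

T_H = \frac{\hbar c^3}{8 \pi G M k_B}, \qquad t_{\mathrm{evap}} \simeq \frac{5120 \, \pi G^2 M^3}{\hbar c^4},

which gives a solar-mass black hole a lifetime of the order of 10^67 years; only very small black holes, such as hypothetical primordial ones, could be reaching the ends of their lives today.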

The astronomers speculate that the radiation may be the 'last gasp' of an evaporating black hole. Personally, I wonder if it could just be a statistical fluctuation in the data...

Wednesday, October 03, 2007

Shut-up and calculate!

Mr Multiverse, Max Tegmark, wrote an article in New Scientist a couple of weeks ago, advocating once more the notion that the physical universe is nothing more than a mathematical structure. The 'director's cut' version of this article can now be freely downloaded.

I have much sympathy with Tegmark's position, but I would personally say that the physical universe is an instantiation of a mathematical structure, not a mathematical structure itself. It's the difference between an individual thing and a type of thing.

I would also question Tegmark's insistence that a final theory of everything should be bereft of any interpretational 'baggage', by which he means not only observational and experimental terms and phrases, but also non-mathematical theoretical terms such as 'mass' and 'charge'. How does this work when a theory has multiple mathematical formulations, as is generally the case? For example, in quantum theory, the state of a system can be represented by a vector in a Hilbert space in one formulation, and by a positive normalized linear functional upon the self-adjoint part of a C*-algebra in another formulation. If we are to literally equate the state of a system with part of a mathematical structure, from which formulation are the structures to be taken?
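
To make the worry concrete with the simplest example I can think of (a single qubit, which is my illustration rather than Tegmark's): in the Hilbert space formulation the state is a vector, whilst in the algebraic formulation it is a normalized positive linear functional on the algebra of observables:

\psi = \alpha |0\rangle + \beta |1\rangle \in \mathbb{C}^2, \qquad \omega(A) = \langle \psi, A \psi \rangle = \mathrm{Tr}(\rho A), \quad \rho = |\psi\rangle\langle\psi|, \; A \in M_2(\mathbb{C}).

The two objects encode the same physical information, but they are elements of quite different mathematical structures, which is precisely the difficulty for a 'baggage-free' identification of the physics with a unique mathematical structure.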

Tuesday, October 02, 2007

Naked singularities

Exactly how shy is the universe? Does the universe permit naked singularities? It seems that there may now be an observational means of answering this question.

To understand what a naked singularity is, it is first necessary to recognise that space-time singularities come in two forms: spacelike singularities and timelike singularities. The former are consistent with determinism, whilst the latter are not. If timelike singularities exist, then even a complete specification of the state of the entire 3-dimensional universe at a moment in time is insufficient to predict what will subsequently emerge from such a singularity. Note that the Schwarzschild space-time for a non-rotating, electrically neutral black hole, and the corresponding space-time for a white hole, both possess spacelike singularities.

However, the maximally extended Kerr space-time for a rotating black hole contains a timelike singularity. Whilst this is strictly inconsistent with determinism, those Kerr space-times which possess a physically realistic angular momentum contain an event horizon which prevents the unpredictable output from the singularity reaching the outside world. In contrast, a naked singularity is a timelike singularity which is not confined within an event horizon.

The family of Kerr space-time solutions to the Einstein field equations is such that the event horizon disappears if the angular momentum-to-mass ratio exceeds a certain extremal value. Arguments have thus raged for some years over whether a naked singularity could form by a physically realistic process, or whether it is merely a theoretical artifact.
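
In geometrised units (G = c = 1), writing a = J/M for the angular momentum per unit mass, the Kerr horizons lie at:

r_\pm = M \pm \sqrt{M^2 - a^2},

so horizons exist only for a ≤ M; for a > M the square root becomes imaginary, no horizon forms, and the ring singularity is naked. The extremal value referred to above is thus a = M, i.e. J = GM^2/c in conventional units.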

Arlie Petters (Duke University) and Marcus Werner (University of Cambridge) now suggest that the space-time of such a rapidly spinning naked singularity would form a distinctive gravitational lens, enabling astronomers to spot such an object, if it exists, in our own galaxy.