Were gravitational waves first detected in 1987?

March 4th, 2009


In 1987, Joe Weber, a physicist at the University of Maryland, claimed to have detected gravitational waves at exactly the same moment that other astronomers witnessed the famous supernova of that year, SN1987A.

His equipment consisted of several massive aluminium bars that were designed to vibrate in a characteristic way when a large enough gravitational wave passed by.

His claims were ignored largely because other physicists calculated that gravitational waves ought to be several orders of magnitude too weak to be picked up by this kind of gear. (And he’d made several similar claims throughout the 60s and 70s that others had failed to replicate.)

But Weber’s claims may have to be re-examined, says Asghar Qadir, a physicist at the National University of Sciences and Technology in Rawalpindi, Pakistan. He points out that predicting the strength of a gravitational wave is by no means easy and, until recently, only first-order effects had been considered.

He and colleagues have now worked out that in certain circumstances, second-order effects can enhance the waves. But this only happens when there is a certain kind of asymmetry in the event that created them.

But get this: the asymmetry can enhance the waves by a factor of 10^4.

He also points out that SN1987A is aspherical in exactly the way that might create this enhancement. So if SN1987A generated gravitational waves, Weber would have been perfectly able to detect them.

Qadir concludes: “The claim of Weber to have observed gravitational waves from [SN1987A] needs to be re-assessed”.

By all accounts, Weber was a careful experimenter who got something of a rough deal for his efforts (the most comprehensive telling of the tale is in a book called Gravity’s Shadow by Harry Collins).

Weber died in 2000 but it wouldn’t do any harm to re-examine his work in the light of this development.

Ref: arxiv.org/abs/0903.0252: Gravitational Wave Sources May Be “Closer” Than We Think

The LHC’s dodgy dossier?

March 2nd, 2009


There’s no reason to worry about the Large Hadron Collider that is due to be switched on later this year for the second time. The chances of it creating a planet-swallowing black hole are tiny. Hardly worth mentioning really.

But last month, Roberto Casadio at the Universita di Bologna in Italy and a few pals told us that the LHC could make black holes that will hang around for minutes rather than microseconds. And they were rather less certain about the possibility that the black holes could grow to catastrophic size. Far from being utterly impossible, they said merely that this outcome didn’t “seem” possible.

This blog complained that that was hardly the categorical assurance we’d come to expect from the particle physics community. The post generated many comments, my favourite being that we shouldn’t worry because of the Many Worlds Interpretation of quantum mechanics. If the LHC does create Earth-destroying black holes, we could only survive in a universe in which the accelerator broke down.

Thanks to Slashdot, the story got a good airing, with more than a few people pointing out that we need better assurances than this.

Now we can rest easy. Casadio and co have changed their minds. In a second version of the paper, they’ve removed all mention of the black hole lifetime being many minutes (“>> 1 sec” in mathematical terms) and they’ve changed their conclusion too. It now reads: “the growth of black holes to catastrophic size is not possible.”

What to make of this? On the one hand, these papers are preprints. They’re early drafts submitted for peer review so that small problems and errors can be ironed out before publication. We should expect changes as papers are updated.

On the other, we depend on the conclusions of scientific papers for properly argued assurances that the LHC is safe. If those conclusions can be rewritten for public relations reasons rather than scientific merit, what value should we place on them?

Either way, we now know that a few minutes’ work on a word processor makes the LHC entirely safe.

Ref: arxiv.org/abs/0901.2948: On the Possibility of Catastrophic Black Hole Growth in the Warped Brane-World Scenario at the LHC version 2

Thanks to Cap’n Rusty for pointing out the new version

Spiders ’n’ Mars

February 28th, 2009

The best of the rest from the physics arXiv this week:

Discovery of the Arsenic Isotope

The Origin of the Universe as Revealed Through the Polarization of the Cosmic Microwave Background

Chemical Self Assembly of Graphene Sheets

Pricing Strategies for Viral Marketing on Social Networks

Scale Invariance, Bounded Rationality and Non-Equilibrium Economics

Why spiders’ silk is so much stronger than silkworms’

February 27th, 2009


Spider silk and silkworm silk are almost identical in chemical composition and microscopic structure. And yet spider silk is far tougher. “One strand of pencil thick spider silk can stop a Boeing 747 in flight,” say Xiang Wu and colleagues at the National University of Singapore. A pencil-thick strand of silkworm silk, by contrast, couldn’t. Why the difference?

Xiang and co say they’ve worked out why by successfully simulating the structure of both silks for the first time. Both spider silk and silkworm silk are made up of long chains of amino acids called polypeptides and are particularly rich in the amino acids alanine and glycine.

Various imaging techniques have shown that the sequences of amino acids differ slightly between spider and silkworm silk but this alone cannot explain the huge difference in strength, says Xiang.

Instead, the secret appears to lie in the way polypeptide chains fold into larger structures. Xiang’s model shows that a subtle difference exists between the silks in structures called beta sheets and beta crystallites. This insight has allowed the team to model for the first time the way in which both silks break.

That’s important because being able to predict the mechanical properties of an organic material simply by studying its structure is going to be increasingly useful. It may even allow us to take a better stab at making spider-like silk synthetically for the first time.

Anybody with a 747, watch out.

Ref: arxiv.org/abs/0902.3518: Molecular Spring: From Spider Silk to Silkworm Silk

Liquid film motors finally explained

February 26th, 2009


Last year, a group of Iranian physicists made the extraordinary discovery that motors can be made of nothing more than a thin film of water sitting in a cell bathed in two perpendicular electric fields. The unexpected result of this setup is that the water begins to rotate. Divide the water into smaller cells and each rotates too.

The team at the Sharif University of Technology in Tehran have a number of fascinating videos of it in action. The videos raise an interesting question: the electric fields are static, so what’s making the water move?

Now Vlad Vladimirov at the University of York in the UK and a couple of droogs from Russia have delved into the hydrodynamics to work out what’s putting the oomph in this motor. The key turns out to be the scale on which the effect takes place.

They say the flow is generated at the edge of the cell where the electric field crosses the (dielectric) boundary between the water and the cell container. The change in field sets the water flowing along the boundary. Crucially, this flow is opposite on the other side of the cell and this is what sets up the circular flow.

Vladimirov and co point out that this effect can only happen in a thin film where effects such as viscosity and friction play a large role in the dynamics. In larger bodies of water, these effects become insignificant and the rotation stops. Which is why these motors have only ever been seen in thin films.

That has important implications because it shows how strongly such phenomena can depend on scale. In fact, liquid film motors may turn out to be game-changers for anybody involved in microfluidics.

Ref: arxiv.org/abs/0902.3733: Rotating Electrohydrodynamic Flow in a Suspended Liquid Film

Calculating the cost of dirty bombs

February 25th, 2009


One of the more frightening scenarios that civil defence teams worry about is the possibility that a bomb contaminated with radioactive material would be detonated in a heavily populated area.

Various research teams have considered this problem and come to similar conclusions: that the actual threat to human health from such a device is low. Some even claim that terror groups must have come to the same conclusion, which is why we’ve not been on the receiving end of such an attack. The panic such a device would cause is another question.

Today, Theodore Liolios from a private institution called the Hellenic Arms Control Center in Thessaloniki, Greece, goes through the figures.

He says the most likely material to be used in such an attack is Cesium-137, widely used throughout the developed and developing world as a source for medical therapies.

The unstated implication is that it would be relatively easy to get hold of this stuff from a poorly guarded hospital. Exactly this happened in Goiania in Brazil, when an abandoned hospital was broken into and its supply of cesium-137 was distributed around the surrounding neighbourhoods. The incident left more than 200 people contaminated. Four of them died.

But a dirty bomb would not be nearly as lethal. The trouble with dirty bombs (from a terrorist’s point of view, at least) is that distributing radioactive material over a large area dramatically reduces the exposure that people receive, particularly when most could be warned to stay indoors or evacuated (unlike the Goiania incident, in which most people were unaware they were contaminated).

Liolios calculates that anybody within 300 metres of a dirty bomb would see their lifetime risk of cancer mortality rise by about 1.5 per cent, and then only if they were unable to take shelter or leave the area. That’s about 280 people, given the kind of densities you expect in metropolitan areas.

And he goes on to say that it is reasonable to assume that a cesium-137 dirty bomb would not increase the cancer mortality risk for the entire city by a statistically significant amount.

But the terror such a device might cause is another question. Liolios reckons that current radiation safety standards would mean the evacuation of some 78 square kilometres around ground zero. That would affect some 78,000 people, cost $7.8m per day to evacuate and some $78m to decontaminate.
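The figures quoted above hang together as simple back-of-envelope arithmetic. Here is a sketch in Python; the uniform density of 1,000 people per square kilometre and the per-person and per-area costs are inferred from the post’s round numbers, not taken from Liolios’s paper:

```python
import math

# Reconstructing the post's figures as back-of-envelope arithmetic.
# The density and per-unit costs below are inferred from the round
# numbers quoted in the post, not taken from the paper itself.

density_per_km2 = 1_000              # implied by 78,000 people in 78 km^2

# Near field: everyone within 300 m gets ~1.5% extra lifetime cancer risk.
inner_area_km2 = math.pi * 0.3 ** 2                     # ~0.28 km^2
people_within_300m = density_per_km2 * inner_area_km2   # ~280 people

# Evacuation and clean-up of 78 km^2 around ground zero.
evac_area_km2 = 78
evac_people = evac_area_km2 * density_per_km2        # 78,000 evacuees
evac_cost_per_day = evac_people * 100                # $100/person/day -> $7.8m
decontamination_cost = evac_area_km2 * 1_000_000     # $1m/km^2 -> $78m

print(round(people_within_300m), evac_people,
      evac_cost_per_day, decontamination_cost)
```

On these assumptions the numbers are self-consistent: roughly 280 people inside the 300-metre zone, 78,000 evacuees, $7.8m per day and $78m for decontamination.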

That seems a tad conservative but, however it is calculated, it may turn out to be chickenfeed compared to the chaos caused by panic, which may well result in more deaths than the bomb itself could claim. How to calculate the effect of that?

Ref: arxiv.org/abs/0902.3789: The Effects of Using Cesium-137 Teletherapy Sources as a Radiological Weapon

The coming of age of hadrontherapy

February 24th, 2009


There’s a problem with conventional radiotherapy for tumours: the body absorbs the radiation as it passes through. So zap a deep-seated tumour with X-rays and the dose decreases exponentially with the depth of the target. This means that both diseased and healthy tissue end up being irradiated.

In 1946, Robert Wilson, the physicist who went on to found Fermilab near Chicago, pointed out that protons with an energy of between 200 and 250 MeV and carbon ions with an energy of 3500 to 4500 MeV behave in a different way. Inside the body, they tend to dump all their energy at the end of their range. Not only that, but because the particles are charged, they can be sharply focused. He said that makes these particles ideal for targeting tumours.
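Wilson’s contrast between photons and charged particles can be caricatured in a few lines of Python. This is a toy sketch: the attenuation coefficient, the 15 cm range and the Gaussian-shaped peak are illustrative assumptions, not clinical values:

```python
import math

def xray_dose(depth_cm, mu=0.05):
    """Photon beam: dose falls off exponentially with depth, so the
    tissue in front of the tumour absorbs most of the energy."""
    return math.exp(-mu * depth_cm)

def proton_dose(depth_cm, range_cm=15.0, width_cm=0.7):
    """Toy Bragg-peak profile: a low entrance plateau, then a sharp
    dose spike just before the end of the particle's range, then
    essentially nothing beyond it."""
    if depth_cm > range_cm:
        return 0.0
    return 0.3 + 0.7 * math.exp(-((range_cm - depth_cm) / width_cm) ** 2)

# For a tumour 15 cm deep, the proton beam deposits its peak dose at
# the tumour, while the X-ray beam has already lost most of its energy
# to healthy tissue on the way in.
tumour = 15.0
print(xray_dose(0.0), xray_dose(tumour))     # entrance dose > tumour dose
print(proton_dose(0.0), proton_dose(tumour)) # entrance dose < tumour dose
```

The qualitative shape is the point: the photon curve is monotonically decreasing, whereas the charged-particle curve peaks at the end of its range and then cuts off.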

It’s taken 60 years, but there is now real interest in hadrontherapy. Several thousand patients have been treated at various particle physics labs around the world. And therein lies the problem: the only places that can produce protons or carbon ions at these energies are specialised accelerators such as those at GSI in Germany and Chiba in Japan.

There is one facility dedicated to medical work at the Loma Linda University Medical Centre near Los Angeles but it has become clear to the medical and physics community that more are desperately needed.

So how do you build reliable, easy-to-use, affordable particle accelerators for medical centres? Enter Ugo Amaldi and buddies from the TERA Foundation in Italy, who today propose a design that they think could revolutionise this kind of treatment. (Amaldi was a hot shot at the European particle physics lab CERN for many years before turning to medical therapy.)

The idea is to build a cyclotron (a circular accelerator), bolt it to a linac (a linear accelerator) and call the hybrid a Cyclinac. Protons or carbon ions are first accelerated in the cyclotron and then fed into the linac, which boosts them to the appropriate energies for medical applications. The result is a therapeutic beam ready for action.

This design can also produce synchrotron x-rays for other medical applications.

Of course, there are several significant engineering challenges ahead, such as increasing the repetition rate of these devices to practical levels, something the Cyclinac design is intended to address.

The big question, of course, is not whether such a project will be funded (everyone seems to agree on that) but when. And then where to build it. Given the TERA Foundation’s location, what odds on Italy being host to the first of these devices in Europe?

Ref: arxiv.org/abs/0902.3533: Cyclinacs: Fast-Cycling Accelerators for Hadrontherapy

First evidence of a supernova in an ice core

February 23rd, 2009


There hasn’t been a decent supernova in our part of the universe in living memory but astronomers in the 11th century were a little more fortunate. In 1006 AD, they witnessed what is still thought to be the brightest supernova ever seen on Earth (SN 1006) and just 48 years later saw the birth of the Crab Nebula (SN 1054).

Our knowledge of these events comes from numerous written accounts, mainly by Chinese and Arabic astronomers (and of course from the observations we can make today of the resultant nebulae).

Now we can go one better. A team of Japanese scientists has found the first evidence of supernovae in an ice core.

The gamma rays from a nearby supernova ought to have a significant impact on our atmosphere, in particular by producing an excess of nitrogen oxide. This ought to have left its mark in the Earth’s ice record, so the team went looking for it in Antarctica.

The researchers took an ice core measuring 122 metres from Dome Fuji station, an inland site in Antarctica. At a depth of about 50 metres, corresponding to the 11th century, they found three nitrogen oxide spikes, two of which were 48 years apart and easily identifiable as belonging to SN 1006 and SN 1054. The cause of the third spike is not yet known.
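Those depth and date figures imply a simple deposition rate. A sketch of the arithmetic, assuming uniform accumulation over the past millennium (real ice compresses with depth, so this is only a consistency check on the post’s numbers, not the team’s dating method):

```python
# Consistency check on the post's depth-age figures, assuming a
# uniform accumulation rate. Real cores compress with depth, so this
# is an idealisation, not the paper's actual chronology.

depth_m = 50.0       # depth at which the spikes were found
years_ago = 1000.0   # the 11th century: roughly a millennium before drilling

rate_m_per_year = depth_m / years_ago            # ~5 cm of ice per year
spike_gap_years = 48                             # SN 1006 to SN 1054
spike_separation_m = rate_m_per_year * spike_gap_years

print(rate_m_per_year, spike_separation_m)       # 0.05 m/yr, spikes ~2.4 m apart
```

So at roughly 5 cm of ice per year, the two supernova spikes should sit about 2.4 metres apart in the core.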

That’s an impressive result, and their ice core was obligingly revealing about other major events in the Earth’s past. The team saw a 10-year variation in the background levels of nitrogen oxide, almost certainly caused by the 11-year solar cycle (an effect that has been seen before in ice cores). They also saw a number of sulphate spikes from known volcanic eruptions, such as Taupo, New Zealand, in 180 AD and El Chichon, Mexico, in 1260 AD.

The team speculate that the mysterious third spike may have been caused by another supernova, visible only from the southern hemisphere or hidden behind a cloud.

That would make the 11th century a truly bounteous time for supernovae. Of course, statistically, there ought to be a supernova every 50 years or so in a galaxy the size of the Milky Way, which means that the Antarctic ice is due another shower of nitrogen oxide any day now.

Ref: arxiv.org/abs/0902.3446: An Antarctic Ice Core Recording both Supernovae and Solar Cycles

Adding ‘n’ tangling

February 21st, 2009

The best of the rest from the physics arXiv this week:

Hydrogen Storage by Polylithiated Molecules and Nanostructures

Physical Properties of Biological Membranes

A Brief Overview of the Major Contribution to Physics by Landau

New Worlds: Evaluating Terrestrial Planets as Astrophysical Objects

A Recursive Threshold Visual Cryptography Scheme

Tradition Versus Fashion in Consumer Choice

A New, Enhanced Diamond Single Photon Emitter in the Near Infra-Red

Ptarithmetic: reinventing logic for the computer age

February 20th, 2009


In the last few years, a small group of logicians have attempted the ambitious task of re-inventing the discipline of formal logic.

In the past, logic has been thought of as the formal theory of “truth”. Truth plays an important role in our society and, as such, a formal theory is entirely laudable and worthy. But it is not always entirely useful.

The new approach is to reinvent logic as the formal theory of computability. The goal is to provide a systematic answer to the question “what is computable?” For obvious reasons, so-called computability logic may turn out to be much more useful.

To understand the potential of this idea, just think for a moment about the most famous of logical systems, Peano Arithmetic, better known to you and me as “arithmetic”.

The idea is for computability logic to do for computation what Peano Arithmetic does for natural numbers.

The role of numbers is played by solvable computations and the collection of basic operations that can be performed on these computations forms the logical vocabulary of the theory.

But there’s a problem. There can be a big difference between something that is computable and something that is efficiently computable, says Giorgi Japaridze, a logician at Villanova University in Pennsylvania and the inventor of computability logic.

So he is proposing a modification (and why not, it’s his idea): the system should allow only those computations that can be solved in polynomial time. He calls this polynomial time arithmetic, or ptarithmetic for short.

The promise is that Japaridze’s system of logic will prove as useful to computer scientists as arithmetic is for everybody else.

What isn’t clear is whether the logical vocabulary of ptarithmetic is well enough understood to construct a system of logic around and, beyond that, whether ptarithmetic makes a well-formed system of logic at all.

Those are big potential holes, which Japaridze may need some help to fill. Even he would have to admit that he’s been ploughing a lonely furrow on this one since he put forward the idea in 2003.

But the potential payoffs are huge, which makes this one of those high-risk, high-payoff projects that just might be worth pursuing.

Any volunteers?

Ref: arxiv.org/abs/0902.2969: Ptarithmetic