Archive for February, 2009

Spiders ‘n’ Mars

Saturday, February 28th, 2009

The best of the rest from the physics arXiv this week:

Discovery of the Arsenic Isotope

The Origin of the Universe as Revealed Through the Polarization of the Cosmic Microwave Background

Chemical Self Assembly of Graphene Sheets

Pricing Strategies for Viral Marketing on Social Networks

Scale Invariance, Bounded Rationality and Non-Equilibrium Economics

Why spiders’ silk is so much stronger than silkworms’

Friday, February 27th, 2009

Spider silk and silkworm silk are almost identical in chemical composition and microscopic structure. And yet spider silk is far tougher. “One strand of pencil thick spider silk can stop a Boeing 747 in flight,” say Xiang Wu and colleagues at the National University of Singapore. A pencil-thick strand of silkworm silk couldn’t. Why the difference?

Xiang and co say they’ve worked out why by successfully simulating the structure of both silks for the first time. Both spider silk and silkworm silk are made up of long chains of amino acids called polypeptides and are particularly rich in the amino acids alanine and glycine.

Various imaging techniques have shown that the sequences of amino acids differ slightly between spider and silkworm silk, but this alone cannot explain the huge difference in strength, says Xiang.

Instead, the secret appears to lie in the way polypeptide chains fold into larger structures. Xiang’s model shows that a subtle difference exists between the silks in structures called beta sheets and beta crystallites. This insight has allowed the team to model for the first time the way in which both silks break.

That’s important because being able to predict the mechanical properties of an organic material simply by studying its structure is going to be increasingly useful. It may even allow us to take a better stab at making spider-like silk synthetically for the first time.

Anybody with a 747, watch out.

Ref: arxiv.org/abs/0902.3518: Molecular Spring: From Spider Silk to Silkworm Silk

Liquid film motors finally explained

Thursday, February 26th, 2009

Last year, a group of Iranian physicists made the extraordinary discovery that a motor can be built from nothing more than a thin film of water sitting in a cell bathed in two perpendicular electric fields. The unexpected result of this setup is that the water begins to rotate. Divide the water into smaller cells and each of these rotates too.

The team at the Sharif University of Technology in Tehran have a number of fascinating videos of the effect in action. These raise an interesting question: the electric fields are static, so what’s making the water move?

Now Vlad Vladimirov at York University in the UK and a couple of droogs from Russia have delved into the hydrodynamics to work out what’s putting the oomph in this motor. The key turns out to be the scale on which the effect takes place.

They say the flow is generated at the edge of the cell where the electric field crosses the (dielectric) boundary between the water and the cell container. The change in field sets the water flowing along the boundary. Crucially, this flow is opposite on the other side of the cell, and this is what sets up the circular flow.

Vladimirov and co point out that this effect can only happen in a thin film where effects such as viscosity and friction play a large role in the dynamics. In larger bodies of water, these effects become insignificant and the rotation stops. Which is why these motors have only ever been seen in thin films.

That has important implications because it shows the scale dependency of important phenomena. In fact, liquid film motors may turn out to be game-changers for anybody involved in microfluidics.

Ref: arxiv.org/abs/0902.3733: Rotating Electrohydrodynamic Flow in a Suspended Liquid Film

Calculating the cost of dirty bombs

Wednesday, February 25th, 2009

One of the more frightening scenarios that civil defence teams worry about is the possibility that a bomb contaminated with radioactive material would be detonated in a heavily populated area.

Various research teams have considered this problem and come to similar conclusions–that the actual threat to human health from such a device is low. Some even claim that terror groups must have come to a similar conclusion, which is why we’ve not been on the receiving end of such an attack. The panic such a device would cause is another matter.

Today, Theodore Liolios from a private institution called the Hellenic Arms Control Center in Thessaloniki, Greece, goes through the figures.

He says the most likely material to be used in such an attack is Cesium-137, widely used throughout the developed and developing world as a source for medical therapies.

The unstated implication is that it would be relatively easy to get hold of this stuff from a poorly guarded hospital. Exactly this happened in Goiania, Brazil, in 1987, when an abandoned hospital was broken into and its supply of cesium-137 was distributed around the surrounding neighbourhoods. The incident left some 200 people contaminated. Four of them died.

But a dirty bomb would not be nearly as lethal. The trouble with these devices (from a terrorist’s point of view, at least) is that distributing radioactive material over a large area dramatically reduces the exposure that people receive, particularly when most of them can be warned to stay indoors or be evacuated (unlike the Goiania incident, in which most people were unaware they were contaminated).

Liolios calculates that anybody within 300 metres of a dirty bomb would see their lifetime risk of cancer mortality increase by about 1.5 per cent, and then only if they were unable to take shelter or leave the area. That’s about 280 people, given the kind of population densities you’d expect in metropolitan areas.

And he goes on to say that it is reasonable to assume that a cesium-137 dirty bomb would not increase the cancer mortality risk for the entire city by a statistically significant amount.

But the terror such a device might cause is another question. Liolios reckons that current radiation safety standards would mean the evacuation of some 78 square kilometres around ground zero. That would affect some 78,000 people, cost $7.8m per day to evacuate and some $78m to decontaminate.
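Those round numbers hang together if you assume a uniform population density of roughly 1,000 people per square kilometre, about $100 per person per day to evacuate and about $1m per square kilometre to decontaminate. Here is a back-of-envelope sketch; the density and unit costs are my own illustrative assumptions, not figures taken from Liolios’s paper:

```python
import math

# Illustrative assumptions (not from the paper)
density_per_km2 = 1000          # people per square kilometre
evac_cost_per_person_day = 100  # dollars per person per day
decon_cost_per_km2 = 1e6        # dollars per square kilometre

# People within 300 m of the detonation point
exposed_area_km2 = math.pi * 0.3 ** 2               # ~0.28 km^2
exposed_people = density_per_km2 * exposed_area_km2
print(f"people within 300 m: ~{exposed_people:.0f}")  # ~280

# Evacuating a 78 km^2 zone
evac_area_km2 = 78
evacuated = density_per_km2 * evac_area_km2           # 78,000 people
daily_cost = evacuated * evac_cost_per_person_day     # $7.8m per day
decon_cost = evac_area_km2 * decon_cost_per_km2       # $78m
print(f"evacuated: {evacuated:,.0f}, daily cost: ${daily_cost/1e6:.1f}m, "
      f"decontamination: ${decon_cost/1e6:.0f}m")
```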

That seems a tad conservative but, however it is calculated, it may turn out to be chickenfeed compared to the chaos caused by panic, which may well result in more deaths than the bomb itself could claim. How to calculate the effect of that?

Ref: arxiv.org/abs/0902.3789: The Effects of Using Cesium-137 Teletherapy Sources as a Radiological Weapon

The coming of age of hadrontherapy

Tuesday, February 24th, 2009

There’s a problem with conventional radiotherapy for tumours: the body absorbs the radiation as it passes through. So zap a deep-seated tumour with X-rays and the dose decreases exponentially with the depth of the target. This means that both diseased and healthy tissue end up being irradiated.

In 1946, the physicist Robert Wilson (who later became the founding director of Fermilab near Chicago) pointed out that protons with an energy of between 200 and 250 MeV and carbon ions with an energy of 3500 to 4500 MeV behave in a different way. Inside the body, they tend to dump most of their energy at the very end of their range, in what is known as the Bragg peak. Not only that, but because the particles are charged, they can be sharply focused. He said that makes these particles ideal for targeting tumours.
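The contrast shows up in a toy depth-dose comparison: X-ray dose falls off roughly exponentially with depth, while a proton delivers most of its dose in a narrow peak near the end of its range. This is a minimal sketch only; the attenuation coefficient, proton range and peak shape below are made-up illustrative numbers, not clinical data and not taken from the paper:

```python
import numpy as np

MU = 0.05        # assumed X-ray attenuation coefficient in tissue, 1/cm (illustrative)
RANGE_CM = 25.0  # assumed proton range in tissue, cm (illustrative)

def xray_dose(depth_cm):
    """Relative X-ray dose: decays roughly exponentially with depth."""
    return np.exp(-MU * np.asarray(depth_cm, dtype=float))

def proton_dose(depth_cm):
    """Toy Bragg curve: modest entrance dose, a sharp peak near the end of the
    range (~(R - x)^-1/2 before the peak), essentially nothing beyond it."""
    d = np.asarray(depth_cm, dtype=float)
    raw = np.where(d < RANGE_CM, 1.0 / np.sqrt(np.clip(RANGE_CM - d, 0.5, None)), 0.0)
    return raw / raw.max()

depths = np.linspace(0, 30, 7)
for z, x, p in zip(depths, xray_dose(depths), proton_dose(depths)):
    print(f"{z:5.1f} cm   X-ray {x:.2f}   proton {p:.2f}")
```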

It’s taken 60 years, but there is now real interest in hadrontherapy. Several thousand patients have been treated at various particle physics labs around the world. And therein lies the problem: the only places that can produce protons or carbon ions with these energies are specialised accelerators such as those at GSI in Germany and Chiba in Japan.

There is one facility dedicated to medical work, at the Loma Linda University Medical Centre near Los Angeles, but it has become clear to the medical and physics communities that more are desperately needed.

So how do you build reliable, easy-to-use, affordable particle accelerators for medical centres? Enter Ugo Amaldi and buddies from the TERA Foundation in Italy, who today propose a design that they think could revolutionise this kind of treatment. (Amaldi was a hot shot at the European particle physics lab CERN for many years before turning to medical therapy.)

The idea is to build a cyclotron (a circular accelerator), bolt it to a linac (a linear accelerator) and call the hybrid a Cyclinac. Protons or carbon ions are injected into the cyclotron, which accelerates them, and are then fed into the linac, which boosts them to the energies needed for medical applications. The result is a therapeutic beam ready for action.

This design can also produce synchrotron x-rays for other medical applications.

Of course, there are several significant engineering challenges ahead, such as increasing the repetition rate of these devices to practical levels, something that Cyclinacs are being designed to address.

The big question, of course, is not whether such a project will be funded (everyone seems to agree on that) but when. And then where to build it. Given the TERA Foundation’s location, what odds on Italy being host to the first of these devices in Europe?

Ref: arxiv.org/abs/0902.3533: Cyclinacs: Fast-Cycling Accelerators for Hadrontherapy

First evidence of a supernova in an ice core

Monday, February 23rd, 2009

There hasn’t been a decent supernova in our part of the universe in living memory but astronomers in the 11th century were a little more fortunate. In 1006 AD, they witnessed what is still thought to be the brightest supernova ever seen on Earth (SN 1006) and just 48 years later saw the birth of the Crab Nebula (SN 1054).

Our knowledge of these events comes from numerous written accounts, mainly by Chinese and Arabic astronomers (and, of course, from the observations we can make today of the resultant nebulae).

Now we can go one better. A team of Japanese scientists has found the first evidence of supernovae in an ice core.

The gamma rays from a nearby supernova ought to have a significant impact on our atmosphere, in particular by producing an excess of nitrogen oxide. This ought to have left its mark in the Earth’s ice history, so the team went looking for it in Antarctica.

The researchers took an ice core measuring 122 metres from Dome Fuji station, an inland site in Antarctica. At a depth of about 50 metres, corresponding to the 11th century, they found three nitrogen oxide spikes, two of which were 48 years apart and easily identifiable as belonging to SN 1006 and SN 1054. The cause of the third spike is not yet known.
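The depth-to-date conversion is rough-and-ready arithmetic if the snow accumulation rate stays reasonably constant: about 50 metres of ice laid down over the thousand years or so back to the 11th century works out at roughly 5 centimetres per year. Here is a minimal sketch of that reading of the numbers; the constant rate and the reference year are my own assumptions for illustration, and the team’s actual dating will be far more careful:

```python
# Rough depth-to-age conversion for the Dome Fuji core, assuming a constant
# accumulation rate. Both numbers below are illustrative assumptions.
ACCUMULATION_M_PER_YEAR = 0.05   # ~5 cm of ice per year
SURFACE_YEAR = 2000              # approximate year of the core's top layer

def depth_to_year(depth_m):
    return SURFACE_YEAR - depth_m / ACCUMULATION_M_PER_YEAR

print(depth_to_year(50))             # ~1000 AD, i.e. around the 11th century
print(48 * ACCUMULATION_M_PER_YEAR)  # SN 1006 and SN 1054 spikes ~2.4 m apart
```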

That’s an impressive result, and the ice core was obligingly revealing about other major events in the Earth’s past. The team saw a 10-year variation in the background levels of nitrogen oxide, almost certainly caused by the 11-year solar cycle (an effect that has been seen before in ice cores). They also saw a number of sulphate spikes from known volcanic eruptions, such as Taupo, New Zealand, in 180 AD and El Chichon, Mexico, in 1260 AD.

The team speculate that the mysterious third spike may have been caused by another supernova, visible only from the southern hemisphere or hidden behind a cloud.

That would make the 11th century a truly bounteous time for supernovae. Of course, statistically, there ought to be a supernova every 50 years or so in a galaxy the size of the Milky Way, which means that the Antarctic ice is due another shower of nitrogen oxide any day now.
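Taking that one-every-50-years figure at face value (and ignoring the fact that many galactic supernovae are too obscured by dust to be seen from Earth), a naive Poisson estimate of my own gives a feel for how often a single century should host this many events:

```python
from math import exp, factorial

rate_per_year = 1 / 50        # assumed galactic supernova rate (one every ~50 years)
lam = rate_per_year * 100     # expected events in a century = 2.0

p = [exp(-lam) * lam**k / factorial(k) for k in range(3)]
print(f"P(at least 2 in a century) = {1 - p[0] - p[1]:.2f}")   # ~0.59
print(f"P(at least 3 in a century) = {1 - sum(p):.2f}")        # ~0.32
```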

Ref: arxiv.org/abs/0902.3446: An Antarctic Ice Core Recording both Supernovae and Solar Cycles

Adding ‘n’ tangling

Saturday, February 21st, 2009

The best of the rest from the physics arXiv this week:

Hydrogen Storage by Polylithiated Molecules and Nanostructures

Physical Properties of Biological Membranes

A Brief Overview of the Major Contribution to Physics by Landau

New Worlds: Evaluating Terrestrial Planets as Astrophysical Objects

A Recursive Threshold Visual Cryptography Scheme

Tradition Versus Fashion in Consumer Choice

A New, Enhanced Diamond Single Photon Emitter in the Near Infra-Red

Ptarithmetic: reinventing logic for the computer age

Friday, February 20th, 2009

In the last few years, a small group of logicians have attempted the ambitious task of re-inventing the discipline of formal logic.

In the past, logic has been thought of as the formal theory of “truth”. Truth plays an important role in our society and, as such, a formal theory of it is entirely laudable and worthy. But it is not always entirely useful.

The new approach is to reinvent logic as the formal theory of computability. The goal is to provide a systematic answer to the question “what is computable?”. For obvious reasons, so-called computability logic may turn out to be much more useful.

To understand the potential of this idea, just think for a moment about the most famous of logical systems, Peano Arithmetic, better known to you and me as “arithmetic”.

The idea is for computability logic to do for computation what Peano Arithmetic does for natural numbers.

The role of numbers is played by solvable computations and the collection of basic operations that can be performed on these computations forms the logical vocabulary of the theory.

But there’s a problem. There can be a big difference between something that is computable and something that is efficiently computable, says Giorgi Japaridze, a logician at Villanova University in Pennsylvania and the inventor of computability logic.

So he is proposing a modification (and why not, it’s his idea)–the system should allow only those computations that can be solved in polynomial time. He calls this polynomial time arithmetic or ptarithmetic for short.

The promise is that Japaridze’s system of logic will prove as useful to computer scientists as arithmetic is for everybody else.

What isn’t clear is whether the logical vocabulary of ptarithmetic is well enough understood to construct a system of logic around and, beyond that, whether ptarithmetic makes a well-formed system of logic at all.

Those are big potential holes which Japaridze may need some help to fill–even he would have to admit that he’s been ploughing a lonely furrow on this one since he put forward the idea in 2003.

But the potential payoffs are huge, which makes this one of those high-risk, high-payoff projects that just might be worth pursuing.

Any volunteers?

Ref: arxiv.org/abs/0902.2969: Ptarithmetic

Human eye could detect spooky action at a distance

Thursday, February 19th, 2009

It’s almost a year since Nicolas Gisin and colleagues at the University of Geneva announced that they had calculated that a human eye ought to be able to detect entangled photons. “Entanglement in principle could be seen,” they concluded.

That’s extraordinary because it would mean that the humans involved in such an experiment would become entangled themselves, if only for an instant.

Gisin is a world leader in quantum entanglement and his claims are by no means easy to dismiss.

Now he’s going a step further, saying that the human eye could be used in a Bell-type experiment to sense spooky action at a distance. “Quantum experiments with human eyes as detectors appear possible, based on a realistic model of the eye as a photon detector,” they say.

One problem is that human eyes cannot see single photons–a handful are needed to trigger a nerve impulse to the brain.

That might have scuppered the possibility of a Bell-type experiment were it not for some interesting work from Francesco De Martini and buddies at the University of Rome, pointing out how the quantum properties of a single particle can be transferred to an ensemble of particles.

That allows a single entangled photon, which a human eye cannot see, to be amplified into a number of entangled photons that can be seen. The eye can then be treated like any other detector.

This all looks like fun. The first person to experience entanglement–mantanglement–would surely be destined for some interesting press coverage.

But the work raises an obvious question: why is Gisin pursuing this line? The human eyeball could be put to use in plenty of optics experiments, so why the focus on mantanglement?

Could it be that Gisin thinks there is more to entanglement than meets the eye?

Ref: arxiv.org/abs/0902.2896: Quantum experiments with human eyes as detectors based on cloning via stimulated emission

The puzzle of planet formation

Wednesday, February 18th, 2009

“The formation of planets is one of the major unsolved problems in modern astrophysics.” That’s how Rafael Millan-Gabet at Caltech and John Monnier from the University of Michigan begin their account of how our understanding of planet formation is about to undergo a revolution.

Driving this change will be a new generation of telescopes and techniques capable of measuring, and in some cases imaging, planet formation in action.

It’s worth pointing out the poverty of our current understanding. At the heart of the problem is the fascinating question: why are all the planets different?

The ones in our solar system ought to have formed out of the same stuff at more or less the same time and yet no two are alike. And now the extrasolar planets seem to be demonstrating a similar variety.

The trouble is that astronomers have only the vaguest understanding of what goes on inside the circumstellar discs where planets are supposed to form. They have little idea of the circumstances in which accretion dominates over gravitational instability, whether “dead zones” exist in circumstellar discs where planets cannot form or what mechanisms are at work in transporting angular momentum within early solar systems. They don’t even know when planets form.

The new measurements that will be possible in the coming years should help to answer at least some of these puzzles. And that makes this an exciting field to be in. Watch this space for developments.

Ref: arxiv.org/abs/0902.2576: How and When do Planets Form? The Inner Regions of Planet Forming Disks at High Spatial and Spectral Resolution