Archive for the ‘Changin’ the world’ Category

How fast could Usain Bolt have run the 100m?

Wednesday, September 3rd, 2008

You’ll probably have seen footage of “the greatest 100m performance in the history of the event”, as Michael Johnson put it. But if not, here’s a short description:

In the Olympic final of the 100 metres in Beijing, the Jamaican sprinter Usain Bolt accelerated away from the field and then, with victory assured but with 20 metres still to run, started waving his arms and high-stepping in celebration. When he eventually crossed the line ahead of his rivals, he appeared to be dancing.

Despite easing up, Bolt broke the world record by 0.03 seconds in a time of 9.69 seconds. But his performance raised the question: how fast could he have run?

Bolt’s coach later said he could have done it in 9.52 seconds. Perhaps.

To settle the question, Hans Kristian Eriksen and colleagues at the University of Oslo decided to apply a little physics.

They first used video footage of the race to work out the acceleration profiles of Bolt and of the silver medallist Richard Thompson over the 100 metres.

Bolt clearly slows down towards the end of the race. So the Norwegian team calculated what Bolt’s time would have been had he matched Thompson’s acceleration in the final 20 metres. The answer is a stunning 9.61 seconds.

But they also say he could have done better. Had Bolt outpaced Thompson’s acceleration by 0.5 m/s^2, as he did in the earlier part of the race, the Norwegians say he could have run 9.55 seconds (plus or minus 0.04 seconds). That makes the coach’s assessment look eminently achievable.

Perhaps he knows something we don’t.
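For a feel of the arithmetic behind these what-if numbers, here is a minimal kinematic sketch in Python. It simply asks how long the final 20 metres take under different constant accelerations, starting from a hypothetical 80-metre split; it is not the Oslo group’s method, which fits acceleration profiles to frame-by-frame video data, and the printed times are illustrative rather than the paper’s results.

```python
import math

# Rough kinematic sketch, NOT the Oslo group's analysis. The 80 m split
# time and speed are hypothetical, chosen only to illustrate the arithmetic.

t_80m = 8.0    # hypothetical time at the 80 m mark (s)
v_80m = 11.8   # hypothetical speed at the 80 m mark (m/s)

def time_for_final_20m(v0, a):
    """Time to cover the last 20 m from speed v0 under constant acceleration a,
    solving 20 = v0*t + 0.5*a*t**2 for t."""
    if abs(a) < 1e-9:
        return 20.0 / v0
    return (-v0 + math.sqrt(v0**2 + 2 * a * 20.0)) / a

for label, a in [("easing up", -0.8), ("holding speed", 0.0), ("pushing on", 0.5)]:
    print(f"{label:>13}: {t_80m + time_for_final_20m(v_80m, a):.2f} s")
```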

Ref: arxiv.org/abs/0809.0209: Velocity Dispersions in a Cluster of Stars: How Fast Could Usain Bolt Have Run?

Orbiting observatory finds dark matter, but what kind?

Monday, September 1st, 2008

The world of cosmology is abuzz with rumours that an orbiting observatory called PAMELA has discovered dark matter. Last month, the PAMELA team gave a few selected physicists a sneak preview of their results at a conference in Stockholm.

Here’s the deal. The PAMELA people say their experiment has seen more positrons than can be explained by known physics and that this excess exactly matches what dark matter particles would produce if they were annihilating each other at the centre of the galaxy.

What makes this particularly exciting is that other orbiting observatories have also seen similar, but less clear cut, evidence of dark matter annihilations.

Since then, the shutters have come down. With the prospect of a major discovery on their hands and publication in a leading journal at stake, the team has closed ranks to re-analyse their data and prepare it for exclusive publication. Not a word has leaked from the PAMELA team since their preliminary announcement.

That hasn’t stopped physicists speculating for themselves. Today Marco Cirelli from the CEA near Paris in France and Alessandro Strumia from the Università di Pisa in Italy present their own analysis of the PAMELA data.

Cosmologists have long speculated on the nature of dark matter and dreamt up all manner of models and particles to explain it. The big question is which type of particle the PAMELA data points towards.

Today, Cirelli and Strumia stake their own claim. They say the data agrees with their own model, called Minimal Dark Matter, in which the particle responsible is called the “wino” (no, it really is called the wino).

But given the PAMELA team’s reluctance to publish just yet, where did Cirelli and Strumia get the data? The answer is buried in a footnote in their paper.

“The preliminary data points for positron and antiproton fluxes plotted in our figures have been extracted from a photo of the slides taken during the talk, and can thereby slightly differ from the data that the PAMELA collaboration will officially publish.”

Can’t fault them for initiative.

Ref: arxiv.org/abs/0808.3867: Minimal Dark Matter Predictions and the PAMELA Positron Excess

Do nuclear decay rates depend on our distance from the sun?

Friday, August 29th, 2008

Here’s an interesting conundrum involving nuclear decay rates.

We think that the decay rates of elements are constant regardless of the ambient conditions (except in a few special cases where beta decay can be influenced by powerful electric fields).

So that makes it hard to explain the curious periodic variations in the decay rates of silicon-32 and radium-226 observed by groups at the Brookhaven National Laboratory in the US and at the Physikalisch-Technische Bundesanstalt in Germany in the 1980s.

Today, the story gets even more puzzling. Jere Jenkins and pals at Purdue University in Indiana have re-analysed the raw data from these experiments and say that the modulations are synchronised with each other and with Earth’s distance from the sun. (Both groups, in acts of selfless dedication, measured the decay rates of silicon-32 and radium-226 over a period of many years.)

In other words, there appears to be an annual variation in the decay rates of these elements.
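To make that concrete, the claim boils down to a correlation test of the sort sketched below. This is not Jenkins and co’s analysis: the decay-rate series here is synthetic, and the only point is to show how one might check a set of measured rates against an annual 1/R² template built from Earth’s orbital distance.

```python
import numpy as np

# Sketch of a correlation check -- not Jenkins et al.'s actual analysis.
# A synthetic daily decay-rate series is fabricated purely to show the mechanics.

days = np.arange(0, 4 * 365)                            # four years of daily data
e = 0.0167                                              # Earth's orbital eccentricity
# Earth-Sun distance in AU (simple eccentric-orbit approximation,
# perihelion taken near day 3, i.e. early January)
r = 1 - e * np.cos(2 * np.pi * (days - 3) / 365.25)
inv_r2 = 1.0 / r**2                                     # candidate driver: solar flux ~ 1/R^2

rng = np.random.default_rng(0)
rates = 1 + 1.5e-3 * (inv_r2 - inv_r2.mean()) / inv_r2.std() \
          + 1e-3 * rng.standard_normal(days.size)       # synthetic "measured" rates

# Pearson correlation between the rates and the 1/R^2 template
print(f"correlation with 1/R^2: {np.corrcoef(rates, inv_r2)[0, 1]:.2f}")
```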

Jenkins and co put forward two theories to explain why this might be happening.

First, they say a theory developed by John Barrow at the University of Cambridge in the UK and Douglas Shaw at the University of London suggests that the sun produces a field that changes the value of the fine structure constant on Earth as the planet’s distance from the sun varies during each orbit. Such an effect would certainly cause the kind of annual variation in decay rates that Jenkins and co highlight.

Another idea is that the effect is caused by some kind of interaction with the neutrino flux from the sun’s interior, which could be tested by carrying out the measurements close to a nuclear reactor (which would generate its own powerful neutrino flux).

It turns out that the notion that nuclear decay rates are constant has been under attack for some time. Jenkins says that in 2006 the decay rate of manganese-54 in their lab decreased dramatically during a solar flare on 13 December.

And numerous groups disagree over the decay rate for elements such as titanium-44, silicon-32 and cesium-137. Perhaps they took their data at different times of the year.

Keep ’em peeled because we could hear more about this. Interesting stuff.

Ref: arxiv.org/abs/0808.3283: Evidence for Correlations Between Nuclear Decay Rates and Earth-Sun Distance

How to measure macroscopic entanglement

Monday, August 18th, 2008

Macroscopic entanglement

If macroscopic objects become entangled, how can we tell? The usual way to measure entanglement on the microscopic level is to carry out a Bell experiment, in which the quantum states of two particles are measured.  If the results of these measurements fall within certain bounds, the particles are considered to be entangled.

These kinds of quantum measurements are not possible with macroscopic bodies but recent work suggests there may be other ways to spot entanglement. Vlatko Vedral at the University of Leeds and pals outline one of these on the arXiv.

Their idea is based on the third law of thermodynamics which states that the entropy at absolute zero is dependent only on the degeneracy of the ground state. This in turn implies that the specific heat capacity of a material must asymptotically approach zero as the temperature gets closer to absolute zero. But if particles within the material were entangled, Vedral and pals say this would not be the case.
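The missing step in that argument is the standard thermodynamic relation between entropy and heat capacity: the entropy gained on warming from absolute zero is an integral of C/T, so a heat capacity that failed to vanish would make the integral diverge at the lower limit.

```latex
% Why the third law forces the heat capacity to zero: the entropy change
% from absolute zero is an integral of C(T')/T', which only stays finite
% if C(T') -> 0 as T' -> 0.
S(T) - S(0) = \int_0^T \frac{C(T')}{T'}\, dT'
```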

That kind of thinking suggests a straightforward experiment: simply measure the heat capacity of a material as its temperature drops to zero. If it doesn’t asymptotically approach zero, then you’ve got some entanglement on your hands.

Best of all, measuring heat capacity is a standard technique, so there’s no reason this can’t be done pronto.

Ref: arxiv.org/abs/quant-ph/0508193: Heat Capacity as A Witness of Entanglement

The prophetic promise of category theory

Monday, August 11th, 2008

When it comes to  creating the final theory of everything, physicists have an ever broadening (and bewildering) choice of mathematical tricks with which to tackle the mysteries of the universe.

A couple of years ago, random matrix theory cropped up as a potential framework for a new kind of science. And a fascinating idea it is too.

Talking of a new kind of science, what cellular automata can’t do isn’t worth knowing, or so we’re told.

Then there is the non-compact real form of the E8 Lie algebra, a surfer’s dream of a final theory.

Today there’s a new kid on the block called category theory, a kind of stripped down, souped up group theory that has been taking mathematics by storm since it was invented in the 1940s.

Its chief claim is that it has become a hugely powerful tool for unifying concepts in mathematics and so is obviously going to do the same for physics. (That kind of reasoning may not be as madcap as it sounds.)

Chief among the category theory evangelists is Bob Coecke at the University of Oxford, who today publishes a kind of idiot’s guide to category theory for physicists, to give any interested parties a taste of the field.

“Category theory should become a part of the daily practice of the physicist,” argues Coecke. “The reason for this is not that category theory is a better way of doing mathematics, but that monoidal categories constitute the actual algebra of practicing physics.”

For an old dog, this will surely be a trick too far. But for any whippersnappers out there, it looks intriguing, no?

Ref: arxiv.org/abs/0808.1032: Introducing Categories to the Practicing Physicist

Schroedinger-like PageRank wave equation could revolutionise web rankings

Thursday, August 7th, 2008

The PageRank algorithm that first set Google on a path to glory measures the importance of a page in the world wide web.  It’s fair to say that an entire field of study has grown up around the analysis of its behaviour.

That field looks set for a shake up following the publication today of an entirely new formulation of the problem of ranking web pages. Nicola Perra at the University of Cagliari in Italy and colleagues have discovered that when they re-arrange the terms in the PageRank equation the result is a Schroedinger-like wave equation.

So what, I hear you say, that’s just a gimmick. Perhaps, but the significance is that it immediately allows the entire mathematical machinery of quantum mechanics to be brought to bear on the problem–that’s 80 years of toil and sweat.

Perra and pals point out some of the obvious advantages and disadvantages of the new formulation.

First, every webpage has a quantum-like potential. The topology of this potential gives the spatial distribution of PageRank throughout the web. What’s more, this distribution can be calculated in a straightforward way which does not require iteration as the conventional PageRank algorithm does.
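For reference, the conventional calculation being sidestepped is a simple power iteration. Here is a hedged sketch on a made-up four-page web; the link graph, damping factor and tolerance are purely illustrative and nothing here comes from the paper.

```python
import numpy as np

# Minimal sketch of the conventional iterative (power-method) PageRank --
# the baseline that the wave-equation formulation avoids. The four-page
# link graph below is invented for illustration.

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # page -> pages it links to
n, d = 4, 0.85                                    # number of pages, damping factor

rank = np.full(n, 1.0 / n)                        # start from a uniform ranking
for _ in range(100):                              # iterate until the ranking settles
    new = np.full(n, (1 - d) / n)
    for page, outlinks in links.items():
        for target in outlinks:
            new[target] += d * rank[page] / len(outlinks)
    if np.abs(new - rank).sum() < 1e-10:
        break
    rank = new

print(np.round(rank, 3))                          # steady-state PageRank vector
```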

So the PageRank can be calculated much more quickly for relatively small webs and the team has done a simple analysis of the PageRanking of the .eu domain in this way. However, Perra admits that the iterative method would probably be quicker when working with the tens of billions of pages that make up the entire web.

But the great promise of this Schroedinger-like approach is something else entirely. What the wave equation allows is a study of the dynamic behaviour of PageRanking, how the rankings change and under what conditions.

One of the key tools for this is perturbation theory. It’s no exaggeration to say that perturbation theory revolutionised our understanding of the universe when it was applied to quantum theory in the 1920s and 1930s.

The promise is that it could do the same to our understanding of the web and if so, this field is in for an interesting few years ahead.

Ref: arxiv.org/abs/0807.4325: Schroedinger-like PageRank equation and localization in the WWW

Quantum communication: when 0 + 0 is not equal to 0

Tuesday, August 5th, 2008

One of the lesser known cornerstones of modern physics is Claude Shannon’s mathematical theory of communication which he published in 1948 while juggling and unicycling his way around Bell Labs.

Shannon’s theory concerns how a message created at one point in space can be reproduced at another point in space. He calls the conduit for such a process a channel and the limits imposed by the universe on this process the channel capacity.

The capacity of a communications channel is a hugely important idea. It tells you, among other things, the rate at which you can send information from one location to another without loss. If you’ve ever made a phone call, watched television or surfed the internet, you’ll have benefited from the work associated with this idea.
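For the classical case, Shannon pinned the capacity down exactly: it is the maximum mutual information between the channel’s input and output over all input distributions, which for the textbook band-limited noisy channel reduces to the familiar logarithmic formula (B is the bandwidth, S/N the signal-to-noise ratio).

```latex
% Classical channel capacity (Shannon, 1948): the maximum mutual
% information between input X and output Y over all input distributions,
C = \max_{p(x)} I(X;Y)
% which, for a band-limited channel of bandwidth B and signal-to-noise
% ratio S/N, becomes the well-known
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```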

In recent years, our ideas about communication have been transformed by the possibility of using quantum particles to carry information. When that happens, the strange rules of quantum mechanics govern what can and cannot be sent from one region of space to another. This kind of thinking has spawned the entirely new fields of quantum communication and quantum computing.

But ask a physicist what the capacity is of a quantum information channel and she’ll stare at the floor and shuffle her feet. Despite years of trying, nobody has been able to update Shannon’s theory of communication with a quantum version.

Which is why a paper today on the arXiv is so exciting. Graeme Smith at the IBM Watson Research Center in Yorktown Heights NY (a lab that has carried the torch for this problem) and Jon Yard from Los Alamos National Labs have made what looks to be an important breakthrough by calculating that two zero-capacity quantum channels can have a nonzero capacity when used together.
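Written out, with Q denoting the quantum capacity of a channel, the result says that two channels which are each individually useless for sending quantum information need not be useless together:

```latex
% Smith and Yard's result in symbols: zero-capacity channels that are
% jointly useful for transmitting quantum information.
Q(\mathcal{N}_1) = Q(\mathcal{N}_2) = 0
\qquad \text{yet} \qquad
Q(\mathcal{N}_1 \otimes \mathcal{N}_2) > 0
```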

That’s interesting because it indicates that physicists may have been barking up the wrong tree with this problem: perhaps the quantum capacity of a channel does not uniquely specify its ability for transmitting quantum information. And if not, what else is relevant?

That’s going to be a stepping stone to some interesting new thinking in the coming months and years. Betcha!

Ref: arxiv.org/abs/0807.4935: Quantum Communication With Zero-Capacity Channels

Push ‘n’ shove

Saturday, August 2nd, 2008

The best of the rest from the physics arxiv:

The NASA EPOXI Mission of Opportunity to Gather Ultraprecise Photometry of Known Transiting Exoplanets

EconoThermodynamics, or the World Economy “Thermal Death” Paradox

How the Surrounding Water Changes the Electronic and Magnetic Properties of DNA

Traffic by Small Teams of Molecular Motors

IceCube: A Cubic Kilometer Radiation Detector

The Cosmology of the Divine Comedy

Dark energy and the bitterest pill

Monday, July 14th, 2008

 Copernican principle

It’s hard to get your head around dark energy, this universe-accelerating stuff that is supposed to fill the cosmos. Dark energy was invented to explain measurements that seem to show the most distant supernovas accelerating away from us. The thinking is that something must be pushing them away, and that stuff is dark energy.

But for many astrophysicists, dark energy is a difficult pill to swallow. It requires the universe to be fine tuned in a previously unexpected, and frankly, unimaginable way.

So astronomers have begun a systematic investigation of all the assumptions on which the notion of dark energy depends. Nothing is sacrosanct in this hunt–these guys are tearing up the floorboards in the search for an alternative hypothesis. And that means revisiting some of our most fundamental assumptions.

One of these is the Copernican principle, the idea that the universe is more or less the same wherever you happen to be. Principles don’t come much more fundamental than this, but the evidence in its favour, at least on the scales at which dark energy seems to operate, is pretty thin.

In fact, a number of theorists have calculated that the supernova data can be explained without the need for dark energy if our local environment were emptier than the universe as a whole. But to make this idea work, the earth must be sitting in the middle of a void that is roughly the size of the observable universe and that’s not compatible with the Copernican principle, not by a long shot.

Now Timothy Clifton and pals at the University of Oxford in the UK have worked out how to tell whether such a void exists. They say that the next round of highly accurate measurements of nearby supernovas should be able to tell us whether we’re in a void or not. So we shouldn’t have long to wait.

Either way, astronomers will find it hard to settle that troubling sensation in the pit of their stomachs. The truth is that when it comes to swallowing uncomfortable ideas, dark energy may turn out to be a sugar-coated doughnut compared to a rejection of the Copernican principle.

Ref: arxiv.org/abs/0807.1443: Living in a Void: Testing the Copernican Principle with Distant Supernovae

First X-ray diffraction image of a single virus

Friday, June 20th, 2008

Virus x-ray

X-ray crystallography has been a workhorse technique for chemists since the 1940s and 50s. For many years, it was the only way to determine the 3D structure of complex biological molecules such as haemoglobin, DNA and insulin. Many a Nobel prize has been won poring over diffraction images with a magnifying glass.

But x-ray crystallography has a severe limitation: it only works with molecules that form crystals, and those turn out to be a tiny fraction of the proteins that make up living things.

So for many years scientists have searched in vain for a technique that can image single molecules in 3D with the resolution, utility and cost-effectiveness of x-ray diffraction.

That search might now be over. Today, John Miao at the University of California, Los Angeles, makes the claim that he and his team have taken the first picture of a single unstained virus using a technique called x-ray diffraction microscopy. Until now this kind of imaging has only been done with micrometre-sized objects.

Miao’s improvement comes from taking a diffraction pattern of the virus and then subtracting the diffraction pattern of its surroundings. The resolution of his images is a mere 22 nanometres, an improvement of three orders of magnitude.
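In outline, that subtraction step might look like the sketch below, assuming the two diffraction patterns have been recorded as 2-D arrays. The file names are hypothetical, and this is not Miao’s actual pipeline, which also relies on iterative phase retrieval to turn the corrected pattern into an image.

```python
import numpy as np

# Sketch of the background-subtraction step described above -- not Miao's
# full pipeline. The file names are hypothetical.

sample = np.load("virus_plus_surroundings.npy")   # pattern recorded with the virus in place
background = np.load("surroundings_only.npy")     # pattern of the surroundings alone

# Subtract the background and clip, since photon counts can't be negative
virus_only = np.clip(sample - background, 0.0, None)

np.save("virus_diffraction.npy", virus_only)      # pattern attributed to the virus alone
```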

If confirmed, that’s an extraordinary breakthrough. With brighter x-ray sources, the team says higher resolution images will be possible and that it’s just a matter of time before they start teasing apart the 3D structures of the many proteins that have eluded biologists to date.

But best of all, x-ray diffraction gear is so cheap that this kind of technique should be within reach of almost any university lab in the world.

Ref: arxiv.org/abs/0806.2875: Quantitative Imaging of Single, Unstained Viruses with Coherent X-rays