Archive for September, 2008

How politics manipulates science

Tuesday, September 30th, 2008

In a fascinating and controversial paper, Richard Lindzen from the Massachusetts Institute of Technology sets out to show how changes in the structure of scientific activity over the past half century have left the scientific endeavor vulnerable to political manipulation.

In particular, he focuses on how political bodies try to control scientific institutions, how scientists adjust both data and theory to accommodate politically correct positions, and how opposition to these positions is disposed of.

Much of Lindzen’s discussion is aimed at the climate change debate where he says these vulnerabilities have been exploited to a remarkable extent.

As a result, he says: “progress in climate science and the actual solution of scientific problems in this field have moved at a much slower rate than would normally be possible.”

That may lead to trouble ahead. He says that society is undoubtedly aware of the imperfections of science, but it has rarely encountered a situation such as the current global warming hysteria, in which institutional science has so thoroughly committed itself to policies that call for massive sacrifices in well-being worldwide.

And he hints that this may not be the best way forward. Lindzen says that massive crash programs such as the Manhattan Project are not appropriate to all scientific problems. In particular, such programs are unlikely to be effective in fields where the basic science is not yet in place. Rather, they are best suited to problems where the needs are primarily in the realm of engineering.

For the record, Lindzen was once a member of the UN’s Intergovernmental Panel on Climate Change and is widely regarded as a climate change skeptic.

Be that as it may, this is a recommended read.

Ref: Climate Science: Is it Currently Designed to Answer Questions?

Forget black holes, could the LHC trigger a “Bose supernova”?

Monday, September 29th, 2008


The fellas at CERN have gone to great lengths to reassure us all that they won’t destroy the planet (who says physicists are cold hearted?).

The worry was that the collision of particles at the LHC’s high energies could create a black hole that would swallow the planet. We appear to be safe on that score but it turns out there’s another way in which some people think the LHC could cause a major explosion.

The worry this time is about Bose Einstein Condensates, lumps of matter so cold that their constituents occupy the lowest possible quantum state.

Physicists have been fiddling with BECs since the early 1990s and have become quite good at manipulating them with magnetic fields.

One thing they’ve found is that it is possible to switch the force between atoms in certain kinds of BECs from positive to negative and back using a magnetic field, a phenomenon known as a Feshbach resonance.

But get this: in 2001, Elizabeth Donley and buddies at JILA in Boulder, Colorado, caused a BEC to explode by switching the forces in just this way. These explosions have since become known as Bose supernovas.

Nobody is exactly sure how these explosions proceed which is a tad worrying for the following reason: some clever clogs has pointed out that superfluid helium is a BEC and that the LHC is swimming in 700,000 litres of the stuff. Not only that but the entire thing is bathed in some of the most powerful magnetic fields on the planet.

So is the LHC a timebomb waiting to go off? Not according to Malcolm Fairbairn and Bob McElrath at CERN who have filled the back of a few envelopes in calculating that we’re still safe. To be doubly sure, they also checked that no other superfluid helium facilities have mysteriously blown themselves to kingdom come.

“We conclude that there is no physics whatsoever which suggests that Helium could undergo any kind of unforeseen catastrophic explosion,” they say.

That’s comforting and impressive. Ruling out foreseen catastrophes is certainly useful but the ability to rule out unforeseen ones is truly amazing.

Ref: There is no Explosion Risk Associated with Superfluid Helium in the LHC Cooling System

Orbits ‘n’ ephemeris

Saturday, September 27th, 2008

The best of the rest from the physics arXiv this week:

Correlated Connectivity and the Distribution of Firing Rates in the Neocortex

Constructing Perfect Steganographic Systems

Fainter and Closer: Finding Planets by Symmetry Breaking

Diamond Nanoparticles as Photoluminescent Nanoprobes for Biology and Near-Field Optics 

Silicon Photonics: The Inside Story

Limb Preference in the Gallop of Dogs and the Half-Bound of Pikas on Flat Ground

Precise Orbital Tracking of an Asteroid with a Phased Array of Radio Transponders

Periodic Pioneer anomaly points to modified general relativity

Friday, September 26th, 2008


The Pioneer anomaly grows ever more fascinating.

Here’s the background: Pioneer 10 and 11 were launched in 1972 and 1973 respectively and, after sweeping past a number of the outer gas giants, have been heading out of the solar system ever since.

NASA has been accurately tracking their position and speed using Doppler measurements of radio signals from the craft. But this data has thrown up a problem. Both probes appear to be decelerating faster than can be explained by the Sun’s gravity. All this has been widely discussed and numerous explanations have been put forward to account for the discrepancy.

What isn’t so well known is that there is a periodic component to the anomaly. The team at NASA’s Jet Propulsion Lab who have been collecting the data say that it’s unlikely that this variation comes from the spacecraft. Instead, they think it is probably the result of something at our end, such as a tiny variation in Earth’s orbit.

Now Bruno Christophe and pals from the French aerospace lab ONERA, near Paris, and various other French institutions have carried out the most detailed analysis yet of these periodic variations, and they raise another interesting possibility.

A number of people have suggested modifications to general relativity that would explain the Pioneer anomaly. But there has never been a way to test these modifications.

Now Christophe and co say that the periodic variations are compatible with the effects on radio signals that some of these modifications might cause.

That’s an extraordinary claim. Obviously, more analysis is needed and it always pays to be cautious with these kinds of ideas. But could it really be possible that the Pioneer anomaly is the first evidence of physics beyond Einstein’s version of general relativity?

Ref: Pioneer Doppler Data Analysis: Study of Periodic Anomalies

New fractal pattern found in milk and coffee

Thursday, September 25th, 2008


Next time you stare into your 9am double tall latte, look with new respect. Japanese scientists have discovered a new type of fractal in the patterns coffee makes as it mixes with milk.

Placing a heavier fluid onto a lighter fluid always results in a disturbance at their boundary known as a Rayleigh–Taylor instability.
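
In the linear regime, the instability grows at a rate set by the density contrast between the two fluids (the Atwood number) and the wavelength of the perturbation. Here's a minimal sketch of that standard inviscid result; the densities are illustrative guesses, not figures from the paper:

```python
import math

# Linear growth rate of the Rayleigh-Taylor instability at an
# inviscid interface: sigma = sqrt(A * g * k), where A is the
# Atwood number and k the perturbation wavenumber.
def atwood(rho_heavy, rho_light):
    """Dimensionless density contrast between the two fluids."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def growth_rate(rho_heavy, rho_light, wavelength, g=9.81):
    """Exponential growth rate (1/s) of a perturbation of the given wavelength (m)."""
    k = 2 * math.pi / wavelength  # wavenumber in 1/m
    return math.sqrt(atwood(rho_heavy, rho_light) * g * k)

# Hypothetical densities: coffee solution slightly denser than milk.
A = atwood(1040.0, 1030.0)
sigma = growth_rate(1040.0, 1030.0, 0.001)  # 1 mm perturbation
print(f"Atwood number {A:.4f}, growth rate {sigma:.1f} 1/s")
```

Even a tiny density contrast gives a growth rate fast enough to visibly disturb the interface within a fraction of a second, which is why the pattern evolves as quickly as it does.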

Michiko Shimokawa and Shonosuke Ohta from Kyushu University in Japan placed coffee (Nescafe’s Gold Blend, if you must know) on top of ordinary milk, which is lighter, and watched how gravity and surface tension compete in a way that leads to Rayleigh-Taylor instability.

As soon as the coffee droplet is placed on the surface, the coffee solution spreads out, creating a fractal pattern. But this is a different kind of pattern from the ordinary fractals seen in river branches and bacterial colonies, which continue to grow and increase in area.

Instead, in coffee, parts of the pattern disappear as they are sucked into the milk by gravity. The result is a shifting pattern, parts of which continually vanish.

Shimokawa and Ohta say this behaviour closely matches that of a Sierpinski carpet, which is formed by cutting a square into 9 smaller squares in a 3-by-3 grid. The central square is then removed and the procedure applied to the remaining 8 squares ad infinitum. This creates a fractal structure with dimension log 8/log 3 ≈ 1.89.
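
The dimension follows from box counting: each iteration keeps 8 of every 9 sub-squares at one third the scale. A quick Python sketch (not from the paper) that builds the carpet and recovers the dimension:

```python
import math

def sierpinski_carpet(depth):
    """Return a 3**depth x 3**depth boolean grid of the Sierpinski carpet.

    A cell survives unless, at some scale, it sits in the central
    ninth of its 3x3 block.
    """
    n = 3 ** depth
    def filled(x, y):
        while x > 0 or y > 0:
            if x % 3 == 1 and y % 3 == 1:
                return False
            x, y = x // 3, y // 3
        return True
    return [[filled(x, y) for x in range(n)] for y in range(n)]

# Box counting: at depth d the carpet keeps 8**d of the 9**d cells,
# so the dimension is log(8**d) / log(3**d) = log 8 / log 3.
grid = sierpinski_carpet(3)
boxes = sum(row.count(True) for row in grid)
dimension = math.log(boxes) / math.log(3 ** 3)
print(boxes, round(dimension, 4))  # 512 1.8928
```

The same box-counting procedure, applied to photographs of the spreading coffee, is essentially how a fractal dimension is measured experimentally.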

That’s the same dimension that the coffee fractals turn out to have. And there are other similarities too, such as the disappearing patterns.

This, say the Japanese pair, strongly implies that the coffee fractal must form in the same way as a Sierpinski carpet, following similar rules. Intriguing!

So come on chaps: what are these rules and how do they come about in a system dominated by the complexity of Rayleigh-Taylor instabilities?

Ref: Annihilative fractals of coffee on milk formed by Rayleigh–Taylor instability

Why spontaneous traffic jams are like detonation waves

Wednesday, September 24th, 2008


We’re all familiar with phantom jams, traffic blockages that arise with no apparent cause and that melt away for no discernible reason.

Today Ruben Rosales and pals at MIT and the University of Alberta in Canada coin a new term for the waves that cause these hold ups: they call them jamitons.

And jamitons turn out to have an interesting property: they are self-sustained disturbances consisting of a shock matched to vehicle speed.

If that sounds familiar, it’s because you’re reminded of the way in which certain types of transonic disturbances can be self-sustaining. In the world of chemists, these are known as detonation waves.

Rosales and co say jamitons and detonation waves are mathematical analogues.

That sounds interesting and useful and perhaps one day it will be. But you wouldn’t guess it from this paper.

Rosales and friends are unable to run with their analogy in any useful way. They say that the existence of jamitons in traffic flow is an indication that dangerous vehicle concentrations may occur (no, really?).

And they conclude: “such situations may be avoided by judicious selection of speed limits, carrying capacities, etc” (Wow!)

In other words, spontaneous traffic jams may cause avoidable crashes. Nothing gets past these guys.

Ref: On “Jamitons,” Self-Sustained Nonlinear Traffic Waves

Loophole found in quantum cryptography photon detectors

Tuesday, September 23rd, 2008


If you’re hoping to secure your data using quantum cryptography, you might want to find a shoulder to cry on.

Quantum cryptography ought to be 100 percent secure. In theory, it provides perfect security against eavesdroppers. But in practice, a number of loopholes have emerged (see here and here). And today, Vadim Makarov and pals at the Norwegian University of Science and Technology reveal another.

One crucial piece of kit that every quantum cryptographer needs is a detector capable of spotting single photons. And the detector of choice in about half of quantum cryptography experiments is the Perkin Elmer SPCM-AQR detector module. “Until recently, this has been the only commercially available Si single photon detector model,” say Makarov and buddies.

Sadly, it turns out to have a significant flaw. The Norwegian team says that bombarding the machine with bright optical pulses can override the control circuitry in a way that allows an eavesdropper to control its output. That gives Eve a way of staging a successful intercept attack.

I know what you’re thinking: why not switch to the gear used in the other half of quantum crypto experiments? The answer is that Makarov and pals have already shown that these devices have a vulnerability.

All is not lost, however. Now that the vulnerability has been revealed it should be straightforward to implement extra security to foil such an attack.

But the implications are clear. The eternal cat and mouse game between eavesdroppers and their victims looks set to continue. Which means that quantum cryptography may never be perfect.

Ref: Can Eve control PerkinElmer actively-quenched single-photon detector?

The fine line between the visible and invisible

Monday, September 22nd, 2008


The man who built the world’s first invisibility cloak is back and this time he’s got an even better idea.

His first design was a triumph for headline writers and Harry Potter fans alike, although most glossed over the fact that this first cloak worked only for microwave-sensitive eyes and even then only at a single specific frequency. Oh, and only in two dimensions. And it wasn’t really a cloak at all, more of an invisibility canister.

Nevertheless, nothing should be taken away from the technical achievements of David Smith and colleagues at Duke University in North Carolina. They’ve done a job almost as spectacular as their PR team.

The new idea gets around one of the most pressing problems associated with invisibility cloaks, which is that they are impossible to construct well. Invisibility cloaks work by distorting the permeability and permittivity of the cloaking material in a way that forces light to bend around an internal cavity. This makes the cavity invisible to an observer.

But the technique requires the permeability and permittivity to take infinite values at certain points, particularly on the boundary between the cavity and the cloaking material. And this just isn’t possible.

Various ideas have been proposed to get around this problem but all have their own weaknesses.
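
To see where the infinities come from, consider the standard cylindrical cloak prescription of Pendry, Schurig and Smith (the textbook construction, not the new paper's): the azimuthal component of the permittivity and permeability diverges at the inner boundary. A short sketch, with illustrative radii:

```python
# Material parameters for the ideal 2D cylindrical cloak
# (Pendry, Schurig and Smith), inner radius R1, outer radius R2.
# The azimuthal component diverges as r approaches R1 -- the
# impossibility the post refers to.
R1, R2 = 1.0, 2.0

def cloak_parameters(r):
    """Relative permittivity/permeability components at radius r (R1 < r < R2)."""
    eps_r = (r - R1) / r
    eps_theta = r / (r - R1)
    eps_z = (R2 / (R2 - R1)) ** 2 * (r - R1) / r
    return eps_r, eps_theta, eps_z

for r in (1.5, 1.1, 1.01, 1.001):
    _, eps_theta, _ = cloak_parameters(r)
    print(f"r = {r}: eps_theta = {eps_theta:.1f}")
# eps_theta blows up (3, 11, 101, 1001, ...) as r -> R1
```

No real metamaterial can supply an unbounded response, so every physical cloak truncates these values somewhere, and that truncation is exactly the imperfection Smith's group proposes to embrace rather than fight.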

So Smith and colleagues say they might as well accept that an invisibility cloak cannot be perfect and use it to their advantage. Instead of attempting to hide the internal cavity completely or crushing it to a point as others have done, the new idea is to make it appear as a single line. Such an invisibility cloak wouldn’t hide an object entirely but instead make it look like a thin line, like a defect in the structure of the cloak.

That’s clever because it dramatically relaxes the constraints placed on the types of metamaterials you can use for cloaking. Smith and his buddies say that such a cloak would be “very easy to realize” using known techniques.

Smith is known for publishing theoretical predictions just ahead of the practical realisation.

So if his past form is anything to go by, we can expect to see a working invisibility cloak that employs this technique in the coming weeks or months.

Ref: Invisible Cloak With Easily-Realizable Metamaterials

Rubies ‘n’ diamonds

Saturday, September 20th, 2008

The best of the rest from the physics arXiv:

Derivation of Evolutionary Payoffs from Observable Behavior

Space-Time Sensors Using Multiple-Wave Atom Interferometry

Dark Matter from a Gas of Wormholes

Observable Topological Effects of Mobius Molecular Devices

How Long Should an Astronomical Paper be to Increase its Impact?

Nanodiamonds lead to sharper images

Friday, September 19th, 2008


Zap a diamond nanoparticle with laser light and it will fluoresce, emitting single photons if it is small enough.  That makes nanodiamonds extremely useful, say Aurélien Cuche at the Université Joseph Fourier in Grenoble and pals.

For a start, nanodiamonds are easily absorbed by cells, which allows them and the processes inside them to be tracked with ease.

But Cuche and co have found a more exciting use: they have attached a nanodiamond to the tip of a scanning near-field microscope to provide single photon illumination when needed. And this dramatically improves the resolution of these devices, says the team.

Which means that nanodiamonds, cough, are a microscopist’s best friend.

Ref: Diamond Nanoparticles as Photoluminescent Nanoprobes for Biology and Near-Field Optics