Other highlights from the physics preprint server this week
Archive for September, 2007
Star formation in the early universe ain’t particularly well understood. Why should a pristine cloud of dust suddenly collapse to form a star?
One theory is that clumps of dark matter create gravitational wells into which the dust clouds collapse (although why dark matter should be clumpy and not smooth is anybody’s guess).
But Liang “Ang Li” Gao at Durham University in the UK says that in some theories dark matter might equally be drawn into filaments. In which case the first stars would have formed as filaments rather than spheres. So they’d look like giant lightbulbs hanging in the cocktail lounge of space.
Ref: arxiv.org/abs/0709.2165: Lighting the Universe with Filaments
The kilogram is a-shrinkin’ and ain’t nobody sure why. The problem is the way it is defined: the mass of a lump o’ metal stored in a vault in Paris. That’s no good cos a few atoms rub off each time it is picked up and others seem to fall off even when it ain’t picked up. In the last few years, it’s lost about the mass of a grain of sand.
Ain’t there a better way to define the kilogram? There sure is, and no shortage of ’em either. In November the world’s heavyweight experts are meeting in Paris to select the best definition by a vote or a toss of a coin or by some other method they ain’t told us about.
So between now and then expect to see plenty of elbowin’ ’n’ jostlin’ as the various definitions jockey for position. This week it’s the turn of Ronald “McDonald” Fox and Theodore “Over-the” Hill at Georgia Tech in Atlanta to pitch their definition.
They reckon the best way to define a kilogram is to fix the value of Avogadro’s constant, the number of atoms in 12 grams of carbon-12, at 84446886^3. That would make one gram exactly equal to the mass of 18 x 14074481^3 atoms of carbon-12.
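If ya fancy checkin’ their arithmetic, here’s a quick sketch (the numbers are the paper’s; the script just confirms they hang together):

```python
# Fox and Hill's proposed exact value for Avogadro's constant:
# a perfect integer cube chosen close to the measured value.
N_A = 84446886 ** 3

# One gram of carbon-12 would then contain exactly N_A / 12 atoms.
# Note 84446886 = 6 x 14074481, so N_A = 216 x 14074481^3,
# and 216 / 12 = 18 -- hence the paper's figure.
atoms_per_gram = N_A // 12

assert N_A % 12 == 0                      # divides evenly, as designed
assert atoms_per_gram == 18 * 14074481 ** 3
print(f"N_A = {N_A:.6e}")                 # close to the measured 6.022e23
```

The cube is chosen so the definition stays within the experimental uncertainty of the measured constant while dividing neatly by 12.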
It’s a good try but ain’t nobody gonna be happy with that definition. Why use carbon-12 and not silicon which is the current darling of most heavyweights in physics? And why rely on countin’ when you can define a kilogram in terms of its equivalent energy, thereby linking it via relativity to another fundamental: the Planck constant?
It’s all shapin up nicely for one helluva mudwrestle in Paris in November. All we need is a unit of hellraisin so we can work out the winner.
Ref: arxiv.org/abs/0709.2576: A Better Definition of the Kilogram
For a while now, them star gazers have been banging heads over the nature o’ gravity.
Here’s the problem: when you look at the way galaxies are a-spinnin and a-spiralin, there just ain’t enough gravity to hold em together. So either there is some hidden mass putting its gravitational shoulder to the wheel: the so-called dark matter. Or there’s something wrong with the 1/r^2 rule at large distances and gravity is stronger than we think at these scales.
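Here’s a toy back-of-envelope comparison showing why the star gazers are worried (round illustrative numbers, not a fit to any real galaxy): Newton’s law predicts orbital speeds that fall off as 1/√r once you’re outside the visible mass, while measured rotation curves stay stubbornly flat.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_visible = 2e41       # ~10^11 solar masses of visible matter (illustrative)
kpc = 3.086e19         # one kiloparsec in metres

def keplerian_speed(r):
    """Circular orbital speed if all the visible mass lies inside radius r."""
    return math.sqrt(G * M_visible / r)

for r_kpc in (5, 10, 20, 40):
    v_kms = keplerian_speed(r_kpc * kpc) / 1000
    print(f"r = {r_kpc:2d} kpc -> predicted v = {v_kms:4.0f} km/s")

# Newton says the speeds should drop like 1/sqrt(r); measured rotation
# curves for spirals stay near-constant out to large radii -- so either
# extra unseen mass or extra gravity has to make up the difference.
```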
In 1981, Mordehai “Mouse” Milgrom modified gravity to cope with this problem in a theory called MOdified Newtonian Dynamics or MOND. It says that gravity is a little stronger over galactic distances and this is what stops spinning galaxies from tearin’ themselves apart.
But the theory makes some strange predictions too. For example, it says that some galaxies should look as if they are surrounded by a ring of dark matter.
Earlier this year, the Hubble Space Telescope found exactly that — a ring of dark matter around the galaxy cluster Cl 0024+17, which lies 5 billion light years from Earth. What astrogawpers actually saw were distorted images of galaxies behind Cl 0024+17 caused by the mass of Cl 0024+17 bendin’ light as it passed by, a phenomenon called gravitational lensing. The astrobods used these distortions to infer the distribution of mass around Cl 0024+17 and concluded it must have a ring of dark matter even though they ain’t actually seen a ring of any kind.
So Mouse Milgrom is a-shoutin and a-ravin from the roof tops: this is exactly what his theory predicts. Could it be enough to prove MOND once and for all?
Probably not. Gravitational lensing data is notoriously difficult to interpret so the work is by no means done and dusted. But even if it’s right, one swallow don’t a summer make. He’s gonna need a lot more evidence before his ideas become mainstream.
Ref: arxiv.org/abs/0709.2561 : Rings and Shells of “Dark Matter” as MOND Artifacts
Public transport networks are easy targets for terrorist attacks: anybody in London, Tokyo or Madrid will tell ya that. So Christian “Furbie” von Ferber at Coventry University in the UK and his buddies have decided to model a few of ‘em from the point of view of network theory and find out how vulnerable they are to various kinds of attack.
Public transport networks have many small world features. For example, it’s easy to get from one point in a city to another with only a few changes in transport. Now small world networks are known to be robust under random attack but particularly vulnerable to targeted, organised attacks.
So Furbie von Ferber has worked out how various systems break down when stations or connections between them are removed according to predetermined rules. So this ain’t a measure of how easy it is to attack the transport system but how well the buses ‘n’ trains run after an attack (and let’s face it, in my ‘hood they don’t run too good at the best o’ times).
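To see the idea in miniature, here’s a toy sketch (pure illustration, not von Ferber’s actual model): build a little hub-and-spoke network, knock out stations either at random or by targeting the best-connected ones, and watch how big the largest surviving connected chunk stays.

```python
import random
from collections import defaultdict

def largest_component(adj, removed):
    """Size of the biggest connected component, ignoring removed nodes."""
    seen, best = set(), 0
    for start in adj:
        if start in removed or start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            comp += 1
            for nb in adj[node]:
                if nb not in removed and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, comp)
    return best

def attack(adj, order):
    """Remove nodes in the given order; report largest component after each."""
    removed, sizes = set(), []
    for node in order:
        removed.add(node)
        sizes.append(largest_component(adj, removed))
    return sizes

# Toy 'transport network': one hub (node 0) linked to every stop,
# plus a line running through stops 1..9.
adj = defaultdict(set)
edges = [(0, i) for i in range(1, 10)] + [(i, i + 1) for i in range(1, 9)]
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

random.seed(1)
by_degree = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
at_random = random.sample(sorted(adj), 3)

print("targeted :", attack(adj, by_degree[:3]))   # [9, 7, 6]
print("random   :", attack(adj, at_random))
# Knocking out the best-connected stations typically fragments the
# network far faster than random removals -- the small-world weakness
# the paper probes for real transport maps.
```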
Here’s the list of cities he’s chosen to model: Berlin, Dallas, Dusseldorf, Hamburg, Hong Kong, Istanbul, London, Los Angeles, Moscow, Paris, Rome, Sao Paulo, Sydney and Taipei.
Strangely, no room for Madrid or Tokyo, both of which have suffered serious attacks on their transport systems.
Turns out that Paris and Hong Kong look the most robust and Dallas (do they have public transport in Dallas?) looks the most vulnerable: shout BOO and it’ll grind to a halt.
What Furbie von Ferber fails to do is say what Dallas has gotta do to strengthen its network. Of course, that wouldn’t be in any way related to the potential for consultancy fees from this work.
Ref: arxiv.org/abs/0709.3206: Attack Vulnerability of Public Transport Networks
Last year, the world went bonkers when scientists at Duke University in North Carolina unveiled the world’s first invisibility cloak. There weren’t no let up in the wall-to-wall media coverage it generated. And impressive though it was, what many reporters forgot to mention was that the cloak works only for microwaves at a single frequency and only in two dimensions.
So if yer happen to be a sandwich flat alien with microwave radar dishes for eyes then this cloak mighta fooled ya.
Now Igor “Smelly knees” Smolyaninov and buddies at the University of Maryland in College Park have built the world’s first invisibility cloak that works at optical frequencies. The cloak is based on a design by Vladimir Shalaev from Purdue University, who dreamt up a new method of building invisibility cloaks that gets around some of the limitations that otherwise prevent them working at optical frequencies.
And of course, there are a few catches this time too. This cloak also works only in two dimensions, when the light is polarised in a particular way and over a scale of only a few micrometers. But from acorns do oak trees grow.
So if yer happen to be a sandwich flat alien the size of a grain of pollen wearing polaroid sun glasses, this is the invisibility cloak for you.
What’s amazing, though, is the speed with which these eggheads are a-buildin and a-developin this technology. It ain’t no more than a handful of years since people said that optical invisibility cloaks ain’t possible, even in theory. Even the optimists were sayin only last year that it’d be 10 to 15 years before we see optical versions. So hats off to the Smelly Knees of Smolyaninov.
Ref: arxiv.org/abs/0709.2862: Electromagnetic Cloaking in the Visible Frequency Range
In 1926, when the scientific world was still a-puzzling and a-wondrin over the wave-particle duality of light, Einstein asked a pal, Emil “Hurry” Rupp, to conduct an experiment that would settle the matter. If anyone could do it, thought Einstein, it was Rupp, who was considered the latest and greatest experimental physicist of the day.
The experiment involved so-called canal rays produced in a gas discharge tube. When an electric field passes across a gas at low pressure, the tube shines ‘n’ glows due to the movement of electrons from the cathode to the anode (so-called cathode rays). But if a hole is made in the cathode, so-called canal rays start a-streamin’ and a-strayin’ through the hole in the opposite direction to the cathode rays.
The question that Einstein asked Rupp to resolve was whether the light from canal rays was wave-like or particle-like.
The matter was settled when Rupp said he could see with his own eyes that the light formed interference patterns. That proved it must be wave-like. Einstein presented the result as evidence in his own interpretation of quantum mechanics.
But nobody else could see these interference patterns and physicists soon began to doubt the veracity of Rupp’s work. In 1935 he publicly retracted five of his scientific papers from the previous year, claiming to be suffering from “psychasthenia linked to psychogenic semiconsciousness”.
Rupp turned out to be the greatest scientific fraudster of the 20th century, surpassing even Hendrik Schön from Bell Labs in his boldness and audacity (and mental health). It later emerged that everything Rupp had done in the previous ten years was a fraud.
Einstein swallowed it hook, line and sinker.
Now Jeroen “Kongen” van Dongen at the Institute for History and Foundations of Science at Utrecht University in the Netherlands has re-analysed Einstein’s role in the controversy. He says the evidence “suggests a strong theoretical prejudice on Einstein’s part” which led him to ignore evidence that Rupp’s experiments were a sham and a-rigged.
Poor old Einstein! But I know y’all will forgive him.
Ref: arxiv.org/abs/0709.3099: Emil Rupp, Albert Einstein and the Canal Ray Experiments on Wave-particle Duality: Scientific Fraud and Theoretical Bias
And: arxiv.org/abs/0709.3226: The Interpretation of the Einstein-Rupp Experiments and their Influence on the History of Quantum Mechanics
How do black holes move? There’s more to that question than meets the eye. Black holes ain’t like nothing else in the Universe, havin’ all kinds o’ strange quantum properties as well as some curious gravitational ones too. So when it comes to cruisin’ the cosmos, do black holes move like classical objects such as stars or like quantum objects such as photons?
Carol “Kilo” Herzenberg, some kinda freelance scientist in Chicago, has done some calculatin’ n’ computin’ and come up with an answer. She reckons that big black holes move-to-the-groove like stars and small black holes get-down-to-the-beat like photons. And the threshold between these behaviours occurs for a black hole about the size of a nucleon.
That might sound lil but a black hole that size would have a mass of about 2 trillion kilograms (2 x 10^12 kg). That’s about the mass of Wyoming, which is about as close as you can get to a black hole on Earth.
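That figure is easy to check on the back of an envelope: invert the Schwarzschild radius formula r = 2GM/c² for a nucleon-sized radius (the exact answer depends on the radius you assume; ~1 femtometre is used here):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
r = 1e-15         # assumed nucleon radius: ~1 femtometre

# Schwarzschild radius: r = 2GM/c^2, so M = r c^2 / (2G)
M = r * c**2 / (2 * G)
print(f"mass = {M:.1e} kg")   # of order 10^12 kg, as the paper says
```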
Here’s something for Kilo Herzenberg to Gedanken about: how many of these things are a-floatin’ and a-driftin’ round the cosmos and what’s the probability of one of them popping into existence anywhere near Earth in my life time?
Ref: arxiv.org/abs/0709.1741: How do Black Holes Move, as Quantum Objects or as Classical Objects?
There ain’t no limit to how impressive computer simulations of the real world can be. Ya only gotta switch on a PS3 or an Xbox 360 to see how far things have come since Pong hit the small screen in the 70s as a poor excuse for tennis.
But there are still plenty of simple things that even the most powerful computers get overheated about. And we’re not just talking nuclear stockpile management and the physics of supernovas.
Taizo “Taser” Kobayashi at Kyushu University in Japan says that the way musical instruments generate sound is one of these complicated things that computers just can’t get their microprocessors around.
And here’s the reason. The physics at work operates over many orders of magnitude so any direct numerical simulation from first principles soon explodes into a godforsaken mass o’ messy details. For example, the energy in the turbulent airflow within a wind instrument is 10^5 times greater than the energy of the sound field it radiates. Try simulatin’ that on a supercomputer and it’ll grind to a stutterin’ halt, a-spewin’ out steam ‘n’ flames.
But there’s another way, says Taser Kobayashi. Try dividing up the physical processes by scale into easily simulated packages. Then all yer supercomputer has to worry about is where these packages meet at the edges. It’s called a multiphysics simulation.
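Here’s a toy sketch of the splitting idea (an illustration only, not Kobayashi’s actual scheme): two subdomains of a 1D heat-flow problem are solved separately, and the only thing they share is the value at their common edge, exchanged once per step.

```python
def step(u, left, right, alpha=0.25):
    """One explicit diffusion step on a subdomain with given boundary values."""
    new = u[:]
    ext = [left] + u + [right]   # pad with the boundary values
    for i in range(len(u)):
        new[i] = ext[i + 1] + alpha * (ext[i] - 2 * ext[i + 1] + ext[i + 2])
    return new

# Two subdomains that could, in principle, use different models or
# resolutions; here both are plain diffusion for clarity.
hot = [1.0] * 8    # left region starts hot (far-left boundary held at 1.0)
cold = [0.0] * 8   # right region starts cold (far-right boundary held at 0.0)

for _ in range(200):
    # The only coupling: each side sees the other's edge value.
    interface_left, interface_right = hot[-1], cold[0]
    hot = step(hot, left=1.0, right=interface_right)
    cold = step(cold, left=interface_left, right=0.0)

print(round(hot[-1], 2), round(cold[0], 2))
# Heat leaks across the interface: the two separately-solved regions
# behave like one continuous bar, even though no solver ever saw
# the whole problem at once.
```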
That kind of dividin up ain’t always possible: physics just don’t always divide easy. But Taser says it works fine for simulatin’ the sound a flute makes when you blow into it. He reckons it reduces the computational time by two orders of magnitude compared to simulatin’ from first principles. That’s impressive.
Where else could this work? Taser tentatively suggests simulating the behaviour of ion channels in biological cells in which the physics can be easily divided into atomic, molecular and cellular packages.
Any other suggestions?
Ref: arxiv.org/abs/0709.0787: Sound Generation by a Turbulent Flow in Musical Instruments – Multiphysics Simulation Approach