The other highlights from the physics arXiv this week:
Steganography is the art of hiding messages as they are sent, in a process akin to camouflage. Cryptography, on the other hand, makes no attempt to hide the message itself, only to conceal its content.
Today, Wojciech Mazurczyk and Krzysztof Szczypiorski of the Warsaw University of Technology in Poland explain how VoIP services are wide open to steganographic attack and even measure how much information can be sent covertly in this way.
VoIP services such as Skype are vulnerable to steganographic attack because they use such high bandwidth, which makes it relatively easy to embed a hidden message in the bit stream in a way that is almost impossible to detect.
For precisely this reason, the US Department of Defense specifies that any covert channel with a bandwidth higher than 100 bps must be considered insecure for average security requirements. For high security requirements, the DoD says the data rate should not exceed 1 bps, making it next to impossible to embed a hidden code without it being noticed.
So VoIP systems such as Skype, with their much higher data rates, are difficult to secure.
And to prove it, Mazurczyk and Szczypiorski have tested a number of steganographic attacks (including two new ones they’ve developed themselves) on a VoIP system to determine how much data could be sent. They say that during an average call (that’s 13 minutes long according to Skype) they were able to covertly transmit as much as 1.3 Mbits of data.
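A quick back-of-the-envelope check, using only the figures quoted above, shows how far this covert channel exceeds those DoD thresholds:

```python
# Covert capacity reported by Mazurczyk and Szczypiorski: ~1.3 Mbit
# hidden over an average 13-minute Skype call.
covert_bits = 1.3e6      # total hidden payload, bits
call_seconds = 13 * 60   # average call length, seconds

rate_bps = covert_bits / call_seconds
print(f"covert channel rate: {rate_bps:.0f} bps")

# Compare against the DoD limits mentioned above
print(f"{rate_bps / 100:.0f}x the 100 bps 'average security' limit")
print(f"{rate_bps / 1:.0f}x the 1 bps 'high security' limit")
```

That works out to roughly 1,700 bits per second — well beyond anything the DoD would call secure.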
That should get a number of governments, companies and individuals thinking. How secure is your VoIP system?
Ref: arxiv.org/abs/0805.2938: Steganography of VoIP streams
How do you turn a narrow slit into a large window? Fill it with a metamaterial that captures and transmits as much light as the bigger window. At least, that’s what Xiaohe Zhang and colleagues at Shanghai Jiao Tong University in China tell us.
Metamaterials are substances constructed in a way that gives them exotic bulk properties that aren’t otherwise found in nature, such as the ability to manipulate electromagnetic radiation in unheard-of ways. Much of the publicity about metamaterials has revolved around their potential ability to form invisibility cloaks that can hide an object from view. But less well known is a menagerie of designs that do other strange things, such as rotate the appearance of a cloaked object.
Now Xiaohe Zhang and pals have weighed in with yet another design: a material “that can transmit the information outside a domain through a small slit, with the transmittance identical to the one of a big window”. In other words, they’ve designed a small window with the same transparency as a larger one, albeit one that works in the microwave region of the spectrum.
But why on Earth would you want one of these? It’s one of those things that has a useful smell about it, but the team don’t mention any applications in their paper so I’m kinda stumped.
Ref: arxiv.org/abs/0805.3039: Transformation Media that Turn a Narrow Slit into a Large Window
On the atomic scale, friction is a curious beast and explaining exactly how it arises (and why in certain circumstances it appears to be absent) has stumped tribologists.
For the growing number of engineers designing and building nanomachines, one important question is how friction scales with the contact area between nanoscale components.
In the macroscopic world, this is easy to answer: dry friction is independent of contact area, according to the second law of friction developed by the 17th-century French scientist Guillaume Amontons.
Not so on the nanoscale, say Dirk Dietzel at the University of Münster in Germany and friends who have spent many happy hours measuring the force needed to push nanoparticles around using an atomic force microscope.
And their results are at first glance quite counterintuitive. They say that in some circumstances the frictional force increases linearly with surface area. And, get this, in other circumstances friction is absent entirely.
Friction-free sliding is actually predicted between surfaces that are perfectly smooth, atomically flat and inert. That turns out to be feasible only for very small surface areas. The evidence for this effect has been patchy so far, so Dietzel’s team can pat themselves on the back.
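The contrast with the macroscopic picture can be sketched numerically. In Amontons’ law the friction force depends only on the normal load, while the nanoscale measurements show a force that grows with contact area. The friction coefficient and shear stress below are illustrative assumptions, not figures from the paper:

```python
# Macroscopic dry friction (Amontons): F = mu * N, independent of area.
# Nanoscale regime reported by Dietzel et al.: F = tau * A, growing
# linearly with contact area. All numbers here are purely illustrative.
mu = 0.3             # assumed friction coefficient (dimensionless)
normal_load = 1e-9   # assumed normal load, newtons
tau = 1e6            # assumed interfacial shear stress, pascals

for area in [1e-16, 2e-16, 4e-16]:   # contact areas, m^2
    f_macro = mu * normal_load       # unchanged as area grows
    f_nano = tau * area              # doubles when area doubles
    print(f"A={area:.0e} m^2  macro={f_macro:.1e} N  nano={f_nano:.1e} N")
```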
More interesting perhaps is their assertion that the frictional forces they have measured are the result of contamination between nanosurfaces.
What they’re implying is that the problems that many engineers have with friction on that scale could be solved by reducing contamination. That’s an interesting take. The only trouble is that cleanliness on the atomic scale is not a simple thing to achieve.
Ref: arxiv.org/abs/0805.2448: Frictional Duality Observed during Nanoparticle Sliding
Take a standard piece of copier paper (80 g/m^2) and carefully peel it into two sheets. Listen out for the way it tears and watch how fast the peel line creeps.
What you’ll see and hear is a stick-slip phenomenon in which the creep velocity varies over many orders of magnitude, with small movements of the peel line interspersed with huge avalanches.
So say Jari Rosti and pals at the Helsinki University of Technology in Finland, who have meticulously measured the way paper peels and developed statistical models to better understand what’s going on (those long winter evenings in Finland must fly by).
Why bother? It turns out that the physics of peeling paper almost exactly mimics the stick-slip movement of tectonic plates, right down to the statistics of the time between “quakes” and the correlations between released energy and aftershock activity.
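As a loose illustration of what “earthquake-like statistics” means here: avalanche sizes in stick-slip systems typically follow a heavy-tailed power law, so the rare big events dominate. The exponent below is an arbitrary choice for the sketch, not one taken from Rosti’s data:

```python
import random

# Sample avalanche sizes from a Pareto power law, P(s > x) ~ x^(-alpha).
# A heavy tail means the largest event dwarfs the typical one, just as
# one big quake releases more energy than thousands of small tremors.
random.seed(42)
alpha = 1.5   # assumed tail exponent (illustrative)

sizes = [random.paretovariate(alpha) for _ in range(100_000)]

mean_size = sum(sizes) / len(sizes)
print(f"mean avalanche size:    {mean_size:.1f}")
print(f"largest avalanche size: {max(sizes):.1f}")  # far above the mean
```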
It’s tempting to imagine that peeling paper could therefore be used as a simple model in which to study earthquake statistics. Sadly no. Rosti and co admit there are some subtle but surprising differences between the two systems which would make that impossible.
But it does raise questions about how such subtle differences arise in systems that are otherwise statistically so similar. Rosti hopes future work will reveal all. And with the Finnish winter coming all too soon after summer, they should have plenty of time to get peeling.
Ref: arxiv.org/abs/0805.3284: Line creep in paper peeling
Pluto’s three satellites, Hydra, Nix and Charon, are all a similar shade of grey. In fact, Nix and Hydra have exactly the same colour to within our ability to measure it. Pluto, on the other hand, is a beautiful shade of red. How come?
The current thinking is that Charon, Hydra and Nix are a similar colour because they were all formed in the giant impact that created this satellite system.
But today, Alan Stern, former head of NASA’s planetary science division and principal investigator for the New Horizons mission to Pluto, puts forward an alternative hypothesis.
His idea is that the impact of debris from the Kuiper belt on these bodies could send enough surface material into orbit to coat the satellites nearby. Interesting idea.
Stern calculates that the ejecta velocities on Pluto and Charon would be too low to escape. However, the ejecta from Nix and Hydra could easily escape in enough quantity to cover one another to a depth of tens of metres and to cover Pluto and Charon to a depth of tens of centimetres.
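The escape-velocity argument is easy to sketch with v_esc = sqrt(2GM/r). The Pluto figures below are rough published values; the size and density assumed for tiny Nix are illustrative guesses (its dimensions were poorly constrained in 2008), but the contrast is the point — ejecta needs only tens of metres per second to leave Nix, versus more than a kilometre per second to leave Pluto:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """v_esc = sqrt(2GM/r), in m/s."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Pluto: roughly known mass and radius
v_pluto = escape_velocity(1.3e22, 1.19e6)

# Nix: assumed ~45 km radius and ~1000 kg/m^3 density (illustrative)
r_nix = 4.5e4
m_nix = (4 / 3) * math.pi * r_nix**3 * 1000
v_nix = escape_velocity(m_nix, r_nix)

print(f"Pluto escape velocity: {v_pluto:.0f} m/s")  # ~1200 m/s
print(f"Nix escape velocity:   {v_nix:.0f} m/s")    # tens of m/s
```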
The weather on Pluto generates regular frosts which cover this up as quickly as it is laid down, but no such mechanism operates on the other satellites.
So that might explain the differences and similarities in the Plutonic color scheme. Stern also predicts that if he is right, the colours and albedos of Nix, Hydra and Charon should change slowly as more material is ejected and deposited. So by keeping a sharp eye on them, he can gain further evidence for his theory.
What’s more, he says that this mechanism may be common in the solar system wherever small binary systems are found, such as in the asteroid and Kuiper belts. And where this happens, these bodies should have similar colours too.
All we have to do now is to look out for the flurry of papers pointing to evidence that he’s right.
Ref: arxiv.org/abs/0805.3482: Ejecta Exchange, Color Evolution in the Pluto System, and Implications for KBOs and Asteroids with Satellites
The best of the rest from the physics arXiv this week:
In 2006, Mason Peck at Cornell University in Ithaca dreamt up an entirely new way to control satellites orbiting planets that have a magnetic field. The idea is based on the Lorentz force: a charged particle moving through a magnetic field experiences a force perpendicular to both its velocity and the field.
So the plan is to somehow ensure that the spacecraft becomes electrically charged as it moves through the planetary magnetic field, which should then generate a force that can alter the orbit or orientation of the vehicle. The big advantage of so-called Lorentz-actuated orbit control is that it requires no propellant. That’s a big deal, since the amount of fuel a spacecraft can carry is the main factor that determines its lifespan. Propellant-free propulsion could significantly extend a spacecraft’s operating life.
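To get a feel for the forces involved: for velocity perpendicular to the field, the Lorentz force is simply F = qvB. The charge and low-Earth-orbit numbers below are illustrative assumptions, not figures from Peck’s work:

```python
# Lorentz force on a charged spacecraft: F = q * v * B for v perpendicular to B.
q = 0.01    # assumed net charge on the spacecraft, coulombs
v = 7.5e3   # assumed velocity relative to the co-rotating field, m/s
b = 3.0e-5  # rough magnetic field strength in low Earth orbit, tesla

force = q * v * b
print(f"Lorentz force: {force:.2e} N")   # a few millinewtons

# Tiny -- but applied continuously and with no propellant, it could
# slowly reshape an orbit, which is the appeal of the scheme.
```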
Today, Peck along with William Gorman and James Brownridge at the State University of New York at Binghamton present the results of the first experimental trials of the idea. The work was funded by NASA but it has to be said: it doesn’t look entirely promising.
The team tested the ability of various objects to hold a charge in a vacuum while being bombarded with plasma, as would be the case in orbit. To generate the charge on the test object, they attached it to a sample of radioactive Americium-241, an alpha-particle emitter, and applied a voltage. The electric field carries away the positively charged alpha particles, leaving the object highly charged.
I’ll let the team take up the tale:
“Microscopic arcing was observed at voltages as low as -300 V. This arcing caused solder to explode off of the object.”
Obviously, a propulsion system that explodes while it is in operation needs some more work.
The early pioneers of experimental propulsion systems such as Robert Goddard and Wernher von Braun all had to cope with catastrophic failures, so Peck, Gorman and Brownridge are in good company. And as long as nobody gets hurt, a decent explosion livens up any experiment.
So stick with it fellas. Something tells me that if NASA funds the future development of this system, we’re going to be in for some fun.
Ref: arxiv.org/abs/0805.3332: Experimental Study of a Lorentz Actuated Orbit
“The most profound puzzle of contemporary physics” is how Benjamin Granett and colleagues from the University of Hawaii in Honolulu describe the problem of dark energy. And they’re not kidding.
What we have is the extraordinary observation that type Ia supernovas in the most distant galaxies in the universe are dimmer than they ought to be.
Nobody disputes this evidence; the challenge is to explain it and astronomers have been falling over themselves to construct various fascinating theories.
The astronomers’ darling is that the cosmos is filled with a mysterious “dark energy” that is pushing the universe apart. This causes the most distant galaxies to accelerate away from us faster than they would otherwise do. And since they are further away, the supernovas they contain are dimmer.
Recently, astronomers have suggested that if that is the case, then there ought to be other evidence for dark energy too. In particular, they say that this acceleration should change the way photons are influenced by the gravitational fields associated with superclusters of galaxies and the supervoids between them.
Photons should be “heated” and “cooled” as they pass through the crests and troughs of these structures (in other words, their energy should depend in a small way on the journey they’ve taken).
So a map of the universe according to photon temperature ought to coincide with the large-scale structure of superclusters and supervoids in the universe we see (a phenomenon known as the late-time integrated Sachs-Wolfe effect).
It turns out we already have just such a map in the form of the cosmic microwave background data taken by the WMAP spacecraft. But until now the variations of this map have only been weakly linked to the superclusters and supervoids we can see.
Today, Granett and pals have taken this work a step further and presented the first strong statistical link between the structure of the universe as we see it today and photon temperature.
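The kind of statistical test involved can be caricatured in a few lines: stack the photon temperature at the sky positions of superclusters and supervoids, and see whether the averages separate once the noise is beaten down. Everything below is synthetic toy data, not WMAP measurements:

```python
import random

# Toy ISW stacking: superclusters should look slightly hot and
# supervoids slightly cold once many noisy patches are averaged.
random.seed(0)
noise = 50.0    # per-patch noise, microkelvin (illustrative)
signal = 8.0    # assumed ISW signal amplitude, microkelvin

clusters = [signal + random.gauss(0, noise) for _ in range(2000)]
voids = [-signal + random.gauss(0, noise) for _ in range(2000)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"stacked superclusters: {mean(clusters):+.1f} uK")  # slightly hot
print(f"stacked supervoids:    {mean(voids):+.1f} uK")     # slightly cold
```

The real analysis is far subtler — which is exactly why the robustness question below matters.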
What’s more, they say their findings may help to explain a mysterious cold spot in this map that astronomers have been puzzling over for a while now. The cold spot, they say, is the result of supervoids.
This is interesting work but the question of course is: how robust is their statistical analysis? These kinds of findings are notoriously sensitive to the way in which the data is selected.
Now we see it. But next week, who knows?
Ref: arxiv.org/abs/0805.2974: Dark Energy Detected with Supervoids and Superclusters