Archive for the ‘Secrets’ Category

I know why the phase-locked wineglass sings

Thursday, January 8th, 2009

Here’s a neat party trick to impress your friends.

Rub your finger around the rim of a wineglass and friction causes it, and any liquid it contains, to oscillate. When this vibration produces an audible pure tone, the wineglass is said to “sing”.

Now Ana Karina Ramos Musalem and pals at the Weizmann Institute of Science in Israel have shown how to couple one singing wineglass to another so the second wineglass sings without anybody touching it.

The trick is to place both wineglasses in a sink of water (without letting the water overflow into the glasses). Rubbing one so that it sings should make the other sing too. This phenomenon, known as phase locking, is the same one that makes populations of crickets chirp in unison and fireflies flash in time with each other.

Phase locking depends on the strength of the coupling, which is greatest when the glasses are close together and their resonant frequencies are similar. So if you run into trouble, try moving them nearer to each other and matching their frequencies by changing the amount of water in each glass.
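
For the curious, here’s a minimal numerical sketch of phase locking using the textbook Kuramoto model of two coupled oscillators (the model choice and all the numbers are ours, not the paper’s):

```python
import numpy as np

def phase_drift(delta_f, coupling, t_end=60.0, dt=0.001):
    """Integrate the phase difference psi between two coupled oscillators
    whose natural frequencies differ by delta_f (in Hz):
        dpsi/dt = delta_omega - 2 * K * sin(psi)
    Locking occurs when 2K >= delta_omega: psi settles to a constant."""
    delta_omega = 2 * np.pi * delta_f
    psi = 0.0
    for _ in range(int(t_end / dt)):
        psi += (delta_omega - 2 * coupling * np.sin(psi)) * dt
    return psi

# Two glasses tuned 0.5 Hz apart (delta_omega ~ 3.1 rad/s):
print(phase_drift(0.5, coupling=0.5))  # weak coupling: drifts by ~180 rad
print(phase_drift(0.5, coupling=5.0))  # strong coupling: locks near 0.3 rad
```

Raise the coupling (push the glasses together) or shrink the detuning (tune the water levels) and the drift vanishes: that’s the lock.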

Then watch as jaws drop around the dinner table.

(With apologies to Maya Angelou.)

Ref: arxiv.org/abs/0901.0656: Phase locking between two singing wineglasses

Entangled atoms could “sense” quantum gravity

Tuesday, December 23rd, 2008

The notion of quantum gravity has mystified many physicists, not least because there has never been a prospect of measuring the fabric of the universe on this scale. That looks set to change.

A few years back, a number of physicists suggested that atom interferometry might do the trick. The thinking was that two atoms sent on different routes of equal length through space would then be made to interfere.

If spacetime is smooth and neat, the atoms should produce a certain set of fringes. But if spacetime at the Planck scale is a maelstrom of quantum fluctuations, these would force the atoms to travel slightly different paths, and that difference would be picked up by the interferometer.

Sadly, it turns out that atom interferometers are nowhere near sensitive enough to detect these fluctuations and are unlikely to become so any time soon. The reason is that every three orders of magnitude of improvement in the sensitivity of the interferometer gives you only one order of magnitude of improvement in your ability to spot the fluctuations.
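
Put in symbols (our shorthand for the scaling just described, not a formula from the paper):

\[
\text{fluctuation signal} \;\propto\; \left(\text{interferometer sensitivity}\right)^{1/3},
\]

so a thousandfold better instrument buys you only a tenfold better measurement.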

Which is why an idea floated by Mark Everitt and pals at the University of Leeds looks interesting. They say that the scaling problem effectively disappears if you use entangled atoms instead of ordinary ones.

And the improvement is such that the effect of quantum gravity should be detectable with current quantum optics technology.

They stop short of making any predictions so let’s fill in the blanks for them: somebody with a decent quantum optics lab will spot the first evidence of quantum gravity in 2009. Betcha!

Ref: arxiv.org/abs/0812.3052: Dephasing of entangled atoms as an improved test of quantum gravity

2D image created from a single pixel sensor

Wednesday, December 17th, 2008

Ghost imaging is a curious phenomenon that has had numerous physicists scratching their heads in recent years.

It works like this: take two beams of entangled photons and aim the first at an object. The transmitted photons from the object are then collected by a single-pixel detector.

The second beam is aimed at a CCD array without ever having hit the object.

It turns out to be possible to reconstruct an image of the object, a so-called ghost image, by correlating the data from the two detectors, even though the single-pixel detector has no spatial resolution.

When this was first demonstrated in 1995, everybody was amazed by the strange power of quantum entanglement.

But later, various groups showed that entangled beams weren’t necessary at all and that ordinary light from a pseudothermal source would do the job just as well.

While interesting, that doesn’t actually rule out the possibility that the two beams are correlated in some entanglement-like quantum way.

So the question of whether quantum entanglement is responsible or not has remained open. Until now.

Yaron Silberberg and pals from the Weizmann Institute of Science in Israel have carried out an ingenious experiment that settles the matter.

They use only one beam, which illuminates the object, and collect the transmitted photons using a single-pixel detector. They then calculate theoretically what the second beam should look like and combine the single-pixel data with this “virtual beam”.

And get this: they still see a ghost image. That’s a 2D image from a single-pixel detector! And a pretty convincing demonstration that quantum entanglement cannot be responsible.
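
To get a feel for how a single-pixel detector plus computed patterns can yield a picture, here’s a toy numpy reconstruction (our own sketch with a made-up object and made-up speckle patterns; the experimental details differ). The ghost image is just the correlation between the bucket signal and each known pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

obj = np.zeros((16, 16))
obj[4:12, 7:9] = 1.0  # the "object": a simple transmissive bar

n = 20000
acc = np.zeros_like(obj)      # accumulates bucket * pattern
pat_sum = np.zeros_like(obj)  # accumulates the patterns themselves
bucket_sum = 0.0              # accumulates the bucket readings

for _ in range(n):
    pattern = rng.random(obj.shape)  # known ("virtual") speckle pattern
    bucket = np.sum(pattern * obj)   # single-pixel detector: one number
    acc += bucket * pattern
    pat_sum += pattern
    bucket_sum += bucket

# Ghost image = <bucket * pattern> - <bucket><pattern>
ghost = acc / n - (bucket_sum / n) * (pat_sum / n)
print("correlation with object:",
      round(np.corrcoef(ghost.ravel(), obj.ravel())[0, 1], 2))
```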

The question now is: what kind of classical information processing allows the reconstruction of a 2D image from a single-pixel sensor? That’s a real puzzle.

Ref: arxiv.org/abs/0812.2633: Ghost Imaging with a Single Detector

Solving stiction in MEMS devices

Monday, December 15th, 2008

Microelectromechanical devices were supposed to change the world, so where are they?

A few designs have leaked out, such as the accelerometers in air bags. But most have remained stubbornly, and literally, stuck in the lab.

One of the troubling secrets about MEMS is that many designs simply don’t work because their moving parts become stuck fast and refuse to budge.

Engineers call this “stiction”: a vaguely defined force that affects small parts but not large ones (where inertia plays a greater role in overcoming these forces). Stiction is thought to be caused variously by van der Waals forces, electrostatic forces, hydrogen bonding and even the Casimir force, perhaps in combination. That’s why it’s so hard to avoid.

Now Raul Esquivel-Sirvent and a buddy at the Universidad Nacional Autonoma de Mexico in Mexico City have a potential solution based on the acoustic Casimir force, an acoustic analogue of the more famous quantum effect that was discovered 10 years ago by Andres Larraza, then at the Naval Postgraduate School in Monterey.

Here’s the idea: place a couple of parallel plates close together and blast them with sound waves in a specific frequency range. If the wavelengths are larger than the gap between the plates, the waves tend to push the plates together; if they are smaller, they squeeze into the gap and tend to push the plates apart. So changing the wavelength, or the distance between the plates, switches the direction of the force.
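
The switching rule is simple enough to put in a few lines of code. A deliberately cartoonish sketch (our own toy with hypothetical drive tones; the paper computes the actual force):

```python
def acoustic_casimir_direction(wavelengths_m, gap_m):
    """Waves longer than the gap can't fit between the plates and push
    them together; shorter waves squeeze in and push them apart."""
    together = sum(1 for lam in wavelengths_m if lam > gap_m)
    apart = sum(1 for lam in wavelengths_m if lam <= gap_m)
    if together and not apart:
        return "attractive"
    if apart and not together:
        return "repulsive"
    return "mixed"

c_sound = 343.0                # speed of sound in air, m/s
drive_hz = [20e3, 40e3, 80e3]  # hypothetical ultrasonic band
wavelengths = [c_sound / f for f in drive_hz]  # ~17, 8.6 and 4.3 mm

print(acoustic_casimir_direction(wavelengths, gap_m=0.020))  # repulsive
print(acoustic_casimir_direction(wavelengths, gap_m=0.001))  # attractive
```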

That could be useful for microswitches in MEMS devices, says Esquivel-Sirvent. But more interestingly, it could also be used to separate microcomponents that have become stuck together. Perhaps the promise of MEMS will be realised at last.

Ref: arxiv.org/abs/0812.2213: Pull-in Control in Microswitches Using Acoustic Casimir Forces

Loop quantum cosmology: a brief overview

Wednesday, December 3rd, 2008

Abhay Ashtekar, a physicist at the Pennsylvania State University, is one of the founders of loop quantum cosmology and also a part-time populariser of science.

Today, he uses both of these attributes to produce a fascinating overview of loop quantum cosmology that non-specialists will find enlightening.

A recommended read.

Ref: arxiv.org/abs/0812.0177: Loop Quantum Cosmology: An Overview

Quantum test found for mathematical undecidability

Tuesday, December 2nd, 2008

It was the physicist Eugene Wigner who discussed the “unreasonable effectiveness of mathematics” in a now famous paper that examined the profound link between mathematics and physics.

Today, Anton Zeilinger and pals at the University of Vienna in Austria reveal this link at its deepest. Their experiment involves the issue of mathematical decidability.

First, some background about axioms and propositions. The group explains that any formal logical system must be based on axioms, which are propositions that are defined to be true. A proposition is logically independent from a given set of axioms if it can neither be proved nor disproved from the axioms.

They then move on to the notion of undecidability. Mathematically undecidable propositions contain entirely new information which cannot be reduced to the information in the axioms. And given a set of axioms that contains a certain amount of information, it is impossible to deduce the truth value of a proposition which, together with the axioms, contains more information than the set of axioms itself.

These notions gave Zeilinger and co an idea: why not encode a set of axioms as quantum states? A particular measurement on this system can then be thought of as a proposition. The researchers say that whenever a proposition is undecidable, the measurement should give a random result.
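
To get a feel for the idea, here’s a toy single-qubit cartoon (ours, not the Vienna group’s actual multi-qubit scheme): the axiom is a definite spin state, a measurement along the same axis is a decidable proposition, and a measurement along an independent axis comes out random.

```python
import numpy as np

rng = np.random.default_rng(1)
state = np.array([1.0, 0.0])  # |0>: the "axiom" (spin-z is up)

def measure(state, basis):
    """Project the state onto a measurement basis and sample an outcome."""
    probs = np.abs(basis @ state) ** 2
    return rng.choice([+1, -1], p=probs)

z_basis = np.array([[1.0, 0.0], [0.0, 1.0]])                # spin-z up?
x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # spin-x up?

# Decidable proposition: the outcome is always +1.
print([measure(state, z_basis) for _ in range(10)])
# Logically independent proposition: the outcomes are 50/50 random.
print([measure(state, x_basis) for _ in range(10)])
```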

They’ve even tested the idea and say they’ve shown the undecidability of certain propositions because they generate random results.

Good stuff and it raises some interesting issues.

Let’s leave aside the problem of determining whether the result of a particular measurement is truly random or not and take at face value the group’s claim that “this sheds new light on the (mathematical) origin of quantum randomness in these measurements”.

There’s no question that what Zeilinger and co have done is fascinating and important. But isn’t the fact that a quantum system behaves in a logically consistent way exactly what you’d expect?

And if so, is it reasonable to decide that, far from being fantastically profound, Zeilinger’s experiment is actually utterly trivial?

Ref: arxiv.org/abs/0811.4542: Mathematical Undecidability and Quantum Randomness

Steganophony–when internet telephony meets steganography

Friday, November 28th, 2008

Steganophony is the term coined by Wojciech Mazurczyk and Józef Lubacz at the Warsaw University of Technology in Poland to describe the practice of hiding messages in internet telephony traffic (presumably the word is an amalgamation of the terms steganography and telephony).

The growing interest in this area is fueled by the fear that terrorist groups may be able to use services such as Skype to send messages secretly by embedding them in the data stream of internet telephony. At least that’s what Mazurczyk and Lubacz tell us.

The pair has developed a method for doing exactly that, called Lost Audio PaCKets steganography or LACK, and outlines it on the arXiv today.

LACK exploits a feature of internet telephony systems: they ignore data packets that are delayed by more than a certain time. LACK plucks data packets out of the stream, replaces the information they contain with the hidden message and then sends them on after a suitable delay. An ordinary receiver simply ignores these packets because they arrive too late, but the intended receiver collects them and extracts the information they contain.

That makes LACK rather tricky to detect, since dropped packets are a natural feature of internet traffic.
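
Here’s a toy simulation of the delay trick (our own sketch with made-up numbers; the real scheme hides binary data in voice packets):

```python
import random

random.seed(0)
DROP_THRESHOLD_MS = 150  # packets later than this are discarded by the codec

stream = [{"seq": i, "payload": f"voice-{i}", "delay_ms": random.randint(5, 60)}
          for i in range(20)]

# Sender: hijack a few packets, swap in hidden data and delay them past the
# threshold so that ordinary receivers throw them away.
secret = ["H", "I"]
for packet, symbol in zip(stream[3::7], secret):
    packet["payload"] = symbol
    packet["delay_ms"] = DROP_THRESHOLD_MS + 50

# Ordinary receiver: keeps only on-time packets (slightly degraded audio).
audio = [p["payload"] for p in stream if p["delay_ms"] <= DROP_THRESHOLD_MS]

# Covert receiver: collects exactly the "lost" packets, reads the message.
message = "".join(p["payload"] for p in stream
                  if p["delay_ms"] > DROP_THRESHOLD_MS)
print(message)  # -> HI
```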

But is this really an area driven by the threat of terrorism? If anybody really wants to keep messages secret then there are plenty of easier ways to do it, such as Pretty Good Privacy.

There’s a far more powerful driver for this kind of work. Its name? Paranoia.

Ref: arxiv.org/abs/0811.4138: LACK – a VoIP Steganographic Method

Anonymizing data without damaging it

Thursday, November 6th, 2008

If scientists are to study massive datasets such as mobile phone records, search queries and movie ratings, the owners of these datasets need to find a way to anonymize the data before releasing it.

The high-profile cracking of datasets such as the Netflix Prize dataset and the AOL search query dataset means that people would be wise not to trust these kinds of releases until the anonymization problem has been solved.

The general approach to anonymization is to change the data in some significant but subtle way to ensure that no individual is identifiable as a result. One way of doing this is to ensure that every record in the set is identical to at least one other record.

That’s sensible but not always easy, point out Rajeev Motwani and Shubha Nabar at Stanford University in Palo Alto. For example, a set of search queries can be huge, covering the search habits of millions of people over many months. The variety of searches people make over such a period makes it hard to imagine that two entries would be identical. And analyzing and changing such a huge dataset in a reasonable period of time is tricky too.

Motwani and Nabar make a number of suggestions. Why not break the dataset into smaller, more manageable clusters, they say? And why not widen the criteria for what it means to be identical, so that similar searches can be replaced with a single broader term? For example, a search for “organic milk” might be replaced with a search for “dairy product”. These ideas seem eminently sensible.
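
Here’s a minimal sketch of the generalization step (our own toy, with a hypothetical taxonomy and an anonymity threshold K of our choosing; the paper’s clustering is more sophisticated):

```python
from collections import Counter

K = 2  # every released query should appear at least K times

generalize = {  # hypothetical taxonomy
    "organic milk": "dairy product",
    "goat cheese": "dairy product",
    "skimmed milk": "dairy product",
}

queries = ["organic milk", "goat cheese", "weather", "weather", "skimmed milk"]
counts = Counter(queries)

released = [q if counts[q] >= K else generalize.get(q, "<suppressed>")
            for q in queries]
print(released)
# -> ['dairy product', 'dairy product', 'weather', 'weather', 'dairy product']
```

(A real implementation would re-check the counts after generalizing, but the idea is the same: rare, identifying queries get blurred into common ones.)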

The problem becomes even more difficult when the data is in graph form, as it might be for mobile phone records or web chat statistics. So Nabar suggests a similar anonymizing technique: ensure that every node on the graph shares some number of its neighbors with a certain number of other nodes.
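
A minimal sketch of that neighbor-sharing check (our reading of the idea, with parameters m and k of our choosing; the paper’s exact definition may differ):

```python
def is_anonymous(adj, m=1, k=1):
    """adj maps each node to the set of its neighbours. Require that every
    node shares at least m neighbours with at least k other nodes, so that
    no node's connection pattern is unique."""
    for node, nbrs in adj.items():
        lookalikes = sum(1 for other, other_nbrs in adj.items()
                         if other != node and len(nbrs & other_nbrs) >= m)
        if lookalikes < k:
            return False
    return True

# A toy call graph: subscribers 0 and 1 both call 2 and 3, so every node
# has at least one lookalike.
calls = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
print(is_anonymous(calls))  # -> True
```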

The trouble is that the anonymization technique can destroy the very patterns that you are looking for in the data, for example in the way mobile phones are used. And at present, there’s no way of knowing what has been lost.

So what these guys need to do next is find some way to measure the data loss their proposed changes cause, to give us a sense of how much damage is being done to the dataset during anonymization.

In the meantime, dataset owners should show some caution over how, why and to whom they release their data.

Refs:

arxiv.org/abs/0810.5582: Anonymizing Unstructured Data

arxiv.org/abs/0810.5578: Anonymizing Graphs

Cloaking objects at a distance

Wednesday, November 5th, 2008

One of the disadvantages of invisibility cloaks is that anything placed inside one is automatically blinded, since no light can get in.

Now Yun Lai and colleagues from The Hong Kong University of Science and Technology have come up with a way round this using the remarkable idea of cloaking at a distance. This involves using a “complementary material” to hide an object outside it.

Here’s the idea: complementary materials are designed to have a permittivity and permeability that are complementary to the values in a nearby region of space. “Complementary” means that the values cancel out the effect that this region of space has on a plane light wave passing through it. To an observer, that region of space simply vanishes.
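
In symbols (the standard complementary-media prescription in our notation, not necessarily the paper’s exact construction), a slab occupying \(-d < x < 0\) cancels the adjacent region \(0 < x < d\) when

\[
\varepsilon'(x) = -\varepsilon(-x), \qquad \mu'(x) = -\mu(-x),
\]

so a wave emerges from the pair as if neither region were there.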

Cloaking a region of space is relatively straightforward but cloaking an object in that space is another matter. Lai and co say the trick is to work out the optical properties of the object and then embed its “complementary image” within the cloaking material. So a plane wave would be bent by the object but then bent back into a plane as it passes through the cloaking material.

Et voilà: cloaking at a distance. And in a way that doesn’t leave the cloaked object blind.

Of course, creating the complementary materials necessary to pull off this trick is another matter. And the usual caveats apply: it works only at a single frequency and, for now, only in 2D. But cloaking, in theory at least, is looking more interesting by the day.

Ref: arxiv.org/abs/0811.0458: A Complementary Media Invisibility Cloak that can Cloak Objects at a Distance Outside the Cloaking Shell

Breakthrough calculations on the capacity of a steganographic channel

Tuesday, November 4th, 2008

Steganography is the art of hiding a message in such a way that only the sender and receiver realise it is there. (By contrast, cryptography disguises the content of a message but makes no attempt to hide it.)

The central problem for steganographers is how much data can be hidden without being detected. But the complexity of the problem means it has been largely ignored in favor of more easily solved conundrums.

Jeremiah Harmsen from Google Inc in Mountain View and William Pearlman at Rensselaer Polytechnic Institute in Troy NY, say: “while false alarms and missed signals have rightfully dominated the steganalysis literature, very little is known about the amount of information that can be sent past these algorithms.”

So the pair has taken an important step to change that. Their approach is to think along the same lines as Claude Shannon in his famous determination of the capacity of a noisy channel. In Shannon’s theory, a transmission is considered successful if the decoder properly determines which message the encoder has sent. In the stego-channel, a transmission is successful if the decoder properly determines the sent message without anybody else detecting its presence.

Previous attempts have all placed limits on the steganographer’s channel, for example by stipulating that the hidden data, or stego-channel, has the same distribution as the cover channel. But Harmsen and Pearlman have taken a more general approach, which takes some important steps towards working out the channel capacity over a much wider range of conditions.
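
Schematically (our formulation, by analogy with Shannon, rather than anything lifted from the paper), the stego-capacity is an ordinary channel capacity with a covertness constraint bolted on:

\[
C_{\text{stego}} = \max_{p(x)} I(X;Y)
\quad \text{subject to} \quad
D\big(p_{\text{stego}} \,\|\, p_{\text{cover}}\big) \le \epsilon,
\]

where \(D\) is some divergence measuring how distinguishable the doctored traffic is from innocent cover traffic.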

The results are interesting and in some cases counterintuitive (for example, adding noise to a channel can increase its steganographic capacity, and in some cases mounting two attacks on a channel instead of one can do the same).

It’s fair to say that Harmsen and Pearlman are pioneers of the study of steganographic capacity and that, with this breakthrough, the field looks rich with low-hanging fruit. Expect more!

Ref: arxiv.org/abs/0810.4171: Capacity of Steganographic Channels