Archive for the ‘Calculatin'’ Category

Quantum test found for mathematical undecidability

Tuesday, December 2nd, 2008


It was the physicist Eugene Wigner who discussed the “unreasonable effectiveness of mathematics” in a now famous paper that examined the profound link between mathematics and physics.

Today, Anton Zeilinger and pals at the University of Vienna in Austria probe this link at its deepest level. Their experiment tackles the issue of mathematical decidability.

First, some background about axioms and propositions. The group explains that any formal logical system must be based on axioms, which are propositions that are defined to be true. A proposition is logically independent from a given set of axioms if it can neither be proved nor disproved from the axioms.

They then move on to the notion of undecidability. Mathematically undecidable propositions contain entirely new information which cannot be reduced to the information in the axioms. And given a set of axioms that contains a certain amount of information, it is impossible to deduce the truth value of a proposition which, together with the axioms, contains more information than the set of axioms itself.
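
To make logical independence concrete, here’s a toy sketch of our own (not the group’s formal machinery) that treats a one-bit Boolean function as the object the axioms describe and brute-forces whether a proposition’s truth value is pinned down by them.

```python
from itertools import product

# A one-bit Boolean function f: {0,1} -> {0,1} is just the pair (f(0), f(1)).
# Axioms and propositions are predicates on such functions.
FUNCTIONS = list(product([0, 1], repeat=2))

def status(axioms, proposition):
    """Return 'true', 'false' or 'undecidable' for a proposition, given axioms."""
    models = [f for f in FUNCTIONS if all(axiom(f) for axiom in axioms)]
    values = {proposition(f) for f in models}
    if values == {True}:
        return "true"
    if values == {False}:
        return "false"
    return "undecidable"  # neither provable nor refutable from the axioms

axioms = [lambda f: f[0] == 0]                    # axiom: f(0) = 0

print(status(axioms, lambda f: f[0] == 1))        # 'false' -- decided by the axiom
print(status(axioms, lambda f: f[0] == f[1]))     # 'undecidable' -- needs info about f(1)
```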

These notions gave Zeilinger and co an idea: why not encode a set of axioms as quantum states? A particular measurement on this system can then be thought of as a proposition. The researchers say that whenever a proposition is undecidable, the measurement should give a random result.
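
And here’s a simplified stand-in for the quantum side, with an encoding of our own choosing rather than the group’s: take “the Z measurement gives +1” as the axiom by preparing a qubit in the corresponding state. Measuring Z is then a decidable proposition and comes out the same every time, while measuring X is independent of the axiom and comes out random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Qubit prepared in the +1 eigenstate of Z: the "axiom" fixes the Z outcome.
state = np.array([1.0, 0.0], dtype=complex)

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def measure(state, observable, shots=10):
    """Simulate projective measurements (+1/-1 outcomes) of a 2x2 observable."""
    vals, vecs = np.linalg.eigh(observable)
    probs = np.abs(vecs.conj().T @ state) ** 2
    probs /= probs.sum()
    return rng.choice(np.round(vals).astype(int), size=shots, p=probs)

print(measure(state, Z))  # decidable proposition: always +1
print(measure(state, X))  # independent proposition: random +1/-1 outcomes
```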

They’ve even tested the idea and say they’ve demonstrated the undecidability of certain propositions, because the corresponding measurements generate random results.

Good stuff and it raises some interesting issues.

Let’s leave aside the problem of determining whether the result of a particular measurement is truly random or not and take at face value the group’s claim that “this sheds new light on the (mathematical) origin of quantum randomness in these measurements”.

There’s no question that what Zeilinger and co have done is fascinating and important. But isn’t the fact that a quantum system behaves in a logically consistent way exactly what you’d expect?

And if so, is it reasonable to decide that, far from being fantastically profound, Zeilinger’s experiment is actually utterly trivial?

Ref: http://arxiv.org/abs/0811.4542: Mathematical Undecidability and Quantum Randomness

A clue in the puzzle of perfect synchronization in the brain

Thursday, November 27th, 2008


“Two identical chaotic systems starting from almost identical initial states, end in completely uncorrelated trajectories. On the other hand, chaotic systems which are mutually coupled by some of their internal variables often synchronize to a collective dynamical behavior,” write Meital Zigzag at Bar-Ilan University in Israel and colleagues on the arXiv today.

And perhaps the most fascinating of these synchronized systems are those that show zero lag, that is, those that are perfectly in sync. For example, in widely separated regions of the brain, zero lag synchronization of neural activity seems to be an important feature of the way we think.

This type of synchronization also turns out to be an important feature of chaotic communication. This is the process by which information can be hidden in the evolution of a chaotic attractor and retrieved by subtracting the same chaotic background to reveal the original message.

Obviously, this only works when the transmitter and receiver are coupled so that they evolve in exactly the same way. For a long time, physicists have wondered whether this effect can be used to send data securely, and earlier this year they proved that the security can only be guaranteed if the synchronisation has zero lag.
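
Here’s a minimal sketch of the masking idea, assuming for simplicity that the receiver already holds a perfectly synchronised copy of the transmitter’s chaotic signal (in a real scheme that copy comes from the coupling itself).

```python
import numpy as np

def chaotic_carrier(x0, n, r=3.99):
    """Chaotic signal from the logistic map x -> r*x*(1-x)."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

n = 500
message = 0.001 * np.sign(np.sin(np.linspace(0, 20 * np.pi, n)))  # tiny +/- signal

transmitted = chaotic_carrier(0.37, n) + message      # message buried in the chaos
recovered = transmitted - chaotic_carrier(0.37, n)    # receiver subtracts its synced copy

print(np.allclose(recovered, message))                # True: the message reappears
```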

But how does zero lag occur and under what range of conditions?

Zero lag seems to occur when the delays in the mutual coupling and self feedback between two systems act to keep them in step. In effect, both systems lag but by exactly the same amount.

Until recently, this was thought to occur only for a very small subset of parameters in which the delays are identical or have a certain ratio. But these conditions are so exacting that it’s hard to imagine a wet system such as the brain ever meeting them.

Now Zigzag and friends have shown that it is possible to get around these strict limits by having more than one type of feedback between the systems. When that happens, it’s possible to have zero lag synchronisation over a much wider set of parameters.
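
For a flavour of how delayed self-feedback plus mutual coupling can hold two chaotic units in lockstep, here’s a toy simulation using Bernoulli maps. The coupling scheme and parameters are illustrative choices of our own, not the configuration Zigzag and co analyse, and whether the units lock depends on those choices.

```python
import numpy as np

a, eps, kappa, tau, steps = 1.5, 0.8, 0.5, 10, 3000   # illustrative parameters

def f(x):
    return (a * x) % 1.0                              # chaotic Bernoulli map

rng = np.random.default_rng(1)
x = np.empty(tau + 1 + steps)
y = np.empty(tau + 1 + steps)
x[: tau + 1] = rng.random(tau + 1)                    # independent random histories
y[: tau + 1] = rng.random(tau + 1)

for t in range(tau, tau + steps):
    # each unit mixes its instantaneous dynamics with delayed self-feedback
    # and delayed input from its partner (the same delay tau on both paths)
    x[t + 1] = (1 - eps) * f(x[t]) + eps * (kappa * f(x[t - tau]) + (1 - kappa) * f(y[t - tau]))
    y[t + 1] = (1 - eps) * f(y[t]) + eps * (kappa * f(y[t - tau]) + (1 - kappa) * f(x[t - tau]))

print(np.max(np.abs(x[-200:] - y[-200:])))            # essentially zero: zero-lag sync
```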

That’s going to have important implications for our understanding of synchronisation in the brain and for the development of secure chaotic communication. Betcha!

Ref: arxiv.org/abs/0811.4066: Emergence of Zero-Lag Synchronization in Generic Mutually Coupled Chaotic Systems

Triggering a phase change in wealth distribution

Tuesday, November 11th, 2008


Wealth distribution in the western world follows a curious pattern. For 95 per cent of the population, it follows a Boltzmann-Gibbs (exponential) distribution, in other words a straight line on a log-linear scale. For the top 5 per cent, however, wealth follows a Pareto (power-law) distribution, a straight line on a log-log scale, which is a far less equitable way of apportioning wealth.

Nobody really understands how this arrangement comes about but Javier Gonzalez-Estevez from the Universidad Nacional Experimental del Tachira in Venezuela and colleagues think they can throw some light on the problem.

They have created an agent-based model in which each agent’s “wealth” evolves according to the way it interacts with its neighbours. Gonzalez-Estevez and co say that a simple model of this kind accurately reproduces the combination of Boltzmann-Gibbs and Pareto distributions seen in real economies.
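
Their model is deterministic and neighbour-based, and we won’t try to reproduce it here. But the flavour of agent-based wealth modelling is easy to sketch: the toy below is a standard random-exchange economy of our own choosing (not the paper’s model), in which pairwise trades that conserve total wealth relax towards a Boltzmann-Gibbs-like bulk.

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_trades = 1000, 200_000
wealth = np.ones(n_agents)                   # everyone starts with the same wealth

for _ in range(n_trades):
    i, j = rng.integers(n_agents, size=2)    # two randomly chosen trading partners
    if i == j:
        continue
    pot = wealth[i] + wealth[j]
    split = rng.random()                     # re-divide their combined wealth at random
    wealth[i], wealth[j] = split * pot, (1 - split) * pot

# conserved random exchange relaxes towards an exponential (Boltzmann-Gibbs) distribution
counts, edges = np.histogram(wealth, bins=30)
print(counts)
```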

But get this. The team says: “it is possible to bring the system from a particular wealth distribution to another one with a lower inequality by performing only a small change in the system configuration”. That’s an intriguing possibility.

In their latest work they say that it is possible to switch between Pareto and Boltzmann-Gibbs distributions, simply by increasing the number of neighbours each agent has.

In other words, this triggers a phase change in which wealth suddenly becomes more equally distributed (or vice versa).

That’s going to be a fascinating area for econophysicists to explore. Economists have always assumed that changing the distribution of wealth means tax collection and redistribution.

Now there’s a whole new way in which it might be approached. Gonzalez-Estevez and team make no suggestion as to how it might be done in real life economies but you can be sure that more than a few econophysicists will be thinking about how to trigger these kinds of phase changes for real.

Taxes as a way of redistributing wealth could become a thing of the past. But it will be as well to remember that not everyone wants to make wealth distribution fairer.

Ref: arxiv.org/abs/0811.1064: Transition from Pareto to Boltzmann-Gibbs Behavior in a Deterministic Economic Model

Predicting the popularity of online content

Monday, November 10th, 2008


The page views for entries on this site in the last week range from more than 17,000 for this story to around 100 for this one.

That just goes to show that when you post a blog entry, there’s no way of knowing how popular it will become. Right?

Not according to Gabor Szabo and Bernardo Huberman at HP Labs in Palo Alto, who reckon they can accurately forecast a story’s page views a month in advance by analysing its popularity during its first two hours on Digg.

They say a similar prediction can be made for YouTube postings except these need to be measured for 10 days before a similarly accurate forecast can be made. (That’s almost certainly because Digg stories quickly become outdated while YouTube videos are still found long after they have been submitted.)

That’s not so astounding if all (or at least most) content has a similar long tail-type viewing distribution. Measuring part of this distribution automatically tells you how the rest is distributed.
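
Here’s a rough sketch of that logic, in the spirit of Szabo and Huberman’s log-linear extrapolation but with synthetic numbers and fitting details that are entirely our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic "training" stories: views after 2 hours and after 30 days, related by a
# roughly constant multiplicative growth factor plus log-normal noise
early = rng.lognormal(mean=4.0, sigma=1.0, size=500)
late = early * np.exp(rng.normal(loc=np.log(20.0), scale=0.3, size=500))

log_ratio = np.mean(np.log(late) - np.log(early))     # fitted average log growth

def predict_final_views(early_views):
    """Extrapolate long-term views from early views via the fitted log ratio."""
    return early_views * np.exp(log_ratio)

print(predict_final_views(np.array([100.0, 1000.0, 17000.0])))
```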

But actually proving this experimentally is more impressive. In principle, it gives hosts a way of allocating resources such as bandwidth well in advance, which could be useful, especially if you can charge in advance too.

Ref: arxiv.org/abs/0811.0405: Predicting the Popularity of Online Content

Breakthrough calculations on the capacity of a steganographic channel

Tuesday, November 4th, 2008


Steganography is the art of hiding a message in such a way that only the sender and receiver realise it is there. (By contrast, cryptography disguises the content of a message but makes no attempt to hide it.)

The central problem for steganographers is how much data can be hidden without being detected. But the complexity of the problem means it has been largely ignored in favor of more easily solved conundrums.

Jeremiah Harmsen from Google Inc in Mountain View and William Pearlman at Rensselaer Polytechnic Institute in Troy, NY, say: “while false alarms and missed signals have rightfully dominated the steganalysis literature, very little is known about the amount of information that can be sent past these algorithms.”

So the pair have taken an important step to change that. Their approach is to think along the same lines as Claude Shannon in his famous determination of the capacity of a noisy channel. In Shannon’s theory, a transmission is considered successful if the decoder properly determines which message the encoder has sent. In the stego-channel, a transmission is successful if the decoder properly determines the sent message without anybody else detecting its presence.
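
For comparison, here’s what the ordinary Shannon calculation looks like in the simplest case, a binary symmetric channel. This is textbook channel capacity, not the stego-channel capacity Harmsen and Pearlman derive.

```python
import numpy as np

def binary_entropy(p):
    """H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_capacity(flip_prob):
    """Shannon capacity, in bits per use, of a binary symmetric channel."""
    return 1.0 - binary_entropy(flip_prob)

for p in (0.0, 0.05, 0.11, 0.5):
    print(f"flip probability {p:.2f}: capacity {bsc_capacity(p):.3f} bits/use")
```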

Previous attempts have all placed limits on the steganographer’s channel, for example by stipulating that the hidden data, or stego-channel, has the same distribution as the cover channel. But Harmsen and Pearlman have taken a more general approach, which takes some important steps towards working out the channel capacity over a much wider range of conditions.

The results are interesting and in some cases counter-intuitive (for example, adding noise to a channel can increase its steganographic capacity and, in some cases, mounting two attacks on a channel instead of one can do the same).

It’s fair to say that Harmsen and Pearlman are pioneering the study of steganographic capacity and that, with this breakthrough, the field looks rich with low-hanging fruit. Expect more!

Ref: arxiv.org/abs/0810.4171: Capacity of Steganographic Channels

The trouble with traffic at intersections

Wednesday, October 1st, 2008


These rather beautiful graphs are space-time plots of vehicles approaching, entering and then leaving an intersection controlled by traffic lights.

The plots were calculated using cellular automata to model the behavior of the vehicles. The upper plot shows the pattern of traffic at a light with a fixed schedule; the lower one shows what happens when the lights respond to the weight of traffic. The plots have a certain symmetry and many similar characteristics (in fact, one seems to be a larger version of the other).
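
For anyone who wants to play along at home, here’s a minimal Nagel-Schreckenberg-style cellular automaton with a fixed-schedule light. It’s a generic sketch of the approach, with made-up parameters, and not necessarily the authors’ exact model.

```python
import numpy as np

rng = np.random.default_rng(3)
L, vmax, p_slow = 200, 5, 0.3
light_pos, green, period = 100, 30, 60       # light at cell 100, green 30 of every 60 steps

road = np.full(L, -1)                        # -1 = empty cell, otherwise the car's speed
road[rng.choice(L // 2, size=20, replace=False)] = 0   # 20 stationary cars upstream

for t in range(300):
    red = (t % period) >= green              # fixed-schedule light
    cells = np.flatnonzero(road >= 0)        # sorted car positions
    new_road = np.full(L, -1)
    for i in cells:
        v = min(road[i] + 1, vmax)           # accelerate towards the speed limit
        ahead = cells[cells > i]
        gap = (ahead[0] - i - 1) if len(ahead) else L   # open boundary at the far end
        if red and i < light_pos:
            gap = min(gap, light_pos - i - 1)  # a red light acts like a parked car
        v = min(v, gap)                      # brake to avoid the car (or light) ahead
        if v > 0 and rng.random() < p_slow:  # random slowdown, the NaSch ingredient
            v -= 1
        if i + v < L:
            new_road[i + v] = v              # move; cars that pass the end just leave
    if new_road[0] < 0 and rng.random() < 0.3:
        new_road[0] = 0                      # feed fresh cars in at the entrance
    road = new_road

print(np.flatnonzero(road >= 0))             # car positions after 300 time steps
```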

Ebrahim Foulaadvand, of Zanjan University and the Institute for Studies in Theoretical Physics and Mathematics in Tehran, Iran, and a pal say that the work needs to be calibrated with data from real traffic lights to determine the braking and acceleration characteristics of real cars.

You can say that again. These plots seem rather improbable given that traffic flow is well known to demonstrate phase changes as the density of vehicles increases. They don’t seem to have toyed with that problem yet.

Looks like Foulaadvand and co have some work to do if they want their model to match the behavior of real road traffic.

Ref: arxiv.org/abs/0809.3591: Simulation of Traffic Flow at a Signalised Intersection

Forget black holes, could the LHC trigger a “Bose supernova”?

Monday, September 29th, 2008


The fellas at CERN have gone to great lengths to reassure us all that they won’t destroy the planet (who says physicists are cold hearted?).

The worry was that the collision of particles at the LHC’s high energies could create a black hole that would swallow the planet. We appear to be safe on that score but it turns out there’s another way in which some people think the LHC could cause a major explosion.

The worry this time is about Bose Einstein Condensates, lumps of matter so cold that their constituents occupy the lowest possible quantum state.

Physicists have been fiddling with BECs since the early 1990s and have become quite good at manipulating them with magnetic fields.

One thing they’ve found is that it is possible to switch the force between atoms in certain kinds of BECs from positive to negative and back using a magnetic field, a phenomenon known as a Feshbach resonance.

But get this: in 2001, Elizabeth Donley and buddies at JILA in Boulder, Colorado, caused a BEC to explode by switching the forces in this way. These explosions have since become known as Bose supernovas.

Nobody is exactly sure how these explosions proceed, which is a tad worrying for the following reason: some clever clogs has pointed out that superfluid helium is a BEC and that the LHC is swimming in 700,000 litres of the stuff. Not only that, but the entire thing is bathed in some of the most powerful magnetic fields on the planet.

So is the LHC a timebomb waiting to go off? Not according to Malcolm Fairbairn and Bob McElrath at CERN who have filled the back of a few envelopes in calculating that we’re still safe. To be doubly sure, they also checked that no other superfluid helium facilities have mysteriously blown themselves to kingdom come.

“We conclude that there is no physics whatsoever which suggests that Helium could undergo any kind of unforeseen catastrophic explosion,” they say.

That’s comforting and impressive. Ruling out foreseen catastrophes is certainly useful but the ability to rule out unforeseen ones is truly amazing.

Ref: arxiv.org/abs/0809.4004: There is no Explosion Risk Associated with Superfluid Helium in the LHC Cooling System

Why spontaneous traffic jams are like detonation waves

Wednesday, September 24th, 2008


We’re all familiar with phantom jams, traffic blockages that arise with no apparent cause and that melt away for no discernible reason.

Today Ruben Rosales and pals at MIT and the University of Alberta in Canada coin a new term for the waves that cause these hold ups: they call them jamitons.

And jamitons turn out to have an interesting property: they are self-sustained disturbances consisting of a shock matched to vehicle speed.

If that sounds familiar, it’s because you’re reminded of the way in which certain types of transonic disturbances can be self-sustaining. In the world of chemists, these are known as detonation waves.

Rosales and co say jamitons and detonation waves are mathematical analogues.

That sounds interesting and useful and perhaps one day it will be. But you wouldn’t guess it from this paper.

Rosales and friends are unable to run with their analogy in any useful way. They say that the existence of jamitons in traffic flow is an indication that dangerous vehicle concentrations may occur (no, really?).

And they conclude: “such situations may be avoided by judicious selection of speed limits, carrying capacities, etc” (Wow!)

In other words, spontaneous traffic jams may cause avoidable crashes. Nothing gets past these guys.

Ref: arxiv.org/abs/0809.2828: On “Jamitons,” Self-Sustained Nonlinear Traffic Waves

Flyby anomalies explained by special relativity

Thursday, September 18th, 2008


On 23 January 1998, when NASA’s NEAR spacecraft swung past Earth on a routine flyby towards more interesting lands, a curious thing happened to its speed. It jumped by 13 mm/s.

This wasn’t the first time such an effect had been seen. Engineers saw similar jumps in speed during the Earth flybys of Galileo (in 1990 and 1992), Cassini (in 1999), Messenger (in 2005) and Rosetta (also in 2005).

Various exotic explanations have been put forward but today it looks as if the explanation is far more prosaic. Jean Paul Mbelek from CEA-Saclay near Paris, France, says special relativity explains all.

The speed of the spacecraft is measured by the Doppler shift in radio signals from the craft. That makes the speed easy to calculate.

But Mbelek’s argument is that the relative motion of the spacecraft and the Earth (which is spinning) has not been properly accounted for. And when it is factored in, using special relativity, the flyby anomalies disappear.
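
To get a feel for why relativistic terms matter at this level of precision, here’s a back-of-the-envelope sketch of our own (not Mbelek’s calculation): for a craft receding at roughly 10 km/s, inverting the Doppler shift with the first-order, non-relativistic formula rather than the full relativistic one shifts the inferred speed by about v²/2c, far larger than the 13 mm/s anomaly.

```python
import numpy as np

c = 299_792_458.0                     # speed of light, m/s
v = 10_000.0                          # typical flyby speed along the line of sight, m/s
beta = v / c

# exact (special-relativistic) one-way Doppler ratio for a receding source
ratio = np.sqrt((1 - beta) / (1 + beta))

# invert the shift with the first-order (non-relativistic) formula ratio ~ 1 - v/c
v_naive = (1 - ratio) * c

print(f"true speed        : {v:.6f} m/s")
print(f"first-order invert: {v_naive:.6f} m/s")
print(f"difference        : {(v - v_naive) * 1000:.1f} mm/s")   # about 167 mm/s, >> 13 mm/s
```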

Doh!

Ref: arxiv.org/abs/0809.1888: Special Relativity May Account for the Spacecraft Flyby Anomalies

How big is a city?

Wednesday, August 20th, 2008


That’s not as silly a question as it sounds. Defining the size of a city is a tricky task that has major economic implications: how much should you invest in a city if you don’t know how many people live and work there?

The standard definition is the Metropolitan Statistical Area, which attempts to capture the notion of a city as a functional economic region and requires a detailed subjective knowledge of the area before it can be calculated. The US Census Bureau has an ongoing project dedicated to keeping abreast of the way this one metric changes for cities across the continent.

Clearly that’s far from ideal. So our old friend Eugene Stanley from Boston University and a few pals have come up with a better measure called the City Clustering Algorithm. This divides an area up into a grid of a specific resolution, counts the number of people within each square and looks for clusters of populations within the grid. This allows a city to be defined in a way that does not depend on its administrative boundaries.
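
Here’s a bare-bones sketch of the clustering step as we read it, with made-up thresholds and a toy population grid rather than the coarse-graining recipe in the paper.

```python
import numpy as np
from scipy import ndimage

def city_clusters(pop_grid, min_pop=1000):
    """Group populated grid cells into connected clusters and sum their populations.

    pop_grid: 2-D array of population counts per cell at some chosen grid resolution.
    Returns cluster populations, largest first.
    """
    occupied = pop_grid >= min_pop                 # keep only sufficiently populated cells
    labels, n = ndimage.label(occupied)            # 4-connected components
    totals = ndimage.sum(pop_grid, labels, index=range(1, n + 1))
    return sorted(totals, reverse=True)

# toy population grid: two dense blobs separated by empty countryside
grid = np.zeros((50, 50))
grid[5:15, 5:15] = 5000
grid[30:45, 30:45] = 2000
print(city_clusters(grid))   # -> [500000.0, 450000.0]
```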

That has significant implications because clusters depend on the scale on which you view them. For example, a 1 kilometre grid sees New York City’s population as a cluster of 7 million, a 4 kilometre grid makes it 17 million and the cluster identified with an 8 kilometre grid, which encompasses Boston and Philadelphia, has a population of 42 million. Take your pick.

The advantage is that this gives a more or less objective way to define a city. It also means we’ll need to reanalyse some of the fundamental properties that we ascribe to city growth. For example, the group has studied only a limited number of cities in the US, UK and Africa but already says we’ll need to rethink Gibrat’s law, which states that a city’s growth rate is independent of its size.

Come to think of it, Gibrat’s is a kind of weird law anyway. Which means there may be some low hanging fruit for anybody else who wants to re-examine the nature of cities.

Ref: arxiv.org/abs/0808.2202: Laws of Population Growth