## Archive for December, 2008

### Modelling the spread of HIV

Tuesday, December 9th, 2008

Modelling the spread of HIV is a difficult business for many reasons: many people are unaware that they are infected, HIV can take a very long time to manifest itself within the body, and researchers are still unsure to what extent different population groups are involved in its transmission.

So it’s remarkable that Shan Mei at the University of Amsterdam and colleagues have been able to model the HIV epidemic among the relatively small group of gay men in Amsterdam.

The trick, they say, is to combine two approaches to modelling: first, create an agent-based model of human behaviour, and then create a realistic model of the network that links real people together in Amsterdam.

Mei and friends say they’ve done this for the gay population in Amsterdam and that their model shows a “good correspondence between the historical data of the Amsterdam cohort and the simulation results”.

They even say the model can predict the future trend of HIV prevalence in Amsterdam. That’s a much bigger claim, which needs to be matched with evidence that is sadly lacking from this paper.

Mei and co demonstrate exactly what confidence they have in this claim by making no meaningful prediction whatsoever. Strange.

Ref: arxiv.org/abs/0812.1155 : Complex Agent Networks Explaining the HIV Epidemic Among Homosexual Men in Amsterdam

### Calculating the probability of immortality

Monday, December 8th, 2008

How likely is it that a given object will survive forever?

With many groups predicting that human immortality is just around the corner, you could say we all have a vested interest in the answer.

At first glance, the odds are not good. As David Eubanks of Coker College in South Carolina puts it:

“Imagine that some subject survives each year (or other time period) with a probability p. Assuming for a moment that p exists and is constant over time, it’s easy to compute the dismal odds of long term survival as a decaying exponential. Unless p = 1, the probability of n-year survival approaches zero.”

In other words, the probability of surviving forever is exactly zero.

But this suggests a strategy: the route to immortality is to find a way to increase this probability over time.
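Eubanks’ observation, and the escape route, can be checked numerically. The snippet below is a minimal sketch: a constant survival probability p gives the decaying exponential p^n, while a hypothetical schedule in which p improves each year, p_n = 1 − 1/(n+1)², gives an infinite product of survival probabilities that converges to a nonzero limit.

```python
from math import prod

# Constant annual survival probability: p^n decays towards zero.
p = 0.999
print(p ** 1000)  # roughly 0.37 after a thousand years

# A hypothetical improving schedule p_n = 1 - 1/(n+1)^2: the product
# of the yearly survival probabilities converges to about 1/2, so the
# chance of surviving forever need not be zero.
survival = prod(1 - 1 / (n + 1) ** 2 for n in range(1, 100_000))
print(survival)  # roughly 0.5
```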

Suppose the object we want to make immortal is the data on a hard drive. Then copying the data to another hard drive each year should ensure that after n years there are n copies. If one drive fails, we can easily reconstruct the data onto another drive. So unless all the drives fail at once, the data should be immortal.
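The benefit of the copying strategy is easy to quantify. Assuming, hypothetically, that each drive fails independently in a given year with probability q, the data is lost in a year only if every copy fails at once, which happens with probability q^n:

```python
# Assumed annual failure probability per drive (hypothetical figure).
q = 0.05

# With n independent copies, all must fail in the same year for the
# data to be lost; independence makes that probability q ** n.
for n in (1, 2, 5, 10):
    print(f"{n} copies: annual loss probability {q ** n:.3g}")
```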

Except for one problem. That approach ignores global catastrophes such as comet strikes which would destroy all the drives in one go.

Eubanks says there are two ways to tackle this problem. Life has found one of them, which is to produce many diverse copies of the same thing, spread them around the planet and make them work in different ways, i.e. exploit different energy sources.

The other is for a single organism to use its intelligence to avoid catastrophe. “It must collect information about the environment safely and inductively predict it well enough to avoid death,” says Eubanks.

How do these two strategies compare? Naturally, single individuals have a harder time of it because it’s tough to predict and adapt to all possible catastrophes. As Eubanks says, “We’re the product of billions of years of creatures that survived long enough to reproduce, and therefore have very deep survival instincts. And yet we can fall asleep while driving a car.”

Eubanks is even more pessimistic. Simulating every possible environmental disaster would be tricky enough, but each new disaster would then force us to re-evaluate the way we evaluate disasters, and so on ad infinitum. In short, it’s a calculation we are extremely unlikely ever to be able to undertake, making it hard to think of a way we could improve our probability of survival, year in, year out.
The bottom line is that humans are unlikely to survive forever and neither is intelligent life anywhere else in the Universe.

“This speaks to the Fermi Paradox, which asks why the galaxy isn’t crawling with intelligent life.”

Quite. But Eubanks’ paper has another sting in the tail.

“The conclusions of this paper could lead one to believe that a democratic government cannot focus solely on external threats, but should also be constantly trying to improve the chances that it does not “self-destruct” into tyranny.”

What democracy could he be thinking of?

Ref: arxiv.org/abs/0812.0644: Survival Strategies

### Secrets ‘n’ lies

Saturday, December 6th, 2008

### Quantum direct communication: secrecy without key distribution

Friday, December 5th, 2008

An interesting development in the world of quantum encryption.

In the last couple of years, we’ve seen a number of quantum key distribution systems being set up that boast close-to-perfect security (although they’re not as secure as the marketing might imply).

These systems rely on two-part security. The first is the quantum part, which reveals whether a message has been intercepted or not. Obviously this is no use when it comes to sending secret messages, because it can only uncover eavesdroppers after the fact.

So Alice sends a one time pad over this quantum channel that she and Bob can later use to encrypt and send a message classically. If this key is compromised, Alice sends another.

What guarantees the security is not quantum mechanics but the second part of the system: the one time pad.

Today, Seth Lloyd and colleagues at the Massachusetts Institute of Technology in Cambridge publish a way of guaranteeing security over a quantum channel without having to fall back on a one time pad.

Their idea is to send a message over a standard quantum channel without bothering with a one time pad. The security, they say, can be monitored by randomly checking the channel to see whether any of the qubits are being lost (potentially to Eve).

The security of the channel then depends on how much loss of information Alice and Bob are willing to accept, but can always be improved by checking more often for eavesdroppers.
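A back-of-the-envelope sketch of why more checking helps (this is not the paper’s actual security analysis, and f and k are illustrative assumptions): if Eve siphons off a fraction f of the transmissions and Alice and Bob spot-check k randomly chosen channel uses, the chance of catching at least one missing qubit is 1 − (1 − f)^k, which can be pushed as close to certainty as they like by raising k.

```python
def detection_probability(f: float, k: int) -> float:
    """Chance of spotting at least one lost qubit when Eve removes a
    fraction f of transmissions and k channel uses are spot-checked."""
    return 1 - (1 - f) ** k

# Even a 1% siphoning rate is hard to hide from enough checks.
print(detection_probability(0.01, 100))   # about 0.63
print(detection_probability(0.01, 1000))  # about 0.99996
```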

Quantum direct communication, as the team call it, looks interesting. But it will be demanding to implement, not least because any noise in the channel will look like an eavesdropper. So it looks as if this idea will have to be limited to short range applications where noise can be kept to a minimum.

Nevertheless, a cool idea.

Ref: arxiv.org/abs/0802.0656: Quantum Direct Communication with Continuous Variables

### Levitating gas pipelines

Thursday, December 4th, 2008

Great to see one of the arXiv’s most creative minds posting again today. Alexander Bolonkin–he of “In Outer Space without a Space Suit?” and “Floating Cities, Islands and States” fame–is back with another startling idea.

Methane is significantly lighter than air and so could be used to levitate the pipes it flows through. These aerial pipelines would then be able to carry heavier-than-air goods such as oil, coal and even human passengers.

Bolonkin calculates that one projected aerial pipeline could carry 24 billion cubic meters of gas and tens of millions of tons of oil, water or other payload per year.

Why not?

Ref: arxiv.org/abs/0812.0588: A Cheap Levitating Gas/Load Pipeline

### Loop quantum cosmology: a brief overview

Wednesday, December 3rd, 2008

Abhay Ashtekar, a physicist at the Pennsylvania State University, is one of the founders of loop quantum cosmology and also a part-time populariser of science.

Today, he uses both of these attributes to produce a fascinating overview of loop quantum cosmology that non-specialists will find enlightening.

Ref: arxiv.org/abs/0812.0177: Loop Quantum Cosmology: An Overview

### Quantum test found for mathematical undecidability

Tuesday, December 2nd, 2008

It was the physicist Eugene Wigner who discussed the “unreasonable effectiveness of mathematics” in a now famous paper that examined the profound link between mathematics and physics.

Today, Anton Zeilinger and pals at the University of Vienna in Austria reveal this link at its deepest. Their experiment involves the issue of mathematical decidability.

First, some background about axioms and propositions. The group explains that any formal logical system must be based on axioms, which are propositions that are defined to be true. A proposition is logically independent from a given set of axioms if it can neither be proved nor disproved from the axioms.

They then move on to the notion of undecidability. Mathematically undecidable propositions contain entirely new information which cannot be reduced to the information in the axioms. And given a set of axioms that contains a certain amount of information, it is impossible to deduce the truth value of a proposition which, together with the axioms, contains more information than the set of axioms itself.

These notions gave Zeilinger and co an idea. Why not encode a set of axioms as quantum states? A particular measurement on this system can then be thought of as a proposition. The researchers say that whenever a proposition is undecidable, the measurement should give a random result.

They’ve even tested the idea and say they’ve shown the undecidability of certain propositions because they generate random results.

Good stuff and it raises some interesting issues.

Let’s leave aside the problem of determining whether the result of a particular measurement is truly random or not, and take at face value the group’s claim that “this sheds new light on the (mathematical) origin of quantum randomness in these measurements”.

There’s no question that what Zeilinger and co have done is fascinating and important. But isn’t the fact that a quantum system behaves in a logically consistent way exactly what you’d expect?

And if so, is it reasonable to decide that, far from being fantastically profound, Zeilinger’s experiment is actually utterly trivial?

Ref: http://arxiv.org/abs/0811.4542: Mathematical Undecidability and Quantum Randomness

### How ribosomal traffic cops keep bacteria alive

Monday, December 1st, 2008

Ribosomes are the genetic Turing machines that translate nucleic acid into protein. And fast-growing bacteria need plenty of them. E. coli bacteria, for example, contain some 73,000 ribosomes per cell.

Given that E. coli populations double every 20 minutes, new ribosomes have to be created at a fantastic rate. The process requires ribosomal RNA to be built at a rate of 68 transcripts per minute, compared to a typical rate of 10 per minute for messenger RNA.
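The arithmetic behind that fantastic rate is worth spelling out (a rough estimate from the figures above, ignoring ribosome turnover):

```python
ribosomes_per_cell = 73_000   # figure quoted for E. coli
doubling_time_min = 20        # E. coli population doubling time

# To double its ribosome count in one division cycle, each cell must
# assemble, on average, this many new ribosomes every minute.
rate = ribosomes_per_cell / doubling_time_min
print(rate)  # 3650.0
```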

That requires a huge density of RNA polymerase shuttling around to build the ribosomal RNA, and the question is: how do bacteria manage it without generating life-throttling traffic jams?

Stefan Klumpp and Terence Hwa at the University of California, San Diego, say it looks as if these cells have built-in traffic cops whose sole job is to keep the traffic moving.

The problems arise when RNA polymerase pauses for whatever reason. Pauses are thought to be necessary for proper transcription but they often occur for no good reason, a problem known as antitermination. This causes severe traffic jams.

To prevent this, Klumpp and Hwa say that a termination factor called Rho seems to unblock these jams by removing prematurely paused RNA polymerase and replacing it with new polymerase, thereby restarting traffic. A bit like emergency services removing broken-down vehicles from a highway.

The researchers have developed a model to describe this process which includes studies of the behaviour of single molecules in vivo.

Interesting stuff and puzzling too. It suggests that Rho actually enhances transcription rather than attenuates it, which is counterintuitive for a termination factor.

Ref: arxiv.org/abs/0811.3163: Stochasticity and Traffic Jams in the Transcription of Ribosomal RNA: Intriguing Role of Termination and Antitermination