Archive for the ‘Hellraisin'’ Category

The fundamental patterns of traffic flow

Monday, March 9th, 2009


Take up the study of earthquakes, volcanoes or stock markets and the goal, whether voiced or not, is to find a way to predict future “events” in your field. In that sense, these guys have something in common with scientists who study traffic jams.

The difference is that traffic experts might one day reach their goal. The complexity of traffic flow, while awe-inspiring, may well be fundamentally different from the complexity of stock markets and earthquake events.

At least that’s how Dirk Helbing at the Institute for Transport & Economics at the Technical University of Dresden in Germany and his buddies see it.

Helbing says that one long-standing dream of traffic experts is to identify the fundamental patterns of traffic congestion from which all other flows can be derived, a kind of periodic table of traffic flows.

Now he thinks he has found it: a set of fundamental patterns of traffic flow that, when identified on a particular road, can be used to create a phase diagram of future traffic states.

The phase diagrams can then be used to make forecasts about the way in which the flow might evolve.

That’ll be handy. But only if it’s then possible to do something to prevent the congestion. And that may be the trickiest problem of all.

Ref: arxiv.org/abs/0903.0929: Theoretical vs. Empirical Classification and Prediction of Congested Traffic States

The LHC’s dodgy dossier?

Monday, March 2nd, 2009


There’s no reason to worry about the Large Hadron Collider that is due to be switched on later this year for the second time. The chances of it creating a planet-swallowing black hole are tiny. Hardly worth mentioning, really.

But last month, Roberto Casadio at the Universita di Bologna in Italy and a few pals told us that the LHC could make black holes that will hang around for minutes rather than microseconds. And they were rather less certain about the possibility that the black holes could grow to catastrophic size. Far from being utterly impossible, they said merely that this outcome didn’t “seem” possible.

This blog complained that that was hardly the categorical assurance we’d come to expect from the particle physics community. The post generated many comments, my favourite being that we shouldn’t worry because of the Many Worlds Interpretation of quantum mechanics. If the LHC does create Earth-destroying black holes, we could only survive in a universe in which the accelerator broke down.

Thanks to Slashdot, the story got a good airing, with more than a few people pointing out that we need better assurances than this.

Now we can rest easy. Casadio and co have changed their minds. In a second version of the paper, they’ve removed all mention of the black hole lifetime being many minutes (“>> 1 sec” in mathematical terms) and they’ve changed their conclusion too. It now reads: “the growth of black holes to catastrophic size is not possible.”

What to make of this? On the one hand, these papers are preprints. They’re early drafts submitted for peer review so that small problems and errors can be ironed out before publication. We should expect changes as papers are updated.

On the other, we depend on the conclusions of scientific papers for properly argued assurances that the LHC is safe. If those conclusions can be rewritten for public relations reasons rather than scientific merit, what value should we place on them?

Either way, we now know that a few minutes’ work with a word processor makes the LHC entirely safe.

Ref: arxiv.org/abs/0901.2948: On the Possibility of Catastrophic Black Hole Growth in the Warped Brane-World Scenario at the LHC version 2

Thanks to Cap’n Rusty for pointing out the new version

Calculating the cost of dirty bombs

Wednesday, February 25th, 2009


One of the more frightening scenarios that civil defence teams worry about is the possibility that a bomb contaminated with radioactive material would be detonated in a heavily populated area.

Various research teams have considered this problem and come to similar conclusions: that the actual threat to human health from such a device is low. Some even claim that terror groups must have come to a similar conclusion, which is why we’ve not been on the receiving end of such an attack. The panic such a device would cause is another question.

Today, Theodore Liolios from a private institution called the Hellenic Arms Control Center in Thessaloniki in Greece goes through the figures.

He says the most likely material to be used in such an attack is Cesium-137, widely used throughout the developed and developing world as a source for medical therapies.

The unstated implication is that it would be relatively easy to get hold of this stuff from a poorly guarded hospital. Exactly this happened in Goiania in Brazil when an abandoned hospital was broken into and its supply of cesium-137 distributed around the surrounding neighbourhoods. The incident left 200 people contaminated. Four of them died.

But a dirty bomb would not be nearly as lethal. The trouble with such devices (from a terrorist’s point of view, at least) is that distributing radioactive material over a large area dramatically reduces the exposure that people receive, particularly when most of them could be warned to stay indoors or be evacuated (unlike the Goiania incident, in which most people were unaware they were contaminated).

Liolios calculates that anybody within 300 metres of a dirty bomb would see their lifetime risk of cancer mortality increase by about 1.5 per cent, and then only if they were unable to take shelter or leave the area. That’s about 280 people, given the kind of densities you’d expect in metropolitan areas.

And he goes on to say that it is reasonable to assume that a cesium-137 dirty bomb would not increase the cancer mortality risk for the entire city by a statistically significant amount.

But the terror such a device might cause is another question. Liolios reckons that current radiation safety standards would mean the evacuation of some 78 square kilometres around ground zero. That would affect some 78,000 people, cost $7.8m per day to evacuate and some $78m to decontaminate.
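For what it’s worth, the round numbers are easy to sanity-check. Here’s a quick back-of-the-envelope sketch in Python; the population density of about 1,000 people per square kilometre and the per-person and per-square-kilometre costs are assumptions chosen to reproduce the figures above, not numbers taken from Liolios’s paper.

```python
import math

# Assumed inputs, chosen to reproduce the figures quoted above (not from the paper)
density = 1_000                        # people per square kilometre
radius_km = 0.3                        # 300 metres around the detonation point
evacuated_area_km2 = 78                # area evacuated under current safety standards
evac_cost_per_person_day = 100         # dollars per person per day (assumed)
decontamination_cost_per_km2 = 1e6     # dollars per square kilometre (assumed)

# People close enough to pick up the ~1.5 per cent extra lifetime cancer risk
inner_area_km2 = math.pi * radius_km ** 2          # about 0.28 km^2
people_within_300m = density * inner_area_km2      # about 280 people

# Scale and cost of the evacuation
people_evacuated = density * evacuated_area_km2                    # about 78,000 people
evac_cost_per_day = people_evacuated * evac_cost_per_person_day    # about $7.8m per day
decontamination_cost = evacuated_area_km2 * decontamination_cost_per_km2  # about $78m

print(f"people within 300 m:  {people_within_300m:.0f}")
print(f"people evacuated:     {people_evacuated:,.0f}")
print(f"evacuation cost/day:  ${evac_cost_per_day / 1e6:.1f}m")
print(f"decontamination cost: ${decontamination_cost / 1e6:.0f}m")
```

The same assumed density of 1,000 people per square kilometre gives both the roughly 280 people within 300 metres and the 78,000 evacuees, which is presumably how the tidy factors of ten arise.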

Those figures seem a tad conservative, but however they are calculated, they may turn out to be chickenfeed compared to the chaos caused by panic, which may well result in more deaths than the bomb itself could claim. How to calculate the effect of that?

Ref: arxiv.org/abs/0902.3789: The Effects of Using Cesium-137 Teletherapy Sources as a Radiological Weapon

Ptarithmetic: reinventing logic for the computer age

Friday, February 20th, 2009


In the last few years, a small group of logicians have attempted the ambitious task of re-inventing the discipline of formal logic.

In the past, logic has been thought of as the formal theory of “truth”. Truth plays an important role in our society and, as such, a formal theory of it is entirely laudable and worthy. But it is not always entirely useful.

The new approach is to reinvent logic as the formal theory of computability. The goal is to provide a systematic answer to the question “what is computable?” For obvious reasons, so-called computability logic may turn out to be much more useful.

To understand the potential of this idea, just think for a moment about the most famous of logical systems, Peano Arithmetic, better known to you and me as “arithmetic”.

The idea is for computability logic to do for computation what Peano Arithmetic does for natural numbers.

The role of numbers is played by solvable computations and the collection of basic operations that can be performed on these computations forms the logical vocabulary of the theory.

But there’s a problem. There can be a big difference between something that is computable and something that is efficiently computable, says Giorgi Japaridze, a logician at Villanova University in Pennsylvania and the inventor of computability logic.
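To see the gap Japaridze is worried about, consider a toy example (mine, not his): computing Fibonacci numbers is computable however you go about it, but the naive recursive method takes exponential time while the iterative one needs only a polynomial (in fact linear) number of steps.

```python
def fib_naive(n: int) -> int:
    """Exponential time: the call tree roughly doubles at every level."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)


def fib_iterative(n: int) -> int:
    """Linear number of steps: one pass, two running values."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


# Both are "computable" in the classical sense, but only one is efficiently
# computable in the sense that ptarithmetic is meant to capture.
assert fib_naive(20) == fib_iterative(20) == 6765
```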

So he is proposing a modification (and why not, it’s his idea)–the system should allow only those computations that can be solved in polynomial time. He calls this polynomial time arithmetic or ptarithmetic for short.

The promise is that Japaridze’s system of logic will prove as useful to computer scientists as arithmetic is for everybody else.

What isn’t clear is whether the logical vocabulary of ptarithmetic is well enough understood to construct a system of logic around and, beyond that, whether ptarithmetic makes a well-formed system of logic at all.

Those are big potential holes that Japaridze may need some help to fill–even he would have to admit that he’s been ploughing a lonely furrow on this one since he put forward the idea in 2003.

But the potential payoffs are huge, which makes this one of those high-risk, high-payoff projects that just might be worth pursuing.

Any volunteers?

Ref: arxiv.org/abs/0902.2969: Ptarithmetic

The frightening origins of glacial cycles

Thursday, February 12th, 2009


Climatologists have known for some time that the Earth’s motion around the Sun is not as regular as it might first appear. The orbit is subject to a number of periodic effects, such as the precession of the Earth’s axis, which varies over periods of 19,000, 22,000 and 24,000 years, and its axial tilt, which varies over a period of 41,000 years, among others.

The combined effect of these variations is often cited to explain the 41,000- and 100,000-year glacial cycles the Earth appears to have gone through in the past.

But there is a problem with this idea: the change in the amount of sunlight that these variations cause is not enough to trigger glaciation. So some kind of non-linear effect must amplify these changes to cause widespread cooling.

That’s not so surprising, given that we know our climate appears to be influenced by all kinds of non-linear factors. Even so, nobody has been able to explain what kind of processes could account for the difference.

Now Peter Ditlevsen at the University of Copenhagen in Denmark thinks he knows what might have been going on. He says that the change in the amount of sunlight the Earth receives acts as a kind of forcing mechanism in a climatic resonant effect. The resulting system is not entirely stable but undergoes bifurcations in which the cycle switches from a period of 41,000 years to 100,000 years and back again, just as it seems to have done in Earth’s past.
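Ditlevsen’s actual equations aren’t given here, but the general mechanism, a weak periodic forcing plus noise nudging a nonlinear system between states, can be illustrated with a generic toy model: a particle in a double-well potential subject to a slow sinusoidal push and random kicks. The sketch below is just that, a sketch of the mechanism with made-up parameters, not the paper’s model.

```python
import numpy as np

# Generic double-well system with weak periodic forcing plus noise:
# an illustration of noise-assisted transitions, NOT Ditlevsen's actual model.
rng = np.random.default_rng(0)

dt = 0.01
steps = 400_000
A = 0.2                      # forcing amplitude: too weak to flip the state on its own
omega = 2 * np.pi / 1_000    # slow forcing period (1,000 time units)
sigma = 0.35                 # noise strength

kicks = sigma * np.sqrt(dt) * rng.standard_normal(steps)
x = np.empty(steps)
x[0] = -1.0                  # start in the left-hand well

for i in range(1, steps):
    t = i * dt
    drift = x[i - 1] - x[i - 1] ** 3 + A * np.cos(omega * t)  # double-well force + forcing
    x[i] = x[i - 1] + drift * dt + kicks[i]

# Rough proxy for hops between the two wells: sign changes of x.
# With sigma = 0 the system never leaves its starting well; with noise it hops
# back and forth, and the hops tend to bunch around the favourable forcing phase.
hops = int(np.sum(np.diff(np.sign(x)) != 0))
print("approximate well-to-well transitions:", hops)
```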

“This makes the ice ages fundamentally unpredictable,” says Ditlevsen.

Quite, but the real worry is this: if bifurcations like this have happened in the past, then they will probably occur in the future. The trouble is that our current climate models are too primitive to allow for this kind of bifurcation, and that means their predictions could be even more wildly inaccurate than we know they already are.

Kinda frightening, don’t you think?

Ref: arxiv.org/abs/0902.1641: The Bifurcation Structure and Noise Assisted Transitions in the Pleistocene Glacial Cycles

Econophysicists identify world’s top 10 most powerful companies

Monday, February 9th, 2009


The study of complex networks has given us some remarkable insights into the nature of systems as diverse as forest fires, the internet and earthquakes. This kind of work is even beginning to give econophysicists a glimmer of much-needed insight into the nature of our economy. In a major study, econophysicists have today identified the most powerful companies in the world based on their ability to control stock markets around the globe. It makes uncomfortable reading. (more…)

The power laws behind terrorist attacks

Friday, February 6th, 2009


Plot the number of people killed in terrorist attacks around the world since 1968 against the frequency with which such attacks occur and you’ll get a power law distribution, which is a fancy way of saying a straight line when both axes have logarithmic scales.

The question, of course, is why? Why not a normal distribution, in which there would be many orders of magnitude fewer extreme events?

Aaron Clauset and Frederik Wiegel have built a model that might explain why. The model makes five simple assumptions about the way terrorist groups grow and fall apart and how often they carry out major attacks. And here’s the strange thing: this model almost exactly reproduces the distribution of terrorist attacks we see in the real world.

These assumptions are things like: terrorist groups grow by accretion (absorbing other groups) and fall apart by disintegrating into individuals. They must also be able to recruit from a more or less unlimited supply of willing terrorists within the population.
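The paper’s exact rules aren’t spelled out above, but a toy fission-fusion process along these lines is easy to simulate: groups occasionally shatter back into individuals and otherwise merge with one another, and when shattering is rare the group sizes become heavy-tailed. The rates and rules below are illustrative assumptions, not the ones Clauset and Wiegel actually use.

```python
import random
from collections import Counter

random.seed(1)

N = 10_000          # individuals in the population
p_fragment = 0.01   # chance a picked group shatters back into individuals (assumed)
steps = 200_000
burn_in = 50_000

groups = [1] * N    # group sizes; everyone starts out alone
tally = Counter()   # accumulated samples of the group-size distribution

for step in range(steps):
    i = random.randrange(len(groups))
    if random.random() < p_fragment and groups[i] > 1:
        # fission: the group disintegrates into lone individuals
        size = groups.pop(i)
        groups.extend([1] * size)
    elif len(groups) > 1:
        # fusion: the group absorbs another randomly chosen group
        j = random.randrange(len(groups))
        if j != i:
            groups[i] += groups[j]
            groups.pop(j)   # index i is not reused this step, so the shift is harmless

    if step > burn_in and step % 100 == 0:
        tally.update(groups)   # sample the size distribution now and then

# For small p_fragment the sampled sizes are heavy-tailed: big groups are rare,
# but far less rare than a normal distribution would allow.
for size in sorted(tally)[:10]:
    print(f"size {size}: seen {tally[size]} times")
```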

Being able to reproduce the observed distribution of attacks with such a simple set of rules is an impressive feat. But it also suggests some strategies that might prevent such attacks or drastically reduce their number. One obvious strategy is to reduce the number of recruits within a population, perhaps by reducing real and perceived inequalities across societies.

Easier said than done, of course. But analyses like these should help to put the thinking behind such ideas on a logical footing.

Ref: arxiv.org/abs/0902.0724: A Generalized Fission-Fusion Model for the Frequency of Severe Terrorist Attacks

Space Station simulator given emotions

Tuesday, February 3rd, 2009


Astronauts training to work on the International Space Station have to have mastered a mind-boggling amount of kit before they leave Earth. One of these devices is the Canadarm 2, a robotic arm used to manipulate experiments outside the station.

On Earth, astronauts train on a Canadarm 2 simulator connected to a virtual assistant that can spot potential errors, such as a move likely to smash the arm into the station. The assistant then offers hints and tips to the astronaut to help him or her make a correction or even issue a command to prevent damage.

The bad news for astronauts is that André Mayers and colleagues at Université de Sherbrooke in Canada, who created the virtual assistant, have given it an unusual upgrade. In an attempt to help the simulator learn more about the astronauts who are using it, the team has programmed the assistant to experience the equivalent of an emotion when it records a memory of what has happened.

The problem is that the assistant receives a huge amount of data from each training session. Its emotional response allows it to determine which parts of this data are most important, just as humans do. “This allows the agent to improve its behavior by remembering previously selected behaviors which are influenced by its emotional mechanism,” say the team.

The system is called the Conscious Tutoring System, or CTS. It’s not clear from the paper how well the system works, but how long before one unlucky astronaut hears the phrase: “I’m sorry, Dave, I can’t do that.”

Ref: arxiv.org/abs/0901.4963: How Emotional Mechanism Helps Episodic Learning in a Cognitive Agent

Rippling in muscles caused by molecular motors detaching

Friday, January 30th, 2009


Muscle tissue is made of molecular engines called sarcomeres, which contract and extend as the muscle is flexed and relaxed. In sarcomeres, the business of contracting is carried out by molecular motors called myosins as they pull themselves along filaments of a protein called actin. When you flex your arm, it is these myosin molecular motors that are doing the work.

One curious phenomenon that can sometimes be observed in muscles is a wavelike oscillation of the tissue. What causes this infamous “rippling” of muscles has been something of a mystery, but today Stefan Gunther and Karsten Kruse from Saarland University in Germany throw some light on the matter.

They’ve modeled the rate at which molecular motors detach themselves from the actin filaments as the load they are under changes. It turns out that oscillations occur naturally under certain loads, as the molecular motors detach and re-attach.
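The post doesn’t reproduce Gunther and Kruse’s rate equations, but the key ingredient, a detachment rate that grows with load, is often written in the Bell form. Here’s a minimal sketch of that dependence, with generic illustrative numbers rather than the paper’s:

```python
import math

def detachment_rate(force_pN: float,
                    k0_per_s: float = 1.0,
                    char_force_pN: float = 4.0) -> float:
    """Bell-type load-dependent detachment: the rate grows exponentially with
    the force pulling the motor off its actin filament.
    (Generic illustrative form and numbers, not the paper's specific model.)"""
    return k0_per_s * math.exp(force_pN / char_force_pN)

# Doubling the load does far more than double the unbinding rate, which is what
# lets a group of motors let go almost collectively once the load builds up.
for f in (0.0, 2.0, 4.0, 8.0):
    print(f"load {f:4.1f} pN -> detachment rate {detachment_rate(f):6.2f} per second")
```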

So when the next bodice ripper you read mentions rippling muscles, you’ll know exactly what this means.

Ref: arxiv.org/abs/0901.4517: Spontaneous Waves in Muscle Fibres

Massive miscalculation makes LHC safety assurances invalid

Wednesday, January 28th, 2009


It just gets worse for CERN and its attempts to reassure us that the Large Hadron Collider won’t make mincemeat of the planet.

It’s beginning to look as if a massive miscalculation in the safety reckonings means that CERN scientists cannot offer any assurances about the work they’re doing.

In a truly frightening study, Toby Ord and pals at the University of Oxford say that “while the arguments for the safety of the LHC are commendable for their thoroughness, they are not infallible.”

When physicists give a risk assessment, their figure is only correct if their argument is valid. So an important question is: what are the chances that the reasoning is flawed?
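The nub of the argument fits in one line of probability: the overall risk is at least the chance that the safety argument itself is flawed multiplied by the risk if it is. Here’s a toy calculation with entirely made-up numbers, purely to show how the flawed-argument term can dominate; none of these figures come from Ord and co or from CERN.

```python
# Toy numbers, purely illustrative (not from Ord et al. or CERN):
p_claimed = 1e-12            # risk the safety argument claims, assuming it is sound
p_argument_flawed = 1e-3     # chance the argument contains a fatal error (assumed)
p_disaster_if_flawed = 1e-6  # risk conditional on the argument being wrong (assumed)

# Total risk: either the argument is sound and the claimed figure holds,
# or it is flawed and the claimed figure tells us nothing.
p_total = (1 - p_argument_flawed) * p_claimed + p_argument_flawed * p_disaster_if_flawed

print(f"claimed risk:   {p_claimed:.0e}")
print(f"effective risk: {p_total:.1e}")  # dominated by the flawed-argument term
```

With these illustrative numbers the effective risk is three orders of magnitude bigger than the claimed one, which is exactly the kind of gap the Oxford team is pointing at.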

(more…)