Archive for the ‘Hellraisin’’ Category

Massive miscalculation makes LHC safety assurances invalid

Wednesday, January 28th, 2009


It just gets worse for CERN and its attempts to reassure us that the Large Hadron Collider won’t make mincemeat of the planet.

It’s beginning to look as if a massive miscalculation in the safety reckonings means that CERN scientists cannot offer any assurances about the work they’re doing.

In a truly frightening study, Toby Ord and pals at the University of Oxford say that “while the arguments for the safety of the LHC are commendable for their thoroughness, they are not infallible.”

When physicists give a risk assessment, their figure is only correct if their argument is valid. So an important question is: what are the chances that the reasoning is flawed?
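Ord’s point can be made concrete with toy numbers (mine, not the paper’s): once you admit the safety argument itself might be flawed, the risk you should act on is a mixture of the two cases, and the flaw term can swamp the claimed figure.

```python
def effective_risk(claimed_risk, p_flaw, risk_if_flawed):
    """Mix the two cases: the safety argument is sound with
    probability (1 - p_flaw) and delivers `claimed_risk`; it is
    flawed with probability `p_flaw`, in which case all we can
    say is that the risk is some larger `risk_if_flawed`."""
    return (1 - p_flaw) * claimed_risk + p_flaw * risk_if_flawed

# A one-in-a-billion claimed risk, a one-in-a-thousand chance the
# argument is flawed, and a one-in-ten-thousand risk if it is:
risk = effective_risk(1e-9, 1e-3, 1e-4)
# The flaw term (1e-3 * 1e-4 = 1e-7) dominates the claimed 1e-9.
```

However thorough the physics, the effective risk can never fall below p_flaw × risk_if_flawed, which is why the chance of flawed reasoning matters so much.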


Rule breakers make traffic jams less likely

Monday, January 26th, 2009


Rules are a good thing when it comes to road traffic: drive on the wrong side of the highway and you’ll cause chaos, if you live. If that seems forehead-smackingly obvious, then an analysis by Seung Ki Baek at Umeå University in Sweden and pals may come as a surprise.

They say that a small proportion of lunatics driving on the wrong side of the road actually reduces the chances of a jam rather than increasing them, and they have an interesting model to prove it.

Their model is a 100-lane highway in which cars can drive in either direction in any lane. When two cars collide, that lane becomes blocked and other vehicles have to move to one side or the other to get round them.

In theory, it’s easy to imagine that the best strategy is for everyone to agree to move to their left (or right; the model is symmetrical) when they meet.

The question is what happens when there are two kinds of drivers: rule-followers and rule-breakers who move either way.

Ki Baek and co considered the two obvious extremes. When everybody is a rule-breaker, the result is chaos and the road jams up quickly as collisions ensue. Equally, when everybody is a rule-follower, the likelihood of a jam is much lower and road users travelling in the same direction tend to end up driving on the left (or right), just as they do on real roads.

But here’s the strange thing: the probability of a jam reaches a minimum somewhere in between, when the number of rule-breakers is between 10 and 40 per cent.

That’s kinda counterintuitive but Ki Baek and co say several factors explain what is going on.

First, a small number of collisions disperses the rule-followers to their respective sides of the road more quickly, making jamming less likely.

And second, rule-followers tend to form convoys which can lead to pile-ups that jam the road. A few collisions here and there help to break up these convoys into smaller groups, making large pile-ups, and the jams they cause, less likely.
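A caricature of a single encounter in such a model (my sketch, not the authors’ 100-lane simulation) shows why rule-followers never collide with each other, while any meeting involving a rule-breaker crashes half the time:

```python
import random

def collide(p_breaker, rng):
    """Two oncoming cars meet in a lane and each swerves.  A
    rule-follower always swerves to its own left; a rule-breaker
    picks a side at random.  Because the cars face each other,
    'own left' for one is 'own right' for the other, so two
    followers always pass cleanly.  Returns True on a collision."""
    def side(is_breaker):
        return rng.choice(("L", "R")) if is_breaker else "L"
    a = side(rng.random() < p_breaker)        # in car A's frame
    b = side(rng.random() < p_breaker)        # in car B's frame
    b_in_a_frame = "R" if b == "L" else "L"   # mirror B's choice
    return a == b_in_a_frame                  # same side -> crash

def collision_rate(p_breaker, trials=20000, seed=0):
    """Monte Carlo estimate of the per-meeting collision probability."""
    rng = random.Random(seed)
    return sum(collide(p_breaker, rng) for _ in range(trials)) / trials
```

Per encounter, the collision rate only rises with the fraction of rule-breakers (it is p − p²/2 here); the jam minimum in the paper comes from the spatial dynamics, collisions dispersing followers and breaking up convoys, which this sketch deliberately leaves out.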

“Our result suggests that there are situations when abiding too strictly by a traffic rule could lead to a jamming disaster which would be avoided if some people just ignored the traffic rule altogether,” say the team.

Might be fun to try it on the San Diego Freeway one of these days. Dare ya!

Ref: Flow Improvement Caused by Traffic-Rule Ignorers

Black holes from the LHC could survive for minutes

Friday, January 23rd, 2009


There is absolutely, positively, definitely no chance of the LHC destroying the planet when it eventually switches on some time later this year.  Right?

Err, yep. And yet a few niggling doubts are persuading some scientists to run through their figures again. And the new calculations are throwing up some surprises.

One potential method of destruction is that the LHC will create tiny black holes that could swallow everything in their path including the planet. In 2002, Roberto Casadio at the Universita di Bologna in Italy and a few pals reassured the world that this was not possible because the black holes would decay before they got the chance to do any damage.

Now they’re not so sure.  The question is not simply how quickly a mini-black hole decays but whether this decay always outpaces any growth.

Casadio and co have reworked the figures and now say that “the growth of black holes to catastrophic size does not seem possible.”

Does not seem possible? That’s not the unequivocal reassurance that particle physicists have been giving us up till now.

What’s more, the new calculations throw up a tricky new prediction. In the past, it had always been assumed that black holes would decay in the blink of an eye.

Not any more. Casadio and co say: “the expected decay times are much longer (and possibly ≫ 1 sec) than is typically predicted by other models”.

Whoa, let’s have that again: these mini black holes will be hanging around for seconds, possibly minutes?

That doesn’t sound good. Anybody at CERN care to clarify?

Ref: On the Possibility of Catastrophic Black Hole Growth in the Warped Brane-World Scenario at the LHC

How Google’s PageRank predicts Nobel Prize winners

Wednesday, January 21st, 2009


Ranking scientists by their citations–the number of times they are mentioned in other scientists’ papers– is a miserable business. Everybody can point to ways in which this system is flawed:

  • Not all citations are equal: the importance of the citing paper is a significant factor.
  • Scientists in different fields of study use citations in different ways. An average paper is cited about six times in the life sciences, three times in physics, and about once in mathematics.
  • Ground-breaking papers may be cited less often because a field is necessarily smaller in its early days.
  • Important papers often stop being cited once they are incorporated into textbooks.

The pattern of citations between papers forms a complex network, not unlike the one the internet forms. Might that be a clue that points us towards a better way of assessing the merits of the papers it consists of?
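One candidate, hinted at by the headline, is to run Google’s PageRank algorithm over the citation graph, so that a citation from an important paper counts for more than one from an obscure paper. A minimal power-iteration sketch on a made-up three-paper graph:

```python
def pagerank(cites, d=0.85, iters=100):
    """cites maps each paper to the list of papers it cites.
    Rank flows along citations, damped by d, so being cited by a
    highly ranked paper is worth more than being cited by an
    obscure one."""
    nodes = set(cites) | {c for refs in cites.values() for c in refs}
    cites = {v: cites.get(v, []) for v in nodes}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for v, refs in cites.items():
            if refs:
                for c in refs:
                    new[c] += d * rank[v] / len(refs)
            else:  # a paper citing nothing: spread its rank evenly
                for c in nodes:
                    new[c] += d * rank[v] / n
        rank = new
    return rank

# Hypothetical graph: B and C both cite A; C also cites B.
ranks = pagerank({"B": ["A"], "C": ["A", "B"]})
```

Here A outranks B, which outranks C, and A’s lead reflects that it is cited by both other papers rather than just one.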


How the credit crisis spread

Wednesday, January 14th, 2009


Where did the credit crunch start? According to Reginald Smith at the Bouchet-Franklin Research Institute in Rochester, it began in the property markets of California and Florida in early 2007 and is still going strong.

To help understand how the crisis has evolved, Smith has mapped the way it has spread as reflected in the stock prices of the S&P 500  and NASDAQ-100 companies. The picture above shows how the state of affairs changed between August 2007 and October 2008. Each dot represents a stock price and the colour, its return (green equals bad and red equals catastrophic).


Calculating the probability of immortality

Monday, December 8th, 2008

How likely is it that a given object will survive forever?

With many groups predicting that human immortality is just around the corner, you could say we all have a vested interest in the answer.

At first glance, the odds are not good. As David Eubanks of Coker College in South Carolina puts it:

“Imagine that some subject survives each year (or other time period) with a probability p. Assuming for a moment that p exists and is constant over time, it’s easy to compute the dismal odds of long term survival as a decaying exponential. Unless p = 1, the probability of n-year survival approaches zero.”

In other words, the probability of surviving forever is exactly zero.

But this suggests a strategy: the route to immortality is to find a way to increase this probability over time.

Suppose the object we want to make immortal is the data on a hard drive. Then copying the data to another hard drive each year should ensure that after n years there are n copies. If one drive fails, we can easily reconstruct the data onto another drive. So unless all the drives fail at once, the data should be immortal.
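The arithmetic behind that strategy is worth spelling out. Suppose (with my illustrative numbers, not Eubanks’) each drive survives a year with probability 0.9 and in year k we hold k copies: the data dies in year k only if all k copies fail at once, and the product of the yearly survival probabilities then converges to a positive limit instead of decaying to zero.

```python
def survival_prob(p, years):
    """Probability the data is still alive after `years` years when
    year k holds k independent copies, each surviving with prob p.
    Year k is fatal only if all k copies fail: prob (1 - p)**k."""
    q = 1.0 - p
    alive = 1.0
    for k in range(1, years + 1):
        alive *= 1.0 - q ** k
    return alive

single_copy = 0.9 ** 1000                  # one drive: essentially zero
growing_copies = survival_prob(0.9, 1000)  # levels off near 0.89
```

The catch, as the next paragraph notes, is that this only works if the copies really are independent, so that no single event can destroy them all.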

Except for one problem. That approach ignores global catastrophes such as comet strikes which would destroy all the drives in one go.

Eubanks says there are two ways to tackle this problem. Life has found one of them, which is to produce many diverse copies of the same thing, spread them around the planet and make them work in different ways, i.e. exploit different energy sources.

The other is for a single organism to use its intelligence to avoid catastrophe. “It must collect information about the environment safely and inductively predict it well enough to avoid death,” says Eubanks.

How do these two strategies compare? Naturally, single individuals have a harder time of it because it’s tough to predict and adapt to all possible catastrophes. As Eubanks says, “We’re the product of billions of years of creatures that survived long enough to reproduce, and therefore have very deep survival instincts. And yet we can fall asleep while driving a car.”

Eubanks is even more pessimistic. He says that simulating every possible environmental disaster may be tricky enough, but such a disaster would then force us to re-evaluate the way we evaluate disasters, and so on ad infinitum. In short, it’s a calculation we are extremely unlikely to ever be able to undertake, making it hard to think of a way we could improve our probability of survival, year in year out.

The bottom line is that humans are unlikely to survive forever, and neither is intelligent life anywhere else in the Universe.

“This speaks to the Fermi Paradox, which asks why the galaxy isn’t crawling with intelligent life.”

Quite. But Eubanks’ paper has another sting in the tail.

“The conclusions of this paper could lead one to believe that a democratic government cannot focus solely on external threats, but should also be constantly trying to improve the chances that it does not “self-destruct” into tyranny.”

What democracy could he be thinking of?

Ref: Survival Strategies

How much force does it take to stab somebody to death?

Wednesday, November 26th, 2008


How much force does it take to stab somebody to death? Strangely enough, forensic scientists do not know.

A number of groups have attempted to measure the forces necessary to penetrate skin but the results are difficult to apply to murder cases because of the sheer range of factors at work. The type and sharpness of the knife; the angle and speed at which it strikes; the strength of skin, which varies with the age of the victim and the area of body involved; these are just a few of the parameters that need to be taken into account.

So when giving evidence, forensic scientists have to resort to relative assessments of force.

“A mild level of force would typically be associated with penetration of skin and soft tissue whereas moderate force would be required to penetrate cartilage or rib bone. Severe force, on the other hand, would be typical of a knife impacting dense bone such as spine and sustaining visible damage to the blade,” says Michael Gilchrist at University College Dublin and pals who are hoping to change this state of affairs.

They’ve developed a machine that measures the force required to penetrate skin–either the animal kind or an artificial human skin made of polyurethane, foam and soap.

The surprise they’ve found is that the same knives from the same manufacturer can differ wildly in sharpness. And the force required for these knives to penetrate the skin can differ by more than 100 per cent.

That could have a significant bearing in some murder cases. And that’s important because in many European countries such as the UK, stabbing is the most common form of homicide.

Gilchrist and co say their work could even help tease apart what has happened in that most common of defences: “he ran onto the knife, your honour”.

The key thing here is the speed and angle of penetration. The angle can be measured easily enough but the speed is another matter altogether. Gilchrist and co say future work may throw some light on this.

Ref: Mechanics of Stabbing: Biaxial Measurement of Knife Stab Penetration of Skin Simulant

Why PAMELA may not have found dark matter

Thursday, October 30th, 2008


This is the one we’ve been waiting for. For months, the astrophysical world has been abuzz with rumors that the orbiting observatory PAMELA has found evidence of dark matter.

Various people have speculated on the nature of this dark matter but the PAMELA team has been cautious, refusing to release the data until they are happy with it. (Although that hasn’t stopped data being smuggled out of private presentations using digital cameras to capture slides).

Now the wait is over. The PAMELA team has put its data on the arXiv and the evidence looks interesting but far from conclusive.

Here’s the deal: PAMELA has seen more positrons above a certain energy (10 GeV) than can be explained by known physics. This excess seems to match what dark matter particles would produce if they were annihilating each other at the center of the galaxy. That’s what has got everybody excited.

But there’s a fly in the ointment in the form of another explanation: positrons of this kind of energy can also be generated by nearby pulsars.

So PAMELA isn’t the smoking gun for dark matter that everybody hoped. At least not yet.

For that, we’ll need some way to distinguish between the positron signature of dark matter annihilation and the positron signature of pulsars.

That means a whole lot more data and some refreshing new ideas. You can be sure that more than a few astrobods are onto the case.

Ref: Observation of an Anomalous Positron Abundance in the Cosmic Radiation

And the number of intelligent civilisations in our galaxy is…

Monday, October 20th, 2008


No really. At least according to Duncan Forgan at the Institute for Astronomy at the University of Edinburgh.

The Drake equation famously calculates the number of advanced civilisations that should populate our galaxy right now. The result is hugely sensitive to the assumptions you make about factors such as the number of potentially habitable planets orbiting a host star, how many of these actually develop life, what fraction of that life goes on to become intelligent, and so on.

Disagreement (i.e. general ignorance) over these numbers leads to estimates of the number of intelligent civilisations in our galaxy that range from 10^-5 to 10^6. In other words, your best bet is to pick a number, double it…

So Forgan has attempted to inject a little more precision into the calculation. His idea is to simulate, many times over, the number of civilisations that may have appeared in a galaxy like ours, using reasonable, modern estimates for the values in the Drake equation.

With these statistics you can calculate an average value and a standard deviation for the number of advanced civilisations in our galaxy.
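The Monte Carlo idea is simple: draw each Drake factor from a plausible range, multiply them out, and repeat. A sketch with made-up uniform ranges (Forgan’s actual distributions and calibrated values are in the paper):

```python
import random
import statistics

def drake_draw(rng):
    """One sample of N = R* . fp . ne . fl . fi . fc . L.
    All the ranges below are illustrative, not Forgan's."""
    R_star = rng.uniform(1.0, 10.0)   # stars formed per year
    f_p    = rng.uniform(0.2, 1.0)    # fraction of stars with planets
    n_e    = rng.uniform(0.5, 3.0)    # habitable planets per system
    f_l    = rng.uniform(0.0, 1.0)    # ...that develop life
    f_i    = rng.uniform(0.0, 1.0)    # ...that becomes intelligent
    f_c    = rng.uniform(0.0, 1.0)    # ...that communicates
    L      = rng.uniform(1e2, 1e6)    # civilisation lifetime, years
    return R_star * f_p * n_e * f_l * f_i * f_c * L

rng = random.Random(2009)
draws = [drake_draw(rng) for _ in range(10_000)]
mean, sd = statistics.mean(draws), statistics.stdev(draws)
```

The mean and standard deviation summarise the simulated population of outcomes, which is exactly the kind of comparison Forgan runs for his three hypotheses.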

Better still, it allows you to compare the results of different models of civilisation creation.

Forgan has clearly had some fun comparing three models:

i. panspermia: if life forms on one planet, it can spread to others in a system

ii. the rare-life hypothesis: Earth-like planets are rare but life progresses pretty well on them when they occur

iii. the tortoise and hare hypothesis: Earth-like planets are common but the steps towards civilisation are hard

And the results are:

i. panspermia predicts 37964.97 advanced civilisations in our galaxy with a standard deviation of 20.

ii. the rare life hypothesis predicts 361.2 advanced civilisations with an SD of 2

iii. the tortoise and hare hypothesis predicts 31573.52 with an SD of 20.

Those are fantastically precise numbers. But before you start broadcasting to your newfound friends with a flashlight, it’s worth considering their accuracy.

The results of simulations like this are no better than the assumptions you make in developing them. And these, of course, are based on our manifestly imperfect but rapidly improving knowledge of the heavens.

The real question is whether we’ll ever have good enough data to plug in to a model like this to give us a decent answer, without actually discovering another intelligent civilisation. And the answer to that is almost certainly not.

Ref: A Numerical Testbed for Hypotheses of Extraterrestrial Life and Intelligence

How chemotherapy can make tumors bigger

Wednesday, October 15th, 2008


While our understanding and treatment of cancer has advanced significantly in recent years, most specialists would readily admit that the dynamics of tumor growth are poorly understood.

It’s easy to see why. Tumor growth is a multifaceted process that involves complex interactions between many types of cells and their surrounding tissue.

So it’s interesting to see a multidisciplinary group of mathematicians, cell biologists, cancer specialists and chemists take on the task of modeling tumor growth and the effect that drug treatments have on it. Their results are startling, counterintuitive and frightening.

Such a model has to reproduce a number of important behaviors. For example, the availability of nutrients is the most important factor in tumor growth. When tumors reach 2 mm across, diffusion of oxygen and other nutrients is no longer enough to sustain them and so they enter a new phase in which they grow their own blood vessels to keep themselves nourished.

It is this that Peter Hinow at the University of Minnesota and buddies say they’ve captured in detail for the first time.

They also looked at the way in which drug treatments affect tumor growth. We know that the endothelial cells that line blood vessels play a dual role in tumor growth. On the one hand, blood vessels supply the tumor with the nutrients needed to help it grow. Many chemotherapy treatments target endothelial cells on the assumption that killing them will cut off the tumor’s lifeblood.

But on the other hand, blood vessels are also the channel along which cancer drugs must pass to reach the tumor.

So what is the effect of killing endothelial cells? That all depends on how the treatments are applied, say Hinow and colleagues. Their frightening conclusion is that, applied to the tumor in the right way, chemotherapy treatments can dramatically reduce the size of a tumor.

But applied in the wrong way, without due consideration for the structure of the tumor, chemotherapy treatments can cut off the supply of cancer-fighting drugs to a tumor, causing it to grow.
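The trade-off can be caricatured in one line (a toy of my own, not the Hinow group’s spatial model): let vessel density feed the tumor linearly, but assume drug delivery benefits faster from a dense vasculature.

```python
def net_growth_rate(v, g=1.0, k=2.0):
    """Toy net growth rate for vessel density v in [0, 1]:
    nutrients arrive at rate g*v, drug-induced kill at rate k*v*v
    (assuming drug delivery gains more from dense vasculature).
    Positive -> the tumor grows; negative -> it shrinks."""
    return g * v - k * v * v

# With a healthy vasculature the drug wins and the tumor shrinks...
shrinking = net_growth_rate(0.9)   # 0.9 - 1.62 = -0.72
# ...but destroy too many vessels and the drug can't get in, while
# enough nutrient still arrives: the tumor grows again.
growing = net_growth_rate(0.3)     # 0.3 - 0.18 = +0.12
```

The sign flip at v = g/k is the point: an anti-vascular treatment that overshoots can turn a shrinking tumor into a growing one.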

So chemotherapy can end up making tumors bigger rather than smaller.

That’s a shocking and important result.

Ref: A Spatial Model of Tumor-Host Interaction: Application of Chemotherapy