Archive for September, 2007

The missing language link

Monday, September 10th, 2007

The distribution of languages is the result of the movin’ and migratin’ of millions of people over tens of thousands of years. As a fossil of human history, it’s unrivalled in its richness.

So understandin’ this distribution is a major task for them linguists and them historians who want to know more about our ancestors’ locomotin’ habits. One interestin’ feature is that the size of language families follows a power law, like many other natural and social phenomena, whereas the distribution of language sizes, counted by number of speakers, looks quite different.

To understand why this might be, researchers have been a-buildin and a-tinkerin with computer models of language distribution in an attempt to match the observed data. This week, Paulo Murilo Castro de Oliveira aka “Mr Margarine” at the Ecole Superieure de Physique et de Chimie Industrielles in Paris and colleagues say they’ve cracked it.

Mr Margarine says the trick is to combine two computer models: one that simulates the migration of peoples and the propagation of languages and another that simulates the linguistic structure of languages and how they evolve.
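Them two models ain’t spelled out here, so here’s a lil Python toy of my own devisin to show the flavour of such a coupled simulation: bit-string “languages” that mutate as speakers colonise a grid. It ain’t de Oliveira’s actual algorithm, mind, just a caricature of the idea.

    # Toy sketch only: bit-string "languages" spreading over a grid with mutation.
    # A minimal caricature of a coupled migration/evolution model, NOT the
    # actual algorithm of de Oliveira and colleagues.
    import random
    from collections import Counter

    SIZE, STEPS, BITS, MUTATION = 32, 200000, 8, 0.01
    grid = [[None] * SIZE for _ in range(SIZE)]
    grid[SIZE // 2][SIZE // 2] = 0           # one founding language, a BITS-bit string

    def neighbours(x, y):
        return [((x + dx) % SIZE, (y + dy) % SIZE)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    for _ in range(STEPS):
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        if grid[x][y] is None:
            continue                         # nobody here to migrate
        nx, ny = random.choice(neighbours(x, y))
        if grid[nx][ny] is None:             # migration: colonise an empty site
            lang = grid[x][y]
            if random.random() < MUTATION:   # evolution: flip one bit of the language
                lang ^= 1 << random.randrange(BITS)
            grid[nx][ny] = lang

    sizes = Counter(l for row in grid for l in row if l is not None)
    print(sorted(sizes.values(), reverse=True))  # eyeball the size distribution

In a toy like this, languages whose bit-strings nearly match would get lumped into the same family, and it’s the statistics of families and languages together that the real model is claimed to get right.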

The result is a model that accurately reproduces the observed distribution of languages and language families for the first time. This is a potential goldmine.

Mr Margarine is so confident in his model that he says it can be used to predict undiscovered features of language distribution.

That’s a big claim but one that rings hollow given that slippery Mr Margarine ain’t sayin’ what any of these undiscovered features might be.
Ref: arxiv.org/abs/0709.0868: A Computer Simulation of Language Families

Shorts n’ knickers

Sunday, September 9th, 2007

This week’s papers that never made the dizzy heights of a full post in the arXivblog:

The Evolution of the World Trade Web

Thermal Logic Gates: Computation with Phonons

The History of Artificial Intelligence

Modelling urban street patterns

Where Dirac’s monopoles are a-hidin

Invisibility Cloak 2.0

Game theory and the future of Adwords

Saturday, September 8th, 2007

Ain’t Google Adwords a miracle o’ modern science? Here’s a system that searches your web page for keywords, hunts for advertisers who wanna have their message displayed next to these keywords and then auctions the advertising space to the highest bidder. All in the twinklin’ of an eye. Adwords is so good that it’s made Google millions or billions or zillions (who’s countin’?).

What can game theory tell us about the future of this digital auction house? According to Sudhir “Secrets” Singh, an electrical engineer at the University of California, Los Angeles and his buddies, game theory suggests that everybody would benefit by the emergence of a new kind of online business that exploits the inefficiencies of the Adwords system.

Here’s the thinkin’. There is a limit to the number of advertising slots on each page and this leaves advertisers a-scramblin and a-scufflin for spaces next to the most popular keywords. Inevitably, the advertisers who miss out are left a-sobbin and a-whimperin at the end of the day. The sobbers and whimperers are perfect fodder for the new businesses.

According to Secrets Singh, these businesses are gonna buy up popular keywords and then resell them to advertisers who want to ensure they don’t miss out on a slot. Secrets and his crew have built a model of this new market and calculated the revenues and payoffs for all parties. The model assumes that this game is a symmetric Nash equilibrium, meaning that everybody plays in the same way (or at least that nobody has anything to gain by unilaterally adopting a different strategy).
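The paper’s maths ain’t reproduced here, but a lil back-of-the-envelope toy (my made-up numbers and a much-simplified mechanism, not Secrets Singh’s actual model) shows why capacity constraints leave room for a middleman.

    # Back-of-the-envelope toy, NOT the model of Singh et al.: a slot delivers
    # more impressions than any single capacity-constrained advertiser can use,
    # so a hypothetical mediator who buys the slot and parcels the impressions
    # out realises more total value. All numbers are invented.
    impressions = 100                       # impressions the slot delivers per day
    demand = {"A": (0.10, 40),              # advertiser: (value per impression, capacity)
              "B": (0.08, 40),
              "C": (0.05, 40)}

    # Without a mediator: top bidder A takes the slot, absorbs only 40
    # impressions, and the other 60 go to waste.
    value_no_mediator = demand["A"][0] * demand["A"][1]
    print("value realised without mediator:", value_no_mediator)    # 4.0

    # With a mediator: buy all 100 impressions, then parcel them out to
    # advertisers in descending order of per-impression value until they run out.
    left, total = impressions, 0.0
    for name, (value, cap) in sorted(demand.items(), key=lambda kv: -kv[1][0]):
        take = min(cap, left)
        total += value * take
        left -= take
    print("value realised with mediator:", total)   # 0.10*40 + 0.08*40 + 0.05*20 = 8.2

The gap between 4.0 and 8.2 is the pie that the auctioneer, the mediator and the advertisers get to carve up between em.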

Under these circumstances everybody wins–the auctioneer, the reseller and the bidders.

Ain’t that a bee-yoo-tiful story? So y’all gonna see resellers springin up all over the place soon. Secrets Singh and his buddies even have a start-up called Ilial.com and there ain’t no prizes for guessing what that’s gonna be doin’.

They’ll already be a-dreamin and a-wondrin about the zillions they gonna make.

I know y’all lurv a happy endin. So y’all keep quiet about Adwords not being a symmetric Nash equilibrium. Y’hear?

Ref: arxiv.org/abs/0709.0204: Capacity Constraints and the Inevitability of Mediators in Adword Auctions

The zeroth theorem

Friday, September 7th, 2007

The Zeroth Theorem in the history of physics states that a discovery named after an individual often did not originate with that person.

It may be pompous and contrived (and let’s face it, who ain’t?) but the Zeroth Theorem is surely worth a post since David “Orc” Jackson from the Lawrence Berkeley National Laboratory has done such a good job a-huntin and a-scourin for the scoundrels who’ve taken credit for things they ain’t done.

Orc Jackson says there ain’t no shortage of examples of the Zeroth Theorem in the history of science and his paper is full of em. Here are a few:

Avogadro’s number (6 x 10^23) was first calculated by Johann Loschmidt in 1865, at least 40 years before Avogadro was posthumously and mistakenly given the credit.

Olbers’ paradox was discussed at least 150 years before Olbers was born.

The Dirac delta function was invented by the English electrical engineer Oliver Heaviside 30 years before Paul Dirac published his version.

And the Lorentz gauge condition was dreamt up by a bloke with an almost identical name (Ludvig Lorenz) almost 40 years before Hendrik Lorentz published it in 1904.

I can guess what y’all a-thinkin: who was this Zeroth bloke and whose idea did he steal for this theorem?

Ref: arxiv.org/abs/0708.4249: Examples of the Zeroth Theorem of the History of Physics

The great gravity wave affair

Thursday, September 6th, 2007

Gravity waves do one helluva job a-squeezin and a-squashin everything in their path. Plonk a big aluminum bar in the way and gravity waves will squash it in one direction while stretchin it in another. With careful measurements yer should be able to spot this.
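Just how teeny is the squeeze? A lil arithmetic with assumed but typical numbers (a strain of 10^-21 on a bar a few metres long) shows why this game is so hard.

    # Illustrative arithmetic with assumed numbers: a typical gravitational-wave
    # strain h ~ 1e-21 acting on a roughly Explorer-sized aluminium bar.
    h = 1e-21            # dimensionless strain: the wave changes lengths by dL/L = h
    L = 3.0              # bar length in metres (Explorer's bar is of this order)
    dL = h * L           # resulting change in length
    print(f"length change: {dL:.1e} m")   # ~3e-21 m, far smaller than a nucleus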

In fact in the 1960s, Joseph “Big foot” Weber at the University of Maryland claimed to have done just this, although everybody else ignored him.

Now a group at CERN in Switzerland have finished analysing the a-rumblin and a-tremblin of a bar of frozen aluminum called Explorer.

The result? Zip, zilch, zero. They ain’t found nothing.

Which shows that although we all been rootin’ for Weber all these years, he was barkin’ up the wrong tree from the start or just plain barkin’. (Unless there was some extra heavy stuff goin down somewhere in the universe in 1969 and I don’t mean in the Berkeley dorms.)

One thing though. By seein nothin, Explorer puts an upper limit on the strength of gravity waves and this limit is about the same size as the one determined by the giant laser interferometer LIGO (which ain’t seen gravity waves either).

The difference is that LIGO cost somewhere in the region of $400 million, substantially more than a lump of aluminum.

Ref: arxiv.org/abs/0708.4367: All-sky Incoherent Search for Periodic Signals with Explorer 2005 Data

The personal genome machine

Wednesday, September 5th, 2007

One cool way to sequence DNA is to pull it through a nanopore in some kinda membrane in which electrodes are embedded. As each nucleotide passes, it gets zapped by the electrodes to see what it is. (That’s the thinkin, anyway. Ain’t nobody built one a these yet.)

Trouble is that it’s hard to tell the nucleotides apart–in fact the signals they generate are identical to within statistical error.

So Shashi “Chilli con” Karna at the US Army Research Laboratory in Maryland and a few buddies have an improved design in which a pyrimidine molecule is chemically bonded to one electrode in the nanopore. When the strand of DNA is dragged through the nanopore, the nucleotides bond with the pyrimidine molecule in different ways, allowing them to be uniquely determined.
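Here’s a lil statistical toy of my own (every number invented) showin why spreadin the signals apart matters so much.

    # Toy statistics, all numbers invented: four nucleotide signals that overlap
    # within noise are unreadable; pull the means apart (as functionalizing the
    # electrode is supposed to do) and single reads become reliable.
    import random

    def accuracy(means, sigma, trials=10000):
        """Fraction of single noisy readings assigned to the right nucleotide."""
        bases = list(means)
        correct = 0
        for _ in range(trials):
            base = random.choice(bases)
            reading = random.gauss(means[base], sigma)
            guess = min(bases, key=lambda b: abs(reading - means[b]))
            correct += (guess == base)
        return correct / trials

    sigma = 1.0
    bare = {"A": 10.0, "C": 10.1, "G": 9.9, "T": 10.05}           # identical within error
    functionalized = {"A": 5.0, "C": 10.0, "G": 15.0, "T": 20.0}  # well separated
    print("bare electrodes:         ", accuracy(bare, sigma))            # ~0.25, pure chance
    print("functionalized electrode:", accuracy(functionalized, sigma))  # ~1.0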

Ain’t gonna be long before one a these things (or something like it) is sequencing your DNA. Betcha!

Ref: arxiv.org/abs/0708.4011: Functionalized Nanopore-embedded Electrodes for Rapid DNA Sequencing

Crowdquakes–the killers that cause stampedes

Tuesday, September 4th, 2007

The squeeze ‘n’ shove of Mecca pilgrimages is a-mighty frightenin. Thousands of people have died in em. Now video analysis shows that a remarkable new phenomenon called the crowdquake is behind these stampedes. The analysis also suggests how stampedes might be prevented.

Dizzy Dirk Helbing and his mates at the Swiss Federal Institute of Technology in Zurich have been a-studyin videos of crowd panic and stampedes at the Jamarat Bridge near Mecca, where more than 350 pilgrims were trampled to death in 2006 (around 250 also died in 2004).

Conventional models of crowd behaviour assume that the velocity of the flow is smooth and that it drops to zero as the density of people rises–in other words the crowd drifts to a halt.

That ain’t what happens at all though. Dizzy Dirk reckons that at densities of more than 7 people per square meter, an individual is no longer able to control his movement and the crowd begins to move as a whole.

The density within crowds varies from place to place. So where the density is above the threshold, whatcha get is regions that take on a life of their own, a-heavin and a-shovin and a-sendin pressure waves through less dense regions of the crowd.

Now here’s the interestin bit. Dizzy Dirk says that under these circumstances, chains of people can become so tightly compressed that they become momentarily locked, like sand jamming as it passes through an hourglass. When these force chains break, they release sudden, uncontrollable amounts of energy, like earthquakes.

It is these crowdquakes that knock people off their feet causing them to be trampled. Dizzy Dirk says he seen it happen in the Jamarat videos.

The solution? Prevent crowd densities from exceeding 7 bodies per square meter. In the videos of the 2006 event in which a crowdquake killed more than 350 people, Dizzy Dirk says the warning signs were clearly visible 11 minutes before it occurred. That should give the authorities time to act.
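If memory serves, Dizzy Dirk’s group flag the onset of this crowd turbulence with a “pressure” measure, roughly the local density times the variance of the velocities. Here’s a sketch of how such a warning flag might be computed from video-tracking data; the alarm threshold below is invented for illustration.

    # Sketch of a crowd-turbulence warning flag, assuming video tracking yields
    # per-cell densities and velocity samples. The "crowd pressure" form
    # (density x velocity variance) follows Helbing's measure; the alarm
    # threshold of 2.0 is invented.
    from statistics import pvariance

    def crowd_pressure(density, vx_samples, vy_samples):
        """Local density (people/m^2) times the variance of the velocity field."""
        return density * (pvariance(vx_samples) + pvariance(vy_samples))

    # One grid cell's tracking data: dense and getting jerky.
    density = 8.5                              # above the ~7 people/m^2 threshold
    vx = [0.3, -0.5, 0.8, -0.9, 0.1, 1.1]      # m/s, wildly fluctuating
    vy = [0.2, 0.7, -0.6, 0.9, -1.0, 0.4]

    P = crowd_pressure(density, vx, vy)
    if density > 7 or P > 2.0:
        print(f"warning: pressure {P:.2f}, density {density}: possible crowdquake")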

Ref: arxiv.org/abs/0708.3339: Crowd turbulence: the physics of crowd disasters

How proteins cause cataracts

Monday, September 3rd, 2007

Some things are just too important to leave to biologists or chemists. Like science, for example, which requires a physicist if yer wanna solve anything decent. There ain’t nothing physicists can’t help with, if they got some spit ‘n’ elbow grease. This week it’s protein condensation.

When proteins condense within the body they form plaques which seriously screw up ordinary function. They lie at the heart of diseases as diverse as Alzheimer’s, sickle cell disease and type II diabetes. Not to mention cataracts, a clouding of the lens in the eye which is a leading cause of blindness. This is the focus (geddit!) of Anna “Auburn” Stradner at the University of Fribourg in Switzerland and her buddies.

There are three kindsa proteins in lenses. Cataracts occur when these proteins condense. What Auburn Stradner and co have done is simulate the forces between these groups of proteins and watch what happens. Turns out that if yer assume that groups of proteins mutually repel each other but at short range also attract, then they disperse nicely and form a stable mixture in which the proteins stay at arm’s length, like nuns in a biker bar.

But switch off the short-range attraction and something interestin happens: the proteins condense, just as they do in cataracts.
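Here’s a lil 2D Monte Carlo toy of my own makin, not Auburn Stradner’s actual simulation: two species of hard disks in which like pairs always attract at short range (that’s the condensin tendency), while the attraction between unlike pairs, the bit credited with keepin the mixture stable, can be switched on and off.

    # Toy 2D Monte Carlo sketch, my own caricature and NOT Stradner et al.'s
    # simulation: hard disks of two species with short-range square-well
    # attractions. Mixing is scored as the fraction of unlike close neighbours.
    import math, random

    N, BOX, R = 60, 18.0, 0.5      # particles, box size, disk radius
    WELL, EPS = R, 1.0             # attraction range beyond contact, well depth

    def dist(a, b):
        dx = (a[0] - b[0] + BOX / 2) % BOX - BOX / 2   # periodic boundaries
        dy = (a[1] - b[1] + BOX / 2) % BOX - BOX / 2
        return math.hypot(dx, dy)

    def energy(i, pos, kind, unlike_on):
        e = 0.0
        for j in range(N):
            if j == i:
                continue
            d = dist(pos[i], pos[j])
            if d < 2 * R:
                return float("inf")    # hard core: overlaps forbidden
            if d < 2 * R + WELL:
                if kind[i] == kind[j]:
                    e -= EPS           # like-like attraction drives condensation
                elif unlike_on:
                    e -= EPS           # mutual attraction between unlike species
        return e

    def run(unlike_on, steps=30000, beta=2.0):
        side = 8                       # 8x8 lattice start: no initial overlaps
        pos = [[(i % side + 0.5) * BOX / side,
                (i // side + 0.5) * BOX / side] for i in range(N)]
        kind = [i % 2 for i in range(N)]   # half "alpha", half "gamma"
        for _ in range(steps):
            i = random.randrange(N)
            old, e0 = list(pos[i]), energy(i, pos, kind, unlike_on)
            pos[i] = [(old[0] + random.uniform(-R, R)) % BOX,
                      (old[1] + random.uniform(-R, R)) % BOX]
            e1 = energy(i, pos, kind, unlike_on)
            if e1 > e0 and random.random() > math.exp(beta * (e0 - e1)):
                pos[i] = old           # Metropolis rejection
        pairs = [(a, b) for a in range(N) for b in range(a + 1, N)
                 if dist(pos[a], pos[b]) < 2 * R + WELL]
        unlike = sum(kind[a] != kind[b] for a, b in pairs)
        return unlike / max(1, len(pairs))

    print("unlike-neighbour fraction, mutual attraction on: ", run(True))
    print("unlike-neighbour fraction, mutual attraction off:", run(False))

With the mutual attraction on, the unlike-neighbour fraction stays near a half, a well-stirred mixture. Switch it off and each species clumps up with its own kind, which is the toy version of condensation.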

Of course, the model is a lil simplistic–the proteins are hard spheres, for example. But ain’t it remarkable how a toy model like this can provide some important insight?

So with a lil speculatin, yer might imagine that protein condensation diseases are caused by a chemical change that switches off (or cancels out) the mutual short-range attraction between proteins. And that a-spottin’ and a-preventin’ that change could stop cataracts developin.

Zat help any a you biologists out there?

Ref: arxiv.org/abs/0708.3370: New Insight into Cataract Formation — Enhanced Stability through Mutual Attraction

Trouble afoot for cosmic inflation

Sunday, September 2nd, 2007

Things ain’t all rosy for the Big Bang; it just don’t explain why the universe looks the way it does and the theoretical fixes proposed by them astrobods are a-crackin and a-crumblin.

Here’s one problem with the Big Bang. It involves these giant space thermometers like COBE and WMAP which tell us that wherever we look, the universe is the same temperature: it is in thermal equilibrium.

Now, the observable universe is about 28 billion light years across but only 14 billion years old. So there is no way that heat could have travelled between opposite sides of the cosmos–it’s just too far. So a thermal equilibrium ain’t possible. That leaves only the tiny possibility that the universe looks like it does by chance; and that makes them cosmobods all suspicious and sweaty.
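The arithmetic, in case yer wanna check it yerself (standard textbook numbers; puttin the expansion details back in only sharpens the paradox):

    # Back-of-the-envelope horizon problem, standard textbook arithmetic.
    age = 13.7e9                 # years since the Big Bang (roughly)
    reach = age                  # light-years light can have travelled in that time
    separation = 2 * reach       # opposite edges of the observable universe
    print(f"edges are {separation:.1e} ly apart; light's reach is {reach:.1e} ly")
    # The two sides can never have exchanged heat, yet they share a temperature.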

So they did a lil a-dreamin and a-thinkin and in 1981 came up with an ah-dear called inflation. It goes like this. After the Big Bang, when the universe was just a fraction of a second old, it suddenly expanded in size. Not by a lil bitty amount but by a gigantic amount. Get this: by 26 orders of magnitude. That’s alotta expandin.

Inflation smooths out irregularities and makes the universe neat ‘n’ tidy, just as we see it today. It solves everything or so them astrobods thought.

But earlier this year, cosmologists calculated that the probability of inflation actually occurring in this universe is vanishingly small. And this week William “Half” Nelson at King’s College London says that when yer take quantum cosmology into account (and who likes to leave that out), inflation is even more unlikely.

So the problem of why the universe looks like it does raises its ugly head again. What could have caused such an unlikely event as inflation? And if it weren’t inflation, what was it?

There’s gonna be a-plenty of hand-wringin’ over this one.

Ref: arxiv.org/abs/0708.3288: The probability of Inflation in Loop Quantum Cosmology

Do single photons tunnel faster than light?

Saturday, September 1st, 2007

This ain’t as silly a question as it sounds. Almost 15 years ago a group at Berkeley raced photons down two tracks of identical length. One track was a straight line through a vacuum; the other was the same except for a barrier that the photons had to tunnel through to reach their destination.

Believe it or not, every time the race was run, the photons that had done the tunnelin came in by a head. That got the Berkeley group a-puzzlin and a-wondrin. The only way they could explain it was if the photons had tunnelled through the barrier at 1.7 times the speed of light. And that’s impossible, right?
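For the record, here’s how a number like 1.7c falls out of such a race. The figures are illustrative, not the Berkeley group’s actual data.

    # Illustrative arithmetic only, not the Berkeley group's data: infer an
    # apparent tunnelling speed from the head start of the tunnelled photon.
    c = 3.0e8                    # speed of light, m/s
    d = 1.1e-6                   # barrier thickness, m (order of such experiments)
    t_vacuum = d / c             # ~3.7 fs to cross that distance at light speed
    t_tunnel = t_vacuum / 1.7    # arrival time implied by the observed head start
    print(f"apparent tunnelling speed: {d / t_tunnel / c:.1f} c")   # 1.7 c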

A few others repeated the result and before long everybody was a-jumpin and a-clappin at the superluminal speeds they were a-generatin. Never mind relativity, which they explained away with some hocus pocus, sayin that only the group velocity of light travels at superluminal speeds and any information carried by the photons was still limited to the speed of light.

Now Herbert “Sherbert” Winful says they got it all wrong. Photons don’t travel faster than light–they ARE light, stoopid!

Instead what causes the quickening is the way that the photons’ energy is stored and then released by the barrier. This introduces a phase change in the photon signal and it is this change, which photon detectors see as an earlier arrival time, that causes the apparent increase in speed.

So there ain’t nothing superluminal about tunnelin after all.

Ref: arxiv.org/abs/0708.3889: Do Single Photons Tunnel Faster than Light?