Archive for the ‘Nets 'n' webs’ Category

The terrible truth about economics

Friday, October 31st, 2008

“Compared to physics, it seems fair to say that the quantitative success of the economic sciences is disappointing,” begins Jean-Philippe Bouchaud,  an econophysicist at Capital Fund  Management in Paris. That’s something of an understatement given the current global financial crisis.

Economic sciences have a poor record of success, partly because the subject is hard (Newton once pointed out that modelling the madness of people is more difficult than modelling the motion of planets), but also because economists have singularly failed to apply the basic process of science to their discipline.

By that I mean the careful collection and analysis of observable evidence which allows the development of hypotheses to explain how things work.

This is a process that has worked well for the physical sciences. Physicists go to great lengths to break hypotheses and replace them with better models.

Economics (and many other social sciences) works back to front. It is common to find economists collecting data to back up an hypothesis while ignoring data that contradicts it.

Bouchaud gives several examples.  The notion that a free market works with perfect efficiency is clearly untenable. He says: “Free markets are wild markets. It is foolish to believe that the market can impose its own self-discipline.” And yet economists do believe that.

The Black-Scholes model for pricing options assumes that price changes have a Gaussian distribution. In other words, the model, and the economists who developed it, assume that the probability of extreme events is negligible. We’re all now able to reconsider that assumption at our leisure.
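
To get a feel for how much the Gaussian assumption understates the risk, here is a minimal sketch (my own illustration, not from Bouchaud’s paper) comparing the chance of a five-sigma daily move under a Gaussian with the same chance under a fat-tailed Student-t distribution, a common stand-in for real price returns; the tail exponent and the 250-trading-day year are illustrative assumptions.

```python
# Minimal sketch (not from Bouchaud's paper): how often should a 5-sigma daily
# move happen under the Gaussian assumed by Black-Scholes, versus under a
# fat-tailed Student-t distribution, a common stand-in for real returns?
import math
from scipy.stats import norm, t

move = 5.0   # size of the move, in standard deviations
nu = 3       # illustrative tail-heaviness for the Student-t (an assumption)

p_gauss = 2 * norm.sf(move)
# Rescale so the t-distribution comparison is also at 5 *standard deviations*
# (a Student-t with nu degrees of freedom has variance nu / (nu - 2)).
p_fat = 2 * t.sf(move * math.sqrt(nu / (nu - 2)), df=nu)

for label, p in [("Gaussian", p_gauss), ("Student-t", p_fat)]:
    years = 1 / (p * 250)  # assuming roughly 250 trading days per year
    print(f"{label:10s} P = {p:.1e}, i.e. roughly once every {years:,.1f} trading years")
```

The point is not the exact numbers but the orders of magnitude: the Gaussian model says such moves are essentially impossible, while the fat-tailed one says they are routine.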

Bouchaud could also have added the example of economists’ assumption that sustained and unlimited economic growth is possible on a planet with limited resources.   It’s hard to imagine greater folly.

So what is to be done? Bouchaud suggests building better models with more realistic assumptions;  stronger regulation; proper testing of financial products under extreme conditions; and “a complete change in the mindset of people working in economics”.

All these things seem like good ideas.

But Bouchaud also seems blind to the greatest folly of all, which would be to imply that the roller-coaster ride we have seen in recent weeks can somehow be avoided in complex systems of this kind.

Various physicists have  shown that stock markets demonstrate the same kind of self-organised criticality as avalanches, earthquakes, population figures, fashions, forest fires…. The list is endless.

And of course, nobody expects to be able to prevent the spread of bell bottoms or earthquakes or avalanches. If you have forests, you’re going to have forest fires.

What people do expect, however, is to have mitigation procedures in place for when these disasters do happen. That’s where econophysicists need to focus next.

Ref: arxiv.org/abs/0810.5306: Economics Needs a Scientific Revolution

Why ant colonies don’t have traffic jams

Wednesday, October 29th, 2008

Traffic jams are the bane of modern life. But could it be possible that one of this planet’s more ancient life forms could show us how to better regulate road traffic?

That’s the claim of congestion expert Dirk Helbing at the Dresden University of Technology in Germany and pals, based on a remarkable insight gained from the study of ants.

It turns out that ants are able to regulate ant traffic with remarkable efficiency. Let’s face it, you never see ants backed up and idling along a pheromone scent trail. On the contrary, ant colonies are a constant blur of organized and directed motion. How do they do it?

To find out, Helbing and pals built an ant motorway with several carriageways between a nest and a source of sugar. The carriageways had several interchanges where the ants could switch between longer and shorter routes.

Some ants soon found the shortest route to the sugar and others followed the pheromone trail they left behind until this shortest route became saturated with ants going to and from the sugar.

Then something interesting happened at the interchanges between the carriageways. When the route was about to become clogged, the ants coming back to the nest physically prevented the ants travelling to the sugar from getting onto the highway. It wasn’t a conscious action; there simply wasn’t room for two ants to pass at these congested spots. So these ants were forced to take a different route.

The result was that just before the shortest route became clogged, the ants were diverted to another route. Traffic jams never formed.
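
Here is a toy simulation of that mechanism (my own construction, not Helbing’s actual model, and the capacities and travel times are made-up numbers): outbound ants prefer the short route, but once it is close to saturation the returning ants blocking the interchange push newcomers onto the longer route, so neither route ever jams.

```python
# Toy sketch of the diversion mechanism (my own construction, not Helbing's
# model; capacities and travel times are made-up numbers).
SHORT_CAPACITY = 20             # ants the short route can hold before it clogs
SHORT_TIME, LONG_TIME = 5, 12   # travel times in steps
ARRIVALS_PER_STEP = 5           # new outbound ants appearing at the nest each step

short_route, long_route = [], []   # remaining travel time of each ant on a route

def advance(route):
    """Move every ant along one step and drop the ones that have arrived."""
    return [t - 1 for t in route if t > 1]

for step in range(200):
    short_route, long_route = advance(short_route), advance(long_route)
    for _ in range(ARRIVALS_PER_STEP):
        if len(short_route) < SHORT_CAPACITY - 1:
            short_route.append(SHORT_TIME)
        else:
            # Returning ants block the interchange near saturation, so this
            # outbound ant is shoved onto the longer carriageway instead.
            long_route.append(LONG_TIME)
    if step % 50 == 0:
        print(f"step {step:3d}: short {len(short_route):2d}/{SHORT_CAPACITY}, long {len(long_route)}")
```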

That’s an impressive feat because the efficient distribution of limited resources by decentralized, individual decisions is still an open problem in many networked systems. As Helbing puts it: “This is one of the most challenging problems in road traffic and routing of data on the internet.”

But one that ants seem to have cracked and this gave Helbing an idea. Obviously, you can’t allow cars to collide with vehicles coming in the opposite direction as a form of traffic control; but you could do the next best thing and allow them to communicate. His plan is to force cars traveling in one direction to tell oncoming vehicles what traffic conditions they are about to encounter so that they can take evasive action if necessary.

And it’s not just road traffic that might benefit. Helbing speculates that all kinds of routing processes could benefit from a similar approach.

Simple really, if you’re an ant.

Ref: arxiv.org/abs/0810.4583: Analytical and Numerical Investigation of Ant Behavior Under Crowded Conditions

How religions spread like viruses

Wednesday, October 8th, 2008

“Religions are sets of ideas, statements and prescriptions of whose validity and applicability individual humans can become convinced,” say Michael Doebeli and Iaroslav Ispolatov at the University of British Columbia in Vancouver.

In other words, religions are memes, units of cultural inheritance just like songs, languages or political beliefs. Richard Dawkins proposed the idea that memes spread in much the same way that viruses do, using humans as hosts. Some get passed from person to person and can survive for many generations. Others die away and become rapidly extinct. The most successful adapt and thrive. Evolution acts on memes in the same way it acts on our genes.

That has given Doebeli and Ispolatov an idea: “We propose to model cultural diversification in religion using techniques from evolutionary theory to describe scenarios in which the reproducing units are religious memes.”

The model they use is relatively simple, including factors such as the rates of transmission of religious memes as well as the rate of loss, but it generates some interesting results.
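
As a rough illustration of the style of model (my own stripped-down version, not the authors’ full multi-meme dynamics, and the rates are arbitrary), the fraction of a population holding a single religious meme with transmission rate beta and loss rate delta follows simple logistic-style dynamics.

```python
# Rough illustration (my own stripped-down version, not the authors' full
# model): x is the fraction of the population holding one religious meme,
# beta is the transmission rate by contact and delta the rate of loss:
#     dx/dt = beta * x * (1 - x) - delta * x
def simulate(beta=0.3, delta=0.1, x0=0.01, dt=0.1, steps=2000):
    x = x0
    history = [x]
    for _ in range(steps):
        x += dt * (beta * x * (1 - x) - delta * x)
        history.append(x)
    return history

final = simulate()[-1]
# The meme persists only when beta > delta, settling at x* = 1 - delta/beta.
print(f"long-run share of adherents: {final:.3f} (predicted: {1 - 0.1 / 0.3:.3f})")
```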

It predicts, for example, that new distinct religions should emerge as descendants of a single ancestor. Exactly this process has been observed many times in various religions such as the Catholic-Protestant split in the 16th century, and the ongoing fragmentation of a religious organisation in Papua New Guinea, which anthropologists are currently observing with interest.

This is an interesting piece of work and one that could lead to new detail in our study of memes. Religious meme transmission rates are relatively easy to measure and change more quickly than other widespread memes such as languages. So there is plenty of data to play with.

But if ever an idea was likely to ruffle a few feathers, this is it. They’ll be spluttering over their coffee and donuts tomorrow morning in Dover, Pennsylvania.

Ref: arxiv.org/abs/0810.0296: A Model for the Evolutionary Diversification of Religions

New fractal pattern found in milk and coffee

Thursday, September 25th, 2008

Next time you stare into your 9am double tall latte, look with new respect. Japanese scientists have discovered a new type of fractal in the patterns coffee makes as it mixes with milk.

Placing a heavier fluid onto a lighter fluid always results in a disturbance at their boundary known as a Rayleigh–Taylor instability.

Michiko Shimokawa and Shonosuke Ohta from Kyushu University in Japan placed coffee (Nescafe’s Gold Blend, if you must know) on top of ordinary milk, which is lighter, and watched how gravity and surface tension compete in a way that leads to Rayleigh-Taylor instability.

As soon as the coffee droplet is placed on the surface, the coffee solution spreads out, creating a fractal pattern. But this is a different kind of pattern from the ordinary fractals seen in river branches and bacterial colonies, which continue to grow and increase in area.

Instead, in coffee, parts of the pattern are continually sucked into the milk by gravity and disappear. The result is a pattern that constantly shifts.

Shimokawa and Ohta say this behaviour closely matches that of a Sierpinski carpet, which is formed by cutting a square into 9 smaller squares in a 3-by-3 grid, removing the central square and then applying the same procedure to the remaining 8 squares ad infinitum. This creates a fractal structure with dimension log 8 / log 3 ≈ 1.89.
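
That figure comes straight from the construction: each step keeps 8 copies of the square at one third the scale. Here is a short sketch (mine, not the paper’s analysis) that builds the carpet on a grid and recovers the dimension by box counting, the standard way of measuring a fractal dimension from an image.

```python
import math

# Sketch (mine, not the paper's analysis): build a depth-n Sierpinski carpet on
# a grid and estimate its fractal dimension by box counting.
def carpet(depth):
    size = 3 ** depth
    grid = [[True] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            x, y = i, j
            while x > 0 or y > 0:
                if x % 3 == 1 and y % 3 == 1:   # this cell lies in a removed centre
                    grid[i][j] = False
                    break
                x, y = x // 3, y // 3
    return grid

def box_count(grid, box):
    """Count box-by-box squares containing at least one occupied cell."""
    size = len(grid)
    return sum(
        any(grid[i][j] for i in range(bi, bi + box) for j in range(bj, bj + box))
        for bi in range(0, size, box)
        for bj in range(0, size, box)
    )

g = carpet(4)                       # an 81 x 81 approximation of the carpet
n_fine, n_coarse = box_count(g, 1), box_count(g, 3)
dimension = math.log(n_fine / n_coarse) / math.log(3)
print(f"box-counting estimate: {dimension:.2f}, exact: {math.log(8) / math.log(3):.2f}")
```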

That closely matches the dimension that the coffee fractals turn out to have. And there are other similarities too, such as the disappearing patterns.

This, say the Japanese pair, strongly implies that the coffee fractal must form in the same way as a Sierpinski carpet, following similar rules. Intriguing!

So come on chaps: what are these rules and how do they come about in a system dominated by the complexity of Rayleigh-Taylor instabilities?

Ref: arxiv.org/abs/0809.2458: Annihilative fractals of coffee on milk formed by Rayleigh–Taylor instability

How big is a city?

Wednesday, August 20th, 2008

That’s not as silly a question as it sounds. Defining the size of a city is a tricky task with major economic implications: how much should you invest in a city if you don’t know how many people live and work there?

The standard definition is the Metropolitan Statistical Area, which attempts to capture the notion of a city as a functional economic region and requires a detailed subjective knowledge of the area before it can be calculated. The US Census Bureau has an ongoing project dedicated to keeping abreast of the way this one metric changes for cities across the continent.

Clearly that’s far from ideal. So our old friend Eugene Stanley from Boston University and a few pals have come up with a better measure called the City Clustering Algorithm. This divides an area up into a grid of a specific resolution, counts the number of people within each square and looks for clusters of populations within the grid. This allows a city to be defined in a way that does not depend on its administrative boundaries.
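
The core of the idea fits in a few lines. Here is a rough reconstruction (my own sketch, not the authors’ code, and the toy grid is made up): lay population counts on a grid at some resolution, then treat every connected group of populated cells as one city and sum its population.

```python
from collections import deque

# Rough reconstruction of the clustering step (my sketch, not the authors'
# code): a "city" is a connected cluster of populated grid cells, and its size
# is the total population of the cluster.
def city_clusters(grid):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    cities = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > 0 and not seen[r][c]:
                population, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:                      # flood-fill one cluster
                    i, j = queue.popleft()
                    population += grid[i][j]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and grid[ni][nj] > 0 and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                cities.append(population)
    return sorted(cities, reverse=True)

# Made-up population grid: two settlements separated by empty cells.
toy = [[5, 9, 0, 0, 0],
       [7, 4, 0, 0, 2],
       [0, 0, 0, 3, 6]]
print(city_clusters(toy))   # -> [25, 11]
```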

That has significant implications because clusters depend on the scale at which you view them. For example, a 1 kilometre grid sees New York City’s population as a cluster of 7 million, a 4 kilometre grid makes it 17 million, and the cluster identified with an 8 kilometre grid, which encompasses Boston and Philadelphia, has a population of 42 million. Take your pick.

The advantage is that this gives a more or less objective way to define a city. It also means we’ll need to reanalyse some of the fundamental properties that we ascribe to city growth. For example, the group has studied only a limited number of cities in the US, UK and Africa but already says we’ll need to rethink Gibrat’s law, which states that a city’s growth rate is independent of its size.

Come to think of it, Gibrat’s is a kind of weird law anyway. Which means there may be some low hanging fruit for anybody else who wants to re-examine the nature of cities.

Ref: arxiv.org/abs/0808.2202: Laws of Population Growth

Schroedinger-like PageRank wave equation could revolutionise web rankings

Thursday, August 7th, 2008

The PageRank algorithm that first set Google on a path to glory measures the importance of a page in the world wide web.  It’s fair to say that an entire field of study has grown up around the analysis of its behaviour.

That field looks set for a shake up following the publication today of an entirely new formulation of the problem of ranking web pages. Nicola Perra at the University of Cagliari in Italy and colleagues have discovered that when they re-arrange the terms in the PageRank equation the result is a Schroedinger-like wave equation.

So what, I hear you say: that’s just a gimmick. Perhaps, but the significance is that it immediately allows the entire mathematical machinery of quantum mechanics to be brought to bear on the problem: that’s 80 years of toil and sweat.

Perra and pals point out some of the obvious advantages and disadvantages of the new formulation.

First, every webpage has a quantum-like potential. The topology of this potential gives the spatial distribution of PageRank throughout the web. What’s more, this distribution can be calculated in a straightforward way which does not require iteration as the conventional PageRank algorithm does.

So the PageRank can be calculated much more quickly for relatively small webs and the team has done a simple analysis of the PageRanking of the .eu domain in this way. However, Perra admits that the iterative method would probably be quicker when working with the tens of billions of pages that make up the entire web.
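
For comparison, the conventional method they refer to is the standard power-iteration PageRank. A bare-bones version (my own textbook-style sketch, not the paper’s code, with a made-up toy web) looks like this.

```python
# Bare-bones textbook-style PageRank by power iteration (my own sketch of the
# conventional algorithm the new formulation is being compared against).
def pagerank(links, damping=0.85, tol=1e-9, max_iter=1000):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new_rank = {p: (1 - damping) / n for p in pages}
        for p, targets in links.items():
            if targets:
                share = damping * rank[p] / len(targets)
                for q in targets:
                    new_rank[q] += share
            else:                       # dangling page: spread its rank evenly
                for q in pages:
                    new_rank[q] += damping * rank[p] / n
        if sum(abs(new_rank[p] - rank[p]) for p in pages) < tol:
            return new_rank
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))
```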

But the great promise of this Schroedinger-like approach is something else entirely. What the wave equation allows is a study of the dynamic behaviour of PageRank: how the rankings change and under what conditions.

One of the key tools for this is perturbation theory. It’s no exaggeration to say that perturbation theory revolutionised our understanding of the universe when it was applied to quantum theory in the 1920s and 1930s.

The promise is that it could do the same for our understanding of the web and, if so, this field is in for an interesting few years ahead.

Ref: arxiv.org/abs/0807.4325: Schroedinger-like PageRank equation and localization in the WWW

The curious kernels of dictionaries

Monday, July 7th, 2008

If you don’t know the meaning of a word, you look it up in the dictionary. But what if you don’t know the meaning of any of the words in the definition? Or the meaning of any of the words in the definitions of these defining words? And so on ad infinitum.

This is known as the “symbol grounding problem” and is related to the nature of meaning in language.  The way out of this problem is to assume that we somehow automatically “know” the meaning of a small kernel of words from which all others can be defined.

The thinking is that some words are so closely linked to the object to which they refer that we know their meaning without a definition. Certain individuals, events and  actions apparently fall into this category. These words are called “grounded”.

How this controversial idea might work, we’ll leave for another day. The question we’re pondering today, thanks to Alexandre Blondin Masse at the University of Quebec in Canada, is this: how small a kernel of grounded words do we need to access the entire dictionary?

We don’t have an answer for you but Blondin Masse and pals have a method based on the concept of a reachable set: “a larger vocabulary whose meanings can be learned from a smaller vocabulary through definition alone, as long as the meanings of the smaller vocabulary are themselves already grounded”.

The team have even developed algorithms to compute a reachable set for any given dictionary and, from that, the size of the grounded kernel.
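
The flavour of such an algorithm is easy to convey with a toy version (my own sketch, not the authors’ implementation, with an invented miniature dictionary): starting from a candidate grounded kernel, repeatedly add any word whose entire definition is already understood, until nothing new can be learned.

```python
# Toy version of the reachable-set computation (my sketch, not the authors'
# implementation): a word becomes learnable once every word in its definition
# is already known.
def reachable(dictionary, kernel):
    """dictionary maps a word to the set of words used in its definition."""
    known = set(kernel)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            if word not in known and definition <= known:
                known.add(word)
                changed = True
    return known

# A made-up miniature dictionary for illustration.
mini_dictionary = {
    "cat":    {"small", "animal"},
    "kitten": {"young", "cat"},
    "animal": {"living", "thing"},
    "pet":    {"animal", "kept", "home"},
}
kernel = {"small", "young", "living", "thing", "kept", "home"}
print(sorted(reachable(mini_dictionary, kernel) - kernel))
# -> ['animal', 'cat', 'kitten', 'pet']: every word is reachable from this kernel
```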

It has to be said that modern dictionaries already work like this: they are based on a defining vocabulary of about 2000 words from which all others are defined, although this system does not appear to be rigorously enforced, say Blondin Masse and co.

Nobody knows whether 2000 words is close to the theoretical limit for a grounding kernel. But we expect Blondin Masse and pals to tell us soon.

Ref: arxiv.org/abs/0806.3710: How Is Meaning Grounded in Dictionary Definitions?

Cellphone records reveal the basic pattern of human mobility

Wednesday, June 11th, 2008

A few months back, we saw what happens when researchers get their paws on anonymised mobile phone records. Albert-Laszlo Barabasi at the University of Notre Dame in Indiana and some buddies used them to discover entirely new patterns of human behaviour.

Now Barabasi has dug deeper into the data and discovered a single basic pattern of human mobility. It’s nothing special: lots of smallish journeys interspersed with occasional long ones (the lengths of the journeys actually follow a power law).
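
To see what that kind of distribution feels like, here is a tiny sketch (my own illustration; the exponent and units are assumptions, not the paper’s fitted values) that draws journey lengths from a power law and shows how a handful of long trips dominate the total distance covered.

```python
import random

# Tiny illustration (mine; the exponent is an assumption, not the paper's fit):
# journey lengths drawn from a power law P(d) ~ d^(-alpha) are mostly short,
# but the rare long trips dominate the total distance.
random.seed(1)
alpha, d_min = 1.75, 1.0   # tail exponent and shortest journey (arbitrary units)

def journey():
    # Inverse-transform sampling for a Pareto-type power law.
    return d_min * (1 - random.random()) ** (-1.0 / (alpha - 1.0))

trips = sorted((journey() for _ in range(100_000)), reverse=True)
share_of_top_1pc = sum(trips[:1000]) / sum(trips)
print(f"median trip: {trips[len(trips) // 2]:.1f}, longest: {trips[0]:.0f}")
print(f"the longest 1% of trips cover {100 * share_of_top_1pc:.0f}% of all the distance")
```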

That’s more or less what you’d expect but experimental confirmation is important.

Human mobility is one of the crucial factors in understanding the spread of epidemics. Until now, the models that predict how disease spreads have had to rely on educated guesses about the way human travel patterns might affect this process.

Barabasi’s work will take just a little of the guesswork out of future efforts, and that can’t be bad.

Ref: arxiv.org/abs/0806.1256: Understanding Individual Human Mobility Patterns

Why do online opinions evolve differently to offline ones?

Thursday, June 5th, 2008

The way in which opinions form, spread through societies and evolve over time is a hot topic among researchers because of their increasing ability to measure and simulate what’s going on.

The field offers some juicy puzzles that look ripe for picking by somebody with the right kind of insight. For example, why do people bother to vote in elections in which they have little control over the result, when a “rational” individual ought to conclude that it is not worth taking part?

A similar conundrum is why people contribute to online opinion sites such as Amazon’s book review system or the Internet Movie Database’s (IMDB) ratings system. When there are already a hundred 5-star reviews, why contribute another?

Today Fang Wu and Bernardo Huberman at the HP Laboratories in Palo Alto present the results of their analysis of this problem. And curiously, it looks as if online opinions form in a subtly different way to offline ones.

The researchers studied the patterns of millions of opinions posted on Amazon and the IMDB and found some interesting trends. They say:

Contrary to the common phenomenon of group polarization observed offline, we measured a strong tendency towards moderate views in the course of time.

That might come as a surprise to anyone who has followed the discussion on almost any online forum, but Wu and Huberman have an idea of how moderation seems to evolve. They suggest that people are most likely to express a view when their opinion is different from the prevailing consensus, because such a contribution will have a bigger effect on the group.

They tested the idea  by looking at the contributions of people who added detailed reviews against those who simply clicked a button. Sure enough, those who invest more effort are more likely to have an opposing view. It is these opposing views that tend to moderate future views.
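
A back-of-the-envelope simulation (my own toy model, not Wu and Huberman’s analysis; the rating scale and posting rule are assumptions) shows how that mechanism pulls a rating towards the middle: if visitors are more likely to post the further their private opinion sits from the displayed average, an enthusiastic early consensus gets steadily diluted by dissenting reviews.

```python
import random

# Toy model (mine, not Wu and Huberman's analysis): each visitor has a private
# rating in [1, 5] and is more likely to post a review the further that rating
# sits from the current displayed average, so dissent is over-expressed.
random.seed(7)
ratings = [5.0, 5.0, 5.0]                 # an enthusiastic early consensus
for _ in range(5000):
    private = random.uniform(1, 5)        # this visitor's true opinion
    average = sum(ratings) / len(ratings)
    urge_to_post = min(1.0, abs(private - average) / 4.0)
    if random.random() < urge_to_post:
        ratings.append(private)

print(f"average after {len(ratings)} reviews: {sum(ratings) / len(ratings):.2f}")
# The displayed average drifts from 5.0 towards the moderate middle of the scale.
```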

By contrast, sites such as Jyte, in which users can only click a button to give their opinion, tend to show herding behaviour in which people copy their peers, just as they often do offline.

Wu and Huberman’s analysis raises more questions than answers for me. But they point out that the study of online opinions has been neglected until now.  That looks set to change.

Ref: arxiv.org/abs/0805.3537: Public Discourse in the Web Does Not Exhibit Group Polarization

The science of scriptwriting

Wednesday, June 4th, 2008

You don’t have to delve far into the realms of scriptwriting before you’ll be pointed towards a book called Story by Robert McKee, which explains why scriptwriting is more akin to engineering than art. McKee examines story-telling like a biologist dissecting a rat. But after taking it apart, he explains how to build a story yourself using rules that wouldn’t look out of place in a computer programming text book.

McKee has become so influential that huge numbers of films, perhaps most of them, and many TV series are now written using his rules. But the real measure of his success is that there are even anti-McKee films such as Adaptation that attempt to burst McKee’s bubble.

Given that scriptwriting has become so formulaic, shouldn’t science have a role to play in analysing it? That’s exactly what Fionn Murtagh and pals at the Royal Holloway College, University of London have done in a project that analyses scripts in a repeatable, unambiguous and potentially automatic way.

Using McKee’s rules they compare the script of the film Casablanca, a classic pre-McKee movie, with the scripts of six episodes of CSI (Crime Scene Investigation), a classic post-McKee production, and find numerous similarities.

That’s hardly surprising since McKee learnt his trade analysing films such as Casablanca, so anything written using his rules should have these similarities.

What’s interesting about the work is that Murtagh and mates want to use their technique to develop a kind of project management software for scriptwriting. That’s an ambitious goal but one that might find a handy niche market, particularly since many scripts, TV serials in particular, are now written by teams rather than individuals and so need careful project management from the start.

The challenge for Murtagh and co will be to turn this approach into a bug-free, easy-to-use package that has the potential to become commercially viable. And for that they’ll almost certainly need some outside help and funding. Anybody got any spare cash?

Ref: arxiv.org/abs/0805.3799: The Structure of Narrative: the Case of Film Scripts