Archive for the ‘Nets ‘n’ webs’ Category

The fundamental patterns of traffic flow

Monday, March 9th, 2009


Take up the study of earthquakes, volcanoes or stock markets and the goal, whether voiced or not, is to find a way to predict future “events” in your field. In that sense, these guys have something in common with scientists who study traffic jams.

The difference is that traffic experts might one day reach their goal. The complexity of traffic flow, while awe-inspiring, may well be fundamentally different to the complexity of stock markets and earthquake events.

At least that’s how Dirk Helbing at the Institute for Transport & Economics at the Technical University of Dresden in Germany, and his buddies see it.

Helbing says that one long-standing dream of traffic experts is to identify the fundamental patterns of traffic congestion from which all other flows can be derived: a kind of periodic table of traffic flows.

Now he thinks he has found it: a set of fundamental patterns of traffic flow that, when identified on a particular road, can be used to create a phase diagram of future traffic states.

The phase diagrams can then be used to make forecasts about the way in which the flow might evolve.

That’ll be handy. But only if it’s then possible to do something to prevent the congestion. And that may be the trickiest problem of all.

Ref: arxiv.org/abs/0903.0929: Theoretical vs. Empirical Classification and Prediction of Congested Traffic States

Why spiders’ silk is so much stronger than silkworms’

Friday, February 27th, 2009


Spider silk and silkworm silk are almost identical in chemical composition and microscopic structure. And yet spider silk is far tougher. “One strand of pencil thick spider silk can stop a Boeing 747 in flight,” say Xiang Wu and colleagues at the National University of Singapore. A pencil-thick strand of silkworm silk, on the other hand, couldn’t. Why the difference?

Xiang and co say they’ve worked out why by successfully simulating the structure of both silks for the first time. Both spider silk and silkworm silk are made up of long chains of amino acids called polypeptides and are particularly rich in the amino acids alanine and glycine.

Various imaging techniques have shown that the sequences of amino acids differ slightly between spider and silkworm silk, but this alone cannot explain the huge difference in strength, says Xiang.

Instead, the secret appears to lie in the way polypeptide chains fold into larger structures. Xiang’s model shows that a subtle difference exists between the silks in structures called beta sheets and beta crystallites. This insight has allowed the team to model for the first time the way in which both silks break.

That’s important because being able to predict the mechanical properties of an organic material simply by studying its structure is going to be increasingly useful. It may even allow us to take a better stab at making spider-like silk synthetically for the first time.

Anybody with a 747, watch out.

Ref: arxiv.org/abs/0902.3518: Molecular Spring: From Spider Silk to Silkworm Silk

Econophysicists identify world’s top 10 most powerful companies

Monday, February 9th, 2009


The study of complex networks has given us some remarkable insights into the nature of systems as diverse as forest fires, the internet and earthquakes. This kind of work is even beginning to give econophysicists a glimmer of much-needed insight into the nature of our economy. In a major study, econophysicists have today identified the most powerful companies in the world based on their ability to control stock markets around the globe. It makes uncomfortable reading.

Next generation search engines could rank sites by “talent”

Wednesday, January 7th, 2009


How will the next generation of search engines outperform Google’s all-conquering PageRank algorithm?

One route might be to hire Vwani Roychowdhury at the University of California, Los Angeles and his buddies who have found a fascinating new way to tackle the problem of website rankings.


First “movie” of fruitfly gene network aging

Tuesday, January 6th, 2009


One of the major goals in biology is to reconstruct the complex genetic networks that operate inside cells, and to “film” how these networks evolve during the course of an organism’s development.

Today, Eric Xing and buddies at Carnegie Mellon University in Pittsburgh claim to have worked out the way patterns of gene expression change in fruitflies over the course of their entire development. That’s a world first and no mean feat to boot.


A clue in the puzzle of perfect synchronization in the brain

Thursday, November 27th, 2008


“Two identical chaotic systems starting from almost identical initial states, end in completely uncorrelated trajectories. On the other hand, chaotic systems which are mutually coupled by some of their internal variables often synchronize to a collective dynamical behavior,” write Meital Zigzag at Bar-Ilan University in Israel and colleagues on the arXiv today.

And perhaps the most fascinating of these synchronized systems are those that show zero lag; that is, those that are perfectly synched. For example, in widely separated regions of the brain, zero-lag synchronization of neural activity seems to be an important feature of the way we think.

This type of synchronization also turns out to be an important feature of chaotic communication. This is the process by which information can be hidden in the evolution of a chaotic attractor and retrieved by subtracting the same chaotic background to reveal the original message.

Obviously, this only works when the transmitter and receiver are coupled so that they evolve in exactly the same way. For a long time physicists have wondered whether this effect can be used to send data securely and, earlier this year, they proved that the security can only be guaranteed if the synchronisation has zero lag.

But how does zero lag occur and under what range of conditions?

Zero lag seems to occur when the delays in the mutual coupling and self-feedback between two systems act to keep them in step. In effect, both systems lag, but by exactly the same amount.

Until recently, this was thought to occur only for a very small subset of parameters in which the delays are identical or have a certain ratio. But these limits are so exact and constricting that it’s hard to imagine a wet system such as the brain ever achieving them.
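To see what that classic identical-delay case looks like, here is a toy numerical sketch (my own, using a Bernoulli map and hand-picked parameters, not the system studied in the paper). Two chaotic maps start from completely different states, yet the difference between them dies away to zero with no lag:

```python
import numpy as np

# Toy sketch (not the paper's system): two chaotic Bernoulli maps, mutually
# coupled with delay tau_c and each fed back to itself with delay tau_s.
# With tau_s == tau_c and suitable couplings, |x_t - y_t| collapses to zero
# even though both trajectories remain chaotic.

def f(x, a=1.5):
    """Chaotic Bernoulli map."""
    return (a * x) % 1.0

tau_c = 10              # mutual-coupling delay
tau_s = 10              # self-feedback delay (identical delays favour zero lag)
eps, kappa = 0.9, 0.4   # overall coupling strength, self-feedback share
T = 20000

rng = np.random.default_rng(0)
hist = max(tau_c, tau_s) + 1
x = rng.random(T + hist)    # random, *different* initial histories
y = rng.random(T + hist)

for t in range(hist, T + hist):
    x[t] = (1 - eps) * f(x[t-1]) + eps * (kappa * f(x[t-1-tau_s])
                                          + (1 - kappa) * f(y[t-1-tau_c]))
    y[t] = (1 - eps) * f(y[t-1]) + eps * (kappa * f(y[t-1-tau_s])
                                          + (1 - kappa) * f(x[t-1-tau_c]))

print("mean |x - y| over the last 1000 steps:",
      np.abs(x[-1000:] - y[-1000:]).mean())
```

Make the two delays different and the neat cancellation that keeps the maps in step is lost, which is why the identical-delay requirement looked so hard for a wet system to satisfy.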

Now Zigzag and friends have shown that it is possible to get around these strict limits by having more than one type of feedback between the systems. When that happens, it’s possible to have zero-lag synchronisation over a much wider set of parameters.

That’s going to have important implications for our understanding of synchronisation in the brain and for the development of secure chaotic communication. Betcha!

Ref: arxiv.org/abs/0811.4066: Emergence of Zero-Lag Synchronization in Generic Mutually Coupled Chaotic Systems

Counting negative links makes network models more realistic

Tuesday, November 18th, 2008


Spotting communities within networks is a big deal. Not least for search engines that rely heavily for their results on the communities that form when websites point to each other. If a lot of websites point to another site, then that proves it is of value.

At least that’s what everyone has assumed. But links can be negative as well as positive. If lots of websites point to another site specifically to say how bad it is, then the community is actually saying the site has little value.

Being able to tell the difference, then, is crucial, not only for search results but in understanding the structure of the network and the communities that emerge.

Vincent Traag at the University of Amsterdam in the Netherlands and a buddy say that including negative as well as positive links profoundly changes the pattern of communities that you find in a network.

They’ve applied the idea to a dataset called the Correlates of War that provides details of agreements and disputes between 138 countries between 1993 and 2001.

In terms of the network model, a negative link is treated just like a positive link, but with the opposite sign.
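As a rough illustration of why the sign matters (my own sketch, not the community-detection method used in the paper), here is a toy score for a candidate partition of a signed network: positive links kept inside communities count in its favour, negative links trapped inside count against it.

```python
import numpy as np

# Rough sketch (not Traag's formulation): score a candidate partition of a
# signed network so that positive links inside communities count for it and
# negative links inside communities count against it.

def signed_score(pos, neg, labels):
    """pos, neg: symmetric adjacency matrices of positive/negative links;
    labels: community label for each node."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]   # both ends in the same community
    good = pos[same].sum()
    bad = neg[same].sum()
    total = pos.sum() + neg.sum()
    return (good - bad) / total if total else 0.0

# Tiny example: two cooperating blocs (nodes 0-2 and 3-5) that are in dispute.
n = 6
pos, neg = np.zeros((n, n)), np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    pos[i, j] = pos[j, i] = 1
for i, j in [(0, 3), (1, 4), (2, 5)]:
    neg[i, j] = neg[j, i] = 1

print(signed_score(pos, neg, [0, 0, 0, 1, 1, 1]))   # two blocs: ~0.67
print(signed_score(pos, neg, [0, 0, 0, 0, 0, 0]))   # one big bloc: ~0.33
```

With only positive links, lumping everything into one community would look as good as anything under this naive measure; it is the negative links that force the split into opposed blocs.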

By putting the links into a model of the world, Traag has worked out what global communities existed at the time. The communities that emerge are the standard power blocs well known to historians: the West; Latin America; Russia & China; West Africa; North Africa & the Middle East; and a collection of independents not truly forming a bloc.

That’s almost exactly as historians would put it, except for one or two features. For example, West Africa does not normally figure as a power bloc on its own and the independents include New Zealand, which would normally be classified as part of the West.

That provides an interesting and somewhat unconventional insight into the politics of the time.

Ref: arxiv.org/abs/0811.2329: Community Detection in Networks with Positive and Negative Links

Triggering a phase change in wealth distribution

Tuesday, November 11th, 2008


Wealth distribution in the western world follows a curious pattern. For 95 per cent of the population, it follows a Boltzmann-Gibbs distribution, in other words a straight line on a log-linear scale. For the top 5 per cent, however, wealth allocation follows a Pareto distribution, a straight line on a log-log scale, which is a far less equitable way of apportioning wealth.

Nobody really understands how this arrangement comes about but Javier Gonzalez-Estevez from the Universidad Nacional Experimental del Tachira in Venezuela and colleagues think they can throw some light on the problem.

They have created an agent-based model in which each agent’s “wealth” evolves according to the way it interacts with its neighbours. Gonzalez-Estevez and co say that a simple model of this kind accurately reproduces the combination of Boltzmann-Gibbs and Pareto distributions seen in real economies.
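For a flavour of what such models look like, here is the classic random-exchange toy in the spirit of Dragulescu and Yakovenko. It is not the deterministic, neighbour-coupled model the team actually uses, but a few lines of code are enough to grow a Boltzmann-Gibbs-like distribution out of random pairwise trades:

```python
import numpy as np

# Classic random-exchange toy (Dragulescu-Yakovenko style), included only as
# a general illustration of agent-based wealth models. It is NOT the
# deterministic, neighbour-coupled model used in the paper.

rng = np.random.default_rng(1)
n_agents, steps = 2000, 500_000
wealth = np.ones(n_agents)            # everyone starts out equal

for _ in range(steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    pot = wealth[i] + wealth[j]       # a random pair pool their wealth...
    share = rng.random() * pot        # ...and split it at random
    wealth[i], wealth[j] = share, pot - share

# A Boltzmann-Gibbs (exponential) distribution has mean equal to its
# standard deviation, which is roughly what this toy settles into.
print("mean wealth:", wealth.mean(), " std:", wealth.std())
```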

But get this. The team says: “it is possible to bring the system from a particular wealth distribution to another one with a lower inequality by performing only a small change in the system configuration”. That’s an intriguing possibility.

In their latest work they say that it is possible to switch between Pareto and Boltzmann-Gibbs distributions simply by increasing the number of neighbours each agent has.

In other words, this triggers a phase change in which wealth suddenly becomes more equally distributed (or vice versa).

That’s going to be a fascinating area for econophysicists to explore. Economists have always assumed that changing the distribution of wealth means tax collection and redistribution.

Now there’s a whole new way in which it might be approached. Gonzalez-Estevez and team make no suggestion as to how it might be done in real-life economies but you can be sure that more than a few econophysicists will be thinking about how to trigger these kinds of phase changes for real.

Taxes as a way of redistributing wealth could become a thing of the past. But it will be as well to remember that not everyone wants to make wealth distribution fairer.

Ref: arxiv.org/abs/0811.1064: Transition from Pareto to Boltzmann-Gibbs Behavior in a Deterministic Economic Model

Predicting the popularity of online content

Monday, November 10th, 2008


The page views for entries on this site in the last week range from more than 17,000 for this story to around 100 for this one.

That just goes to show that when you post a blog entry, there’s no way of knowing how popular it will become. Right?

Not according to Gabor Szabo and Bernardo Huberman at HP Labs in Palo Alto, who reckon they can accurately forecast a story’s page views a month in advance by analysing its popularity during its first two hours on Digg.

They say a similar prediction can be made for YouTube postings except these need to be measured for 10 days before a similarly accurate forecast can be made. (That’s almost certainly because Digg stories quickly become outdated while YouTube videos are still found long after they have been submitted.)
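The forecast itself is simple: on a log scale, early popularity and eventual popularity are strongly linearly correlated, so you fit that line on past submissions and extrapolate for new ones. Here is a minimal sketch with made-up numbers (the paper, of course, fits real Digg and YouTube traces):

```python
import numpy as np

# Minimal sketch of the forecasting idea: fit a log-linear relation between
# early views and final views on "historical" items, then extrapolate.
# The data below is synthetic, just to make the example self-contained.

rng = np.random.default_rng(42)
early = rng.lognormal(mean=3.0, sigma=1.0, size=500)             # views after 2 hours
final = early * rng.lognormal(mean=2.5, sigma=0.3, size=500)     # views after 30 days

# Fit log(final) = a * log(early) + b on the historical items.
a, b = np.polyfit(np.log(early), np.log(final), 1)

def forecast(early_views):
    """Predicted 30-day views for a new item, given its early view count."""
    return float(np.exp(a * np.log(early_views) + b))

print(forecast(100))   # e.g. the forecast for a story with 100 early views
```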

That’s not so astounding if all (or at least most) content has a similar long-tail-type viewing distribution. Measuring part of this distribution automatically tells you how the rest is distributed.

But actually proving this experimentally is more impressive. In principle, it gives hosts a way of allocating resources such as bandwidth well in advance which could be useful, especially if you can charge in advance too.

Ref: arxiv.org/abs/0811.0405: Predicting the Popularity of Online Content

Anonymizing data without damaging it

Thursday, November 6th, 2008


If scientists are to study massive datasets such as mobile phone records, search queries and movie ratings, the owners of these datasets need to find a way to anonymize the data before releasing it.

The high-profile cracking of datasets such as the Netflix Prize dataset and the AOL search query dataset means that people would be wise not to trust these kinds of releases until the anonymization problem has been solved.

The general approach to anonymization is to change the data in some significant but subtle way to ensure that no individual is identifiable as a result. One way of doing this is to ensure that every record in the set is identical to at least one other record.

That’s sensible but not always easy, point out Rajeev Motwani and Shubha Nabar at Stanford University in Palo Alto. For example, a set of search queries can be huge, covering the search habits of millions of people over many months. The variety of searches people make over such a period makes it hard to imagine that two entries would be identical. And analyzing and changing such a huge dataset in a reasonable period of time is tricky too.

Motwani and Nabar make a number of suggestions. Why not break the data set into smaller, more manageable clusters, they say. And why not widen the criteria for what counts as identical, so that similar searches can be replaced with a common term? A search for “organic milk”, for example, could be replaced with a search for “dairy product”. These ideas seem eminently sensible.
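In sketch form, that second suggestion might look something like this (my own toy illustration with a hypothetical lookup table, not the algorithm in the paper): queries that are common enough are released as they are, while rare ones are swapped for a broader category so that no unusual term appears on its own.

```python
from collections import Counter

# Crude sketch of the generalization idea (illustrative only): release
# common queries unchanged, and map rare queries to a broader category.

GENERALIZE = {                        # hypothetical query -> category table
    "organic milk": "dairy product",
    "goat cheese": "dairy product",
    "oat milk": "dairy product",
}

def anonymize(queries, k=2):
    """Keep queries that appear at least k times; generalize or drop the rest."""
    counts = Counter(queries)
    released = []
    for q in queries:
        if counts[q] >= k:
            released.append(q)                       # common enough to keep
        else:
            released.append(GENERALIZE.get(q, "*"))  # generalize, else suppress
    return released

log = ["weather", "weather", "organic milk", "goat cheese", "oat milk"]
print(anonymize(log))
# ['weather', 'weather', 'dairy product', 'dairy product', 'dairy product']
```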

The problem becomes even more difficult when the data is in graph form, as it might be for mobile phone records or web chat statistics. So Nabar suggests a similar anonymizing technique: ensure that every node in the graph shares some number of its neighbors with a certain number of other nodes.
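A toy version of that check (again my own illustration, not the paper's precise definition) simply flags the nodes that do not share enough neighbors with enough other nodes, since those are the ones that stand out:

```python
# Sketch of the neighbor-sharing idea (illustrative; the paper's exact
# criterion may differ): flag nodes that do not share at least `s` neighbors
# with at least `m` other nodes.

def underprotected_nodes(adj, s=1, m=1):
    """adj: dict mapping each node to the set of its neighbors."""
    flagged = []
    for u, nu in adj.items():
        peers = sum(1 for v, nv in adj.items()
                    if v != u and len(nu & nv) >= s)
        if peers < m:
            flagged.append(u)
    return flagged

# Tiny call graph: A and B both talk to C, while D only talks to E.
adj = {
    "A": {"C"}, "B": {"C"}, "C": {"A", "B"},
    "D": {"E"}, "E": {"D"},
}
print(underprotected_nodes(adj))   # ['C', 'D', 'E'] would need further masking
```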

The trouble is that the anonymization technique can destroy the very patterns that you are looking for in the data, for example in the way mobile phones are used. And at present, there’s no way of knowing what has been lost.

So what these guys need to do next is find some kind of measure of data loss that their proposed changes cause, to give us a sense of how much damage is being done to the dataset during anonymization.

In the meantime, dataset owners should show some caution over how, why and to whom they release their data.

Refs:

arxiv.org/abs/0810.5582: Anonymizing Unstructured Data

arxiv.org/abs/0810.5578: Anonymizing Graphs