Archive for the ‘Calculatin'’ Category

The secret of world class putting

Friday, March 13th, 2009

perfect-putting

Watch professional golfers putt and you’ll eventually notice three common features of their style, says Robert Grober, an expert on the physics of golf at Yale University.

First, the putter head always moves at a constant speed when it hits the ball. Second, the length of time the putting stroke takes has little impact on the speed of the ball (and therefore the length of the putt). And finally, a professional golfer’s backswing takes about twice as long as the downswing.

Grober has used these observations to construct a mathematical model of a putting swing and to explore other properties of such a system.

It turns out that the model that best accounts for this behaviour is a simple pendulum driven at twice its resonant frequency.

That explains a number of other observations about professional golfers, says Grober. For example, a common putting tip is that longer backswings equate to longer putts. This model has exactly this characteristic: the length of the backswing is proportional to the speed of the club at impact.

It is also relatively straightforward to get a sense of the tempo of the required putt by swinging the club back and forth in resonance, like a pendulum. The duration of the actual stroke is exactly half the length of the putter cycle (i.e. from the address position moving backward, to the address position moving forward). “In fact, one often observes golfers instinctively doing this before they hit a putt,” says Grober.
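
Grober’s driven-pendulum model is easy to play with numerically. Below is a minimal sketch (mine, not code from the paper) of a small-angle, i.e. linear, pendulum driven at twice its resonant frequency, started at rest from the address position. Because the equation is linear, scaling the drive scales the whole trajectory, so the backswing length comes out exactly proportional to the club-head speed at impact:

```python
import math

def stroke(drive_amp, omega0=2 * math.pi, dt=1e-5):
    """Integrate a linear pendulum x'' = -omega0^2 x + F(t), driven at
    twice its resonant frequency, starting at rest at the address
    position x = 0. Returns (backswing length, club-head speed) at
    impact, i.e. when the club next returns to x = 0."""
    x, v, t, backswing = 0.0, 0.0, 0.0, 0.0
    while True:
        a = -omega0**2 * x + drive_amp * math.cos(2 * omega0 * t)
        v += a * dt          # semi-implicit Euler step
        x += v * dt
        t += dt
        backswing = max(backswing, x)
        if t > 0.05 and x <= 0.0:    # back at address, moving forward
            return backswing, abs(v)

for amp in (1.0, 2.0, 4.0):
    b, s = stroke(amp)
    print(f"drive {amp}: backswing {b:.4f}, impact speed {s:.4f}, ratio {s / b:.2f}")
```

Doubling the drive amplitude doubles both numbers and leaves the ratio unchanged, which is the “longer backswing, longer putt” rule in miniature.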

So now the secret is out. Make a careful note for next time you’re out on the links.

Ref: arxiv.org/abs/0903.1762: Resonance in Putting

Ptarithmetic: reinventing logic for the computer age

Friday, February 20th, 2009

ptarithmetic

In the last few years, a small group of logicians have attempted the ambitious task of re-inventing the discipline of formal logic.

In the past, logic has been thought of as the formal theory of “truth”. Truth plays an important role in our society and as such, a formal theory is entirely laudable and worthy. But it is not always entirely useful.

The new approach is to reinvent logic as the formal theory of computability. The goal is to provide a systematic answer to the question “what is computable?”. For obvious reasons, so-called computability logic may turn out to be much more useful.

To understand the potential of this idea, just think for a moment about the most famous of logical systems, Peano Arithmetic, better known to you and me as “arithmetic”.

The idea is for computability logic to do for computation what Peano Arithmetic does for natural numbers.

The role of numbers is played by solvable computations and the collection of basic operations that can be performed on these computations forms the logical vocabulary of the theory.

But there’s a problem. There can be a big difference between something that is computable and something that is efficiently computable, says Giorgi Japaridze, a logician at Villanova University in Pennsylvania and the inventor of computability logic.

So he is proposing a modification (and why not, it’s his idea): the system should allow only those computations that can be solved in polynomial time. He calls this polynomial time arithmetic, or ptarithmetic for short.

The promise is that Japaridze’s system of logic will prove as useful to computer scientists as arithmetic is for everybody else.

What isn’t clear is whether the logical vocabulary of ptarithmetic is well enough understood to construct a system of logic around and, beyond that, whether ptarithmetic makes a well-formed system of logic at all.

Those are big potential holes which Japaridze may need some help to fill. Even he would have to admit that he’s been ploughing a lonely furrow on this one since he put forward the idea in 2003.

But the potential payoffs are huge, which makes this one of those high-risk, high-reward projects that just might be worth pursuing.

Any volunteers?

Ref: arxiv.org/abs/0902.2969: Ptarithmetic

Econophysicists identify world’s top 10 most powerful companies

Monday, February 9th, 2009

global-network

The study of complex networks has given us some remarkable insights into the nature of systems as diverse as forest fires, the internet and earthquakes. This kind of work is even beginning to give econophysicists a glimmer of much-needed insight into the nature of our economy. In a major study, econophysicists have today identified the most powerful companies in the world based on their ability to control stock markets around the globe. It makes uncomfortable reading.

Glider guns created in chemical Game of Life

Thursday, February 5th, 2009

glider-guns

If you’ve ever played Conway’s Game of Life, you’ll be familiar with cellular automata and, more importantly, glider guns. So get this: a team of British chemists and computer scientists has created a chemical cocktail that behaves like a cellular automaton and reproduces this behavior: chemical guns firing chemical gliders across a chemical grid.

For those who haven’t played with it, Conway’s Game of Life is a two-dimensional grid known as a cellular automaton, in which each square can be black or white. The game starts in an initial state–a pattern of black and white squares–and its evolution is determined by a set of rules that specify what color a square should become depending on the colors of its neighbors.

The game was devised by the British mathematician John Conway in 1970 and has been studied in detail by countless generations of computer scientists, mathematicians and students ever since, not least because of the extraordinary patterns and structures that the game can produce.

One of these is the glider gun: a structure that periodically “fires” projectiles across the landscape.
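
The rules are compact enough to state in a few lines of code. This sketch (mine, not from the paper, which is about the chemical implementation) runs Conway’s rules on a set of live cells and checks the famous mobile pattern a glider gun fires, the glider, which reappears shifted one square diagonally every four generations:

```python
from collections import Counter

def life_step(cells):
    """One Game of Life step; `cells` is a set of live (row, col) squares.
    A dead square with exactly 3 live neighbours is born; a live square
    survives with 2 or 3 live neighbours."""
    neigh = Counter((r + dr, c + dc)
                    for (r, c) in cells
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
    return {p for p, n in neigh.items() if n == 3 or (n == 2 and p in cells)}

# A glider: after four generations it reappears one square down-right.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
print(cells == {(r + 1, c + 1) for (r, c) in glider})  # → True
```

A glider gun is simply a larger, stationary pattern whose evolution emits a fresh glider at regular intervals.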

Now Ben de Lacy Costello and pals from the University of the West of England in Bristol have created a chemical version of all this. Their model is based on the famous Belousov-Zhabotinsky reaction, in which a specific cocktail of chemicals can produce complex patterns of oscillating colors.

The team set up a grid in which each square could change colour via a BZ reaction and in which the reaction diffused across the boundaries of each square in a way that mimics Conway’s Game of Life.

The oscillating patterns produced by the BZ reaction have long been thought of as similar to cellular automata, but this is the first time that the Game of Life has been reproduced in a chemical system.

That’s impressive, but it looks as if the best is yet to come. It is well known that cellular automata can operate as universal Turing machines. The next step, say Costello and buddies, is to build a similar chemical grid capable of arithmetic.

Beyond that, the question is this: if we can do this in the lab, might evolution also have harnessed these reactions in a similar way? Let the search begin.

Ref: arxiv.org/abs/0902.0587: Implementation of Glider Guns in the Light-Sensitive Belousov-Zhabotinsky Medium

Rule breakers make traffic jams less likely

Monday, January 26th, 2009

traffic-rule-breakers

Rules are a good thing when it comes to road traffic: drive on the wrong side of the highway and you’ll cause chaos, if you live. If that seems forehead-smackingly obvious, then an analysis by Seung Ki Baek at Umea University in Sweden and pals may come as a surprise.

They say that a small proportion of lunatics driving on the wrong side of the road actually reduces the chances of a jam rather than increasing them, and they have an interesting model to prove it.

Their model is a 100-lane highway in which cars can drive in either direction in any lane. When two cars collide, that lane becomes blocked and other vehicles have to move to one side or the other to get round them.

In theory, it’s easy to imagine that the best strategy is for everyone to agree to move to their left (or right; the model is symmetrical) when they meet.

The question is what happens when there are two kinds of drivers: rule-followers and rule-breakers who move either way.

Ki Baek and co considered the two obvious extremes. When everybody is a rule-breaker, the result is chaos and the road jams up quickly as collisions ensue. Equally, when everybody is a rule-follower, the likelihood of a jam is much lower and road users travelling in the same direction tend to end up driving on the left (or right), just as they do on real roads.
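
The gap between the two extremes is easy to see at the level of a single head-on encounter. The toy Monte Carlo below is my simplification, not the paper’s 100-lane model: a rule-follower swerves to its own left (for cars heading in opposite directions, that means opposite absolute sides, so two followers always pass), a rule-breaker picks a side at random, and a collision happens when both cars end up on the same side. The jam minimum at intermediate rule-breaker fractions is an emergent, spatial effect this toy cannot reproduce; it only shows why the all-follower extreme avoids collisions entirely:

```python
import random

def collision_prob(breaker_frac, trials=100_000, seed=1):
    """Monte Carlo estimate of the chance that two oncoming cars
    collide when they meet. A follower heading one way swerves to
    absolute side 0, a follower heading the other way to side 1;
    a breaker picks either side at random. Same side = collision."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        side_a = 0 if rng.random() >= breaker_frac else rng.randrange(2)
        side_b = 1 if rng.random() >= breaker_frac else rng.randrange(2)
        collisions += side_a == side_b
    return collisions / trials

for f in (0.0, 0.25, 1.0):
    print(f, collision_prob(f))
```

With no rule-breakers the collision rate is exactly zero; with everyone breaking the rules it is one in two, which is the “chaos” extreme above.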

But here’s the strange thing: the probability of a jam reaches a minimum somewhere in between, when the number of rule-breakers is between 10 and 40 per cent.

That’s kinda counterintuitive but Ki Baek and co say several factors explain what is going on.

First, a small number of collisions disperses the rule-followers to their respective sides of the road more quickly, making jamming less likely.

And second, rule-followers tend to form convoys which can lead to pile-ups that jam the road. A few collisions here and there help to break up these convoys into smaller groups, making large pile-ups, and the jams they cause, less likely.

“Our result suggests that there are situations when abiding too strictly by a traffic rule could lead to a jamming disaster which would be avoided if some people just ignored the traffic rule altogether,” say the team.

Might be fun to try it on the San Diego Freeway one of these days. Dare ya!

Ref: arxiv.org/abs/0901.3513: Flow Improvement Caused by Traffic-Rule Ignorers

Harvesting energy from the airwaves

Monday, January 12th, 2009

nanohelices

Antennae are the most fundamental energy-harvesting devices that we know, says Sung Nae Cho at the Samsung Advanced Institute of Technology in South Korea. So why aren’t they more widely used?

It turns out that helical antennae are already used to harvest energy, and most of us probably own one already in the form of a transformer. These contain a helical winding that rectifies AC into DC.

Cho points out that it has recently become possible to build nanohelices and that these might also be used for rectification. He’s designed a device that rectifies not current, but electromagnetic waves. It consists of a nanohelix layer, a diode layer and a capacitor layer: all the components of a standard rectifying circuit.

The nanohelix layer consists of an array of 100 million “pixels”, each containing a single nanohelix. That makes the array no bigger than the imaging chips in digital cameras. Cho calculates that if only 10 per cent of the nanohelices harvest energy from ambient electromagnetic waves to the tune of 130 nA each, then the device would produce 1.3 A.
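
The headline figure is simple arithmetic and worth checking (the numbers below are the post’s own):

```python
pixels = 100_000_000           # nanohelix "pixels" in the array
active_fraction = 0.10         # only 10 per cent assumed to harvest
current_per_helix = 130e-9     # 130 nA from each active helix, in amps

total_current = pixels * active_fraction * current_per_helix
print(total_current)  # ≈ 1.3 amps
```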

If he’s right, that’s a handy amount by any standards. Anybody volunteer to prove him right?

Ref: arxiv.org/abs/0901.0769: Energy Harvesting by Utilization of Nanohelices

Why Saturn’s rings are so sharp

Monday, January 5th, 2009

outer-b-ring

Here’s a conundrum for you: why do Saturn’s rings have such sharp edges?

It’s a question that has puzzled planetary scientists for many years. Various ideas have been put forward but none adequately explains the structure we see today.

To understand just how sharp the edges are, consider this: pictures from Cassini show that the density of particles at the edge of the outer B ring, for example, drops by an order of magnitude over a distance of only 10 metres or so.

That’s extraordinary given that the ring is 25,580 kilometres wide.

How chaos could improve speech recognition

Wednesday, December 24th, 2008

a-sound

If you’ve ever used speech recognition software, you’ll know how often it fails to work well. Recognition rates are nowhere near what is needed for anything but the simplest applications.

So a new approach for analysing speech by Yuri Andreyev and Maxim Koroteev at the Institute of Radioengineering and Electronics of the Russian Academy of Sciences in Moscow is welcome. Their approach is to treat the production of speech as a chaotic phenomenon.

That’s a significant difference compared with previous approaches which predict the next point in a speech signal by extrapolating from previous points in a linear fashion.

That works because the organs that produce speech–the vocal cords–change over a much longer time period than the sound they produce. So they can be considered essentially stationary for this type of analysis.

Of course, one of the characteristics of chaos is that very small changes in starting conditions can produce large changes in output. And if that’s happening, what kind of chaos are we talking about?

Andreyev and Koroteev answer this question by measuring the frequency and amplitude of the sound a person makes when saying various vowels and consonants. They then use this data to reconstruct the multidimensional phase space in which the chaotic signal is produced.
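
The post doesn’t give the reconstruction recipe, but the standard tool for rebuilding a phase space from a single recorded signal is time-delay (Takens) embedding, sketched here on a synthetic two-harmonic signal standing in for a recorded vowel; the embedding dimension and delay are illustrative assumptions:

```python
import math

def delay_embed(signal, dim=3, tau=5):
    """Reconstruct a phase-space trajectory from a scalar signal:
    point i is (x[i], x[i + tau], ..., x[i + (dim - 1) * tau])."""
    n = len(signal) - (dim - 1) * tau
    return [tuple(signal[i + k * tau] for k in range(dim)) for i in range(n)]

# Toy "vowel": two harmonics standing in for a recorded sound.
x = [math.sin(0.2 * i) + 0.5 * math.sin(0.41 * i) for i in range(1000)]
trajectory = delay_embed(x, dim=3, tau=5)
print(len(trajectory), len(trajectory[0]))  # → 990 3
```

Plotting such a trajectory is what produces pictures like the phase portrait above.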

The results are interesting because specific vowels appear to be linked to unique structures in the phase space. Andreyev and Koroteev call these structures phase portraits. The picture above is a phase portrait of the vowel sound ‘a’.

It’s a little harder to identify the shapes associated with consonants and the researchers haven’t yet tried with other sounds such as diphthongs.

It’s a long step from here to speech recognition but in principle, it could be done by looking for the phase portraits of specific phonemes and using them to spell out words.

The question, of course, is whether this would be easier or harder than current approaches.

Ref: arxiv.org/abs/0812.4172: On Chaotic Nature of Speech Signals

Calculating the probability of immortality

Monday, December 8th, 2008

How likely is it that a given object will survive forever?

With many groups predicting that human immortality is just around the corner, you could say we all have a vested interest in the answer.

At first glance, the odds are not good. As David Eubanks of Coker College in South Carolina puts it:

“Imagine that some subject survives each year (or other time period) with a probability p. Assuming for a moment that p exists and is constant over time, it’s easy to compute the dismal odds of long term survival as a decaying exponential. Unless p = 1, the probability of n-year survival approaches zero.”

In other words, the probability of surviving forever is exactly zero.

But this suggests a strategy: the route to immortality is to find a way to increase this probability over time.

Suppose the object we want to make immortal is the data on a hard drive. Then copying the data to another hard drive each year should ensure that after n years there are n copies. If one drive fails, we can easily reconstruct the data onto another drive. So unless all the drives fail at once, the data should be immortal.
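
Both halves of the argument fit in a few lines. The sketch below (my numbers, chosen for illustration) contrasts a single copy, whose survival probability decays exponentially to zero, with the copying strategy, where year t has t independent copies and the data dies only if all of them fail at once:

```python
def p_survive(p_year, years):
    """Single copy surviving each year with probability p_year:
    long-run survival decays exponentially to zero."""
    return p_year ** years

def p_immortal_with_copies(fail_year, horizon=10_000):
    """Copying strategy: in year t there are t independent copies and
    the data is lost only if every copy fails that year, which happens
    with probability fail_year ** t. The running product converges to
    a nonzero limit -- immortality, assuming the failures really are
    independent."""
    prob = 1.0
    for t in range(1, horizon + 1):
        prob *= 1.0 - fail_year ** t
    return prob

print(p_survive(0.999, 100_000))    # effectively zero (~3e-44)
print(p_immortal_with_copies(0.1))  # ≈ 0.89, bounded away from zero
```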

Except for one problem. That approach ignores global catastrophes such as comet strikes which would destroy all the drives in one go.

Eubanks says there are two ways to tackle this problem. Life has found one of them, which is to produce many diverse copies of the same thing, spread them around the planet and make them work in different ways, i.e. exploit different energy sources.

The other is for a single organism to use its intelligence to avoid catastrophe. “It must collect information about the environment safely and inductively predict it well enough to avoid death,” says Eubanks.

How do these two strategies compare? Naturally, single individuals have a harder time of it because it’s tough to predict and adapt to all possible catastrophes. As Eubanks says, “We’re the product of billions of years of creatures that survived long enough to reproduce, and therefore have very deep survival instincts. And yet we can fall asleep while driving a car.”

Eubanks is even more pessimistic. He says that simulating every possible environmental disaster may be tricky enough, but such a disaster would then force us to re-evaluate the way we evaluate disasters, and so on ad infinitum. In short, it’s a calculation we are extremely unlikely ever to be able to undertake, making it hard to think of a way we could improve our probability of survival year in, year out.

The bottom line is that humans are unlikely to survive forever, and neither is intelligent life anywhere else in the Universe.

“This speaks to the Fermi Paradox, which asks why the galaxy isn’t crawling with intelligent life.”

Quite. But Eubanks’ paper has another sting in the tail.

“The conclusions of this paper could lead one to believe that a democratic government cannot focus solely on external threats, but should also be constantly trying to improve the chances that it does not “self-destruct” into tyranny.”

What democracy could he be thinking of?

Ref: arxiv.org/abs/0812.0644: Survival Strategies