Archive for the ‘Hellraisin’’ Category

Celebrating 10 years of the Physics arXiv Blog

Monday, August 21st, 2017

Back in 2007, I started an experiment. My idea was to use the relatively novel technique of blogging to explore exciting new ideas on the Physics arXiv, an online server for scientific papers.  I called this thing the Physics arXiv Blog and 11 August 2017 is the 10th anniversary of its birth.

Every day, I scanned almost all the new papers that appeared there and wrote about the one I thought most interesting.

It has been a roller coaster ride. I’ve had to deal with hackers, botnets, plagiarists, the FBI and attempts to unmask me as the author  — I’ve always written this blog anonymously for reasons I explain below.

I’ve also gained a loyal following of readers and the support of publishing organisations, such as MIT’s Technology Review. Thank you all!

I’ve also learned a lot of fascinating science. That’s what drives this blog–my desire to understand the universe and how it works. One of these days, I hope to find out!

You can’t write about anything unless you understand it (or think you do). So blogging is the perfect vehicle for this pursuit.

This post is a short review of some of these adventures. I’ll post links to my favourite stories separately.

First some background. I’m a science journalist by trade with a background in physics. In 2007, science journalism was changing quickly as print-based publications belatedly found their business models foundering. New media in the form of websites, podcasts, online videos and blogging were changing everything.

At that time, mainstream media coverage of science was dominated by write-ups of papers that appeared in the top journals–Nature, Science, PNAS and so on. These organisations have slick PR operations that feed upcoming scientific papers to journalists before they are published. It’s a cosy arrangement for all concerned and I wanted to do something different.

The Physics arXiv provided a way to do that. Every day, physicists, astronomers, computer scientists and, increasingly, scientists from other disciplines post their latest papers there for anyone to read for free. More than 2000 new papers appear every week. My goal was to filter them all.

I soon realised this was an impossible task. Certain topics are beyond me–the maths section is impenetrable for me and I quickly gave up on that. Later, I stopped regularly scanning the high energy physics sections too.

But the rest I filter regularly and quickly, marking papers of interest to study later. Then I choose one or two to read and then a single one to write about. It is hard work but it is endlessly fascinating and inspiring.
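Here, as an aside, is a minimal sketch of how such a first pass might be automated today using the public arXiv API. The categories and keywords below are placeholders, and the sketch is only an illustration of the idea, not the workflow behind this blog.

```python
# A minimal sketch of a first-pass filter over new arXiv listings, using the
# public arXiv API at export.arxiv.org. Categories and keywords are placeholders.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
CATEGORIES = ["physics.soc-ph", "astro-ph.EP", "quant-ph"]   # placeholder choices
KEYWORDS = ["traffic", "black hole", "power law"]            # placeholder choices

def fetch_recent(category, max_results=50):
    """Fetch the most recent submissions in one arXiv category."""
    url = ("http://export.arxiv.org/api/query?search_query=cat:%s"
           "&sortBy=submittedDate&sortOrder=descending&max_results=%d"
           % (category, max_results))
    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())
    for entry in feed.findall(ATOM + "entry"):
        title = entry.find(ATOM + "title").text.strip()
        summary = entry.find(ATOM + "summary").text.strip()
        link = entry.find(ATOM + "id").text.strip()
        yield title, summary, link

def shortlist():
    """Flag papers whose title or abstract mentions a keyword of interest."""
    for cat in CATEGORIES:
        for title, summary, link in fetch_recent(cat):
            text = (title + " " + summary).lower()
            if any(k in text for k in KEYWORDS):
                print("[%s] %s\n    %s" % (cat, title, link))

if __name__ == "__main__":
    shortlist()
```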

In 2007, I had no experience of blogging and so began to explore what it could do. My first posts had an irreverent–puerile to be honest–tone that I hoped would appeal to a different audience. But the tone distracted from the science and the style soon morphed into more conventional coverage. I settled into a rhythm of writing what I thought were reasonably high quality posts every day.

At the time, I was a freelance science writer retained by a major science publication. I didn’t want my blog to reflect badly on this publication. My contract also required me to write exclusively for them. But the contract did not cover blogging, which hadn’t been invented when the contract was first drawn up. This was a grey area that I was unsure about.

For these reasons, I started to write anonymously. And I continue to do this, even though the original rationale no longer applies. Most of my colleagues now know who writes this blog.

I also wanted to see if I could make a living from blogging. I had no idea what the future would hold for science journalists but I had a pretty good idea that, however I made my money in 2007, I would be making it differently in five or ten years’ time. That has certainly turned out to be true.

So I monetised the content using services such as Google AdSense. That was a dramatic wake-up call.

My content quickly generated reasonable viewing figures–by the end of 2007 I was attracting around 300,000 pageviews a month. But this generated no more than beer money from Google: about $100 per month. I was shocked given the amount of work it took to produce the blog. Clearly, my labour of love was unrequited by Google.

At the same time, I discovered some of the downsides of blogging. One website began pasting my content into its pages, word for word, every day, without attribution. It was as clear a case of plagiarism as you could find. I was furious.

To make things worse, this site was heavily monetised with display ads, pop-ups and so on. Whoever was running it was trying to make a buck at my expense.

The owner was brazen: a whois search quickly revealed his details but there was little I could do. My $100 from Google was not going to cover legal fees.

However, I noticed that the site was pasting other science content too, some of it from a large commercial publisher. I assumed this was theft as well and I wrote to this publisher, pointing out the plagiarism. Soon after that, the site disappeared.

At the same time, I became aware of the significant security limitations of WordPress, the go-to software for bloggers then and now. Arxivblog.com was frequently targeted by hackers who rewrote the code to redirect readers to gambling, phishing and porn websites.

The attacks came slowly at first but then increased. Eventually I was being hacked on a daily basis. I could fix it by finding the malware inserted into the WordPress code and removing it but I had to do this every morning.

Then Google labelled arxivblog.com as a site hosting malware and actively began barring people from visiting. That reduced my traffic to a trickle. It took Google 30 days to remove this malware label.

At one point, the FBI contacted me saying that arxivblog.com was at the centre of a global botnet that was spreading malware. They asked to see the site logs to help them track down the culprits and I supplied what information I could. But I never discovered the result of their investigation.

The problem was that WordPress was dangerously insecure. Eventually a couple of the big tech blogs blew the whistle on this. They also relied on WordPress and presumably suffered similar intrusions. WordPress eventually tightened its security but not before another big change occurred.

With a growing following, I suppose it was natural for people to wonder who  wrote the arXivblog. Occasionally, people wrote and asked but I wasn’t interested in publicity.

I was on vacation when the fuss began. I had written several posts in advance, set them to run automatically and then disappeared for a week or two. When I came back, all hell had broken loose.

It began with a couple of emails from the editor-in-chief of Wired.com.  He said he enjoyed the blog and wanted to know who I was. Being on holiday, I didn’t respond and his next email revealed that one of his journalists was going to unmask me, or at least try.

As a result, Wired published this story: “WHO IS THE ANONYMOUS AUTHOR OF THE WEB’S BEST PHYSICS BLOG?” The article invited people to name me and offered a prize for the whistleblower.

At the time, there had been a spate of unmaskings of anonymous bloggers. The media had had some sport tracking these people down and revealing their identities. But enthusiasm for this was waning and many readers wrote to Wired suggesting they leave me alone. “If he wants to write anonymously, let him,” they said.

I didn’t know whether to be furious or flattered when I stumbled into this debate. Either way, the episode left a bad taste in the mouth. But it did pique the curiosity of other publishers.

Wired itself asked if I wanted to write the arxivblog for them. I did not. A couple of other offers came in as well. But I eventually settled on an offer from Technology Review to host the blog. I was increasingly disillusioned by the problems with WordPress and the limited income that Google ads were generating. Technology Review was a much more secure and prestigious environment.

And there I’ve stayed since 2009. At their request, the blog now focuses more on emerging technology. So I write fewer posts about astrophysics and other esoteric physics.

But hard-core physics still fascinates me. So in 2013, I began producing this kind of physics-based content for Medium, a new publishing platform created by Evan Williams, a co-founder of Twitter.

Working for a start-up was an adventure too. There was constant evolution in both the publishing tools and the business model, and constant change in the remuneration. Medium tried various ways of rewarding writers, from numbers of pageviews to time spent reading and all kinds of variations in between.

It was great for a while but Medium eventually stopped paying altogether. I hope it’s a success, and that Williams eventually finds the magic formula that rewards Medium, and the writers who use it, in an appropriate way.

But I think it’s fair to say it hasn’t found the secret sauce yet. And neither have many others in the world of science journalism.

Technology Review is still a beacon. It continues to produce high-quality content focused on the world of technology. I’m proud to be part of it and want to thank the team there for their years of support. It’s been great, guys.

And long may it continue. The Physics arXiv Blog is ten years old. I hope the next ten are just as exciting.


The fundamental patterns of traffic flow

Monday, March 9th, 2009


Take up the study of earthquakes, volcanoes or stock markets and the goal, whether voiced or not, is to find a way to predict future “events” in your field. In that sense, scientists in these fields have something in common with those who study traffic jams.

The difference is that traffic experts might one day reach their goal. The complexity of traffic flow, while awe-inspiring, may well be fundamentally different to the complexity of stock markets and earthquakes.

At least that’s how Dirk Helbing at the Institute for Transport & Economics at the Technical University of Dresden in Germany, and his buddies see it.

Helbing says that one long-standing dream of traffic experts is to identify the fundamental patterns of traffic congestion from which all other flows can be derived, a kind of periodic table of traffic flows.

Now he thinks he has found it: a set of fundamental patterns of traffic flow that, when identified on a particular road, can be used to create a phase diagram of future traffic states.

The phase diagrams can then be used to make forecasts about the way in which the flow might evolve.

That’ll be handy. But only if it’s then possible to do something to prevent the congestion. And that may be the trickiest problem of all.
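To make the idea concrete, here is a toy sketch of what a phase-diagram classification might look like in code: it labels measured traffic states by where they sit in the flow-density plane. The thresholds and labels are invented for illustration and have nothing to do with Helbing’s actual classification.

```python
# Toy illustration of the phase-diagram idea: classify measured traffic states
# by where they sit in the flow-density plane. Thresholds are invented.

def classify_state(density_veh_per_km, flow_veh_per_hr, critical_density=25.0):
    """Crudely label a traffic state as free flow, congested or jammed."""
    if density_veh_per_km < critical_density:
        return "free flow"
    mean_speed = flow_veh_per_hr / density_veh_per_km  # km/h
    return "congested flow" if mean_speed > 20.0 else "wide moving jam"

# Example measurements (density in vehicles/km, flow in vehicles/hour).
for rho, q in [(15, 1400), (40, 1600), (80, 900)]:
    print(rho, q, "->", classify_state(rho, q))
```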

Ref: arxiv.org/abs/0903.0929: Theoretical vs. Empirical Classification and Prediction of Congested Traffic States

The LHC’s dodgy dossier?

Monday, March 2nd, 2009


There’s no reason to worry about the Large Hadron Collider that is due to be switched on for the second time later this year. The chances of it creating a planet-swallowing black hole are tiny. Hardly worth mentioning really.

But last month, Roberto Casadio at the Universita di Bologna in Italy and a few pals told us that the LHC could make black holes that will hang around for minutes rather than microseconds. And they were rather less certain about the possibility that the black holes could grow to catastrophic size. Far from being utterly impossible, they said merely that this outcome didn’t “seem” possible.

This blog complained that that was hardly the categorical assurance we’d come to expect from the particle physics community. The post generated many comments, my favourite being that we shouldn’t worry because of the Many Worlds Interpretation of quantum mechanics. If the LHC does create Earth-destroying black holes, we could only survive in a universe in which the accelerator broke down.

Thanks to Slashdot, the story got a good airing, with more than a few people pointing out that we need better assurances than this.

Now we can rest easy. Casadio and co have changed their minds. In a second version of the paper, they’ve removed all mention of the black hole lifetime being many minutes (“>> 1sec” in mathematical terms) and they’ve changed their conclusion too. It now reads: “the growth of black holes to catastrophic size is not possible.”

What to make of this? On the one hand, these papers are preprints. They’re early drafts submitted for peer review so that small problems and errors can be ironed out before publication. We should expect changes as papers are updated.

On the other, we depend on the conclusions of scientific papers for properly argued assurances that the LHC is safe. If those conclusions can be rewritten for public relations reasons rather than scientific merit, what value should we place on them?

Either way, we now know that a few minutes’ work on a word processor makes the LHC entirely safe.

Ref: arxiv.org/abs/0901.2948: On the Possibility of Catastrophic Black Hole Growth in the Warped Brane-World Scenario at the LHC (version 2)

Thanks to Cap’n Rusty for pointing out the new version

Calculating the cost of dirty bombs

Wednesday, February 25th, 2009


One of the more frightening scenarios that civil defence teams worry about is the possibility that a bomb contaminated with radioactive material would be detonated in a heavily populated area.

Various research teams have considered this problem and come to similar conclusions–that the actual threat to human health from such a device is low. Some even claim that terror groups must have come to a similar conclusion, which is why we’ve not been on the receiving end of such an attack. The panic such a device would cause is another question.

Today Theodore Liolios, from a private institution called the Hellenic Arms Control Center in Thessaloniki in Greece, goes through the figures.

He says the most likely material to be used in such an attack is Cesium-137, widely used throughout the developed and developing world as a source for medical therapies.

The unstated implication is that it would be relatively easy to get hold of this stuff from a poorly guarded hospital. Exactly this happened in Goiania in Brazil when an abandoned hospital was broken into and its supply of cesium-137 distributed around the surrounding neighbourhoods. The incident left 200 people contaminated. Four of them died.

But a dirty bomb would not be nearly as lethal. The trouble with such a device (from a terrorist’s point of view, at least) is that distributing radioactive material over a large area dramatically reduces the exposure that people receive, particularly when most could be warned to stay indoors or be evacuated (unlike the Goiania incident, in which most people were unaware they were contaminated).

Liolios calculates that anybody within 300 metres of a dirty bomb would increase their lifetime risk of cancer mortality by about 1.5 per cent, and then only if they were unable to take shelter or leave the area. That’s about 280 people given the kind of densities you expect in metropolitan areas.

And he goes on to say that it is reasonable to assume that a cesium-137 dirty bomb would not increase the cancer mortality risk for the entire city by a statistically significant amount.

But the terror such a device might cause is another question. Liolios reckons that current radiation safety standards would mean the evacuation of some 78 square kilometres around ground zero. That would affect some 78,000 people, cost $7.8m per day to evacuate and some $78m to decontaminate.

That seems a tad conservative but however it is calculated, it may turn out to be chickenfeed compared to the chaos caused by panic, which may well result in more deaths than the bomb itself could claim. How to calculate the effect of that?
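For the curious, here is the back-of-envelope arithmetic implied by the figures above; the population density and per-unit costs are inferred from those numbers rather than taken from the paper.

```python
import math

# Back-of-envelope arithmetic implied by the figures quoted above. The
# population density and per-unit costs are inferred, not taken from Liolios.

def people_within(radius_km, density_per_km2):
    """Population inside a circle of the given radius."""
    return math.pi * radius_km ** 2 * density_per_km2

# Evacuation figures: 78 km^2 and 78,000 people imply ~1,000 people per km^2;
# $7.8m/day and $78m imply ~$100 per person per day and ~$1m per km^2.
area_km2 = 78.0
density = 78_000 / area_km2              # ~1,000 people per km^2
daily_evacuation_cost = 78_000 * 100     # ~$7.8m per day
decontamination_cost = area_km2 * 1e6    # ~$78m in total

# Near-field estimate: at that density, roughly 280 people are within 300 m of
# the blast, matching the figure above. Naively multiplying by the 1.5% extra
# lifetime cancer mortality risk gives a crude expected toll.
exposed = people_within(0.3, density)
expected_excess_deaths = exposed * 0.015

print(round(exposed), round(expected_excess_deaths, 1))
print(daily_evacuation_cost, decontamination_cost)
```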

Ref: arxiv.org/abs/0902.3789: The Effects of Using Cesium-137 Teletherapy Sources as a Radiological Weapon

Ptarithmetic: reinventing logic for the computer age

Friday, February 20th, 2009


In the last few years, a small group of logicians have attempted the ambitious task of re-inventing the discipline of formal logic.

In the past, logic has been thought of as the formal theory of “truth”. Truth plays an important role in our society and, as such, a formal theory is entirely laudable and worthy. But it is not always entirely useful.

The new approach is to reinvent logic as the formal theory of computability. The goal is to provide a systematic answer to the question “what is computable?” For obvious reasons, so-called computability logic may turn out to be much more useful.

To understand the potential of this idea, just think for a moment about the most famous of logical systems, Peano Arithmetic, better known to you and me as “arithmetic”.

The idea is for computability logic to do for computation what Peano Arithmetic does for natural numbers.

The role of numbers is played by solvable computations and the collection of basic operations that can be performed on these computations forms the logical vocabulary of the theory.

But there’s a problem. There can be a big difference between something that is computable and something that is efficiently computable, says Giorgi Japaridze, a logician at Villanova University in Pennsylvania and the inventor of computability logic.

So he is proposing a modification (and why not, it’s his idea)–the system should allow only those computations that can be solved in polynomial time. He calls this polynomial time arithmetic or ptarithmetic for short.
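To see why the distinction matters, here is a toy illustration (nothing to do with the formal machinery of ptarithmetic itself) of the gap between what is computable and what is efficiently computable.

```python
from itertools import combinations

# Toy illustration of "computable" vs "efficiently computable". It has nothing
# to do with ptarithmetic's formal machinery; it just shows the gap.

def pair_sum_exists(values, target):
    """Efficiently computable: is there a pair summing to target? Linear time."""
    seen = set()
    for v in values:
        if target - v in seen:
            return True
        seen.add(v)
    return False

def subset_sum_exists(values, target):
    """Computable, but this brute force checks all 2^n subsets: exponential time."""
    for r in range(len(values) + 1):
        if any(sum(c) == target for c in combinations(values, r)):
            return True
    return False

nums = [3, 9, 8, 4, 5, 7]
print(pair_sum_exists(nums, 12))    # True (3 + 9), found in one linear pass
print(subset_sum_exists(nums, 15))  # True (8 + 7), found by brute force
```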

The promise is that Japaridze’s system of logic will prove as useful to computer scientists as arithmetic is for everybody else.

What isn’t clear is whether the logical vocabulary of ptarithmetic is well enough understood to construct a system of logic around and, beyond that, whether ptarithmetic makes a well-formed system of logic at all.

Those are big potential holes which Japaridze  may need some help to fill–even he would have to admit that he’s been ploughing a lonely furrow on this one since he put forward the idea in 2003.

But the potential payoffs are huge, which makes this one of those high-risk, high-payoff projects that just might be worth pursuing.

Any volunteers?

Ref: arxiv.org/abs/0902.2969: Ptarithmetic

The frightening origins of glacial cycles

Thursday, February 12th, 2009


Climatologists have known for some time that the Earth’s motion around the Sun is not as regular as it might first appear. The orbit is subject to a number of periodic effects, such as the precession of the Earth’s axis, which varies over periods of 19, 22 and 24 thousand years; its axial tilt, which varies over a period of 41,000 years; and various other effects.

The combined effect of these variations is often cited to explain the 41,000- and 100,000-year glacial cycles the Earth appears to have gone through in the past.

But there is a problem with this idea: the change in the amount of sunlight that these variations cause is not enough to trigger glaciation. So some kind of non-linear effect must amplify the effects to cause widespread cooling.

That’s not so surprising given that we know our climate appears to be influenced by all kinds of non-linear factors. Even so, nobody has been able to explain what kind of processes can account for the difference.
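To get a feel for how modest the direct forcing is, here is a toy sketch that simply sums sinusoids at the orbital periods quoted above. The amplitudes are invented, and this is emphatically not Ditlevsen’s model.

```python
import math

# Toy sketch of quasi-periodic orbital forcing: a sum of sinusoids at the
# precession (19, 22, 24 kyr) and obliquity (41 kyr) periods quoted above.
# Amplitudes are invented. The point is only that the forcing itself is a
# weak, bounded signal, which is why some non-linear amplifier is needed.
PERIODS_KYR = [19.0, 22.0, 24.0, 41.0]
AMPLITUDES = [0.3, 0.3, 0.3, 1.0]   # arbitrary relative weights

def forcing(t_kyr):
    """Relative insolation anomaly at time t (in thousands of years)."""
    return sum(a * math.sin(2 * math.pi * t_kyr / p)
               for a, p in zip(AMPLITUDES, PERIODS_KYR))

# Sample the forcing over the last 200 kyr and show it stays small.
series = [forcing(t) for t in range(0, 200)]
print(min(series), max(series))
```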

Now Peter Ditlevsen at the University of Copenhagen in Denmark thinks he knows what might have been going on. He says that the change in the amount of sunlight the Earth receives acts as a kind of forcing mechanism in a climatic resonant effect. The resulting system is not entirely stable but undergoes bifurcations in which the cycle switches from a period of 41,000 years to 100,000 years and back again, just as it seems to have done in Earth’s past.

“This makes the ice ages fundamentally unpredictable,” says Ditlevsen.

Quite, but the real worry is this: if bifurcations like this have happened in the past, then they will probably occur in the future. The trouble is that our current climate models are too primitive to allow for this kind of bifurcation, and that means their predictions could be even more wildly inaccurate than we know they already are.

Kinda frightening, don’t you think?

Ref: arxiv.org/abs/0902.1641:  The Bifurcation Structure and Noise Assisted Transitions in the Pleistocene Glacial Cycles

Econophysicists identify world’s top 10 most powerful companies

Monday, February 9th, 2009


The study of complex networks has given us some remarkable insights into the nature of systems as diverse as forest fires, the internet and earthquakes. This kind of work is even beginning to give econophysicists a glimmer of much-needed insight into the nature of our economy. In a major study, econophysicists have today identified the most powerful companies in the world based on their ability to control stock markets around the globe. It makes uncomfortable reading.
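As a flavour of the network approach, here is a toy sketch that ranks firms in a made-up shareholding network using a PageRank-style centrality. It is not the measure used in the study; the companies and stakes below are entirely invented.

```python
import networkx as nx

# Toy illustration of the network approach, not the measure used in the study.
# Edge A -> B means A holds a stake in B (weight = fraction held). Running
# PageRank on the reversed graph lets influence flow from held firms back to
# their owners, so well-connected holders rise to the top.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("FundCo", "BankA",    0.6),
    ("FundCo", "BankB",    0.4),
    ("BankA",  "Retailer", 0.5),
    ("BankB",  "Retailer", 0.3),
    ("BankA",  "Miner",    0.2),
])

scores = nx.pagerank(G.reverse(), weight="weight")
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {score:.3f}")
```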

The power laws behind terrorist attacks

Friday, February 6th, 2009


Plot the number of people killed in terrorist attacks around the world since 1968 against the frequency with which such attacks occur and you’ll get a power law distribution, which is a fancy way of saying a straight line when both axes have logarithmic scales.

The question, of course, is why? Why not a normal distribution, in which there would be many orders of magnitude fewer extreme events?
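To see how stark that difference is, here is a quick numerical comparison of the two kinds of tail; the power-law exponent below is hypothetical, not the one measured for terrorist attacks.

```python
import math

# How likely is an event 10 standard deviations above the mean under a normal
# law, versus an event 10 times the minimum size under a power law with a
# hypothetical exponent of 2?

def normal_tail(z):
    """P(X > mean + z*sigma) for a normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def power_law_tail(x, x_min=1.0, alpha=2.0):
    """P(X > x) when event sizes follow a power-law density p(x) ~ x**(-alpha)."""
    return (x / x_min) ** (1 - alpha)

print(normal_tail(10))        # ~7.6e-24: effectively never
print(power_law_tail(10.0))   # 0.1: one event in ten exceeds 10x the minimum
```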

Aaron Clauset and Frederik Wiegel have built a model that might explain why. The model makes five simple assumptions about the way terrorist groups grow and fall apart and how often they carry out major attacks. And here’s the strange thing: this model almost exactly reproduces the distribution of terrorist attacks we see in the real world.

These assumptions are things like: terrorist groups grow by accretion (absorbing other groups) and fall apart by disintegrating into individuals. They must also be able to recruit from a more or less unlimited supply of willing terrorists within the population.

Being able to reproduce the observed distribution of attacks with such a simple set of rules is an impressive feat. But it also suggests some strategies that might prevent such attacks or drastically reduce their number. One obvious strategy is to reduce the number of recruits within a population, perhaps by reducing real and perceived inequalities across societies.

Easier said than done, of course. But analyses like these should help to put the thinking behind such ideas on a logical footing.

Ref: arxiv.org/abs/0902.0724: A Generalized Fission-Fusion Model for the Frequency of Severe Terrorist Attacks

Space Station simulator given emotions

Tuesday, February 3rd, 2009


Astronauts training to work on the International Space Station have to master a mind-boggling amount of kit before they leave Earth. One of these devices is the Canadarm 2, a robotic arm used to manipulate experiments outside the station.

On Earth, astronauts train on a Canadarm 2 simulator connected to a virtual assistant that can spot potential errors, such as a move likely to smash the arm into the station. The assistant then offers hints and tips to the astronaut to help him or her make a correction or even issue a command to prevent damage.

The bad news for astronauts is that André Mayers and colleagues at the Université de Sherbrooke in Canada, who created the virtual assistant, have given it an unusual upgrade. In an attempt to help the simulator learn more about the astronauts who are using it, the team has programmed the assistant to experience the equivalent of an emotion when it records a memory of what has happened.

The problem is that the assistant receives a huge amount of data from each training session. Its emotional response allows it to determine which of this data is most important, just as humans do. “This allows the agent to improve its behavior by remembering previously selected behaviors which are influenced by its emotional mechanism,” say the team.
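As a rough illustration of the general idea (not the CTS implementation), here is a toy sketch in which each training episode is tagged with an “emotional” salience score and only the most salient episodes survive in memory. The scoring rule and example episodes are invented.

```python
import heapq
from dataclasses import dataclass, field

# Toy sketch of the general idea, not the CTS implementation: each episode
# gets an "emotional" salience score, and only the most salient are kept.

@dataclass(order=True)
class Episode:
    salience: float
    description: str = field(compare=False)

class EpisodicMemory:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self._heap = []          # min-heap: least salient episode on top

    def record(self, description, error_severity, novelty):
        # Invented scoring rule: near-misses and novel situations matter most.
        salience = 0.7 * error_severity + 0.3 * novelty
        heapq.heappush(self._heap, Episode(salience, description))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)   # forget the least salient episode

    def recall(self):
        return sorted(self._heap, reverse=True)

memory = EpisodicMemory()
memory.record("routine joint rotation", error_severity=0.1, novelty=0.2)
memory.record("arm nearly struck truss", error_severity=0.9, novelty=0.6)
memory.record("unfamiliar camera view", error_severity=0.2, novelty=0.8)
memory.record("smooth payload handoff", error_severity=0.1, novelty=0.1)
for ep in memory.recall():
    print(f"{ep.salience:.2f}  {ep.description}")
```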

The system is called the Conscious Tutoring System or CTS. It’s not clear from the paper how well the system works but how long before one unlucky astronaut hears the phrase: “I’m sorry Dave, I can’t do that.”

Ref: arxiv.org/abs/0901.4963: How Emotional Mechanism Helps Episodic Learning in a Cognitive Agent

Rippling in muscles caused by molecular motors detaching

Friday, January 30th, 2009


Muscle tissue is made of molecular engines called sarcomeres, which contract and expand when the muscle is flexed. In sarcomeres, the business of contracting is carried out by molecular motors called myosins as they pull themselves along filaments of a protein called actin. When you flex your arm, it is these myosin molecular motors that are doing the work.

One curious phenomenon that can sometimes be observed in muscles is a wavelike oscillation of the tissue. What causes this infamous “rippling” of muscles has been something of a mystery, but today Stefan Gunther and Karsten Kruse from Saarland University in Germany throw some light on the matter.

They’ve modeled the rate at which molecular motors detach themselves from the actin filaments as the load they are under changes. It turns out that oscillations occur naturally under certain loads, as the molecular motors detach and re-attach.
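For the technically minded, here is a minimal sketch of the key ingredient only: a Bell-type detachment rate, in which a motor lets go of its filament at a rate that grows exponentially with the load it carries. The numbers are illustrative, and this is not Gunther and Kruse’s full model, which couples many such motors to elastic elements to produce the spontaneous oscillations.

```python
import math

# Minimal sketch of a Bell-type, load-dependent detachment rate. Parameter
# values are illustrative only; this is not Gunther and Kruse's full model.
K_B_T = 4.1        # thermal energy at room temperature, pN*nm
K_OFF_0 = 10.0     # unloaded detachment rate, 1/s (illustrative)
BOND_LENGTH = 2.0  # characteristic distance, nm (illustrative)

def detachment_rate(load_pn):
    """Bell-type rate: k_off(F) = k0 * exp(F * d / kT)."""
    return K_OFF_0 * math.exp(load_pn * BOND_LENGTH / K_B_T)

for force in (0.0, 2.0, 4.0, 6.0):   # load per motor, in piconewtons
    print(f"{force:.0f} pN -> {detachment_rate(force):.1f} detachments/s")
```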

So when the next bodice ripper you read mentions rippling muscles, you’ll know exactly what this means.

Ref:  arxiv.org/abs/0901.4517: Spontaneous Waves in Muscle Fibres