Archive for August, 2008

The black hole at the center of our galaxy

Thursday, August 21st, 2008


Is there a supermassive black hole at the center of our galaxy, asks Mark Reid of the Harvard-Smithsonian Center for Astrophysics in Cambridge.

There sure is, and Reid gives a good account of the evidence to prove it.

How can astronomers be so sure? The evidence began to emerge in the 1950s, when the first radio telescopes spotted a mysterious source of radio waves at the center of the Milky Way, in the constellation of Sagittarius. This source was given the name Sagittarius A.

Better observations in the 1970s led to estimates that this source was small, less than the size of our solar system.

Towards the end of the 70s, other evidence emerged. The movement of gas clouds near Sagittarius A indicated that they must be circling a compact mass several million times greater than the Sun. Then various stars were observed circling this mass, the most recent being a star called S2, which orbits Sagittarius A every 15.8 years. This implies that Sagittarius A is a point mass some 4 million times the Sun’s mass.
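
That mass estimate drops straight out of Kepler’s third law. Here’s a back-of-the-envelope check in Python; the semi-major axis of roughly 1,000 astronomical units is my round number for S2’s orbit, an assumption rather than a figure from the post:

```python
# Kepler's third law in solar units: M [solar masses] = a^3 / T^2,
# with the semi-major axis a in astronomical units and the period T in years.
a_au = 1000.0   # S2's semi-major axis, a rough assumed value
t_yr = 15.8     # S2's orbital period, as quoted above

mass = a_au**3 / t_yr**2
print(f"Enclosed mass ~ {mass:.1e} solar masses")  # ~ 4.0e+06
```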

Astronomers then showed that the radio source and the gravitational mass were centred on the same point. Evidence of a weak infrared source at the same point also turned up.

And the clincher was that Sagittarius A is motionless relative to the rest of the Milky Way. In other words, the entire galaxy revolves around this object.

That kind of evidence is overwhelming. It’s hard to think of anything with such a dim optical signature that could create such a high mass density (although there are some possibilities, the most obvious being a cluster of dark stars).

Nevertheless, most astronomers are convinced we’ve got a supermassive black hole on our doorstep, and it’ll take some spectacular contradictory evidence to change their minds.

Ref: arxiv.org/abs/0808.2624: Is There a Supermassive Black Hole at the Center of our Galaxy

How big is a city?

Wednesday, August 20th, 2008


That’s not as silly a question as it sounds. Defining the size of a city is a tricky task with major economic implications: how much should you invest in a city if you don’t know how many people live and work there?

The standard definition is the Metropolitan Statistical Area, which attempts to capture the notion of a city as a functional economic region and requires a detailed subjective knowledge of the area before it can be calculated. The US Census Bureau has an ongoing project dedicated to keeping abreast of the way this one metric changes for cities across the continent.

Clearly that’s far from ideal. So our old friend Eugene Stanley from Boston University and a few pals have come up with a better measure called the City Clustering Algorithm. This divides an area up into a grid of a specific resolution, counts the number of people within each square and looks for clusters of populations within the grid. This allows a city to be defined in a way that does not depend on its administrative boundaries.
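
A toy version of the clustering step is easy to write down. Here’s a minimal sketch in Python (my own illustration, not the authors’ code) that flood-fills adjacent occupied cells into clusters and returns each cluster’s population:

```python
import numpy as np
from collections import deque

def population_clusters(grid, threshold=0):
    """Group adjacent cells whose population exceeds `threshold` and
    return the total population of each cluster, largest first."""
    occupied = grid > threshold
    seen = np.zeros_like(occupied)
    rows, cols = grid.shape
    clusters = []
    for i in range(rows):
        for j in range(cols):
            if occupied[i, j] and not seen[i, j]:
                total, queue = 0, deque([(i, j)])
                seen[i, j] = True
                while queue:                     # flood-fill one cluster
                    y, x = queue.popleft()
                    total += int(grid[y, x])
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and occupied[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                clusters.append(total)
    return sorted(clusters, reverse=True)
```

Summing the grid into coarser squares before clustering merges neighbouring clusters into bigger ones, which is exactly where the scale dependence described below comes from.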

That has significant implications, because clusters depend on the scale at which you view them. For example, a 1 kilometre grid sees New York City’s population as a cluster of 7 million, a 4 kilometre grid makes it 17 million, and the cluster identified with an 8 kilometre grid, which encompasses Boston and Philadelphia, has a population of 42 million. Take your pick.

The advantage is that this gives a more or less objective way to define a city. It also means we’ll need to reanalyse some of the fundamental properties that we ascribe to cities, such as growth. For example, the group has studied only a limited number of cities in the US, UK and Africa, but already says we’ll need to rethink Gibrat’s law, which states that a city’s growth rate is independent of its size.
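
For reference, Gibrat’s law of proportionate growth says that a city’s size $S_t$ evolves as

$$S_{t+1} = S_t\,(1 + \varepsilon_t),$$

where the growth shocks $\varepsilon_t$ are drawn independently of $S_t$. Taking logs, $\log S_t$ performs a random walk, which is why the law predicts a lognormal spread of city sizes.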

Come to think of it, Gibrat’s is a kind of weird law anyway. Which means there may be some low-hanging fruit for anybody else who wants to re-examine the nature of cities.

Ref: arxiv.org/abs/0808.2202: Laws of Population Growth

Carbon nanotubes successfully deliver cancer drugs (in mice)

Tuesday, August 19th, 2008


“A holy grail in cancer therapy is to deliver high doses of drug molecules to tumor sites for maximum treatment efficacy while minimizing side effects to normal organs,” write Zhuang Liu and colleagues at Stanford University before revealing the results of experiments they have carried out with a material that has the potential to be just such a holy grail.

The material in question is a single-walled carbon nanotube attached to a molecule of paclitaxel, a widely used chemotherapy drug.  This connection is made using a cleavable ester bond that makes the entire molecule water soluble.

Single-walled carbon nanotubes have the unusual property of tending to pass easily through the walls of the especially leaky blood vessels that form inside tumours. In this way, they end up being concentrated in tumours, carrying their deadly load of paclitaxel. Here the ester bond is cleaved, leaving the drug to do its work.

The result is a tenfold higher concentration of paclitaxel in the tumour than is possible with other means. That’s important because it means cancer can be treated with lower doses of chemotherapy for the body as a whole while ensuring that the tumour itself receives a high dose.

In tests, Liu says the technique works well in mice and the team is duly chuffed:

“To our knowledge, this is the first successful report that carbon nanotubes are used as drug delivery vehicles to achieve in vivo tumor treatment efficacy with mice.”

A similar kind of approach is already being investigated with quantum dots. But there’s an important difference. Carbon nanotubes are pure carbon, which is thought to be relatively benign. Quantum dots, on the other hand, are often made from heavy metals, which have potentially serious health implications.

It’s never worth getting your hopes up with these kinds of early results. Nevertheless, it’ll be interesting to see where this work leads.

Ref: arxiv.org/abs/0808.2070: Drug Delivery with Carbon Nanotubes for In Vivo Cancer Treatment

How to measure macroscopic entanglement

Monday, August 18th, 2008


If macroscopic objects become entangled, how can we tell? The usual way to measure entanglement on the microscopic level is to carry out a Bell experiment, in which the quantum states of two particles are measured. If the results of these measurements violate certain bounds, the particles are considered to be entangled.

These kinds of quantum measurements are not possible with macroscopic bodies but recent work suggests there may be other ways to spot entanglement. Vlatko Vedral  at the University of Leeds and pals outline one of these on the arXiv.

Their idea is based on the third law of thermodynamics, which states that the entropy at absolute zero depends only on the degeneracy of the ground state. This in turn implies that the specific heat capacity of a material must asymptotically approach zero as the temperature gets closer to absolute zero. But if particles within the material were entangled, Vedral and pals say, this would not be the case.
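
The step linking the third law to heat capacity is a standard thermodynamic identity rather than anything specific to this paper. The entropy at temperature $T$ is

$$S(T) = S(0) + \int_0^T \frac{C(T')}{T'}\,\mathrm{d}T',$$

and for the integral to converge at its lower limit, leaving $S(T)$ finite, the heat capacity $C(T')$ must vanish as $T' \to 0$, for instance as a power law $C \propto T'^{\alpha}$ with $\alpha > 0$.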

That kind of thinking suggests a straightforward experiment: simply measure the heat capacity of a material as its temperature drops to zero. If it doesn’t asymptotically approach zero, then you’ve got some entanglement on your hands.

Best of all, measuring heat capacity is a standard technique, so there’s no reason this can’t be done pronto.

Ref: arxiv.org/abs/quant-ph/0508193: Heat Capacity as a Witness of Entanglement

Sand ‘n’ Sun (part 2)

Sunday, August 17th, 2008

More highlights from the physics arXiv this week:

Injection of Short-Lived Radionuclides into the Early Solar System from a Faint Supernova with Mixing-Fallback

Iron Behaving Badly: Inappropriate Iron Chelation as a Major Contributor to the Aetiology of Vascular and Other Progressive Inflammatory and Degenerative Diseases

Quantum Algorithm Design using Dynamic Learning

A Biophysical Model of Prokaryotic Diversity in Geothermal Hot Springs

Radiation Damage in Biological Material: Electronic Properties and Electron Impact Ionization in Urea

The Pioneer Anomaly and a Machian Universe

Sand ‘n’ sun (part 1)

Saturday, August 16th, 2008

The best of the rest from the physics arXiv:

The NuMoon Experiment: First Results

The Role of Microtubule Movement in Bidirectional Organelle Transport

Links Between Traumatic Brain Injury and Ballistic Pressure Waves Originating in the Thoracic Cavity and Extremities

Testing the Dark-Energy-Dominated Cosmology by the Solar-System Experiments

Shelf Space Strategy in Long-Tail Markets

Image Steganography, a New Approach for Transferring Security Information

The puzzling beauty of Abelian sandpiles

Friday, August 15th, 2008


Pour real sand, a grain at a time, onto a flat surface and the result is a rather dull pyramidal shape. But in the mathematical world, the result is a little different.

The image above comes from a theoretical model called an Abelian sandpile. It is produced by dropping some 200,000 grains onto a single point on a flat grid and distributing them according to a set of toppling rules.

For example, a point on the grid can hold no more than three grains. If a fourth lands, the point topples, sending one grain to each of its neighbours, and any neighbour pushed past the limit topples in turn, avalanche style.
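
Those rules take only a few lines to simulate. Here’s a minimal sketch in Python (mine, not the authors’ code); thanks to the Abelian property, dropping all the grains at once and toppling in bulk produces the same final pattern as adding them one at a time:

```python
import numpy as np

def abelian_sandpile(grains=200_000, size=401):
    """Drop `grains` grains on the centre cell, then topple until stable."""
    grid = np.zeros((size, size), dtype=np.int64)
    grid[size // 2, size // 2] = grains
    while (grid >= 4).any():
        topples = grid // 4              # how many times each site topples
        grid -= 4 * topples              # a toppling site sheds four grains...
        grid[1:, :]  += topples[:-1, :]  # ...one to each of its
        grid[:-1, :] += topples[1:, :]   #    four neighbours
        grid[:, 1:]  += topples[:, :-1]
        grid[:, :-1] += topples[:, 1:]
        # grains toppled over the boundary simply fall off the grid
    return grid
```

Colour the stable grid by its height, zero to three grains per site, and a pattern like the one above emerges.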

This is a relatively new discipline: Abelian sandpile models were only discovered in 1990 by Deepak Dhar at the Tata Institute of Fundamental Research in Mumbai, so people are still trying to characterise them.

This pattern was produced by Dhar and colleagues, who are obviously captivated by its beauty and complexity but puzzled by how to analyse it. They say that simpler, related patterns seem to have an eightfold symmetry. But this one has them stumped. “It has not been possible to characterize [this pattern] so far,” they say.

That looks like an interesting puzzle. There’s work here for anyone who needs it.

Ref: arxiv.org/abs/0808.1732: Pattern Formation in Growing Sandpiles

Graphene quantum computers could be built with today’s technology

Thursday, August 14th, 2008


Is there anything graphene cannot do?

The great graphene gold rush continues today with the news that graphene nanoribbons could be the key ingredient of the next generation of quantum computers.

The trick with quantum computing is to use qubit-carrying particles that are easy to manipulate, so that their qubits can be written and read; that interact with each other, so that the qubits can be processed in logic gates; but that are robust, in the sense that they do not easily interact with the environment, so that data isn’t needlessly lost.

Photons are the current darlings of the quantum computing crowd because they do not interact easily with the environment and can be relatively easily manipulated themselves (although getting photons to interact with each other is hard).

But electron spins are also a good prospect because they can be easily controlled and interact readily with each other. Their downside is that it is hard to insulate them from stray magnetic and electric fields in the environment, so storing them is hard.

Now it looks as if graphene nanoribbons may come to the rescue. Guo-Ping Guo and pals from the University of Science and Technology of China in Hefei say that Z-shaped graphene ribbons can easily store electrons in the corners of their Zs, where they can be read and written to. And by placing two Zs close to each other on a graphene strip, the electrons can also be made to interact with each other.
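
The post doesn’t spell out how two trapped electrons compute, but the standard spin-qubit recipe, which proposals like this one follow in outline, is to switch on an exchange coupling between neighbouring spins:

$$H(t) = J(t)\,\mathbf{S}_1 \cdot \mathbf{S}_2$$

Pulsing the coupling $J(t)$ for just the right time implements a $\sqrt{\mathrm{SWAP}}$ gate, which together with single-spin rotations is universal for quantum computation.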

Materials scientists have recently worked out how to make Z-shaped graphene reliably in the lab, so all the ingredients are in place for a test device to be knocked up shortly.

As Guo-Ping Guo and buddies put it: “Due to recent achievement in production of graphene nanoribbon, this proposal may be implementable within the present techniques.”

Ref: arxiv.org/abs/0808.1618: Quantum computation with graphene nanoribbon

Solar systems like ours likely to be rarer than we thought

Wednesday, August 13th, 2008


Astronomers, to their obvious delight, have discovered some 250 planetary systems beyond our own, many of them with curious properties. In particular, the discovery of several “hot Jupiters”, gas giants that orbit close to their parent stars, challenges our theories of planet formation. The thinking is that gas giants can only form far away from stars, because gas and dust simply gets blown away from the inner regions.

Now Edward Thommes from the University of Guelph in Canada and pals think they know what must be happening. One idea is that gas giants migrate after they have formed. By performing a detailed numerical simulation of planet formation and repeating it many times with different starting conditions, Thommes and co say this looks like a likely scenario. In fact, their data indicates that gas giant migration must be a common occurrence.

But the data also has implications for us. A migrating gas giant sweeps away all in its path and that means that solar systems like ours are likely to be rare.

As Thommes and friends put it: “All of this leads us to predict that within the diverse ensemble of planetary systems, ones resembling our own are the exception rather than the rule.”

Shame!

Ref: arxiv.org/abs/0808.1439: Gas Disks to Gas Giants: Simulating the Birth of Planetary Systems

The physics of skin vision

Tuesday, August 12th, 2008


Most animals use optical systems to form images, but a substantial number rely on optics-less cutaneous vision, or skin vision. And while computer scientists have spent a good deal of time and effort trying to reproduce the former, how many will even have heard of skin vision?

So a systematic investigation of this kind of imaging is long overdue, argue Leonid Yaroslavsky and mates from Tel Aviv University in Israel on the arXiv today.

What exactly are we talking about here? Yaroslavsky and pals give a list of examples from the natural world, including:

The ability of some plants to orient their leaves or flowers towards the sun

Cutaneous photoreception in reptiles

“Pit organs” of vipers, located near their normal eyes, which are sensitive to infra-red radiation and contain no optics

They even mention one or two anecdotal examples in humans, to which I might add the common experience of knowing where the sun is with your eyes closed, by feeling its heat alone.

Skin vision has a number of advantages over conventional optics. Since it requires no lenses, skin vision can be adapted to virtually any type of radiation at any wavelength. It can work on almost any surface, and its resolution is determined by the number of sensors rather than by diffraction limits.

Equally, there are significant disadvantages, chief among them being that a lens creates an image using no processing power at all, whereas skin vision needs substantial post-processing to produce an image.

In fact, the trade-off between lenses and optics-less vision systems seems to be between simplicity of design and computational complexity. As Yaroslavsky puts it:

What a lens does in parallel and at the speed of light, optics-less vision must replace by computations in neural machinery, which is slower and requires high energy (food) consumption.
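
To get a feel for how much work that neural machinery has to do, here’s a toy model in Python (my own illustration, not the authors’ setup): each lensless sensor sums the whole scene through its own directional sensitivity, and recovering the scene becomes a linear inverse problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 16

# Each lensless sensor sees a weighted sum of every source in the scene;
# the weights stand in for a fixed directional sensitivity profile.
A = rng.uniform(0.1, 1.0, size=(n_sensors, n_sources))

scene = rng.uniform(0.0, 1.0, size=n_sources)            # true source brightnesses
readings = A @ scene + rng.normal(0.0, 0.01, n_sensors)  # noisy sensor outputs

# A lens would hand us `scene` directly; without one, the mixing must be
# inverted computationally, here by least squares.
estimate, *_ = np.linalg.lstsq(A, readings, rcond=None)
print(f"max reconstruction error: {np.max(np.abs(estimate - scene)):.3f}")
```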

But where does the crossover occur that makes one type of vision better than the other? This looks to be a field too important and potentially too useful to be overlooked, by biologists and computer scientists alike.

Ref: arxiv.org/abs/0808.1225: Optics-less Smart Sensors and a Possible Mechanism of Cutaneous Vision in Nature