Americas Settled 15,000 Years Ago, Study Says

Stefan Lovgren for National Geographic News March 13, 2008

A consensus is emerging in the highly contentious debate over the colonization of the Americas, according to a study that says the bulk of the region wasn't settled until as late as 15,000 years ago.

Researchers analyzed both archaeological and genetic evidence from several dozen sites throughout the Americas and eastern Asia for the paper.

"In the past archaeologists haven't paid too much attention to molecular genetic evidence," said lead author Ted Goebel, an archaeologist at Texas A&M University in College Station.

"We have brought together two different fields of science, and it looks like they are coming up with the same set of answers."

The article, which is published in tomorrow's issue of the journal Science, shows that the first Americans came from a single Siberian population and ventured across the Bering land bridge connecting Asia and North America about 22,000 years ago.

The group got stuck in Alaska because of glacial ice, however, so humans probably didn't migrate down into the rest of the Americas until after 16,500 years ago, when an ice-free corridor in Canada opened up.

Clovis Not First

Scientists have long agreed that the first Americans came from northeast Asia, according to Goebel.

But the new article—which analyzed genetic and archaeological evidence from 43 sites, including a dozen sites in Asia—better pins down the makeup of the first Americans.

Genetic evidence, for instance, points to a founding population of less than 5,000 individuals.

Some geneticists had also previously suggested that the migration across the land bridge could have occurred as early as 30,000 years ago.

"Now there seems to be consensus among those studying mitochondrial DNA and [chromosome records] of modern native Americans that it happened pretty late, after the last glacial maximum, maybe as late as 15,000 calendar years ago," Goebel said.

Meanwhile, archaeologists for years had considered sites belonging to the so-called Clovis culture, which dates back 13,000 years, to represent evidence of the first Americans.

The Clovis culture was named after flint spearheads found in the 1930s at a site in Clovis, New Mexico. Clovis sites have been identified throughout the contiguous United States as well as in Mexico and Central America.

But several sites, from Wisconsin to Monte Verde in Chile, have been discovered in recent years that predate Clovis by at least a thousand years.

"There probably has to have been some time before Clovis in which people were here, but they didn't leave much of a record behind because there just weren't that many people," Goebel said.

Coastal Route

Archaeological evidence shows that there were people occupying the Asian side of the Bering land bridge area as early as 30,000 years ago.

"That tells us that once early modern humans spread out of Africa around 50,000 years ago and colonized temperate Eurasia, it wasn't very long before they had developed the technology and the skills needed to be able to make a go of it in the Arctic," Goebel said.

Modern humans spread across the land bridge about 22,000 years ago, according to the new article.

But then the group got stuck for up to 5,000 years, blocked by thick ice sheets across Canada.

It was only when the ice had melted sufficiently that humans began to spread south, either along the coast or through an interior corridor in western Canada, the authors say.

"That might have been the bottleneck that kept people from draining south from Alaska into temperate North America," said Goebel, adding that geological evidence suggests the Pacific coastal corridor would have become ice-free perhaps as early as a thousand years before the interior corridor.

"This suggests that the first Americans may have spread through the New World along a coastal route," he said.

Henry Harpending is an anthropologist and population geneticist at the University of Utah in Salt Lake City who was not involved in the study.

He agreed that there is a consensus emerging among researchers studying the first Americans.

"But there are still outstanding questions," he said.

For example, there are some "puzzling anomalies" in the Alaskan archaeological record dating back to before the glacial melt, he pointed out.

And there are several possible reasons other than ice why people did not venture south earlier, including a "ferocious army of predators" living in North America that might have had a role in keeping humans away.

"We all have open minds, and we will leave them open," Harpending said.

Archaeological sensation in Austria: scientists from the University of Vienna unearth the earliest evidence of Jewish inhabitants in Austria

Archaeologists from the Institute of Prehistory and Early History of the University of Vienna have found an amulet inscribed with a Jewish prayer in a Roman child’s grave dating back to the 3rd century CE at a burial ground in the Austrian town of Halbturn.

The 2.2-centimeter-long gold scroll represents the earliest sign of Jewish inhabitants in present-day Austria.

This amulet shows that people of Jewish faith lived in what is today Austria since the Roman Empire. Up to now, the earliest evidence of a Jewish presence within the borders of Austria has been letters from the 9th century CE. In the areas of the Roman province of Pannonia that are now part of Hungary, Croatia and Serbia, gravestones and small finds attest to Jewish inhabitants even in antiquity. Jews have been settling in all parts of the ancient world at the latest since the 3rd century BCE. Particularly following the second Jewish Revolt against the Roman Empire, the victorious Romans sold large numbers of Jews as slaves to all corners of the empire. This, coupled with voluntary migration, is how Jews also might have come to present-day Austria.

Child’s grave

The one- or two-year-old child, who presumably wore the silver amulet capsule around its neck, was buried in one of around 300 graves in a Roman cemetery which dates back to the 2nd to 5th century CE and is situated next to a Roman estate ("villa rustica"). This estate was an agricultural enterprise that provided food for the surrounding Roman towns (Carnuntum, Győr, Sopron).

The gravesite, discovered in 1986 in the region of Seewinkel, around 20 kilometres from Carnuntum, was completely excavated between 1988 and 2002 by a team led by Falko Daim, who is now General Director of the Roman-Germanic Central Museum in Mainz, with the financial backing of the Austrian Science Fund FWF and the Austrian state of Burgenland. All in all, more than 10,000 individual finds were assessed, most notably pieces of glass, shards of ceramic and metal finds. The gold amulet, whose inscription was incomprehensible at first, was only discovered in 2006 by Nives Doneus from the Institute of Prehistory and Early History of the University of Vienna.

The inscription on the amulet is a Jewish prayer

ΣΥΜΑ ΙΣΤΡΑΗΛ ΑΔΩΝΕ ΕΛΩΗ ΑΔΩΝ Α: "Hear, O Israel! The Lord is our God, the Lord is one."

Greek script, Hebrew language

Greek is common in amulet inscriptions, although Latin and Hebrew amulet inscriptions are also known. In this case, the scribe's hand is clearly familiar with Greek. However, the inscription is Greek in appearance only, for the text itself is nothing other than a Greek transcription of the common Jewish prayer from the Old Testament (Deuteronomy 6:4): "Hear, O Israel! The Lord is our God, the Lord is one."

Amulet to protect against demons

Other non-Jewish amulets have been found in Carnuntum. One gold- and three silver-plated amulets with magical texts were found in a stone sarcophagus unearthed west of the camp of the Roman legion, including one beseeching Artemis to intervene against the migraine demon, Antaura. Amulets have also been found in Vindobona and the Hungarian part of Pannonia. What is different about the Halbturn gold amulet is its Jewish inscription: it invokes the central confession of the Jewish faith rather than magic formulae.

The gold-plated artefact from Halbturn can be viewed from 11 April 2008 onwards as part of the "The Amber Road -- Evolution of a Trade Route" exhibition in the Burgenland State Museum in Eisenstadt.

Genetic counselors turn to unconventional counseling to meet demand for genetic testing

Imagine receiving genetic test results for a disease you could develop later in life without having anyone with whom to discuss your options for managing the risk. That’s becoming a common occurrence as people turn to the Internet and other outlets for genetic testing without genetic counseling. In an effort to broaden accessibility to genetic counseling, researchers are exploring non-conventional counseling methods that challenge traditional approaches.

“The delivery of genetic test results for a disease like cancer can trigger a range of emotions and can be more distressing than anticipated, particularly when there’s been no counseling and the results are ‘positive’,” explains Beth N. Peshkin, MS, CGC, senior genetics counselor at Lombardi Comprehensive Cancer Center, part of Georgetown University Medical Center, and educational director of the Jess and Mildred Fisher Center for Familial Cancer Research. “While in-person genetic counseling is ideal, it’s not convenient for people who live in rural areas or don’t have access to an academic center.”

According to Peshkin, genetic counseling and testing, particularly for adult onset conditions, is a trend that will continue to grow as additional genes are identified and as such testing diffuses into mainstream clinical care. Telephone counseling has been utilized with increased frequency despite a lack of data about its efficacy and concern about its use as a substitute for face-to-face contact with patients.

“In anticipation of this increased demand, it is imperative we find alternatives to traditional genetic susceptibility counseling and that we develop and evaluate these possible options now,” Peshkin explains. “A successful alternative would be one that effectively delivers information but allows greater accessibility, such as telephone counseling.”

To address these issues, Peshkin and her colleagues have launched a randomized clinical trial, the largest to date, at Lombardi to evaluate telephone genetic counseling versus in-person (standard) genetic counseling among women at high risk of carrying a BRCA1/2 mutation. The study is outlined in the Spring edition of the journal Genetic Counseling posted online today.

“Many of us favor face-to-face counseling, but the reality is the telephone may allow us to reach more people, more efficiently,” says Peshkin. “It makes sense to develop interventions that parallel the traditional model while extending its reach and deliverability.”

Peshkin points to the abundant clinical data on the epidemiology of BRCA1/2 and the efficacy of various management strategies. Also, evidence attests to the efficacy of traditional genetic counseling at increasing knowledge, prospectively improving the accuracy of perceived risk, and increasing the awareness of the risks and benefits of testing.

“Patients appear to be highly satisfied with the traditional format of comprehensive genetic counseling so a study among individuals undergoing BRCA1/2 testing is an ideal population on which to evaluate alternative models of counseling,” Peshkin says.

Loss of egg yolk genes in mammals and the origin of lactation and placentation

If you are reading this, you did not start your life by hatching from an egg. This is one of the many traits that you share with our mammalian relatives. A new paper in this week’s PLoS Biology explores the genetic changes that led mammals to feed their young via the placenta and with milk, rather than via the egg, and finds that these changes occurred fairly gradually in our evolutionary history. The paper shows that milk-protein genes arose in a common ancestor of all existing mammalian lineages and preceded the loss of the genes that encoded egg proteins.

There are three living types of mammals: placental mammals (you, me, dogs, sheep, tigers, etc.), marsupial mammals (found in Australasia and South America, including kangaroos and possums), and monotremes (the duck-billed platypus and two species of echidna). The reproductive strategies of these three groups are very different. Placental mammals have long pregnancies and complicated placentas that provide nourishment to the embryo, followed by a relatively short period of lactation. Marsupials have a simpler form of placenta and much shorter pregnancies, followed by an extended period where the offspring is fed milk that changes in composition to meet the baby’s altering nutritional needs. Monotremes—once a diverse group, but now restricted both in species number and distribution—have a much more reptilian beginning, as they lay eggs filled with yolk. While they do feed their young with milk, it is secreted onto a patch of skin rather than from a teat. How did these different strategies arise from our reptilian ancestors?

A new paper by David Brawand, Walter Wahli, and Henrik Kaessmann investigates the transition in offspring nutrition by comparing the genes of representatives of these three different mammalian lineages with those of the chicken—an egg-laying, milkless control. The authors found that there are similar genetic regions in all three mammalian lineages, suggesting that the genes for casein (a protein found in milk) arose in the mammalian common ancestor between 200 and 310 million years ago, prior to the evolution of the placenta.

Eggs contain a protein called vitellogenin as a major nutrient source. The authors looked for the genes associated with the production of vitellogenin, of which there are three in the chicken. They found that while monotremes still have one functional vitellogenin gene, in placental and marsupial mammals, all three have become pseudogenes (regions of the DNA that still closely resemble the functional gene, but which contain a few differences that have effectively turned the gene off). The gene-to-pseudogene transitions happened sequentially for the three genes, with the last one losing functionality 30-70 million years ago.
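The gene-to-pseudogene transition described above can be illustrated with a toy example (the sequences below are hypothetical, not the study's data): a single point substitution can introduce a premature stop codon that truncates the protein, effectively switching the gene off while the rest of the sequence still closely resembles the functional gene.

```python
# Toy illustration of pseudogenization: one substitution creates a
# premature stop codon, so translation halts early and the gene product
# is lost even though most of the sequence is unchanged.

STOP = {"TAA", "TAG", "TGA"}  # the three standard stop codons

def translate_until_stop(dna):
    """Return the number of codons read before hitting a stop codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    n = 0
    for codon in codons:
        if codon in STOP:
            break
        n += 1
    return n

functional = "ATGGCTGAACGTTGCAAATAA"  # toy gene: 6 codons, then a stop
pseudo     = "ATGGCTTAACGTTGCAAATAA"  # one G->T change: premature stop

print(translate_until_stop(functional))  # 6
print(translate_until_stop(pseudo))      # 2
```

In a real genome, comparing such reading-frame disruptions across species is one way the timing of gene loss can be estimated.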

Therefore, mammals already had milk before they stopped laying eggs. Lactation reduced dependency on the egg as a source of nutrition for developing offspring, and the egg was abandoned completely in the marsupial and placental mammals in favor of the placenta. This meant that the genes associated with egg production gradually mutated, becoming pseudogenes, without affecting the fitness of the mammalian lineages.

Citation: Brawand D, Wahli W, Kaessmann H (2008) Loss of egg yolk genes in mammals and the origin of lactation and placentation. PLoS Biol 6(3): e63. doi:10.1371/journal.pbio.0060063

Blood disease protects against malaria in an unexpected way

NEW YORK, March 17, 2008 – Children with an inherited blood disorder called alpha thalassemia make unusually small red blood cells, which usually causes only a mild form of anemia. Now, researchers have discovered that this disorder has a benefit: it can protect children against one of the world’s greatest killers, malaria, according to a new study.

“We made the surprising finding that packaging your hemoglobin in smaller amounts in more cells is an advantage against malaria,” says Karen Day, Ph.D., Professor and Chairman of the Department of Medical Parasitology at NYU School of Medicine, who led the research with colleagues at the University of Oxford. Hemoglobin is the oxygen-carrying protein in red blood cells.

The new research shows how children with a mild form of alpha thalassemia are protected against life-threatening malarial anemia. The study, published in the March issue of the journal PLoS Medicine, proposes an answer to a biological puzzle that first emerged more than 50 years ago.

Some 800 children living in Papua New Guinea participated in the study. Malaria is endemic in Papua New Guinea and 68 percent of children living there have alpha thalassemia. Dr. Day and her then-Ph.D. student Freya J.I. Fowkes, and colleagues from the University of Oxford, Papua New Guinea Institute of Medical Research, and Swansea University, show that an attack of severe malaria causes the loss of one-third to one-half of the total number of red blood cells, which number in the trillions per liter of blood. Children with mild alpha thalassemia tolerated this massive loss because they started out with 10 to 20 percent more red blood cells than unaffected children.

“It is really remarkable and so simple. Children with alpha thalassemia have adapted to the loss of red blood cells associated with malarial disease by making more of these cells with less hemoglobin,” says Dr. Day. “So, these children do better because they end up with more hemoglobin overall when they have a malaria attack compared to normal children,” says Dr. Day.

Malaria has been a scourge for thousands of years. The parasite causing the disease spends part of its life inside human red blood cells, which are eventually destroyed. Severe anemia occurs in some children with malaria when blood cell loss leads to hemoglobin levels of less than 50 grams per liter.
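The arithmetic behind this protection can be sketched with hypothetical numbers (illustrative only, not taken from the study). If a severe attack destroys roughly the same absolute number of red cells in each child, every destroyed cell costs less hemoglobin when the per-cell hemoglobin is lower, so a child who packages hemoglobin into more, smaller cells ends up with more hemoglobin overall:

```python
# Sketch of why more cells with less hemoglobin each is an advantage.
# All counts and per-cell hemoglobin values below are made up for
# illustration; only the 50 g/L severe-anemia threshold is from the article.

PG_PER_G = 1e-12                 # picograms per gram
SEVERE_ANEMIA_G_PER_L = 50       # threshold cited in the article

def hb_after_attack(cells_per_l, hb_pg_per_cell, cells_destroyed_per_l):
    """Total hemoglobin (g/L) remaining after an attack destroys cells."""
    remaining = cells_per_l - cells_destroyed_per_l
    return remaining * hb_pg_per_cell / 1.0 * PG_PER_G / PG_PER_G * PG_PER_G

# Hypothetical children: the thalassemic child has ~15% more cells,
# each carrying less hemoglobin. Both lose the same number of cells.
normal = hb_after_attack(5.0e12, 30.0, 3.5e12)        # ~45 g/L: severe anemia
thalassemic = hb_after_attack(5.75e12, 24.0, 3.5e12)  # ~54 g/L: above threshold

print(f"normal child:      {normal:.0f} g/L, severe anemia: {normal < SEVERE_ANEMIA_G_PER_L}")
print(f"thalassemic child: {thalassemic:.0f} g/L, severe anemia: {thalassemic < SEVERE_ANEMIA_G_PER_L}")
```

With these toy figures the unaffected child drops below the 50 g/L threshold while the thalassemic child stays above it, matching the mechanism the researchers propose.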

Malaria afflicts hundreds of millions of people, causing up to 2 million deaths every year in Africa and Asia. Many of its victims are young children. In regions of the world where malaria is endemic, mutations have arisen in human populations that allow people to survive. Sickle cell trait, for example, protects against malaria.

Nearly sixty years ago the renowned evolutionary biologist J.B.S. Haldane postulated that the thalassemias were common in human populations because they protected against malaria. Alpha thalassemia is common in Asia, the Mediterranean and Melanesia where malaria is or was prevalent. In the mid 1990s researchers working on the north coast of Papua New Guinea proved that children with mild alpha thalassemia, who inherit mutations in the “alpha” part of hemoglobin genes from each parent, were protected against malaria. These children were 60 percent less likely to get severe malarial anemia than normal children; however, the mechanism of this protection was unclear.

Dr. Day and colleagues based their new study on this same population of children. “We are proposing an unexpected mechanism of protection against severe malarial anemia,” says Dr. Day. “We show that alpha thalassemia is giving the child a hematological advantage by making more red blood cells.”

According to the National Human Genome Research Institute, part of the National Institutes of Health, most individuals with alpha thalassemia have milder forms of the disease, with varying degrees of anemia. The most severe form of alpha thalassemia, which mainly affects individuals of Southeast Asian, Chinese and Filipino ancestry, results in fetal or newborn death.

The authors of the study are: Karen P. Day; Freya J.I. Fowkes, who was a Ph.D. student in Dr. Day’s laboratory and is now at the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia; Angela Allen and David J. Weatherall, University of Oxford; Steve Allen, University of Swansea, and Michael Alpers, Papua New Guinea Institute of Medical Research.

The study was supported by grants from the Wellcome Trust, the European Community and the Medical Research Council, and funds from NYU School of Medicine.

Severe West Nile infection could lead to lifetime of symptoms

Most people who suffer severe infection with West Nile virus still experience symptoms years after infection, and many may continue to experience these symptoms for the rest of their lives, according to research presented today (March 17) at the 2008 International Conference on Emerging Infectious Diseases in Atlanta, Georgia.

“What we are finding is that about 60% of people, one year after severe infection with West Nile, still report symptoms,” says Kristy Murray of the University of Texas Health Science Center at Houston, a lead researcher on the study.

Supported by a grant from the National Institutes of Health (NIH), Murray and her colleagues have been conducting a long-term, in-depth study of people in the Houston, Texas area who have been diagnosed with West Nile. They monitored 108 patients over a 5-year period, checking in every 6 months to record both subjective and objective clinical outcomes and rates of recovery.

Persistent symptoms of West Nile infection still plagued 60% of patients in the study at the end of the first year. Moreover, Murray and her colleagues discovered that most, if not all, recovery appeared to take place in the first two years following infection.

“Once they hit two years it completely plateaus. If a patient has not recovered by that time, it is very likely they will never recover,” says Murray. Approximately 40% of patients in the study continued to experience symptoms 5 years after infection. Some long-term damage included memory loss, loss of balance and tremors.

Approximately 80 percent of people who are infected with West Nile do not experience symptoms. This study only included patients with symptoms, which can range from mild fatigue and weakness to seizures, paralysis and tremors. Half the patients experienced encephalitis due to infection and another third presented with meningitis. Murray and her colleagues noted a significant difference in recovery rates.

“Those patients with encephalitis were less likely to recover than those who had meningitis or uncomplicated fever,” says Murray.

Another outcome of severe West Nile infection was depression. At the one-year follow-up, 31% of the patients reported new-onset depression. Using objective measurements, the researchers determined that 75% of those cases met the definition of clinical depression.

“West Nile virus infection can result in significant long-term clinical sequelae and cognitive and functional impairment, particularly in those who present with encephalitis,” says Murray.

Bonn scientists discover new hemoglobin type

Instruments falsely report anoxia in affected people

This release is available in German.

Scientists at the University of Bonn have discovered a new rare type of haemoglobin. Haemoglobin transports oxygen in the red blood corpuscles. When bound to oxygen it changes colour. The new haemoglobin type appears optically to be transporting little oxygen. Measurements of the blood oxygen level therefore present a similar picture to patients suffering from an inherited cardiac defect. After examining two patients, the scientists now understand that the new type of haemoglobin distorts the level of oxygen measured. The scientists have named the type 'Haemoglobin Bonn'. They have published their discovery in the current issue of the scientific journal 'Clinical Chemistry'.


Haemoglobin transports oxygen to the body's cells and in return picks up carbon dioxide there. In doing so it changes colour. With an optical measuring instrument, known as a pulse oximeter, you can therefore measure whether there is enough oxygen present in the blood. The cause of anoxia can be an inherited cardiac defect, for example.

This was also the tentative diagnosis in the case of a four-year-old boy who was admitted to the Paediatric Clinic of the Bonn University Clinic. However, after a thorough examination, the paediatricians Dr. Andreas Hornung and his colleagues did not find any cardiac defect. A low saturation of oxygen had also been previously found in the blood of the boy's 41-year-old father, again without apparent signs of a cardiac defect.

Dr. Berndt Zur from Professor Birgit Stoffel-Wagner’s team at the Institute of Clinical Chemistry and Pharmacology examined the boy's and the father's haemoglobin. He eventually realised that they were dealing with a new type of the blood pigment. 'The pulse oximeter is put on a finger as a clip and shines infrared light through it,' he explains. 'Haemoglobin absorbs infrared light in the absence of oxygen. The lower the content of oxygen in the blood, the less light penetrates the finger and reaches the sensor of the oximeter.' But Haemoglobin Bonn absorbs a bit more infrared light than normal oxygen-saturated haemoglobin, even when bound to oxygen. 'That’s why, at first, we did not understand why the patients did not have any particular health problems,' Dr. Zur says.
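The oximeter behaviour described above can be sketched as a toy absorbance model (all extinction coefficients below are invented for illustration and do not come from the article). The instrument infers saturation from measured infrared absorbance assuming normal haemoglobin, so blood that absorbs extra infrared even when oxygenated, as Haemoglobin Bonn does, reads as falsely low saturation:

```python
# Toy model of a pulse oximeter misled by an atypical haemoglobin.
# Assumed infrared extinction values (arbitrary units, hypothetical):
EPS_OXY, EPS_DEOXY = 0.30, 1.00   # normal oxygenated / deoxygenated Hb
EPS_OXY_BONN = 0.45               # assumption: Hb Bonn absorbs more IR when oxygenated

def absorbance(saturation, eps_oxy=EPS_OXY, eps_deoxy=EPS_DEOXY):
    """Infrared absorbance of blood at a given true oxygen saturation."""
    return eps_oxy * saturation + eps_deoxy * (1.0 - saturation)

def inferred_saturation(measured_absorbance):
    """Saturation the oximeter reports, assuming *normal* haemoglobin."""
    return (EPS_DEOXY - measured_absorbance) / (EPS_DEOXY - EPS_OXY)

true_saturation = 0.98  # the patient is actually well oxygenated
a = absorbance(true_saturation, eps_oxy=EPS_OXY_BONN)  # Hb Bonn blood

print(f"reported saturation: {inferred_saturation(a):.0%}")  # falsely low
```

With these made-up coefficients the instrument reports a saturation well below the true 98%, mirroring how the boy and his father appeared hypoxic despite being healthy.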

Every human has two main heart ventricles. One pumps the blood through the arteries to the lungs, where the haemoglobin releases the carbon dioxide and takes on oxygen. The other one pumps the blood which is saturated with oxygen from the lungs to every cell in the body. Both ventricles must be separated by a wall in the heart, so that the oxygen-rich blood does not mix with the anoxaemic blood. But some people have a hole in this septum. In such cases, the pulse oximeter shows anoxia. Doctors therefore see this as a sign of a cardiac defect. Another cause is what is known as sleep apnoea syndrome. In the patients affected, breathing can cease for more than a minute. That is why the father of the 4-year-old received oxygen treatment at nights for some time. 'If we had known about Haemoglobin Bonn before, father and son could have been spared the fear of a cardiac defect or of sleep apnoea syndrome,' Dr. Zur explains.

First ‘rule’ of evolution suggests that life is destined to become more complex

Scientists have revealed what may well be the first pervasive ‘rule’ of evolution.

In a study published in the Proceedings of the National Academy of Sciences researchers have found evidence which suggests that evolution drives animals to become increasingly more complex.

Looking back through the last 550 million years of the fossil catalogue to the present day, the team investigated the different evolutionary branches of the crustacean family tree.

They were seeking examples along the tree where animals evolved that were simpler than their ancestors.

Instead they found organisms with increasingly more complex structures and features, suggesting that there is some mechanism driving change in this direction.

“If you start with the simplest possible animal body, then there’s only one direction to evolve in – you have to become more complex,” said Dr Matthew Wills from the Department of Biology & Biochemistry at the University of Bath who worked with colleagues Sarah Adamowicz from the University of Waterloo (Canada) and Andy Purvis from Imperial College London.

“Sooner or later, however, you reach a level of complexity where it’s possible to go backwards and become simpler again. What’s astonishing is that hardly any crustaceans have taken this backwards route. Instead, almost all branches have evolved in the same direction, becoming more complex in parallel. This is the nearest thing to a pervasive evolutionary rule that’s been found. Of course, there are exceptions within the crustacean family tree, but most of these are parasites, or animals living in remote habitats such as isolated marine caves. For those free-living animals in the ‘rat-race’ of evolution, it seems that competition may be the driving force behind the trend. What’s new about our results is that they show us how this increase in complexity has occurred. Strikingly, it looks far more like a disciplined march than a milling crowd.”

Dr Adamowicz said: “Previous researchers noticed increasing morphological complexity in the fossil record, but this pattern can occur due to the chance origination of a few new types of animals.

“Our study uses information about the inter-relatedness of different animal groups – the ‘Tree of Life’ – to demonstrate that complexity has evolved numerous times independently.”

Like all arthropods, crustaceans’ bodies are built up of repeating segments. In the simplest crustaceans, the segments are quite similar - one after the other. In the most complex, such as shrimps and lobsters, almost every segment is different, bearing antennae, jaws, claws, walking legs, paddles and gills.

The American biologist Leigh Van Valen coined the phrase ‘Red Queen’ for the evolutionary arms race phenomenon. In Through the Looking-Glass Lewis Carroll’s Red Queen advises Alice that: “It takes all the running you can do, to keep in the same place.”

“Those crustacean groups going extinct tended to be less complex than the others around at the time,” said Dr Wills.

“There’s even a link between average complexity within a group and the number of species alive today.

“All organisms have a common ancestor, so that every living species is part of a giant family tree of life.”

Dr Adamowicz added: “With a few exceptions, once branches of the tree have separated they continue to evolve independently.

“Looking at many independent branches is similar to viewing multiple repeated runs of the tape of evolution.

“Our results apply to a group of animals with bodies made of repeated units. We must not forget that bacteria – very simple organisms – are among the most successful living things. Therefore, the trend towards complexity is compelling but does not describe the history of all life.”

This work was supported by a Beit Scientific Research Fellowship, a Natural Sciences & Engineering Research Council of Canada Scholarship, an Overseas Research Scholarship and a Biotechnology & Biological Sciences Research Council grant.

Solving the drug price crisis

Sarah H. Wright, News Office March 17, 2008

The mounting U.S. drug price crisis can be contained and eventually reversed by separating drug discovery from drug marketing and by establishing a non-profit company to oversee funding for new medicines, according to two MIT experts on the pharmaceutical industry.

Stan Finkelstein, M.D., senior research scientist in MIT's Engineering Systems Division, and Peter Temin, Elisha Gray II Professor of Economics, present their research and detail their proposal in their new book, "Reasonable Rx: Solving the Drug Price Crisis," published by Financial Times Press.

Finkelstein and Temin address immediate national problems--the rising cost of available medicines, the high cost of innovation and the 'blockbuster' method of selecting drugs for development--and predict worsening new ones, unless bold steps are taken.

"Drug prices in the United States are higher than anywhere else in the world. Right now, the revenues from those drugs finance research and development of new drugs. We propose to reduce prices, not at the expense of innovation, but by changing the way innovation is financed," said Temin, also the author of "Taking Your Medicine: Drug Regulation in the US."

"Nationally, if we keep the current structure, in 50 years only hedge fund managers will be able to afford prescription drugs. Drug development will focus on therapies for those small groups of people who can pay a thousand dollars a pill. With income distribution widening and insurance carriers already refusing some coverage, this would be a disaster," said Temin.

"Prescription drugs have been left out of previous efforts to reform the delivery of health care. New initiatives to expand coverage must include a plan to reduce the high cost of drugs," Finkelstein added.

The book, which draws on the researchers' expertise in the realms of medicine and economics, proposes eliminating the linkage between drug prices and the cost of drug discovery while financing innovation and addressing the needs of society.

Their first bold step is conceptual, recognizing that we all have a critical stake in the products of pharmaceutical research.

Next, drawing on recent history, they propose dividing drug companies into drug discovery/development firms and drug marketing/distribution firms, just as electric utility firms were separated into generation and distribution companies in the 1990s.

Following the utility model, Finkelstein and Temin propose establishing an independent, public, non-profit Drug Development Corporation (DDC), which would act as an intermediary between the two new industry segments -- just as the electric grid acts as an intermediary between energy generators and distributors.

The DDC also would serve as a mechanism for prioritizing drugs for development, noted Finkelstein.

"It is a two-level program in which scientists and other experts would recommend to decision-makers which kinds of drugs to fund the most. This would insulate development decisions from the political winds," he said.

Finkelstein and Temin's plan would also insulate drug development from the blockbuster mentality, which drives companies to invest in discovering a billion-dollar drug to offset their costs.

An example of the blockbuster mentality is developing a new drug for hypertension, one that varies only slightly from those already on the market, but that can bring in huge profits if aggressively marketed.

For Finkelstein, a physician, and Temin, an economist, societal needs for medicines are swiftly extending beyond national boundaries: Diseases affecting the developing world--afflicting people too poor to make drug development attractive for businesses--will soon affect health inside the United States.

"Global travel and climate change both require that U.S. drug development and innovation policy rethink the way drugs are developed, and for whom. Air travel, migrations, a global workforce--all these mean unusual diseases could become usual here," Temin noted.

Climate change also may affect which drugs their proposed DDC would select for funding.

"Especially in the southern states, tropical diseases are likely to increase with global warming, and people will need treatments for them. In our plan, the DDC would encourage research in advance of the market -- and, we hope, in advance of disaster," he said.

A version of this article appeared in MIT Tech Talk on March 19, 2008 (download PDF).

Fake diamonds help jet engines take the heat

COLUMBUS, Ohio -- Ohio State University engineers are developing a technology to coat jet engine turbine blades with zirconium dioxide -- commonly called zirconia, the stuff of synthetic diamonds -- to combat high-temperature corrosion.

The zirconia chemically converts sand and other corrosive particles that build up on the blade into a new, protective outer coating. In effect, the surface of the engine blade constantly renews itself.

Ultimately, the technology could enable manufacturers to use new kinds of heat-resistant materials in engine blades, so that engines will be able to run hotter and more efficiently.

Nitin Padture, professor of materials science and engineering at Ohio State, said he had military aircraft in mind when he began the project, while he was a professor at the University of Connecticut.

“In the desert, sand is sucked into the engines during takeoffs and landings, and then you have dust storms,” he said. “But even commercial aircraft and power turbines encounter small bits of sand or other particles, and those particles damage turbine blades.”

Jet engines operate at thousands of degrees Fahrenheit, and blades in the most advanced engines are coated with a thin layer of temperature-resistant, thermally-insulating ceramic to protect the metal blades. The coating -- referred to as a thermal-barrier coating -- is designed like an accordion to expand and contract with the metal.

The problem: When sand hits the hot engine blade it melts -- and becomes glass.

“Molten glass is one of the nastiest substances around. It will dissolve anything,” Padture said.

The hot glass chews into the ceramic coating. But the real damage happens after the engine cools, and the glass solidifies into an inflexible glaze on top of the ceramic. When the engine heats up again and the metal blades expand, the ceramic coating can’t expand, because the glaze has locked it in place. The ceramic breaks off, shortening the life of the engine blades.

In a recent issue of the journal Acta Materialia, Padture and his colleagues described how the new coating forces the glass to absorb chemicals that will convert it into a harmless -- and even helpful -- ceramic.

The key, Padture said, is that the coating contains aluminum and titanium atoms hidden inside zirconia crystals. When the glass consumes the zirconia, it also consumes the aluminum and titanium. Once the glass accumulates enough of these elements, it changes from a molten material into a stable crystal, and it stops eating the ceramic.

“The glass literally becomes a new ceramic coating on top of the old one. Then, when new glass comes in, the same thing will happen again. It’s like it’s constantly renewing the coating on the surface of the turbine,” Padture said.

Padture’s former university has applied for a patent on the technique that he devised for embedding the aluminum and titanium into the zirconia. He’s partnering with Inframat Corp., a nanotechnology company in Connecticut, to further develop the technology.

Padture stressed that the technology is in its infancy. He has yet to apply the coatings to complex shapes, and cost is a barrier as well: the process is energy-consuming.

But if that cost eventually came down and the technology matured, the payoff could be hotter engines that burn fuel more efficiently and create less pollution. Manufacturers would be able to use more sophisticated ceramics that boost the heat-resistance of engines. Eventually, technology could go beyond aircraft and power-generator turbines and extend to automobiles as well, Padture said.

His coauthors on the Acta Materialia paper included Ohio State doctoral student Aysegul Aygun, who is doing this work for her dissertation; former postdoctoral researcher Alexander Vasiliev, who is now at the Russian Academy of Sciences; and Xinqing Ma, a scientist at Inframat Corp.

This research was funded by the Office of Naval Research and Naval Air Systems Command.

Chinook Salmon Vanish Without a Trace

By FELICITY BARRINGER

SACRAMENTO — Where did they go?

The Chinook salmon that swim upstream to spawn in the fall, the most robust run in the Sacramento River, have disappeared. The almost complete collapse of the richest and most dependable source of Chinook salmon south of Alaska left gloomy fisheries experts struggling for reliable explanations — and coming up dry.

Whatever the cause, there was widespread agreement among those attending a five-day meeting of the Pacific Fisheries Management Council here last week that the regional $150 million fishery, which usually opens for the four-month season on May 1, is almost certain to remain closed this year from northern Oregon to the Mexican border. A final decision on salmon fishing in the area is expected next month.

As a result, Chinook, or king salmon, the most prized species of Pacific wild salmon, will be hard to come by until the Alaskan season opens in July. Even then, wild Chinook are likely to be very expensive in markets and restaurants nationwide.

“It’s unprecedented that this fishery is in this kind of shape,” said Donald McIsaac, executive director of the council, which is organized under the auspices of the Commerce Department.

Fishermen think the Sacramento River was mismanaged in 2005, when this year’s fish first migrated downriver. Perhaps, they say, federal and state water managers drained too much water or drained at the wrong time to serve the state’s powerful agricultural interests and cities in arid Southern California. The fishermen think the fish were left susceptible to disease, or to predators, or to being sucked into diversion pumps and left to die in irrigation canals.

But federal and state fishery managers and biologists point to the highly unusual ocean conditions in 2005, which may have left the fingerling salmon with little or none of the rich nourishment provided by the normal upwelling currents near the shore.

The life cycle of these fall-run Chinook salmon begins with birth and early weeks in cold river waters, followed by a downstream migration that deposits them in San Francisco Bay when they are a few inches long. Then, as their bodies adapt to saltwater, they migrate out into the ocean, where they live until they return to spawn, usually three years later.

One run of Sacramento salmon, the winter-run Chinook, is protected under the Endangered Species Act. But their meager numbers have held steady and appear to be unaffected by whatever ails the fall Chinook.

So what happened? As Dave Bitts, a fisherman based in Eureka in Northern California, sees it, the variables are simple. “To survive, there are two things a salmon needs,” he said. “To eat. And not to be eaten.”

Fragmentary evidence about salmon mortality in the Sacramento River in recent years, as well as more robust but still inconclusive data about ocean conditions in 2005, indicates that the fall Chinook smolts, or baby fish, of 2005 may have lost out on both counts. But biologists, fishermen and fishery managers all emphasize that no one yet knows anything for sure.

Bill Petersen, an oceanographer with the National Oceanic and Atmospheric Administration’s research center in Newport, Ore., said other stocks of anadromous Pacific fish — those that migrate from freshwater to saltwater and back — had been anemic this year, leading him to suspect ocean changes.

After studying changes in the once-predictable pattern of the Northern Pacific climate, Mr. Petersen found that in 2005 the currents that rise from the deeper ocean, bringing with them nutrients like phytoplankton and krill, were out of sync. “Upwelling usually starts in April and goes until September,” he said. “In 2005, it didn’t start until July.”

Mr. Petersen’s hypothesis about the salmon is that “the fish that went to sea in 2005 died a few weeks after getting to the ocean” because there was nothing to eat. A couple of years earlier, when the oceans were in a cold-weather cycle, the opposite happened — the upwelling was very rich. The smolts of that year were later part of the largest run of fall Chinook ever recorded.

But, Mr. Petersen added, many factors may have contributed to the loss of this season’s fish.

Bruce MacFarlane, another NOAA researcher who is based in Santa Cruz, has started a three-year experiment tagging young salmon — though not from the fall Chinook run — to determine how many of those released from the large Coleman hatchery, 335 miles from the Sacramento River’s mouth, make it to the Golden Gate Bridge. According to the first year’s data, only 4 of 200 reached the bridge.

Mr. MacFarlane said it was possible that a diversion dam on the upper part of the river, around Redding and Red Bluff, created calm and deep waters that are “a haven for predators,” particularly the pike minnow.

Farther downstream, he said, young salmon may fall prey to striped bass. There are also tens of thousands of pipes, large and small, attached to pumping stations that divert water.

Jeff McCracken, a spokesman for the federal Bureau of Reclamation, which is among the major managers of water in the Sacramento River delta, said that in the last 18 years, significant precautions have been taken to keep fish from being taken out of the river through the pipes.

“We’ve got 90 percent of those diversions now screened,” Mr. McCracken said. He added that two upstream dams had been removed and that the removal of others was planned. At the diversion dam in Red Bluff, he said, “we’ve opened the gates eight months a year to allow unimpeded fish passage.”

Bureau of Reclamation records show that annual diversions of water in 2005 were about 8 percent above the 12-year average, while diversions in June, the month the young Chinook smolts would have headed downriver, were roughly on par with what they had been in the mid-1990s.

Peter Dygert, a NOAA representative on the fisheries council, said, “My opinion is that we won’t have a definitive answer that clearly indicates this or that is the cause of the decline.”

Carolyn Marshall contributed reporting.

NC State Gene 'Knockout' Floors Tobacco Carcinogen

March 17, 2008

In large-scale field trials, scientists from North Carolina State University have shown that silencing a specific gene in burley tobacco plants significantly reduces harmful carcinogens in cured tobacco leaves.

The finding could lead to tobacco products – especially smokeless products – with reduced amounts of cancer-causing agents.

NC State's Dr. Ralph Dewey, professor of crop science, and Dr. Ramsey Lewis, assistant professor of crop science, teamed with colleagues from the University of Kentucky to knock out a gene known to turn nicotine into nornicotine. Nornicotine is a precursor to the carcinogen N-nitrosonornicotine (NNN). Varying percentages of nicotine are turned into nornicotine while the plant ages; nornicotine converts to NNN as the tobacco is cured, processed and stored.

The field tests in Kentucky, Virginia and North Carolina compared cured burley tobacco plants with the troublesome gene silenced and "control" plant lines with normal levels of gene expression. The researchers found a six-fold decrease in carcinogenic NNN in the genetically modified tobacco plants, as well as a 50 percent overall reduction in the class of harmful compounds called TSNAs, or tobacco-specific nitrosamines. TSNAs are reported to be among the most important tobacco-related compounds implicated in various cancers in laboratory experiments, Lewis said.

The research results were published online in Plant Biotechnology Journal.

Lewis and Dewey stress that the best way for people to avoid the risks associated with tobacco use is to avoid using tobacco products. But their findings show that targeted gene silencing can work as well in the field as it does on the lab bench.

"Creating a tobacco plant with fewer or no harmful compounds may also help with tobacco plants that are being used to create pharmaceuticals or other high-value products," Dewey said.

To get initial lines of plants with the troublesome gene silenced, the NC State researchers used a technique called RNA interference, in which genetic engineering is used to introduce a gene that inhibits nicotine demethylase function in the tobacco plant.

Dewey and Lewis have since developed tobacco lines with the same effect without using genetic engineering. They randomly inserted chemical changes, or mutations, into the tobacco genome of burley tobacco plants. They then searched for plants in which the nicotine demethylase gene was permanently impaired. The researchers are currently working to transfer this mutation to widely used tobacco varieties.

Dewey and Lewis add that nothing else in the plant changed – growth or resistance to insects or disease, for example – after they knocked out this specific gene.

While Lewis believes that varieties of burley tobacco with a silenced demethylase gene will exist within the next few years, the NC State researchers say burley tobacco has a number of other targets for their gene silencing method.

The research is sponsored by Philip Morris USA. Note to editors: An abstract of the paper follows.

"RNA Interference (RNAi)-Induced Suppression of Nicotine Demethylase Activity Reduces Levels of a Key Carcinogen in Cured Tobacco Leaves"

Authors: Ramsey S. Lewis and Ralph E. Dewey, North Carolina State University; Anne M. Jack, Lily Gavilano, Balazs Siminszky and Lowell Bush, University of Kentucky; Jerry Morris, Vincent Robert and Alec Hayes, Philip Morris USA

Published: Online Feb. 14 in Plant Biotechnology Journal

Abstract: Technologies for reducing the levels of tobacco product constituents that may contribute to unwanted health effects are desired. Target compounds include tobacco-specific nitrosamines (TSNAs), a class of compounds generated through the nitrosation of pyridine alkaloids during the curing and processing of tobacco. Studies have reported the TSNA N'-nitrosonornicotine (NNN) to be carcinogenic in laboratory animals. NNN is formed via the nitrosation of nornicotine, a secondary alkaloid produced through enzymatic N-demethylation of nicotine. Strategies to lower nornicotine levels in tobacco (Nicotiana tabacum L.) could lead to a corresponding decrease in NNN accumulation in cured leaves. The major nicotine demethylase gene of tobacco has recently been isolated. In this study, a large-scale field trial was conducted to evaluate transgenic lines of burley tobacco carrying an RNA interference (RNAi) construct designed to inhibit the expression of this gene. Selected transgenic lines exhibited a six-fold decrease in nornicotine content relative to untransformed controls. Analysis of cured leaves revealed a commensurate decrease in NNN and total TSNAs. The inhibition of nicotine demethylase activity is an effective means of decreasing significantly the level of a key defined animal carcinogen present in tobacco products.

Lithium chloride slows onset of skeletal muscle disorder

Study first to show bipolar drug could effectively treat inclusion body myositis

Irvine, Calif., March 18, 2008 -- A new UC Irvine study finds that lithium chloride, a drug used to treat bipolar disorder, can slow the development of inclusion body myositis, a skeletal muscle disease that affects the elderly.

In the study by scientists Frank LaFerla and Masashi Kitazawa, mice genetically engineered to have IBM demonstrated markedly better motor function six months after receiving daily doses of lithium chloride, compared with non-treated mice. The muscles in treated mice also had lower levels of a protein that the study linked to muscle inflammation associated with IBM.

These data are the first to show that lithium chloride is a potential IBM therapy.

“Lithium chloride is an approved drug for treating humans. We already know it is safe and can be used by people,” said LaFerla, professor of neurobiology and behavior at UCI and co-author of the study. “Given our findings, we believe a clinical trial that tests the effectiveness of lithium chloride on IBM patients should be conducted as soon as possible.”

Results of the study appear online this month in the journal Annals of Neurology.

IBM is the most common skeletal muscle disorder among people older than 50. People with IBM experience weakness, inflammation and atrophy of muscles in their fingers, wrists, forearms and quadriceps. There is no cure for IBM, nor is there an effective treatment, according to the National Institutes of Health.

LaFerla, a noted Alzheimer’s disease researcher, began studying IBM about 10 years ago after learning the disorders have similar tissue characteristics. In the brain, a buildup of phosphorylated tau protein leads to the development of tangles, one of the two lesions that are hallmarks of Alzheimer’s disease. High phospho-tau levels also are present in IBM, though patients do not experience dementia or memory loss. In a previous study, LaFerla found that lithium chloride reduced phospho-tau levels in mice genetically engineered to develop Alzheimer’s disease.

LaFerla and his research team then wondered: Could lithium chloride also reduce phospho-tau levels and symptoms in mice with IBM?

First, they sought to determine how the inflammation affects the skeletal muscle fibers. They injected the mice with a drug to trigger muscle inflammation, then put them on tiny treadmills to test their motor function. As expected, mice with inflammation could not keep up with the control mice, indicating reduced motor function. Examining their muscle tissue, the scientists discovered the mice with muscle inflammation also had higher levels of phospho-tau.

Through additional testing, they discovered an enzyme called GSK-3 beta was responsible for increasing the tau phosphorylation. Previous studies have shown that same enzyme to cause tau buildup in the Alzheimer’s brain.

Next, the scientists sought to block the accumulation of phospho-tau in the IBM mice with the goal of curbing motor function loss. Starting at six months of age, one group of mice was fed lithium chloride-laced food for six months, while a second group was fed regular food. At 12 months of age, mice in the first group performed on the treadmill as if they were six months old, while mice in the second group had reduced motor function. Lithium chloride, the scientists found, blocked the GSK-3 beta enzyme that caused higher levels of phospho-tau.

“The older animals were performing as if they were younger animals,” said Kitazawa, a postgraduate researcher of neurobiology and behavior at UCI and co-author of the study. “Lithium chloride was delaying their rate of decline.”

The scientists then sought evidence that their results in mice might translate to humans with IBM. They performed tests on human muscle tissue samples and found the GSK-3 beta enzyme again played a role in the phosphorylation of tau. That was not the case, though, in patients with other muscle disorders. “This suggests that our IBM mouse model may have the same skeletal muscle mechanism as in human cases,” LaFerla said.

Researcher Dan Trinh of UCI also worked on this study, which was funded by the National Institutes of Health.

Arctic pollution's surprising history

Study: Early explorers saw particulate haze in late 1800s

Scientists know that air pollution particles from mid-latitude cities migrate to the Arctic and form an ugly haze, but a new University of Utah study finds surprising evidence that polar explorers saw the same phenomenon as early as 1870.

“The reaction from some colleagues – when we first mentioned that people had seen haze in the late 1800s – was that it was crazy,” says Tim Garrett, assistant professor of meteorology and senior author of the study. “Who would have thought the Arctic could be so polluted back then? Our instinctive reaction is to believe the world was a cleaner place 130 years ago.”

The study will be published in the March 2008 issue of the Bulletin of the American Meteorological Society.

By searching through historic records written by early Arctic explorers, Garrett and his collaborator Lisa Verzella, former undergraduate student at the University of Utah, were able to find evidence of an aerosol “dry haze” that settled onto the ice to form a layer of grayish dust containing metallic particles. The haze and dust were likely the byproducts of smelting and coal combustion generated during the Industrial Revolution.

“We searched through open literature, including a report in the second issue of the journal Science in 1883 by the famous Swedish geologist Adolf Erik Nordenskiold, who was the first to describe the haze,” says Garrett. “We also looked through books describing Arctic expeditions that had to be translated from Norwegian and French.”

The historic accounts show that more than 130 years ago, the Industrial Revolution was “already darkening the snow and skies of the far North,” Garrett says.

History of Arctic Pollution

Garrett and Verzella say the first report of Arctic haze pollution usually is credited to U.S. Air Force meteorologist J. Murray Mitchell, who in 1957 described "the high incidence of haze at flight altitudes" during weather reconnaissance missions from Alaska over the Arctic Ocean during the late 1940s and 1950s.

Mitchell was credited in the 1970s by Glenn Shaw from the University of Alaska, Fairbanks, and his collaborators Kenneth Rahn and Randolf Borys, from the University of Rhode Island, who were the first to discover the haze contained high levels of heavy metals, including vanadium, suggestive of heavy oil combustion.

In a later study, Rahn and Shaw said: “Arctic haze is the end product of massive transport of air pollution from various mid-latitude sources to the northern polar regions, on a scale that could never have been imagined, even by the most pessimistic observer.”

Since humans had been generating aerosol pollution long before 1950 – namely, since sometime after the advent of the Industrial Revolution in the late 1700s – it made sense to Garrett that pollution generated from earlier times also might have made it to northern latitudes from Europe, Asia and North America.

“I thought that pollution had to be observed in the Arctic prior to 1950, so I decided to find out if that was true,” says Garrett. So he hired Verzella to search historic records to determine if there was written evidence of early Arctic pollution.

Verzella found a number of published reports from the late 1800s to early 1900s that mention a whitish haze in the sky, or a gray or black dust on the ice. But Nordenskiold “was the first to explicitly draw attention to the haze phenomenon” during his 1883 expedition to Greenland, the researchers concluded.

Even during an earlier expedition in 1870, Nordenskiold observed “a fine dust, gray in color, and, when wet, black or dark brown, is distributed over the inland ice in a layer which I should estimate at from 0.1 to 1 millimeter.”

He found that the dust contained “metallic iron, which could be drawn out by the magnet, and which, under the blowpipe, gave a reaction of cobalt and nickel.” He believed it to be a “cosmic dust” possibly from meteors. However, the concentration of metallic iron, nickel and cobalt made it much more likely that the origin was industrial pollution generated at mid-latitudes.

Last year, other researchers found that the dust is present in ice core samples. “Recent Greenland ice cores show a rapid rise in anthropogenic soot and sulfate that began in the late 1800s, but with peak sulfate levels in the 1970s, and peak soot between 1906 and 1910,” Garrett and Verzella say in their study. A higher composition of sulfate suggests oil combustion, while higher soot suggests coal combustion, consistent with the main sources of pollution generated in the 20th versus 19th centuries.

Early Arctic Warming

In a 2006 study, Garrett concluded that particulate pollution from mid-latitudes aggravates global warming in the Arctic. Did it do the same back in the 1800s?

“It is reasonable that the effect of particulate pollution on Arctic climate may have been greater 130 years ago than it is now, because during the Industrial Revolution, technologies were dirtier than they are now,” says Garrett. “Of course, today carbon dioxide emissions are greater and have accumulated over the last century, so the warming effect due to carbon dioxide is much greater today than 100 years ago.”

In fact, after fossil-fuel combustion became more efficient in the mid-1900s, the levels of particulate pollution in the Arctic dropped dramatically from levels earlier in the century. However, Garrett believes that we might be seeing another increase due to higher emissions from developing industrial countries such as China.

Fizzy water powered 'super' geysers on ancient Mars

David Shiga, Houston -- 16:15, 17 March 2008, New Scientist news service

Huge fountains of carbonated water once erupted on Mars, hurling hailstones and mud several kilometres into the air, a team of scientists says.

Geysers form in two ways on Earth. In some geysers, the water is blasted up into the air by steam escaping from underground, while in others, it is forced out by bubbles of carbon dioxide rushing to the surface. Both types can be spectacular; the Old Faithful geyser in Yellowstone National Park in Wyoming, US, for example, shoots scalding water about 45 metres into the air.

Now, scientists say they have found signs of ancient geysers on Mars that would have dwarfed those in Yellowstone. Towering a couple of kilometres above the surface, the Martian geysers would have rained hailstones and muddy water over an area several kilometres around.

The geysers erupted when bubbles of carbon dioxide forced underground water up to the surface through cracks, say scientists led by Alistair Bargery of Lancaster University in the UK.

"It's like when you shake up a Coke bottle and all the fizz comes up," team member Adam Neather told New Scientist.

The evidence for this appears at two sites on Mars where cracks hundreds of kilometres long called Mangala Fossa and Cerberus Fossae stretch across the surface. Both cracks are the starting points for broad channels that appear to have carried huge quantities of water – between 10 and 100 times the flow of the Amazon River.

Eroded slope

At least some of this water seems to have erupted onto the surface in the form of enormous geysers. At Cerberus Fossae, water appears to have reached an area that is uphill from the crack and several kilometres away, based on the erosion of a slope there. A powerful geyser must have erupted from the crack to transport the water all that way, says team member Lionel Wilson.

Near Mangala Fossa, muddy water from geysers apparently rained down onto the surface, creating mud flows. Later, when the water evaporated, only the sediment it carried would be left, forming ridged rock formations resembling lava flows.

Another possibility is that the sediment deposits are made of solidified lava, but the direction of the ridges appears to rule that out, Wilson says. When ridges form in lava flows, they are always perpendicular to the direction of flow, whereas the ridges in this case are parallel to it.

Based on the distance of the deposits from the cracks, Neather says the geysers must have thrown material outwards by at least 4 kilometres. The team calculates that Martian geysers could throw material up to twice this distance.

Deep water

The secret to the power of the Martian geysers is that their source water seems to have come from very deep below the surface. The overlying rock at Mangala Fossa appears to have slumped downwards when underground water pockets emptied out, and the amount of this slumping suggests the water pockets lay about 3 to 4 kilometres below the surface.

The pressure at such great depths means water would be able to hold large quantities of dissolved carbon dioxide, which may have come from underlying magma. Once a crack formed that connected the surface to the high-pressure water below, the water would have rushed upwards.

The resulting drop in pressure would have had the same effect as opening a shaken soda can. Expanding bubbles of CO2 would have caused the muddy water to shoot out in geysers at more than 400 kilometres per hour.

Weak gravity

The geysers probably turned on very quickly and rose to great heights, helped along by the Red Planet's weak gravity, which is just 38% as strong as Earth's. "Maybe in a few minutes it grows to 1 to 2 kilometres," Wilson told New Scientist.
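Those figures hang together arithmetically. As a back-of-the-envelope check (my illustration, not a calculation from the paper, which models drag and gas expansion), the quoted exit speed of 400 kilometres per hour under Mars's 38-percent gravity gives a drag-free ballistic height of roughly the 1 to 2 kilometres Wilson describes:

```python
# Drag-free ballistic estimate of geyser plume height (illustrative only;
# the published model is more sophisticated than this sketch).
g_earth = 9.81             # m/s^2
g_mars = 0.38 * g_earth    # ~3.7 m/s^2: 38% of Earth's gravity, per the article
v_exit = 400 / 3.6         # 400 km/h quoted in the article, converted to ~111 m/s

# Maximum height of a vertically launched droplet, ignoring air resistance:
# h = v^2 / (2g)
h_mars = v_exit**2 / (2 * g_mars)
h_earth = v_exit**2 / (2 * g_earth)

print(f"Mars:  {h_mars / 1000:.1f} km")   # ~1.7 km, within the quoted 1-2 km
print(f"Earth: {h_earth / 1000:.1f} km")  # the same jet reaches only ~0.6 km on Earth
```

The same exit speed thus carries material almost three times higher on Mars than it would on Earth, which is why the weak gravity matters so much here.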

The fountains could have carried on spouting for a month or two, Wilson says, spewing massive amounts of muddy water onto the surface. Water at the periphery of the geyser would have quickly frozen in the -70 °C Martian air, pelting the ground as hailstones.

Geologist John Dixon of the University of Arkansas in Fayetteville, US, agrees that the deposits could be the aftermath of a CO2 water fountain. "I think it's definitely a viable hypothesis," he told New Scientist. "It sure doesn't look like any terrestrial lava flow that I've seen."

Geysers appear to have been active relatively recently on the 4.6-billion-year-old planet. Previous studies have estimated the channel connected to Cerberus Fossae to be less than 20 million years old.

Neather presented the research last week at the Lunar and Planetary Science Conference in Houston, Texas, US.

The Neanderthal-Human Split: (Very) Ancient History

Jennifer Viegas, Discovery News

March 17, 2008 -- Neanderthals and humans once shared a common ancestor, but we split from the stocky, hairy hominid group as long as 400,000 to 350,000 years ago, concludes a new study.

That estimate matches prior DNA studies, putting a date to the time when human beings first emerged on the planet. But would these first humans have been anatomically just like us? Probably not, suggests lead author Timothy Weaver, an anthropologist at the University of California at Davis.

"Early fossils along this lineage are quite different from later ones," he told Discovery News.

Fast evolution, in fact, probably drove the initial Neanderthal/human divergence, which likely began as genetic drift -- random changes in DNA. As the two groups parted ways, their changing environments likely drove more substantial changes in body shape and size, in response to differing needs.

Weaver and colleagues Charles Roseman and Chris Stringer created a model to determine how long it would have taken genetic drift to create the cranial differences observed between Neanderthal and modern human skeletons.

The model used prior information on how microsatellites, aka "junk DNA," can change, or drift, over time in a species. Over time, those changes can accumulate enough for an entirely new species to evolve.

The researchers applied the model to 37 cranial measurements collected on 2,524 modern and 20 Neanderthal specimens. Their findings are published in this week's Proceedings of the National Academy of Sciences.
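The drift-clock logic behind such a model can be sketched in a few lines (a minimal toy, not Weaver and colleagues' actual method; the population size and trait variance below are arbitrary): under pure drift, a neutral trait's population mean performs a random walk, so the expected squared divergence between two isolated populations grows linearly with time, and that relation can be run backwards to date a split.

```python
# Minimal sketch of a drift clock: two populations' trait means random-walk
# apart, and expected squared divergence grows linearly with elapsed time.
import random

def drift_divergence(ne, generations, var_additive=1.0, reps=500, seed=1):
    """Mean squared divergence of a neutral trait between two isolated
    populations that have drifted apart for `generations` generations."""
    rng = random.Random(seed)
    sd = (var_additive / ne) ** 0.5   # per-generation drift in the mean
    total = 0.0
    for _ in range(reps):
        a = b = 0.0
        for _ in range(generations):
            a += rng.gauss(0.0, sd)
            b += rng.gauss(0.0, sd)
        total += (a - b) ** 2
    return total / reps

# expected divergence is 2 * generations * Va / Ne -- linear in time,
# which is what lets the observed divergence be converted into a date
for t in (100, 200, 400):
    print(t, round(drift_divergence(ne=1000, generations=t), 3))
```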

Now that scientists have a better idea of when Neanderthals split from humans, they can zero in on which species might have been our common ancestor. They do this mostly by process of elimination. Fossils found long before 400,000 years ago, such as the 800,000-year-old Atapuerca humans from Spain, are simply too old to represent the common ancestor.

"I support the concept of a widespread ancestral species, Homo heidelbergensis," Stringer, a paleontologist at the Natural History Museum of London, told Discovery News.

Neanderthal features began to emerge from Homo heidelbergensis just before 500,000 years ago. "Heidelberg Man" was muscular and tall, had a relatively large brain, and usually grew to heights of 6 feet or more. Markings on bones suggest the burly hominid dined on enormous animals, such as mammoths, rhinos and elephants, some of which weighed over 1,500 pounds.

Stringer thinks that since Neanderthals and humans split relatively early, "we may need to designate the earlier part [on the human side] as 'Archaic sapiens.'" That would allow researchers to account for the different types of human fossils that fall between the divergence date and the appearance of more modern-looking people in Africa around 50,000 years ago.

Osbjorn Pearson, an associate professor of anthropology at the University of New Mexico, recently conducted similar research on Neanderthals and humans. He told Discovery News that he fully agrees with the new findings.

"From their, and other scientists' previous research, it has become clear that many of the physical differences between human skulls are due to random genetic changes that make populations diverge over time," Pearson said.

"It is gratifying -- and, for many anthropologists, perhaps unexpected -- that the bones and genes tell the same story."

"The results also reinforce the conclusion that it is unlikely that Neanderthals...contributed substantially to the modern human gene pool."

Methane found on distant world

By Helen Briggs Science reporter, BBC News

A carbon-containing molecule has been detected for the first time on a planet outside our Solar System.

The organic compound methane was found in the atmosphere of a planet orbiting a star some 63 light years away.

Water has also been found in its atmosphere, but scientists say the planet is far too hot to support life.

The discovery, unveiled in the journal Nature, is an important step towards exploring new worlds that might be more hospitable to life, they say.

Methane, made up of carbon and hydrogen, is the simplest possible organic compound.

Under certain circumstances, methane can play a key role in prebiotic chemistry - the chemical reactions considered necessary to form life.

Scientists detected the gas in the atmosphere of a Jupiter-sized planet known as HD 189733b.

Co-author Giovanna Tinetti from University College London told BBC News: "This planet is a gas giant very similar to our own Jupiter, but orbiting very close to its star.

"The methane here, although we can call it an organic constituent, is not produced by life - it is way too hot there for life."

Stepping stone

Dr Tinetti, and co-authors Mark Swain and Gautam Vasisht, from Nasa's Jet Propulsion Laboratory in Pasadena, California, found the tell-tale signature of methane in the planet's atmosphere using the Hubble Space Telescope.

The observations were made as the planet passed in front of its parent star, as viewed from Earth. As the star's light passed briefly through the planet's atmosphere, the gases imprinted their chemical signatures on the transmitted light.

A method known as spectroscopy, which splits light into its components, revealed the chemical "fingerprint" of methane.
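A rough feel for the size of such a signal can be had from a back-of-envelope calculation (the numbers below are approximate literature values for the system, not the paper's measurements): at wavelengths where methane absorbs, the atmosphere is opaque out to a few extra scale heights, so the planet blocks slightly more starlight and the transit looks slightly deeper.

```python
# Back-of-envelope transmission spectroscopy: how much deeper is the
# transit in a methane band, if the atmosphere is opaque out to a few
# extra scale heights there? (Illustrative values, not the paper's.)
K_B = 1.381e-23      # Boltzmann constant, J/K
M_H = 1.673e-27      # mass of hydrogen atom, kg

r_star = 0.76 * 6.96e8    # m, approximate radius of HD 189733
r_planet = 1.14 * 7.15e7  # m, approximate radius of HD 189733b
t_atm = 1000.0            # K, rough atmospheric temperature
mu = 2.3                  # mean molecular weight (H2-dominated)
g = 21.0                  # m/s^2, rough surface gravity

h_scale = K_B * t_atm / (mu * M_H * g)      # atmospheric scale height
base_depth = (r_planet / r_star) ** 2       # transit depth outside the band
extra = 5 * h_scale                         # opaque annulus in a CH4 band
in_band_depth = ((r_planet + extra) / r_star) ** 2

print(f"scale height ~ {h_scale/1e3:.0f} km")
print(f"transit depth ~ {base_depth*100:.2f} %")
print(f"methane-band signal ~ {(in_band_depth - base_depth)*1e6:.0f} ppm")
```

The extra dimming comes out at a few hundred parts per million of the star's light, which is why a space telescope and careful spectroscopy are needed to see it.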

HD 189733b

* Located 63 light years from Earth, in the constellation Vulpecula, the little fox
* About the size of Jupiter, but orbits closer to the parent star in its solar system than Mercury does in our own
* Temperatures reach 900 degrees C, about the melting point of silver

The researchers also confirmed a previous discovery - made by Nasa's Spitzer Space Telescope - that the atmosphere of HD 189733b also contains water vapour.

It shows that Hubble, Spitzer and a new generation of space telescopes yet to be launched can detect organic molecules on other extrasolar planets using spectroscopy, they say.

Dr Swain said: "This is a crucial stepping stone to eventually characterising prebiotic molecules on planets where life could exist."

Dr Tinetti said the technique could eventually be applied to extrasolar planets that appear more suitable for life than HD 189733b.

She said: "I definitely think that life is out there. My personal view is it is way too arrogant to think that we are the only ones living in the Universe."

Real worlds

The number of known planets orbiting stars other than our own now stands at about 270.

For most of them, scientists know little more than the planet's mass and orbital properties.

Adam Showman of the Department of Planetary Sciences at the University of Arizona, US, said scientists were finally starting to move beyond simply discovering extrasolar planets to truly characterising them as worlds.

Dr Showman, who was not part of the study, said: "The discovery does not by itself have any direct implications for life except that it proves a technique which might potentially be useful for characterising the atmosphere of rocky planets when we finally start discovering them."

Excitement about finding other Earth-like planets is driven by the idea that some might contain life; or that perhaps, centuries from now, humans might be able to set up colonies on them.

The key to this search is the so-called "Goldilocks zone", the region of space in which a planet sits just the right distance from its parent star, so that its surface is neither too hot nor too cold to support liquid water.

Blue LEDs to reset tired truckers' body clocks

* 12:19 18 March 2008

* news service

* Max Glaskin

Eerie blue LEDs in truck cabs and truck stops could be the key to reducing accidents caused by drowsy drivers, say US researchers. They say bathing night drivers in the right light can increase their alertness by resetting their body clocks.

The scientists at Rensselaer Polytechnic Institute, New York, are testing blue LEDs that shine light at particular wavelengths that convince the brain it is morning, they say, resetting the body's natural clock.

That could help reduce the number of accidents that occur when people drive through the night. Nearly 30% of all fatal accidents involving large trucks in the US happen during the hours of darkness, according to a recent report by the Federal Motor Carrier Safety Administration, while fatigue causes half of all truck accidents in the early hours on UK motorways.

Wakey wakey

"The concept of using light to boost alertness is well established [in other areas]," says Mariana Figueiro, co-author of a new white paper published by the institute's lighting research centre.

"Translating that understanding into a practical application is the next challenge." Drivers could take 30-minute "light showers" in truck stops fitted with similar lights, or the lights could be fitted into truck cabs.

Figueiro is currently investigating how the blue light affects daytime alertness of sleep-deprived and non-sleep-deprived subjects. "These findings will also be applicable to transportation applications, since the accident rates during the afternoon hours are still higher than in the morning hours," says Figueiro.

Results so far show a clear effect on the brain activity of test subjects of both kinds, she adds. "After 45 minutes there is a clear effect," says Figueiro. "You start to see a beautiful increase in brain activity in the 300 milliseconds response, which is a measure of alertness." The current test box emits diffuse light at 470 nanometres, with an intensity of 40 lux when measured at the eye.

Light work

Figueiro plans experiments on a driving simulator using different light spectra, of 450 and 470nm, and intensities of 2.5, 5 and 7.5 lux, to see which combination works best without obscuring the driver's view of the road.

An alternative is to build goggles with blue LEDs for the driver to wear before setting off. Figueiro is already designing such equipment for people with Alzheimer's that will change their circadian rhythms to reduce their nocturnal alertness and help them to sleep at night.

Car manufacturers already market systems to warn or wake drowsy drivers. They use measures of eye movements, blink rates or small steering-wheel movements to tell if a driver is losing alertness. But preventing drowsiness in the first place would be more effective.

Jim Horne, director of the sleep research centre at Loughborough University, UK, says changing the body's clock is possible, but difficult in short periods. "Shifting it by eight hours takes at least 10 days, and very few people are capable of doing that," he says.

Punishment does not earn rewards or cooperation, study finds

'Winners don’t punish,' say the authors of a forthcoming Nature paper

CAMBRIDGE, Mass. -- Individuals who engage in costly punishment do not benefit from their behavior, according to a new study published this week in the journal Nature by researchers at Harvard University and the Stockholm School of Economics.

The group, led by Martin A. Nowak of Harvard's Program for Evolutionary Dynamics, Department of Mathematics, and Department of Organismic and Evolutionary Biology, examined cooperation among subjects playing a modified version of the Prisoner's Dilemma. This game captures the fundamental tension between the interests of the individual and the group, and is the classic paradigm for cooperation. The study found that the use of punitive behavior correlates strongly with reduced individual payoff, and bestows no benefit on the group as a whole.

"Put simply, winners don’t punish," says co-author David G. Rand of Harvard's Program for Evolutionary Dynamics and Department of Systems Biology. "Punishment can lead to a downward spiral of retaliation, with destructive outcomes for everybody involved. The people with the highest total payoffs do not use costly punishment."

"Costly punishment," the type of punitive behavior studied by Nowak and his colleagues, refers to situations where a punisher is willing to incur a cost in order to penalize someone else. Other researchers have suggested that costly punishment can compel cooperation in one-time interactions where individuals need not worry about reputation or retaliation -- a scenario Nowak and his colleagues found unrealistic, since, as they write, "most of our interactions are repeated and reputation is always at stake."

"There's been a lot of previous work on the use of punishment in cooperation games, but the focus has not been on situations where individuals use punishment in the context of ongoing interactions," says co-author Anna Dreber of the Stockholm School of Economics and the Program for Evolutionary Dynamics at Harvard. "We make the setting more realistic by having subjects play repeated games and introducing costly punishment as one of several options."

Dreber, Rand, Nowak, and Drew Fudenberg of Harvard's Department of Economics recruited 104 Boston-area college students to participate in a computer-based Prisoner's Dilemma game that was extended to include costly punishment alongside the usual options of cooperation and defection. Pairs of students played the game repeatedly so the interaction between costly punishment and reciprocity could be assessed.

The result: There is a strong negative correlation between individual payoff and the use of costly punishment. The five top-ranked players never used costly punishment, while players who earned the lowest payoffs tended to punish most often. Winners used a tit-for-tat like strategy while losers used costly punishment. Furthermore, costly punishment did not increase the average payoff of the group.
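The dynamic can be illustrated with a small simulation (the payoff values and strategies below are my own assumptions for the sketch, not the exact parameters of the Harvard experiment): against a partner who occasionally defects by mistake and retaliates when punished, answering defection with tit-for-tat defection earns more than answering it with costly punishment, which triggers the downward spiral the authors describe.

```python
# Toy repeated Prisoner's Dilemma with a costly-punishment option.
# Assumed payoffs: cooperate = -1 self / +2 partner; defect = +1 / -1;
# punish = -1 self / -4 partner. The partner here cooperates, defects
# by mistake with some probability, and retaliates once if punished.
import random

COOPERATE, DEFECT, PUNISH = "C", "D", "P"

PAYOFF = {                # (my_payoff, partner_payoff) for my move
    COOPERATE: (-1, +2),
    DEFECT:    (+1, -1),
    PUNISH:    (-1, -4),
}

def play(my_reply_to_defection, rounds=50, noise=0.1, reps=2000, seed=7):
    """Average total payoff against a noisy cooperator who defects with
    probability `noise` and defects once in reply to being punished."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        partner_angry = False
        my_last_seen = COOPERATE
        for _ in range(rounds):
            partner_move = DEFECT if (partner_angry or rng.random() < noise) else COOPERATE
            partner_angry = False
            my_move = COOPERATE if my_last_seen == COOPERATE else my_reply_to_defection
            me, _ = PAYOFF[my_move]
            _, to_me = PAYOFF[partner_move]
            total += me + to_me
            if my_move == PUNISH:
                partner_angry = True   # punishment provokes retaliation
            my_last_seen = partner_move
    return total / reps

print("tit-for-tat payoff:", round(play(DEFECT), 1))
print("punisher payoff:   ", round(play(PUNISH), 1))
```

In this sketch the tit-for-tat player recovers after each stray defection, while the punisher locks into a punish-retaliate spiral and ends up far behind, mirroring the pattern the study reports.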

The study shows that punishment is not an effective force for promoting cooperation. The unfortunate human tendency to engage in acts of costly punishment may instead have evolved for other reasons, such as establishing dominance hierarchies or defending ownership, rather than to promote cooperation. In cooperation games, costly punishment is a detrimental and self-destructive behavior.

"Punishment may be a tool for forcing another person to do what you want," Dreber says. "It might have been for those kinds of dominance situations that the use of punishment has evolved."

"Our finding has a very positive message: In an extremely competitive setting, the winners are those who resist the temptation to escalate conflicts, while the losers punish and perish," concludes Nowak.

This study was supported by the John Templeton Foundation, the National Science Foundation, the National Institutes of Health, the Jan Wallander Foundation, and J. Epstein.

Woodburn, Ore.: a microcosm of immigrant shifts in America

University of Oregon geographer details economic and social changes in communities receiving immigrants

Travelers on I-5 know that Woodburn, Ore., is home to the region's largest tax-free outlet center. A University of Oregon researcher, however, turns away from the mall to study the heart of town, which, she says, provides insight into how new immigrant settlement patterns are transforming place and identity in small- to medium-sized U.S. cities.

Details of the research by Lise Nelson, professor of geography, appeared in two recent journals, Geographical Review (October 2007) and Cultural Geographies (January 2008). The former examined migrant farmworkers and community relationships as they transitioned from a migratory workforce in isolated labor camps to having year-round roles in the economy and becoming permanent residents. The latter follows the friction between an advocacy group's efforts to build new housing in the 1990s and resistance from mostly white residents and city officials.

Many of the changes detailed were fueled by globalization in the 1980s, Nelson said. Mexico faced an economic crisis, the U.S. economy became service-oriented and created a demand for low-wage workers, and the Immigration Reform and Control Act of 1986 allowed millions of undocumented workers with long employment histories to become legal workers. These events, in turn, allowed more family members to migrate and join the workers. The dynamics expanded already well-established labor flows between Mexico and the United States.

Economic changes in the northern Willamette Valley in the 1980s also contributed to increasing numbers of immigrant farmworkers arriving and settling in Woodburn. The expansion of the greenhouse and nursery industry, agricultural processing plants, the Christmas tree industry and a transition to immigrant tree planters in public and private reforestation activities combined to create nearly year-round demand for immigrant workers, mostly from Mexico.

While Nelson's research is on Woodburn, a city of 20,000 people just south of Portland, similar changes occurred in nearby Gervais and Canby and many other non-metropolitan cities. The 2000 census found Woodburn to be the largest Oregon city with a majority population of Latinos.

"Woodburn is a place that represents a microcosm of the broader-scale migration and settlement dynamics that are changing small- and medium-sized towns throughout the United States," Nelson said. "Woodburn's farmworker housing struggle in the 1990s offers a window into the shifting dynamics of belonging and identity in these contexts.

"The housing struggle reflected a deep resistance on the part of some white residents to the presence of Mexican immigrants, yet today we see, at least on an official level, a more active embracing of Woodburn's multicultural identity. A few years ago Woodburn inaugurated, as its first urban renewal project, a downtown plaza, designed in a Latin-American style," Nelson said. "For several years now the city has helped organize a community celebration of Mexican Independence Day. This is not to say the picture is all rosy, as racism and discrimination against immigrant residents have not disappeared, but there have been public and visible changes."

Nelson collected data from archived newspaper articles, public records and personal interviews done in English and Spanish. Her research follows shifting politics and immigration, as well as economic changes that drive both. She has done extensive research in Mexico, especially in Michoacán, within migrant-sending communities.

Mexican workers came in large numbers to the northern Willamette Valley in the 1940s under the U.S.-sponsored Bracero Program to alleviate World War II labor shortages. The workers often lived in cramped, ill-equipped labor camps. By the 1950s and 1960s, most farmworkers were Mexican-American citizens coming from border areas on a seasonal basis. The rural labor force shifted again by the late 1970s, when large numbers of workers again began arriving from Mexico. By the 1990s, the trend saw more immigrants seeking employment in smaller cities rather than large gateway cities such as Los Angeles and Chicago.

During the 1980s, as farmworkers sought housing in Woodburn, Nelson found, they often were crowded into single-family housing units or lived in garages and cars. Landlords often charged for entire families to live in one room; multiple families shared bathrooms, living rooms and kitchens. Overcrowding created unsafe conditions, fostered social tensions and led to housing decay. Few residents were pleased, Nelson noted. Longtime residents, both white and Mexican-American, reported plummeting living conditions, and immigrant families were concerned about the effects on their children and family life.

In response, a coalition of advocacy groups formed the Farmworker Housing Development Corp. (FHDC) in 1991 to build safe and affordable housing. With bank loans and grants, FHDC sought to take over a failed Housing and Urban Development-funded site to build an apartment complex with rents scaled by income. Although this appeared to be a win-win situation, Nelson said, the city, which was forced to foreclose on the property after a private developer went bankrupt, resisted the proposal for two years before giving in to avoid paying $245,000 to the government.

Nelson's study provides insight to the battle. FHDC eventually prevailed and opened Phase 1 of the Nuevo Amanecer (New Dawn) complex in August 1994. "Nuevo Amanecer created living space for farmworkers that contrasted sharply with traditional farmworker housing," Nelson noted in Geographical Review. "It enacted a spatial claim to place and belonging in the community for farmworkers who had historically been relegated to the labor camp. FHDC staff worked with residents to generate rules governing the complex, from security to garbage-collection schedules."

FHDC's efforts to build another complex also met with resistance. In 1995 FHDC purchased, to the city's surprise, an abandoned lot near Woodburn City Hall. Again, the city balked and stalled its approval, but, again, FHDC won and opened Esperanza (Hope) Court in October 1997. The FHDC later won awards for its design and operation of the complexes.

"I talked to some residents in Woodburn who had originally opposed the housing projects," Nelson recalled. "They said that they thought there would be gangs, more trash and more problems. Instead, they found them to be well run and a nice place for families -- with a lot of participation by residents. It is seen by many as a really innovative and successful program." The September 2005 dedication of the downtown plaza, she added, "indicated a shift in who is seen as belonging in the community, and the nature of the town's 'place identity' itself."

"Woodburn's housing struggle," she said, "offers a window into the shifting dynamics of belonging and identity between white residents and Latinos, including Mexican-American and Mexican immigrants. These inter-group dynamics are now more accommodating, more understanding and more accepting of differences, even though not all racial tensions are gone."

The Woodburn Area Chamber of Commerce proclaims the city's diversity on its Web site, noting the city has "grown up a lot," is one-half Hispanic, one-fifth Russian and one-quarter senior citizen. "People of all ages and all cultures have come together to know Woodburn as the City of Unity, a place where they can celebrate their differences and share their cultural heritage," the site says.

In the Cultural Geographies paper, Nelson concludes that "the political and economic power structures remain overwhelmingly white ... But constructions of place identity and the public sphere in Woodburn have become decidedly more pluralistic, partly, I think, as a result of the successful struggles such as those to build Nuevo Amanecer and Esperanza Court." She predicts that over time the town's power structure will become more pluralistic as well.

Need New Look? Online Makeover is fan-taaz-tic

San Diego, CA, March 18, 2008 -- Thanks to a Jacobs School startup company whose site, Taaz.com, went live today, the cosmetics counter isn't the only place to try out the latest makeup trends. The new way is easier, faster, and much more private. Anyone with a digital photograph can now apply more than 4,000 makeup products with the click of a mouse. It's all at Taaz.com - the creation of two Jacobs School computer scientists turned entrepreneurs.

The computer scientists invented an algorithm for separating gloss from non-gloss in digital images - a technical feat crucial for Taaz's patented approach to applying photorealistic makeup to images. It is also useful for more traditional computer vision applications like face recognition.

Taaz is easy and free. Simply upload a portrait-style photograph and a computer vision system automatically identifies your eyes, nose, lips and cheeks. From here, you can apply thousands of makeup products from a wide range of brands to your digital portrait and experiment with new hairstyles and colored contacts. After trying a prototype, one user went home and bought a new pair of colored contacts, says David Kriegman, a UCSD computer science professor and co-founder.

Once you create a new look, you can share it with friends, post the picture in Taaz's public gallery or upload it to social networking sites. To make shopping easier, you can print a list of what you tried on at Taaz.com.

"With Taaz, we take something very complicated -- giving digital portraits a photorealistic makeover -- and make it very easy," says Satya Mallick, a co-founder with a fresh Ph.D. in electrical engineering from UCSD.

Mallick's dissertation focused on the gloss removal algorithm that led to Taaz.

Kriegman, Mallick and third co-founder Kevin Barnes secured venture funding from iSherpa Capital in August 2007. They hired three recent UCSD grads. Kriegman credits the Jacobs School's von Liebig Center, UCSD's Technology Transfer and Intellectual Property Services, and CONNECT's Springboard program for helping the team develop and spin out their business, license the core technology, and secure venture capital.

Why men should pair off with younger women

* 00:01 19 March 2008

* news service

* Colin Barras

Mick Jagger, Rupert Murdoch and Michael Douglas all have the right idea, evolutionarily speaking. Statistics show that monogamous men have the most children if they marry women younger than themselves. How much younger is the key question.

Last year, a study of Swedish census information suggested a 4 to 6-year age gap is best, but new research has found that in some circumstances a surprisingly large gap – 15 years – is the optimum.

Martin Fieder at the University of Vienna and Susanne Huber of the University of Veterinary Medicine, also in Vienna, Austria, studied the Swedish data and found that a simple equation related the age difference of the parents to the number of offspring. For people who had maintained monogamous relationships throughout adulthood, the most children were found in couples where the man was 4.0 to 5.9 years older than the woman.

The probable reasons behind this state of affairs are not controversial: "Men want women younger than themselves because they are physically attractive," says Fieder, while women tend to prioritise a partner who can provide security and stability, and so tend to opt for older men.

Mum’s the word

However, Fieder and Huber's calculations drew criticism. For example, Erik Lindqvist at the Research Institute of Industrial Economics in Stockholm, Sweden, pointed out that the age of the mother is likely to be more important than any age difference: the older the mother, the lower her chances of having more children.

"We added that factor into the calculation," says statistician Fred Bookstein at the University of Washington, a colleague of Fieder and Huber. "The importance of the age difference didn't change."

Even if it holds true for Sweden, the 4 to 6-year age gap is unlikely to be optimal in all cultures. Samuli Helle at the University of Turku in Finland read Fieder and Huber's paper and says it stirred memories of an unpublished study he conducted a few years ago.

Cultural differences

"In 2001, I studied the demographics of the Sami people of northern Finland," he says. "I had thought I had missed the opportunity to publish, but when I saw the Fieder and Huber paper I thought: why not write a response?"

Helle’s team performed a similar calculation to Fieder and Huber's, using the demographic data from the 17th to 19th centuries that Helle had already collected from northern Finland. For the Sami people, they found that males with 15 years on their partners had the most children.

"I don't know why the optimal age differences were so much bigger among the Sami people, but it might be related to culture," says Helle, noting that the Sami were nomadic reindeer hunters. "Perhaps those huge lifestyle differences are important."

Journal references:

Fieder and Huber's original paper: Biology Letters, DOI: 10.1098/rsbl.2007.0324; Lindqvist's response: Biology Letters, DOI: 10.1098/rsbl.2007.0514; Fieder et al.'s reply to Lindqvist: Biology Letters, DOI: 10.1098/rsbl.2007.0567; Helle's paper: Biology Letters, DOI: 10.1098/rsbl.2007.0538

Two-week-old blood no good for transfusions

* 21:00 19 March 2008

* news service

* Michael Day

The common practice of storing blood for more than two weeks could be proving fatal for thousands of heart surgery patients, according to a major study.

Doctors at the Cleveland Clinic in Ohio have found that patients who receive blood that is more than 14 days old are nearly two-thirds more likely to die than those who get newer blood.

The survey of more than 9000 heart surgery patients also suggests that recipients of older blood are more at risk from blood poisoning and organ failure.

The conclusion? "Blood should be classified as outdated earlier than current recommendations," says lead researcher Colleen Koch.

Koch's team note that in the US the average age of transfused blood is more than two weeks - and that around half of all heart surgery patients receive blood transfusions.

New measures are urgently needed, say the researchers, to prevent unnecessary deaths among this large and vulnerable group of patients.

Previous studies have suggested that transfusions increase the risk of death and serious complications. This latest study suggests the age of the blood used is a major factor.

Broken blood

On the basis of earlier laboratory studies, Koch speculates that, at the two-week stage, stored red blood cells begin to break down. This, she says, may make them more likely to block blood vessels while reducing their capacity to carry oxygen.

Her team studied the medical records of patients who received major heart surgery at the Cleveland Clinic between June 1998 and January 2006.

A total of 2,872 patients received blood that had been stored for 14 days or less, and 3,130 patients received blood that was more than 14 days old.

The mean storage age was 11 days for the newer blood and 20 days for the older blood.

In-hospital mortality was significantly higher among those who received older blood: 2.8% compared to 1.7%.

The researchers also found that death rates a year on were nearly half as high again in the patients who had received older blood, compared to those who received newer blood. 11% of the patients who had received older blood had died a year later, compared to 7.4% of those who received newer blood. Both sets of patients received the same volume of blood.
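The phrases "nearly two-thirds more likely" and "nearly half as high again" can be checked directly against the percentages quoted in the article:

```python
# Verify the article's relative-risk wording against its own figures.
def relative_increase(older, newer):
    """Fractional excess risk of the older-blood group over the newer."""
    return older / newer - 1

in_hospital = relative_increase(2.8, 1.7)   # 2.8% vs 1.7% in-hospital deaths
one_year = relative_increase(11.0, 7.4)     # 11% vs 7.4% deaths at one year

print(f"in-hospital excess risk: {in_hospital:.0%}")   # ~65%, nearly two-thirds
print(f"one-year excess risk:    {one_year:.0%}")      # ~49%, nearly half again
```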

"This research suggests that the longer transfused blood has been stored, the greater the risk of complications following cardiac surgery," says Peter Weissberg of the UK charity, the British Heart Foundation. He says that research last year indicated that, in many heart surgery patients, transfusions did more harm than good.

"Together, these studies suggest that only those heart patients whose lives are at serious risk without a transfusion should receive blood," Weissberg says.

"Further research is urgently needed to clarify the indications for transfusion and the effects of blood storage on outcome."

There are around 30,000 heart operations every year in the UK, and over 100,000 in the US.

Journal refs: New England Journal of Medicine, vol 358, p1229; Circulation, vol 116, p2544

Floating a big idea: MIT demos ancient use of rafts to transport goods

Written by David Chandler, MIT News Office

CAMBRIDGE, Mass.--Oceangoing sailing rafts plied the waters of the equatorial Pacific long before Europeans arrived in the Americas, and carried trade goods for thousands of miles all the way from modern-day Chile to western Mexico, according to new findings by MIT researchers in the Department of Materials Science and Engineering.

Details of how the ancient trading system worked more than 1,000 years ago were reconstructed largely through the efforts of former MIT undergraduate student Leslie Dewan, working with Professor of Archeology and Ancient Technology Dorothy Hosler, of the Center for Materials Research in Archaeology and Ethnology (CMRAE). The findings are being reported in the Spring 2008 issue of the Journal of Anthropological Research.

The new work supports earlier evidence documented by Hosler that the two great centers of pre-European civilization in the Americas (the Andes region and Mesoamerica) had been in contact with each other and had longstanding trading relationships. That conclusion was based on an analysis of very similar metalworking technology used in the two regions for items such as silver and copper tiaras, bands, bells and tweezers, as well as evidence of trade in highly prized spondylus-shell beads.

Early Spanish, Portuguese and Dutch accounts of the Andean civilization include descriptions and even drawings of the large oceangoing rafts, but provided little information about their routes or the nature of the goods they carried.

In order to gain a better understanding of the rafts and their possible uses, Dewan and other students in Hosler's class built a small-scale replica of one of the rafts to study its seaworthiness and handling, and they tested it in the Charles River in 2004. Later, Dewan did a detailed computer analysis of the size, weight and cargo capacity of the rafts to arrive at a better understanding of their use for trade along the Pacific coast.

“It's a nontrivial engineering problem to get one of these to work properly,” explained Dewan, who graduated last year with a double major in nuclear engineering and mechanical engineering. Although the early sketches give a general sense of the construction, it took careful study with a computerized engineering design program to work out details of dimensions, materials, sail size and configuration, and the arrangement of centerboards. These boards were used in place of a keel to prevent the craft from being blown to the side, and also provided a steering mechanism by selectively raising and lowering different boards from among two rows of them arranged on each side of the craft.

Although much of the raft design may have seemed familiar to the Europeans, some details were unique, such as masts made from flexible wood so that they could be curved downward to adjust the sails to the strength of the wind, the centerboards used as a steering mechanism, and the use of balsa wood, which is indigenous to Ecuador.

Dewan also analyzed the materials used for the construction, including the lightweight balsa wood used for the hull. Besides having to study the aerodynamics and hydrodynamics of the craft and the properties of the wood, cloth and rope used for the rafts and their rigging, she also ended up delving into some biology. It turns out that one crucial question in determining the longevity of such rafts had to do with shipworms: how quickly and under what conditions would they devour the rafts? And were shipworms always present along that Pacific coast, or were they introduced by the European explorers?

Shipworms are molluscs that can be the width of a quarter and a yard long. “Because balsa wood is so soft, and doesn't have silicates in it like most wood, they are able to just devour it very quickly,” Dewan said. “It turns into something like cottage cheese in a short time.”

That may be why earlier attempts to replicate the ancient rafts had failed, Dewan said. After construction, those replicas were allowed to sit near shore for weeks before the test voyages. “That's where the shipworms live,” Dewan said. “One way to avoid that is to minimize the amount of time spent in harbor.”

Dewan and Hosler did a simulation of the amount of time it would take for shipworms to eat one of the rafts and concluded that with proper precautions, it would be possible to make two round-trip voyages from Peru to western Mexico before the raft would need replacing.

The voyages likely took six to eight weeks, and the trade winds only permit the voyages during certain seasons of the year, so the travelers probably stayed at their destination for six months to a year each trip, Dewan and Hosler concluded. That would have been enough time to transfer the detailed knowledge of specific metalworking techniques that Hosler had found in her earlier research.

While Hosler's earlier work had shown a strong likelihood that there had been contact between the Andean and Mexican civilizations, it took the details of this new engineering analysis to establish that maritime trade between the two regions could indeed have taken place using the balsa rafts. “We showed from an engineering standpoint that this trip was feasible,” Dewan said. Her analysis showed that the ancient rafts likely had a cargo capacity of 10 to 30 tons, about the same capacity as the barges on the Erie Canal that were once a mainstay of trade in the northeastern United States.

Hosler said the analysis is “the first paper of its kind” to use modern engineering analysis to determine design parameters and constraints of an ancient watercraft and thus prove the feasibility of a particular kind of ancient trade in the New World. And for Dewan, it was an exciting departure from her primary academic work. “I just loved working on this project,” she said, “being able to apply the mechanical engineering principles I've learned to a project like this, that seems pretty far outside the scope” of her work in nuclear engineering.

Most Republicans think the US health care system is the best in the world; Democrats disagree

All political groups agree the US lags in providing affordable care and controlling costs

Boston, MA - A recent survey by the Harvard School of Public Health (HSPH) and Harris Interactive, as part of their ongoing series, Debating Health: Election 2008, finds that Americans are generally split on the issue of whether the United States has the best health care system in the world (45% believe the U.S. has the best system; 39% believe other countries have better systems; 15% don’t know or refused to answer) and that there is a significant divide along party lines. Nearly seven in ten Republicans (68%) believe the U.S. health care system is the best in the world, compared to just three in ten (32%) Democrats and four in ten (40%) Independents who feel the same way.

This poll was conducted during a period of debate over the comparative merits of the U.S. health care system and the health care systems in other countries. President Bush and other prominent political figures have claimed that the U.S. has the best system in the world. At the same time, the World Health Organization and other organizations have ranked the U.S. below many other countries in their comparisons, while Michael Moore presented a similarly negative assessment of the U.S. health system in a popular format with his film Sicko.

So how might this issue impact how Americans vote in the upcoming presidential election? When asked if they would be more likely to support or oppose a presidential candidate who advocates making the U.S. health care system more like health systems in other countries, specifically Canada, France, and Great Britain, only one in five (19%) Republicans say they would be more likely to support such a candidate. This is compared to more than half (56%) of Democrats and more than a third of Independents (37%) who say they would be more likely to support such a candidate.

Though many Americans view the health care systems of other countries as better than the U.S. in general, the survey shows that they do not identify as better those countries that have been most frequently compared to the U.S. In head-to-head comparisons with health care systems in Canada, France and Great Britain, a large percentage of Americans are not sure how the U.S. compares overall. Over half (53%) of Americans say they don’t know how the U.S. generally compares to France and four in ten (40%) say they don’t know if the U.S. system is better or worse than Great Britain’s. A quarter (26%) are not sure how the U.S. health care system compares to the Canadian system.

The view that the U.S. health care system lags other countries seems largely driven by the view that the U.S. is behind in controlling health care costs and providing affordable access to everyone. In comparing how the U.S. stacks up against other countries in specific areas, a slim majority of Americans believe that the U.S. health care system is better in terms of the quality of care patients receive (55% believe the U.S. is better than other countries) and shorter waiting times to see specialists or be admitted to the hospital (53% believe the U.S. is better than other countries). However, very few believe that the U.S. has the edge when it comes to providing affordable access to everyone (26% believe the U.S. is better than other countries) and controlling health care costs (21% believe the U.S. is better than other countries).

Once again, there are contrasts in how Republicans view the United States’ standing on these elements and how Democrats and Independents rate the U.S. As an example, four in ten (40%) Republicans believe the U.S. health care system is better than other countries when it comes to making sure everyone can get affordable health care, compared to just one in five Democrats (19%) and Independents (22%) who share that belief. On each of the four elements tested, Independents are within a few percentage points of agreement with Democrats, and both are significantly separated from Republicans.

“The health care debate in this election involves starkly different views of the U.S. health care system,” says Robert J. Blendon, Professor of Health Policy and Political Analysis at the Harvard School of Public Health. “One party sees it as lagging other countries across a broad range of problem areas while the other party sees the system as the best in the world with a more limited range of problems.”

[reporters: Full survey results available upon request]

Methodology

This survey is part of the series, Debating Health: Election 2008. The series focuses on current health issues in the presidential campaign. The survey design team includes Professor Robert Blendon, Tami Buhr, John Benson and Kathleen Weldon of the Harvard School of Public Health; and Humphrey Taylor, Scott Hawkins and Justin Greeves of Harris Interactive.

This survey was conducted by telephone within the United States among a nationwide cross section of adults aged 18 and over. The survey was conducted from March 5 to 8, 2008 among a representative sample of 1026 respondents. Figures for age, sex, race/ethnicity, education, region, number of adults in the household, size of place (urbanicity) and number of phone lines in the household were weighted where necessary to bring them into line with their actual proportions in the population.

All sample surveys and polls are subject to multiple sources of error including sampling error, coverage error, error associated with nonresponse, error associated with question wording and response options, and post-survey weighting and adjustments. The sampling error for both polls is +/- 3.0% in 95 out of 100 cases for results based on the entire sample. For results based on a smaller subset, the sampling error is somewhat larger.
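The ±3.0% figure quoted above follows from the standard worst-case margin-of-error formula for a simple random sample. As a rough sketch (the function name and rounding are ours, not the pollsters'):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) sampling error at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 1026 respondents, as in the survey described above
print(f"+/- {margin_of_error(1026) * 100:.1f} percentage points")
```

The raw value is about 3.1 points before rounding; published figures are often stated as 3.0, and, as the release notes, estimates for subgroups carry larger error (a smaller n in the denominator).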

Life’s Work

Clicking, at Last, on ‘Don’t Print’

By LISA BELKIN

WITH this column, I am relearning how to write.

Until last week I collected interview notes and e-mail exchanges and Web downloads on my computer, then printed and sorted, underlined and typed until I had a column. Something about holding papers, and rearranging them, fired up my brain.

But this week, when it came time to press “print” on my dozens of pages of data, I froze. All that wasted paper. And ink. And electricity. Weren’t computers supposed to have led to the paperless office? Why was my desk piled with byproducts of felled trees?


So here I am, electronically cutting and pasting — and being part of a trend. Green is to this decade’s workplace what flexible hours were to the last.

Pick a company, and you are increasingly likely to find a plan. Some are ambitious, building energy-saving factories from renewable recyclable materials. Some are modest, aiming to change the behavior of individual workers.

That last part, which seems as if it should be the easiest, is meeting some resistance. A poll for Randstad USA by Harris Interactive found that over all, 77 percent of the 2,079 respondents said they recycle, but only 49 percent said they do so at work. In the survey, conducted from Jan. 17 to 21 online, 93 percent reported turning off lights and computers when they leave home. But only 50 percent flip the off switch when they leave work for the night.

In spite of this ambivalence, some companies are mandating double-sided copying, paper recycling or electronic faxes only. Others save electricity by installing timers on lights and automating computer shutdowns.

Businesses are even subsidizing gasoline conservation. At the RSUI Group, an insurance underwriter in Atlanta, an employee who relinquishes a parking pass and takes mass transit or joins a van pool is paid $60 a month. Workers at the Meradia Group, a Philadelphia business consulting concern, are given $5,000 if they buy a hybrid car. Meradia also puts blue recycling bins under desks so employees don’t have to walk across the room to deposit an empty water bottle.

Unsurprisingly, water bottles are under attack. Last year, the Boston law firm Ropes & Gray stopped buying them for office meetings, replacing them with pitchers and glasses. That conserved 4,000 bottles in the first month alone.

Coffee cups are on the radar screen, too, although there seems to be some disagreement over what is actually best for the environment. At the 233 offices of the advertising company Euro RSCG Worldwide, paper has been replaced with glass, to cut down on landfill. At the Skyline Downtown Salon in Kansas City, Mo., glass has been replaced with paper, so the dishwasher doesn’t have to be run more than once a day. And the 9,000 employees of the Capital Group Companies, a financial management firm, use disposable cups made from biodegradable cornstarch, and plates and bowls made of sugar cane.

Not all the green initiatives come from management. Many are driven by determined employees, waving their recyclable pompoms and cheering on the laggards.

Andrew Granchelli, for instance. Although his employer, Newman Communications, in Brighton, Mass., professed interest in recycling, he said, it claimed to be hamstrung by the owners of its office building, who refused to put out separate containers for paper, plastic and glass.

So Mr. Granchelli and a co-worker, Rachel Rausch, put bins in a central part of the office. Roughly once a week, Mr. Granchelli puts the soup cans, soda bottles and plastic iced-tea containers in his car trunk, then adds them to his household recycling. Ms. Rausch takes at least a box of paper out to her car daily and, during her weekly grocery shopping, puts it in bins provided by a company that raises money for charity by recycling.

Employees at LPA Inc., an architecture firm in Irvine, Calif., take home more than bottles and paper. The more than 100 employees participate in an on-site composting program, and trash bins are provided for food waste that appeals to worms (coffee grounds, for example). The waste goes into a “worm habitat” in the office, and the “worm wrangler” (a volunteer) cares for them and divides the resulting compost among employees to use in their gardens.

The saving of the planet is also employee-driven at I Love Rewards, a Toronto company that develops employee productivity incentives. Amy Cole leads the company’s social responsibility committee, which in the last six months has created a recycling program and a bike-to-work program (which includes a sign-up sheet for a shared company bike) and has installed timers on the coffee machine and the office refrigerator.

Those timers were added, she explained, because the coffee maker is used only until 3 p.m. daily and the lounge refrigerator runs only on Fridays, when it holds the ingredients for “our signature corporate cocktail, the RedPoint,” a whiskey sour spiked with Red Bull that traditionally ends the workweek. The electricity saved in a year comes to $686.40. Which pays for more coffee and RedPoints.

Things get a little competitive when employees start to monitor and measure. The May issue of Discover magazine will include a “Carbon Footprint Challenge,” urging readers to reduce their impact on the environment, and while preparing the article, the magazine looked in its own backyard — at the environmental impact of its reporting and printing, and at the lifestyles of its 35 employees.

Using a “Cool Climate Calculator” designed by a graduate student at the University of California, Berkeley, Discover found that Patrice Adcroft, its editorial director, was the most carbon neutral — living in Manhattan, where the magazine has its offices, and walking wherever she could — though she still created carbon emissions that were three times the world’s average.

In contrast, Michael Di Ioia, the creative director, commutes 46,000 miles by bus each year between his home in Pennsylvania and work. Since learning that he was the worst polluter of the lot, Mr. Di Ioia has contacted his oil company to take it up on its energy-saving tips, and he is considering moving closer to Manhattan, he said, “to make it easier on my carbon footprint.”

One Web site has even turned workplace conservation into a competitive sport. It offers challenges: Go a week without using disposable coffee cups. Abandon your car for public transportation once a week. Offices form teams and battle to see who can make the greatest dent in carbon emissions. As of Wednesday, Google’s Cambridge, Mass., office was leading Google’s Pittsburgh office, 2.74 tons to 2.53 tons.

My decision not to print about 36 pages of notes is but a blip in that sea of CO2. And it took nearly twice as long as usual to write this column. But I have spared myself the moment where I usually toss a stack of printouts into the trash.

Now I am off to buy some environmentally friendly compact fluorescent light bulbs.

For Scientists, a Beer Test Shows Results as a Litmus Test

By CAROL KAESUK YOON

Ever since there have been scientists, there have been those who are wildly successful, publishing one well-received paper after another, and those who are not. And since nearly the same time, there have been scholars arguing over what makes the difference.

What is it that turns one scientist into more of a Darwin and another into more of a dud?

After years of argument over the roles of factors like genius, sex and dumb luck, a new study shows that something entirely unexpected and considerably sudsier may be at play in determining the success or failure of scientists — beer.

According to the study, published in February in Oikos, a highly respected scientific journal, the more beer a scientist drinks, the less likely the scientist is to publish a paper or to have a paper cited by another researcher, a measure of a paper’s quality and importance.

The results were not, however, a matter of a few scientists having had too many brews to be able to stumble back to the lab. Publication did not simply drop off among the heaviest drinkers. Instead, scientific performance steadily declined with increasing beer consumption across the board, from scientists who primly sip at two or three beers over a year to the sort who average knocking back more than two a day.

“I was really surprised,” said Dr. Tomas Grim, the author of the study and an ornithologist at Palacky University in the Czech Republic, who normally studies the behavior of birds, not scientists. “And I am happy to see that the relationship I found seems to be very well supported by my new observations in pubs, bars and restaurants.”

Dr. Grim carried out the research by surveying his fellow Czech ornithologists about their beer-drinking habits, first in 2002 and then in 2006. He obtained the same results each time.

The paper has quickly been making the rounds among biologists, provoking reactions like surprise, nervous titters and irritation — often accompanied by the name of a scientist whose drinking is as impressive as his or her list of publications.

Matthew Symonds, an evolutionary biologist at the University of Melbourne who has also studied factors affecting scientific productivity, called the results remarkable.

“It’s rather devastating to be told we should drink less beer in order to increase our scientific performance,” Dr. Symonds said.

Though the public may tend to think of scientists as exceedingly sober, scientific schmoozing is often beer-tinged, famous for producing spectacular breakthroughs and productive collaborations, countless papers having begun as scrawls on cocktail napkins.

Yet the new study shows no indication that some level of moderate social beer drinking increases scientific productivity. Some scientists suggest that biologists in the Czech Republic could prove to be an anomaly, given that the country has a special relationship to beer, boasting the highest per-capita beer consumption on earth.

More important, as Dr. Grim pointed out, the study documents a correlation between beer drinking and scientific performance without explaining why they are correlated. That leaves open the possibility that it is not beer drinking that causes poor scientific performance, but just the opposite.

Or, as Dr. Mike Webster, an ornithologist and a beer enthusiast at Washington State University in Pullman, said, maybe “those with poor publication records are drowning their sorrows.”

In spite of his study, Dr. Grim, who said he would on occasion enjoy more than 12 beers in a night, is not on a campaign to decrease beer drinking among scientists. Why not? His answer: “I like it.”

Tuatara, the fastest evolving animal

New DNA research has questioned previous notions about the evolution of the tuatara

In a study of New Zealand’s “living dinosaur,” the tuatara, evolutionary biologist and ancient DNA expert Professor David Lambert and his team from the Allan Wilson Centre for Molecular Ecology and Evolution recovered DNA sequences from the bones of ancient tuatara, which are up to 8000 years old. They found that, although tuatara have remained largely physically unchanged over very long periods of evolution, they are evolving, at the DNA level, faster than any other animal yet examined. The research will be published in the March issue of Trends in Genetics.

“What we found is that the tuatara has the highest molecular evolutionary rate that anyone has measured,” Professor Lambert says.

The rate of evolution for Adélie penguins, which Professor Lambert and his team have studied in the Antarctic for many years, is slightly slower than that of the tuatara. The tuatara rate is significantly faster than for animals including the cave bear, lion, ox and horse.

“Of course we would have expected that the tuatara, which does everything slowly – they grow slowly, reproduce slowly and have a very slow metabolism – would have evolved slowly. In fact, at the DNA level, they evolve extremely quickly, which supports a hypothesis proposed by the evolutionary biologist Allan Wilson, who suggested that the rate of molecular evolution was uncoupled from the rate of morphological evolution.”

Allan Wilson was a pioneer of molecular evolution. His ideas were controversial when introduced 40 years ago, but this new research supports them.

Professor Lambert says the finding will be helpful in terms of future study and conservation of the tuatara, and the team now hopes to extend the work to look at the evolution of other animal species.

“We want to go on and measure the rate of molecular evolution for humans, as well as doing more work with moa and Antarctic fish to see if rates of DNA change are uncoupled in these species. There are human mummies in the Andes and some very good samples in Siberia where we have some collaborators, so we are hopeful we will be able to measure the rate of human evolution in these animals too.”

The tuatara, Sphenodon punctatus, is found only in New Zealand and is the only surviving member of a distinct reptilian order, Sphenodontia, that lived alongside early dinosaurs and separated from other reptiles 200 million years ago in the Upper Triassic period.

Lambert et al.: "Rapid molecular evolution in a living fossil." Researchers include Jennifer M. Hay, Sankar Subramanian, Craig D. Millar, Elmira Mohandesan and David M. Lambert.

Rare cosmic rays are from far away

Study confirms 1966 prediction: The most energetic particles in the universe are not from the neighborhood

Final results from the University of Utah’s High-Resolution Fly’s Eye cosmic ray observatory show that the most energetic particles in the universe rarely reach Earth at full strength because they come from such great distances that most of them collide with radiation left over from the birth of the universe.

The findings are based on nine years of observations at the now-shuttered observatory on the U.S. Army’s Dugway Proving Ground. They confirm a 42-year-old prediction – known as the Greisen-Zatsepin-Kuzmin (GZK) “cutoff,” “limit” or “suppression” – about the behavior of ultrahigh-energy cosmic rays, which carry more energy than any other known particle.

The idea is that most – but not all – cosmic ray particles with energies above the GZK cutoff cannot reach Earth because they lose energy when they collide with “cosmic microwave background radiation,” which was discovered in 1965 and is the “afterglow” of the “big bang” physicists believe formed the universe 13 billion years ago.

The journal Physical Review Letters published the results Friday, March 21.

The GZK limit’s existence was first predicted by Kenneth Greisen of Cornell University while visiting the University of Utah in 1966, and independently by Georgiy Zatsepin and Vadim Kuzmin of Moscow’s Lebedev Institute of Physics.

“It has been the goal of much of ultrahigh-energy cosmic ray physics for the past 40 years to find this cutoff or disprove it,” says physics Professor Pierre Sokolsky, dean of the University of Utah College of Science and leader of the study by a collaboration of 60 scientists from seven research institutions. “For the first time in 40 years, that question is answered: there is a cutoff.”

That conclusion, based on 1997-early 2006 observations at the High Resolution Fly’s Eye cosmic ray observatory (nicknamed HiRes) in Utah’s western desert, has been bolstered by the new Auger cosmic ray observatory in Argentina. During a cosmic ray conference in Merida, Mexico, last summer, Auger physicists outlined preliminary, unpublished results showing that the number of ultrahigh-energy cosmic rays reaching Earth drops sharply above the cutoff.

So both the HiRes and Auger findings contradict Japan’s now-defunct Akeno Giant Air Shower Array (AGASA), which observed roughly 10 times more of the highest-energy cosmic rays – and thus suggested there was no GZK cutoff.

Cosmic Rays: Far Out

Last November, the Auger observatory collaboration – to which Sokolsky also belongs – published a study suggesting that the highest-energy cosmic rays come from active galactic nuclei or AGNs, or the hearts of extremely active galaxies believed to harbor supermassive black holes.

AGNs are distributed throughout the universe, so confirmation that the GZK cutoff is real suggests that if ultrahigh-energy cosmic rays are spewed out by AGNs, they primarily are very distant from the Earth – at least in Northern Hemisphere skies viewed by the HiRes observatory. University of Utah physics Professor Charlie Jui, a co-author of the new study, says that means galaxies beyond our “local” supercluster of galaxies at distances of at least 150 million light years from Earth, or roughly 870 billion billion miles. [In U.S. usage, billion billion is correct here and in subsequent references for 10 to the 18th power. In British usage, 10 to the 18th power should be million billion.]
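The mileage figure above is a straightforward unit conversion. A quick back-of-envelope check, using rounded constants of our own choosing:

```python
# Sanity-check: 150 million light-years expressed in miles.
MILES_PER_SECOND = 186_282             # speed of light, rounded
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year
miles_per_light_year = MILES_PER_SECOND * SECONDS_PER_YEAR

distance = 150e6 * miles_per_light_year
print(f"{distance:.2e} miles")  # on the order of 9 x 10**20
```

That works out to roughly 880 billion billion miles, agreeing with the release's rounded "roughly 870 billion billion" to within a couple of percent.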

However, unpublished results from HiRes do not find the same correlation that Auger did between ultrahigh-energy cosmic rays and active galactic nuclei. So there still is uncertainty about the true source of extremely energetic cosmic rays.

“We still don’t know where they’re coming from, but they’re coming from far away,” Sokolsky says. “Now that we know the GZK cutoff is there, we have to look at sources much farther out.”

In addition to the University of Utah, High Resolution Fly’s Eye scientists are from Los Alamos National Laboratory in New Mexico, Columbia University in New York, Rutgers University – the State University of New Jersey, Montana State University in Bozeman, the University of Tokyo and the University of New Mexico, Albuquerque.

Messengers from the Great Beyond

Cosmic rays, discovered in 1912, are subatomic particles: the nuclei of mostly hydrogen (bare protons) and helium, but also of some heavier elements such as oxygen, carbon, nitrogen or even iron. The sun and other stars emit relatively low-energy cosmic rays, while medium-energy cosmic rays come from exploding stars.

The source of ultrahigh-energy cosmic rays has been a mystery for almost a century. The recent Auger observatory results have given the edge to the popular theory that they originate from active galactic nuclei. They are 100 million times more energetic than anything produced by particle smashers on Earth. The energy of one such subatomic particle has been compared with that of a lead brick dropped on a foot or a fast-pitched baseball hitting the head.

“Quite apart from arcane physics, we are talking about understanding the origin of the most energetic particles produced by the most energetic acceleration process in the universe,” Sokolsky says. “It’s a question of how much energy the universe can pack into these extraordinarily tiny particles known as cosmic rays. … How high the energy can be in principle is unknown. By the time they get to us, they have lost that energy.”

He adds: “Looking at energy processes at the very edge of what’s possible in the universe is going to tell us how well we understand nature.”

Ultrahigh-energy cosmic rays are considered to be those above about 1 billion billion electron volts (1 times 10 to the 18th power).

The most energetic cosmic ray ever found was detected over Utah in 1991 and carried an energy of 300 billion billion electron volts (3 times 10 to the 20th power). It was detected by the University of Utah’s original Fly’s Eye observatory, which was built at Dugway during 1980-1981 and improved in 1986. A better observatory was constructed during 1994-1999 and named the High Resolution Fly’s Eye.

Jui says that during its years of operation, HiRes detected only four of the highest-energy cosmic rays – those with energies above 100 billion billion electron volts. AGASA detected 11, even though it was only one-fourth as sensitive as HiRes.

The new study covers HiRes operations during 1997 through 2006, and cosmic rays above the GZK cutoff of 60 billion billion electron volts (6 times 10 to the 19th power). During that period, the observatory detected 13 such cosmic rays, compared with 43 that would be expected without the cutoff. So the detection of only 13 indicates the GZK limit is real, and that most ultrahigh-energy cosmic rays are blocked by cosmic microwave background radiation so that few reach Earth without losing energy.
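How decisive is 13 observed events versus 43 expected? Treating the detections as Poisson counts (our assumption; the release does not state the statistical model), the probability of seeing so few cosmic rays if there were no cutoff is vanishingly small:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson random variable with mean lam."""
    return sum(math.exp(-lam) * lam**n / math.factorial(n)
               for n in range(k + 1))

# 13 cosmic rays seen above the GZK cutoff vs. 43 expected without it
p = poisson_cdf(13, 43.0)
print(f"p = {p:.1e}")  # well below one in a million
```

Under this simple model the deficit is far too large to be a statistical fluke, which is why the HiRes team treats it as confirmation of the GZK suppression.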

The discrepancy between HiRes Fly’s Eye and AGASA is thought to stem from their different methods for measuring cosmic rays.

HiRes used multifaceted (like a fly’s eye) sets of mirrors and photomultiplier tubes to detect faint ultraviolet fluorescent flashes in the sky generated when incoming cosmic ray particles hit Earth’s atmosphere. Sokolsky and University of Utah physicist George Cassiday won the prestigious 2008 Panofsky Prize for developing the method.

HiRes measured a cosmic ray’s energy and direction more directly and reliably than AGASA, which used a grid-like array of “scintillation counters” on the ground.

The Search Goes On

University of Tokyo, University of Utah and other scientists now are using the new $17 million Telescope Array cosmic ray observatory west of Delta, Utah, which includes three sets of fluorescence detectors and 512 table-like scintillation detectors spread over 400 square miles – in other words, the two methods that produced conflicting results at HiRes and AGASA. One goal is to figure out why ground detectors gave an inflated count of the number of ultrahigh-energy cosmic rays.

The Telescope Array also will try to explain an apparent shortage in the number of cosmic rays at energies about 10 times lower than the GZK cutoff. This ankle-shaped dip in the cosmic ray spectrum is a deficit of cosmic rays at energies of about 5 billion billion electron volts.

Sokolsky says there is debate over whether the “ankle” represents cosmic rays that run out of “oomph” after being spewed by exploding stars in our galaxy, or the loss of energy predicted to occur when ultrahigh-energy cosmic rays from outside our galaxy collide with the big bang’s afterglow, generating electrons and antimatter positrons.

The Telescope Array and Auger observatories will keep looking for the source of rare ultrahigh-energy cosmic rays that evade the big bang afterglow and reach Earth.

“The most reasonable assumption is they are coming from a class of active galactic nuclei called blazars,” Sokolsky says.

Such a galaxy center is suspected to harbor a supermassive black hole with the mass of a billion or so suns. As matter is sucked into the black hole, nearby matter is spewed outward in the form of a beam-like jet. When such a jet is pointed at Earth, the galaxy is known as a blazar.

“It’s like looking down the barrel of a gun,” Sokolsky says. “Those guys are the most likely candidates for the source of ultrahigh-energy cosmic rays.”

The new study’s 60 co-authors include Sokolsky, Jui and 31 other University of Utah faculty members, postdoctoral fellows and students: Rasha Abbasi, Tareq Abu-Zayyad, Monica Allen, Greg Archbold, Konstantin Belov, John Belz, S. Adam Blake, Olga Brusova, Gary W. Burt, Chris Cannon, Zhen Cao, Weiran Deng, Yulia Fedorova, Richard C. Gray, William Hanlon, Petra Huntemeyer, Benjamin Jones, Kiyoung Kim, the late Eugene Loh, Melissa Maestas, Kai Martens, John N. Matthews, Steffanie Moore, Kevin Reil, Robertson Riehle, Douglas Rodriguez, Jeremy D. Smith, R. Wayne Springer, Benjamin Stokes, Stanton Thomas, Jason Thomas and Lawrence Wiencke.

Press Release 08-042

"Nanominerals" Influence Earth Systems from Ocean to Atmosphere to Biosphere

A bacterial cell living in an oxygen-free environment "breathes" using mineral nanoparticles.

The ubiquity of tiny particles of minerals--mineral nanoparticles--in oceans and rivers, atmosphere and soils, and in living cells is providing scientists with new ways of understanding Earth's workings. Our planet's physical, chemical, and biological processes are influenced or driven by the properties of these minerals.

So states a team of researchers from seven universities in a paper published in this week's issue of the journal Science: "Nanominerals, Mineral Nanoparticles, and Earth Systems."

The way in which these infinitesimally small minerals influence Earth's systems is more complex than previously thought, the scientists say. Their work is funded by the National Science Foundation (NSF).

"This is an excellent summary of the relevance of natural nanoparticles in the Earth system," said Enriqueta Barrera, program director in NSF's Division of Earth Sciences. "It shows that there is much to be learned about the role of nanominerals, and points to the need for future research."

Minerals have an enormous range of physical and chemical properties due to a wide range of composition and structure, including particle size. Each mineral has a set of specific physical and chemical properties. Nanominerals, however, have one critical difference: a range of physical and chemical properties, depending on their size and shape.

"This difference changes our view of the diversity and complexity of minerals, and how they influence Earth systems," said Michael Hochella of the Virginia Polytechnic Institute and State University in Blacksburg, Va.

The role of nanominerals is far-reaching, said Hochella. Nanominerals are widely distributed throughout the atmosphere, oceans, surface and underground waters, and soils, and in most living organisms, even within proteins.

Nanoparticles play an important role in the lives of ocean-dwelling phytoplankton, for example, which remove carbon dioxide from the atmosphere. Phytoplankton growth is limited by iron availability. Iron in the ocean is composed of nanocolloids, nanominerals, and mineral nanoparticles, supplied by rivers, glaciers and deposition from the atmosphere. Nanoscale reactions resulting in the formation of phytoplankton biominerals, such as calcium carbonate, are important influences on oceanic and global carbon cycling.

On land, nanometer-scale hematite catalyzes the oxidation of manganese, resulting in the rapid formation of minerals that absorb heavy metals in water and soils. The rate of oxidation is increased when nanoparticles are present.

Conversely, harmful heavy metals may disperse widely, courtesy of nanominerals. In research at the Clark Fork River Superfund Complex in Montana, Hochella discovered a nanomineral involved in the movement of lead, arsenic, copper, and zinc through hundreds of miles of the Clark Fork River drainage basin.

Nanominerals can also move radioactive substances. Research at one of the most contaminated nuclear sites in the world, a nuclear waste reprocessing plant in Mayak, Russia, has shown that plutonium travels in local groundwater, carried by mineral nanoparticles.

In the atmosphere, mineral nanoparticles impact heating and cooling. Such particles act as water droplet growth centers, which lead to cloud formation. The size and density of droplets influence solar radiation and cloud longevity, which in turn influence average global temperatures.

"The biogeochemical and ecological impact of natural and synthetic nanomaterials is one of the fastest growing areas of research, with not only vital scientific, but also large environmental, economic, and political consequences," the authors conclude.

In addition to Hochella, authors of the paper are Steven Lower of Ohio State University, and Patricia Maurice of the University of Notre Dame; along with R. Lee Penn of the University of Minnesota; Nita Sahai of the University of Wisconsin-Madison; Donald Sparks of the University of Delaware; and Benjamin Twining of the University of South Carolina.

Do attractive women want it all?

New study reveals relationship standards are relative

AUSTIN, Texas—Although many researchers have believed women choose partners based on the kind of relationship they are seeking, a new study from The University of Texas at Austin reveals women’s preferences can be influenced by their own attractiveness.

David Buss, psychology researcher at the university, has published the findings in “Attractive Women Want it All: Good Genes, Economic Investment, Parenting Proclivities and Emotional Commitment” in this month’s Evolutionary Psychology.

Previous researchers argued that what women value depended on the type of relationship they were looking for. Women looking for long-term partners want someone who will be a good provider for them and their children, but women seeking short-term flings care more about masculinity and physical attractiveness, features that may be passed down to children.

Buss and Todd Shackelford, psychology professor at Florida Atlantic University, found women ideally want partners who have all the characteristics they desire, but they will calibrate their standards based on their own desirability.

“When reviewing the qualities they desire in romantic partners, women gauge what they can get based on what they got,” Buss said. “And women who are considered physically attractive maintain high standards for prospective partners across a variety of characteristics.”

The researchers identified four categories of characteristics women seek in a partner:

* good genes, reflected in desirable physical traits,

* resources,

* the desire to have children and good parenting skills, and

* loyalty and devotion.

Most women attempt to secure the best combination of the qualities they desire from the same man, but the researchers said a small portion of women who do not find a partner with all the qualities may trade some characteristics for others.

Although women’s selectivity across categories reflected how attractive they appeared to other people, the researchers found the characteristics men desired in a partner did not vary based on their own physical attractiveness.

To read the journal article, visit filestore/EP06134146.pdf .

Give away your money and be happy

* 18:00 20 March 2008

* news service

* Jim Giles

Money can buy happiness, but only if we spend it on others, say researchers behind a three-part psychology experiment. It is the latest in a line of recent studies suggesting that happiness depends on experiences and interactions with others, not our income and possessions.

The researchers started out by asking over 600 Americans to rate their happiness and then supply details of earnings and expenditures. They found that extra income was linked to happiness, but personal spending was not. Only prosocial spending – gifts for others and donations to charity – correlated with happiness, say Elizabeth Dunn of the University of British Columbia and colleagues.

If the link is meaningful, reasoned Dunn, it should be possible to predict happiness levels by analysing spending patterns. To test the idea, her team next talked to a group of 16 employees at a Boston firm before and after they received a profit-sharing bonus that averaged around $5,000.

Dunn wanted to know which of three factors – initial happiness and personal and prosocial expenditure – would determine the impact of the bonus after about two months. Only the amount of prosocial spending turned out to matter.

"The manner in which they spent the bonus was a more important predictor of their happiness than the size of the bonus itself," says Dunn.

Hey big spender

So have the team discovered the key to happiness? The final part of Dunn’s study suggests that we would all be happier if we spent more on others.

They asked 46 people to rate their happiness and then gave each $5-$20, with directions on how the money should be spent. Those told to spend it on themselves were found to be slightly less happy when interviewed later the same day, but subjects who gave the money away reported increased happiness.

"Very minor alterations in spending allocations – as little as $5 – may be enough to produce real gains in happiness on a given day," concludes Dunn.

The study is interesting because it suggests that the way money is spent may be more important than total income, which people often focus on as a source of happiness, says Sonja Lyubomirsky, a psychologist at the University of California, Riverside.

Happy sappy

Lyubomirsky has recorded similar increases in happiness in students who were asked to perform acts of kindness, such as helping a friend with their homework.

She suggests that the reason may be due to the way we adapt to changes in our lives.

"Moving into a bigger house will give you a happiness boost, but you then get used to the house," says Lyubomirsky. The same goes for other types of possessions.

Acts of kindness, by contrast, are more likely to produce unexpected positive outcomes, such as a favour performed in return. Prosocial acts also enhance our self-perception in a way that possessions do not, adds Lyubomirsky.

Journal references: Science, vol 319, p 1687; Review of General Psychology, vol 9, p 111–131

Volcanoes fingered for 'crime of the Cretaceous'

* 18:00 20 March 2008

* news service

* Jason Palmer

One of the prime suspects for "the crime of the Cretaceous" - the killing-off of the dinosaurs - may have hidden evidence of its guilt inside a rare time-capsule.

The biggest volcanic eruptions are called flood events, which release millions of cubic kilometres of lava and all the gases trapped within it. One of the main theories about mass extinctions is that such flood events could have pumped sulphur and chlorine into the atmosphere, killing off anything nearby.

"But it's not just poisoning by the pollutants," says Stephen Blake of the Open University in the UK. "There can be a whole lot of knock-on effects to the environment."

However, geologists haven’t been sure that enough of the gases were released to effect large-scale climate change, and thus contribute to extinctions.

To investigate that, Blake and his colleagues scoured the Deccan Traps, a region in India that was formed by a flood event about 65 million years ago. They were looking for rare nuggets called glass inclusions.

Because they form at high pressures beneath the surface, these inclusions hold a record of the gases in the magma before eruption. The team analysed the inclusions' composition and estimated that at least 10 million million tonnes of sulphur and chlorine were pumped into the atmosphere at the time of the flood event.

That's more than enough to make volcanic activity look more like the culprit for extinctions.

"This much sulphur strengthens the case for volcanism to do lots of environmental damage," Blake says.

Journal ref: Science, DOI: 10.1126/science.1152830

Proto-humans walked on two legs in 6 million BC

* 18:00 20 March 2008

* news service

* Jeff Hecht

Anthropologists can spend a lot of time arguing over a single bone - and if that bone is one of the few known from an early ancestor, the arguments will be all the fiercer.

When a "big question" is at issue, such as when our ancestors evolved the upright stance that sets us apart from the knuckle-walking apes, the stakes get higher still. The latest ding-dong in the anthropology world centres around the oldest thigh bone in the human lineage, that of the six-million-year-old Orrorin tugenensis.

Discovered in 2000, initial reports claimed Orrorin was bipedal, but not surprisingly, many anthropologists were not convinced by an argument advanced on the basis of a single, incomplete bone.

Now Brian Richmond of George Washington University in Washington, DC and William Jungers of Stony Brook University in New York have gained access to the Orrorin fossils in Kenya, and measured the shape of the thigh bone, which reveals posture. Comparisons with thigh bones of other fossils, and of modern great apes, suggest that Orrorin was bipedal, they say.

Its thigh bone strongly resembles those of ancestors of the Homo lineage, Australopithecus and Paranthropus, upright walkers which lived between about 4 and 2 million years ago, but differs from those of modern great apes and the genus Homo. This contradicts earlier reports that said Orrorin was more closely related to Homo than to Australopithecus, but supports the idea that it was bipedal.

"I had expected Orrorin to look more primitive," says Richmond.

Orrorin retained chimp-like arms and powerful fingers that let it climb easily into trees. But its bipedal gait "made possible a new way of living in the environment, not used by other apes, which might have led to the great success of our lineage," says Richmond.

He thinks Orrorin's particular walking style remained dominant for 4 million years, until the genus Homo evolved a stride that was better for long-distance walking and running.

Lining up with Richmond, anthropologist Dan Lieberman, of Harvard University, agrees that Orrorin was bipedal. "This is a great paper," he says.

The discoverers of Orrorin had suggested it was ancestral to Homo, but not to the australopithecines, which were thought to have evolved from a separate lineage. But Richmond puts Orrorin near the base of a single lineage that led first to the australopithecines and then Homo.

Yet Tim White, an anthropologist from the University of California, Berkeley, says that although Orrorin's thigh bone is "the best current evidence" for bipedalism six million years ago, even the new study gives only a hazy picture of how Orrorin walked compared to what we know about its descendants from their more abundant fossils.

Journal reference: Science, vol 319, p 1662

Titan's changing spin hints at hidden ocean

* 18:45 20 March 2008

* news service

* David Shiga

Changes in the spin rate of Saturn's moon Titan suggest an ocean of liquid water lies beneath its icy surface, a new study reports. The finding bolsters the possibility that the moon might foster life.

Titan's low density suggests it is composed of a combination of water and rock. During the moon's early days, heat from its formation and the decay of radioactive material should have melted much of this water to create an ocean.

Much of the ocean would have since frozen. But scientists suspect a liquid layer up to 300 kilometres thick persists beneath an ice crust, probably aided by ammonia, which acts as an antifreeze.

Hard evidence for such an ocean has been difficult to come by, however. Apparent radio echoes observed by the Huygens probe as it landed on the moon's surface in 2005 might be due to radio waves reflecting off the top of an ocean. But it's possible they're simply an instrument error caused by motion of the lander's parachute.

Now, slight variations in Titan's rotation rate detected by the Cassini spacecraft have provided new evidence for an ocean, say Ralph Lorenz of the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, US and colleagues.

Faster spin

Titan is close enough to Saturn that Saturn's gravity should distort the moon's shape, causing internal friction that slows down its rotation. This slowing continues until the moon rotates exactly once per orbit, keeping one face forever locked on the planet.

But when Cassini's radar tracked surface features on the moon, scientists found evidence that Titan rocks slightly due to tiny shifts in its rotation rate. Currently, Titan spins an extra 0.36° over the course of a year beyond what it would if its rotation were perfectly synchronised with its orbit around Saturn. The moon's rotation rate also appears to be slowly increasing.

The rocking effect was actually predicted in 2005 as a result of changes in the direction of winds in Titan's massive atmosphere over the course of Saturn's 29.5-year orbit around the Sun. A similar effect is known to vary Earth's rotation rate.

Conditions suitable for life may prevail in the ocean of liquid water beneath the icy surface of Saturn's largest moon, Titan, seen here in a false-colour image taken by the Cassini spacecraft (Image: NASA/JPL/Space Science Institute)

Liquid layer

Importantly, the rocking is easier to produce if Titan's surface is simply a shell that floats on top of an ocean. The rotation of the relatively lightweight shell would be easily disturbed by changing winds. Alternatively, it would be much more difficult for winds to change Titan's rotation if the moon were solid from surface to core.

The observations weigh "in favour of a liquid layer, but it is not a definitive proof", Gabriel Tobie of the Université de Nantes in France told New Scientist.

Tobie and Christophe Sotin of NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, US, say changes in the orientation of the moon's rotation axis over thousands of years could also explain the observed shift.

But Bryan Stiles of JPL, who is a member of Lorenz's team, disagrees. He says the existing data "is sufficient to distinguish between movement of the pole and changes in spin rate" and that the team has indeed detected changes in spin rate.

Quick changes

Tobie says further observations could help settle the matter. If the spin rate changes on a timescale of a few years, that would bolster the case that the variations are produced by seasonal wind changes, rather than very long-term changes in the direction of Titan's rotation axis.

If Titan has an ocean beneath its surface, could life be present there? Possibly. Tobie says Titan may have provided especially good conditions for the development of life. Early in its history, liquid water may have been exposed to the surface, allowing complex carbon-containing molecules from the atmosphere to mix with the water.

"Organic [carbon-based] chemistry and warm water provide very good conditions for life to arise," Tobie says, although he adds that it might have been difficult for this life to survive after the ocean was cut off from the atmosphere by ice.

Other icy moons, including Jupiter's Europa and Saturn's Enceladus, may also harbour liquid water, making them potentially prime locations for life in the solar system.

Chloride salts on Mars may have preserved past life

* 22:28 20 March 2008

* news service

* Maggie McKee

Chloride salt deposits have been found for the first time on Mars, thanks to high-resolution images taken by NASA's Mars Odyssey spacecraft. The deposits likely formed through the evaporation of liquid water early in the planet's history and may have preserved evidence of past Martian life – though some say the water was too salty to be habitable.

Other types of salts, known as sulphates, have been found by orbiting spacecraft and the rover Opportunity, hinting that parts of the planet were once in contact with acidic, salty water. But the spectral signature of chloride salts is harder to pick out in orbital data – especially if the salts cover a relatively small patch of ground.

Now, researchers led by Mikki Osterloo of the University of Hawaii in Honolulu, US, have used the high-resolution vision of the THEMIS camera on Mars Odyssey to spot about 200 such patches on the Red Planet. The patches measure between 1 and 25 square kilometres in size.

Because they are scattered around Mars's southern hemisphere, which is older and more cratered than the lava-covered northern hemisphere, the team believes the deposits formed between 3.5 and 3.9 billion years ago.

On Earth, chloride deposits can form either by the evaporation of liquid water or by the escape of volcanic gases from vents in the ground.

But Osterloo's team thinks water was probably responsible for many of the Martian deposits, since they were often found in low-lying areas where local surface water could have pooled or the surface itself dipped into the groundwater table. "The water would evaporate and leave mineral deposits, which build up over years," Osterloo says.

Past life?

The researchers say the deposits suggest that near-surface water was widespread in early Martian history. And they think water could have swept up any organic material – including life – that may have been around at the time, depositing it into low-lying basins, where the water then evaporated.

"By their nature, evaporite deposits may be better at preserving signs of habitability or past life," team member Victoria Hamilton, also of the University of Hawaii, told New Scientist.

None of the chloride deposits is on the short list of landing sites chosen for NASA's next rover mission, the Mars Science Laboratory, due to launch in 2009. Instead, those sites were chosen because they boast clays, which are also thought to form in the presence of water.

Chloride-bearing minerals (shown in blue) are often found in terrain that lies lower than its surroundings, suggesting water pooled and evaporated there (Image: M Osterloo/NASA/JPL-Caltech/ASU/U of Hawaii)

"Both [clays] and chlorides are indicative of water-related processes, so from that perspective, I think either would be interesting," Hamilton says. "But I think our chances of finding evidence of a large body or volume of water may be better at a salt site."

Salty lakes

Andrew Knoll of Harvard University in Cambridge, Massachusetts, US, who is not a member of the team, says the research does suggest salty lakes dotted the early Martian surface. "[But] this is peanuts relative to the distribution of water on Earth," whose surface is 70% covered by water, he says.

"Second, table salt is a record of water bidding good-bye," he told New Scientist. "On Mars, NaCl [sodium chloride] is likely to precipitate only from extremely salty waters."

"Few if any known terrestrial organisms would be happy in brines concentrated to the point of NaCl precipitation on Mars," he adds. "So the report carries a double-edge sword for astrobiology."

But though sodium chloride, or table salt, has been found previously in a Martian meteorite that fell to Earth, the THEMIS data does not distinguish between the types of chlorides found in the newly discovered deposits.

Chloride salt deposits have been found in 200 places in Mars's southern hemisphere. Many are thought to have formed by the evaporation of surface or ground water. But a fair number are found in or near craters, suggesting impacts may have heated soil rich in ice or liquid water, creating a hydrothermal environment where water could have evaporated to form the deposits (Image: Osterloo et al/Science)

"Not all chloride minerals are sodium chlorides – for example, magnesium chlorides," Hamilton says.

"I think until we know more about the chemistry of these deposits, it's too early to make any blanket assessments on the habitability of the environment," says Osterloo.

Journal reference: Science (vol 319, p 1651)

Supercontinent was too heavy to hold

* 08:00 21 March 2008

* news service

* Kate Ravilious

How and why did Gondwana - the southern hemisphere supercontinent that existed between 500 and 180 million years ago – break up? A new model suggests that it simply cracked in two, collapsing under its own weight.

For the last 40 years geologists have debated how Gondwana split apart. There are two competing theories – one that says the continent was smashed to smithereens and the other that says it broke into just a few large pieces.

Graeme Eagles, from Royal Holloway, University of London and Matthias König, from the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven, Germany, gathered together magnetic and gravity anomaly data from some of Gondwana's first cracking points - fracture zones in the Mozambique Basin and the Riiser-Larsen Sea off Antarctica.

Plugging these data into a computer model, Eagles and König plotted the path that different parts of Gondwana took as the supercontinent broke apart. The model supports the idea of Gondwana splitting into just two large plates.

A simple split is compelling because it removes the need for a plume of hot mantle underneath Gondwana to start the splitting process – unusual behaviour for the Earth's mantle.

"It doesn't require us to re-invent plate tectonics at break-up times," says Eagles.

Chunky continent

Instead, Eagles and König suggest that large continents like Gondwana are inherently unstable because their crust is much thicker than oceanic crust, making them spread outwards under their own weight. Eventually the groaning mass splits into a few large plates.

Inevitably, supporters of the multiple plates theory think that Eagles and König have misinterpreted the data.

"In order to make the continents fit they have to place Sri Lanka and India in a position not supported by the geological data," says Maarten De Wit, from the University of Cape Town in South Africa.

Journal ref: Geophysical Journal International, DOI: 10.1111/j.1365-246X.2008.03753.x

US Army toyed with telepathic ray gun

* 12:00 21 March 2008

* news service

* David Hambling

A recently declassified US Army report on the biological effects of non-lethal weapons reveals outlandish plans for "ray gun" devices, which would cause artificial fevers or beam voices into people's heads.

The report, titled "Bioeffects Of Selected Nonlethal Weapons", was released under the US Freedom of Information Act and is available online as a PDF. The DoD has confirmed to New Scientist that it released the documents, which detail five different "maturing non-lethal technologies" using microwaves, lasers and sound.

Released by US Army Intelligence and Security Command at Fort Meade, Maryland, US, the 1998 report gives an overview of what was then the state of the art in directed energy weapons for crowd control and other applications.

A word in your ear

Some of the technologies are conceptual, such as an electromagnetic pulse that causes a seizure like those experienced by people with epilepsy. Other ideas, like a microwave gun to "beam" words directly into people's ears, have been tested. It is claimed that the so-called "Frey Effect" – using close-range microwaves to produce audible sounds in a person's ears – has been used to project the spoken numbers 1 to 10 across a lab to volunteers.

In 2004 the US Navy funded research into using the Frey effect to project sound that caused "discomfort" into the ears of crowds.

The report also discusses a microwave weapon able to produce a disabling "artificial fever" by heating a person's body. While tests of the idea are not mentioned, the report notes that the necessary equipment "is available today". It adds that while it would take at least fifteen minutes to achieve the desired "fever" effect, it could be used to incapacitate people for almost "any desired period consistent with safety."

Less exotic technologies discussed include laser dazzlers and a sound source loud enough to disturb the sense of balance. Both have been realised in the years since the report was written. The US Army uses laser dazzlers in Iraq, while the Long Range Acoustic Device has military and civilian users, and has been used on one occasion to repel pirates off Somalia.

However, the report does not mention any trials of weapons for producing artificial fever or seizures, or beaming voices into people's heads.

Potentially torturous

Steve Wright, a security expert at Leeds Metropolitan University, UK, warns that the technologies described could be used for torture. In 1998 the European Parliament passed a motion banning potentially dangerous incapacitating technologies that interfere with the human brain.

"The epileptic seizure inducing device is grossly irresponsible and should never be fielded," says Wright. "We know from similar [chemically] artificially-induced fits that the victim subsequently remains 'potentiated' and may spontaneously suffer epileptic fits again after the initial attack."

The acoustic energy device that affects the ear canals, disrupting the motion sense, may require dangerously loud sound levels to be effective, points out Juergen Altmann, a physicist at Dortmund University, Germany, who is interested in new military technologies.

"[There is] inconsistency between the part that says 'interesting' effects occur at 130-155 dB and the Recovery/Safety section that says that 115 dB is to be avoided - without commenting on the difference."

Universe's most powerful blast visible to the naked eye

* 15:16 21 March 2008

* news service

* Govert Schilling

The most powerful blast ever observed in the universe was spotted on Wednesday. That day, a record four gamma-ray bursts were detected by NASA's Swift telescope.

If you knew exactly when and where to look, you could have seen the bright burst with the naked eye, despite its enormous distance of 7.5 billion light years. "This burst definitely has an 'oh wow' flavour," says astrophysicist Ralph Wijers of the University of Amsterdam in the Netherlands.

Gamma-ray bursts are brief but extremely powerful flashes of high-energy radiation. Theorists think they signal the violent death of very massive, rapidly rotating stars.

Gamma rays can't penetrate Earth's atmosphere, so they can only be observed by space telescopes. But many bursts also produce lower-energy X-rays, radio waves and even visible light. So if you're quick enough, you can study gamma-ray bursts from the ground.

That's where NASA's Swift satellite comes in. It detects a burst, measures its sky position, and then radios the results to robotic telescopes on the ground – all within seconds.

With four bursts, Wednesday was the busiest day in Swift's life so far. The second of the four bursts, GRB 080319B, occurred at 0613 GMT in the northern constellation Boötes – well placed for follow-up observations with telescopes in the US.

One of these robotic telescopes, called RAPTOR, was already looking in that part of the sky. It witnessed the quick rise and fall of an optical flash. About 30 to 40 seconds after the Swift detection, the burst peaked at naked-eye visibility, making this the only gamma-ray burst so far that could have been seen without a telescope.

The extremely luminous afterglow of GRB 080319B was imaged by Swift's X-ray Telescope (left) and Optical/Ultraviolet Telescope (right) (Image: NASA/Swift/Stefan Immler et al.)

'Burst of the year'

But it took a few more hours to determine how powerful the burst really was. Paul Vreeswijk of the Dark Cosmology Center in Copenhagen, Denmark, led a team that measured the distance to the burst using the European Southern Observatory's Very Large Telescope in Chile.

By studying how much the burst's light had been stretched, or redshifted, as it travelled through the expanding universe, they put the explosion at 7.5 billion light years away. "At first, I had expected this burst to be much closer," says Vreeswijk. "It's exciting to be able to see something with the naked eye halfway across the universe."
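The 7.5-billion-light-year figure is the light-travel distance implied by the burst's measured redshift. A minimal sketch of that conversion, assuming a flat Lambda-CDM cosmology (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_lambda = 0.7) and the burst's redshift of roughly 0.94 – the cosmological parameters and redshift value are not quoted in the article:

```python
import math

# Rough lookback-time (light-travel time) estimate from redshift.
# Assumed parameters, not taken from the article:
H0 = 70.0          # Hubble constant, km/s/Mpc
OMEGA_M = 0.3      # matter density parameter
OMEGA_L = 0.7      # dark-energy density parameter
HUBBLE_TIME_GYR = 9.78 / (H0 / 100.0)  # 1/H0 expressed in Gyr

def lookback_time_gyr(z, steps=10000):
    """Light-travel time to redshift z in Gyr, by trapezoidal
    integration of dt = dz / ((1 + z) * H(z))."""
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(OMEGA_M * (1 + zi) ** 3 + OMEGA_L)  # H(z) / H0
        f = 1.0 / ((1 + zi) * e)
        total += f if 0 < i < steps else f / 2.0  # trapezoid end weights
    return HUBBLE_TIME_GYR * total * dz

print(f"{lookback_time_gyr(0.94):.1f} Gyr")  # roughly 7.5 billion years
```

A redshift near 0.94 indeed yields a light-travel time of about 7.5 billion years under these assumptions, matching the distance reported for the burst.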

Knowing the distance, astronomers could calculate the burst's true luminosity: 2.5 million times brighter than the most powerful supernova ever seen.

It's unclear what exactly caused this incredible brightness. Most theorists think that gamma-ray bursts produce two narrow jets of matter and energy, so we may have been lucky to look right down the cannon's barrel. But, says Vreeswijk's colleague Jens Hjorth, "in this business, getting surprised ceases to surprise you".

Follow-up studies of GRB 080319B's afterglow are still in progress. Says Wijers: "It doesn't shatter current theoretical thinking, but in terms of detailed knowledge, this will probably become the burst of the year."

Study unlocks Latin American past

European colonisation of South America resulted in a dramatic shift from a native American population to a largely mixed one, a genetic study has shown.

It suggests male European settlers mated with native and African women, and slaughtered the men.

But it adds that areas like Mexico City "still preserve the genetic heritage" because these areas had a high number of natives at the time of colonisation.

The findings appear in the journal Public Library of Science Genetics.

The international team of researchers wrote: "The history of Latin America has entailed a complex process of population mixture between natives and recent immigrants across a vast geographic region.

"Few details are known about this process or about how it shaped the genetic make-up of Latin American populations."

'Clear signature'

The study examined 249 unrelated individuals from 13 Mestizo populations (people of mixed European/native American origin) in seven countries, ranging from Chile in the south to Mexico in the north.

"There is a clear genetic signature," explained lead author Andres Ruiz-Linares from University College London.

"The initial mixing occurred predominately between European immigrant men and native and African women."

He said the study showed that this pattern was uniform across Latin America.

"We see it in all the populations we examined, so it is clearly a historical fact that the ancestors of these populations can be traced to matings between immigrant men and native and African women."

The researchers found that within the genetic landscape of Latin America, there were variations.

"The Mestizo with the highest native ancestry are in areas which historically have had relatively large native populations," they reported.

This included Andean regions and cities such as Mexico City, where major civilisations were already established by the time Europeans reached the continent in the late 15th Century.

"By contrast, the Mestizo with the highest European ancestry are from areas with relatively low pre-Columbian native population density and where the current native population is sparse," they added.

Bloody past

Explaining the fate of native males when the Europeans arrived, Professor Ruiz-Linares said: "It is a very sad and terrible historical fact, they were basically annihilated.

"Not only did the European settlers take away land and property, they also took away the women and, as much as possible, they exterminated the men."

He said the findings could help people change their perception of Latin American history.

"It is very important in terms of rescuing the past and recognising the roots of the population, and the living presence of natives within the current population," Professor Ruiz-Linares explained.

As well as providing an insight into the past, the team hopes that the findings will also help shape studies aimed at identifying and analysing diseases.
