March of the giant penguins

Prehistoric equatorial penguins reached 5 feet in height

Giant prehistoric penguins? In Peru? It sounds more like something out of Hollywood than science, but a researcher from North Carolina State University along with U.S., Peruvian and Argentine collaborators has shown that two heretofore undiscovered penguin species reached equatorial regions tens of millions of years earlier than expected and during a period when the earth was much warmer than it is now.

Paleontologist Dr. Julia Clarke, assistant professor of marine, earth and atmospheric sciences at NC State with appointments at the North Carolina Museum of Natural Sciences and the American Museum of Natural History, and colleagues studied two newly discovered extinct species of penguins. Peruvian paleontologists discovered the new penguins’ sites in 2005.

The research is published online in June in the Proceedings of the National Academy of Sciences. It was funded by the National Science Foundation Office of International Science and Engineering and the National Geographic Society.

(Right) The Eocene giant penguin (Icadyptes salasi) and (Left) the middle Eocene penguin (Perudyptes devriesi) are shown to scale with the only penguin inhabiting Peru today - the Humboldt penguin, or Spheniscus humboldti.

The first of the new species, Icadyptes salasi, stood 5 feet tall and lived about 36 million years ago. The second new species, Perudyptes devriesi, lived about 42 million years ago, was approximately the same size as a living King Penguin (2 ½ to 3 feet tall) and represents a very early part of penguin evolutionary history. Both of these species lived on the southern coast of Peru.

These new penguin fossils are among the most complete yet recovered and call into question hypotheses about the timing and pattern of penguin evolution and expansion. Previous theories held that penguins probably evolved in high latitudes (Antarctica and New Zealand) and then moved into lower latitudes that are closer to the equator about 10 million years ago – long after significant global cooling that occurred about 34 million years ago.

"We tend to think of penguins as being cold-adapted species,” Clarke says, "even the small penguins in equatorial regions today, but the new fossils date back to one of the warmest periods in the last 65 million years of Earth’s history. The evidence indicates that penguins reached low latitude regions more than 30 million years prior to our previous estimates.”

The new species are the first fossils to indicate a significant and diverse presence of penguins in equatorial areas during a period that predates one of the most important climatic shifts in Earth’s history, the transition from extremely warm temperatures in the Paleocene and Eocene Epochs to the development of "icehouse” Earth conditions and permanent polar icecaps. Not only did penguins reach low latitudes during this warmer interval, but they thrived: more species are known from the new Peruvian localities than inhabit those regions today.

By comparing the pattern of evolutionary relationships with the geographic distribution of other fossil penguins, Clarke and colleagues estimate that the two Peruvian species are the product of two separate dispersal events. The ancestors of Perudyptes appear to have inhabited Antarctica, while those of Icadyptes may have originated near New Zealand.

Icadyptes salasi had a spear-like beak (top). Scale bar = 1cm

Its skull was much longer and more pointed than that of the modern-day Peruvian (Humboldt) penguin, Spheniscus humboldti (below)

The new penguin specimens are among the most complete yet discovered, and they show what early penguins looked like. Both new species had long, narrow, pointed beaks – now believed to be an ancestral beak shape for all penguins. Perudyptes devriesi had a slightly longer beak than some living penguins, but the giant Icadyptes salasi exhibits a grossly elongated beak with features not known in any other extinct or living species. This species’ beak is sharply pointed, almost spear-like in appearance, and its neck is robustly built with strong muscle attachment sites. Icadyptes salasi is among the largest penguin species yet described.

Although these fossils seem to contradict some of what we think we know about the relationship between penguins and climate, Clarke cautions against assuming that just because prehistoric penguins may not have been cold-adapted, living penguins won’t be negatively affected by climate change.

"These Peruvian species are early branches off the penguin family tree, that are comparatively distant cousins of living penguins,” Clarke says. "In addition, current global warming is occurring on a significantly shorter timescale. The data from these new fossil species cannot be used to argue that warming wouldn’t negatively impact living penguins.”

Baby poop gives Stanford researchers inside scoop on development of gut microbes

STANFORD, Calif. - Researchers at the Stanford University School of Medicine are as interested in a baby's poop as doting parents are, and for good reason.

When a baby is born, so too is a new microbial ecosystem in the baby's gut. The Stanford team has made the most extensive survey yet of how the microbes establish flourishing communities in what began as a sterile environment. Their findings will be published in the July issue of PLoS Biology.

"It's an amazing thing trying to figure out how we go from a completely sterile gut to having a microbial ecosystem that will be with us for the rest of our lives," said the study's senior author, Patrick Brown, MD, PhD, professor of biochemistry. "What could be more fundamental than that""

Looking at stool samples from 14 healthy, full-term infants over their first year of life, including one set of fraternal twins, the researchers found that each baby had a very different set of microbes colonizing its intestinal tract at different stages.

"This study emphasizes that the definition of a 'healthy' baby is pretty broad," said the article's lead author Chana Palmer, PhD, who was a graduate student in Brown's lab at the time the work was done. She noted that by the end of the first year, despite the chaos of the early months, each baby's intestinal ecology remained unique but harbored dynamic, complex societies of microbes similar to that found in adults' intestines.

The gut of a baby is a rapidly evolving place. It has no inhabitants before birth. Within days of an infant's delivery, the microbial immigrants in the gut establish a thriving community whose population soon outnumbers that of the baby's own cells tenfold, a ratio that persists throughout life.

"It's really striking the degree to which the patterns of bacterial abundance were so dynamic over time," said David Relman, MD, a collaborator on the work. "Things appear and then things suddenly drop in abundance and other things appear and take their place." There are no obvious reasons for these fluctuations, but there must be important factors underlying these patterns, said Relman, associate professor of medicine and of microbiology and immunology.

Six of the 14 babies had some course of antimicrobial medicine during their first year. Only one of the babies had an extremely dramatic change in the microbial community in response to the drugs. "But it was so dramatic it makes us want to look at more examples of that and try to understand generalizations of the process," Palmer said.

The researchers had one set of fraternal twins in their study, the only babies delivered by a planned caesarean section and thus without any exposure to the mother's vaginal or rectal environments. They had much lower bacterial levels than the other babies for the first week of life.

The twins also showed the most similarity in their microbial community profiles, leading to speculation that combinations of genetics and environment can shape a microbial community. "The fact that the twins were so similar gives us a glimmer of hope that it's not a completely chaotic process," said Palmer.

Although microbes' reputation for causing disease usually gets top billing, the tiny critters play a number of critical roles in health, including processing nutrients, defining host body-fat content and providing protection against invading pathogens.

Despite their significant role in health, much of their existence remains a mystery. No more than half the total number of intestinal-tract organisms are even recognized, Relman said.

This study relied heavily on the use of a DNA microarray - technology that Brown helped develop in the mid-1990s. It consists of a glass microscope slide with an orderly array of DNA spots that can give a snapshot of genetic activity in a given sample. The microarray design for this study included spots representing nearly all bacteria known to be involved in human microbial ecosystems. The researchers included spots that would recognize whole families of bacteria so that they wouldn't be limited to the 80,000 or so known species.
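As a rough illustration of how such spot readings become a community profile (this is not the Stanford group's actual analysis code), the sketch below aggregates hypothetical microarray spot intensities into family-level relative abundances; all spot names and intensity values are invented.

```python
# Minimal sketch: aggregating hypothetical microarray spot intensities
# into family-level relative abundances. All names and values are invented.
from collections import defaultdict

# Each spot targets one taxon; keys are (bacterial family, spot name).
spot_intensities = {
    ("Bacteroidaceae", "spot_bacteroides_A"): 1200.0,
    ("Bacteroidaceae", "spot_bacteroides_B"): 800.0,
    ("Enterobacteriaceae", "spot_e_coli_like"): 3000.0,
    ("Bifidobacteriaceae", "spot_bifido"): 500.0,
}

family_totals = defaultdict(float)
for (family, _spot), intensity in spot_intensities.items():
    family_totals[family] += intensity

grand_total = sum(family_totals.values())
for family, total in sorted(family_totals.items()):
    print(f"{family}: {total / grand_total:.1%} of total signal")
```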

The researchers had the parents collect stool samples from the babies according to a prescribed schedule, beginning with the first stool produced after birth. There were additional samples around key events, such as starting on solid food and taking antibiotics.

Postdoctoral scholar Elisabeth Bik, PhD, and research fellow Daniel DiGiulio, MD, in Relman's lab, were involved in processing and analyzing more than 500 samples in this study.

The team emphasized that the goal of this study was to establish a baseline for the range of what occurs in healthy babies born to healthy mothers. Building on that foundation, future studies may identify new species of bacteria, separate the role of genetics from that of life history, and uncover new roles of microbes in human health.

Other factors for future inquiry include looking at breast-fed babies compared to those who are formula-fed, and comparing premature babies to full-term ones. All the babies in this study were full-term and breast-fed.

"This study raises so many interesting questions, and it's a wonderful segue into the next phase," said Relman.

This work was supported by funding from the Horn Foundation, the National Institutes of Health and the Howard Hughes Medical Institute. Brown is an investigator for the Howard Hughes Medical Institute.

SARS survivors recover from physical illness, but may experience mental health decline

Most patients who survived severe acute respiratory syndrome (SARS) had good physical recovery, but they or their caregivers often reported a decline in mental health one year later, according to a study in the June 25 issue of Archives of Internal Medicine, one of the JAMA/Archives journals.

"Severe Acute Respiratory syndrome (SARS) became a global epidemic in 2003. Most cases were in Asia, and the largest concentration of North American cases occurred in Toronto, Ontario,” according to background information in the article. "The longer-term physical and psychological consequences of SARS were not reported until recently.” Investigations of the disease have focused on lung function, distance walked in six minutes and health-related quality of life.

Catherine M. Tansey, M.Sc., University Health Network, Toronto, and colleagues, evaluated 117 SARS survivors from Toronto who were discharged from the hospital in 2003. Patients were evaluated three, six and 12 months after leaving the hospital by undergoing a physical examination, a six-minute walk test, a lung function test, a chest X-ray and quality-of-life measures and reporting how often they saw a physician. Formal caregivers of survivors were given a survey on caregiver burden one year after patient discharge.

All but one patient had chest X-rays demonstrating normal or pre-SARS condition by one year. At three months, 31 percent of the survivors had a reduced six-minute walk distance and at one year, 18 percent did. For most, lung capacity measures and the lung’s ability to exchange respiratory gases were within normal limits at three months and during the rest of the follow-up period.

General health, vitality and social functioning remained below the normal range one year after discharge from the hospital. Many patients returned to work part-time, increasing their workload over the first two months, while 23 patients returned to work full-time with no need for a modified schedule. "At one year, 17 percent of patients had not returned to work, and a further 9 percent had not returned to their pre-SARS level of work,” the authors note.

Survivors used health care services frequently in the first year after hospitalization. "Psychiatric evaluation accounted for the greatest number of visits,” the authors write. "Of the patients, 74 percent saw their primary care physician a median of five times. Infectious disease specialists assessed 72 percent of patients, mostly in the first three months after discharge.” Caregiver surveys showed a decline in the mental health of caregivers, attributed to reported lifestyle interference and loss of control.

"We have shown that most SARS survivors have pulmonary and functional recovery from their acute illness. However, one year after discharge from hospital, health-related quality of life remained lower than in the general population, and patients reported important decrements in mental health. These findings are reflected in the notable utilization of psychiatric and psychological services in the one-year follow-up period,” the authors conclude. "These data may help to highlight the needs of patients and caregivers during and after an epidemic, the potential benefit of a family-centered approach to follow-up care, and the importance of exploring strategies to minimize the psychological burden of an epidemic illness as part of future pandemic planning initiatives.”

Personal comments by physicians distract from patient needs

Rochester investigation finds disclosures can harm physician-patient relationship

In well-intentioned efforts to establish relationships, some physicians tell patients about their own family members, health problems, travel experiences and political beliefs.

While such disclosures seem an important way to build a personal connection, a University of Rochester School of Medicine and Dentistry investigation of secretly recorded first-time patient visits to experienced primary care physicians has found that these personal disclosures have no demonstrable benefits and may even disrupt the flow of important patient information.

The journal Archives of Internal Medicine publishes the surprising results of the investigation in the June 25 issue. The investigators found physician self-disclosures in about a third of patient visits. The disclosures "were often non sequiturs, unattached to any discussion in the visit and focused more on the physician’s needs than the patient’s needs.” The disclosures "interrupted the flow of information exchange and valuable patient time in the typically time-pressured primary care visit.” Investigators found no examples of a physician making a statement that led back from the self-disclosure to the patient’s concern.

"Most doctors think self-disclosure is a good idea for building relationships,” said Susan H. McDaniel, Ph.D., lead author of the article and a professor of psychiatry and family medicine at the University of Rochester School of Medicine and Dentistry. "The health care system now requires doctors to see many patients. Visits to the doctor often are short and anything that is a waste of time takes away from getting to what the patient needs.”

The psychologists and physicians who conducted the investigation began the research believing that self-disclosure was an effective way to encourage patients to say more about what really troubled them.

"We were hoping to find that physician disclosure would be a part of patient-centered care, encouraging the patient to open up and offer additional valuable information,” said Howard B. Beckman, M.D., a co-author of the article and a clinical professor of medicine and family medicine at the School of Medicine and Dentistry. "Instead we found these disclosures to be doctor-centered and to benefit the doctor, not the patient. As a discloser myself, I was really devastated.”

The investigation is part of a larger study of patient communication and health outcomes funded by the Agency for Healthcare Research and Quality. One hundred primary care physicians in the Rochester region agreed to participate, consenting to two unannounced and secretly recorded visits by people trained to portray specific patient roles.

The project produced 193 recorded first-time patient visits to primary care physicians. For the self-disclosure investigation, four recordings were eliminated for poor technical quality and 76 were excluded because the physician suspected the patient was not a true patient before the end of the visit. Self-disclosures were defined as physician statements about his or her own personal or professional experience.

Each investigator independently reviewed and analyzed 113 transcripts of patient visits, rating the content for self-disclosure and its effect on the patient. Thirty-four percent of the visits contained at least one self-disclosure. None of the self-disclosures were patient focused, while 60 percent were physician focused, the investigators concluded. Eighty-five percent of the disclosures were considered not useful and 11 percent were viewed as disruptive.

Here is an example of one brief exchange:

Physician: No partners recently?

Patient: I was dating for a while and that one just didn’t work out . . . about a year ago.

Physician: So you’re single now.

Patient: Yeah. It’s all right.

Physician (laughing): It gets tough. I’m single as well. I don’t know. We’re not the right age to be dating, I guess. So let’s see. No trouble urinating or anything like that?

Although occasionally it might be useful for physicians to answer inquiries from patients about their personal lives or to comment on a specific topic raised by a patient, such discussions generally should be very short and clearly tie into a patient’s concerns, the authors of the article concluded.

Those involved in the investigation said the findings have affected how they conduct their practice. McDaniel, for example, now takes breaks between patient visits to discuss the day’s news or vent about problems, eliminating those kinds of self-disclosures from patient sessions. Beckman, who is medical director of the Rochester Individual Practice Association, has stopped telling his elderly patients about his mother, who was very active and healthy through her 80s.

"I would tell people their expectations could be higher and use my mother as an example,” Beckman said. "In subsequent visits, they asked about my mother. That was great until her health began to decline then I had to tell them she was not well. That frightened them. If I couldn’t help my own mother, how could I help them" My disclosure did not work as well as I had hoped.”

"Patients want their needs met. Doctors want to meet the needs of their patients and they want to have human contact,” said McDaniel, who is director of the Wynne Center for Family Research at the University of Rochester Medical Center. "But self-disclosure ultimately is misguided. Patient visits should be focused on the patient. They are not about me.”

While a physician’s self-disclosures usually develop from positive intentions, the investigators said that empathy, understanding and compassion toward the patient are more reliable and helpful for the patient.

"If I tell my patients about my problems or how I feel, they are taking care of me,” Beckman said. "Is taking care of me the only way to deepen the relationship" There are other ways. We can have empathy. We can encourage our patients. We can praise them -- things that make a person feel valued.”

McDaniel hopes physicians use the article to examine the way they conduct their practice and consider whether self-disclosures are in the best interest of their patients. She wants medical schools to include more discussions of methods of communication with patients in their curriculum.

"Doctors need support groups, self-awareness groups and mindfulness groups to meet their needs,” McDaniel said. "They should not use self-disclosure. If they want to complain about their rent or the stress of the work, they should complain to their colleagues, not their patients.”

Substance in tree bark could lead to new lung-cancer treatment

DALLAS— Researchers at UT Southwestern Medical Center have determined how a substance derived from the bark of the South American lapacho tree kills certain kinds of cancer cells, findings that also suggest a novel treatment for the most common type of lung cancer.

The compound, called beta-lapachone, has shown promising anti-cancer properties and is currently being used in a clinical trial to examine its effectiveness against pancreatic cancer in humans. Until now, however, researchers didn’t know the mechanism of how the compound killed cancer cells.

Dr. David Boothman, a professor in the Harold C. Simmons Comprehensive Cancer Center and senior author of a study appearing online this week in the Proceedings of the National Academy of Sciences, has been researching the compound and how it causes cell death in cancerous cells for 15 years.

In the new study, Dr. Boothman and his colleagues in the Simmons Cancer Center found that beta-lapachone interacts with an enzyme called NQO1, which is present at high levels in non-small cell lung cancer and other solid tumors. In tumors, the compound is metabolized by NQO1 and produces cell death without damaging noncancerous tissues that do not express this enzyme.

"Basically, we have worked out the mechanism of action of beta-lapachone and devised a way of using that drug for individualized therapy,” said Dr. Boothman, who is also a professor of pharmacology and radiation oncology.

In healthy cells, NQO1 is either not present or is expressed at low levels. In contrast, certain cancer cells — like non-small cell lung cancer — overexpress the enzyme. Dr. Boothman and his colleagues have determined that when beta-lapachone interacts with NQO1, the cell kills itself. Non-small cell lung cancer is the most common type of lung cancer.

Beta-lapachone also disrupts the cancer cell’s ability to repair its DNA, ultimately leading to the cell’s demise. Applying radiation to tumor cells causes DNA damage, which results in a further boost in the amount of NQO1 in the cells.

"When you irradiate a tumor, the levels of NQO1 go up,” Dr. Boothman said. "When you then treat these cells with beta-lapachone, you get synergy between the enzyme and this agent and you get a whopping kill.”

In the current study, Dr. Boothman tested dosing methods on human tumor cells using a synthesized version of beta-lapachone and found that a high dose of the compound given for only two to four hours caused all the NQO1-containing cancer cells to die.

Understanding how beta-lapachone works to selectively kill chemotherapy-resistant tumor cells creates a new paradigm for the care of patients with non-small cell lung cancer, the researchers said. They are hoping that by using a drug like beta-lapachone, they can selectively target cancer tumors and kill them more efficiently. The current therapy for non-small cell lung cancer calls for the use of platinum-based drugs in combination with radiation.

"Future therapies based on beta-lapachone and NQO1 interaction have the potential to play a major role in treating devastating drug-resistant cancers such as non-small cell lung cancer,” said Dr. Erik Bey, lead author of the study and a postdoctoral researcher in the Simmons Cancer Center. "This is the first step in developing chemotherapeutic agents that exploit the proteins needed for a number of cellular processes, such as DNA repair and programmed cell death.”

About 85 percent of patients with non-small cell lung cancer have cancer cells containing elevated levels of the NQO1 enzyme, which is encoded by the NQO1 gene. Patients who carry a different version of the gene, one that does not yield the active enzyme, would likely not benefit from treatment targeting NQO1, Dr. Boothman said.

Dr. Boothman cautioned that clinical trials of beta-lapachone in lung cancer patients will be needed to determine its effectiveness as a treatment. He and his team have created a simple blood test that would screen patients for the NQO1 enzyme.

Along with Dr. Jinming Gao’s laboratory in the Simmons Cancer Center and a joint collaboration with the bioengineering program at UT Dallas, researchers in the new "Cell Stress and Cancer Nanomedicine” initiative within the Simmons Cancer Center have developed novel nanoparticle drug delivery methods for the tumor-targeted delivery of this compound. These delivery methods have the promise of further improving this drug for non-small cell lung cancer.

Other Simmons Cancer Center researchers involved in the study were Dr. Ying Dong, postdoctoral researcher; Dr. Chin-Rang Yang, assistant professor; and Dr. Gao, associate professor. UT Southwestern’s Dr. John Minna, director of the Nancy B. and Jake L. Hamon Center for Therapeutic Oncology Research and the W.A. "Tex” and Deborah Moncrief Jr. Center for Cancer Genetics, and Dr. Luc Girard, assistant professor of pharmacology, also participated along with researchers from Case Western Reserve University and UT M.D. Anderson Cancer Center. The research was supported by the National Institutes of Health.

Penn researchers report that gene therapy awakens the brain despite blindness from birth

Philadelphia –- Researchers at the University of Pennsylvania have demonstrated that gene therapy used to restore retinal activity to the blind also restores function to the brain’s visual center, a critical component of seeing. The multi-institutional study led by Geoffrey K. Aguirre, assistant professor of neurology in Penn's School of Medicine, shows that gene therapy can improve retinal, visual-pathway and visual-cortex responses in animals born blind and has the potential to do the same in humans.

"The retina of the eye captures light, but the brain is where vision is experienced,” Aguirre said. "The traditional view is that blindness in infancy permanently alters the structure and function of the brain, leaving it unable to process visual information if sight is restored. We’ve now challenged that view.”

The results support the potential for human benefit from retinal therapies aimed at restoring vision to those with genetic retinal disease. Researchers used functional MRI to measure brain activity in blind dogs born with a mutation in the gene RPE65, whose protein product is essential to the retinoid visual cycle. The same mutation causes a form of blindness in humans called Leber congenital amaurosis, or LCA. It is the first human retinal disorder slated for gene therapy.

Gene therapy, performed by introducing a working copy of RPE65 into the retina, restored eye function in canines. Yet, it was previously unclear if the brain could "receive” the restored sight.

The team found that gene therapy to the eye dramatically increased responses to light within the visual cortex of the canine brain. The recovery of visual brain function occurred in a canine that had been blind for the first four years of its life, and recovery was found to persist in another dog for at least two-and-a-half years after therapy, suggesting a level of permanence to the treatment.

Penn scientists then studied the structure and function of the visual brain of human patients with the same form of blindness. Young adults with blindness from RPE65 mutation had intact visual brain pathways with nearly normal structure. The Penn team also found that, while the visual cortex of these patients with LCA did not respond to dim lights, the brain’s reaction to brighter lights was comparable to that of individuals with normal sight.

"It seems these patients have the necessary brain pathways ready to go if their eyes start working again,” Aguirre said.

The results of the current study are critical to these human clinical trials, led at Penn’s Scheie Eye Institute by Samuel G. Jacobson, professor of ophthalmology, and Artur V. Cideciyan, research associate professor of ophthalmology.

"Existence of functional potential both in the eye and brain are prerequisites for successful gene therapy in all forms of LCA,” Cideciyan said. "In the RPE65 form of the disease, we now have evidence for both, and treatment at the retinal level has the hope of recovery of useful vision in patients.”

Echinacea may halve the risk of catching cold

* 11:58 25 June 2007

* Roxanne Khamsi

A wide-ranging survey of studies involving echinacea suggests that the herbal supplement can reduce the risk of catching a cold by about 60%.

Experts say that echinacea might offer a particularly beneficial boost to people with weak immune systems. However, they also caution that the long-term effects of taking echinacea remain unknown.

Echinacea supplements are prepared from a plant commonly known as the purple coneflower, which is native to North America.

Many people have argued that the herbal supplement can protect against the common cold, but others have said that more scientific evidence is needed to back up this claim.

To address this debate, Craig Coleman at the University of Connecticut School of Pharmacy in Hartford, Connecticut, US, and colleagues reviewed 14 studies of echinacea involving more than 1600 subjects.

Immune boost

On average, the participants in these studies took 300 milligrams of the supplement three times a day. Based on how often these people became sick with the common cold, Coleman's team calculated that echinacea reduces the odds of catching a cold by 60%.
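For readers who want to see what a 60% reduction in odds means in everyday terms, here is a small back-of-the-envelope calculation; the baseline risk is an assumed illustrative figure, not a number from the Lancet Infectious Diseases analysis.

```python
# Back-of-the-envelope only: what a 60% reduction in the odds of catching
# a cold means for absolute risk, given an assumed baseline risk.
baseline_risk = 0.50        # assumption: half of untreated people catch a cold
odds_ratio = 0.40           # a 60% reduction in odds

baseline_odds = baseline_risk / (1 - baseline_risk)   # 1.0
treated_odds = baseline_odds * odds_ratio              # 0.4
treated_risk = treated_odds / (1 + treated_odds)       # about 0.29

print(f"Assumed risk without echinacea: {baseline_risk:.0%}")
print(f"Implied risk with echinacea:    {treated_risk:.0%}")
```

Under that assumed baseline, a 60% drop in odds corresponds to roughly a 29 percent chance of catching a cold instead of 50 percent; the exact figure depends on the baseline risk.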

Moreover, when people did come down with a cold, those taking echinacea were sick for a shorter period of time. The herbal supplement seemed to reduce the duration of the cold by 1.4 days. The average cold lasts from three to five days, Coleman says.

Ronald Eccles, director of the Common Cold Centre at the University of Cardiff, UK, comments that the new analysis helps to resolve some of the controversy surrounding echinacea: "Harnessing the power of our own immune system to fight common infections with herbal medicines such as echinacea is now given more validity with this interesting scientific evaluation of past clinical trials.”

Researchers speculate that active compounds in echinacea called phenols work to protect against cold viruses by revving up the immune system.

According to Coleman, it is thought that certain phenols in the plant stimulate the production of an immune signalling chemical, or "cytokine", known as tumor necrosis factor-alpha.

Not for everyone

For this reason, some scientists say that only certain people should use echinacea: "People with impaired immune function may benefit from taking echinacea during the winter months to prevent colds and flu, but healthy people do not require long-term preventative use," says Ron Cutler at the University of East London, UK.

Coleman stresses that people with immune system disorders, such as multiple sclerosis, HIV and rheumatoid arthritis, should not take echinacea supplements. "Their immune systems are already too revved up. It's probably not a good idea that they go anywhere near echinacea," he says.

And Coleman himself says he is not about to start taking echinacea supplements. "Honestly, at this point I wouldn't," he says. "I really don't get a lot of colds."

He notes that people in the US who want to buy echinacea should look for the "USP Dietary Supplement Verified" seal to make sure the supplement has met manufacturing standards. Coleman adds that children should receive only about a third of the dose recommended for adults.

Journal reference: Lancet Infectious Diseases (vol 7, p 473)

Weather observed on a star for the first time

* 22:38 25 June 2007

* Jeff Hecht

Weather – caused by the same forces as the weather on Earth – has been seen on a star for the first time, reveal observations of mercury clouds on a star called Alpha Andromedae.

Previously, astronomers had thought that any structures on stars were caused by magnetic fields. Sunspots, for example, are relatively cool regions on the Sun where strong magnetic fields prevent energy from flowing outwards.

But now, seven years of painstaking observations of Alpha Andromedae show that stars do not need magnetic fields to form clouds after all.

Lying about 100 light years away, it is one of a class of stars unusually rich in mercury and manganese. Earlier observations of similar stars had revealed uneven distributions of mercury, but all of them had strong magnetic fields.

Unlike less massive stars such as the Sun, these relatively massive stars do not mix the gases in their atmospheres. So the balance between the pull of gravity and the push of radiation pressure concentrates some heavy elements at certain atmospheric levels. Magnetic fields were then thought to continue the separation process, sequestering some chemicals in particular regions.

But researchers led by Oleg Kochukhov of Uppsala University in Sweden have found that this last step is not necessary to create chemical clouds on a star.

Darker regions show heavier concentrations of mercury on the surface of the star Alpha Andromedae. The mercury clouds are most common along the star's equator, probably due to the star's rotation (Illustration: Kochukhov et al./Nature)

Driven by the tides

They observed the mercury concentration in Alpha Andromedae – which does not have a detectable magnetic field – for seven years with 1.2- and 6-metre telescopes, detecting the mercury by its signature absorption line in the violet end of the spectrum.

They resolved details on the spinning star's surface by looking at how rapidly the clouds were turning towards or away from Earth. That revealed that the mercury concentration varies by as much as a factor of 10,000 across its surface, and that the pattern of concentration changes over time as well.
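To make the Doppler-mapping idea concrete, here is a minimal sketch (not the team's actual analysis) of how far an absorption line shifts for a cloud carried toward or away from Earth by the star's rotation; the rotation speed and the 398.4-nanometre mercury line wavelength are assumed, illustrative values.

```python
# Illustration of the Doppler idea behind mapping the star's surface.
# The rotation speed and rest wavelength are assumed, illustrative values.
C_KM_S = 299_792.458            # speed of light in km/s
v_rot_km_s = 50.0               # assumed projected rotational velocity
rest_wavelength_nm = 398.4      # a violet mercury line, for illustration

# Fraction -1 is the limb rotating toward Earth, +1 the limb rotating away.
for fraction in (-1.0, -0.5, 0.0, 0.5, 1.0):
    v_los = fraction * v_rot_km_s                     # line-of-sight velocity
    shift_nm = rest_wavelength_nm * v_los / C_KM_S    # non-relativistic Doppler
    print(f"v = {v_los:+6.1f} km/s  ->  line shifted by {shift_nm:+.4f} nm")
```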

The evidence for changes in the mercury distribution over time "look very convincing", comments Gregg Wade of the Royal Military College of Canada in Kingston, who discovered in 2006 that the star lacked a magnetic field.

But exactly what causes the clouds to change over time is unclear. Kochukhov and colleagues say the changes "may have the same underlying physics as the weather patterns on terrestrial and giant planets".

The mercury clouds are on the brighter and larger member of a close pair of stars that orbit each other every 97 days. "The second star may create tides on the surface of the main star, much like the Moon creates tides on Earth, which drives evolution of the mercury cloud cover," Kochukhov told New Scientist.

But he adds that other explanations are possible. So for now, the weather on stars, as on Earth, remains hard to fathom.

Journal reference: Nature Physics (doi: 10.1038/nphys648)

Egyptologists Think They Have Hatshepsut's Mummy

By Jonathan Wright

Egyptologists think they have identified with certainty the mummy of Hatshepsut, the most famous queen to rule ancient Egypt, found in a humble tomb in the Valley of the Kings, an archaeologist said on Monday.

Egypt's chief archaeologist, Zahi Hawass, will hold a news conference in Cairo on Wednesday. The Discovery Channel said he would announce what it called the most important find in the Valley of the Kings since the discovery of King Tutankhamun.

The archaeologist, who asked not to be named, said the candidate for identification as the mummy of Hatshepsut was one of two females found in 1903 in a small tomb believed to be that of Hatshepsut's wet-nurse, Sitre In.

Several Egyptologists have speculated over the years that one of the mummies was that of the queen, who ruled between 1503 and 1482 BC -- at the height of ancient Egypt's power.

A sculpted head showing an Egyptian headdress, photographed at the Metropolitan Museum of Art. (Walter Daran/Time Life Pictures/Getty Images)

The archaeologist said Hawass would present new evidence for an identification but that not all Egyptologists are convinced he will be able to prove his case.

"It's based on teeth and body parts ... It's an interesting piece of scientific deduction which might point to the truth," the archaeologist said.

Egyptologist Elizabeth Thomas speculated many years ago that one of the mummies was Hatshepsut's because the positioning of the right arm over the woman's chest suggested royalty.

Her mummy may have been hidden in the tomb for safekeeping after her death because her stepson and successor, Tuthmosis III, tried to obliterate her memory.

Donald Ryan, an Egyptologist who rediscovered the tomb in 1989, said on an Internet discussion board this month that there were many possibilities for the identities of the two female mummies found in the tomb, known as KV 60.

"Zahi Hawass recently has taken some major steps to address these questions. Both of the KV 60 mummies are in Cairo now and are being examined in various clever ways that very well might shed light on these questions," he added.

In an undated article on his Web site, Hawass cast doubt on the theory that the KV-60 mummy with the folded right arm was that of Hatshepsut.

"I do not believe this mummy is Hatshepsut. She has a very large, fat body with huge pendulous breasts, and the position of her arm is not convincing evidence of royalty," he wrote.

He was more optimistic about the mummy found in the wet-nurse's coffin and traditionally identified as the nurse's. That mummy is stored away in the Egyptian Museum in Cairo.

"The body of the mummy now in KV 60 with its huge breasts may be the wetnurse, the original occupant of the coffin ... The mummy on the third floor at the Egyptian Museum in Cairo could be the mummy of Hatshepsut," Hawass wrote.

The Human Family Tree Has Become a Bush With Many Branches

By JOHN NOBLE WILFORD

Published: June 26, 2007

Fossils found in Ethiopia in the last 10 years include Ardipithecus kadabba bones from over five million years ago, left; teeth of Australopithecus anamensis, which lived four million years ago; and the skull and lower limbs of a 3-year-old Australopithecus afarensis, thought to be 3.3 million years old. At right, teeth of afarensis, anamensis and the modern chimpanzee. Photographs by Tim D. White/Brill Atlanta; Authority for Research and Conservation of Cultural Heritages (A. afarensis skull); National Museum of Ethiopia/European Pressphoto Agency (afarensis bones)

Time was, fossils and a few stone artifacts were about the only means scientists had of tracing the lines of early human evolution. And gaps in such material evidence were frustratingly wide.

When molecular biologists joined the investigation some 30 years ago, their techniques of genetic analysis yielded striking insights. DNA studies pointed to a common maternal ancestor of all anatomically modern humans in Africa at least 130,000 years ago. She inevitably became known as the African Eve.

Other genetic research plotted ancestral migration patterns and the extremely close DNA relationship between humans and chimpanzees, our nearest living relatives. Genetic clues also set the approximate time of the divergence of the human lineage from a common ancestor with apes: between six million and eight million years ago.
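The molecular-clock arithmetic behind such divergence dates is simple enough to sketch; the divergence fraction and substitution rate below are placeholder values chosen only to show the calculation, not the figures from the studies themselves.

```python
# Toy molecular-clock estimate. Both input numbers are placeholders chosen
# to illustrate the arithmetic, not values from the studies described above.
sequence_divergence = 0.012              # assumed fraction of differing sites
substitutions_per_site_per_year = 1e-9   # assumed neutral substitution rate

# Differences accumulate along both lineages after the split, hence the 2.
years_since_split = sequence_divergence / (2 * substitutions_per_site_per_year)
print(f"Estimated time since the common ancestor: "
      f"{years_since_split / 1e6:.1f} million years")
```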

Fossil researchers were skeptical at first, a reaction colored perhaps by their dismay at finding scientific poachers on their turf. These paleoanthropologists contended that the biologists’ "molecular clocks” were unreliable, and in some cases they were, though apparently not to a significant degree.

Now paleoanthropologists say they accept the biologists as allies triangulating the search for human origins from different angles. As much as anything, a rapid succession of fossil discoveries since the early 1990s has restored the confidence of paleoanthropologists in the relevance of their approach to the study of early hominids, those fossil ancestors and related species in human evolution.

The new finds have filled in some of the yawning gaps in the fossil record. They have doubled the record’s time span, from 3.5 million years ago back to almost 7 million years ago, and more than doubled the number of earliest known hominid species. The teeth and bone fragments suggest the form — the morphology — of these ancestors that lived presumably just this side of the human-ape split.

"The amount of discord between morphology and molecules is actually not that great anymore,” said Frederick E. Grine, a paleoanthropologist at the State University of New York at Stony Brook.

With more abundant data, Dr. Grine said, scientists are, in a sense, fleshing out the genetic insights with increasingly earlier fossils. It takes the right bones to establish that a species walked upright, which is thought to be a defining trait of hominids after the split with the ape lineage.

"All biology can tell you is that my nearest relative is a chimpanzee and about when we had a common ancestor,” he said. "But biology can’t tell us what the common ancestor looked like, what shaped that evolutionary change or at what rate that change took place.”

Although hominid species were much more apelike in their earliest forms, Tim D. White of the University of California, Berkeley, said: "We’ve come to appreciate that you cannot simply extrapolate from the modern chimp to get a picture of the last common ancestor. Humans and chimps have been changing down through time.”

But Dr. White, one of the most experienced hominid hunters, credits the genetic data with giving paleoanthropologists a temporal framework for their research. Their eyes are always fixed on a time horizon for hominid origins, which now appears to be at least seven million years ago.

Ever since its discovery in 1973, the species Australopithecus afarensis, personified by the famous Lucy skeleton, has been the continental divide in the exploration of hominid evolution. Donald Johanson, the Lucy discoverer, and Dr. White determined that the apelike individual lived 3.2 million years ago, walked upright and was probably a direct human ancestor. Other afarensis specimens and some evocative footprints showed the species existed for almost a million years, down to three million years ago.

In the 1990s, scientists finally crossed the Lucy divide. In Kenya, Meave G. Leakey of the celebrated fossil-hunting family came up with Australopithecus anamensis, which lived about four million years ago and appeared to be an afarensis precursor. Another discovery by Dr. Leakey challenged the prevailing view that the family tree had a more or less single trunk rising from ape roots to a pinnacle occupied by Homo sapiens. Yet here was evidence that the new species Kenyanthropus platyops co-existed with Lucy’s afarensis kin.

The family tree now looks more like a bush with many branches. "Just because there’s only one human species around now doesn’t mean it was always that way,” Dr. Grine said.

Few hominid fossils have turned up from the three-million- to two-million-year period, during which hominids began making stone tools. The first Homo species enter the fossil record sometime before two million years ago, and the transition to much larger brains began with Homo erectus, about 1.7 million years ago.

Other recent discoveries have pushed deeper in time, closer to the hominid origins predicted by molecular biologists.

Dr. White was involved in excavations in Ethiopia of many specimens that lived 4.4 million years ago and were more primitive and apelike than Lucy. The species was named Ardipithecus ramidus. Later, a related species from 5.2 million to 5.8 million years ago was classified Ardipithecus kadabba.

At that time, six years ago, C. Owen Lovejoy of Kent State University said, "We are indeed coming very close to that point in the fossil record where we simply will not be able to distinguish ancestral hominid from ancestral” chimpanzees, because, he said, "They were so anatomically similar.”

Two even earlier specimens are even harder to interpret. One found in Kenya by a French team has been dated to six million years and named Orrorin tugenensis. The teeth and bone pieces are few, though the discoverers think a thigh fragment suggests that the individual was a biped — a walker on two legs.

Another French group then uncovered 6.7-million-year-old fossils in Chad. Named Sahelanthropus tchadensis, the sole specimen includes only a few teeth, a jawbone and a crushed cranium. Scientists said the head appeared to have perched atop a biped.

"These are clearly the earliest hominids we have,” said Eric Delson, a human-origins scientist at the American Museum of Natural History. "But we still know rather little about any of these specimens. The farther back we go toward the divergence point, the more similar specimens will look on both sides of the split.”

Other challenges arise from human evolution in more recent epochs. Just who were the "little people” found a few years ago in a cave on the island of Flores in Indonesia? The Australian and Indonesian discoverers concluded that one partial skeleton and other bones belonged to a now-extinct separate human species, Homo floresiensis, which lived as recently as 18,000 years ago.

The apparent diminutive stature and braincase of the species prompted howls of dispute. Critics contended that this was not a distinct species, but just another dwarf-size Homo sapiens, possibly with a brain disorder. Several prominent scientists, however, support the new-species designation.

The tempest over the Indonesian find is nothing new in a field known for controversy. Some scholars counsel patience, recalling that it was years after the discovery of the first Neanderthal skull, in 1856, before it was accepted as an ancient branch of the human family. Critics had at first dismissed the find as only the skull of a degenerate modern human or a Cossack who died in the Napoleonic wars.

Perhaps the analogy is not as encouraging as intended. Scientists to this day are arguing about Neanderthals, their exact relationship to us and the cause of their extinction 30,000 years ago, not long after the arrival in Europe of the sole surviving hominid that is so curious about its origins.

Fast-Reproducing Microbes Provide a Window on Natural Selection

By CARL ZIMMER

In the corner of a laboratory at Michigan State University, one of the longest-running experiments in evolution is quietly unfolding. A dozen flasks of sugary broth swirl on a gently rocking table. Each is home to hundreds of millions of Escherichia coli, the common gut microbe. These 12 lines of bacteria have been reproducing since 1989, when the biologist Richard E. Lenski bred them from a single E. coli. "I originally thought it might go a couple thousand generations, but it’s kept going and stayed interesting,” Dr. Lenski said. He is up to 40,000 generations now, and counting.

In that time, the bacteria have changed significantly. For one thing, they are bigger — twice as big on average as their common ancestor. They are also far better at reproducing in these flasks, dividing 70 percent faster than their ancestor. These changes have emerged through spontaneous mutations and natural selection, and Dr. Lenski and his colleagues have been able to watch them unfold.
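A quick calculation using only the numbers in this article (roughly 40,000 generations accumulated between 1989 and 2007) shows how fast the experiment ticks along:

```python
# Arithmetic using only figures from the article: about 40,000 generations
# accumulated between 1989 and 2007.
generations = 40_000
years = 2007 - 1989
generations_per_day = generations / (years * 365)
print(f"Roughly {generations_per_day:.1f} E. coli generations per day")  # ~6.1
```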

When Dr. Lenski began his experiment 18 years ago, only a few scientists believed they could observe evolution so closely. Today evolutionary experiments on microbes are under way in many laboratories. And thanks to the falling price of genome-sequencing technology, scientists can now zero in on the precise genetic changes that unfold during evolution, a power previous generations of researchers only dreamed of.

One such study is of Myxococcus xanthus bacteria, top, which lash their tails together and hunt in a pack. If they starve, they form a ball, above. Indiana University

"It’s fun for us, because we can watch the game of life at the molecular level,” said Bernhard Palsson of the University of California, San Diego. "Many features of evolutionary theory are showing up in these experiments, and that’s why people are so excited by them.”

In the past century scientists have gathered a wealth of evidence about the power of natural selection. But much of that evidence has been indirect. Natural selection is a process that takes place over many generations, that may affect thousands or millions of individuals, and that may be shaped by many different conditions. To document it scientists have searched for historical fingerprints. They study fossils, for example, or compare the DNA of related species.

In the late 1980s a few scientists began experimenting with microbes, hoping to observe natural selection in something closer to real time. Microbes can reproduce several times a day, and a billion of them can fit comfortably in a flask. Scientists can carefully control the conditions in which the microbes live, setting up different kinds of evolutionary pressures.

While working at the University of California, Irvine, Dr. Lenski decided to set up a straightforward experiment: he made life miserable for some bacteria. He created 12 identical lines of E. coli and then fed them a meager diet of glucose. The bacteria would run out of sugar by the afternoon, and the following morning Dr. Lenski would transfer a few of the survivors to a freshly supplied flask.

From time to time Dr. Lenski also froze some of the bacteria from each of the 12 lines. It became what he likes to call a "frozen fossil record.” By thawing them out later, Dr. Lenski could directly compare them with younger bacteria.

Within a few hundred generations, Dr. Lenski was seeing changes, and the bacteria have been changing ever since. The microbes have adapted to their environment, reproducing faster and faster over the years. One striking lesson of the experiment is that evolution often follows the same path. "We’ve found a lot of parallel changes,” Dr. Lenski said.

In all 12 lines the speed of adaptation was greatest in the first few months of the experiment and has since been tapering off. The bacteria have all become larger as well, although Dr. Lenski is not sure what kind of adaptation this represents. When other scientists saw these sorts of results begin to emerge, they set up their own experiments with microbes. Today they are observing bacteria, viruses and even yeast as they adapt to challenges as diverse as infections, antibiotics and cold and heat.

Albert F. Bennett, a physiologist at the University of California, Irvine, is an expert on temperature adaptation. He started out studying animals like reptiles and fish, but he seized on bacteria after hearing about Dr. Lenski’s experiments. "It was one of those ‘Star Trek’ moments,” he said. "I was looking out the window, and for about 10 minutes my mind was going into hyperdrive.”

Dr. Bennett was particularly curious about how organisms adapt to different temperatures. He wondered if adapting to low temperatures meant organisms would fare worse at higher ones, a long-standing question. Working with Dr. Lenski, Dr. Bennett allowed 24 lines of E. coli to adapt to a relatively chilly 68 degrees for 2,000 generations. They then measured how quickly these cold-adapted microbes reproduced at a simmering 104 degrees.

Two-thirds of the lines did worse at high temperatures than their ancestors, experiencing the expected trade-off. "If you’re a betting person, that’s the way you’d better bet,” Dr. Bennett said. But the pattern was not universal. The bacteria that reproduced fastest in the cold did not do the worst job of breeding in the heat. A third of the cold-adapted lines did as well or better in the heat than the ancestor. Dr. Bennett and Dr. Lenski published their latest findings last month in The Proceedings of the National Academy of Sciences.

Other scientists are watching individual microbes evolve into entire ecosystems. Paul Rainey, a biologist at the New Zealand Institute for Advanced Study at Massey University, has observed this evolution in bacteria, called Pseudomonas fluorescens, that live on plants. When he put a single Pseudomonas in a flask, it produced descendants that floated in the broth, feeding on nutrients. But within a few hundred generations, some of its descendants mutated and took up new ways of life. One strain began to form fuzzy carpets on the bottom of the flask. Another formed a mat of cellulose, where it could take in oxygen from above and food from below.

But Dr. Rainey is only beginning to decipher the complexity that evolves in his flasks. The different types of Pseudomonas interact with one another in intricate ways. The bottom-growers somehow kill off most of the ancestral free-floating microbes. But they in turn are wiped out by the mat-builders, which cut off oxygen to the rest of the flask. In time, however, cheaters appear in the mat. They do not produce their own cellulose, instead depending on other bacteria to hold them up. Eventually the mat collapses. The other types of Pseudomonas recover, and the cycle begins again, with hundreds of other forms appearing over time. "The interactions are everything you’d expect in a rain forest,” Dr. Rainey said.

Scientists have long known that underlying these visible changes were genetic ones. But only now are they documenting the mutations that allow this evolution to happen in the first place.

Dr. Palsson has been running experiments in which E. coli must adapt to a diet of glycerol, an ingredient in soap. He found that within a few hundred generations, the bacteria could grow two to three times as fast as their ancestor. He then selected some of the evolved microbes and sequenced their genome. He compared their DNA with that of their common ancestor and pinpointed a few mutations that each line had acquired.

Dr. Palsson then inserted copies of these mutated genes into the ancestor and found that it now could thrive on glycerol as well. But the order in which he inserted the genes made a big difference to the bacteria.

Some mutations were beneficial only if the bacteria already carried other mutations. On their own, the mutations could even be harmful. Dr. Palsson’s results offer a detailed picture of what biologists call epistasis — the intimate ways in which mutations can influence the effects of other mutations during evolution.
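A toy fitness table makes the idea concrete; the mutation labels and fitness values below are invented for illustration and are not Dr. Palsson's measurements.

```python
# Toy example of epistasis: mutation B is harmful on its own but beneficial
# once mutation A is already present. All fitness values are invented.
fitness = {
    frozenset():           1.00,   # ancestor
    frozenset({"A"}):      1.10,   # A alone helps a little
    frozenset({"B"}):      0.90,   # B alone is harmful
    frozenset({"A", "B"}): 1.50,   # together, B helps far more than A alone
}

effect_of_B_alone   = fitness[frozenset({"B"})] - fitness[frozenset()]
effect_of_B_given_A = fitness[frozenset({"A", "B"})] - fitness[frozenset({"A"})]
print(f"Effect of B on the ancestral background: {effect_of_B_alone:+.2f}")
print(f"Effect of B when A is already present:   {effect_of_B_given_A:+.2f}")
```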

As Dr. Palsson and other scientists have pinpointed mutations in microbes, they have been surprised by how mysterious the mutations are. They are struggling to find out how the mutations benefit the organisms. And in some cases, they do not even know what the mutated genes did before they mutated.

"It just makes you ask, ‘What on earth is that doing?’ ” said Gregory J. Velicer, a former student of Dr. Lenski’s who is now an associate professor at Indiana University. Dr. Velicer experienced this bafflement firsthand while watching the evolution of a predatory microbe called Myxococcus xanthus. Myxococcus swarms lash their tails together and hunt in a pack, releasing enzymes to kill their prey and feasting on the remains. If the bacteria starve, they come together to form a mound of spores. It is a cooperative effort. Only a few percent of the bacteria end up forming spores, while the rest face almost certain death.

This social behavior costs Myxococcus energy that it could otherwise use to grow, Dr. Velicer discovered. He and his colleagues allowed the bacteria to evolve for 1,000 generations in a rich broth. Most of the lines of bacteria lost the ability to swarm or form spores, or both.

Dr. Velicer discovered that some of the newly evolved bacteria were not just asocial — they were positively antisocial. These mutant cheaters could no longer make mounds of spores on their own. But if they were mixed with ordinary Myxococcus, they could make spores. In fact, they were 10 times as likely to form a spore as normal microbes.

Dr. Velicer set up a new experiment in which the bacteria alternated between a rich broth and a dish with no food. Over the generations, the cheaters became more common because of their advantage at making spores. But if the cheaters became too common, the entire population died out, because there were not enough ordinary Myxococcus left to make the spore mounds in the times of famine.

During this experiment, one of Dr. Velicer’s colleagues, Francesca Fiegna of the Max Planck Institute for Developmental Biology, discovered something strange. She had just transferred a population of cheaters to a dish, expecting them to die out. But the cheaters were making seven times as many spores as their normal ancestors. "It just made no sense,” Dr. Velicer said. "I asked her I don’t know how many times, ‘Are you sure you marked the plates correctly?’ ”

She had. It turned out that a single Myxococcus cheater had mutated into a cooperator. In fact, it had evolved into a cooperator far superior to its cooperative ancestors. Dr. Velicer and his colleagues sequenced the genome of the new cooperator and discovered a single mutation. The new mutation did not simply reverse the mutation that had originally turned the microbe’s ancestors into cheaters. Instead, it struck a new part of the genome.

But Dr. Velicer has no idea at the moment how the mutation brought about the remarkable transformation in behavior. The mutated segment of DNA actually lies near, but not inside, a gene. It is possible that proteins latch on to this region and switch the nearby gene on or off. But no one actually knows what the gene normally does.

Mutations like this one, Dr. Velicer said, "make for a much more complicated story.” It is a story he and other scientists are looking forward to revealing.

Humans Have Spread Globally, and Evolved Locally

By NICHOLAS WADE

Historians often assume that they need pay no attention to human evolution because the process ground to a halt in the distant past. That assumption is looking less and less secure in light of new findings based on decoding human DNA.

People have continued to evolve since leaving the ancestral homeland in northeastern Africa some 50,000 years ago, both through the random process known as genetic drift and through natural selection. The genome bears many fingerprints in places where natural selection has recently remolded the human clay, researchers have found, as people in the various continents adapted to new diseases, climates, diets and, perhaps, behavioral demands.

A striking feature of many of these changes is that they are local. The genes under selective pressure found in one continent-based population or race are mostly different from those that occur in the others. These genes so far make up a small fraction of all human genes.

A notable instance of recent natural selection is the emergence of lactose tolerance — the ability to digest lactose in adulthood — among the cattle-herding people of northern Europe some 5,000 years ago. Lactase, the enzyme that digests the principal sugar of milk, is usually switched off after weaning. But because of the great nutritional benefit for cattle herders of being able to digest lactose in adulthood, a genetic change that keeps the lactase gene switched on spread through the population.

Lactose tolerance is not confined to Europeans. Last year, Sarah Tishkoff of the University of Maryland and colleagues tested 43 ethnic groups in East Africa and found three separate mutations, all different from the European one, that keep the lactase gene switched on in adulthood. One of the mutations, found in peoples of Kenya and Tanzania, may have arisen as recently as 3,000 years ago.

That lactose tolerance has evolved independently four times is an instance of convergent evolution. Natural selection has used the different mutations available in European and East African populations to make each develop lactose tolerance. In Africa, those who carried the mutation were able to leave 10 times more progeny, creating a strong selective advantage.
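How quickly so strong an advantage can spread is a matter of simple population-genetics arithmetic. The sketch below is a generic one-locus selection model, not a reconstruction of the African or European data; the starting frequency is assumed, and the quoted tenfold reproductive advantage is applied directly as a relative fitness.

```python
# Minimal haploid selection sketch: carriers of the lactase-persistence
# variant leave w = 10 times as many progeny as non-carriers (the figure
# quoted for the African case). The starting frequency is an assumption.

def next_frequency(p: float, w: float) -> float:
    """One generation of selection: carriers have relative fitness w."""
    return p * w / (p * w + (1.0 - p))

p = 0.001   # assumed initial frequency of the variant
w = 10.0    # relative number of progeny for carriers
generation = 0
while p < 0.99:
    p = next_frequency(p, w)
    generation += 1

# With so strong an advantage the variant approaches fixation within a
# handful of generations; real histories take longer because the true
# per-generation advantage is far smaller than tenfold.
print(f"~99% frequency reached after {generation} generations")
```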

Researchers studying other single genes have found evidence for recent evolutionary change in the genes that mediate conditions like skin color, resistance to malaria and salt retention.

The most striking instances of recent human evolution have emerged from a new kind of study, one in which the genome is scanned for evidence of selective pressures by looking at a few hundred thousand specific sites where variation is common.

Last year Benjamin Voight, Jonathan Pritchard and colleagues at the University of Chicago searched for genes under natural selection in Africans, Europeans and East Asians. In each race, some 200 genes showed signals of selection, but without much overlap, suggesting that the populations on each continent were adapting to local challenges.

Another study, by Scott Williamson of Cornell University and colleagues, published in PLoS Genetics this month, found 100 genes under selection in Chinese, African-Americans and European-Americans.

In most cases, the source of selective pressure is unknown. But many genes associated with resistance to disease emerge from the scans, confirming that disease is a powerful selective force. Another category of genes under selective pressure covers those involved in metabolism, suggesting that people were responding to changes in diet, perhaps associated with the switch from hunting and gathering to agriculture.

Several genes involved in determining skin color have been under selective pressure in Europeans and East Asians. But Dr. Pritchard’s study detected skin color genes only in Europeans, and Dr. Williamson found mostly genes selected in Chinese.

The reason for the difference is that Dr. Pritchard’s statistical screen detects genetic variants that have become very common in a population but are not yet universal. Dr. Williamson’s picks up variants that have already swept through a population and are possessed by almost everyone.
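One rough way to picture the difference between the two screens is by the frequency a selected variant has reached in a population. The sketch below is only a caricature of that idea, with invented frequencies and arbitrary cut-offs; the actual statistical tests used in these studies are considerably more sophisticated.

```python
# Caricature of the two kinds of selection scan described above:
# one flags variants that are common but not yet universal (a partial sweep),
# the other flags variants that almost everyone carries (a near-complete sweep).
# Frequencies and thresholds are invented for illustration.

variant_frequencies = {
    "variant_1": 0.45,   # still rising through the population
    "variant_2": 0.72,
    "variant_3": 0.98,   # essentially swept to fixation
    "variant_4": 0.12,
}

def classify(freq: float) -> str:
    if freq >= 0.95:
        return "near-complete sweep (the second screen's signal)"
    if 0.3 <= freq < 0.95:
        return "partial sweep (the first screen's signal)"
    return "no strong signal"

for name, freq in variant_frequencies.items():
    print(f"{name}: {freq:.2f} -> {classify(freq)}")
```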

The findings suggest that Europeans and East Asians acquired their pale skin through different genetic routes and, in the case of Europeans, perhaps as recently as around 7,000 years ago.

Another puzzle is presented by selected genes involved in brain function, which occur in different populations and could presumably be responses to behavioral challenges encountered since people left the ancestral homeland in Africa.

But some genes have more than one role, and some of these brain-related genes could have been selected for other properties.

Two years ago, Bruce Lahn, a geneticist at the University of Chicago, reported finding signatures of selection in two brain-related genes of a type known as microcephalins, because when they are mutated, people are born with very small brains. One microcephalin had come under selection in Europeans and the other in Chinese, Dr. Lahn reported.

He suggested that the selected forms of the genes had helped improve cognitive capacity and that many other genes, yet to be identified, would turn out to have done the same in these and other populations.

Neither microcephalin gene turned up in Dr. Pritchard’s or Dr. Williamson’s list of selected genes, and other researchers have disputed Dr. Lahn’s claims. Dr. Pritchard found that two other microcephalin genes were under selection, one in Africans and the other in Europeans and East Asians.

Even more strikingly, Dr. Williamson’s group reported that a version of a gene called DAB1 had become universal in Chinese but not in other populations. DAB1 is involved in organizing the layers of cells in the cerebral cortex, the site of higher cognitive functions.

Variants of two genes involved in hearing have become universal, one in Chinese, the other in Europeans.

The emerging lists of selected human genes may open new insights into the interactions between history and genetics. "If we ask what are the most important evolutionary events of the last 5,000 years, they are cultural, like the spread of agriculture, or extinctions of populations through war or disease,” said Marcus Feldman, a population geneticist at Stanford. These cultural events are likely to have left deep marks in the human genome.

A genomic survey of world populations by Dr. Feldman, Noah Rosenberg and colleagues in 2002 showed that people clustered genetically on the basis of small differences in DNA into five groups that correspond to the five continent-based populations: Africans, Australian aborigines, East Asians, American Indians and Caucasians, a group that includes Europeans, Middle Easterners and people of the Indian subcontinent. The clusterings reflect "serial founder effects,” Dr. Feldman said, meaning that as people migrated around the world, each new population carried away just part of the genetic variation in the one it was derived from.

The new scans for selection show so far that the populations on each continent have evolved independently in some ways as they responded to local climates, diseases and, perhaps, behavioral situations.

The concept of race as having a biological basis is controversial, and most geneticists are reluctant to describe it that way. But some say the genetic clustering into continent-based groups does correspond roughly to the popular conception of racial groups.

"There are difficulties in where you put boundaries on the globe, but we know now there are enough genetic differences between people from different parts of the world that you can classify people in groups that correspond to popular notions of race,” Dr. Pritchard said.

David Reich, a population geneticist at the Harvard Medical School, said that the term "race” was scientifically inexact and that he preferred "ancestry.” Genetic tests of ancestry are now so precise, he said, that they can identify not just Europeans but can distinguish between northern and southern Europeans. Ancestry tests are used in trying to identify genes for disease risk by comparing patients with healthy people. People of different races are excluded in such studies. Their genetic differences would obscure the genetic difference between patients and unaffected people.

No one yet knows to what extent natural selection for local conditions may have forced the populations on each continent down different evolutionary tracks. But those tracks could turn out to be somewhat parallel. At least some of the evolutionary changes now emerging have clearly been convergent, meaning that natural selection has made use of the different mutations available in each population to accomplish the same adaptation.

This is the case with lactose tolerance in European and African peoples and with pale skin in East Asians and Europeans.

Nepalese researchers identify cost-effective treatment for drug-resistant typhoid

New research carried out by researchers in Nepal has shown that a new and affordable drug, Gatifloxacin, may be more effective at treating typhoid fever than the drug currently recommended by the World Health Organisation. The study, funded by the Wellcome Trust, has implications for the treatment of typhoid particularly in areas where drug resistance is a major problem. The results are published today in the open access journal PLoS ONE.

Enteric fever, of which typhoid fever is the most common form, is a major disease affecting the developing world, where sanitary conditions remain poor. The best global estimates are of at least 22 million cases of typhoid fever each year with 200,000 deaths. Drug resistance is becoming a major problem and treatment is becoming increasingly difficult, leading to patients taking longer to recover, suffering more complications and continuing to spread the disease to their family and to their community.

Clinical investigators based at Patan Hospital Lalitpur in Kathmandu, Nepal, and the Oxford University Clinical Research Unit in Vietnam have completed a study to see if they can improve the treatment for patients with typhoid fever. Kathmandu has been termed the typhoid fever capital of the world as a result of this disease remaining so common.

"Typhoid fever is a major problem in Nepal and in the developing world and drug-resistant strains are making it even more difficult to tackle," says Dr Buddha Basnyat, senior investigator on the study. "The currently recommended treatment, Cefixime, is relatively expensive and must be administered for a longer duration than is ideal. Clearly there is an urgent need for a treatment that is cost-effective and easy to administer."

The results of the study show that a cost-effective new fluoroquinolone drug, Gatifloxacin, may be a better treatment for enteric fever than Cefixime, which is currently recommended by the World Health Organisation. In addition, Salmonella enterica serovar Typhi and Salmonella enterica serovar Paratyphi A, the two most common bacteria to cause enteric fever, do not show resistance to Gatifloxacin, unlike some other fluoroquinolones.

"We have shown that Gatifloxacin may be better than an established drug used by many doctors around the world," says Dr Basnyat. "There is currently no resistance to the drug, and at just over US$1 dollar for a seven day treatment course is relatively inexpensive."

"This is an important study with major implications for treating disease widespread in the developing world," says Professor Jeremy Farrar from the Oxford University Clinical Research Unit in Vietnam. "It also shows the major contribution that clinical investigators in Nepal, with the experience and knowledge gained from access to thousands of patients, can help make to improving treatment for our patients and to global health.”

Ancient 'Ondol' Heating Systems Discovered in Alaska

What are believed to be the world's oldest underfloor stone-lined-channel heating systems have been discovered in Alaska's Aleutian Islands in the U.S. The heating systems are remarkably similar to ondol, the traditional Korean indoor heating system. The word ondol, along with the word kimchi, is listed in the Oxford English Dictionary. The ondol heating system is widely recognized as Korean cultural property.

According to "Archaeology", a bi-monthly magazine from the American Archaeological Society, the remains of houses equipped with ondol-like heating systems were found at the Amaknak Bridge excavation site in Unalaska, Alaska.

The leader of the excavation, archaeologist Richard Knecht from the University of Alaska, Fairbanks, said in an interview with the Chosun Ilbo on Monday that the team began the dig in 2003. Radiocarbon dating shows the remains are about 3,000 years old.

Until now the oldest known ondol heating systems were built 2,500 years ago by the Korean people of North Okjeo in what is now Russia's Maritime Province. The Alaskan ondol are about 500 years older, and are the first ondol discovered outside the Eurasian continent.

Professor Knecht said four ondol structures were discovered at the site. Other ondol structures were found in the area in 1997 but it was not known what they were at the time.

According to Knecht's data, the Amaknak ondol were built by digging a two- to four-meter-long ditch in the floor of the house. Flat rocks were placed in a "V" shape along the walls of the ditch, which was then covered with more flat rocks. There was also a chimney to let the smoke out.

Professor Song Ki-ho of the department of Korean history at Seoul National University looked over the Amaknak excavation report. "All ancient ondol are one-sided, meaning the underfloor heating system was placed on just one side of the room. The ondol in Amaknak also seem to be one-sided," he said.

As the ondol of North Okjeo and Amaknak are more than 5,000 kilometers apart, Knecht and Song agree that the two systems seem to have been developed independently.

This theory is backed up by the fact that ondol have not been found in areas between the two locations, such as Ostrov, Sakhalin or the Kamchatka Peninsula, and by the fact that the Amaknak ondol are significantly older than those of the Russian Maritime Province.

Scientists expect to reproduce Neanderthal DNA

Technical study yields methods to sequence genome despite genetic decay

By Randolph E. Schmid

WASHINGTON -- Researchers studying Neanderthal DNA say it should be possible to construct a complete genome of the ancient hominid despite the degradation of the DNA over time.

There is also hope for reconstructing the genome of the mammoth and cave bear, according to a research team led by Svante Paabo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.

Their findings are published in this week's online edition of Proceedings of the National Academy of Sciences.

Debate has raged for years about whether there is any relationship between Neanderthals and modern humans. Some researchers believe that Neanderthals were simply replaced by early modern humans, while others argue the two groups may have interbred.

Sequencing the genome of Neanderthals, who lived in Europe until about 30,000 years ago, could shed some light on that question.

In studies of Neanderthals, cave bear and mammoth, a majority of the DNA recovered was that of microorganisms that colonized the tissues after death, the researchers said.

But they were able to identify some DNA from the original animal, and Paabo and his colleagues were able to determine how it broke down over time. They also developed procedures to prevent contamination by the DNA of humans working with the material.

"We are confident that it will be technically feasible to achieve a reliable Neanderthal genome sequence," Paabo and his researchers reported.

They said the problem of damaged areas in some DNA could be overcome by using a sufficient amount of Neanderthal DNA from different individuals, so that the whole genome can be determined.

"The contamination and degradation of DNA has been a serious issue for the last 10 years," observed Erik Trinkaus, a professor at Washington University in St. Louis. "This is a serious attempt to deal with that issue and that's welcome."

"I'm not sure they have completely solved the problem, but they've made a big step in that direction," said Trinkaus, who was not involved in the research.

Anthropologist Richard Potts of the Smithsonian's National Museum of Natural History called the work "a very significant technical study of DNA decay."

The researchers "have tried to answer important questions about the potential to sequence ancient DNA," said Potts, who was not part of the research.

Milford Wolpoff, a University of Michigan anthropologist, said creating a complete Neanderthal genome is a great goal.

But it is "sample intensive," he said, and he isn't sure enough DNA is available to complete the work. Curators don't like to see their specimens ground up, he said.

The research was funded by the Max Planck Society and the National Institutes of Health.

From a Few Genes, Life’s Myriad Shapes

By CAROL KAESUK YOON

Since its humble beginnings as a single cell, life has evolved into a spectacular array of shapes and sizes, from tiny fleas to towering Tyrannosaurus rex, from slow-soaring vultures to fast-swimming swordfish, and from modest ferns to alluring orchids. But just how such diversity of form could arise out of evolution’s mess of random genetic mutations — how a functional wing could sprout where none had grown before, or how flowers could blossom in what had been a flowerless world — has remained one of the most fascinating and intractable questions in evolutionary biology.

Now finally, after more than a century of puzzling, scientists are finding answers coming fast and furious and from a surprising quarter, the field known as evo-devo. Just coming into its own as a science, evo-devo is the combined study of evolution and development, the process by which a nubbin of a fertilized egg transforms into a full-fledged adult. And what these scientists are finding is that development, a process that has for more than half a century been largely ignored in the study of evolution, appears to have been one of the major forces shaping the history of life on earth.

For starters, evo-devo researchers are finding that the evolution of complex new forms, rather than requiring many new mutations or many new genes as had long been thought, can instead be accomplished by a much simpler process requiring no more than tweaks to already existing genes and developmental plans. Stranger still, researchers are finding that the genes that can be tweaked to create new shapes and body parts are surprisingly few. The same DNA sequences are turning out to be the spark inciting one evolutionary flowering after another. "Do these discoveries blow people’s minds? Yes,” said Dr. Sean B. Carroll, biologist at the Howard Hughes Medical Institute at the University of Wisconsin, Madison. "The first response is ‘Huh?’ and the second response is ‘Far out.’ ”

"This is the illumination of the utterly dark,” Dr. Carroll added.

The development of an organism — how one end gets designated as the head or the tail, how feet are enticed to grow at the end of a leg rather than at the wrist — is controlled by a hierarchy of genes, with master genes at the top controlling a next tier of genes, controlling a next and so on. But the real interest for evolutionary biologists is that these hierarchies not only favor the evolution of certain forms but also disallow the growth of others, determining what can and cannot arise not only in the course of the growth of an embryo, but also over the history of life itself.

"It’s been said that classical evolutionary theory looks at survival of the fittest,” said Dr. Scott F. Gilbert, a developmental biologist at Swarthmore College. By looking at what sorts of organisms are most likely or impossible to develop, he explained, "evo-devo looks at the arrival of the fittest.”

Charles Darwin saw it first. He pointed out well over a century ago that developing forms of life would be central to the study of evolution. Little came of it initially, for a variety of reasons. Not least of these was the discovery that perturbing the process of development often resulted in a freak show starring horrors like bipedal goats and insects with legs growing out of their mouths, monstrosities that seemed to shed little light on the wonders of evolution.

But the advent of molecular biology reinvigorated the study of development in the 1980s, and evo-devo quickly got scientists’ attention when early breakthroughs revealed that the same master genes were laying out fundamental body plans and parts across the animal kingdom. For example, researchers discovered that genes in the Pax6 family could switch on the development of eyes in animals as different as flies and people. More recent work has begun looking beyond the body’s basic building blocks to reveal how changes in development have resulted in some of the world’s most celebrated of evolutionary events.

In one of the most exciting of the new studies, a team of scientists led by Dr. Cliff Tabin, a developmental biologist at Harvard Medical School, investigated a classic example of evolution by natural selection, the evolution of Darwin’s finches on the Galápagos Islands.

Like the other organisms that made it to the remote archipelago off the coast of Ecuador, Darwin’s finches have flourished in their isolation, evolving into many and varied species. But, while the finches bear his name and while Darwin was indeed inspired to thoughts of evolution by animals on these islands, the finches left him flummoxed. Darwin did not realize for quite some time that these birds were all finches or even that they were related to one another.

He should be forgiven, however. For while the species are descendants of an original pioneering finch, they no longer bear its characteristic short, slender beak, which is excellent for hulling tiny seeds. In fact, the finches no longer look very finchlike at all. Adapting to the strange new foods of the islands, some have evolved taller, broader, more powerful nut-cracking beaks; the most impressive of the big-beaked finches is Geospiza magnirostris. Other finches have evolved longer bills that are ideal for drilling holes into cactus fruits to get at the seeds; Geospiza conirostris is one species with a particularly elongated beak.

But how could such bills evolve from a simple finch beak? Scientists had assumed that the dramatic alterations in beak shape, height, width and strength would require the accumulation of many chance mutations in many different genes. But evo-devo has revealed that getting a fancy new beak can be simpler than anyone had imagined.

Genes are stretches of DNA that can be switched on so that they will produce molecules known as proteins. Proteins can then do a number of jobs in the cell or outside it, working to make parts of organisms, switching other genes on and so on. When genes are switched on to produce proteins, they can do so at a low level in a limited area or they can crank out lots of protein in many cells.

What Dr. Tabin and colleagues found, when looking at the range of beak shapes and sizes across different finch species, was that the thicker and taller and more robust a beak, the more strongly it expressed a gene known as BMP4 early in development. The BMP4 gene (its abbreviation stands for bone morphogenetic protein, No. 4) produces the BMP4 protein, which can signal cells to begin producing bone. But BMP4 is multitalented and can also act to direct early development, laying out a variety of architectural plans including signaling which part of the embryo is to be the backside and which the belly side. To verify that the BMP4 gene itself could indeed trigger the growth of grander, bigger, nut-crushing beaks, researchers artificially cranked up the production of BMP4 in the developing beaks of chicken embryos. The chicks began growing wider, taller, more robust beaks similar to those of a nut-cracking finch.

In the finches with long, probing beaks, researchers found at work a different gene, known as calmodulin. As with BMP4, the more that calmodulin was expressed, the longer the beak became. When scientists artificially increased calmodulin in chicken embryos, the chicks began growing extended beaks, just like a cactus driller.

So, with just these two genes, not tens or hundreds, the scientists found the potential to recreate beaks, massive or stubby or elongated.

"So now one wants to go in a number of directions,” Dr. Tabin said. "What happens in a stork? What happens in a hummingbird? A parrot?” For the evolution of beaks, the main tool with which a bird handles its food and makes its living, is central not only to Darwin’s finches, but to birds as a whole.

BMP4’s reach does not stop at the birds, however.

In lakes in Africa, the fish known as cichlids have evolved so rapidly into such a huge diversity of species that they have become one of the best known evolutionary radiations. The cichlids have evolved in different shapes and sizes, and with a variety of jaw types specialized for eating certain kinds of food. Robust, thick jaws are excellent at crushing snails, while longer jaws work well for sucking up algae. As with the beaks of finches, a range of styles developed.

Now in a new study, Dr. R. Craig Albertson, an evolutionary biologist at Syracuse University, and Dr. Thomas D. Kocher, a geneticist at the University of New Hampshire, have shown that more robust-jawed cichlids express more BMP4 during development than those with more delicate jaws. To test whether BMP4 was indeed responsible for the difference, these scientists artificially increased the expression of BMP4 in the zebrafish, the lab rat of the fish world. And, reprising the beak experiments, researchers found that increased production of BMP4 in the jaws of embryonic zebrafish led to the development of more robust chewing and chomping parts.

And if being a major player in the evolution of African cichlids and Darwin’s finches — two of the most famous evolutionary radiations of species — were not enough for BMP4, Dr. Peter R. Grant, an evolutionary biologist at Princeton University, predicted that the gene would probably be found to play an important role in the evolution of still other animals. He noted that jaw changes were a crucial element in the evolution of lizards, rabbits and mice, among others, making them prime candidates for evolution via BMP4.

"This is just the beginning,” Dr. Grant said. "These are exciting times for us all.”

Used to lay out body plans, build beaks and alter fish jaws, BMP4 illustrates perfectly one of the major recurring themes of evo-devo. New forms can arise via new uses of existing genes, in particular the control genes or what are sometimes called toolkit genes that oversee development. It is a discovery that can explain much that has previously been mysterious, like the observation that without much obvious change to the genome over all, one can get fairly radical changes in form.

"There aren’t new genes arising every time a new species arises,” said Dr. Brian K. Hall, a developmental biologist at Dalhousie University in Nova Scotia. "Basically you take existing genes and processes and modify them, and that’s why humans and chimps can be 99 percent similar at the genome level.”

Evo-devo has also begun to shine a light on a phenomenon with which evolutionary biologists have long been familiar, the way in which different species will come up with sometimes jaw-droppingly similar solutions when confronted with the same challenges.

The placental mammals of the Americas and the marsupials of Australia, for example, have independently evolved the same sorts of animals: beasts that burrowed, loping critters that grazed, creatures with long snouts for eating ants, and versions of the wolf.

In the same way, the cichlids have evolved pairs of matching species, arising independently in separate lakes in Africa. In Lake Malawi, for example, there is a long and flat-headed species with a deep underbite that looks remarkably like an unrelated species that lives a similar lifestyle in Lake Tanganyika. There is another cichlid with a bulging brow and frowning lips in Lake Malawi with, again, an unrelated but otherwise extremely similar-looking cichlid in Lake Tanganyika. The same jaws, heads, and ways of living can be seen to evolve again and again.

The findings of evo-devo suggest that such parallels might in fact be expected. For cichlids are hardly coming up with new genetic solutions to eating tough snails as they each crank up the BMP4 or tinker with other toolkit genes. Instead, whether in Lake Malawi or Lake Tanganyika, they may be using the same genes to develop the same forms that provide the same solutions to the same ecological challenges. Why not, when even the beaked birds flying overhead are using the very same genes?

Evo-devo has even begun to give biologists new insight into one of the most beautiful examples of recurring forms: the evolution of mimicry.

It has long been a source of amazement how some species seem so able to evolve near-perfect mimicry of another. Poisonous species often evolve bright warning colors, which have been reproduced by nonpoisonous species or by other, similarly poisonous species, hoping to fend off curious predators.

Now in a new study of Heliconius butterflies, Dr. Mathieu Joron, an evolutionary biologist at the University of Edinburgh, and colleagues, found evidence that the mimics may be using some of the same genes to produce their copycat warning colors and patterns.

The researchers studied several species of tropical Heliconius butterflies, all of which are nasty-tasting to birds and which mimic one another’s color patterns. Dr. Joron and colleagues found that some of the main elements of the patterns — a yellow band in Heliconius melpomene and Heliconius erato and a complex tiger-stripe pattern in Heliconius numata — are controlled by a single region of DNA, a tightly linked set of genes known as a supergene.

Dr. Joron said he and colleagues were still mapping the details of color pattern control within the supergene. But if this turned out to function, as researchers suspected, like a toolkit gene turning the patterns on and off, it could explain both the prevalence of mimicry in Heliconius and the apparent ease with which these species have been shown to repeatedly evolve such superbly matching patterns.

One of evo-devo’s greatest strengths is its cross-disciplinary nature, bridging not only evolutionary and developmental studies but gaps as broad as those between fossil-hunting paleontologists and molecular biologists. One researcher whose approach epitomizes the power of such synthesis is Dr. Neil Shubin, an evolutionary biologist at the University of Chicago and the Field Museum.

Last year, Dr. Shubin and colleagues reported the discovery of a fossil fish on Ellesmere Island in northern Canada. They had found Tiktaalik, as they named the fish, after searching for six years. They persisted for so long because they were certain that they had found the right age and kind of rock where a fossil of a fish trying to make the transition to life on land was likely to be found. And Tiktaalik appeared to be just such a fish, but it also had a few surprises for the researchers.

"Tiktaalik is special,” Dr. Shubin said. "It has a flat head with eyes on top. It has gills and lungs. It’s an animal that’s exploring the interface between water and land.”

But Tiktaalik was a truly stunning discovery because this water-loving fish bore wrists, an attribute thought to have been an innovation confined strictly to animals that had already made the transition to land.

"This was telling us that a piece of the toolkit, to make arms, legs, hand and feet, could very well be present in fish limbs,” Dr. Shubin said. In other words, the genetic tools or toolkit genes for making limbs to walk on land might well have been present long before fish made that critical leap. But as fascinating as Tiktaalik was, it was also rock hard and provided no DNA that might shed light on the presence or absence of any particular gene.

So Dr. Shubin did what more and more evo-devo researchers are learning to do: take off one hat (paleontologist) and don another (molecular biologist). Dr. Shubin oversees one of what he says is a small but growing number of laboratories where old-fashioned rock-pounding takes place alongside high-tech molecular DNA studies.

He and colleagues began a study of the living but ancient fish known as the paddlefish. What they found, reported last month in the journal Nature, was that these thoroughly fishy fish were turning on control genes known as Hox genes, in a manner characteristic of the four-limbed, land-loving beasts known as tetrapods.

Tetrapods include cows, people, birds, rodents and so on. In other words, the potential for making fingers, hands and feet, crucial innovations used in emerging from the water to a life of walking and crawling on land, appears to have been present in fish, long before they began flip-flopping their way out of the muck. "The genetic tools to build fingers and toes were in place for a long time,” Dr. Shubin wrote in an e-mail message. "Lacking were the environmental conditions where these structures would be useful.” He added, "Fingers arose when the right environments arose.”

And here is another of the main themes to emerge from evo-devo. Major events in evolution like the transition from life in the water to life on land are not necessarily set off by the arising of the genetic mutations that will build the required body parts, or even the appearance of the body parts themselves, as had long been assumed. Instead, it is theorized that the right ecological situation, the right habitat in which such bold, new forms will prove to be particularly advantageous, may be what is required to set these major transitions in motion.

So far, most of the evo-devo work has been on animals, but researchers have begun to ask whether the same themes are being played out in plants.

Of particular interest to botanists is what Darwin described as an "abominable mystery”: the origin of flowering plants. A critical event in the evolution of plants, it happened, by paleontological standards, rather suddenly.

So what genes were involved in the origin of flowers? Botanists know that during development, the genes known as MADS box genes lay out the architecture of the blossom. They do so by turning on other genes, thereby determining what will develop where — petals here, reproductive parts there and so on, in much the same manner that Hox genes determine the general layout of parts in animals. Hox genes have had an important role in the evolution of animal form. But have MADS box genes had as central a role in the evolution of plants?

So far, said Dr. Vivian F. Irish, a developmental biologist at Yale University, the answer appears to be yes. There is a variety of circumstantial evidence, the most interesting of which is the fact that the MADS box genes exploded in number right around the time that flowering plants first appeared.

"It’s really analogous to what’s going on in Hox genes,” said Dr. Irish, though she noted that details of the role of the MADS box genes remained to be worked out. "It’s very cool that evolution has used a similar strategy in two very different kingdoms.”

Amid the enthusiastic hubbub, cautionary notes have been sounded. Dr. Jerry Coyne, an evolutionary biologist at the University of Chicago, said that as dramatic as the changes in form caused by mutations in toolkit genes can be, it was premature to credit these genes with being the primary drivers of the evolution of novel forms and diversity. He said that too few studies had been done so far to support such broad claims, and that it could turn out that other, more mundane workaday genes, of the sort that were being studied long before evo-devo appeared on the scene, would play equally or even more important roles.

"I urge caution,” Dr. Coyne said. "We just don’t know.”

All of which goes to show that like all emerging fields, evo-devo’s significance and the uniqueness of its contributions will continue to be reassessed. It will remain to be seen just how separate or incorporated into the rest of evolutionary thinking its findings will end up being. Paradoxically, it was during just such a flurry of intellectual synthesis and research activity, the watershed known as the New or Modern Synthesis in which modern evolutionary biology was born in the last century, that developmental thinking was almost entirely ejected from the science of evolution.

But perhaps today synthesizers can do better, broadening their focus without constricting their view of evolution as they try to take in all of the great pageant that is the history of life.

"We’re still a very young field,” Dr. Gilbert said. "But I think this is a new evolutionary synthesis, an emerging evolutionary synthesis. I think we’re seeing it.”

How Fish Punish ‘Queue Jumpers’

Fish use the threat of punishment to keep would-be jumpers in the mating queue firmly in line and the social order stable, a new study led by Australian marine scientists has found.

Their discovery, which has implications for the whole animal kingdom including humans, has been hailed by some of the world’s leading biologists as a "must read” scientific paper and published in the Proceedings of the Royal Society of London Series B.

Studying small goby fish at Lizard Island on Australia’s Great Barrier Reef, Dr Marian Wong and colleagues from the ARC Centre of Excellence for Coral Reef Studies at James Cook University and the Biological Station of Doñana, Spain, have shown the threat of expulsion from the group acts as a powerful deterrent to keep subordinate fish from challenging those more dominant than themselves.

In fact the subordinate fish deliberately diet - or starve themselves - in order to remain smaller than their superiors and so present no threat that might lead to their being cast out, and perishing as a result.

"Many animals have social queues in which the smaller members wait their turn before they can mate. We wanted to find out how they maintain stability in a situation where you’d expect there would be a lot of competition,” says Dr Wong.

In the case of the gobies, only the top male and top female mate, and all the other females have to wait their turn in a queue based on their size – the fishy equivalent of the barnyard pecking order.

Dr Wong found that each fish has a size difference of about 5 per cent from the one above and the one below it in the queue. If the difference in size decreases below this threshold, a challenge is on as the junior fish tries to jump the mating queue – and the superior one responds by trying to drive it out of the group.
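The 5 per cent rule lends itself to a simple check. The sketch below is a hypothetical illustration of that threshold logic, not the authors' analysis: it walks down a queue of body sizes and flags any pair whose size gap has closed below the threshold, the situation in which a challenge would be expected.

```python
# Hypothetical illustration of the ~5% size-ratio rule in goby queues:
# each fish is expected to stay roughly 5% smaller than the fish above it.
# The sizes (in mm) and the exact threshold are invented for the example.

THRESHOLD = 0.05  # minimum proportional size gap tolerated by the dominant fish

def find_challenges(sizes_mm):
    """Return (upper, lower, gap) for queue neighbours whose gap is below threshold."""
    risky = []
    for upper, lower in zip(sizes_mm, sizes_mm[1:]):
        gap = (upper - lower) / upper
        if gap < THRESHOLD:
            risky.append((upper, lower, gap))
    return risky

queue = [40.0, 37.8, 36.0, 35.5, 33.0]  # largest (breeding) fish first
for upper, lower, gap in find_challenges(queue):
    print(f"{lower} mm is only {gap:.1%} smaller than {upper} mm -> likely conflict")
```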

Her fascinating discovery is that, in order to avoid constant fights and keep the social order stable, the fish seem to accept the threat of punishment – and adjust their own size in order to avoid presenting a challenge to the one above them, she says.

"Social hierarchies are very stable in these fish and in practice challenges and expulsions are extremely rare – probably because expulsion from the group and the coral reef it occupies means almost certain death to the loser.

"It is clear the fish accept the threat of punishment and co-operate as a way of maintaining their social order – and that’s not so very different to how humans and other animals behave.”

Dr Wong said that experimentally it has always proved extremely difficult to demonstrate how higher animals, such as apes, use punishment to control subordinates and discourage anti-social activity because of the difficulty in observing and interpreting their behaviour.

In the case of the gobies the effect is much more apparent because they seek to maintain a particular size ratio relative to the fish above them in the queue, in order not to provoke a conflict.

"The gobies have shed new light on our understanding of how social stability is maintained in animals,” she says.

"While it not be accurate to draw a direct link between fish behaviour and specific human behaviour, it is clear there are general patterns of behaviour which apply to many higher life forms, ourselves included. These help us to understand why we do the things we do.”

The paper entitled "The threat of punishment enforces peaceful cooperation and stabilizes queues in a coral-reef fish” was co-authored with Dr Philip Munday and Professor Geoff Jones of CoECRS and Dr Peter Buston of the Biological Station of Doñana, Spain. It appears in Proceedings of the Royal Society B, 274.

It has received a high "must read” ranking from the Faculty of 1000 Biology, which consists of leading international biological scientists.

Dr Wong has recently been appointed to a postdoctoral research position at Canada’s McMaster University.

Frog molecule could provide drug treatment for brain tumours

A synthetic version of a molecule found in the egg cells of the Northern Leopard frog (Rana pipiens) could provide the world with the first drug treatment for brain tumours.

Known as Amphinase, the molecule recognises the sugary coating found on a tumour cell and binds to its surface before invading the cell and inactivating the RNA it contains, causing the tumour to die.

In new research published in the Journal of Molecular Biology, scientists from the University of Bath (UK) and Alfacell Corporation (USA) describe the first complete analysis of the structural and chemical properties of the molecule.

Although it could potentially be used as a treatment for many forms of cancer, Amphinase offers greatest hope in the treatment of brain tumours, for which complex surgery and chemotherapy are the only current treatments.

"This is a very exciting molecule,” said Professor Ravi Acharya, from the Department of Biology & Biochemistry at the University of Bath.

"It is rather like Mother Nature’s very own magic bullet for recognising and destroying cancer cells.

"It is highly specific at hunting and destroying tumour cells, is easily synthesised in the laboratory and offers great hope as a therapeutic treatment of the future.”

Amphinase is a version of a ribonuclease enzyme that has been isolated from the oocytes (egg cells) of the Northern Leopard frog.

Ribonucleases are a common type of enzyme found in all organisms. They are responsible for tidying up free-floating strands of RNA in cells by latching on to the molecule and cutting it into smaller sections.

In areas of the cell where the RNA is needed for essential functions, ribonucleases are prevented from working by inhibitor molecules. But because Amphinase is an amphibian ribonuclease, it can evade the mammalian inhibitor molecules to attack the cancer cells.

As a treatment, it is most likely to be injected into the area where it is needed. It will have no effect on other cells because it is only capable of recognising and binding to the sugar coating of tumour cells.

"Amphinase is in the very early stages of development, so it is likely to be several years and many trials before it could be developed into a treatment for patients,” said Professor Acharya and his colleagues Drs Umesh Singh and Daniel Holloway.

"Having said that, the early data is promising and through this study we have provided the kind of information needed if approval for use is requested in the future.”

Amphinase is the second anti-tumour ribonuclease to be isolated by Alfacell Corporation from Rana pipiens oocytes.

The other, ONCONASE(R) (ranpirnase), is currently in late-stage clinical trials as a treatment for unresectable malignant mesothelioma, a rare and fatal form of lung cancer, and in Phase I/II clinical trials in non-small cell lung cancer and other solid tumours.

"We are pleased with the superb work performed by Professor Acharya and his talented team at the University of Bath,” commented Kuslima Shogen, Alfacell’s chairman and chief executive officer.

"Their work is critical to the continued development and understanding of our family of novel ribonuclease based therapeutics with the potential to help patients suffering from cancer and other dismal diseases.”

The company is now working on pre-clinical trials of Amphinase with a view to beginning clinical trials in the future.

Why do power couples migrate to metropolitan areas? Actually, they don't

More than half of all "power couples” – couples in which both spouses are college graduates – live in large metropolitan areas (MSAs) with more than two million residents. What causes the concentration of well-educated couples in big cities? A new study from the Journal of Labor Economics disputes prior research suggesting power couples migrate to large MSAs. Instead, the researchers argue that college-educated singles are more likely to move to big cities where they meet, date, marry, and divorce other college-educated people. In other words, power couples don’t move to big cities intact – they’re formed there. This finding has important implications for city planners hoping to attract a well-educated workforce.

In 1970, 39 percent of power couples lived in a metropolitan area of at least two million residents. By 1990 this number had grown substantially: Fifty percent of all power couples lived in a big city. In contrast, couples in which neither spouse has a college degree have the lowest probability of living in a large city and the lowest rate of increase, growing from 30 percent to 34 percent in the same twenty year period.

Using data from a large-scale statistical study of 4,800 families (the Panel Study of Income Dynamics), Janice Compton (University of Manitoba) and Robert A. Pollak (Washington University and National Bureau of Economic Research) argue that couple migration patterns to large metropolitan areas are influenced by gendered determinants – couples in which the man has a college degree are far more likely to move to a metropolitan area than couples in which only the woman has a college degree.

The researchers analyzed data from men aged 25-39 and women aged 23-37, including all married couples who live together and all unmarried heterosexual couples who have lived together for at least one year. They found that migration patterns for "part-power couples” in which the woman is a college graduate are statistically similar to couples in which neither partner is college educated.

"Part-power couples” with a better educated wife are also less likely to migrate from one large metropolitan area to another large metropolitan area, and are more likely to migrate from a large metropolitan area to a mid-size metropolitan area, the researchers found.

"We find that power couples are not more likely to migrate to the largest metropolitan areas and are no less likely than other couples to migrate from such areas once they are there,” write the researchers. "The observed trends in location patterns are primarily due to differences in the rates at which power couples form and dissolve in cities of various sizes rather than to the migration of power couples to the largest metropolitan areas.”

Indeed, even during the 1990s when the proportion of power couples living in metropolitan areas dropped, the percentage of college educated single men and college educated single women living in big cities increased modestly.

Needle-stick injuries are common but unreported by surgeons in training

Residents risk serious infection, but claim they are 'too busy' or that their careers might suffer

A survey of nearly 700 surgical residents in 17 U.S. medical centers finds that more than half failed to report needle-stick injuries involving patients whose blood could be a source of HIV, hepatitis and other infections.

Authors of the report — appearing in the June 28 issue of The New England Journal of Medicine — say most residents in the survey falsely believe that reporting and getting timely medical attention won’t prevent infection. Residents also say reporting takes "too much time” and interrupts their work.

"The fact that we have so many residents who fail to understand the importance of timely reporting of needle-stick exposures in order to protect themselves from serious medical consequences clearly illustrates the breadth of this problem and the need for hospitals to develop systems to address it,” says contributing author Mark S. Sulkowski, M.D., of the Division of Infectious Diseases at Johns Hopkins.

Lead author Martin Makary, M.D., M.P.H., a surgeon at The Johns Hopkins Hospital, says that while residents must take more responsibility, it’s also up to hospitals to take "immediate steps to improve safety and care for health care workers to reduce the spread of HIV and hepatitis infection.”

Makary says injuries could be greatly reduced by hospitals’ increasing the use of nurse practitioners and physicians assistants to reduce surgical workloads and adopting sharpless surgical techniques such as electric scalpels, clips and glues.

"Twenty percent of all general surgery operations could be done without using any sharp instruments,” he says. Furthermore, Makary says, residents would more likely report exposures if hospitals used timely reporting mechanisms (e.g., internal hotlines and response teams), routine prompts (e.g., postoperative checklists that monitor exposures), and peer-to-peer education to create a local culture that encourages speaking up.

"We know also that many residents resist reporting because the training culture suggests that needle sticks ‘go with the territory’ and reporting them may lower peer esteem,” Makary notes.

The survey, which took place in 2003, revealed that 99 percent of surgeons-in-training suffered an average of eight needle-stick injuries in their first five years. Of these surgeons, only 49 percent reported injuries to an employee health service. Of those who reported, 53 percent had experienced an injury involving a patient with a history of intravenous drug use and/or infected with HIV, hepatitis B (HBV) or hepatitis C (HCV).

"We did not realize the extent to which health care workers are at risk — a risk that is preventable,” says Makary, a surgeon who studies medical errors and health care quality. Makary says improved techniques that reduce the number of needle sticks and timely treatment for those who are injured could all but eliminate the risk of getting infected with disease.

Makary says 57 percent of surgical residents reported a feeling of being "rushed” as the primary cause of the injury. He adds that 42 percent said they did not report the injury because it took "too much time” and 28 percent said there was "no utility in reporting.”

In fact, says Sulkowski, early reporting and treatment with antivirals can prevent the establishment of infection in people exposed to HIV and HBV and can eradicate evidence of virus in more than 90 percent of people with acute HCV infection.

Previous studies suggest that an estimated 600,000 to 800,000 needle-stick injuries are reported each year by U.S. health care workers. Furthermore, a recent study of a general surgical service in an urban academic hospital revealed that 20 percent to 38 percent of all procedures involved patients with bloodborne pathogens.

A study confirms the importance of sexual fantasies in the experience of sexual desire

- According to figures from the Spanish Association for Sexual Health (Asociación Española para la Salud Sexual), a loss of sexual drive is one of the main factors causing sexual dysfunction in the Spanish female population.

- Researchers at the UGR have found that 32% of inhibited sexual desire in men is associated with negative sexual attitudes (low erotophilia) and the presence or absence of certain types of sexual fantasies, while in women only 18% of inhibited sexual desire can be explained. This 18% is related to anxiety, negative sexual attitudes (erotophobia) and the absence of sexual fantasies.

Scientists from the Department of Personality, Evaluation and Psychological Treatment of the University of Granada (Universidad de Granada) have studied how psychological variables such as erotophilia (a positive attitude towards sexuality), sexual fantasies and anxiety are related to sexual desire in human beings.

The researcher Juan Carlos Sierra Freire states that there are very few reliable and valid instruments in Spain for evaluating sexual desire. To fill this gap, the researchers adapted the Sexual Desire Inventory by Spector, Carey and Steinberg. This inventory measures, on the one hand, solitary sexual motivation and, on the other, interest in having sexual intercourse with another person (dyadic sexual desire). This is of great importance because "it gives relevant information about possible disagreements in sexual desire that may appear in a couple”. According to figures from the Spanish Association for Sexual Health, a loss of sexual desire is one of the main factors causing sexual dysfunction in the Spanish female population.

The power of imagination

The results of this research, recently published in the journals Análisis y Modificación de Conducta (Analysis and Modification of Behaviour) and Psychological Reports, reveal an important relationship between sexual desire and erotophilia in men. Men respond more positively to sexual stimuli and thoughts and accept them more easily, an attitude that, together with sexual fantasies, heightens sexual drive. Nevertheless, the research stresses that some types of fantasies can have the opposite effect: in male subjects, sexually sadistic fantasies were found to inhibit sexual desire.

Imagination is at play in women as well: the more sexual fantasies they have, the more sexual desire they experience. However, "women normally present more anxiety disorders than men”, and transitory emotional states such as anxiety strongly affect women’s sexual function.

On the basis of the sample studied, which consists of 608 subjects aged 13 to 43, the researchers found that 32% of inhibited sexual desire in men is associated with low erotophilia and with certain sexual fantasies, while 18% of inhibited sexual desire in women is explained by increased anxiety and a decrease in sexual fantasies. According to Juan Carlos Sierra, these figures show that the psychological factors involved in sexual response depend on gender.
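The 32% and 18% figures read like the share of variance in desire scores accounted for by the psychological measures, the kind of statistic a regression analysis produces. The sketch below shows, with entirely fabricated data, how such an explained-variance figure is computed; it is an illustration of the calculation, not a re-analysis of the Granada sample.

```python
# Illustration of an explained-variance (R^2) calculation of the kind the
# 32%/18% figures appear to describe. The data are fabricated; only the
# computation of R^2 is the point.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Fabricated predictor scores: erotophilia, anxiety, sexual-fantasy frequency.
erotophilia = rng.normal(size=n)
anxiety = rng.normal(size=n)
fantasies = rng.normal(size=n)

# Fabricated outcome: inhibited-desire score loosely driven by the predictors.
desire_inhibition = (-0.4 * erotophilia + 0.3 * anxiety - 0.3 * fantasies
                     + rng.normal(scale=1.0, size=n))

# Ordinary least squares via numpy; R^2 = 1 - residual SS / total SS.
X = np.column_stack([np.ones(n), erotophilia, anxiety, fantasies])
coef, *_ = np.linalg.lstsq(X, desire_inhibition, rcond=None)
predicted = X @ coef
ss_res = np.sum((desire_inhibition - predicted) ** 2)
ss_tot = np.sum((desire_inhibition - desire_inhibition.mean()) ** 2)
print(f"share of variance explained (R^2): {1 - ss_res / ss_tot:.2f}")
```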

Sexual education

Sexual desire leads to the other stages of sexual response: excitation and orgasm. Having intercourse without desire may therefore negatively affect those stages. "This first stage is the most complex because it is influenced by many factors”, the researcher declared. Sexual desire is explained by a three-dimensional model that includes social, psychological and neurophysiological aspects; appropriate neurohormonal activity and the right sexual stimulation are both necessary in order to experience desire. "Besides this complexity, there is no comparison model, as there is in the men’s excitation stage, where it is possible to determine the degree of excitation from the erection”.

Juan Carlos Sierra points out that education about sexual stimulation and response, as well as healthy attitudes towards sexuality, is extremely important: with it, sexual intercourse becomes more pleasurable and sexual dysfunction less likely. The study also highlights the importance of sexual fantasies in sexuality. In fact, sexual fantasies are used in sex therapy to reduce performance anxiety and anxiety about sexual activity, provided there are no organic anomalies (lack of hormones, endocrine disorders, etc.). The Granada researchers are currently working in this field.

Loss of cell's 'antenna' linked to cancer's development

Fox Chase Cancer Center researchers describe the cilia-dismantling proteins in the journal Cell

Submarines have periscopes. Insects have antennae. And increasingly, biologists are finding that most normal vertebrate cells have cilia, small hair-like structures that protrude like antennae into the surrounding environment to detect signals that control cell growth. In a new study published in the June 29 issue of Cell, Fox Chase Cancer Center researchers describe the strong link between ciliary signaling and cancer and identify the rogue engineers responsible for dismantling the cell’s antenna.

Cilia-based sensing has important roles in sight, smell and motion detection and in helping an embryo develop into a normal baby. Defects in cilia can produce a range of disorders, including kidney cysts, infertility, respiratory problems, reversal of organs (for example, heart on the right) and a predisposition to obesity, diabetes and high blood pressure. In each case, cells fail to appropriately detect growth-controlling signals and develop abnormally. Now, researchers are adding cancer to this list.

"Many cancers arise from defects in cellular signaling systems, and we think we have just identified a really exciting signaling connection,” Fox Chase Cancer Center molecular biologist Erica A. Golemis, Ph.D., points out. In the new study, Golemis and her Fox Chase colleagues found that two proteins with important roles in cancer progression and metastasis, HEF1 and Aurora A, have an unexpected role in controlling the temporary disappearance of cilia during normal cell division, by turning on a third protein, HDAC6. This action causes the "antenna” to be dismantled in an untimely way.

Why cilia come and go on normal cells is not entirely understood, but scientists increasingly suspect that it may play a role in timing the cell division process. Commonly, cancer cells have entirely lost their cilia, and this absence may help explain why tumors fail to respond properly to environmental cues that cause normal cells to stop growing. Hence, the discovery that too much HEF1 and Aurora A cause cilia to disassemble provides important hints into what may be happening in cancers.

Defects in cilia have already been identified in one disease that represents a significant public health burden. Polycystic kidney disease, or PKD, arises from genetic mutations that cause flawed kidney-cell ciliary signaling. PKD is the most common serious hereditary disease, affecting more than 600,000 Americans and 12.5 million people worldwide.

In this incurable syndrome, patients develop numerous, fluid-filled cysts on the kidneys. For many patients, chronic pain is a common problem. PKD leads to kidney failure in about half of cases, requiring kidney dialysis or a kidney transplant.

The proteins involved in dismantling the cilia are no strangers to Golemis and her team. Golemis has been studying HEF1 for over a decade, since she first identified the gene. She first discovered that HEF1 has a role in controlling normal cell movement and tumor cell invasion. Golemis’ laboratory has also shown that Aurora A and HEF1 interact to initiate mitosis (chromosome separation) during cell division.

Suggestively, many cancers produce too much of the Aurora A protein, including breast and colorectal cancers and leukemia. In 2006, excessive production of HEF1 (also known as NEDD9) was found to drive metastasis in over a third of human melanomas, while HEF1 signaling also contributes to the aggressiveness of some brain cancers (glioblastomas).

"Now there’s a new activity for these proteins at cilia,” said co-author Elizabeth P. Henske, M.D., a medical oncologist and genetics researcher who studies the genetic basis of kidney tumors. This complex HEF1 and Aurora A function may mean the increased levels of these proteins in cancer affect cellular response to multiple signaling pathways, rather like a chain reaction highway accident.

Clinical Application

The research has significant implications for the understanding and treatment of cancer. The experiments leading to the new paper showed that "small-molecule inhibitors of Aurora A and HDAC6 selectively stabilize cilia,” the authors concluded, "suggesting a novel mode of action for these clinical agents.” Clinical trials of such inhibitors have already begun, so learning more about the mechanisms of their targets is important in understanding how these agents work and who might benefit from them.

"It is also tantalizing to consider that closer connections exist between dysplastic disorders leading to cysts and cancer than have previously been appreciated,” the authors wrote. "Overall, deregulated Aurora A/HEF1/HDAC6 signaling may have broad implications for studies of human development and disease.”

The authors are now investigating possible roles for HEF1 and Aurora A in PKD. They are intrigued by a study published last year showing that PKHD1, a gene commonly mutated in PKD, is also a target of mutation in colorectal cancer.

Squash Seeds Show Andean Cultivation Is 10,000 Years Old, Twice as Old as Thought

By JOHN NOBLE WILFORD

Seeds of domesticated squash found by scientists on the western slopes of the Andes in northern Peru are almost 10,000 years old, about twice the age of previously discovered cultivated crops in the region, new, more precise dating techniques have revealed.

The findings about Peru and recent research in Mexico, anthropologists say, are evidence that some farming developed in parts of the Americas nearly as early as in the Middle East, which is considered the birthplace of the earliest agriculture.

Digging under house floors and grinding stones, and in stone-lined storage bins, the archaeologist Tom D. Dillehay of Vanderbilt University, in Nashville, uncovered the squash seeds at several places in the Ñanchoc Valley, near the Pacific coast about 400 miles north of Lima. The excavations also yielded peanut hulls and cotton fibers — about 8,500 and 6,000 years old, respectively.

A peanut hull discovered in Northern Peru has been dated to 7600 B.P. (Before Present) Tom Dillehay

The new, more precise dating of the plant remains, some of which were collected two decades ago, is being reported by Dr. Dillehay and colleagues in today’s issue of the journal Science.

Their research also turned up traces of other domesticated plants, including a grain, manioc and unidentified fruits, as well as stone hoes, furrowed garden plots and small-scale irrigation canals from approximately the same period.

The researchers concluded that these beginnings in plant domestication "served as catalysts for rapid social changes that eventually contributed to the development of intensified agriculture, institutionalized political power and towns in both the Andean highlands and on the coast between 5,000 and 4,000 years ago.”

The evidence at Ñanchoc, Dr. Dillehay’s team wrote, indicated that "agriculture played a more important and earlier role in the development of Andean civilization than previously understood.”

In an accompanying article on early agriculture, Eve Emshwiller, an ethnobotanist at the University of Wisconsin, Madison, was quoted as saying that the reports of early dates for plant domestication in the New World were remarkable because this appeared to have occurred not long after humans colonized the Americas, now thought to be at least 13,000 years ago.

A cotton ball dated to 5500 B.P. discovered in Northern Peru. Tom Dillehay

The article also noted that 10,000-year-old cultivated squash seeds had recently been reported in Mexico, along with evidence of domesticated corn there by 9,000 years ago. Scholars now think that plants were domesticated independently in at least 10 "centers of origin,” including, in addition to the Middle East, Mexico and Peru, places in Africa, southern India, China and New Guinea.

In the Fertile Crescent of the Middle East, an arc from modern-day Israel through Syria and Turkey to Iraq, wheat and barley were domesticated by 10,000 years ago, and possibly rye by 13,000 years ago. Experts in ancient agriculture suspect that the transition from foraging to cultivation had started much earlier and was not as abrupt a transformation as indicated in the archaeological record.

Dr. Dillehay has devoted several decades of research to ancient cultures in South America. His most notable previous achievement was the discovery of a campsite of hunter-gatherers at Monte Verde, in Chile, which dates to 13,000 years ago. Most archaeologists recognize this as the earliest well-documented human occupation site uncovered so far in the New World.

Other explorations in recent years have yielded increasing evidence of settlements and organized political societies that flourished in the coastal valleys of northern Peru possibly as early as 5,000 years ago. Until now, the record of earlier farming in the region had been sparse.

Initial radiocarbon dating of the plant remains from Ñanchoc was based on wood charcoal buried at the sites, but the results varied widely and were considered unreliable. More recent radiocarbon dating, with a technique called accelerator mass spectrometry, relied on measurements from undisturbed buried charcoal and an analysis of the actual plant remains.

The distribution of building structures, canals and furrowed fields, Dr. Dillehay said, indicated that the Andean culture was moving beyond cultivation limited to individual households toward an organized agricultural society.

Botanists studying the squash, peanut and cotton remains determined that the specific strains did not grow naturally in the Ñanchoc area. The peanut, in particular, was thought to be better suited to cultivation in tropical forests and savannas elsewhere in South America.

The wild ancestor of squash has yet to be identified, though lowlands in Colombia are thought to be a likely source.

Giant microwave turns plastic back to oil

17:44 26 June 2007, news service

Catherine Brahic

A US company is taking plastics recycling to another level – turning them back into the oil they were made from, and gas.

All that is needed, claims Global Resource Corporation (GRC), is a finely tuned microwave and – hey presto! – a mix of materials that were made from oil can be reduced back to oil and combustible gas (and a few leftovers).

Key to GRC’s process is a machine that uses 1200 different frequencies within the microwave range, which act on specific hydrocarbon materials. As the material is zapped at the appropriate wavelength, part of the hydrocarbons that make up the plastic and rubber in the material are broken down into diesel oil and combustible gas.

GRC's machine is called the Hawk-10. Its smaller incarnations look just like an industrial microwave with bits of machinery attached to it. Larger versions resemble a concrete mixer.

"Anything that has a hydrocarbon base will be affected by our process," says Jerry Meddick, director of business development at GRC, based in New Jersey. "We release those hydrocarbon molecules from the material and it then becomes gas and oil."

Whatever does not have a hydrocarbon base is left behind, minus any water it contained as this gets evaporated in the microwave.

Simplified recycling

"Take a piece of copper wiring," says Meddick. "It is encased in plastic – a kind of hydrocarbon material. We release all the hydrocarbons, which strips the casing off the wire." Not only does the process produce fuel in the form of oil and gas, it also makes it easier to extract the copper wire for recycling.

The Hawk-10 uses specific microwave frequencies to extract oil and gas from plastics (Image: Global Resource Corporation)

Similarly, running 9.1 kilograms of ground-up tyres through the Hawk-10 produces 4.54 litres of diesel oil, 1.42 cubic metres of combustible gas, 1 kg of steel and 3.40 kg of carbon black, Meddick says.
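
Those output figures mix units (kilograms, litres and cubic metres), so it is worth checking that they are at least roughly consistent as a mass balance. The short sketch below does that back-of-the-envelope arithmetic; the diesel density it assumes (about 0.85 kg per litre) is a generic textbook figure rather than a number supplied by GRC, and the gas mass is simply whatever is left over.

```python
# Rough mass-balance check of the Hawk-10 tyre figures quoted above.
# The diesel density is a generic assumption (not a value from GRC);
# the gas mass is inferred from the leftover, to see whether the implied
# gas density lands in a plausible range.

TYRE_INPUT_KG = 9.1         # ground-up tyres fed in
DIESEL_L = 4.54             # litres of diesel oil out
GAS_M3 = 1.42               # cubic metres of combustible gas out
STEEL_KG = 1.0              # steel recovered
CARBON_BLACK_KG = 3.40      # carbon black recovered

DIESEL_DENSITY_KG_PER_L = 0.85   # assumed typical diesel density

diesel_kg = DIESEL_L * DIESEL_DENSITY_KG_PER_L
solids_kg = STEEL_KG + CARBON_BLACK_KG
gas_kg = TYRE_INPUT_KG - diesel_kg - solids_kg   # mass not accounted for by liquids and solids

print(f"diesel ~ {diesel_kg:.2f} kg")
print(f"solids ~ {solids_kg:.2f} kg")
print(f"gas    ~ {gas_kg:.2f} kg (~ {gas_kg / GAS_M3:.2f} kg per cubic metre)")
```

Under those assumptions the unaccounted-for mass comes to roughly 0.8 kg of gas, or about 0.6 kg per cubic metre, which is in the range of light hydrocarbon gas mixtures at ambient conditions, so the quoted figures hang together, though the check is purely illustrative.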

Less landfill

Gershow Recycling, a scrap metal company based in New York, US, has just said it will be the first to buy a Hawk-10. Gershow collects metal products, shreds them and turns them into usable pure metals. Most of its scrap comes from old cars, but for every ton of steel that the company recovers, between 226 kg and 318 kg of "autofluff" is produced.

Autofluff is the stuff that is left over after a car has been shredded and the steel extracted. It contains plastics, rubber, wood, paper, fabrics, glass, sand, dirt, and various bits of metal. GRC says its Hawk-10 can extract enough oil and gas from the left-over fluff to run the Hawk-10 itself and a number of other machines used by Gershow.

Because it makes extracting reusable metal more efficient and evaporates water from autofluff, the Hawk-10 should also reduce the amount of end material that needs to be deposited in landfill sites.

Killifish can survive without oxygen for 60 days

How long can you hold your breath? For even highly trained humans, it's a few minutes, tops. Compare that with the killifish, which can survive without oxygen for more than 60 days, by far the longest of any vertebrate.

Annual killifish, Austrofundulus limnaeus, live in temporary ponds in arid regions of Venezuela. Their embryos ride out seasonal droughts buried in mud, where microbial action often uses up all the oxygen.

Jason Podrabsky, a comparative physiologist at Portland State University in Oregon, and his colleagues tested killifish embryos by sealing them in oxygen-free vials. After 62 days, half the embryos recovered when given oxygen (The Journal of Experimental Biology, vol 210, p 2253). The next best vertebrates - turtles and a species of goldfish - can survive for only a few days.

Podrabsky found that longer-lived killifish embryos accumulated lactate - the end product of anaerobic metabolism - very slowly, suggesting that their anaerobic ability comes from being able to cut their metabolic rate to extremely low levels.

Podrabsky is now studying which genes are responsible for the metabolic slowing. Learning how the fish do this may help explain how human tissues respond to anoxia during, say, a heart attack, Podrabsky says.

Dying star generates the stuff of life

18:00 27 June 2007, news service

Ivan Semeniuk

One of the largest and most luminous stars in our galaxy is a surprisingly prolific building site for complex molecules important to life on Earth, new measurements reveal.

The discovery furthers an ongoing shift in astronomers’ perceptions of where such molecules can form, and where to set the starting line for the chain of events that leads from raw atoms to true biology.

"Where we thought molecules could never form, we're finding them. Where we thought molecules could never survive, they're surviving," says Lucy Ziurys, an astronomer at the University of Arizona in Tucson, US.

Using the 10-metre radio dish atop Mount Graham in Arizona, Ziurys and her team searched the extended envelope of gas around VY Canis Majoris, a red hypergiant star estimated to be 25 times the Sun's mass and nearly half a million times the Sun's brightness.

There they found the telltale radio emissions of various compounds, including hydrogen cyanide (HCN), silicon monoxide (SiO), sodium chloride (NaCl) and a molecule, PN, in which a phosphorus atom and a nitrogen atom are bound together.

Even simple phosphorus-bearing molecules such as PN are of interest to astrobiologists because phosphorus is relatively rare in the universe – yet it is necessary for constructing both DNA and RNA molecules, as well as ATP, the key molecule in cellular metabolism.

VY Canis Majoris, which lies about 5000 light years away, is one of the largest and most luminous stars in our galaxy. Dust ejected by the star can be seen in this 2004 image made by the Hubble Space Telescope's Advanced Camera for Surveys, using polarising filters (Image: NASA/ESA/R Humphreys/U Minnesota)

Dust shields

For years astronomers have known that dense molecular clouds, which are pervasive in the plane of our Milky Way galaxy, are the repositories of a wide variety of chemicals that can later find their way into newborn solar systems. What has been less clear is exactly how those molecules form in the first place.

In the last few years, astronomers have turned their attention to ageing stars, which typically expel vast quantities of gas as they expand and turn into red giants. Until recently, it was expected that any molecules that condensed from the cooling, expelled gas would later be destroyed by the intense ultraviolet radiation emitted by the star.

Work by Ziurys and others demonstrates that this is not the case. The ejected material contains clumps of dust particles that apparently shield the molecules and can shepherd them safely into interstellar space.

Varied chemistry

The latest findings add a new twist to the story. Because VY Canis Majoris is an oxygen-rich star, it was not expected to harbour so many interesting molecules. Oxygen atoms easily outnumber carbon atoms around such stars and would be expected to take up the available carbon by forming carbon monoxide (CO).

The discovery of molecules such as HCN and a carbon sulphur compound (CS) around VY Canis Majoris suggests that chemical composition can vary greatly within a circumstellar envelope. It also implies that the chemistry that leads to life may be more widespread in the universe and more robust than previous studies have suggested.

"It shows that there is much more to be learnt about the use of old stars as laboratories for studying interstellar chemistry," comments Sun Kwok of the University of Hong Kong.

VY Canis Majoris is well known as an evolved star that expels large amounts of matter. But only within the past year have astronomers had the technology to detect the exceedingly faint radio emissions produced by molecules around the star.

'New era'

The new technology comes in the form of an improved version of a device called the Superconductor-Insulator-Superconductor (SIS) mixer, which can discern the energy emitted when molecules spontaneously shift from one rotational state to another.

The detector used by Ziurys and her team was developed for the Atacama Large Millimetre Array (ALMA), a high-altitude radio interferometer, consisting of 50 dishes – each 12 metres wide – currently under construction in Chile.

According to Ziurys, the fact that a single detector is already yielding such significant data bodes well for the future study of interstellar matter and its relationship to life in the universe.

"This is just the beginning of a new era in interstellar chemistry," she told New Scientist.

Study Traces Cat’s Ancestry to Middle East

By NICHOLAS WADE

Some 10,000 years ago, somewhere in the Near East, an audacious wildcat crept into one of the crude villages of early human settlers, the first to domesticate wheat and barley. There she felt safe from her many predators in the region, such as hyenas and larger cats.

The rodents that infested the settlers’ homes and granaries were sufficient prey. Seeing that she was earning her keep, the settlers tolerated her, and their children greeted her kittens with delight.

Wildcats are divided into five subspecies: the European wildcat, the Near Eastern wildcat, the Southern African wildcat, the Central Asian wildcat and the Chinese desert cat. This wildcat was photographed in Africa. Kim Wolhuter/National Geographic, via Getty Images

At least five females of the wildcat subspecies known as Felis silvestris lybica accomplished this delicate transition from forest to village. And from these five matriarchs all the world’s 600 million house cats are descended.

A scientific basis for this scenario has been established by Carlos A. Driscoll of the National Cancer Institute and his colleagues. He spent more than six years collecting species of wildcat in places as far apart as Scotland, Israel, Namibia and Mongolia. He then analyzed the DNA of the wildcats and of many house cats and fancy cats.

Five subspecies of wildcat are distributed across the Old World. They are known as the European wildcat, the Near Eastern wildcat, the Southern African wildcat, the Central Asian wildcat and the Chinese desert cat. Their patterns of DNA fall into five clusters. The DNA of all house cats and fancy cats falls within the Near Eastern wildcat cluster, making clear that this subspecies is their ancestor, Dr. Driscoll and his colleagues said in a report published Thursday on the Web site of the journal Science.

The wildcat DNA closest to that of house cats came from 15 individuals collected in the deserts of Israel, the United Arab Emirates, Bahrain and Saudi Arabia, the researchers say. The house cats in the study fell into five lineages, based on analysis of their mitochondrial DNA, a type that is passed down through the female line. Since the oldest archaeological site with a cat burial is about 9,500 years old, the geneticists suggest that the founders of the five lineages lived around this time and were the first cats to be domesticated.

Wheat, rye and barley had been domesticated in the Near East by 10,000 years ago, so it seems likely that the granaries of early Neolithic villages harbored mice and rats, and that the settlers welcomed the cats’ help in controlling them.

Unlike other domestic animals, which were tamed by people, cats probably domesticated themselves, which could account for the haughty independence of their descendants. "The cats were adapting themselves to a new environment, so the push for domestication came from the cat side, not the human side,” Dr. Driscoll said.

Cats are "indicators of human cultural adolescence,” he remarked, since they entered human experience as people were making the difficult transition from hunting and gathering, their way of life for millions of years, to settled communities.

Until recently the cat was commonly believed to have been domesticated in ancient Egypt, where it was a cult animal. But three years ago a group of French archaeologists led by Jean-Denis Vigne discovered the remains of an 8-month-old cat buried with its human owner at a Neolithic site in Cyprus. The Mediterranean island was settled by farmers from Turkey who brought their domesticated animals with them, presumably including cats, because there is no evidence of native wildcats in Cyprus.

The date of the burial far precedes Egyptian civilization. Together with the new genetic evidence, it places the domestication of the cat in a different context, the beginnings of agriculture in the Near East, and probably in the villages of the Fertile Crescent, the belt of land that stretches up through the countries of the eastern Mediterranean and down through what is now Iraq.

Dr. Stephen O’Brien, an expert on the genetics of the cat family and a co-author of the Science report, described the domestication of the cat as "the beginning of one of the major experiments in biological history” because the number of house cats in the world now exceeds half a billion while most of the 36 other species of cat, and many wildcats, are now threatened with extinction.

So a valuable outcome of the new study is the discovery of genetic markers in the DNA that distinguish native wildcats from the house cats and feral domestic cats with which they often interbreed. In Britain and other countries, true wildcats may be highly protected by law.

Current and historic distribution of Felis silvestris, the ancestor of the domestic cat (Image: Science)

David Macdonald of Oxford University, a co-author of the report, has spent 10 years trying to preserve the Scottish wildcat, of which only 400 or so remain. "We can use some of the genetic markers to talk to conservation agencies like the Scottish Natural Heritage,” he said.

Scientists find that Earth and Mars are different to the core

Research comparing silicon samples from Earth, meteorites and other planetary materials, published in Nature (28 June 2007), provides new evidence that the Earth’s core formed under very different conditions from those that existed on Mars. It also shows that the Earth and the Moon have the same silicon isotopic composition, supporting the theory that atoms from the two mixed in the early stages of their development.

This latest research, carried out by scientists from Oxford University along with colleagues from the University of California, Los Angeles (UCLA) and the Swiss Federal Institute of Technology in Zurich (ETH), compared silicon isotopes from rocks on Earth with samples from meteorites and other solar system materials. This is the first time that silicon isotopes have been used in this way, and it has opened up a new line of scientific investigation into how the Earth’s core formed.

On Earth, the rocks that make up volcanoes and mountain ranges and underlie the ocean floor are made of silicate – compounds of silicon and oxygen linked with other kinds of atoms. Silicate dominates down to a depth of 2,900 km – roughly half way to the centre of the Earth. At this point there is an abrupt boundary with the dense metallic iron core. Studies by Birch in the 1950s demonstrated that the outer core has a density too low for it to be made of pure iron and that it must also contain some lighter elements (see notes to editors for further details).

Research team member, Bastian Georg, a post doctoral researcher from Oxford University’s Earth Sciences Department said, "We dissolved meteorites, provided by the Natural History Museum in London, in order to compare their isotopic composition with those of rocks from the Earth. The silicon was separated from other elements and the atomic proportions of isotopes measured using a particularly sophisticated mass spectrometer at the ETH in Zurich”.

Professor Alex Halliday, also from Oxford University, explains, "We were quite startled by our results, which showed that silicate Earth samples contained increased proportions of the heavier isotopes of silicon. This is quite different from meteorites derived from the silicate portions of Mars and the large asteroid Vesta, which do not display such an effect even though these bodies also have an iron core.”

Silicate samples from Mars and Vesta are identical to a primitive class of meteorites called chondrites that represent average solar system material from small "planetesimals” that never underwent core formation.

Professor Halliday continues, "The most likely explanation is that, unlike on Mars and Vesta, the Earth’s silicon has been divided into two sorts – a portion that dissolved in metal and became a light element in the Earth’s core, and the greater proportion, which formed the silicon-oxygen bonded silicate of the Earth’s mantle and crust.”

At depth, silicates change structure to denser forms, so the isotopic make-up would depend on the pressure at which metal and silicate separate. Quantifying this effect is the subject of ongoing studies. Co-author Edwin Schauble, from UCLA, has produced preliminary calculations showing that the isotopic effects found are of the right direction and magnitude.

This research provides new evidence that the Earth’s core formed under different conditions from those that existed on Mars. This could be explained in part by the difference in mass between the two planets. With Earth being eight times larger than Mars, the pressure of core formation could be higher and different silicate phases may have been involved. The mass of a planet also affects the energy that is released as it accretes (or grows).

The Earth accreted most of its mass by violent collisions with other planets and planetary embryos. The bigger the planet, the greater the gravitational attraction and the higher the temperatures that are generated as the kinetic energy of impacting objects is converted to heat. Some have proposed that the outer Earth would have periodically become a "magma ocean” of molten rock as a result of such extreme high temperature events.

There is evidence that Mars stopped growing in the first few million years of the solar system and did not experience the protracted history of violent collisions that affected the Earth. There already exists compelling evidence for relatively strong magnetic fields early in martian history but a thorough understanding of the martian core must await geophysical measurements by future landers. It is however thought that the core of Mars is proportionally smaller than that of the Earth and it probably formed under lower pressures and temperatures.

The research also shows that the Moon has the same silicon isotopic composition as the Earth. This cannot be caused by high-pressure core formation on the Moon, which has only about one percent of the mass of the Earth. However, it is consistent with the recent proposal that the giant impact between the proto-Earth and another planet, usually called "Theia”, was sufficiently energetic that the atoms of the disk from which the Moon formed mixed with those of the silicate Earth. This means the silicon of the silicate Earth must already have had a heavy isotopic composition before the Moon formed, about 40 million years after the start of the solar system.

Essay

Human DNA, the Ultimate Spot for Secret Messages (Are Some There Now?)

By DENNIS OVERBYE

Correction Appended

In Douglas Adams’s science fiction classic, "The Hitchhiker’s Guide to the Galaxy,” there is a character by the name of Slartibartfast, who designed the fjords of Norway and left his signature in a glacier.

I was reminded of Slartibartfast recently as I was trying to grasp the implications of the feat of a team of Japanese geneticists who announced that they had taught relativity to a bacterium, sort of.

Using the same code that computer keyboards use, the Japanese group, led by Masaru Tomita of Keio University, wrote four copies of Albert Einstein’s famous formula, E=mc2, along with "1905,” the date that the young Einstein derived it, into the bacterium’s genome, the 4.2-million-letter string of A’s, G’s, T’s and C’s that determines everything the little bug is and everything it’s ever going to be.

The point was not to celebrate Einstein. The feat, they said in a paper published in the journal Biotechnology Progress, was a demonstration of DNA as the ultimate information storage material, able to withstand floods, terrorism, time and the changing fashions in technology, not to mention the ability to be imprinted with little unobtrusive trademark labels — little "Made by Monsanto” tags, say.

In so doing they have accomplished at least a part of the dream that Jaron Lanier, a computer scientist and musician, and David Sulzer, a biologist at Columbia, enunciated in 1999. To create the ultimate time capsule as part of the millennium festivities at this newspaper, they proposed to encode a year’s worth of the New York Times magazine into the junk DNA of a cockroach. "The archival cockroach will be a robust repository,” Mr. Lanier wrote, "able to survive almost all conceivable scenarios.”

If cockroaches can be archives, why not us? The human genome, for example, consists of some 2.9 billion of those letters — the equivalent of about 750 megabytes of data — but only about 3 percent of it goes into composing the 22,000 or so genes that make us what we are.

The remaining 97 percent, so-called junk DNA, looks like gibberish. It’s the dark matter of inner space. We don’t know what it is saying to or about us, but within that sea of megabytes there is plenty of room for the imagination to roam, for trademark labels and much more. The King James Bible, to pick one obvious example, only amounts to about five megabytes.
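
The megabyte comparisons above are simple arithmetic: with four possible letters, each base carries two bits. The sketch below reproduces those numbers and then encodes a short string into bases with a keyboard-style code; the particular two-bits-per-base mapping is an illustrative choice, not the encoding scheme the Keio team actually used.

```python
# Back-of-the-envelope arithmetic behind the storage comparisons above,
# plus a toy text-to-DNA encoding. The 2-bits-per-base mapping here is an
# illustrative choice, not the scheme used by the Keio group.

HUMAN_GENOME_BASES = 2.9e9
BACTERIUM_BASES = 4.2e6
KJV_BIBLE_BYTES = 5e6            # "about five megabytes"

def bases_to_megabytes(n_bases: float) -> float:
    """Each base (A, C, G or T) carries 2 bits, i.e. a quarter of a byte."""
    return n_bases * 2 / 8 / 1e6

print(f"human genome ~ {bases_to_megabytes(HUMAN_GENOME_BASES):.0f} MB")   # ~725 MB
print(f"bacterium    ~ {bases_to_megabytes(BACTERIUM_BASES):.2f} MB")      # ~1.05 MB
print(f"KJV Bible    ~ {KJV_BIBLE_BYTES * 4 / 1e6:.0f} million bases")     # ~20 million bases

# Toy encoding: ASCII bytes -> pairs of bits -> bases.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def encode(text: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(encode("E=mc2 1905"))   # 10 characters -> 40 bases
```

Run as written, it reports roughly 725 MB for the human genome (the article’s "about 750 megabytes”) and shows that a five-megabyte text such as the King James Bible would need on the order of 20 million bases, a small fraction of the genome’s non-coding portion.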

Inevitably, if you are me, you begin to wonder if there is already something written in the warm wet archive, whether or not some Slartibartfast has already been here and we ourselves are walking around with little trademark tags or more wriggling and squiggling and folded inside us. Gill Bejerano, a geneticist at the University of California, Santa Cruz, who mentioned Slartibartfast to me, pointed out that the problem with raising this question is that people who look will see messages in the genome even if they aren’t there — the way people have claimed in recent years to have found secret codes in the Bible.

Nevertheless, no less a personage than Francis Crick, the co-discoverer of the double helix, writing with the chemist Leslie Orgel, now at the Salk Institute in San Diego, suggested in 1973 that the primitive Earth was infected with DNA broadcast through space by an alien species.

As a result, it has been suggested that the search for extraterrestrial intelligence, or SETI, should look inward as well as outward. In an article in New Scientist, Paul Davies, a cosmologist at Arizona State University, wrote, "So might ET have inserted a message into the genomes of terrestrial organisms, perhaps by delivering carefully crafted viruses in tiny space probes to infect host cells with message-laden DNA?”

I should say right now that I am not talking about theology or the near theology known as intelligent design. The ability to stick a message in a cockroach does not make us the designers or creators of the cockroach — only evolution could be so kind or clever.

But I’m a sucker for secret messages. Once, long ago, I stayed up all night with my friends playing the Beatles’ "White Album” backward hoping to hear the words "Turn me on dead man,” referring to the rumored death of Paul McCartney. I’m ready to find Slartibartfast’s signature and rediscover my cosmic heritage.

The sad truth is, as others will tell you, this is a bit like writing love letters in the sand. "I don’t buy it,” said Seth Shostak, an astronomer at the SETI Institute in Mountain View, Calif., pointing out that DNA is famously mutable. "Just ask Chuck Darwin,” he added in an e-mail message.

It is the relentless shifting and mutating, the probing and testing of every possibility on the part of DNA, after all, that generates the raw material for evolution to act on and ensures the success of life on Earth (and perhaps beyond). Dr. Davies said that he had been encouraged by the discovery a few years ago that some sections of junk DNA seem to be markedly resistant to change, and have remained identical in humans, rats, mice, chickens and dogs for at least 300 million years.

But Dr. Bejerano, one of the discoverers of these "ultraconserved” strings of the genome, said that many of them had turned out to be playing important command and control functions.

"Why they need to be so conserved remains a mystery,” he said, noting that even regular genes that do something undergo more change over time. Most junk bits of DNA that neither help nor annoy an organism mutate even more rapidly.

The Japanese team proposed to sidestep the mutation problem by inserting redundant copies of their message into the genome. By comparing the readouts, they said, they would be able to recover Einstein’s formula even when up to 15 percent of the original letters in the string had changed, or mutated. "This is the major point of our work,” Nozomu Yachie said in an e-mail message. At the rate of one mutation per generation, Dr. Yachie estimated, it could take at least millions of years for the bacterium’s genome to change by 15 percent — a huge change; only 1 percent separates us from chimps. But other experts say that a stretch of DNA that is at best useless, and perhaps annoying to the little bug, could disappear much more rapidly.
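
One simple way to see how redundancy can protect a message is per-position majority voting across the copies. The sketch below simulates four copies, mutates each at a 15 percent rate and votes; it is an illustration of the general idea only, assumes mutations are substitutions (no insertions or deletions, so the copies stay aligned), and is not the Japanese team's actual decoding procedure.

```python
# Minimal sketch of recovering a message from redundant, independently
# mutated copies by per-position majority vote. Illustrative only: it is
# not the Keio group's decoding procedure, and it assumes substitutions
# only, so all copies stay the same length and remain aligned.
import random
from collections import Counter

def mutate(seq: str, rate: float, alphabet: str = "ACGT") -> str:
    """Return a copy of seq with each base substituted with probability `rate`."""
    return "".join(
        random.choice([b for b in alphabet if b != base]) if random.random() < rate else base
        for base in seq
    )

def majority_vote(copies: list[str]) -> str:
    """Recover one sequence by taking the most common base at each position."""
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*copies))

random.seed(1905)
original = "ACGTTGCAGATCCGATACGTTAGCCGGATACTGACTGCAA"   # arbitrary 40-base stand-in message
copies = [mutate(original, rate=0.15) for _ in range(4)]  # four copies, ~15% mutated each

recovered = majority_vote(copies)
errors = sum(a != b for a, b in zip(original, recovered))
print(f"residual errors after voting: {errors} of {len(original)} bases")
```

With only four copies a vote can still tie or go wrong at heavily mutated positions, so in practice more copies, or additional error-correcting structure, would be needed to push the residual error rate toward zero; the point is only that redundancy turns individually unreliable strands into a recoverable record.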

Calling the idea of storing information in living DNA "a nifty idea,” Dr. Bejerano said: "The bottom line is if you want something to perpetuate forever, you can’t just come in and type what you want. It would get washed away.”

That dream, he said, "is hopeless with our current knowledge.”

If we want to leave a message that would last for eons, it seems, we have to be clever enough to make sure that the message would remain beneficial to its host pretty much forever.

The challenge for an erstwhile interstellar Johnny Appleseed is to make the message part of the basic nature of its host.

If that ever turns out to be us, if we find that we are the medium, to paraphrase the late Marshall McLuhan, then, in some sense, we are also the message. Never mind who or what are the intended readers.

But if we find, say, the digits of the number pi encoded in a cockroach, I want to have a talk with old Mr. Slartibartfast.

Correction: June 28, 2007

An essay in Science Times on Tuesday about the ability to encode information in a genome referred incorrectly to the number of bases, or letters, in the genetic code of a bacterium in which researchers wrote the E=mc2 formula. The bacterium’s genome has about 4.2 million bases, not 400 million.

9,000-Year-Old Beer Tastes Great

By Liu Enming

A Delaware brewery known for its specialty beers has created a new one based on a 9,000-year-old recipe. VOA's Liu Enming recently traveled to Dogfish Head Craft Brewery to taste Chateau Jiahu beer. Jim Bertel narrates.

Of the more than 1300 breweries in the U.S., Dogfish Head Craft Brewery in Delaware stands out for its uniqueness.

Owner Sam Calagione says the 26 kinds of beer brewed at his microbrewery cater to different people's tastes, but have one thing in common.

"Our motto since the day we've opened has been 'off-centered ales for off-centered people',” says Calagione. "Just by virtue of that definition it's obvious that we are not planning on brewing light lagers, boring beers. The ideas for beers that come out of our brewery Dogfish Head either usually come from my own inspiration; a feeling for a style that doesn't exist yet that I think it would work well."

Calagione's latest concoction, Chateau Jiahu, made headlines in the National Geographic News and other media. But more importantly, it is a hit with many of the brewer's customers.

"It was very tasty, the honey-suckle almost flavor to it, tastes of a little bit of white grape in the background; very smooth, very mellow, very tasty," said a customer.

"It was good; it was, like, buttery but also sweet, honey, a little bit (of) crispness to it going down. It was good," says another.

"I am glad that they found the recipe; this is the type of beer I can drink," says a third customer.

The ancient brew was rediscovered in pottery dating back thousands of years at an excavation site in the Neolithic village of Jia Hu in northern China.

Dr. Patrick McGovern, an archaeochemist at the University of Pennsylvania's Museum of Archaeology and Anthropology in Philadelphia, derived the recipe from residue found in pottery jars. Careful research shows the ancient brew included rice, honey, grapes and hawthorn fruit.

"If you have pottery vessels, they are very useful in making a beverage and also storing and serving and drinking it,” says Dr. McGovern. "Often the beverage will be absorbed into the pottery and it will be held in the pottery by the clay matrix, the pores. The Jia Hu mixed beverage is so far the earliest chemically attested alcoholic beverage or wine from anywhere in the world. I think it's really quite remarkable that it doesn't come from Middle East. People often assume the first wine and first beer has to be from the Middle East, but it comes from China."

Bryan Selders, the lead brewer at the brewery, explains how the ancient beer was resurrected.

"This is our first step brewing Chateau Jiahu, this vessel here is called 'mash tun',” says Selders. "In the mash tun what we do is mix up the mash. And what a mash is is a combination of crushed malted barley and hot water."

In addition, the recipe calls for rice flakes. The rice and barley malt are combined to make the mash for starch conversion and degradation.

"In the boil kettle we collect the full volume of wort that we need and we boil the wort,” explains Selders. "What that accomplishes is it concentrates sugars and sterilizes the wort. Then we are able to spice the wort with hops. We also add honey and hawthorn berries."

The liquid is then mixed with a fresh culture of sake yeast to ferment. The result is a golden colored, fragrant Chateau Jiahu beer. Calagione believes his new take on the ancient brew probably tastes better than the original.

"I am sure the original wines and beers back then had infections of bacteria and wild yeast, whereas today in our modern brewing facility with our lab and high tech equipment, we can make sure that no wild yeast and bacteria gets into the beer,” adds Calagione. ”So it will be a lot cleaner tasting."

And beer lovers will tell you it is all about the taste.

"Man, I will tell you what. For an old beer that's good stuff, very good; I enjoyed it very much," says a customer.

Therapeutic value of meditation unproven, says study

While it's not likely to do you any harm, there is also no compelling evidence that meditation has therapeutic value

"There is an enormous amount of interest in using meditation as a form of therapy to cope with a variety of modern-day health problems, especially hypertension, stress and chronic pain, but the majority of evidence that seems to support this notion is anecdotal, or it comes from poor quality studies,” say Maria Ospina and Kenneth Bond, researchers at the University of Alberta/Capital Health Evidence-based Practice Center in Edmonton, Canada.

In compiling their report, Ospina, Bond and their fellow researchers analyzed a mountain of medical and psychological literature—813 studies in all—looking at the impact of meditation on conditions such as hypertension, cardiovascular diseases and substance abuse.

They found some evidence that certain types of meditation reduce blood pressure and stress in clinical populations. Among healthy individuals, practices such as Yoga seemed to increase verbal creativity and reduce heart rate, blood pressure and cholesterol. However, Ospina says no firm conclusions on the effects of meditation practices in health care can be drawn based on the available evidence because the existing scientific research is characterized by poor methodological quality and does not appear to have a common theoretical perspective.

"Future research on meditation practices must be more rigorous in the design and execution of studies and in the analysis and reporting of results,” Ospina explains.

But the researchers caution against dismissing the therapeutic value of meditation outright. "This report’s conclusions shouldn’t be taken as a sign that meditation doesn’t work,” Bond says. "Many uncertainties surround the practice of meditation. For medical practitioners who are seeking to make evidence-based decisions regarding the therapeutic value of meditation, the report shows that the evidence is inconclusive regarding its effectiveness.” For the general public, adds Ospina, "this research highlights that choosing to practice a particular meditation technique continues to rely solely on individual experiences and personal preferences, until more conclusive scientific evidence is produced.”

The report, published June 2007 and titled Meditation Practices for Health: State of the Research, identified five broad categories of meditation practices: mantra meditation, mindfulness meditation, Yoga, Tai Chi and Qi Gong. Transcendental Meditation and relaxation response (both of which are forms of mantra meditation) were the most commonly studied types of meditation. Studies involving Yoga and mindfulness meditation were also common.

Researchers identify alcoholism subtypes

Analyses of a national sample of individuals with alcohol dependence (alcoholism) reveal five distinct subtypes of the disease, according to a new study by scientists at the National Institute on Alcohol Abuse and Alcoholism (NIAAA), part of the National Institutes of Health (NIH).

"Our findings should help dispel the popular notion of the ‘typical alcoholic,’” notes first author Howard B. Moss, M.D., NIAAA Associate Director for Clinical and Translational Research. "We find that young adults comprise the largest group of alcoholics in this country, and nearly 20 percent of alcoholics are highly functional and well-educated with good incomes. More than half of the alcoholics in the United States have no multigenerational family history of the disease, suggesting that their form of alcoholism was unlikely to have genetic causes.”

"Clinicians have long recognized diverse manifestations of alcoholism,” adds NIAAA Director Ting-Kai Li, M.D, "and researchers have tried to understand why some alcoholics improve with specific medications and psychotherapies while others do not. The classification system described in this study will have broad application in both clinical and research settings.” A report of the study is now available online in the journal Drug and Alcohol Dependence.

Previous efforts to identify alcoholism subtypes focused primarily on individuals who were hospitalized or otherwise receiving treatment for their alcoholism. However, recent reports from NIAAA’s National Epidemiological Survey on Alcohol and Related Conditions (NESARC), a nationally representative epidemiological study of alcohol, drug, and mental disorders in the United States, suggest that only about one-fourth of individuals with alcoholism have ever received treatment. Thus, a substantial proportion of people with alcoholism were not represented in the samples previously used to define subtypes of this disease.

In the current study, Dr. Moss and colleagues applied advanced statistical methods to data from the NESARC. Their analyses focused on the 1,484 NESARC survey respondents who met diagnostic criteria for alcohol dependence, and included individuals in treatment as well as those not seeking treatment. The researchers identified unique subtypes of alcoholism based on respondents’ family history of alcoholism, age of onset of regular drinking and alcohol problems, symptom patterns of alcohol dependence and abuse, and the presence of additional substance abuse and mental disorders:

Young Adult subtype: 31.5 percent of U.S. alcoholics. Young adult drinkers, with relatively low rates of co-occurring substance abuse and other mental disorders, a low rate of family alcoholism, and who rarely seek any kind of help for their drinking.

Young Antisocial subtype: 21 percent of U.S. alcoholics. Tend to be in their mid-twenties, with early onset of regular drinking and alcohol problems. More than half come from families with alcoholism, and about half have a psychiatric diagnosis of Antisocial Personality Disorder. Many have major depression, bipolar disorder, and anxiety problems. More than 75 percent smoked cigarettes and marijuana, and many also had cocaine and opiate addictions. More than one-third of these alcoholics seek help for their drinking.

Functional subtype: 19.5 percent of U.S. alcoholics. Typically middle-aged, well-educated, with stable jobs and families. About one-third have a multigenerational family history of alcoholism, about one-quarter had major depressive illness sometime in their lives, and nearly 50 percent were smokers.

Intermediate Familial subtype: 19 percent of U.S. alcoholics. Middle-aged, with about 50 percent from families with multigenerational alcoholism. Almost half have had clinical depression, and 20 percent have had bipolar disorder. Most of these individuals smoked cigarettes, and nearly one in five had problems with cocaine and marijuana use. Only 25 percent ever sought treatment for their problem drinking.

Chronic Severe subtype: 9 percent of U.S. alcoholics. Comprised mostly of middle-aged individuals who had early onset of drinking and alcohol problems, with high rates of Antisocial Personality Disorder and criminality. Almost 80 percent come from families with multigenerational alcoholism. They have the highest rates of other psychiatric disorders including depression, bipolar disorder, and anxiety disorders as well as high rates of smoking, and marijuana, cocaine, and opiate dependence. Two-thirds of these alcoholics seek help for their drinking problems, making them the most prevalent type of alcoholic in treatment.

The authors also report that co-occurring psychiatric and other substance abuse problems are associated with severity of alcoholism and entering into treatment. Attending Alcoholics Anonymous and other 12-step programs is the most common form of help-seeking for drinking problems, but help-seeking remains relatively rare.

Critical protein prevents DNA damage from persisting through generations

A protein long known to be involved in protecting cells from genetic damage has been found to play an even more important role in protecting the cell's offspring. New research by a team of scientists at Rockefeller University, Howard Hughes Medical Institute and the National Cancer Institute shows that the protein, known as ATM, is not only vital for helping repair double-stranded breaks in DNA of immune cells, but is also part of a system that prevents genetic damage from being passed on when the cells divide.

Early in the life of B lymphocytes -- the immune cells responsible for hunting down foreign invaders and labeling them for destruction -- they rearrange their DNA to create various surface receptors that can accurately identify different intruders, a process called V(D)J recombination. Now, in a study published online today in the journal Cell, Rockefeller University Professor Michel Nussenzweig, in collaboration with his brother André Nussenzweig at NCI and their colleagues, shows that when the ATM protein is absent, chromosomal breaks created during V(D)J recombination go unrepaired, and checkpoints that normally prevent the damaged cell from replicating are lost.

Normal lymphocytes contain a number of restorative proteins, whose job it is to identify chromosomal damage and repair it or, if the damage is irreparable, prevent the cell from multiplying. Earlier research by André and Michel Nussenzweig, who is an investigator at HHMI, had identified other DNA repair proteins that are important during different phases of a B lymphocyte's life. It was during one of these studies, which examined genetic damage late in the life of a B cell, that they came across chromosomal breaks that could not be explained.

So the researchers began to look into the potential role of V(D)J recombination. "We were not expecting it to be responsible for the breaks we were seeing," says Michel, Sherman Fairchild Professor and head of the Laboratory of Molecular Immunology. "Because for it to be responsible, the breaks would have had to happen early on, the cell would have to divide, mature, maintain the breaks, and stay alive with broken chromosomes."

This, in fact, was precisely what they found.

The ATM protein appears to have two roles in a B cell: It helps repair the DNA double-strand breaks, and it activates the cell-cycle checkpoint that prevents genetically damaged cells from dividing. "ATM is required for a B cell to know that it has a broken chromosome. And if it doesn't know that it seems to be able to keep on going," says Michel.

Since the ATM protein is mutated in a number of lymphomas -- cancers of the lymph and immune system -- the new finding suggests to researchers that the lymphocytes could have been living with DNA damage for a long time, and that this damage likely plays a role in later chromosomal translocations, rearrangements of genetic materials that can lead to cancer.

Michel and his brother, who've been collaborators for more than a decade, intend to pursue the molecular mechanisms by which these chromosomal translocations occur. "I think it's important to understand them," he says, "because eventually we might be able to prevent these dangerous chromosome fusions."

Modern brains have an ancient core

The marine ragworm Platynereis dumerilii uses hormones similar to those secreted by the vertebrate brain.

Multifunctional neurons that sense the environment and release hormones are the evolutionary basis of our brains

Hormones control growth, metabolism, reproduction and many other important biological processes. In humans, and all other vertebrates, the chemical signals are produced by specialised brain centres such as the hypothalamus and secreted into the blood stream that distributes them around the body. Researchers from the European Molecular Biology Laboratory [EMBL] now reveal that the hypothalamus and its hormones are not purely vertebrate inventions, but have their evolutionary roots in marine, worm-like ancestors. In this week's issue of the journal Cell they report that hormone-secreting brain centres are much older than expected and likely evolved from multifunctional cells of the last common ancestor of vertebrates, flies and worms.

Hormones mostly have slow, long-lasting and body-wide effects, rendering them the perfect complement to the fast and precise nervous system of vertebrates. Insects and nematode worms also rely on the secretion of hormones to transmit information, but the compounds they use are often very different from their vertebrate counterparts.

"This suggested that hormone-secreting brain centres have arisen after the evolution of vertebrates and invertebrates had split," says Detlev Arendt, whose group studies development and evolution of the brain at EMBL. "But then vertebrate-type hormones were found in annelid worms and molluscs, indicating that these centres might be much older than expected."

Scientist Kristin Tessmar-Raible from Arendt's lab directly compared two types of hormone-secreting nerve cells of zebrafish, a vertebrate, and the annelid worm Platynereis dumerilii, and found some stunning similarities. Not only were both cell types located at the same positions in the developing brains of the two species, but they also looked similar and shared the same molecular makeup. One of these cell types secretes vasotocin, a hormone controlling reproduction and water balance of the body, the other secretes a hormone called RF-amide.

Each cell type has a unique molecular fingerprint - a combination of regulatory genes that are active in a cell and give it its identity. The similarities between the fingerprints of vasotocin and RF-amide-secreting cells in zebrafish and Platynereis are so big that they are difficult to explain by coincidence. Instead they indicate a common evolutionary origin of the cells. "It is likely that they existed already in Urbilateria, the last common ancestors of vertebrates, insects and worms" explains Arendt.

Both of the cell types studied in Platynereis and fish are multifunctional: they secrete hormones and at the same time have sensory properties. The vasotocin-secreting cells contain a light-sensitive pigment, while RF-amide appears to be secreted in response to certain chemicals. The EMBL scientists now assume that such multifunctional sensory neurons are among the most ancient neuron types. Their role was likely to directly convey sensory cues from the ancient marine environment to changes in the animal's body. Over time these autonomous cells might have clustered together and specialised, forming complex brain centres like the vertebrate hypothalamus.

"These findings revolutionise the way we see the brain," says Tessmar-Raible. "So far we have always understood it as a processing unit, a bit like a computer that integrates and interprets incoming sensory information. Now we know that the brain is itself a sensory organ and has been so since very ancient times."

More Than 80% of NYC Restaurants Now Using Zero Grams Trans Fat Oils

First Phase of Trans Fat Regulation Takes Effect July 1, 2007

NEW YORK– Facing a July 1 deadline, most restaurants have already eliminated artificial trans fat in oils used for frying, a new Health Department survey shows. The agency reported today that 83% of restaurants were not using artificial trans fat for frying as of June 1 – a full month before the new regulation will take effect.

The first phase of the trans fat regulation takes effect on July 1 and applies to oils, shortening and margarines used for frying and spreading – not to baked goods or prepared foods, or oils used to deep-fry dough or cake batter. These are covered by the second phase of the regulation, which takes effect on July 1, 2008. The Health Department’s new survey found that 57% of restaurants where trans fat content could be determined were using oils free of artificial trans fat for frying, as spreads, and even for baking – a purpose covered by the 2008 deadline. That’s up from approximately 50% in 2006.

"The vast majority of restaurants are using trans fat free oil for frying,” said Health Commissioner Dr. Thomas R. Frieden. "This confirms that the switch is feasible. But many restaurants are still using spreads such as margarine that contain artificial trans fat. These products need to be replaced with widely available alternatives. We will continue to work closely with restaurants to eliminate harmful trans fat.”

"We’re excited about this change,” said Susan Giannetto, executive chef at the Bubba Gump Shrimp Company in Times Square. "We’re keeping people healthy, and we’re making a better product. We want people to feel good about what they eat. The taste hasn’t changed.” Bubba Gump’s NYC restaurant switched to trans-fat-free fry oils more than three months ago. The company now plans make the same change at all of its establishments worldwide.

Waterfront Ale House in Brooklyn, popular for its game burgers, barbeque and chocolate cake, made the switch easily. "We changed the oil in a few recipes, and we have not had any problem,” said owner Sam Barbieri. "We have not seen any change in quality or price.”

Trans Fat Regulations

Starting July 1, 2007, restaurants may not use partially hydrogenated vegetable oils, shortenings or margarines for frying, pan-frying (sautéing) or grilling if they contain 0.5 grams or more of trans fat per serving. The same restriction applies to spreads. Restaurants will be cited for violations, but fines will not be issued until October 1, 2007, after a three-month grace period.

Beginning July 1, 2008, no food containing partially hydrogenated vegetable oils, shortenings or margarines with 0.5 grams or more trans fat per serving may be stored, used or served by food service establishments in New York City. The regulation does not apply to food served in the manufacturer’s original, sealed packaging, such as a package of crackers.
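Read as a decision rule, the two phases reduce to a few checks: whether the item is exempt (manufacturer's original sealed packaging), whether it falls under the 0.5 gram per serving threshold, and, if neither, which phase covers it. The short Python sketch below is an informal illustration only; the data structure and names are invented here, and the regulation's full text contains details this simplification does not capture.

    from dataclasses import dataclass
    from datetime import date

    # Thresholds and dates as described in the regulation above; all names here are illustrative.
    TRANS_FAT_LIMIT_G = 0.5
    PHASE_1 = date(2007, 7, 1)   # frying oils, shortenings, margarines and spreads
    PHASE_2 = date(2008, 7, 1)   # all other foods, e.g. baked goods and deep-fried batters

    @dataclass
    class FoodItem:
        trans_fat_per_serving_g: float   # artificial trans fat from partially hydrogenated oils
        frying_oil_or_spread: bool       # covered by the first phase
        original_sealed_package: bool    # exempt, e.g. a manufacturer's package of crackers

    def may_be_used(item: FoodItem, on_date: date) -> bool:
        """Return True if the item may be stored, used or served on the given date."""
        if item.original_sealed_package:
            return True                  # the regulation does not apply
        if item.trans_fat_per_serving_g < TRANS_FAT_LIMIT_G:
            return True                  # below the 0.5 g per serving threshold
        if item.frying_oil_or_spread:
            return on_date < PHASE_1     # banned from July 1, 2007
        return on_date < PHASE_2         # banned from July 1, 2008

Under this reading, a margarine containing 2 grams of trans fat per serving fails the check from July 1, 2007 onward, while a cake baked on site with the same content fails only from July 1, 2008.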

The Trans Fat Help Center

The Health Department, with a grant from the American Heart Association, launched the Trans Fat Help Center in April to help restaurants switch from artificial trans fat to more healthful oils while maintaining the same taste and texture of food. The Help Center offers the following resources at no cost to restaurants:

The Help Line. Restaurants can call 311 to reach the help line for information on the regulation, Monday through Friday from 9 am to 5 pm. Assistance is also available in Chinese, Spanish, and more than 150 other languages through interpretation services.

The Website. Features easy-to-use resources, available to restaurants 24 hours a day, seven days a week. Restaurant operators can download "0 grams trans fat” product lists and a guide to frying without trans fat, get information about classes, or download a brochure on the new regulations.

Classes for Restaurant Operators. Restaurant operators can sign up for classes on cooking and baking without artificial trans fat. Classes will be offered monthly, in a variety of locations depending on demand, until December 2008. For information about how to sign up, visit . A complete guide to complying with NYC’s new trans fat regulation is available at:

Calorie Labeling Regulation in New York City

A separate regulation, which also goes into effect July 1, 2007, requires restaurants with standardized portions that already make calorie information publicly available to post it on menus where consumers can see it when they order. The rule will affect about 10% of city restaurants. No fines or citations will be issued for violations until October 1, 2007. The Health Department is working with restaurants affected by this regulation that are in the process of redesigning their menu boards to ensure compliance.

One restaurant association has sued New York City, challenging the calorie labeling regulation. "This rule simply requires restaurants to provide information they already publish where their customers will actually see it,” said Dr. Frieden. "It is unfortunate that some restaurants are so ashamed of what they are serving that they would rather go to court than present this important information where their customers can readily see it.” Information about the calorie labeling regulation in New York City restaurants is online at:

Cognitive scores vary as much within test takers as between age groups, making testing less valid

Getting a 'mental batting average' from a short series of repeated tests may more precisely define mental function

WASHINGTON — How precise are tests used to diagnose learning disability, progressive brain disease or impairment from head injury? Timothy Salthouse, PhD, a noted cognitive psychologist at the University of Virginia, has demonstrated that giving a test only once isn’t enough to get a clear picture of someone’s mental functioning. It appears that repeating tests over a short period may give a more accurate range of scores, improving diagnostic workups.

The study is published in the July issue of Neuropsychology, which is published by the American Psychological Association (APA).

Salthouse gave 16 common cognitive and neuropsychological tests to participants in two studies (90 in the first, 1,600 in the second), evenly divided into age groups of 18-39, 50-59 and 60-97 years. In both studies, the variation between someone’s scores on the same test given three times over two weeks was as large as the variation between the scores of people in different age groups. It’s as if, on the same test, someone performed like a 20-year-old on a Monday, a 45-year-old the following Friday, and a 32-year-old the following Wednesday. This major inconsistency raises questions about the worth of single, one-time test scores.

"I don’t think many people would have expected that the variability would be this large, and apparent in a wide variety of cognitive tests – not simply tests of speed or alertness,” says Salthouse.

Psychologists frequently use tests of vocabulary, word recall, spatial relations, pattern comparison and the like to understand normal function and diagnose impairment. Experts use the scores to differentiate between diagnoses, detect changes in level of functioning, or make a diagnosis in the first place. Where scores fall relative to standardized cutoffs affects treatment, insurance, education plans and more. Yet the apparent fuzziness of one-time assessments could make it hard to tell whether someone is truly impaired, or truly improving or worsening, rather than showing normal short-term fluctuation.

Accordingly, Salthouse has come to believe that everyone has a range of typical performances, a one-person bell curve. Any given test will net a performance somewhere along that curve, as when a hitter’s good and bad days are factored into a seasonal batting average. Some persons’ scores would hew more closely to their average, but for those who have high internal variation, classification based on one assessment could be way off the mark.

Salthouse says it may be time to view cognitive abilities as a distribution of many potential levels of performance instead of as one stable short-term level. He proposes the use of a "measurement burst” procedure that bases understanding on several parallel assessments within a relatively short period. Results gained in this manner are likely to be more stable, offering a better basis for calibrating individual change.
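As a rough illustration of the measurement-burst idea (the scores below are invented for this example, not data from the study), averaging a short series of repeated administrations yields both a central estimate, the "batting average", and a measure of how much the individual fluctuates around it:

    from statistics import mean, stdev

    # Hypothetical burst: the same test given three times over two weeks.
    burst_scores = [52, 41, 47]

    average = mean(burst_scores)   # the person's central tendency - their "batting average"
    spread = stdev(burst_scores)   # within-person variability across the burst

    print(f"estimated ability: {average:.1f} (+/- {spread:.1f})")
    # A single administration could have returned anything from 41 to 52;
    # the burst exposes both the typical level and the short-term fluctuation.

A later change would then have to exceed that within-person spread before being read as a real improvement or decline.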

Before any procedural updates, Salthouse says, "More will have to be learned about this phenomenon and the conditions under which it operates.” Multiple assessments involve more time and expense but may be necessary, he notes, to distinguish short-term fluctuation from true ability level. In addition, psychologists would have to develop new test norms and truly equivalent versions of the same test.

Finally, Salthouse believes that measures of within-person variability could be a useful diagnostic marker in their own right. For example, he and other cognitive psychologists are discussing whether wilder fluctuations within one person’s test scores are an early warning of mental decline.

Found: The clearest ocean waters on Earth

Catherine Brahic, New Scientist news service, 12:38 29 June 2007

As clear as the clearest lakes on the planet, salty as ocean waters, and roughly the size of the Mediterranean – this, say researchers, is the clearest and most lifeless patch of ocean in the world. And it is in the middle of the Pacific.

"Satellite images that track the amount of chlorophyll in ocean waters suggested that this was one of the most life-poor systems on Earth," explains Patrick Raimbault of the University of the Mediterranean, in Marseille, France (see image, right).

In October 2004, Raimbault and colleagues set out to study the remarkable patch of ocean water on a three-month cruise – called BIOSOPE – that left from Tahiti in French Polynesia, passed by Easter Island and ended on the Chilean coast. Along the way, they sampled the water's chemistry, physics and biology.

Marc Tedetti, also from the University of the Mediterranean, was on the expedition to investigate the water's clarity. He was struck by the colour of the water, which he describes as closer to violet than to blue.

Tedetti says the ocean waters, which the researchers sampled using these canisters, were "almost violet" (Image: Joséphine Ras)

Beautiful but barren

Tedetti returned having found "unequivocally" the clearest ocean waters on the planet. "Some bodies of freshwater are equally clear, but only the purest freshwater," Tedetti told New Scientist. "For instance, researchers have found equivalent measurements in Lake Vanda in Antarctica, which is under ice, and is really extremely pure."

At the clearest point of the south-east Pacific, near to Easter Island, Tedetti found that UV rays could penetrate more than 100 metres below the surface.

This correlates with Raimbault's chlorophyll measurements, which suggest the patch contains roughly 10 times less chlorophyll than is found in most ocean waters. Raimbault says the patch of ocean is the least productive marine region known to man.

In a sense, the patch is isolated from the global river and ocean circulation, which explains its lack of life. Being far away from the coast, it does not benefit from continental run-off, and the thermohaline circulation – the "global conveyor belt" that ferries ocean waters around the world – also mostly runs along the continental shelves.

To compound things, this area of ocean does not benefit from seasonal variations which tend to bring nutrients up from the seabed.

Carbon rich

Elsewhere, winter temperatures cool surface waters, making them denser and causing them to sink and push deeper, nutrient-rich waters up to the surface. But the surface waters of the southeast Pacific are warm year-round, which means they tend to perpetually "float" on top of the deeper, colder waters.

Satellite images reveal the chlorophyll-poor patch of water in the south-east Pacific (in purple) (Image: SeaWiFS Project, NASA/Goddard Space Flight Center, ORBIMAGE)

In spite of this, the expedition found the clear water is able to support a food chain, which Raimbault suspects relies heavily on the organisms' ability to recycle nutrients. "As there is no supply, there cannot be any loss either," he says.

Raimbault made another surprising discovery: the patch of the ocean that is poorest in life appears to be extremely rich in dissolved organic carbon.

He is currently teasing apart data in an attempt to explain the apparent contradiction, but believes it may be that the limited availability of nutrients such as nitrogen and phosphorus means the bacteria that would normally degrade the dissolved organic matter are not able to complete the task.

Journal reference: Geophysical Research Letters (DOI: 10.1029/2007GL029823)

McDonald's puts oil to green use

McDonald's is to convert all its UK delivery vehicles to run on biodiesel, using the firm's supply of cooking oil.

The fast-food chain has pledged to convert all its 155 vehicles by next year, starting with 45 lorries based at its distribution centre in Hampshire.

By using the fuel - made by combining cooking oil and rapeseed oil - the firm said it would save more than 1,650 tonnes of carbon every year.

The move follows a successful trial last year.

'Environmental example'

McDonald's has long faced criticism over its environmental record.

The firm said it was "delighted" to be putting its large stock of cooking oil to a "practical, efficient use" within its own business.

"This is a great example of how businesses can work together to help the environment," said its senior vice president Matthew Howe.

The company added that it was working on a range of other initiatives, from recycling to packaging, to reduce its carbon emissions.

It said it was committed to working with its suppliers to reduce the use of pure rapeseed oil in its manufacturing process.

Iron Age 'Mickey Mouse' Found

Jennifer Viegas, Discovery News

One thousand years before the cartoon character Mickey Mouse was even a glint in Walt Disney's eye, a French artist created a bronze brooch that looks remarkably like the famous rodent, according to archaeologists at Sweden's Lund Historical Museum, which houses the recent find.

The object, dated to 900 A.D., was excavated at a site called Uppåkra in southern Sweden.

Although made of bronze, the brooch ornament likely adorned the clothing of an Iron Age woman. Excavations at nearby sites, such as Järrestad, have yielded other unusual pieces of jewelry, including a necklace with a pail fob at the end and another necklace strung with 262 pieces of amber.

Mickey, is that you?

The bronze brooch may remind modern viewers of Mickey Mouse, but archaeologist Jerry Rosengren from Lund University told Discovery News that it actually represents a lion.

"Similar shaped jewelry representing lions originated in France around 700 A.D.," he said. "After 200 years, some French artist, who probably never saw a lion in his entire life, came up with this fantasy version."

Rosengren explained that lions became an important symbol to Scandinavian royals and warlords, particularly after Judeo-Christian teachings were introduced to the area.

The Bible mentions lions 157 times. Even before the Biblical era, this wild cat was an important symbol representing power, strength and victory in battle for some of the earliest Middle Eastern cultures.

Prior to the lion symbol's introduction to Sweden, royals there associated themselves with the wild boar, an ancestor to pigs that aggressively defends itself with its sharp tusks when threatened.

At Uppåkra, Rosengren also excavated Roman coins, stamped gold foil, various surgical instruments, figurines depicting Scandinavian gods and goddesses and a large temple complex that once was devoted to the Norse religion.

Since the figurines primarily show the Norse mythological god Odin and the goddess Freya, locals at the time probably conducted pagan ceremonies for these gods at the temple. Freya was a goddess of beauty, love, fertility and attraction, while Odin was a god of wisdom, war, battle and death.

The lion symbolism of the "Mickey Mouse" brooch, therefore, would have been in keeping with the popular culture and beliefs of the time in Sweden's Iron Age (500 B.C.-1050 A.D.), although the object's charm has not diminished over the years.

A spokesman for the Walt Disney Company told Discovery News, "Mickey has always been a timeless Disney character with universal appeal across the generations. This certainly reinforces that notion in a way we never expected."

It is possible that the connection between the two images has to do with the simple "circle upon circle" design. The Disney company's website mentions that the earliest drawings of Mickey Mouse in the 1920s consisted of multiple circles, even for the character's body. Changes over the following decades, such as the addition of Mickey's pear-shaped body and eye pupils, gradually led to how the character looks today.

One generation's rodent turned out to be another's fierce lion.

As Rosengren said, "An elite Swedish woman from the Iron Age never would have worn a mouse on her clothing, but the lion object certainly does look like our culture's modern Mickey Mouse."

Scientists discover key to manipulating fat

Pathway also explains stress-induced weight gain

Washington, D.C. − In what they call a "stunning research advance,” investigators at Georgetown University Medical Center have been able to use simple, non-toxic chemical injections to add and remove fat in targeted areas on the bodies of laboratory animals. They say the discovery, published online in Nature Medicine on July 1, could revolutionize human cosmetic and reconstructive plastic surgery and treatment of diseases associated with human obesity.

Investigators say these findings may also, over the long term, lead to better control of metabolic syndrome, a collection of risk factors that increase a patient’s chances of developing heart disease, stroke, and diabetes. An estimated 60 million Americans were affected by metabolic syndrome in 2000, according to a 2004 study funded by the Centers for Disease Control.

In the paper, the Georgetown researchers describe a mechanism they found by which stress activates weight gain in mice, and they say this pathway − which they were able to manipulate − may explain why people who are chronically stressed gain more weight than they should based on the calories they consume.

This pathway involves two players – a neurotransmitter (neuropeptide Y, or NPY) and the receptor it activates (the neuropeptide Y2 receptor, or Y2R) – acting in two types of cells in the fat tissue: endothelial cells lining blood vessels and the fat cells themselves. To add fat selectively to the mice they tested, the researchers injected NPY into a specific area. The researchers found that both NPY and Y2R are activated during stress, leading to apple-shaped obesity and metabolic syndrome. Both the weight gain and the metabolic syndrome, however, were prevented by administering a Y2R blocker into the abdominal fat.

Investigators at Georgetown University Medical Center have been able to use simple, nontoxic chemical injections to add and remove fat in targeted areas on the bodies of laboratory animals.

"We couldn’t believe such fat remodeling was possible, but the numerous different experiments conducted over four years demonstrated that it is, at least in mice; recent pilot data also suggest that a similar mechanism exist in monkeys as well,” said the study’s senior author, Zofia Zukowska, M.D., Ph.D., professor and chair of the Department of Physiology & Biophysics at Georgetown University Medical Center.

"We are hopeful that these findings might eventually lead to control of metabolic syndrome, which is a huge health issue for many Americans,” she said. "Decreasing fat in the abdomen of the mice we studied reduced the fat in their liver and skeletal muscles, and also helped to control insulin resistance, glucose intolerance, blood pressure and inflammation. Blocking Y2R might work the same way in humans, but much study will be needed to prove that.”

More immediately, the findings could provide some comfort to stressed individuals who blame themselves for a weight gain that seems outsized given the food they eat, said Lydia Kuo, a medical student who earned her Ph.D. in physiology through her work on the study.

"This is the first study to show that stress has a direct effect on fat accumulation, body weight and metabolism,” she said. "In humans, this kind of stress-mediated fat gain may have nothing to do with the brain, and is actually just a physiological response of their fat tissue.”

And perhaps the most rapid clinical application of these results will be in both cosmetic and reconstructive plastic surgery, said co-author Stephen Baker, M.D., D.D.S., associate professor of plastic surgery at Georgetown University Hospital. The ability to add fat as a graft would be useful for facial rejuvenation, breast surgery, buttock and lip enhancement, and facial reconstruction, he said, and using injections like those tested in this study could make fat grafts predictable, inexpensive, biocompatible and permanent.

Equally important, blocking Y2R resulted in local elimination of adipose, or fat, tissue, said Baker. "This is the first well-described mechanism found that can effectively eliminate fat without using surgery,” he said. "A safe, effective, non-surgical means to eliminate undesirable body fat would be of great benefit to our patients.”

Roxanne Guy, MD, president of the American Society of Plastic Surgeons, of which Baker is a member, is also excited by the findings, although she agrees that more research is needed to find out how the animal findings translate in humans. "Providing a long lasting, natural wrinkle filler and a scientifically studied, non-surgical method for melting fat could revolutionize ‘growing old gracefully,’” she said. "This discovery could also have positive implications for reconstructive plastic surgery procedures performed on the face and breasts.”

Stress + "comfort” foods = excess weight gain

As part of the study, Zukowska and her team examined the effects of several forms of chronic stress that mice can encounter in the wild, such as standing in a puddle of cold water or facing an aggressive alpha mouse for an hour a day over a two-week period, and they conducted the experiments in combination with either a regular diet or a high-fat, high-sugar diet. Stressed animals fed a normal diet did not gain weight, but stressed mice given a high-fat diet did. In fact, the researchers found these mice put on more weight than expected given the calories they were consuming.

"They gained twice as much fat as would be expected, and it was all in their belly area,” Kuo said. Stressed versus non-stressed animals ate the same amount of food, but the stressed animals processed it differently, she said, explaining, "the novel finding here is that NPY works on fat tissue, not in the brain.”

This finding makes sense if evolutionary advantage is considered, Zukowska said. "If you can store fat for times of hardship, you have a fat reserve that can be turned into energy for the next fight.

"The same mechanism may be happening in humans,” she said. "An accumulation of chronic stressors, like disagreements with your boss, taking care of a chronically ill child, or repeated traffic road rages, could be acting as an amplifier to a hypercaloric diet when protracted over time. Depression may also be acting as a stressor.”

Not only were the stressed mice much fatter, they began to exhibit the metabolic and cardiovascular consequences of obesity, Kuo said. "They had the glucose intolerance seen in diabetes, elevated blood pressure, inflammation in the blood vessels, and fat in their livers and muscles.”

"Although we don’t expect that, in the future, a person will be able to eat everything he or she wants, chase it down with a Y2R blocking agent, and end up looking like a movie star,” said Zukowska, "we are encouraged that these findings could improve human health.”

"The concepts described in this study might give us the tools to design one method to remodel fat and another to tackle obesity and metabolic syndrome,” Baker said. "It is very exciting.”
