A compilation of the latest biomedical research news

Sunday, October 25, 2015

Neuroscientists decode the brain activity of the worm



Head of a roundworm whose nerve cells have been genetically modified to glow under the microscope.
Credit: Image courtesy of Research Institute of Molecular Pathology

Manuel Zimmer and his team at the Research Institute of Molecular Pathology (IMP) present new findings on the brain activity of the roundworm Caenorhabditis elegans. The scientists were able to show that brain cells (neurons), though they carry out different functions, are organized in a brain-wide network and coordinate with each other in a collective manner. They could also directly link these coordinated activities in the worm's brain to the processes that generate behavior. The results of the study are presented in the current issue of the journal Cell.

One of the major goals of neuroscience is to unravel how the brain functions in its entirety and how it generates behavior. The biggest challenge in solving this puzzle is the sheer complexity of nervous systems. A mouse brain, for example, consists of millions of neurons linked to each other in a highly complex manner. In contrast, the nematode Caenorhabditis elegans is equipped with a nervous system comprising only 302 neurons. Because of its easy handling and its developmental properties, this tiny, transparent worm has become one of the most important model organisms for basic research. The complete list of connections between its individual neurons has been known for almost 30 years. Despite the low number of neurons, its neuronal networks possess a high degree of complexity and produce sophisticated behavioral output; the worm thus represents an animal of choice for studying brain function.

Interplay of neuronal groups in brain-wide networks

Researchers have mostly concentrated on studying the functions of single nerve cells, or a handful of them, and some of their interactions to explain behavior such as movement. For the worm, it was known how some single neurons function as isolated units within the network, but it remained unknown how they work together as a group. Manuel Zimmer, a group leader at the IMP, wanted to address this unsolved question in his research. Together with his team, he combined two state-of-the-art technologies for the current study: first, the scientists used 3D microscopy techniques to simultaneously and rapidly measure different regions of the brain; second, they used worms genetically engineered with a fluorescent protein that makes neurons light up when they are active. "This combination was brilliant for us, as it allowed brain-wide, single-cell-resolution recordings in real time," explains Zimmer.

Reading the worm's mind

Zimmer and his team tested the animals' reactions to external stimuli while the worms were searching for food. Under the microscope, a fascinating picture was revealed to the researchers: "We saw that most of the neurons are constantly active and coordinate with each other in a brain-wide manner. They act as an ensemble," explains postdoctoral scientist Saul Kato, who spearheaded the study together with Harris Kaplan and Tina Schrödel, graduate students in the Zimmer laboratory. The animals were immobilized during these experiments, so the recorded activity represents intentions rather than actual movement.

Using a different microscopy setup designed for freely moving worms, the scientists were able to identify the neurons that initiate movement. There was a direct correlation between the activity of certain networks and the impulse for movement; Zimmer and his co-workers could thus, in effect, watch the worms think. These network activities represented not only short movements but also their assembly into longer-lasting behavioral strategies such as foraging. "This is something that no one has managed to do before," Zimmer points out. Hints of similar patterns of neural activity have been found in higher animals, but so far only a fraction of the neurons in sub-regions of the brain could be examined at the same time. Zimmer and his colleagues are therefore confident that their results represent basic principles of brain function, even though the worm is only distantly related to mammals.

Investigation of molecular mechanisms

Many questions in the area of neurobiology remain largely unsolved, such as how decisions are made or whether the brain operates in a formal algorithmic manner, like a computer. In the next phase of research, Manuel Zimmer intends to analyze the molecular mechanisms underlying the processes he investigated. "It would also be interesting to have a closer look at long lasting brain states such as sleep and waking," he says, laying out his ambitious plans for the future.

Source:
Research Institute of Molecular Pathology. | Sciencedirect

Reference:
Kato et al. Global Brain Dynamics Embed the Motor Command Sequence of Caenorhabditis elegans. Cell, October 2015 DOI: 10.1016/j.cell.2015.09.034

Deep-sea bacteria could help neutralize greenhouse gas

Deep-sea bacteria could help neutralize greenhouse gas.
Credit: Image courtesy of University of Florida

A type of bacteria plucked from the bottom of the ocean could be put to work neutralizing large amounts of industrial carbon dioxide in the Earth’s atmosphere, a group of University of Florida researchers has found.

Carbon dioxide, a major contributor to the buildup of atmospheric greenhouse gases, can be captured and neutralized in a process known as sequestration. Much of that carbon dioxide comes from fossil fuel combustion, whose exhaust stream is known as flue gas. But converting the carbon dioxide into a harmless compound requires a durable, heat-tolerant enzyme. That’s where the bacterium studied by UF Health researchers comes into play. The bacterium -- Thiomicrospira crunogena -- produces carbonic anhydrase, an enzyme that helps remove carbon dioxide in organisms.

So what makes the deep-sea bacterium so attractive? It lives near hydrothermal vents, so the enzyme it produces is accustomed to high temperatures. That’s exactly what’s needed for the enzyme to work during the process of reducing industrial carbon dioxide, said Robert McKenna, Ph.D., a professor of biochemistry and molecular biology in the UF College of Medicine, a part of UF Health.

“This little critter has evolved to deal with those extreme temperature and pressure problems. It has already adapted to some of the conditions it would face in an industrial setting,” he said.

The findings by the McKenna group, which included graduate research assistants Brian Mahon and Avni Bhatt, were published recently in the journals Acta Crystallographica Section D: Biological Crystallography and Chemical Engineering Science.

The chemistry of sequestering works this way: The enzyme, carbonic anhydrase, catalyzes a chemical reaction between carbon dioxide and water. The carbon dioxide interacts with the enzyme, converting the greenhouse gas into bicarbonate. The bicarbonate can then be further processed into products such as baking soda and chalk.
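Written as a chemical equation, this is the standard reversible hydration reaction catalyzed by carbonic anhydrase (CA):

```latex
\mathrm{CO_2} + \mathrm{H_2O}
  \;\overset{\text{CA}}{\rightleftharpoons}\;
  \mathrm{H_2CO_3}
  \;\rightleftharpoons\;
  \mathrm{HCO_3^-} + \mathrm{H^+}
```

The bicarbonate ion (HCO3-) on the right is the product the article describes being processed further into baking soda (sodium bicarbonate) or chalk (calcium carbonate).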

In an industrial setting, the UF researchers believe the carbon dioxide could be captured this way: The carbonic anhydrase would be immobilized with solvent inside a reactor vessel that serves as a large purification column. Flue gas would be passed through the solvent, with the carbonic anhydrase converting the carbon dioxide into bicarbonate.

Neutralizing industrial quantities of carbon dioxide can require a significant amount of carbonic anhydrase, so McKenna’s group found a way to produce the enzyme without repeatedly harvesting it from the sea floor. The enzyme can be produced in a laboratory using a genetically engineered version of the common bacterium E. coli. So far, the UF Health researchers have produced several milligrams of the carbonic anhydrase, though Bhatt said much larger quantities would be needed to neutralize carbon dioxide on an industrial scale.

That’s just one of the challenges researchers face before the enzyme could be put to use against carbon dioxide in real-world settings. While it has good heat tolerance, the enzyme studied by McKenna’s team isn’t particularly efficient.

“You want it to do the reaction faster and more efficiently,” Bhatt said. “The fact that it has such a high thermal stability makes it a good candidate for further study.”
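The trade-off Bhatt describes, reaction speed versus thermal stability, can be sketched with textbook Michaelis-Menten kinetics. This is a hedged illustration only: the two variants and all kinetic constants below are invented placeholders, not measured parameters of the T. crunogena enzyme.

```python
# A minimal sketch of why catalytic efficiency matters for sequestration,
# using Michaelis-Menten kinetics. All constants are hypothetical.

def mm_rate(kcat, km, substrate, enzyme):
    """Michaelis-Menten rate: v = kcat * [E] * [S] / (Km + [S])."""
    return kcat * enzyme * substrate / (km + substrate)

# Two hypothetical variants: one fast, one slower but heat-tolerant.
fast = {"kcat": 1.0e6, "km": 8.0e-3}    # kcat in s^-1, Km in M
stable = {"kcat": 2.0e5, "km": 1.0e-2}  # kcat in s^-1, Km in M

s = 5.0e-3   # dissolved CO2 concentration (M), illustrative
e = 1.0e-9   # enzyme concentration (M), illustrative

for name, p in (("fast", fast), ("heat-stable", stable)):
    v = mm_rate(p["kcat"], p["km"], s, e)
    print(f"{name}: v = {v:.2e} M/s")
```

Under these made-up numbers the heat-stable variant turns over CO2 several-fold more slowly; an engineered variant for industry would ideally combine the high kcat of the fast enzyme with the thermal tolerance of the stable one.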

Ideally, Bhatt said, more research will produce a variant of the enzyme that is both heat-tolerant and fast-acting enough that it can be used in industrial settings. Next, they want to study ways to increase the enzyme’s stability and longevity, which are important issues to be addressed before the enzyme could be put into widespread industrial use.

While carbonic anhydrase’s ability to neutralize carbon dioxide has been widely studied by McKenna and other scientists around the world for some time, finding the best enzyme and putting it to work in an efficient and affordable carbon sequestration system has been challenging. Still, McKenna said he is encouraged by the prospect of discoveries that could ultimately benefit the planet.

“It shows that it’s physically possible to take known enzymes such as carbonic anhydrase and utilize them to pull carbon dioxide out of flue gas,” he said.

The study was funded by grant GM25154 from the National Institutes of Health and grant NSF-MCB-0643713 from the National Science Foundation.

Video: https://www.youtube.com/watch?v=Zk_u3OrxWZQ

Source: University of Florida. | By: Doug Bennett.

Probable Biomarker for Premature Death

This schematic summarizes an investigation of the biology of GlycA, a known biomarker for short-term mortality. The authors reveal GlycA's long-term behavior in apparently healthy individuals: it remains stable for more than 10 years and is associated with chronic low-grade inflammation. Accordingly, GlycA predicts death from infection up to 14 years in the future.
Credit: Ritchie et al./Cell Systems 2015

A single blood test could reveal whether an otherwise healthy person is unusually likely to die of pneumonia or sepsis within the next 14 years. Based on an analysis of 10,000 individuals, researchers have identified a molecular byproduct of inflammation, called GlycA, which seems to predict premature death due to infections.

The findings, published October 22 in Cell Systems, suggest that high GlycA levels in the blood indicate a state of chronic inflammation that may arise from low-level chronic infection or an overactive immune response. That inflammation damages the body, which likely renders individuals more susceptible to severe infections.

"As biomedical researchers, we want to help people, and there are few more important things I can think of than identifying apparently healthy individuals who might actually be at increased risk of disease and death," said co-senior author Michael Inouye, of the University of Melbourne, in Australia. "We want to short-circuit that risk, and to do that we need to understand what this blood biomarker of disease risk is actually telling us."

Inouye and his colleagues note that additional studies are needed to uncover the mechanisms involved in GlycA's link to inflammation and premature death, and whether testing for GlycA levels in the clinic might someday be warranted.

"We still have a lot of work ahead to understand if we can modify the risk in some way," said co-senior author Johannes Kettunen, of the University of Oulu and the National Institute for Health and Welfare, in Finland. "I personally would not want to know I was at elevated risk of death or disease due to this marker if there was nothing that could be done about it."

For example, to plan a course of treatment, researchers need to know whether high GlycA is the result of a chronic, low-level microbial infection or an aberrant reaction of the body's own inflammatory response.

The findings will likely form the foundation for numerous other studies that will investigate the role of GlycA in the body. "The more high-quality genomics data we have, linked health records and long-term follow-up, the better our models and predictions will be," Inouye says. "This study is an example of the progress that can be made when altruistic research volunteers, clinicians, technologists, and data scientists work together, but we have the potential to do much more, and large-scale strategic inter-disciplinary initiatives are vitally needed."

Source:
Cell Press. | Sciencedirect

Reference:
Ritchie et al. The biomarker GlycA is associated with chronic inflammation and predicts long-term risk of severe infection. Cell Systems, October 2015 DOI: 10.1016/j.cels.2015.09.007

Saturday, October 24, 2015

Antioxidant use may promote spread of cancer

Picture source: dailymail.co.uk
Metastasis, the process by which cancer cells disseminate from their primary site to other parts of the body, leads to the death of most cancer patients. New research suggests that when antioxidants were administered to lab mice, their cancer spread more quickly than in mice that did not get antioxidants.

A team of scientists at the Children's Research Institute at UT Southwestern (CRI) has made a discovery that suggests cancer cells benefit more from antioxidants than normal cells, raising concerns about the use of dietary antioxidants by patients with cancer. The studies were conducted in specialized mice that had been transplanted with melanoma cells from patients. Prior studies had shown that the metastasis of human melanoma cells in these mice is predictive of their metastasis in patients.

The study was published online today in Nature.

It has long been known that the spread of cancer cells from one part of the body to another is an inefficient process in which the vast majority of cancer cells that enter the blood fail to survive.

"We discovered that metastasizing melanoma cells experience very high levels of oxidative stress, which leads to the death of most metastasizing cells," said Dr. Sean Morrison, CRI Director and Mary McDermott Cook Chair in Pediatric Genetics at UT Southwestern Medical Center. "Administration of antioxidants to the mice allowed more of the metastasizing melanoma cells to survive, increasing metastatic disease burden."

"The idea that antioxidants are good for you has been so strong that there have been clinical trials done in which cancer patients were administered antioxidants," added Dr. Morrison, who is also a CPRIT Scholar in Cancer Research and a Howard Hughes Medical Institute Investigator. "Some of those trials had to be stopped because the patients getting the antioxidants were dying faster. Our data suggest the reason for this: cancer cells benefit more from antioxidants than normal cells do."

Healthy people who do not have cancer may very well benefit from antioxidants that can help reduce damage from highly reactive oxidative molecules generated by normal metabolism. While the study's results have not yet been tested in people, they raise the possibility that cancer should be treated with pro-oxidants and that cancer patients should not supplement their diet with large doses of antioxidants.

"This finding also opens up the possibility that when treating cancer, we should test whether increasing oxidative stress through the use of pro-oxidants would prevent metastasis," said Dr. Morrison. "One potential approach is to target the folate pathway that melanoma cells use to survive oxidative stress, which would increase the level of oxidative stress in the cancer cells."

Source:
UT Southwestern Medical Center.

Reference:
Elena Piskounova, Michalis Agathocleous, Malea M. Murphy, Zeping Hu, Sara E. Huddlestun, Zhiyu Zhao, A. Marilyn Leitch, Timothy M. Johnson, Ralph J. DeBerardinis, Sean J. Morrison. Oxidative stress inhibits distant metastasis by human melanoma cells. Nature, 2015; DOI: 10.1038/nature15726

Deep sea methane metabolizing organism discovered



The production and consumption of methane by microorganisms play a major role in the global carbon cycle. Although these processes can occur in a range of environments, from animal guts to the deep ocean, these metabolisms are confined to the Archaea. Evans et al. used metagenomics to assemble two nearly complete archaeal genomes from deep groundwater methanogens (see the Perspective by Lloyd). The two reconstructed genomes are members of the recently described Bathyarchaeota and not the phylum to which all previously known methane-metabolizing archaea belonged.

Textbooks on methane-metabolising organisms might have to be rewritten after researchers in a University of Queensland-led international project on 23 October announced the discovery of two new organisms.

Deputy Head of UQ's Australian Centre for Ecogenomics in the School of Chemistry and Molecular Biosciences Associate Professor Gene Tyson said these new organisms played an unknown role in greenhouse gas emissions and consumption.

"We sampled the microorganisms in the water from a deep coal seam aquifer 600m below the earth's surface in the Surat Basin, near Roma, Queensland, and reconstructed genomes of organisms able to perform methane metabolism," Associate Professor Tyson said.

"Traditionally, these types of methane-metabolising organisms occur within a single cluster of microorganisms called the Euryarchaeota," Associate Professor Tyson said. "This makes us wonder: how many other types of methane-metabolising microorganisms are out there?"

Dr Tyson's group discovered novel methane metabolising organisms belonging to a group of microorganisms, called the Bathyarchaeota - an evolutionarily diverse group of microorganisms found in a wide range of environments, including deep-ocean and freshwater sediments.

"To use an analogy, the finding is like knowing about black and brown bears, and then coming across a giant panda," Dr Tyson said.

"They have some basic characteristics in common, but in other ways they are fundamentally different.

"The significance of the research is that it expands our knowledge of the diversity of life on Earth and suggests we are missing other organisms involved in carbon cycling and methane production."

The discovery of the novel methane-metabolising microorganisms was made using techniques that sequence DNA on a large scale and assemble these sequences into genomes using advanced computational tools, many of which were developed at The Australian Centre for Ecogenomics over the past 24 months.

The research, titled "Methane metabolism in the archaeal phylum Bathyarchaeota revealed by genome-centric metagenomics," was published in Science.

Reference:
"Methane metabolism in the archaeal phylum Bathyarchaeota revealed by genome-centric metagenomics." Science 23 October 2015: DOI: 10.1126/science.aac7745

Source: University of Queensland | Phys.

Cellular damage control system helps plants tough it out

Plants Naturally Recycle Chloroplasts


In plants, chloroplasts can accumulate high levels of toxic singlet oxygen, a reactive oxygen species formed during photosynthesis. In these cells, most of the chloroplasts (green organelles) and mitochondria (red organelles) appear healthy. However, the chloroplast in the top left of the image is being selectively degraded and is interacting with the central vacuole (blue). Salk scientists reveal how this strategy to degrade singlet oxygen-damaged chloroplasts may help a cell avoid any further oxidative damage during photosynthesis.
Credit: Salk Institute

As food demands rise to unprecedented levels, farmers are in a race against time to grow plants that can withstand environmental challenges--infestation, climate change and more. Now, new research at the Salk Institute, published in Science on October 23, 2015, reveals details of a fundamental mechanism by which plants manage their energy intake, which could potentially be harnessed to improve yield.

"Plants are unique in that they are stuck wherever they germinate, so they must use a variety of ways to deal with environmental challenges," says Joanne Chory, senior author of the paper and director of Salk's Plant Molecular and Cellular Biology Laboratory. "Understanding the techniques plants use to cope with stress can help us to engineer stronger crops with improved yield to face our growing food shortage."

Plants have cellular organelles akin to tiny solar panels in each leaf. These microscopic structures, called chloroplasts, convert sunlight into chemical energy to enable the plant to grow. The command center of the cell, the nucleus, occasionally sends out signals to destroy all of the 50-100 chloroplasts in the cell, such as in autumn when leaves turn brown and drop off. However, the Salk team discovered how a cell can also single out individual malfunctioning chloroplasts, degrade them, and reuse their materials--a mechanism that had been suspected but never shown until now.

"We've discovered a new pathway that lets a cell do a quality control check on the chloroplasts," says Jesse Woodson, Salk staff scientist and first author of the paper. Chloroplasts are full of enzymes, proteins and other materials that the plant can otherwise use if the chloroplast is defective (for example, creating toxic materials) or not needed.

While studying a mutant version of the model plant Arabidopsis, the team noticed the plant was making defective chloroplasts that created a reactive, toxic molecule called singlet oxygen that accumulated in the cells. The team noticed that the cells were marking the damaged chloroplasts for degradation with a protein tag called ubiquitin, which is used in organisms from yeast to humans to modify the function of a protein. Under closer investigation, the team observed that a protein called PUB4 was initiating the tagging.

"Damaged chloroplasts were being coated in this ubiquitin protein," says Woodson. "We think this is fundamentally different than the cell-wide signal, because the cell wants to continue doing photosynthesis, but has some bad chloroplasts to target and remove."

While PUB4 had been tied to cell death in other work, the Salk team showed that this protein initiates the degradation of chloroplasts by placing ubiquitin tags to mark the organelle for cellular recycling. This process, says Woodson, is like labeling defective solar panels to break them down for other materials.

"Understanding the basic biology of plants like this selective chloroplast degradation leads us a step closer to learning how to control chloroplasts and design crops that are more resistant to stressors," says Chory, who is also a Howard Hughes Medical Institute investigator and holder of the Howard H. and Maryam R. Newman Chair in Plant Biology. For example, if a plant is growing in an environment that is fairly relaxed, one could potentially reduce the degradation of chloroplasts to boost the growth of the plant. Or, if the environment contained a lot of sun, spurring on the breakdown and regeneration of chloroplasts could help the plant thrive.

Interestingly, chloroplasts could help us understand our brains as well. Neurons have energy-generating organelles similar to chloroplasts, called mitochondria. "Recently it's become apparent that mitochondria are selectively degraded in the cell and that an accumulation of bad mitochondria could lead to diseases like Parkinson's and maybe Alzheimer's," says Woodson. "Cells, whether plant or animal, learn how to degrade defunct energy organelles selectively to survive."

By better understanding this process in chloroplasts, the Salk team may be able to also glean insight into how the cells handle misbehaving mitochondria. "So far it seems like it might be a parallel process," Woodson adds. "We're hoping with our molecular and genetic tools available for plants we can continue to uncover general concepts on how cells do these quality control checks on organelles and learn something about neurodegenerative disease as well."

Source:
Salk Institute. | Sciencedirect

Reference:
Jesse D. Woodson, Matthew S. Joens, Andrew B. Sinson, Jonathan Gilkerson, Patrice A. Salomé, Detlef Weigel, James A. Fitzpatrick, and Joanne Chory. Ubiquitin facilitates a quality-control pathway that removes damaged chloroplasts. Science, 23 October 2015: 450-454 DOI: 10.1126/science.aac7444

Friday, October 23, 2015

Gene therapy treats all muscles in the body in muscular dystrophy dogs

Human clinical trials are the next step.
Source: www.healthcare.uiowa.edu

Muscular dystrophy, which affects approximately 250,000 people in the U.S., occurs when damaged muscle tissue is replaced with fibrous, fatty or bony tissue and loses function. For years, scientists have searched for a way to successfully treat the most common form of the disease, Duchenne muscular dystrophy (DMD), which primarily affects boys. Now, a team of University of Missouri researchers has successfully treated dogs with DMD and says that human clinical trials are being planned for the next few years.

"This is the most common muscle disease in boys, and there is currently no effective therapy," said Dongsheng Duan, the study leader and the Margaret Proctor Mulligan Professor in Medical Research at the MU School of Medicine. "This discovery took our research team more than 10 years, but we believe we are on the cusp of having a treatment for the disease."

Patients with Duchenne muscular dystrophy have a gene mutation that disrupts the production of a protein known as "dystrophin." Absence of dystrophin starts a chain reaction that eventually leads to muscle cell degeneration and death. Affected boys lose their ability to walk and breathe as they get older.


Dystrophin is a rod-shaped cytoplasmic protein, and a vital part of a protein complex that connects the cytoskeleton of a muscle fiber to the surrounding extracellular matrix through the cell membrane. This complex is variously known as the costamere or the dystrophin-associated protein complex.
Dystrophin deficiency has been definitively established as one of the root causes of the general class of myopathies collectively referred to as muscular dystrophy. The large cytosolic protein was first identified in 1987 by Louis M. Kunkel, after the 1986 discovery of the mutated gene that causes Duchenne muscular dystrophy (DMD).

The disease places significant limitations on those afflicted with it. The dystrophin gene is also one of the largest in the human body. "Due to its size, it is impossible to deliver the entire gene with a gene therapy vector, which is the vehicle that carries the therapeutic gene to the correct site in the body," Duan said. "Through previous research, we were able to develop a miniature version of this gene called a microgene. This minimized dystrophin protected all muscles in the body of diseased mice."

However, it took the team more than 10 years to develop a strategy that can safely send the micro-dystrophin to every muscle in a dog that is afflicted by the disease. The dog has a body size similar to that of an affected boy. Success in the dog will set the foundation for human tests.

In this latest study, the MU team demonstrated for the first time that a common virus can deliver the microgene to all muscles in the body of a diseased dog. The dogs were injected with the virus when they were two to three months old and just starting to show signs of DMD. The dogs are now six to seven months old and continue to develop normally.

"The virus we are using is one of the most common viruses; it is also a virus that produces no symptoms in the human body, making this a safe way to spread the dystrophin gene throughout the body," Duan said. "These dogs develop DMD naturally in a similar manner as humans. It's important to treat DMD early before the disease does a lot of damage as this therapy has the greatest impact at the early stages in life."

Sources:
University of Missouri-Columbia. | Protein Data Bank | Sciencedirect

Reference:
Yongping Yue, Xiufang Pan, Chady H. Hakim, Kasun Kodippili, Keqing Zhang, Jin-Hong Shin, Hsiao T. Yang, Thomas McDonald, Dongsheng Duan. Safe and bodywide muscle transduction in young adult Duchenne muscular dystrophy dogs with adeno-associated virus. Human Molecular Genetics, 2015; 24 (20): 5880 DOI: 10.1093/hmg/ddv310

Thursday, October 22, 2015

Study reveals how brain multitasks

Findings help explain how the brain pays attention to what's important and how neural circuits may be 'broken' in attention-deficit disorders

This is an image of a human brain. The thalamic reticular nucleus (TRN) surrounds the thalamus (pictured in red, with a switchboard in the background). Credit: Courtesy of Michael Halassa

Researchers at NYU Langone Medical Center say they have added to evidence that a shell-shaped region in the center of the mammalian brain, known as the thalamic reticular nucleus or TRN, is likely responsible for the ability to routinely and seamlessly multitask.

The process, they suggest, is done by individual TRN neurons that act like a "switchboard," continuously filtering sensory information and shifting more or less attention onto one sense -- like sight -- while relatively blocking out distracting information from other senses, including sound.

In their research in mice, described in the journal Nature online Oct. 21, the investigators showed that TRN neurons, which have been previously implicated in the dampening of brain signals in people, were also less active when the mice were led to focus on -- and respond to -- a visual flash of light to get a milk reward.

In contrast, when the mice were made to pay attention to a sound and ignore the flash of light, researchers say TRN neurons that controlled vision were more active, suppressing the visual signals in order to pay more attention to the sound. Earlier research by the same team of scientists showed that different TRN neurons controlled specific senses.

"Our latest research findings support a newly emerging model of how the brain focuses attention on a particular task, using neurons in the thalamic reticular nucleus as a switchboard to control the amount of information the brain receives, limiting and filtering out sensory information that we don't want to pay attention to," says senior study investigator and neuroscientist Michael Halassa, MD, PhD.

"Filtering out distracting or irrelevant information is a vital function," explains Halassa, an assistant professor of neuroscience and psychiatry at NYU Langone and its Druckenmiller Neuroscience Institute. "People need to be able to focus on one thing and suppress other distractions to perform everyday functions such as driving, talking on the phone, and socializing."

According to Halassa, the new research sets the stage for ever more detailed studies on the complex behavior involved in how the mammalian brain pays attention to what's important, and especially how those neural circuits are broken in cases of attention-deficit diseases, such as ADHD, autism, and schizophrenia.

Halassa says previous research, at NYU and elsewhere, had identified the TRN region of the brain and its individual neurons as possible regulators of the brain's ability to multitask but had until now been unable to successfully prove the hypothesis. In fact, Nobel laureate Francis Crick hypothesized as early as 1984 that the TRN might function like a gate for the flow of sensory information. However, Halassa explains, scientists faced technical struggles in accurately recording signals from the small anatomical structure of the TRN deep within the brain. There was also no method of isolating behavior associated with the TRN until his research team designed a novel experimental setup to do so.

For the new study, Halassa and his colleagues developed a behavioral experiment in which they monitored the ability of mice to successfully collect a milk reward by paying attention to a light signal or a sound. The test, they say, was designed to gauge how well the area of the brain known to control higher behavioral functions, the prefrontal cortex, could direct the focus on one sense over another.

As part of the test, researchers distracted the mice with opposing stimuli: if the mouse was expecting a flash of light to guide it to the milk reward, the researchers distracted it with a sound, and vice versa. Distraction reduced the mice's success at collecting the food reward from nearly 90 percent to 70 percent, even when the distracting stimulus was later removed.

Concurrently, the research team recorded electrical signals from TRN neurons and also tracked the mice's behavior while at the same time inactivating various parts of the brain's neural circuits with a laser beam.

They found that inactivating the prefrontal cortex region of the brain, which is believed responsible for decision-making in complex behaviors, disrupted TRN neural signaling and reduced mice to only random success in obtaining a milk reward when presented with specifically cued light or sound signals. Inactivating the TRN, while leaving the cortical regions intact, also diminished success with obtaining the prompted food reward. Halassa says these results demonstrate how the prefrontal cortex is essential to performing such behavioral tasks and how this part of the brain "stores the knowledge ultimately communicated to the TRN to control how much visual or auditory sensory information is suppressed or not, and how the brain ultimately multitasks."

Halassa says the team next plans to study exactly how much "distracting" information the TRN can block or allow through and how this mechanism can get disrupted in models of disease, such as autism.

Source:
NYU Langone Medical Center / New York University School of Medicine | ScienceDirect
Reference:
Ralf D. Wimmer, L. Ian Schmitt, Thomas J. Davidson, Miho Nakajima, Karl Deisseroth, Michael M. Halassa. Thalamic control of sensory selection in divided attention. Nature, 2015; DOI: 10.1038/nature15398

Wednesday, October 21, 2015

Life on Earth likely started 4.1 billion years ago, much earlier than scientists thought

Evidence that early Earth was not dry and desolate


Fossil-like rock found in Australia contains hints of life from 4.1 billion years ago. Photo: Bruce Watson/Proceedings of the National Academy of Sciences (PNAS) via AP

UCLA geochemists have found evidence that life likely existed on Earth at least 4.1 billion years ago -- 300 million years earlier than previous research suggested. The discovery indicates that life may have begun shortly after the planet formed 4.54 billion years ago.

Carbon in 4.1 billion year old zircon.
Credit: Stanford/UCLA.

The research is published today in the online early edition of the journal Proceedings of the National Academy of Sciences.

"Twenty years ago, this would have been heretical; finding evidence of life 3.8 billion years ago was shocking," said Mark Harrison, co-author of the research and a professor of geochemistry at UCLA.

"Life on Earth may have started almost instantaneously," added Harrison, a member of the National Academy of Sciences. "With the right ingredients, life seems to form very quickly."

The new research suggests that life existed prior to the massive bombardment of the inner solar system that formed the moon's large craters 3.9 billion years ago.

"If all life on Earth died during this bombardment, which some scientists have argued, then life must have restarted quickly," said Patrick Boehnke, a co-author of the research and a graduate student in Harrison's laboratory.

Scientists had long believed the Earth was dry and desolate during that time period. Harrison's research -- including a 2008 study in Nature he co-authored with Craig Manning, a professor of geology and geochemistry at UCLA, and former UCLA graduate student Michelle Hopkins -- is proving otherwise.

"The early Earth certainly wasn't a hellish, dry, boiling planet; we see absolutely no evidence for that," Harrison said. "The planet was probably much more like it is today than previously thought."

The researchers, led by Elizabeth Bell -- a postdoctoral scholar in Harrison's laboratory -- studied more than 10,000 zircons originally formed from molten rocks, or magmas, from Western Australia. Zircons are heavy, durable minerals related to the synthetic cubic zirconia used for imitation diamonds. They capture and preserve their immediate environment, meaning they can serve as time capsules.

The scientists identified 656 zircons containing dark specks that could be revealing and closely analyzed 79 of them with Raman spectroscopy, a technique that shows the molecular and chemical structure of ancient microorganisms in three dimensions.

Bell and Boehnke, who have pioneered chemical and mineralogical tests to determine the condition of ancient zircons, were searching for carbon, the key component for life.

One of the 79 zircons contained graphite -- pure carbon -- in two locations.

"The first time that the graphite ever got exposed in the last 4.1 billion years is when Beth Ann and Patrick made the measurements this year," Harrison said.

How confident are they that their zircon represents 4.1 billion-year-old graphite?

"Very confident," Harrison said. "There is no better case of a primary inclusion in a mineral ever documented, and nobody has offered a plausible alternative explanation for graphite of non-biological origin into a zircon."

The graphite is older than the zircon containing it, the researchers said. They know the zircon is 4.1 billion years old, based on its ratio of uranium to lead; they don't know how much older the graphite is.
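The uranium-lead clock works because uranium-238 decays to lead-206 at a known exponential rate, so a measured Pb/U ratio pins down the crystal's age. The sketch below uses the standard decay equation; the input ratio is an illustrative value chosen to give a roughly 4.1-billion-year age, not a figure taken from the paper:

```python
import math

# Decay constant of uranium-238 (per year); half-life ~4.468 billion years.
LAMBDA_U238 = 1.55125e-10

def u_pb_age(pb206_u238_ratio):
    """Age in years from a measured 206Pb/238U atomic ratio."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238

# A ratio of ~0.889 corresponds to roughly 4.1 billion years.
age = u_pb_age(0.889)
```

Because the decay constant is known to high precision, the main practical challenge is confirming that the zircon has remained a closed system since crystallization.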

The research suggests life in the universe could be abundant, Harrison said. On Earth, simple life appears to have formed quickly, but it likely took many millions of years for very simple life to evolve the ability to photosynthesize.

The carbon contained in the zircon has a characteristic signature -- a specific ratio of carbon-12 to carbon-13 -- that indicates the presence of photosynthetic life.
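That "characteristic signature" is conventionally expressed as delta-13C: the per-mil deviation of a sample's 13C/12C ratio from a reference standard, with photosynthetic (biogenic) carbon giving strongly negative values. A minimal sketch, assuming the VPDB reference ratio; the sample value is illustrative, not the paper's measurement:

```python
# 13C/12C ratio of the VPDB reference standard.
R_VPDB = 0.011237

def delta_13c(r_sample):
    """delta-13C in per mil relative to the VPDB standard."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Photosynthetic carbon is depleted in 13C, so its ratio sits a few
# percent below the standard, giving a value near -25 per mil:
biogenic_like = delta_13c(0.010956)
```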

"We need to think differently about the early Earth," Bell said.

Wendy Mao, an associate professor of geological sciences and photon science at Stanford University, is the other co-author of the research.

The research was funded by the National Science Foundation and a Simons Collaboration on the Origin of Life Postdoctoral Fellowship granted to Bell.

Source:
University of California - Los Angeles | by Stuart Wolpert.

Reference:
Elizabeth A. Bell, Patrick Boehnke, T. Mark Harrison, and Wendy L. Mao. Potentially biogenic carbon preserved in a 4.1 billion-year-old zircon. PNAS, October 19, 2015 DOI: 10.1073/pnas.1517557112

Tuesday, October 20, 2015

Camels test positive for respiratory virus (MERS) in Kenya



MERS-CoV particles as seen by negative stain electron microscopy. Virions contain characteristic club-like projections emanating from the viral membrane.
Credit: Maureen Metcalfe/Cynthia Goldsmith/Azaibi Tamin 


A new study has found that nearly half of camels in parts of Kenya have been infected by the virus that causes Middle East Respiratory Syndrome (MERS) and calls for further research into the role they might play in the transmission of this emerging disease to humans.

MERS was first identified in Saudi Arabia in 2012 and there is currently no vaccine or specific treatment available. To date, it has infected 1,595 people in more than 20 countries and caused 571 deaths. Although the majority of human cases of MERS have been attributed to human-to-human infections, camels are likely to be a major reservoir host for the virus and an animal source of MERS infection in humans.

A team of scientists from the University of Liverpool and institutions in the USA, Kenya and Europe, surveyed 335 dromedary - single humped - camels from nine herds in Laikipia County, Kenya and found that 47% tested positive for MERS antibodies, showing they had been exposed to the virus.

Professor Eric Fèvre, Chair of Veterinary Infectious Diseases at the University's Institute of Infection and Global Health said: "Although Laikipia County camel density is low relative to more northern regions of Kenya, our study suggests the population is sufficient to maintain high rates of viral transmission and that camels may be constantly re-infected and serve as long term carriers of the virus. MERS in camels, it seems, is much like being infected by the common cold.

"The significance of this is not yet clear, because we don't know if the virus is universally zoonotic. While the risk of these camels spreading MERS to humans cannot yet be discounted, it appears to be, for now, very low as there have been no human cases diagnosed in Kenya.

"It might be that the mutations required to make this virus zoonotic have only evolved recently in the Middle East, where the human outbreaks have so far been concentrated."

Lead author Dr Sharon Deem, Director of the Saint Louis Zoo Institute for Conservation Medicine, said: "Demand for livestock products, such as meat and milk, is rising across the globe and could offer poor farmers a route out of poverty as markets expand, but zoonotic disease remains a major obstacle to this goal.

"Further research to determine whether the MERS virus is dangerous to humans in Kenya and other sub-Saharan countries is critical."

Source:
University of Liverpool | ScienceDirect

Reference:
Chantal B. Reusken et al. Serological Evidence of MERS-CoV Antibodies in Dromedary Camels (Camelus dromedaries) in Laikipia County, Kenya. PLOS ONE, October 2015 DOI: 10.1371/journal.pone.014012

Chantal BEM Reusken, Bart L Haagmans, Marcel A Müller, Carlos Gutierrez, Gert-Jan Godeke, Benjamin Meyer, Doreen Muth, V Stalin Raj, Laura Smits-De Vries, Victor M Corman, Jan-Felix Drexler, Saskia L Smits, Yasmin E El Tahir, Rita De Sousa, Janko van Beek, Norbert Nowotny, Kees van Maanen, Ezequiel Hidalgo-Hermoso, Berend-Jan Bosch, Peter Rottier, Albert Osterhaus, Christian Gortázar-Schmidt, Christian Drosten, Marion PG Koopmans. Middle East respiratory syndrome coronavirus neutralising serum antibodies in dromedary camels: a comparative serological study. The Lancet Infectious Diseases, 2013; 13 (10): 859 DOI: 10.1016/S1473-3099(13)70164-6

http://www.cdc.gov/coronavirus/mers/

http://www.who.int/csr/disease/coronavirus_infections/faq/en/

https://en.wikipedia.org/wiki/Middle_East_respiratory_syndrome_coronavirus

Monday, October 19, 2015

Study challenges scientific principle about Alzheimer protein amyloid beta

Scientific Reports, a Nature group journal, has recently published results that challenge the findings of studies to date on the initial aggregates formed by amyloid beta, a protein closely associated with the onset and development of Alzheimer's disease.

Globular shape that initial aggregates of amyloid beta protein adopt.
Credit: N. Carulla, IRB Barcelona

Headed by Natàlia Carulla, a specialist in biomedical chemistry at the Institute for Research in Biomedicine (IRB Barcelona), the study focuses on the number of molecules and shape that this protein has when it begins to aggregate, a process that leads to the so-called Abeta fibrils, the main components of the plaques observed in the brains of those suffering from Alzheimer's disease. "Comprehensive knowledge of the number of units and conformation of Abeta at the initial stages of aggregation is crucial for the design of drugs capable of breaking them up or preventing their formation," explains Natàlia Carulla.

The team at IRB Barcelona has studied the aggregation of two of the most common variants of Abeta, namely Abeta 40 and Abeta 42, with 40 and 42 amino acids, respectively, the latter being the variant most closely associated with Alzheimer's disease. The literature reports that while Abeta 40 self-aggregates to sequentially form dimers (two units), trimers (three units) and tetramers (four units), Abeta 42 self-aggregates to form pentamers (five units) and hexamers (six units). These findings have been cited more than 1000 times, and numerous studies have consequently been based on this premise. However, IRB Barcelona researchers Rosa Pujol-Pina and Sílvia Vilaprinyó-Pascual, the first two authors of the study, have observed that Abeta 40 and Abeta 42 go through exactly the same aggregation states.

The authors contend that the results published to date are biased by the technique most widely used to study Aβ aggregates, SDS-PAGE, which requires only a small amount of sample and is therefore favoured for routine studies. Using a new approach based on mass spectrometry and computational modelling, in collaboration with the IRB Barcelona groups headed by Marta Vilaseca and Modesto Orozco, Dr. Carulla's team has observed that both Abeta 40 and Abeta 42 form dimers, trimers and tetramers, and that at these initial stages the aggregates are spherical and lack a defined structure.

"The structure that we have observed challenges the kind of structure accepted until now, the so-called beta-sheet. It should be noted that up to now drug design has been based on the premise of interfering with the beta-sheet structure. We believe that this strategy should be reconsidered and recommend caution when using SDS-PAGE to study Abeta oligomers," states Sílvia Vilaprinyó-Pascual. The experiments on aggregation have been performed with several techniques, including SDS-PAGE. "This study will lead to reservations on the part of the scientific community and that is why we have been thorough and present methodologically robust data," says Natàlia Carulla.

Carulla's team is now working on the identification of therapeutic molecules that prevent the formation of the first amyloid beta aggregates.

Source:
Institute for Research in Biomedicine-IRB | ScienceDaily

Reference:
Rosa Pujol-Pina, Sílvia Vilaprinyó-Pascual, Roberta Mazzucato, Annalisa Arcella, Marta Vilaseca, Modesto Orozco, Natàlia Carulla. SDS-PAGE analysis of Aβ oligomers is disserving research into Alzheimer's disease: appealing for ESI-IM-MS. Scientific Reports, 2015; 5: 14809 DOI: 10.1038/srep14809

Sunday, October 18, 2015

Investigators create complex kidney structures from human stem cells derived from adults

New technique offers model for studying disease, progress toward cell therapy

Researchers modeled kidney development and injury in kidney organoids (shown here), demonstrating that the organoid culture system can be used to study mechanisms of human kidney development and toxicity.
Credit: Ryuji Morizane, Brigham and Women's Hospital

Investigators at Brigham and Women's Hospital (BWH) and the Harvard Stem Cell Institute (HSCI) have established a highly efficient method for making kidney structures from stem cells that are derived from skin taken from patients. The kidney structures formed could be used to study abnormalities of kidney development, chronic kidney disease, the effects of toxic drugs, and be incorporated into bioengineered devices to treat patients with acute and chronic kidney injury. In the longer term, these methods could hasten progress toward replacing a damaged or diseased kidney with tissue derived from a patient's own cells. These results were published in Nature Biotechnology.

"Kidneys are the most commonly transplanted organs, but demand far outweighs supply," said co-corresponding author Ryuji Morizane, MD, PhD, associate biologist in BWH's Renal Division. "We have converted skin cells to stem cells and developed a highly efficient process to convert these stem cells into kidney structures that resemble those found in a normal human kidney. We're hopeful that this finding will pave the way for the future creation of kidney tissues that could function in a patient and eliminate the need for transplantation from a donor."

Chronic kidney disease (CKD) affects 9 to 11 percent of the U.S. adult population and is a serious public health problem worldwide. Central to the progression of CKD is the gradual and irreversible loss of nephrons, the individual functional units of the kidney. Patients with end-stage kidney disease benefit from treatments such as dialysis and kidney transplantation, but these approaches have several limitations, including the limited supply of compatible organ donors.

While the human kidney has some capacity to repair itself after injury, it cannot regenerate new nephrons. In previous studies, researchers successfully differentiated stem cells into heart, liver, pancreas or nerve cells by adding certain chemicals, but kidney cells proved challenging. Using normal kidney development as a roadmap, the BWH investigators developed an efficient method to create kidney precursor cells that self-assemble into structures mimicking the complex architecture of the kidney. The research team further tested these organoids -- three-dimensional organ structures grown in the lab -- and found that they could be used to model kidney development and the susceptibility of kidney tissue to drug toxicity. The kidney structures also have the potential to facilitate studies of how abnormalities arise as the human kidney develops in the uterus and to establish disease models for testing new therapies.

"This new finding could hasten progress to model human disease, find new therapeutic agents, identify patient-specific susceptibility to toxicity of drugs and may one day result in replacement of human kidney tissue in patients with kidney disease from cells derived from that same patient," said author Joseph V. Bonventre, chief of BWH's Renal Division and Chief of BWH's Division of Biomedical Engineering. "This approach is especially attractive because the tissues obtained would be 'personalized' and, because of their genetic identity to the patient from whom they were derived, this approach may ultimately lead to tissue replacement without the need for suppression of the immune system."

Source:
Brigham and Women's Hospital | ScienceDaily

Reference:
Ryuji Morizane, Albert Q Lam, Benjamin S Freedman, Seiji Kishi, M Todd Valerius, Joseph V Bonventre. Nephron organoids derived from human pluripotent stem cells model kidney development and injury. Nature Biotechnology, 2015; DOI: 10.1038/nbt.3392

Saturday, October 17, 2015

Larger brains do not lead to high IQs, new meta-analysis finds

Is brain size related to the cognitive ability of humans? This question has captured the attention of scientists for more than a century. In a meta-analysis of data from more than 8,000 participants, an international team of researchers shows that associations between in vivo brain volume and IQ are small, providing no evidence for a causal role of brain size in IQ test performance.

Brain scans (stock image). Credit: © nimon_t / Fotolia

As early as 1836, the German physiologist and anatomist Friedrich Tiedemann, in an article in the Philosophical Transactions, expressed his opinion that "there is undoubtedly a connection between the absolute size of the brain and the intellectual powers and functions of the mind." With the advent of brain imaging methods (e.g., MRI, PET), reliable assessments of in-vivo brain volume and investigations of its association with IQ are now possible.

Now, an international team of researchers, led by University of Vienna researchers Jakob Pietschnig, Michael Zeiler, and Martin Voracek from the Faculty of Psychology, together with Lars Penke (University of Göttingen) and Jelte Wicherts (Tilburg University), has published a meta-analysis in Neuroscience and Biobehavioral Reviews examining correlations between in-vivo brain volume and IQ. Based on data from 148 samples comprising over 8,000 participants, they report a robust but weak association between brain size and IQ. This association appeared to be independent of participant sex and age.
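Pooling correlations across many samples of different sizes is typically done on Fisher-z-transformed values weighted by sample size. The sketch below shows that standard fixed-effect procedure in general form; the study pairs are toy numbers for illustration, not data from the meta-analysis itself:

```python
import math

def pooled_correlation(studies):
    """Fixed-effect pooled r from (r, n) pairs via Fisher's z transform."""
    num = den = 0.0
    for r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z transform of r
        w = n - 3                              # inverse-variance weight
        num += w * z
        den += w
    z_bar = num / den
    # Back-transform the weighted mean z to a correlation.
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)

# Three hypothetical samples: small correlations, modest pooled estimate.
r_pooled = pooled_correlation([(0.30, 50), (0.20, 120), (0.25, 80)])
```

Larger samples dominate the weighted average, which is why meta-analyses of small, noisy brain-volume studies can converge on a much smaller association than early individual reports suggested.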

"The presently observed association means that brain volume plays only a minor role in explaining IQ test performance in humans. Although a certain association is observable, brain volume appears to be of only little practical relevance. Rather, brain structure and integrity appear to be more important as a biological foundation of IQ, whilst brain size works as one of many compensatory mechanisms of cognitive functions," explains Jakob Pietschnig from the Institute of Applied Psychology of the University of Vienna.

Brain structure vs. brain size

The importance of brain structure compared to brain volume already becomes evident when comparing different species. In absolute brain size, the sperm whale has the largest central nervous system; when controlling for body mass, the shrew tops the list. Similar results emerge when considering other aspects of species anatomy: Homo sapiens never appears at the top of the list, as would be expected. Rather, differences in brain structure appear to be mainly responsible for between-species differences in cognitive performance.
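The cross-species comparison described above is often quantified with an encephalization quotient (EQ): observed brain mass divided by the mass expected for a mammal of that body size. A sketch using Jerison's classic allometric fit; the constant and the species masses are textbook approximations used here for illustration only:

```python
def encephalization_quotient(brain_g, body_g):
    """EQ = observed brain mass / expected mass (Jerison: 0.12 * body^(2/3))."""
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

# Approximate textbook values, in grams:
eq_human = encephalization_quotient(1350, 65_000)             # well above 1
eq_sperm_whale = encephalization_quotient(7800, 40_000_000)   # largest brain, EQ below 1
```

The sperm whale's huge absolute brain yields an EQ below 1 once body mass is accounted for, illustrating why neither raw size nor any single scaling rule puts humans "at the top" on its own.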

Within Homo sapiens, there are indications that render a large association between IQ and brain volume similarly questionable. For instance, sex differences in brain size are well established, with men having larger brains than women on average; yet there are no differences in global IQ test performance between men and women. Another example is individuals with megalencephaly syndrome (enlarged brain volume), who typically show lower IQ test performance than the average population. "Therefore, structural aspects appear to be more important for cognitive performance within humans as well," concludes Jakob Pietschnig.

Source:
University of Vienna | ScienceDaily

Reference:
Jakob Pietschnig, Lars Penke, Jelte M. Wicherts, Michael Zeiler, Martin Voracek. Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience and Biobehavioral Reviews, 2015; DOI: 10.1016/j.neubiorev.2015.09.017

Schizophrenia symptoms linked to features of brain's anatomy?

Roger Harris/Photo Researchers, ISM/Phototake

Using advanced brain imaging, researchers have matched certain behavioral symptoms of schizophrenia to features of the brain's anatomy. The findings, at Washington University School of Medicine in St. Louis, could be a step toward improving diagnosis and treatment of schizophrenia.


The study, available online in the journal NeuroImage, will appear in print Oct. 15.

"By looking at the brain's anatomy, we've shown there are distinct subgroups of patients with a schizophrenia diagnosis that correlates with symptoms," said senior investigator C. Robert Cloninger, MD, PhD, the Wallace Renard Professor of Psychiatry and a professor of genetics. "This gives us a new way of thinking about the disease. We know that not all patients with schizophrenia have the same issues, and this helps us understand why."

The researchers evaluated scans taken with magnetic resonance imaging (MRI) and a technique called diffusion tensor imaging in 36 healthy volunteers and 47 people with schizophrenia. The scans of patients with schizophrenia revealed various abnormalities in portions of the corpus callosum, a bundle of fibers that connects the left and right hemispheres of the brain and is considered critical to neural communication.

When the researchers looked at abnormalities across the corpus callosum, they found that certain characteristics revealed in the brain scans matched specific symptoms of schizophrenia. For example, patients with specific features in one part of the corpus callosum typically displayed bizarre and disorganized behavior. In other patients, irregularities in a different part of that structure were associated with disorganized thinking and speech and symptoms such as a lack of emotion. Other brain abnormalities in the corpus callosum were associated with delusions or hallucinations.

In 2014, the same team of researchers reported evidence suggesting that schizophrenia is not a single disease but a group of eight genetically distinct disorders, each with its own set of symptoms. In that study, Cloninger and Igor Zwir, PhD, an instructor in psychiatry at Washington University and an associate professor in the Department of Computer Science and Artificial Intelligence at the University of Granada, Spain, found that distinct sets of genes were strongly associated with particular clinical symptoms.

The current study provides further evidence that schizophrenia is a heterogeneous group of disorders rather than a single disorder. The researchers believe it will be important for future studies to focus on how precise gene networks are linked to specific brain features and individual symptoms so that treatments can be tailored to patients. Currently, therapies for schizophrenia tend to be more all-encompassing, regardless of an individual patient's symptoms.

In analyzing the clusters of genes and the brain scans, the researchers developed a complex method of analysis, similar to what companies such as Netflix use to predict movies that viewers might want to stream.

"We didn't start with people who had certain symptoms and then look to see whether they had corresponding abnormalities in the brain," Zwir said. "We just looked at the data, and these patterns began to emerge. This kind of granular information, combined with data about the genetics of schizophrenia, one day will help physicians treat the disorder in a more precise way."

Source:
Washington University School of Medicine  | by Jim Dryden
ScienceDaily

Reference:
Javier Arnedo, Daniel Mamah, David A. Baranger, Michael P. Harms, Deanna M. Barch, Dragan M. Svrakic, Gabriel A. de Erausquin, C. Robert Cloninger, Igor Zwir. Decomposition of brain diffusion imaging data uncovers latent schizophrenias with distinct patterns of white matter anisotropy. NeuroImage, 2015; 120: 43 DOI: 10.1016/j.neuroimage.2015.06.083

Engineers create artificial skin that can send pressure sensation to brain cell


Human finger touches robotic finger. The transparent plastic and black device on the golden "fingertip" is the skin-like sensor developed by Stanford engineers. This sensor can detect pressure and transmit that touch sensation to a nerve cell. The goal is to create artificial skin, studded with many such miniaturized sensors, to give prosthetic appendages some of the sensory capabilities of human skin. Credit: Bao Lab

Stanford engineers have created a plastic "skin" that can detect how hard it is being pressed and generate an electric signal to deliver this sensory input directly to a living brain cell. The work takes a big step toward adding a sense of touch to prosthetic limbs.


Zhenan Bao, a professor of chemical engineering at Stanford, has spent a decade trying to develop a material that mimics skin's ability to flex and heal, while also serving as the sensor net that sends touch, temperature and pain signals to the brain. Ultimately she wants to create a flexible electronic fabric embedded with sensors that could cover a prosthetic limb and replicate some of skin's sensory functions.

Bao's work, reported today in Science, takes another step toward her goal by replicating one aspect of touch, the sensory mechanism that enables us to distinguish the pressure difference between a limp handshake and a firm grip.

"This is the first time a flexible, skin-like material has been able to detect pressure and also transmit a signal to a component of the nervous system," said Bao, who led the 17-person research team responsible for the achievement.

Benjamin Tee, a recent doctoral graduate in electrical engineering; Alex Chortos, a doctoral candidate in materials science and engineering; and Andre Berndt, a postdoctoral scholar in bioengineering, were the lead authors on the Science paper.

Digitizing touch

The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.

Five years ago, Bao's team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic's molecular springs.

To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.

This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.
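The behavior described above amounts to encoding pressure as pulse frequency: silence with no touch, slow pulses for a light tap, a rapid train for a firm grip. A conceptual sketch of that mapping; the thresholds, units, and rates are assumptions for illustration, not the device's actual electronics:

```python
def pulses_per_second(pressure_kpa, max_rate=200.0, threshold=0.5, full_scale=50.0):
    """Map applied pressure to a pulse rate, saturating at full scale."""
    if pressure_kpa < threshold:
        return 0.0                      # no meaningful pressure -> no pulses
    frac = min((pressure_kpa - threshold) / (full_scale - threshold), 1.0)
    return max_rate * frac              # firmer press -> faster pulse train

light_tap = pulses_per_second(2.0)      # slow pulses
firm_grip = pulses_per_second(45.0)     # near-maximum rate
```

Encoding intensity in pulse rate rather than amplitude is what makes the sensor's output compatible with neurons, which likewise signal stimulus strength through firing frequency.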

The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.

Importing the signal

Bao's team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.

Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.

For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.

Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao's team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.

Bao's team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.

But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.

"We have a lot of work to take this from experimental to practical applications," Bao said. "But after spending many years in this work, I now see a clear path where we can take our artificial skin."

Source:
Stanford University | by Tom Abate
ScienceDaily

Reference:
B.C.K. Tee et al. A skin-inspired organic digital mechanoreceptor. Science, 2015 DOI: 10.1126/science.aaa9306

Friday, October 16, 2015

Scientists produce clearest-ever images of enzyme that plays key roles in aging, cancer

Research on telomerase could lead to new strategies for treating disease

An enzyme called telomerase plays a significant role in aging and most cancers, but until recently many aspects of the enzyme's structure could not be clearly seen.

An image of subunits of telomerase.
Credit: UCLA department of chemistry and biochemistry


Now, scientists from UCLA and UC Berkeley have produced images of telomerase in much higher resolution than ever before, allowing them to see the complex enzyme's subunits far more sharply and yielding several major new insights. Their findings, published online today by the journal Science, could ultimately lead to new directions for treating cancer and preventing premature aging.

"Many details we could only guess at before, we can now see unambiguously, and we now have an understanding of where the different components of telomerase interact," said Juli Feigon, a professor of chemistry and biochemistry in the UCLA College and a senior author of the study. "If telomerase were a cat, before we could see its general outline and the location of the limbs, but now we can see the eyes, the whiskers, the tail and the toes."

The research brought together experts in structural biology, biochemistry and biophysics, and a wide range of cutting-edge research techniques.

Telomerase's primary job is to maintain the DNA in telomeres, the structures at the ends of our chromosomes that act like the plastic tips at the ends of shoelaces. When telomerase isn't active, each time our cells divide, the telomeres get shorter. When that happens, the telomeres eventually become so short that the cells stop dividing or die.
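The shortening process described above can be illustrated with a toy calculation. The numbers here are rough, assumed values for the sketch (human telomeres are on the order of 10,000 base pairs at birth and lose tens of base pairs per division; below a critical length the cell stops dividing), not figures from the study.

```python
def divisions_until_senescence(telomere_bp=10_000, loss_per_division=75, critical_bp=4_000):
    """Count cell divisions before the telomere would drop below the critical length."""
    divisions = 0
    while telomere_bp - loss_per_division >= critical_bp:
        telomere_bp -= loss_per_division
        divisions += 1
    return divisions

# With these assumed parameters the lineage stops after a fixed number of
# divisions; active telomerase, by restoring lost base pairs each cycle,
# removes this limit entirely.
print(divisions_until_senescence())
```

With the parameters above the model gives 80 divisions, the same order of magnitude as the experimentally observed Hayflick limit for human cells.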

On the other hand, cells with abnormally active telomerase can constantly rebuild their protective chromosomal caps and become immortal. Making cells immortal might sound like a promising prospect, but it actually is harmful because DNA errors accumulate over time, which damages cells, said Feigon, who also is a researcher at UCLA's Molecular Biology Institute and an associate member of the UCLA-Department of Energy Institute of Genomics and Proteomics.

Telomerase is particularly active in cancer cells, which helps make them immortal and enables cancer to grow and spread. Scientists believe that controlling the length of telomeres in cancer cells could be a way to prevent them from multiplying.

When Feigon began her research on telomerase slightly more than a decade ago, she merely wanted to learn how telomerase works; fighting cancer and slowing the aging process were not even in the back of her mind.

"Our research may make those things achievable, even though they were not our goals," she said. "You never know where basic research will go. When telomerase and telomeres were discovered, no one had any idea what the impact of that research would be. The question was, 'How are the ends of our chromosomes maintained?' We knew there had to be some activity in the cell that does that."

Earlier research led by UC San Francisco professor Elizabeth Blackburn revealed that telomerase was responsible for this activity, but the study didn't connect telomerase to cancer and it provided little information about its structural biology. The research was conducted using tiny, single-celled microorganisms called Tetrahymena thermophila that are commonly found in freshwater ponds. Blackburn won a Nobel Prize in 2009 for the finding.

Since then, Feigon and her colleagues have been filling in pieces of the telomerase puzzle, also using Tetrahymena. Their latest study found that the microorganism's telomerase is more analogous to human telomerase than previously thought.

"This is the first time that a whole telomerase directly isolated from its natural workplace has been visualized at a sub-nanometer resolution and all components are identified in the structure," said Jiansen Jiang, the study's co-lead author and a UCLA postdoctoral scholar. (A nanometer is equivalent to one billionth of a meter.)

Among the new insights the team reported:

  • Scientists had thought telomerase contains eight sub-units: seven proteins and an RNA. But Feigon and her colleagues discovered two additional proteins, Teb2 and Teb3, that increase telomerase's activity. "Knowing we were the first people in the world who knew about these new proteins was amazing," she said. "Days like that are what scientific discovery is all about, and it's exhilarating."
  • Feigon's research team knew that the RNA strand interacts with the proteins, but not exactly where it interacted. The new study found that within the enzyme's "catalytic core," which is formed by the RNA and its partner proteins TERT and p65, the RNA forms a ring around the donut-shaped TERT protein.
  • Scientists previously knew that telomerase contains three proteins, p75, p45 and p19, but their structures and functions were poorly understood. The new research identified the proteins' structures and revealed that they are similar to proteins found at human telomeres.
  • The researchers showed that a key protein called p50 interacts with several components of telomerase, including TERT, Teb1 and p75, and this network of interactions has important implications for telomerase's function.
Feigon knew that the Tetrahymena enzyme's catalytic core, where the majority of the telomerase activity occurs, was a close analogue to the catalytic core in the human enzyme, but she did not previously know whether the other proteins had human counterparts.

"It turns out that nearly all, if not all, of the telomerase proteins in Tetrahymena have similar proteins in humans," Feigon said. "Now we can use our model system to learn more about how telomerase interacts at the telomeres."

Feigon and her colleagues are working to fill in even more details of the telomerase puzzle. Their research could lead to the development of pharmaceuticals that target specific sub-units of telomerase and disrupt interactions between proteins.

"There is so much potential for treating disease if we understand deeply how telomerase works," Feigon said.

Among the technologies the researchers used to produce the groundbreaking images were UCLA's cryo-electron microscopes, which are housed in the laboratory of Z. Hong Zhou, director of the Electron Imaging Center for Nanomachines at the California NanoSystems Institute at UCLA and a co-author of the paper. The researchers also used nuclear magnetic resonance spectroscopy, X-ray crystallography, mass spectrometry and biochemical methods.

Henry Chan, a UCLA graduate student, was a co-lead author of the paper. Other UCLA co-authors were postdoctoral scholar Darian Cash, former postdoctoral scholar Edward Miracco, staff scientist Rachel Ogorzalek Loo, senior staff scientist Duilio Cascio and graduate student Reid O'Brien Johnson; UC Berkeley graduate student Heather Upton was also a co-author. Senior authors were Zhou, UCLA biochemistry professor Joseph Loo and UC Berkeley professor Kathleen Collins.

The research was funded by the National Institutes of Health (grants GM048123, GM071940, GM103479, R01GM054198) and the National Science Foundation (grant MCB1022379).

Source:

University of California, Los Angeles | By Stuart Wolpert | ScienceDaily

Reference:
Jiansen Jiang, Henry Chan, Darian D. Cash, Edward J. Miracco, Rachel R. Ogorzalek Loo, Heather E. Upton, Duilio Cascio, Reid O’Brien Johnson, Kathleen Collins, Joseph A. Loo, Z. Hong Zhou, and Juli Feigon. Structure of Tetrahymena telomerase reveals previously unknown subunits, functions, and interactions. Science, 15 October 2015 DOI: 10.1126/science.aab4070

Quantum physics meets genetic engineering

Researchers use engineered viruses to provide quantum-based enhancement of energy transport


Rendering of a virus used in the MIT experiments. The light-collecting centers, called chromophores, are in red, and chromophores that just absorbed a photon of light are glowing white. After the virus is modified to adjust the spacing between the chromophores, energy can jump from one set of chromophores to the next faster and more efficiently.
Credit: Courtesy of the researchers and Lauren Alexa Kaye

Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed -- a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics -- effects sometimes known as "quantum weirdness." These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the MIT researchers achieved this new approach to solar energy not with high-tech materials or microchips -- but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT and in Italy.

Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton -- a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the "Quantum Goldilocks Effect."
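The inefficiency of purely random hopping can be seen in a toy model. This sketch is not the MIT team's simulation: it treats the exciton as a classical random walker between chromophore sites that dissipates after a fixed number of hops. The mean distance such a walker reaches grows only as the square root of the number of hops, whereas a wave-like, quantum-coherent exciton's reach grows linearly with time, which is why the "just right" spacing matters so much.

```python
import random

def mean_random_walk_distance(hops, trials=10_000, seed=42):
    """Average distance (in sites) a random-hopping exciton reaches before dissipating."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        position = 0
        for _ in range(hops):
            position += rng.choice((-1, 1))  # hop left or right at random
        total += abs(position)
    return total / trials

# Doubling the exciton's lifetime (number of hops) increases a random
# walker's reach by only ~sqrt(2); a ballistic, wave-like exciton's
# reach would double.
short_lived = mean_random_walk_distance(100)
long_lived = mean_random_walk_distance(200)
```

The hop counts and trial counts are arbitrary illustration values; the square-root scaling is the point.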

That's where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores -- or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons' speed, increasing the distance they traveled before dissipating -- a significant improvement in the efficiency of the process.

The project started from a chance meeting at a conference in Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher's report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature's efficiency.

"I had been talking about potential systems you could use to demonstrate this effect, and Angela said, 'We're already making those,'" Lloyd recalls. Eventually, after much analysis, "We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime."

Within two weeks, Belcher's team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

"It was really fun," Belcher says. "A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That's why I'm so excited by this."

While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

The research was supported by the Italian energy company Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

Source:
Massachusetts Institute of Technology | By David Chandler, MIT News Office

Reference:
Heechul Park, Nimrod Heldman, Patrick Rebentrost, Luigi Abbondanza, Alessandro Iagatti, Andrea Alessi, Barbara Patrizi, Mario Salvalaggio, Laura Bussotti, Masoud Mohseni, Filippo Caruso, Hannah C. Johnsen, Roberto Fusco, Paolo Foggi, Petra F. Scudo, Seth Lloyd, Angela M. Belcher. Enhanced energy transport in genetically engineered excitonic networks. Nature Materials, 2015; DOI: 10.1038/nmat4448

Wednesday, October 14, 2015

Antiviral compound provides full protection from Ebola virus in nonhuman primates

Rhesus monkeys were completely protected from the deadly Ebola virus when treated three days after infection with a compound that blocks the virus's ability to replicate. These encouraging preclinical results suggest the compound, known as GS-5734, should be further developed as a potential treatment, according to research findings.

Picture source: www.globalresearch.ca

Travis Warren, Ph.D., a principal investigator at the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID), said the work is a result of the continuing collaboration between USAMRIID and Gilead Sciences of Foster City, Calif. Scientists at the Centers for Disease Control and Prevention (CDC) also contributed by performing initial screening of the Gilead Sciences compound library to find molecules with promising antiviral activity.

The initial work identified the precursor to GS-5734, a small-molecule antiviral agent, which led to the effort by Gilead and USAMRIID to further refine, develop and evaluate the compound. Led by USAMRIID Science Director Sina Bavari, Ph.D., the research team used cell culture and animal models to assess the compound's efficacy against several pathogens, including Ebola virus.

In animal studies, treatment initiated on day 3 post-infection with Ebola virus resulted in 100 percent survival of the monkeys. They also exhibited a substantial reduction in viral load and a marked decrease in the physical signs of disease, including internal bleeding and tissue damage.

"The compound, which is a novel nucleotide analog prodrug, works by blocking the viral RNA replication process," said Warren. "If the virus can't make copies of itself, the body's immune system has time to take over and fight off the infection."

In cell culture studies, GS-5734 was active against a broad spectrum of viral pathogens. These included Lassa virus, Middle East Respiratory Syndrome (MERS) virus, Marburg virus, and multiple variants of Ebola virus, including the Makona strain causing the most recent outbreak in West Africa.

"This is the first example of a small molecule--which can be easily prepared and made on a large scale--that shows substantive post-exposure protection against Ebola virus in nonhuman primates," Bavari commented. "In addition to 100 percent survival in treated animals, the profound suppression of viral replication greatly reduced the severe clinical signs of disease."

Taken together, the robust therapeutic efficacy observed in primates and the potential for broad-spectrum antiviral activity suggest that further development of GS-5734 for the treatment of Ebola virus and other viral infections is warranted, Bavari said.

According to Tomas Cihlar, Ph.D., of Gilead Sciences, the company is currently conducting phase I clinical studies of the compound in healthy human volunteers to establish the safety and pharmacokinetic profile.

"We are exploring alternative directions for developing this compound, including potential use of the animal efficacy rule," Cihlar said, referring to a regulatory mechanism under which the U.S. Food and Drug Administration may consider efficacy findings from adequate and well-controlled animal studies of a drug in cases where it is not feasible or ethical to conduct human trials.

Ebola virus causes severe hemorrhagic fever in humans and nonhuman primates with high mortality rates and continues to emerge in new geographic locations, including West Africa, the site of the largest outbreak to date. Over 28,000 confirmed, probable and suspected cases have been reported in Guinea, Liberia and Sierra Leone, with over 11,000 reported deaths, according to the World Health Organization. Although several clinical trials are currently underway, there are no licensed vaccines or therapies against Ebola virus.

Research on Ebola virus is conducted in Biosafety Level 4 (maximum containment) laboratories, where investigators wear positive-pressure "space suits" and breathe filtered air as they work. USAMRIID is the only organization in the Department of Defense with Biosafety Level 4 capabilities, and its research benefits both military personnel and civilians.

Presentation: 
"Nucleotide Prodrug GS-5734 Is a Broad-Spectrum Filovirus Inhibitor that Provides Complete Therapeutic Protection Against Ebola Virus Disease in Infected Non-human Primates."

https://idsa.confex.com/idsa/2015/webprogram/Paper54208.html

Source:

US Army Medical Research Institute of Infectious Diseases | ScienceDaily