
Novel system to track brain chemicals

Researchers at UCLA and Columbia University have developed a novel method for tracking the activity of small molecules in the brain, including the neurotransmitters serotonin and dopamine. Pairing tiny artificial receptors with semiconductor devices that are able to function in living tissue, the team was able to observe brain chemicals at a high level of detail.

The research, published in the journal Science, is part of the BRAIN Initiative, a large-scale collaboration among government, private industry, nonprofits, and numerous colleges and universities.

“Understanding the fundamentals of how neurotransmission occurs will help us understand not only how our brains work, but what’s going on in psychiatric disorders,” said lead researcher Anne M. Andrews, professor of psychiatry and chemistry at UCLA. “In order to move forward with dramatically better treatments, we need to understand how we encode information about anxiety or mood — processes that can go awry, sometimes with devastating consequences.”

The idea for the project began 20 years ago with Andrews’ research on serotonin. “My group was using state-of-the-art in vivo monitoring — but it became apparent to me that improving the methods in hand wasn’t going to be enough to provide the necessary resolution,” Andrews said. “We needed a totally new sensing strategy.” This led to a collaboration with Paul Weiss, professor of chemistry and materials science at UCLA.

Andrews envisioned coupling artificial receptors with a nanoscale signaling platform. A major hurdle, however, was that the required transistors, which are basic units of computers and cell phones, and are needed to process a signal, don’t work well in wet, salty environments.

“The workhorse of any transistor is the semiconductor,” Andrews said. “But when you put it in salt water, the salt ions — charged atoms — line up on the semiconductor surface, and shield it, preventing detection of electric field changes. The question was, ‘How can we tap into the powerful science and sensitivity of existing transistors to use them in high-salt environments like the brain?’ ” A collaboration with Yang Yang, a professor of materials science at UCLA, provided the team with high-performance nanoscale semiconductor materials.

Looking to nature is sometimes more effective than devising totally new methods, Andrews said. So she teamed up with professor Milan Stojanovic and Dr. Kyung-Ae Yang, both of Columbia, who were using nucleic acid sequences as receptors. An advantage of these biomolecules is that they are smaller than the bulkier protein receptors that native cells, and other investigators building biosensors, typically use.

“Our breakthrough was that we used a different kind of receptor that was biologically inspired — after all, life began with RNA,” Andrews said. “The Columbia investigators develop nucleic acid sequences that act as receptors, called aptamers, which are small enough that some part is close to semiconductor surfaces. And in this, we’ve overcome the ‘salt shielding’ problem.”

In the new paper, the team successfully identified and tested receptors for serotonin, dopamine, and glucose. The receptors were found to be extremely selective, binding only the molecules they were designed to bind. The system was successful even in living brain tissue from mice.

The method is universal, so it can be used for almost any target — to learn, for instance, how drug levels change over time in the brain or other organs, how blood pressure is regulated, and how signaling molecules associated with the gut microbiome ebb and flow.

Andrews’ main interest still lies with neurotransmitters. “We don’t currently have methods to study neurotransmitter signaling at the scales over which information is encoded,” said Andrews. “So these sensors will allow us to approach critical dimensions. One goal is to ultimately figure out how brains process information through different neurotransmitters.” The findings have implications not only for observing how neurochemicals act under normal conditions, but also in understanding psychiatric conditions like depression and anxiety.

The team is now testing the strategy to watch neurochemicals in the brains of behaving animals.


Properties of stem cells that determine cell fate identified

Researchers from the University of California, Irvine have identified intrinsic cell properties that influence the fate of neural stem cells, affecting what type of brain cell they will form: neurons, astrocytes, or oligodendrocytes. This discovery could give scientists a new way to predict or control the fate of stem cells, improving their use in transplantation therapies.

Published today in Stem Cell Reports, the study was led by Lisa A. Flanagan, PhD, an associate professor of neurology at UCI School of Medicine, and revealed that neural stem cells differing in fate potential expressed distinct patterns of sugars on the cell surface. These sugars contribute to neural stem cell membrane electrical properties and ultimately cell fate. “Stem cells hold great promise for treating disease, but it can be difficult to tell what a stem cell will become after it has been transplanted,” said Flanagan. “We can transplant the same number of stem cells in one patient as in another, but the outcomes will be significantly different if the transplanted cells in the first patient become neurons and those in the second patient become astrocytes. With this new discovery, we will be able to predict what a neural stem cell will become and possibly direct cell fate, which will greatly enhance the success of stem cell transplant therapies for a wide variety of diseases.”

In research initially published in 2008, Flanagan and colleagues discovered a new way to identify and sort neural stem cells that have different fates by using cell electrical properties. They now build on these findings by showing that differences in cell surface sugars are the reason that the cells have different electrical properties.

In this study, researchers examined several pathways that add sugars to cells and found one that differed between cells that make neurons and those making astrocytes. They stimulated this pathway in neural stem cells, changed cell electrical properties, and caused them to make more astrocytes and fewer neurons, showing that cell surface sugars can control fate. The pathway is active in cells grown for transplants and in cells of the developing brain, so this pathway may also control how neural stem cells form neurons and astrocytes when the brain is being formed during development.

The team is now testing whether modifying this pathway changes how cells behave in transplants or how the developing brain is formed. They are focusing on the machinery inside the cell that adds the sugars in the first place, to see how the process is regulated. They also are finding that particular proteins on the cell surface are changed by this pathway, which will help to uncover how the sugars are telling stem cells which type of cell to form. The long-term goal of these studies is to find ways to improve the effectiveness of stem cell transplants to treat injury and disease.


The work was supported by the National Science Foundation, California Institute for Regenerative Medicine, National Institutes of Health, and National Multiple Sclerosis Society.

About the UCI School of Medicine

Each year, the UCI School of Medicine educates more than 400 medical students, as well as 200 doctoral and master’s students. More than 600 residents and fellows are trained at UC Irvine Medical Center and affiliated institutions. The UCI School of Medicine offers an MD degree, a dual MD/PhD medical scientist training program, and PhDs and master’s degrees in anatomy and neurobiology, biomedical sciences, genetic counseling, epidemiology, environmental health sciences, pathology, pharmacology, physiology and biophysics, and translational sciences. Medical students also may pursue an MD/MBA program, a combined MD/Master’s in Public Health, or a dual MD/master’s program called the Program in Medical Education for the Latino Community (PRIME-LC). The UCI School of Medicine is accredited by the Liaison Committee on Medical Education (LCME) and ranks among the top 50 nationwide for research.

About the University of California, Irvine

Founded in 1965, UCI is the youngest member of the prestigious Association of American Universities. The campus has produced three Nobel laureates and is known for its academic achievement, premier research, innovation and anteater mascot. Led by Chancellor Howard Gillman, UCI has more than 30,000 students and offers 192 degree programs. Located in one of the world’s safest and most economically vibrant communities, UCI is Orange County’s second-largest employer, contributing $5 billion annually to the local economy.

Story Source:

Materials provided by University of California – Irvine. Note: Content may be edited for style and length.


Gender identities disrupted — and reinforced

New Northwestern University research analyzing the ways children’s gender narratives reinforce or disrupt gender inequality found that older children — and girls — were more likely to tell alternative narratives that disrupt the gender status quo.

In the study, a racially diverse group of more than 230 children, ages 7 to 12, told stories about their gender identities that fell into four narrative types. The analysis revealed two “master narratives” (difference and genderblind), shared cultural stories that reinforce the existing gender hierarchy, and two “alternative narratives” (incongruent and counternarrative) that disrupt it.

“To date, master narratives have only been used with emerging adults,” said Onnie Rogers, author of the study and assistant professor of psychology in the Weinberg College of Arts and Sciences at Northwestern. “But by middle childhood, children are capable of narrating stories about their own lives and about gendered experiences.”

Rogers said a surprising result was the consistency of the use of the “difference” narrative across age and racial groups. Younger and older children, black and white children were equally likely to tell this narrative, which speaks to the ubiquity of the master narrative.

Children who told “difference” narratives spoke about gender in ways that emphasized the differences between boys and girls.

For example, a third-grader defined what being a boy means to him in one word: “Sports.” Girls in the “difference” narrative similarly relied on stereotypes and group comparisons. Being a girl is important, according to one fifth-grader, “because I don’t really like boys’ clothes…I like being a girl because girls are pretty.”

When asked to imagine how things would be different if he was not a boy, one fifth-grader said: “It would be a lot different because you wouldn’t be able to play football or anything.”

The “difference” narrative was the primary version of the master narrative, capturing 61 percent of the sample.

“The idea of not being able to play or be anything if navigating the world as a different gender exemplifies an ideology of difference,” said Rogers, also a faculty fellow at Northwestern’s Institute for Policy Research.

In a second master narrative, “genderblind” children characterized gender as inconsequential: “It [gender] doesn’t really matter because you’re still a human being,” said a fourth-grade girl.

Very few children, 3 percent, were coded in this narrative type.

Rogers said it’s conceivable to interpret these genderblind narratives as coming from children who do not want to discuss gender or who hold egalitarian values, and thus as more representative of an alternative narrative.

“But, given that ‘blindness’ and ‘silence’ exacerbate rather than attenuate social inequality, it seems compelling that this narrative may function to reinforce rather than disrupt the master narrative,” she said.

The “incongruent” narrative was one of the two alternative narratives, capturing 22 percent of the sample.

Incongruent narratives characterized children whose gender stories seemed to express conflict or discordance, as they espoused one of the master narratives but also questioned it.

For example, a second-grade girl, when asked what she likes about being a girl, said: “I like everything about being a girl!” Yet, when asked if there is anything hard about being a girl, said: “I don’t like being a girl and being treated like — sometimes people treat me like trash.”

Rogers said the incongruent narrative was the most interesting pattern they discovered.

“As we analyzed the data, we discovered that children were speaking in two voices — a voice of society, what they are ‘supposed to say’ and a more authentic voice that seemed to represent their own experiences,” she said.

The final alternative narrative type, the “counternarrative,” which was 13 percent of the sample, more explicitly challenged the master narrative.

When asked how important being a girl is, a fourth-grader answered: “A lot, because some people think that boys are better, and I just want to change that.” A sixth-grade girl explained how it was unfair that boys did not let her join their “boy game” at recess.

“Children who told counternarratives stood apart because they explicitly stated that there are inequalities related to gender,” Rogers said. “We mostly discuss the ways in which children reinforce gender stereotypes or align themselves with gender norms, but children also question and resist these gender scripts.

“If we had not intentionally considered the ways in which children might challenge gender master narratives, we might have missed the ‘incongruent’ children because through one lens, they were reinforcing gender norms. But, they were also disrupting it, and this may be a developmental moment to intervene and support healthy resistance to gender norms that are unhealthy for society,” Rogers said.

“‘I’m Kind of a Feminist’: Using Master Narratives to Analyze Gender Identity in Middle Childhood” was published this week in the journal Child Development.


Synthetic DNA vaccine effective against influenza A virus subtype

Currently available vaccines for the prevention of seasonal influenza virus infection have limited ability to induce immunity against diverse H3N2 viruses, an influenza A subtype that has led to high morbidity and mortality in recent years.

Now, Wistar scientists have engineered a synthetic DNA vaccine shown to produce broad immune responses against these H3N2 viruses. Study results were published online in the journal Human Gene Therapy.

The recent severe influenza seasons in 2013/2014, 2014/2015 and 2017/2018 can be directly attributed to H3N2. Commercial vaccine efficacy against H3N2 in 2017/2018 was low and contributed to a greater rate of pneumonia and influenza-associated deaths.

“Current vaccine design and manufacturing to meet the antigenic diversity of H3N2 viruses is challenging, and with another flu season approaching there remains a pressing need for new vaccine approaches for influenza,” said lead researcher David B. Weiner, Ph.D., executive vice president and director of the Vaccine & Immunotherapy Center at The Wistar Institute, and W.W. Smith Charitable Trust Professor in Cancer Research. “There is also a need for improvements in rapid selection and deployment against newly emergent viral strains and synthetic DNA vaccines represent an important tool to reach this goal.”

To overcome the antigenic diversity of H3N2 viruses, Weiner and colleagues used H3N2 strains from 1968 to the present, retrieved from the Influenza Research Database, to generate four synthetic common sequences of the hemagglutinin antigen (HA), a protein present on the viral surface. These micro-consensus sequences were used to generate four DNA vaccines that were co-mixed to create a cocktail vaccine labeled pH3HA. The scientists administered the vaccine or a placebo to mice, followed by a booster dose two weeks later. Two weeks after the booster injection, they inoculated the animals with two representative influenza viruses.
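The micro-consensus step, collapsing many aligned strain sequences into a single representative sequence, can be sketched in miniature. The toy sequences and the simple majority-vote rule below are illustrative assumptions, not the authors' actual pipeline:

```python
from collections import Counter

def consensus(aligned_seqs):
    """Return the most common residue at each position of
    equal-length aligned sequences (ties broken arbitrarily)."""
    length = len(aligned_seqs[0])
    return "".join(
        Counter(seq[i] for seq in aligned_seqs).most_common(1)[0][0]
        for i in range(length)
    )

# Toy aligned HA fragments (hypothetical, for illustration only)
strains = ["MKTII", "MKTIV", "MRTII", "MKTLI"]
print(consensus(strains))  # MKTII
```

A real pipeline would first align the full-length HA sequences and would typically partition strains into antigenic clusters before building one consensus per cluster, which is what yields multiple "micro-consensus" sequences rather than a single average.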

Sarah Elliot, Ph.D., a senior postdoctoral fellow in the Weiner Lab, and colleagues monitored clinical signs, body weight and survival for two weeks after infection. All mice immunized with the synthetic DNA vaccine developed broad, robust antibody responses against HA and effective cellular immune responses including CD4 and CD8 T cell responses.

They were protected against lethal influenza A infection from two different challenge H3N2 viruses. Vaccination with pH3HA induced robust antibodies against the 1968 pandemic H3N2 as well as contemporary H3N2 strains that were components of commercially available vaccines from 2015/2016 and 2017/2018.

Compared with those who received placebo, immunized mice survived intranasal virus challenge with 10 times the median lethal dose; the placebo group succumbed to infection within six days of exposure to the challenge virus.

“The pH3HA vaccine represents a unique micro-consensus approach to producing immune responses to antigenically related — yet diverse, seasonal influenza A H3N2 viruses,” Weiner said. “The overarching goals of this approach are to limit the number of vaccine reformulations that can be deployed to protect against novel H3N2 viruses.”

Story Source:

Materials provided by The Wistar Institute. Note: Content may be edited for style and length.


California's large minority population drives state's relatively low death rate

High poverty rates, low education and lack of insurance are all social determinants that are expected to lead to high mortality rates and negative health outcomes. Despite a 62 percent minority population with these characteristics in California, the state’s health profile was significantly better than the nation’s as a whole, researchers report in a new study appearing in the journal Health Affairs. This profile was largely driven by the state’s minority population.

California’s death rate from all causes in 2016 was 619.1 per 100,000 population, compared with the nation’s 729.9. Broken down by racial/ethnic group, death rates were 686.4 for non-Hispanic whites, 514.4 for Latinos, 807.6 for African-Americans, 394.5 for Asian/Pacific Islanders, and 380.2 for American Indian/Alaska Natives.

California’s population shifted from 22 percent racial/ethnic minority in 1970 to 62 percent in 2016. From about 1965 through the early 1990s, researchers and policymakers had focused their attention on the high levels of unemployment, poverty, and lack of education among minorities, positing that these dysfunctions were a result of poor personal values that led to poor personal choices. As a result of these characteristics, the historical narrative held that minorities were likelier to live shorter lives and suffer poorer health outcomes. Health care and social policies were subsequently built around this framework.

The researchers examined data from the National Center for Health Statistics and other sources.

Cultural differences in the ways that racial and ethnic minorities interact and take action to compensate for lack of access to health care and other public and private services may help to facilitate good health outcomes. For instance, to address unmet health care needs in the 1970s during California’s demographic shift, Latinos created an alternative system of health care and policy that has since grown to more than 1,300 nonprofit, community-based clinics serving underserved communities. These findings could form the basis for the development of theoretical models and methods to assist in identifying and tracking health disparities. These new models, based on the “epidemiology of diversity,” would be better able to make use of the roles that race, place and diversity play in health outcomes.

Authors of the study are: Paul Hsu and David Hayes-Bautista of UCLA; Mara Bryant and Teodocia Hayes-Bautista of Adventist Health White Memorial Hospital; and Keosha Partlow of Charles R. Drew University of Medicine and Science. The paper is published in the September issue of Health Affairs.

This study was partially funded by grants from Adventist Health White Memorial.

Story Source:

Materials provided by University of California – Los Angeles Health Sciences. Note: Content may be edited for style and length.


More daytime sleepiness, more Alzheimer's risk?

Analysis of data captured during a long-term study of aging adults shows that those who reported being very sleepy during the day were nearly three times more likely than those who didn’t to have brain deposits of beta amyloid, a protein that’s a hallmark of Alzheimer’s disease, years later.

The finding, reported Sept. 5 in the journal SLEEP, adds to a growing body of evidence that poor quality sleep could encourage this form of dementia to develop, suggesting that getting adequate nighttime sleep could be a way to help prevent Alzheimer’s disease.

“Factors like diet, exercise and cognitive activity have been widely recognized as important potential targets for Alzheimer’s disease prevention, but sleep hasn’t quite risen to that status — although that may well be changing,” says Adam P. Spira, PhD, associate professor in the Department of Mental Health at the Johns Hopkins Bloomberg School of Public Health. Spira led the study with collaborators from the National Institute on Aging (NIA), the Bloomberg School and Johns Hopkins Medicine.

“If disturbed sleep contributes to Alzheimer’s disease,” he adds, “we may be able to treat patients with sleep issues to avoid these negative outcomes.”

The study used data from the Baltimore Longitudinal Study of Aging (BLSA), a long-term study started by the NIA in 1958 that followed the health of thousands of volunteers as they age. As part of the study’s periodic exams, volunteers filled out a questionnaire between 1991 and 2000 that asked a simple yes/no question: “Do you often become drowsy or fall asleep during the daytime when you wish to be awake?” They were also asked, “Do you nap?” with response options of “daily,” “1-2 times/week,” “3-5 times/week,” and “rarely or never.”

A subgroup of BLSA volunteers also began receiving neuroimaging assessments in 1994. Starting in 2005, some of these participants received positron emission tomography (PET) scans using Pittsburgh compound B (PiB), a radioactive compound that can help identify beta-amyloid plaques in neuronal tissue. These plaques are a hallmark of Alzheimer’s disease.

The researchers identified 123 volunteers who both answered the earlier questions and had a PET scan with PiB an average of nearly 16 years later. They then analyzed this data to see if there was a correlation between participants who reported daytime sleepiness or napping and whether they scored positive for beta-amyloid deposition in their brains.

Before adjusting for demographic factors that could influence daytime sleepiness, such as age, sex, education, and body-mass index, their results showed that those who reported daytime sleepiness were about three times more likely to have beta-amyloid deposition than those who didn’t report daytime fatigue. After adjusting for these factors, the risk was still 2.75 times higher in those with daytime sleepiness.

The unadjusted risk for beta-amyloid deposition was about twice as high in volunteers who reported napping, but this did not reach statistical significance.
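For readers unfamiliar with how such unadjusted associations are quantified, an odds ratio can be computed from a 2×2 table of exposure (daytime sleepiness) versus outcome (amyloid positivity). The counts below are invented for illustration and are not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = exposed & outcome-positive,   b = exposed & outcome-negative,
    c = unexposed & outcome-positive, d = unexposed & outcome-negative.
    """
    return (a * d) / (b * c)

# Invented counts: sleepy vs. not sleepy, amyloid-positive vs. -negative
or_unadjusted = odds_ratio(18, 12, 30, 63)
print(round(or_unadjusted, 2))  # 3.15
```

The adjusted figure reported in the study (2.75) would come from a regression model, e.g. logistic regression with age, sex, education, and body-mass index as covariates, rather than from a raw 2×2 table like this.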

It’s currently unclear why daytime sleepiness would be correlated with the deposition of beta-amyloid protein, Spira says. One possibility is that daytime sleepiness itself might somehow cause this protein to form in the brain. Based on previous research, a more likely explanation is that disturbed sleep — due to obstructive sleep apnea, for example — or insufficient sleep due to other factors, causes beta-amyloid plaques to form through a currently unknown mechanism, and that these sleep disturbances also cause excessive daytime sleepiness.

“However, we cannot rule out that amyloid plaques that were present at the time of sleep assessment caused the sleepiness,” he added.

Animal studies in Alzheimer’s disease models have shown that restricting nighttime sleep can lead to more beta-amyloid protein in the brain and spinal fluid. A handful of human studies have linked poor sleep with greater measures of beta-amyloid in neuronal tissue.

Researchers have long known that sleep disturbances are common in patients diagnosed with Alzheimer’s disease — caregiver stress from being up with patients at night is a leading reason for Alzheimer’s disease patients to be placed in long-term care, Spira explains. Growing beta-amyloid plaques and related brain changes are thought to negatively affect sleep.

But this new study adds to growing evidence that poor sleep might actually contribute to Alzheimer’s disease development, Spira adds. This suggests that sleep quality could be a risk factor that’s modifiable by targeting disorders that affect sleep, such as obstructive sleep apnea and insomnia, as well as social- and individual-level factors, such as sleep loss due to work or binge-watching TV shows.

“There is no cure yet for Alzheimer’s disease, so we have to do our best to prevent it. Even if a cure is developed, prevention strategies should be emphasized,” Spira says. “Prioritizing sleep may be one way to help prevent or perhaps slow this condition.”

“Excessive Daytime Sleepiness and Napping in Cognitively Normal Adults: Associations with Subsequent Amyloid Deposition Measured by PiB PET” was written by Adam P. Spira, Yang An, Mark N. Wu, Jocelynn T. Owusu, Eleanor M. Simonsick, Murat Bilgel, Luigi Ferrucci, Dean F. Wong, and Susan M. Resnick.

This study was supported in part by National Institute on Aging extramural grants AG050507, AG050745, AG049872, and AG050507-02S1, Intramural Research Program (IRP), National Institute on Aging (NIA), National Institutes of Health (NIH) and by Research and Development Contract HHSN-260-2004-00012C.


Same mutations underpin spread of cancer in individuals

Scientists have arrived at a key understanding about how cancers in individual patients spread, or metastasize, a study from the Stanford University School of Medicine and other collaborating institutions reports.

The study found that mutations that drive cancer growth are common among metastases in a single patient.

Most cancer-related deaths are caused by metastases, or secondary tumors in distant locations of the body that have spread away from the original, primary tumor. While primary tumors can often be surgically removed, metastatic tumors typically require treatment such as standard chemotherapy or targeted therapy. The success of such new targeted therapies depends on the presence of a specific mutation in all cancer cells, in particular in metastatic tumors.

Until now, most studies that aimed to decode the genetic variability, or heterogeneity, of cancers focused mainly on primary tumors. And while that information is still extremely valuable, it leaves much of the story untold; cancer cells are notorious for their ability to change, evolve and evade treatments, particularly as they spread in the body.

“We took samples from multiple untreated metastases of each patient, and we observed a mix of overlapping and differing driver mutations,” said Johannes Reiter, PhD, an instructor of radiology at Stanford. “But through computational analyses, we inferred that the driver mutations that were most likely to contribute to cancer development were shared among all metastases in each patient.”

A tumor composed of billions of cells is riddled with genetic mutations; cancer cells and normal cells acquire multiple mutations as they divide. Identifying the driver mutations that significantly contribute to cancer development is critical to precision oncology, in which doctors aim to treat a patient’s cancer based on its genetic composition.

“Doctors might take a sample of the primary tumor and find some mutation — call it mutation X — in a driver gene and then treat it with a drug that targets that driver gene to specifically kill all cells that have mutation X,” Reiter said. “But what if that particular mutation is only present in some of the metastases of the patient?” Only the metastases composed of cells with mutation X would respond to treatment and shrink or go extinct; those without mutation X would continue to grow. In the end, the doctor wouldn’t see a remission of the patient’s cancer if driver mutations were different across its metastases. “So that’s why it’s very important for us to know whether or not the driver gene mutations are the same across all metastases of the patient,” Reiter said.

The paper will be published Sept. 7 in Science. Reiter; postdoctoral scholar Alvin Makohon-Moore, PhD, at Memorial Sloan Kettering Cancer Center; and graduate student Jeffrey Gerold, at Harvard University, share lead authorship. Martin Nowak, PhD, professor of biology and of mathematics at Harvard University, is the senior author.

Will the real driver mutations please stand up?

Driver mutations occur in genes known to be involved in tumorigenesis — such as genes that typically control cell division. When mutated, these genes may spur a cell to divide in an uncontrolled fashion, generating cancer. While hundreds of driver genes have been identified across cancer types over the last decades, relatively few mutations are thought to be important in the development of an individual’s cancers. Likewise, it’s hard to know which ones are truly culpable and which are “passenger mutations,” or innocuous mutations that occur by happenstance and are just along for the ride, even if they occurred in a driver gene.

To see whether driver gene mutations were the same across all metastases of a patient’s cancer, Reiter and his colleagues analyzed DNA samples from 76 untreated metastases from a group of 20 patients with eight different cancer types, making sure at least two distinct metastases were sampled in each person.

Like choosing the right suspects in a lineup, the scientists picked out the mutations that occurred in known driver genes and investigated whether or not they were found in all the sampled metastases of an individual patient. In some cancers, the researchers only identified two driver gene mutations; in others, there were as many as 18.
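The shared-versus-private comparison described above amounts to a set intersection over per-metastasis mutation calls. A minimal sketch, with hypothetical gene names and variants rather than the study's data:

```python
# Hypothetical driver-gene mutation calls for three metastases
# from one patient (illustration only, not the study's data).
metastases = {
    "met_liver": {"KRAS_G12D", "TP53_R175H", "SMAD4_R361C"},
    "met_lung":  {"KRAS_G12D", "TP53_R175H", "ARID1A_Q456X"},
    "met_node":  {"KRAS_G12D", "TP53_R175H"},
}

# Mutations present in every metastasis: candidate true drivers.
shared = set.intersection(*metastases.values())

# Mutations in some but not all metastases: candidate passengers,
# even though they sit in known driver genes.
private = set.union(*metastases.values()) - shared

print(sorted(shared))   # ['KRAS_G12D', 'TP53_R175H']
print(sorted(private))  # ['ARID1A_Q456X', 'SMAD4_R361C']
```

In practice the calls would come from sequencing pipelines with error models, so "present in every metastasis" also has to account for sampling depth and detection limits, not just exact set membership.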

By analyzing their data against massive databases that hold mutational data of more than 25,000 previously sequenced cancers, they found that the driver gene mutations that were shared among all metastases in an individual were also frequently mutated in previously sequenced cancers, indicating that these mutations are the true drivers of the disease and play a critical role during cancer development.

The scientists also saw that the few driver gene mutations that were not found across all metastases of a patient’s cancer were predicted to have weak or no functional consequences. In other words, the mutations not shared among all metastases were likely passenger mutations, despite their occurrence in driver genes, and likely did not play a critical role during cancer development. This finding could open new avenues to understanding and interpreting tumor biopsies in the future, Reiter said.

Confirming common driver mutations

Reiter said that, for now, it’s too early to generalize these findings due to the small cohort size. But the study does suggest that tumor samples from a single metastasis typically represent the full set of functional driver mutations of a patient’s cancer. Next, Reiter hopes to expand the study to more patients with different cancer types. “It’s rare that we can access untreated metastases, and that’s fortunate for the patients, but we do want to look at the concept of our findings in a larger cohort,” Reiter said. Studies of treated samples cannot provide the same mechanistic insights of cancer evolution, he said, because the observed mutations could be the result of the treatment and may not have been observed with a different or no treatment. “We want to see if the idea of common functional drivers holds up when dealing with 20 to 30 cancer types and hundreds of untreated samples,” he said.


Communication among organs, tissues regulating body's energy revealed

An international research team led by the University of California, Irvine has identified a system of communication networks among organs and tissues that regulates metabolism. Findings from their study provide, for the first time, a detailed “atlas” illustrating how the body creates and uses energy, and how imbalances in the networks may impact overall health.

Published Sept. 6 in the journal Cell, the research reveals the highly coordinated, multi-tissue metabolism underlying the body’s circadian rhythms and examines how disruptions in these rhythms — such as those caused by high-fat diets — induce misalignment among the network clocks and can trigger inflammation, which has been linked to major diseases and can affect lifespan.

Lead author Paolo Sassone-Corsi, Donald Bren Professor of Biological Chemistry at UCI’s School of Medicine, first showed the circadian rhythm-metabolism link some 10 years ago, identifying the metabolic pathways through which circadian proteins sense energy levels in cells.

“The human body is a complex, beautifully integrated system that functions at optimum efficiency when the networks are in balance,” said Sassone-Corsi, director of UCI’s Center for Epigenetics and Metabolism. “When this system is disrupted through misalignment among organs, the body will function at a less-than-optimum level, which may lead to disease. We are presenting a map that illustrates how to achieve the best health possible through proper balance and homeostasis.”

The researchers examined a variety of genetic clocks — ranging from those in blood serum, the liver and muscle to those in the brain’s prefrontal cortex and hypothalamus, as well as in brown and white body fat. The resultant atlas maps the connections among various organs and tissues, which together make up the so-called body clock that governs day-night patterns of metabolic activity. The team then tested the connections to see how a high-fat diet in mice scrambled the body’s fine-tuned metabolic patterns and rewired the communication and coordination among clocks.

“The effects of the high-fat diet give evidence that external factors can disrupt the coordinated metabolic pattern,” Sassone-Corsi said, adding that with this atlas, information from one organ or tissue group can provide a systemwide understanding of metabolic irregularities and the illnesses related to them.

“We can now create an approach to personalized medicine based on an individual’s circadian metabolism,” he said. “Metabolic profiling is a big-data method of optimizing metabolic health.”

The international team partnered with the biomedical company Metabolon on this current research, and they will collaborate on a human study and further exploration of circadian-controlled metabolic networks in other organs and tissue groups.

Also contributing to the work were Nicholas J. Ceglia, Yu Liu, Danny Armenta and Pierre Baldi of UCI’s Institute for Genomics and Bioinformatics; Sara de Mateo, Marlene Cervantes, Serena Abbondante, Paola Tognini, Ricardo Orozco-Solis, Kenichiro Kinouchi, Selma Masri, Emiliana Borrelli and Kristin Eckel-Mahan of UCI’s Center for Epigenetics and Metabolism; and scientists from Germany’s Center for Diabetes Research and Institute of Experimental Genetics, the University of Cambridge, Harbor-UCLA Medical Center, and the King Abdullah University of Science and Technology in Saudi Arabia.

The National Institutes of Health, the Defense Advanced Research Projects Agency, the French National Institute of Health and Medical Research (INSERM), the King Abdullah University of Science and Technology, and the Novo Nordisk Foundation provided support.

Story Source:

Materials provided by University of California – Irvine. Note: Content may be edited for style and length.


What Anglo-Saxon teeth can tell us about modern health

Evidence from the teeth of Anglo-Saxon children could help identify modern children most at risk from conditions such as obesity, diabetes and heart disease.

Researchers from the University of Bradford found that analysis of the milk teeth of children’s skeletons from a 10th-century site in Northamptonshire, England, gave a more reliable indicator of the effects of diet and health than bone.

The study, published today, 6 September 2018, in the American Journal of Physical Anthropology, shows that analysing dentine from the milk teeth of the Anglo-Saxon children yields a picture of these children’s development from the third trimester of pregnancy onwards, which also serves as a proxy indicator of the mothers’ health. This is the first time that secure in utero data has been measured.

The skeletons analysed at the University of Bradford come from a settlement at Raunds Furnells and belong to a group known to have been undernourished. The effect of this undernourishment, or stress, is to limit the growth of bones, which in turn limits the evidence, such as age at death, that can be drawn from bone analysis alone.

Researchers were also able to look at children of different ages to see whether those who survived the first 1,000 days from conception, during which factors such as height are set, had different biomarkers for stress than those who died during this high-risk period.

Teeth, unlike bone, continue to grow under such stress and record the high nitrogen values that bone does not. This evidence gives a clearer picture of what is happening to the child from before birth: the teeth act, in effect, as an archive of the diet and health of both child and mother.

Dr Julia Beaumont, of the University of Bradford’s School of Archaeological and Forensic Sciences, said: “This is the first time that we have been able to measure with confidence the in utero nitrogen values of dentine. We find that when bone and teeth form at the same time, bone doesn’t record high nitrogen values that occur during stress. Our hypothesis is that bone isn’t growing but teeth are. So archaeology can’t rely on the evidence from bones alone because bone is not forming and recording during high stress and we can’t be sure, for example, of the age of a skeleton. Teeth are more reliable as they continue to grow even when a child is starving.”

As well as the archaeological significance of this method of analysis, Dr Beaumont believes it has a direct application to modern medicine.

She said: “There is a growing consensus that factors such as low birthweight have a significant impact on our likelihood of developing conditions such as heart disease, diabetes and obesity and that the first 1,000 days from conception onwards set our ‘template’. By analysing the milk teeth of modern children in the same way as the Anglo Saxon skeletons, we can measure the same values and see the risk factors they are likely to face in later life, enabling measures to be taken to mitigate such risks.”

Story Source:

Materials provided by University of Bradford. Note: Content may be edited for style and length.


Towards animal-friendly machines

Semi-autonomous and autonomous machines and robots can become moral machines using annotated decision trees containing ethical assumptions or justifications for interactions with animals.

Machine ethics is a young, dynamic discipline, which primarily targets people, not animals. However, it is important that animals are kept from harm when encountering these machines since animals cannot make informed decisions or react as humans would.

Several prototypes of semi-autonomous and autonomous machines that do not startle animals in the wild have been developed at the FHNW University in Brugg-Windisch, Switzerland. The prototypes are a ladybird-friendly robot vacuum cleaner, a self-driving car, a drone study for nature photography and advanced driver assistance systems.

The article “Towards animal-friendly machines” by Professor Oliver Bendel of the FHNW School of Business, published in De Gruyter’s open access journal Paladyn, Journal of Behavioral Robotics, describes how annotated decision trees for animal-friendly moral machines are being developed and compared while making the moral justifications transparent.

The modeling for the drone, for example, was presented in 2015; it instructed the drone to ignore humans, to avoid harming flying birds, and to photograph skittish animals only from an appropriate height.

The robot vacuum cleaner was programmed to identify ladybirds by their coloring and to pause vacuuming until the insect had moved on. Furthermore, the owner could control the machine’s morality by presetting it to spare ladybirds but vacuum other invasive or undesirable species. This may not seem animal-friendly, but absolute moral rules need not be enforced consistently if, for example, a vermin-free house is the justified goal.
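The vacuum's behavior can be pictured as a small annotated decision tree. The sketch below is a hypothetical reconstruction under stated assumptions (the function name, color labels, and rule structure are illustrative, not the actual FHNW implementation); the comments stand in for the moral annotations the article describes.

```python
# Illustrative annotated decision tree for a ladybird-friendly vacuum cleaner.
# Colors and rule names are assumptions for the sketch only.

def vacuum_decision(insect_detected, insect_color, spare_colors):
    """Return the action the vacuum takes when an insect may be in its path."""
    if not insect_detected:
        return "vacuum"            # annotation: no animal involved, no conflict
    if insect_color in spare_colors:
        # Annotation: harming this animal is to be avoided;
        # wait until the insect has moved on.
        return "pause"
    # Annotation: the owner's preset permits removing undesirable species.
    return "vacuum"

# The owner presets which colorings to spare (here: ladybird-like markings).
owner_presets = {"red_with_black_dots"}
print(vacuum_decision(True, "red_with_black_dots", owner_presets))  # -> pause
print(vacuum_decision(True, "brown", owner_presets))                # -> vacuum
```

Exposing the spare-list as an owner preset mirrors the article's point: the machine's morality is configurable, and the annotations make the ethical justification for each branch transparent.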

Programming advanced driver assistance systems (ADAS) to make decisions with respect to animals is the main focus of the Robocar design study. The study posits that ADAS should recognize warning signs for toad migration, hedgehog populations or deer crossings and adapt the car’s reactions (emergency braking, reduced speed, etc.) accordingly. In short, ADAS should identify such animals and animal species directly and react appropriately.
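The sign-to-reaction mapping the study posits can be sketched as a simple lookup; the sign identifiers and reaction names below are hypothetical placeholders, not part of the Robocar study itself.

```python
# Hypothetical mapping from recognized animal warning signs to ADAS reactions,
# following the examples named in the Robocar design study.

ANIMAL_REACTIONS = {
    "toad_migration": "reduce_speed",
    "hedgehog_population": "reduce_speed",
    "deer_crossing": "increase_braking_readiness",
}

def adas_reaction(recognized_sign):
    """Return the car's adaptation for a recognized warning sign, if any."""
    return ANIMAL_REACTIONS.get(recognized_sign, "no_change")

print(adas_reaction("deer_crossing"))   # -> increase_braking_readiness
print(adas_reaction("speed_limit_50"))  # -> no_change (not an animal sign)
```

A real system would of course combine such rules with direct animal detection, as the study notes, rather than relying on signage alone.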

“Both robotics and computer science must be sensitized to animal protection, and advocates for animal ethics should follow developments in robotics and artificial intelligence and should be involved in both,” said Professor Bendel.

Story Source:

Materials provided by De Gruyter. Note: Content may be edited for style and length.