Synchronization of Memory Cells Critical For Learning and Forming Memories

Credit: UNH

On the left is an enlarged image showing many hippocampal neurons, most of which are silent; only a few are active. On the right are close-ups of three highly active neurons, or memory cells, which become synchronized after memory formation.

The phrase “Pavlov’s dogs” has long evoked images of bells, food and salivating dogs. Even though this tried-and-true model mimics a variety of repetitive learning processes, what happens on a cellular level in the brain isn’t clear. Researchers at the University of New Hampshire took a closer look at the hippocampus, the part of the brain critical for long-term memory formation, and found that the neurons involved in so-called Pavlovian learning shift their behavior during the process and become more synchronized when a memory is being formed – a finding that sharpens our understanding of memory mechanisms and provides clues for developing future therapies for memory-related diseases like dementia, autism and post-traumatic stress disorder (PTSD).

“There are tens of millions of neurons in the hippocampus but only a small fraction of them are involved in this learning process,” said Xuanmao (Mao) Chen, assistant professor of neurobiology. “Before engaging in Pavlovian conditioning, these neurons are highly active, almost chaotic, without much coordination with each other, but during memory formation they change their pattern from random to synchronized, likely forging new connecting circuits in the brain to bridge two unrelated events.”

In the study, recently published in The FASEB Journal, researchers looked at Pavlovian learning patterns, or respondent conditioning, in mice. In the beginning, before any repetitive learning exercises, the mice did not know what to expect, and special imaging with an endomicroscope showed the researchers that the neural activity was disorderly. But after repeating different tasks associated with a conditioned stimulus, like a tone or bell, the mice began to recognize the pattern, and the highly active neurons became more synchronized. The researchers hypothesize that without forming synchronization, animals cannot form or retrieve this type of memory.
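
One common way to quantify the kind of synchronization described above is the mean pairwise correlation between simultaneously recorded activity traces. The sketch below applies that metric to simulated data; it is a minimal illustration of the concept, not the study's actual analysis pipeline, and the simulated "before" and "after" traces are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pairwise_corr(traces):
    """Mean correlation over all neuron pairs; traces is (n_neurons, n_timepoints)."""
    r = np.corrcoef(traces)
    upper = np.triu_indices_from(r, k=1)  # each pair counted once
    return r[upper].mean()

n_neurons, n_timepoints = 10, 500

# Before conditioning: independent, "chaotic" activity.
before = rng.normal(size=(n_neurons, n_timepoints))

# After conditioning: a shared drive correlates the population.
shared = rng.normal(size=n_timepoints)
after = 0.5 * rng.normal(size=(n_neurons, n_timepoints)) + shared

print(f"synchrony before: {mean_pairwise_corr(before):.2f}")  # ~0.0
print(f"synchrony after:  {mean_pairwise_corr(after):.2f}")   # ~0.8
```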

In the 1890s, the Russian physiologist Ivan Pavlov discovered classical conditioning through repetitive patterns of bell ringing, which signaled to his dogs that food was on its way and stimulated salivation. This same learned behavior is important for episodic knowledge, which is the basis for such things as learning vocabulary, textbook knowledge, and memorizing account passwords. Abnormal learning processing and memory formation are associated with a number of diseases like dementia, autism, and PTSD. People who struggle with these cognitive dysfunction-related disorders may have trouble retaining memories, or may even form too strong a memory, as with PTSD patients. The UNH researchers believe that understanding the fundamentals of how classical conditioning shapes neural connections in the brain could speed the development of treatments for these disorders in the future.

Contributing to these findings are Yuxin Zhou, doctoral candidate, and Liyan Qiu, research scientist, both at UNH; and Haiying Wang, assistant professor at the University of Connecticut.

This work was supported by the National Institutes of Health (NIH) and the Cole Neuroscience and Behavioral Faculty Research Awards.

The University of New Hampshire inspires innovation and transforms lives in our state, nation and world. More than 16,000 students from all 50 states and 71 countries engage with an award-winning faculty in top-ranked programs in business, engineering, law, health and human services, liberal arts and the sciences across more than 200 programs of study. As one of the nation’s highest-performing research universities, UNH partners with NASA, NOAA, NSF and NIH, and receives more than $110 million in competitive external funding every year to further explore and define the frontiers of land, sea and space. 

Hollywood's Dirtiest Secret?

With the Academy Awards around the corner, moviegoers and critics are busy scrutinizing the costumes, sets and performances of this year’s cinematic stand-outs.
  
When film scholar Hunter Vaughan watches a movie, he considers something else: How big of a toll did it take on the environment?

“I want to provide a counter-narrative to the typical story of Hollywood that looks at it not in terms of grandiose romanticization of the silver-screen, but through the hidden environmental tolls—the natural resource use, the waste production, the greenhouse gas emissions—that are seldom talked about,” says Vaughan, an environmental media scholar-in-residence in the College of Media Communication and Information (CMCI).

Vaughan’s new book, Hollywood’s Dirtiest Secret: The Hidden Environmental Costs of the Movies, does just that, shedding light on a wide range of surprising ecological villains, from the 1939 epic Gone With the Wind—which ignited Hollywood’s polluting love affair with explosions—to the 2009 sci-fi Avatar, an ostensibly eco-friendly digital pioneer that generated mountains of real-world waste. 

Vaughan found inspiration for the book when he was a doctoral student at Oxford in England. Walking home at night he noticed a blinding glow emanating from a window. When he peeked inside, he found a building full of humming generators, snaking electrical cords and glaring lights—all set up for a practice run-through of a scene in the 2007 fantasy The Golden Compass.

“I was shocked and disturbed by the amount of resources being used just for a run-through,” he recalls.

Over the following years, he scoured film archives and directors’ reports, toured studio lots and interviewed executives and local film crews.

He discovered an industry culture in which extravagance and waste have been not only allowed but celebrated, even as other industries have been pressured to conserve. 

Titanic waste

During the filming of Gone With the Wind amid lingering economic depression and scarcity, the filmmakers lit mounds of sets from previous films (including King Kong) ablaze for the epic “burning of Atlanta” scene, sending a plume of potentially hazardous smoke into the Los Angeles sky.

For Gene Kelly’s classic dance scene in the 1952 musical Singin’ in the Rain, producers ran countless gallons of water for a week on a backlot at MGM. When they realized they were starting to lose water pressure around 5 p.m., as residents of nearby Culver City got home from work, they altered their schedule to run the water sooner. 

“They knew that water was a resource of the commons and that it was finite, but rather than conserve it, they just ran it earlier,” he notes.

Such waste did not abate with the coming of the environmental movement.

In 1997, during the filming of the box-office smash Titanic, wastewater from the lavish set on the shore of a Mexican village polluted the nearby ocean, decimating the local sea urchin population, reducing fish levels by a third and taking a heavy toll on the local fishing industry. Three years later, during the filming of the drama The Beach, also starring Leonardo DiCaprio, film crews uprooted local flora on Phi Phi Leh island in Thailand, destroying dunes that served as a natural barrier against monsoons and tsunamis. The subsequent boom in tourism related to the film took such a toll on the coral reefs that the government suspended tourism there.

The truth about digital

Then there was Avatar.

“Avatar is problematic on so many levels,” says Vaughan.

A cautionary tale of the dangers of unbridled resource use, the 2009 production billed itself as entirely digital—a filmmaking method described as less resource-intensive than shooting live action. But Vaughan’s research revealed that the filmmakers produced entire real-life sets and wardrobes anyway, to get a better sense of how bodies and fabrics might move in the digital world of Pandora.

Our increased reliance on digital technology for entertainment comes at its own ecological cost, Vaughan stresses, with fiber-optic cables strung along the ocean floor, satellites and cell towers built to transmit signals, and the benign-sounding “cloud” made up of server farms gobbling up energy around the clock. While Netflix deserves kudos for its socially progressive content, he notes, it uses inordinate amounts of server space (and associated energy and coolant), and its autoplay interface, which starts the next show as soon as the previous one finishes, exacerbates the waste.

Award ceremonies like the Academy Awards and Golden Globes, with their lavish, wear-only-once gowns, $10,000 gift bags and private jets, also have an impact.

“They take a massive toll, both materially and symbolically,” he says.

In all, research has shown, the film industry is on par with the aerospace, apparel, hotel and semiconductor industries when it comes to energy use and emissions.

Greening future films

But the news is not all bad, stresses Vaughan.

Some studios have vowed to go carbon neutral, and actors like Mark Ruffalo, Matt Damon, Shailene Woodley and Jane Fonda are taking genuine and public steps to fight for cleaner air, water, land and environmental justice.

Vaughan is also doing his part.

With a grant from the UK-based Arts and Humanities Research Council (AHRC), he’s working to build a Global Green Media Production Network to facilitate eco-friendly production from East Asia to Latin America. 

What can moviegoers do?

Start by asking yourself some tough questions, he writes: Would you accept the extinction of a species in exchange for your favorite movie? How many downed trees’ or highways’ worth of carbon output is it worth?

“Filmgoers can choose not to endorse movies that rely on a spectacle of explosion, materialism and waste, and they can use social media to draw attention to their choices,” he says. “If we as an audience show Hollywood we want a certain type of film, they will start making them.”

Pregnant Women with Very High Blood Pressure Face Greater Heart Disease Risk

Women with preeclampsia are four times more likely to suffer a heart attack or cardiovascular death, Rutgers study finds

Women with high blood pressure in their first pregnancy have a greater risk of heart attack or cardiovascular death, according to a Rutgers study. 

The study is published in the Journal of Women’s Health.

Approximately 2 to 8 percent of pregnant women worldwide are diagnosed with preeclampsia, a complication characterized by high blood pressure that usually begins after 20 weeks of pregnancy in women whose blood pressure had been normal. Doctors haven’t identified a single cause, but it is thought to be related to insufficiently formed placental blood vessels. Preeclampsia is also the cause of 15 percent of premature births in the U.S.

The researchers analyzed cardiovascular disease in 6,360 women, ages 18 to 54, who were pregnant for the first time and diagnosed with preeclampsia in New Jersey hospitals from 1999 to 2013, and compared them with pregnant women without preeclampsia. They found that those with the condition were four times more likely to suffer a heart attack or cardiovascular death and more than twice as likely to die from other causes during the 15-year study period.

“Women who were diagnosed with preeclampsia tended also to have a history of chronic high blood pressure, gestational diabetes and kidney disease and other medical conditions,” said lead author Mary Downes Gastrich, an associate professor at Rutgers Robert Wood Johnson Medical School and a member of the Cardiovascular Institute of New Jersey.

Gastrich said the study suggests that all women be screened for preeclampsia throughout their pregnancy and that treatment be given to those with preeclampsia within five years after birth. “Medication such as low-dose aspirin also may be effective, according to one study, in bringing down blood pressure as early as the second trimester​​,” she said. 

Other Rutgers authors include Stavros Zinonos, Gloria Bachmann, Nora M. Cosgrove, Javier Cabrera, Jerry Q. Cheng and John B. Kostis.

The Hidden History of Valentine's Day

UNLV history professor Elizabeth Nelson separates facts about the effects of marketing, consumerism, and social media on the holiday’s evolution from fiction about love’s golden age.

Pets, spouses, co-workers, friends, classmates: They’re all in line to be on the receiving end of another record year for Valentine’s Day spending, says a new survey by the National Retail Federation.

But as Americans strive to return to the good old days of romance, one UNLV history professor says they never actually existed.

“People love the idea that there were these wonderful eras before our own time when people celebrated Valentine’s Day in the most authentic way,” says Elizabeth Nelson, a 19th-century pop culture expert, who began researching Valentine’s Day three decades ago and literally wrote the book on marketing the holiday. “But there was always this long and complicated history about Valentine’s Day and people actually thought that it was too commercial and insincere from the very beginning.”

We sat down with Nelson to get a handle on the history behind the holiday and the ways advertising, consumerism, and social media have changed the way we celebrate.

Who is St. Valentine and why does he have a holiday?

Popular lore says that in the 5th century A.D., there was a St. Valentine who was imprisoned for some transgression. The myth says the jailer’s daughter took pity on him, brought him food, and tried to save him. The incarcerated man sent her a note of thanks, signing it: “From your Valentine.”

The story falls apart on multiple historical levels — it seems unlikely that the jailer’s daughter would have been literate or that Valentine could’ve gotten paper and pen in a jail cell. But historians argue that — like Christmas, Easter, and many other modern holidays — Christians in the past tended to link saint holidays with pagan celebrations to help solidify conversion because people didn’t want to give up the ways in which they lived their lives. Blending these holidays allowed revelers to keep observing rituals from centuries ago. Over time, the original intent was forgotten.

In this case, there was also a Roman festival called Lupercalia, celebrating fertility, that might have influenced the celebration of Valentine’s Day. While we now celebrate Valentine’s Day in February, in the Middle Ages, Chaucer, in “The Canterbury Tales,” describes the holiday as occurring in May with imagery of springtime, birds, and budding flowers — which makes sense if linked to a Roman holiday centered on fertility. 

What’s more, there are several saints throughout history named Valentine. But none of them are patron saints of love.

Who celebrates Valentine’s Day and why?

Valentine’s Day is celebrated mostly in the United States and Britain. Before the 18th century, it was about exchanging gifts — gloves and spoons were traditional — and being someone’s valentine for a whole year. It sometimes served as a precursor to betrothal.

There are some interesting stories circulating about why it’s not as popular overseas.

Legend has it that in France, women who were rejected by their desired valentine would burn those men in effigy in a bonfire, causing a riotous ruckus — so allegedly, the government outlawed Valentine’s Day in the early 19th century. 

In England, there was a practice called “valentining,” where kids would go door to door asking for treats, similar to Halloween. However, over time, these public celebrations got out of hand and sometimes devolved into violence and mob action. So the proper, genteel middle class opted instead to change the focus from human interaction to the less dangerous exchange of cards.

When did the commercialization of Valentine’s Day begin?

In the 1840s, Valentine’s Day took off in the U.S. as increased paper production and printing presses lowered costs and expanded the supply of pre-printed cards featuring fancy lace, pictures, and other decorations. And sometimes celebrants copied pre-written poems out of books called “valentine writers” that featured bawdy sexual innuendo. My favorite metaphor: grating someone’s nutmeg.

One of the earliest American valentine businesses was run by Esther Howland in Worcester, Mass. She was the daughter of an insurance agent who ran a stationery store. She asked her father to import fancy paper, lace, and other decor from England to make valentines to sell. She employed female friends of the family, and asked her brothers to share sample valentines during their work trips as traveling salesmen. Esther received many orders and created a successful business during the 1850s and 1860s. Her story is quite amazing because we don’t think of women as running businesses in the 19th century.

Hallmark was founded in 1911, and technology made it possible to produce valentines in color and with various textures even more inexpensively than before. So, it’s really in the beginning of the 20th century that Valentine’s Day becomes part of a general movement to turn holidays into opportunities for selling things from candy to flowers to magazine advertisements. Valentine’s Day began to center more on children than before. People began exchanging valentines in school. Hallmark played a big role in marketing it to elementary students, shifting the focus to the competitive collecting of the most valentines rather than a single sincere one.

Has romance always been at the center of Valentine’s Day?

Initially, it was about having one valentine throughout the year and possibly becoming betrothed. But it evolved in the 19th century, sparking questions about the sincerity of exchanging pre-printed cards and the sanity of spending exorbitant amounts of money on them. 

Valentine’s Day and the exchange of valentines were a way that people in the emerging middle class in the 19th century negotiated the complicated relationship between romantic love and the economic reality of marriage. You could marry for love, but you still had to marry someone who could support you, because most middle-class women didn’t work. So, it was dangerous just to fall in love with people without knowing anything about them. The celebration of Valentine’s Day became a way for people to test the uncomfortable juxtaposition between what love and marriage should be and what was actually possible. So, not so different from today.

How has social media shifted the celebration of Valentine’s Day?

One of the things that’s nice about Valentine’s Day today is that there are a variety of ways to celebrate. There are Galentine’s and Singles Awareness Day celebrations, you can give your pet a gift, or you can even celebrate alone. You don’t have to wait for the candy or the flowers to come. People still do those things, but there’s less pressure to conform to a public declaration or celebration of it. And that’s the thing about Valentine’s Day: It’s about what other people see you doing or getting. How do you perform the idea of love rather than actually express or engage in the act of love? It’s the representation of the commercial items — getting flowers delivered to your office, going to a fancy restaurant, or getting an expensive piece of jewelry. It’s what other people think of your couplehood rather than what you think about it.

It is likely that Facebook and other social media have made Valentine’s Day more viral and more toxic, but the framework was already there. It’s not so much that social media changed the scrutiny that was already at the core of Valentine’s Day; it created a whole new possibility for performing the act of Valentine’s Day. Because social media sites are all about performing your imagined best self, the level of scrutiny on how you celebrate Valentine’s Day, or what you got for Valentine’s Day, is ratcheted up exponentially on Facebook. It is not just the people in your office or in your neighborhood; everybody in your world sees whether your sweetie did right by you or not, or vice versa.

Half-Quantum Step Toward Quantum Advantage

Credit: Yufan Li, Johns Hopkins University

A famous metaphor for a qubit is Schrödinger’s hypothetical cat, which can be both dead and alive. A flux qubit, a ring made of superconducting material, can have electric current flowing clockwise and counterclockwise simultaneously under an external magnetic field.

The Science

Superconductors are materials that have no electrical resistance below a critical temperature. They typically push away magnetic fields, as if they were surrounded by an anti-field shield. But to use superconductors as qubits (the units of a quantum computer), scientists have had to surround them with magnetic fields. Researchers recently measured a surprising effect in a new type of superconductor, bismuth palladium (β-Bi2Pd): even with no magnetic field around this superconductor, it existed in two states at once. That’s a necessary requirement for creating a qubit. This superconductor may host a Majorana fermion, an exotic quasiparticle. Scientists have proposed the idea of Majorana fermions but have not observed them in experiments.

The Impact

Classical computers process information using binary states (0 and 1), called bits. Quantum computers use quantum bits, or qubits, which can be both 0 and 1 at the same time. Qubits enable quantum computers to perform certain calculations many times faster than classical computers. However, large-scale quantum computers need qubits that are much more stable than those currently available. Scientists are researching many different possible approaches to qubits, including photons, trapped ions, loops of superconducting material, and Majorana fermions. Majorana fermions are a promising candidate for stable qubits.
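
As a concrete illustration of the 0-and-1-at-once idea, here is a minimal numerical sketch of a single qubit in an equal superposition. This is textbook quantum mechanics, not code from the research described here.

```python
import numpy as np

# Basis states |0> and |1> as complex vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition: loosely analogous to the flux qubit's two
# simultaneous current directions (clockwise and counterclockwise).
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are squared amplitudes.
p0 = abs(np.vdot(ket0, psi)) ** 2
p1 = abs(np.vdot(ket1, psi)) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.50 and 0.50
```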

Summary

Scientists are currently looking for material systems that can support long-lived, coherent quantum phenomena for the development of qubits for future quantum computers. One actively investigated system is based on Majorana fermions. In condensed matter physics, Majorana fermions are quasiparticles that are their own antiparticles, a fascinating quantum property. Because they always come in pairs, entangled Majorana quasiparticles could store quantum information at two discrete locations — for example, at opposite ends of a one-dimensional wire. Scientists have suggested that Majorana fermions might exist in a spin-triplet superconductor (a superconductor in which pairs of electrons align their spins in parallel, resulting in a net total spin). In this research, scientists observed a half-quantum flux, or half-quantum step, when measuring the influence of a magnetic field on patterned rings of thin films of β-Bi2Pd. This observation proves unconventional Cooper pairing of electrons; the half-quantum flux was first observed in high-temperature copper-oxide superconductors. Other experiments reported in the literature are also consistent with spin-triplet pairing in this material. Spin-triplet superconductivity is a necessary, though not sufficient, condition for hosting topologically protected Majorana fermions. These quasiparticles could serve as a platform for the development of stable qubits with long coherence times and robustness against atomic perturbations. What makes the half-quantum flux superconducting ring especially attractive is that such a field-free qubit device may enable practical applications of flux qubits for quantum computing.
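
For context on the term "half-quantum flux": in a conventional superconducting ring, the magnetic flux threading the ring is quantized in integer multiples of the flux quantum Φ0 = h/2e, so a persistent offset of Φ0/2 signals unconventional pairing. The arithmetic below uses standard physical constants; the framing is a simplified gloss on the summary above, not the paper's analysis.

```python
# Flux quantum for a conventional superconducting ring: phi0 = h / (2e).
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

phi0 = h / (2 * e)
print(f"flux quantum:      {phi0:.3e} Wb")      # ~2.068e-15 Wb
print(f"half-quantum step: {phi0 / 2:.3e} Wb")  # ~1.034e-15 Wb
```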

Funding

This work was supported by the U.S. Department of Energy (DOE), Basic Energy Sciences, including the SHINES Energy Frontier Research Center. E-beam lithography was conducted at the University of Delaware Nanofabrication Facility (UDNF) and the NanoFab laboratory of NIST (CNST).

Astronomers discover unusual monster galaxy in the very early universe

An international team of astronomers led by scientists at the University of California, Riverside, has found an unusual monster galaxy that existed about 12 billion years ago, when the universe was only 1.8 billion years old.

Dubbed XMM-2599, the galaxy formed stars at a high rate and then died. Why it suddenly stopped forming stars is unclear.

“Even before the universe was 2 billion years old, XMM-2599 had already formed a mass of more than 300 billion suns, making it an ultramassive galaxy,” said Benjamin Forrest, a postdoctoral researcher in the UC Riverside Department of Physics and Astronomy and the study’s lead author. “More remarkably, we show that XMM-2599 formed most of its stars in a huge frenzy when the universe was less than 1 billion years old, and then became inactive by the time the universe was only 1.8 billion years old.”

The team used spectroscopic observations from the W. M. Keck Observatory‘s powerful Multi-Object Spectrograph for Infrared Exploration, or MOSFIRE, to make detailed measurements of XMM-2599 and precisely quantify its distance.

Study results appear in the Astrophysical Journal.

“In this epoch, very few galaxies have stopped forming stars, and none are as massive as XMM-2599,” said Gillian Wilson, a professor of physics and astronomy at UCR in whose lab Forrest works.  “The mere existence of ultramassive galaxies like XMM-2599 proves quite a challenge to numerical models. Even though such massive galaxies are incredibly rare at this epoch, the models do predict them. The predicted galaxies, however, are expected to be actively forming stars. What makes XMM-2599 so interesting, unusual, and surprising is that it is no longer forming stars, perhaps because it stopped getting fuel or its black hole began to turn on. Our results call for changes in how models turn off star formation in early galaxies.”

The research team found XMM-2599 formed more than 1,000 solar masses a year in stars at its peak of activity — an extremely high rate of star formation. In contrast, the Milky Way forms about one new star a year.
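
Those two figures imply a remarkably fast assembly: at the peak rate, building XMM-2599's stellar mass takes only a few hundred million years, which fits the timeline quoted above. A rough back-of-the-envelope check (illustrative arithmetic, not a calculation from the paper):

```python
stellar_mass_msun = 3e11  # XMM-2599's stellar mass, in solar masses
peak_sfr = 1e3            # peak star-formation rate, solar masses per year

assembly_years = stellar_mass_msun / peak_sfr
print(f"{assembly_years:.1e} years")  # 3.0e+08: ~300 million years at peak,
# consistent with forming most stars before the universe was 1 billion years old
```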

“XMM-2599 may be a descendant of a population of highly star-forming dusty galaxies in the very early universe that new infrared telescopes have recently discovered,” said Danilo Marchesini, an associate professor of astronomy at Tufts University and a co-author on the study.

The evolutionary pathway of XMM-2599 is unclear.

“We have caught XMM-2599 in its inactive phase,” Wilson said. “We do not know what it will turn into by the present day. We know it cannot lose mass. An interesting question is what happens around it. As time goes by, could it gravitationally attract nearby star-forming galaxies and become a bright city of galaxies?”

Co-author Michael Cooper, a professor of astronomy at UC Irvine, said this outcome is a strong possibility.

“Perhaps during the following 11.7 billion years of cosmic history, XMM-2599 will become the central member of one of the brightest and most massive clusters of galaxies in the local universe,” he said. “Alternatively, it could continue to exist in isolation. Or we could have a scenario that lies between these two outcomes.”

The team has been awarded more time at the Keck Observatory to follow up on unanswered questions prompted by XMM-2599.

“We identified XMM-2599 as an interesting candidate with imaging alone,” said co-author Marianna Annunziatella, a postdoctoral researcher at Tufts University. “We used Keck to better characterize and confirm its nature and help us understand how monster galaxies form and die. MOSFIRE is one of the most efficient and effective instruments in the world for conducting this type of research.”

Schizophrenia Is A Disease, Not An Extreme of Normal Variation

“Bipolar disorder and schizophrenia, and many other types of mental illness, are diseases of the brain and should be treated and studied as such,” say Johns Hopkins researchers.

Does this statement seem a bit obvious, not exactly rocket science? It may, but this isn’t how the National Institute of Mental Health (NIMH) — the psychiatry wing of the National Institutes of Health — currently views severe mental disorders such as schizophrenia, autism, bipolar disorder and dementia. The NIMH is the largest federal agency funding research on mental disorders.

For the past decade, the NIMH has used a system called Research Domain Criteria (RDoC) to describe all mental illnesses as extremes along dimensions of psychological norms: too much or too little of common personality traits. For example, everyone has minor fears of things such as spiders, heights or snakes. But having very strong or unmanageable fears might constitute an anxiety disorder.

While this way of thinking may make sense for anxiety, Johns Hopkins physicians argue that for the most severe of mental disorders — such as autism, schizophrenia or bipolar disorder — the approach will lead clinicians and scientists in the wrong direction. These conditions aren’t the result of too much or too little of a normal human trait. Rather they represent a clear-cut shift outside the typical dimensions of human experience.

In every other field of medicine, researchers use animal models of diseases based on genes and their interactions that contribute to disease risk. However, the current NIMH approach directs psychiatric researchers to focus on normal variation. Research on animal models with genetic variations that increase the risk of diseases often doesn’t get funded, they say.

The researchers lay out their thoughts in two commentaries, both published in Molecular Neuropsychiatry, one published in 2018 and the other in the October 2019 issue.

In their first commentary, the researchers argue that the NIMH approach of thinking of mental illness in dimensional terms is like regressing to Galen’s humors of the second century, when all illnesses were attributed to an imbalance of one of the four humors: yellow bile, black bile, blood and phlegm. They then argue that a biomedical approach using the tools of genetics, neuroscience and imaging can lead to rational targets for therapies. The second commentary is a point-by-point critique of the NIMH system and its flaws. They say that the RDoC system moves away from the proven power of biomedical research, which explores the causes of diseases and their effects on human biology. They add that the RDoC system doesn’t appropriately address the natural history or progression of a disease.

“Using the RDoC system hasn’t advanced the field of psychiatry, diverts attention from achieving an understanding of underlying mechanisms and ultimately delays discovering rational treatments for these diseases,” says author Christopher A. Ross, M.D., Ph.D., professor of psychiatry, neurology, neuroscience and pharmacology at the Johns Hopkins University School of Medicine.

This change in how the NIMH approaches mental illnesses occurred about a decade ago. Leadership at the NIMH initiated the RDoC system with the best motives in mind, in order to encourage neuroscience research to study how cells communicate with one another in the brain. However, this change to the RDoC system happened before modern genetic and other techniques pointed toward specific causes of major mental illnesses.

“No other NIH institute has adopted a scheme so discordant from modern biomedical research practice,” says Ross.

“The NIMH strategy makes psychiatry — and especially psychiatric research — seem like a strange and esoteric endeavor, not part of mainstream biomedicine, with the consequence of stigmatizing the entire discipline, including its patients,” says co-author Russell Margolis, M.D., a professor of psychiatry and neurology at Johns Hopkins.

Now that investigators have identified some genetic and environmental causes, and are beginning to reveal molecular mechanisms behind these disorders, the researchers say it’s time for the NIMH to readjust its system. These changes should allow conditions such as autism, bipolar disorder and schizophrenia to be researched and treated as diseases — and not as fringe versions of normal variation. Moving toward a system that values the biomedical approach, comparable with the other NIH institutes, they say, would guide the NIMH to support studies on mechanisms of disease, so researchers can design more targeted therapies for those with different forms of these illnesses. Psychiatric genetics is complex, but so are the genetics of many common medical diseases, such as diabetes and rheumatoid arthritis. Nevertheless, in other fields, scientists successfully use modern biomedical techniques to address complex diseases. The authors contend that the field of psychiatry and patients with severe mental diseases deserve no less.

Ross received research support from JNJ/Janssen, Teva, Raptor/Horizon, Vaccinex, uniQure and Roche/Genentech unrelated to these publications, and has consulted for Teva, Sage, uniQure, Roche/Ionis, Azevan, Annexon and the Healthcare Services Group. Margolis received grant support from Teva unrelated to the publications discussed here.

Too Much of a Good Thing?

That Italian restaurant with the excellent linguini that you’ve indulged in so often you can no longer face a meal there.

The conference with brilliant but endless keynotes: You start the day full of enthusiasm, but by the fourth breakout you’re flagging. The action movie that has you on the edge of your seat for so long and with so little down time that your brain goes numb long before your legs do.

It’s called satiation. And once you pass the satiation point, consuming more — even of something you love — means enjoying it less. Overloaded by so much of one stimulus, your senses tire and stop registering enjoyment.

Of course, feeling satiated is a temporary state. Taking a break from the restaurant or skipping a few of the keynotes will leave you ready for more in due course.

So how do you know where the satiation point will kick in? And how long does it take to rebuild your appetite for more?

Shedding rigorous scientific light on all of this is new research by Darden Professor Manel Baucells.

EVERYTHING IN MODERATION?

Together with Lin Zhao of the Chinese Academy of Sciences, Baucells has created a mathematical model that charts the satiation state and the time that it takes for satiation to “decay” — in other words, the optimal amount of rest from an experience or activity that is needed in order for enjoyment to resume.  

“We know from research — and common sense — that the old axiom is true: Everything is better in moderation,” says Baucells. “You tire of something if you’re overexposed to it. If you go to a concert, you’re likely to enjoy the first songs more than those that come in the middle, unless the playlist has been carefully calibrated to avoid satiation. We wanted to calculate where satiation kicks in and how it impacts enjoyment. We also wanted to understand how much time needs to elapse until satiation subsides and we start to enjoy something again.”

Understanding these dynamics, says Baucells, can help optimize the design of experiences and activities.

THE SATIATION MODEL

Baucells and Zhao’s satiation model plots three core dimensions: the consumption rate of an experience or activity or product, the satiation level, and the moment-by-moment enjoyment produced by that experience or activity or product. This third dimension is called “instant utility.”

The model is novel in that it is the first to introduce a “de-satiation motive,” charting the time it takes for satiation to decay — and enjoyment to rise again.

The satiation model captures three key ideas:

  • The more frequently we consume something, the faster our satiation rate increases.
  • Enjoyment levels go down as satiation levels go up.
  • Resting between experiences decreases satiation and increases the enjoyment of the experiences that come after the break.

The paper also offers a “proof of concept” on how to measure, based on reports from individuals, specific parameters of the model such as how fast the satiation level decays during rest. Such measurements would allow us to improve the design of experiences, make better predictions on how much individuals would like a particular design, or monitor preferences from beginning to end of a time period.
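
To make the model's three ideas concrete, here is a minimal numerical sketch of a satiation-and-recovery process. The functional forms and parameter names (alpha for satiation build-up, delta for decay during rest) are illustrative assumptions, not Baucells and Zhao's published formulation.

```python
def simulate_enjoyment(consumption, alpha=0.5, delta=0.3):
    """Total instant utility over discrete periods.

    consumption: sequence of per-period consumption rates (0 = rest)
    alpha: how quickly consumption builds satiation (idea 1)
    delta: how quickly satiation decays each period (the de-satiation motive)
    """
    satiation, total = 0.0, 0.0
    for c in consumption:
        satiation *= (1 - delta)      # idea 3: rest lets satiation decay
        satiation += alpha * c        # idea 1: consuming builds satiation
        total += c / (1 + satiation)  # idea 2: enjoyment falls as satiation rises
    return total

# Six straight days of one activity vs. the same six days split by a rest day:
print(f"six straight days: {simulate_enjoyment([1] * 6):.2f}")              # ~2.95
print(f"split by a rest:   {simulate_enjoyment([1, 1, 1, 0, 1, 1, 1]):.2f}")  # ~3.05
```

With these toy parameters, splitting the activity around a rest day yields more total enjoyment than six consecutive days, the vacation-splitting effect discussed below.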

THE SCIENCE IN LEAVING THE BEST FOR LAST

“Right now, a combination of intuition and experience determines how experiential services are designed in many spheres of business,” says Baucells. “Intuitively we know when we go to a show or a concert that the best is generally left for last. But if you ask organizers or producers why that is, you’ll likely get a host of different reasons.”

The satiation model brings greater coherence to our understanding of the dynamics at play — a logical approach that can serve to either support or debunk gut feeling.

The model shows that satiation peaks and falls over a period of time. A high-low-high pattern works best for maximum satisfaction: Ideally, we’d still start an experience with a bang, then take things down a notch, then end with a grand finale. Having satiation peak right at the end of an experience or activity won’t penalize that activity because, simply put, nothing comes after the end. There is no further chance for satiation to increase, as the final peak is followed by an indefinite period of decay or rest. Moreover, ending on the highest note leaves one with a positive memory of the experience — an important source of consumer satisfaction.

Baucells’ model also points to how to optimize rests or breaks between activities (e.g., between songs in a concert), or to use variety to minimize satiation and optimize enjoyment.

“It’s the scientific explanation behind why we need to hear acoustic songs in a rock concert, or have our high-energy action interspersed with quieter scenes in a movie.”

So no matter how much you like kayaking or golfing, booking a six-day vacation centered around the activity will not be as fun as booking two separate three-day vacations. And mixing things up with, say, a horseback ride, will do wonders for how much more you appreciate the next golf course.

IMPLICATIONS

Decision-makers would do well to factor this understanding into business models, loyalty programs and marketing efforts, say the researchers.

Managing satiation more scientifically has benefits that span any number of sectors.

Restaurant managers might want to think about reducing portion sizes in order to boost the sale of desserts. Customer loyalty efforts might be well served both by prioritizing innovation and variety in offers, and by allowing greater periods of time to elapse between promotions.

There are key insights here that can even inform the debate on income inequality, Baucells says.

“The satiation model shows us that people tire of something if they do it too frequently. This can be just as easily applied to high-wealth individuals and spending habits,” says Baucells. “The model tells us that people cannot efficiently spend money on consumption indefinitely, and that has implications for inequality or philanthropy. Individuals with large wealth will eventually reach their satiation points in consumption, and their capacity to make any significant increase in enjoyment by spending more will eventually plateau. Past this point, philanthropy may make more sense.”

What Is An Endangered Species?

Gray wolves, like this pair on Isle Royale, are listed as endangered in the United States.

Credit: Michigan Tech

By John Vucetich, professor, College of Forest Resources and Environmental Science

Lions and leopards are endangered species. Robins and raccoons clearly are not. The distinction seems simple until one ponders a question such as: How many lions would there have to be and how many of their former haunts would they have to inhabit before we’d agree they are no longer endangered?

To put a fine point on it, what is an endangered species? The quick answer: An endangered species is at risk of extinction. Fine, except questions about risk always come in shades and degrees, more risk and less risk.

Extinction risk increases as a species is driven to extinction from portions of its natural range. Most mammal species have been driven to extinction from half or more of their historic range because of human activities. 

The query “What is an endangered species?” is quickly transformed into a far tougher question: How much loss should a species endure before we agree that the species deserves special protections and concerted effort for its betterment? My colleagues and I put a very similar question to nearly 1,000 (representatively sampled) Americans after giving them the information in the previous paragraph. The results, “What is an endangered species?: judgments about acceptable risk,” are published today in Environmental Research Letters.

Three-quarters of those surveyed said a species deserves special protections if it has been driven to extinction from any more than 30% of its historic range. Not everyone was in perfect agreement; some were more accepting of losses. The survey results indicate that people more accepting of loss were less knowledgeable about the environment and self-identified as advocates for the rights of gun and land owners. Still, three-quarters of this more loss-accepting group thought special protections were warranted if a species had been lost from more than 41% of its former range.

These attitudes of the American public are aligned with the language of the U.S. Endangered Species Act — the law for preventing species endangerment in the U.S. That law defines an endangered species as one that is “in danger of extinction throughout all or a significant portion of its range.”

But there might be a problem

Government decision-makers have tended to agree with the scientists they consult in judging what counts as acceptable risk and loss. These scientists express the trigger point for endangerment in very different terms. They tend to say a species is endangered if its risk of total and complete extinction exceeds 5% over 100 years.

Before human activities began elevating extinction risk, a typical vertebrate species would have experienced an extinction risk of 1% over a 10,000-year period. The extinction risk that decision-makers and their consultant experts have tended to consider acceptable (5% over 100 years) corresponds to an extinction risk many times greater than the risk we currently impose on biodiversity! Experts and decision-makers — using a law designed to mitigate the biodiversity crisis — tend to allow for stunningly high levels of risk. But the law and the general public seem to accept only a much lower risk, a level that would greatly mitigate the biodiversity crisis. What’s going on?
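
To compare those two standards on a common footing, convert each cumulative risk to a constant annual hazard. The arithmetic below is a back-of-the-envelope illustration, not a calculation from the paper:

```python
def annual_hazard(cumulative_risk, years):
    """Constant per-year extinction probability that yields the cumulative risk."""
    return 1 - (1 - cumulative_risk) ** (1 / years)

accepted = annual_hazard(0.05, 100)       # ~5.1e-04 per year
background = annual_hazard(0.01, 10_000)  # ~1.0e-06 per year

print(f"accepted:   {accepted:.1e} per year")
print(f"background: {background:.1e} per year")
print(f"ratio: {accepted / background:.0f}x")  # roughly 500x the pre-human rate
```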

One possibility is that experts and decision-makers are more accepting of the risks and losses because they believe greater protection would be impossibly expensive. If so, the American public may be getting it right, not the experts and decision-makers. Why? Because the law allows for two separate judgments. The first: Is the species endangered and therefore deserving of protection? The second: Can the American people afford that protection? Keeping those judgments separate is vital, because the case that more funding and effort are required to solve the biodiversity crisis is not helped when experts and decision-makers grossly understate the problem — as they do when they judge endangerment to entail such extraordinarily high levels of risk and loss.

Facts and Values

Another possible explanation for the judgments of experts and decision-makers was uncovered in an earlier paper led by Jeremy Bruskotter of Ohio State University (also a collaborator on this paper). That work showed that experts tended to base judgments about grizzly bear endangerment not so much on their own independent expert judgment as on what they think (rightly or wrongly) their peers’ judgment would be.

Regardless of the explanation, a good answer to the question “What is an endangered species?” is an inescapable synthesis of facts and values. Experts on endangered species have a better handle on the facts than the general public. However, there is cause for concern when decision-makers do not reflect the broadly held values of their constituents. An important possible explanation for this discrepancy in values is the influence of special interests on the decision-makers and experts charged with caring for biodiversity.

Getting the answer right is of grave importance. If we do not know well enough what an endangered species is, then we cannot know well enough what it means to conserve nature, because conserving nature is largely — either directly or indirectly — about giving special care to endangered species until they no longer deserve that label.

Research collaborators include Jeremy T. Bruskotter of Ohio State University, and Adam Feltz and Tom Offer-Westort, both of the University of Oklahoma.

Coronavirus

Two New Rapid Tests Could Play Key Role in Efforts to Contain Growing Epidemic

WASHINGTON – Breaking research in AACC’s Clinical Chemistry journal shows that two new tests accurately diagnose coronavirus infection in about 1 hour. These tests could play a critical role in halting this deadly outbreak by enabling healthcare workers to isolate and treat patients much faster than is currently possible. 

Since the coronavirus emerged in Wuhan, China, last month, this pneumonia-like illness has spread at an alarming rate. Just yesterday, the World Health Organization officially declared the outbreak a public health emergency, and as of today, the virus has infected nearly 10,000 people in China, with the death toll soaring to more than 200. More cases continue to appear around the globe, with six coronavirus cases already confirmed in the U.S. In order to contain this epidemic, healthcare workers need to quickly and accurately identify new coronavirus cases so that patients get crucial medical care and transmission can be halted. However, the Chinese labs that can test for coronavirus are currently overwhelmed. There are reports of hospitals in Wuhan having to deny testing for severely ill patients, who are then also denied full-time admission because beds need to be saved for those with confirmed diagnoses. Partly as a result of these testing difficulties, researchers estimate that only 5.1% of coronavirus cases in Wuhan have actually been caught.

A team of researchers led by Leo L.M. Poon, DPhil, of the University of Hong Kong has developed two rapid tests for the coronavirus that could break this diagnostic bottleneck. Using a technology known as real-time reverse transcription polymerase chain reaction (RT-PCR), the tests detect two gene regions that are only found in the Wuhan coronavirus (officially known as 2019-novel-coronavirus) and in other closely related coronaviruses such as SARS. The two gene regions detected by the tests are known as ORF1b and N. Significantly, both tests also take only about 1 hour and 15 minutes to run. This fast turnaround time could enable Chinese labs to greatly increase patient access to coronavirus testing. 
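
In practice, a two-target design like this suggests a simple decision rule: one assay screens, the other confirms. The sketch below illustrates that logic; the screening and confirmatory roles assigned to N and ORF1b here are assumptions for exposition, so consult the published assay protocol for the actual interpretation rules.

```python
def interpret(n_positive: bool, orf1b_positive: bool) -> str:
    """Toy interpretation of a two-target RT-PCR result (assumed roles:
    N screens, ORF1b confirms); not the published protocol."""
    if n_positive and orf1b_positive:
        return "positive: 2019-novel-coronavirus RNA detected"
    if n_positive:
        # The N target may also react to closely related coronaviruses such as
        # SARS, so a lone N signal would need follow-up testing.
        return "indeterminate: retest or sequence to confirm"
    return "negative: no target RNA detected"

print(interpret(n_positive=True, orf1b_positive=True))
```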

To evaluate the performance of these tests, Poon’s team first confirmed that the tests accurately identify genetic material extracted from cells infected with the SARS coronavirus. The researchers also showed that the tests return negative results for samples containing genetic material from other respiratory viruses, demonstrating that the tests accurately differentiate coronavirus infection from other causes of pneumonia. Lastly, Poon’s team used the tests to analyze sputum and throat swab samples from two patients infected with the 2019-novel-coronavirus. The tests correctly gave positive results for both patients. 

“Signs of [coronavirus] infection are highly non-specific and these include respiratory symptoms, fever, cough, [shortness of breath], and viral pneumonia,” said Poon. “Thus, diagnostic tests specific for this infection are urgently needed for confirming suspected cases, screening patients, and conducting virus surveillance. The established assays [in this study] can achieve a rapid detection of 2019-novel-coronavirus in human samples, thereby allowing early identification of patients.”

About AACC

Dedicated to achieving better health through laboratory medicine, AACC brings together more than 50,000 clinical laboratory professionals, physicians, research scientists, and business leaders from around the world focused on clinical chemistry, molecular diagnostics, mass spectrometry, translational medicine, lab management, and other areas of laboratory science. Since 1948, AACC has worked to advance the common interests of the field, providing programs that advance scientific collaboration, knowledge, expertise, and innovation. For more information, visit www.aacc.org.

Clinical Chemistry (clinchem.org) is the leading international journal of laboratory medicine, featuring nearly 400 peer-reviewed studies every year that help patients get accurate diagnoses and essential care. This vital research is advancing areas of healthcare ranging from genetic testing and drug monitoring to pediatrics and appropriate test utilization.