In The News
A new study finds that when it comes to prejudice, it does not matter whether you are smart or not, or conservative or liberal: each group has its own specific biases.
In fact, the study found that cognitive ability — whether high or low — only predicts prejudice towards specific groups.
“Very few people are immune to expressing prejudice, especially prejudice towards people they disagree with,” said lead author Dr. Mark Brandt of Tilburg University in the Netherlands.
For their study, Brandt and Dr. Jarrett Crawford of the College of New Jersey analyzed data from 5,914 people in the United States that included a measure of verbal ability and prejudice towards 24 different groups.
Analyzing the results, the researchers found that people with both relatively higher and lower levels of cognitive ability show approximately equal levels of intergroup bias, but towards different groups.
For instance, people with low cognitive ability tended to express prejudice towards groups perceived as liberal and unconventional, such as atheists, gays, and lesbians, as well as groups of people perceived as having low choice over group membership, such as ethnic minorities.
People with high cognitive ability showed the reverse pattern, according to the study’s findings. They tended to express prejudice towards groups perceived as conservative and conventional — Christians, the military, big business.
“There are a variety of belief systems and personality traits that people often think protect them from expressing prejudice,” Brandt said. “In our prior work we found that people high and low in the personality trait of openness to experience show very consistent links between seeing a group as ‘different from us’ and expressing prejudice towards that group. The same appears to be true for cognitive ability.”
While previous work has found that people with low cognitive ability express more prejudice, Brandt said his study found this was limited to only some target groups.
“For other target groups, the relationship was in the opposite direction,” he said. “For these groups, people with high levels of cognitive ability expressed more prejudice. So, cognitive ability also does not seem to make people immune to expressing prejudice.”
The researchers noted they would like to see if their findings will replicate in new samples, with new target groups, and additional measures of cognitive ability.
“We used a measure of verbal ability, which is essentially a vocabulary test,” Brandt said. “Although this measure correlates pretty well with other measures of cognitive ability, it is not a perfect nor a complete measure.”
The study was published in the journal Social Psychological and Personality Science.
Researchers at the University of Kent suggest that creativity and intermedial languages can be used as a bridge to communicate with autistic children.
In a new study, researchers engaged autistic children in an all-surrounding drama experience. This immersive environment exposes children to lights, sound, puppets, and masked characters. The intervention also allows children to play freely and respond, drawing out eye contact, speech, and shared play within the rich sensory context.
In a joint article, “Material voices: intermediality and autism,” appearing in the journal Research in Drama Education, Dr. Melissa Trimingham and Professor Nicola Shaughnessy say autistic people continue to be regarded as a community that is difficult to access due to “perceived disruptions of interpersonal connectedness”.
Their pioneering research using drama with autistic children started with a project “Imagining Autism: Drama, Performance and Intermediality as Interventions for Autistic Spectrum Conditions” (2011-2014). The intervention began in special schools and has now extended to working with families.
The project aims to help the whole family through teaching them new play skills using drama and puppetry, multi-sensory materials, and even comedy to help with challenging behavior.
The family program developed from workshops with teachers and caregivers in NAS (National Autistic Society) schools and was funded by the University of Kent.
The writers are parents of autistic children themselves and have personal experience of family life with autism.
Through detailed observations of two children, they demonstrate how “intermediality” unlocked some of the many and various languages autistic children use, facilitating their self-awareness.
They argue for wider use of creative ‘material’ languages such as puppetry, costumes, projection, microphones, lights, and sound in play as a bridge between the lived experience of autism and practices of education and care.
Source: University of Kent/EurekAlert
For many women, menopause often leads to a significant drop in physical activity levels. But rather than being a simple lack of energy, inactivity after menopause may be due to changes in dopamine signaling within the brain’s pleasure center. This could lead to a lack of rewarding feelings after exercising, according to a new rat study by researchers at the University of Missouri.
The findings show that activation of the dopamine receptors in a certain part of the brain may serve as a future treatment to improve motivation for physical activity in postmenopausal women.
“Postmenopausal women are more susceptible to weight gain and health issues,” said Victoria Vieira-Potter, assistant professor of nutrition and exercise physiology at University of Missouri. “This is especially frustrating for women, who already are dealing with significant changes to their bodies. We found that the decrease in physical activity that leads to weight gain may be caused by changes in brain activity.”
For the study, the research team compared the physical activity of highly fit rats to rats that had lower fitness levels. The researchers closely observed the rats’ use of their running wheels both before and after the rats had their ovaries removed. They also examined gene expression changes of dopamine receptors within the brain’s pleasure center.
The researchers found that the high-fit rat group naturally had more activity in the brain’s pleasure center. This correlated with greater wheel running before and after the loss of ovarian hormones.
Still, the high-fit rats experienced a significant reduction in wheel running after their ovaries were removed. This reduction in physical activity was also significantly connected to a reduction in their dopamine signaling levels, indicating that the brain’s pleasure center is involved.
“We found that in both groups of rats, the hormonal changes from menopause led to changes in the brain that translated to less physical activity,” Vieira-Potter said.
“The findings confirm previous evidence in humans and rodents that weight gain that occurs after menopause is likely due to decreased overall physical activity rather than increased energy intake from diet.”
“Understanding what is causing the decrease in activity and subsequent weight gain may allow us to intervene, possibly by activating dopamine receptors, to preserve the motivation to be physically active.”
Menopause typically occurs in women between the ages of 45 and 55 with an average age of 51. As the ovaries begin to reduce their production of female hormones, estrogen and progesterone, menstruation becomes less frequent until it finally comes to a stop.
Source: University of Missouri-Columbia
A new study finds that social media provides a solution for many women as they confront difficult decisions after a diagnosis of breast cancer.
Nevertheless, barriers to use of social media persist.
Researchers from the University of Michigan Comprehensive Cancer Center discovered that women who engaged on social media after a breast cancer diagnosis expressed more deliberation about their treatment decision and more satisfaction with the path they chose.
But the researchers found significant barriers to social media use for some women, particularly older women, those with less education, and minorities.
“Our findings highlight an unmet need in patients for decisional support when they are going through breast cancer treatment,” says lead study author Lauren P. Wallner, Ph.D., MPH.
“But at this point, leveraging social media and online communication in clinical practice is not going to reach all patients. There are barriers that need to be considered,” she adds.
Researchers surveyed 2,460 women newly diagnosed with breast cancer about their use of email, texting, social media, and web-based support groups following their diagnosis. Women were identified through the Surveillance, Epidemiology, and End Results database.
The study appears in JAMA Oncology.
The multiple communication channels associated with social media aided engagement. Overall, 41 percent of women reported some or frequent use of online communication.
Texting and email were most common, used by 35 percent of women. Twelve percent of women reported using Facebook, Twitter, or other social media sites, and 12 percent used web-based support groups.
“Women reported separate reasons for using each of these modalities. Email and texting were used primarily to let people know they had been diagnosed. They tended to use social media sites and web-based support groups to interact about treatment options and physician recommendations,” Wallner says.
“Women also reported using all of these outlets to deal with the negative emotions and stress around their breast cancer diagnosis. They’re using these communications to cope,” she says.
Online communication was more common in younger women and those with more education. Use also varied by race, with 46 percent of white women and 43 percent of Asian women reporting frequent use, compared to 35 percent of black women and 33 percent of Latinas.
The researchers also found that women who frequently used online communication had more positive feelings about their treatment decision. They were more likely to report a deliberate decision and more likely to be highly satisfied with their decision.
Despite these benefits, the study authors urge caution.
“For some women, social media may be a helpful resource. But there are still questions to answer before we can rely on it as a routine part of patient care,” Wallner says.
“We don’t know a lot about the type of information women are finding online. What are they sharing and what is the quality of that information? We need to understand that before we can really harness the potential of social media to better support patients through their cancer treatment and care.”
A new study provides surprising results: financial factors, including wives’ ability to support themselves in the event of a divorce, do not influence the risk of divorce.
Instead, how a couple divides its paid and unpaid work appears to influence whether a divorce occurs.
“My results suggest that, in general, financial factors do not determine whether couples stay together or separate,” said study author Alexandra Killewald, a professor of sociology at Harvard University.
“Instead, couples’ paid and unpaid work matters for the risk of divorce, even after adjusting for how work is related to financial resources.”
Titled “Money, Work, and Marital Stability: Assessing Change in the Gendered Determinants of Divorce,” the study uses nationally representative data on more than 6,300 different-sex couples, both spouses ages 18 to 55.
Researchers examined what effect, if any, couples’ division of labor, their overall financial resources, and wives’ economic prospects following divorce have on marital stability.
In the study, which appears in the American Sociological Review, Killewald compared couples married in 1974 or earlier with couples married in 1975 or later to explore whether the effects, or lack thereof, of these factors changed over time.
Killewald found that, in both the old and new cohorts, financial factors did not play a role in divorce. On the other hand, while the division of labor did affect marital outcomes in both cohorts, there was some variation in terms of what division of labor was better for marriage stability.
For couples married before 1975, the higher the percentage of housework a woman did, the less likely her marriage was to end in divorce.
The marriage cohort made a difference, however: for couples married after 1975, the amount of housework a woman did had no effect on the risk of divorce.
“For couples married more recently, expectations for the division of housework between spouses appear to have changed, so that men are expected to contribute at least somewhat to household labor,” said Killewald.
Killewald discovered that, even in the more recent marriage cohort, wives do more than 70 percent of the housework, on average.
“In general, men seem to be contributing a little more than they used to, and these contributions may now be expected and appreciated by wives.”
Killewald found that, for couples married after 1974, neither wives’ full-time employment nor sharing the housework more evenly was associated with the risk of divorce.
In this cohort, husbands’ full-time employment was an important factor in marital stability, with the risk of divorce higher for men who were not employed full-time.
“For contemporary couples, wives can combine paid and unpaid labor in various ways without threatening the stability of their marriage,” according to Killewald.
Killewald discovered that while the gender revolution and the feminist movement have allowed women to take on traditionally male-dominated roles and responsibilities, men’s roles and responsibilities have not expanded or diversified proportionately.
“While contemporary wives need not embrace the traditional female homemaker role to stay married, contemporary husbands face higher risk of divorce when they do not fulfill the stereotypical breadwinner role, by being employed full-time,” Killewald said.
By finding that couples’ overall resources and wives’ economic prospects following divorce did not determine whether marriages lasted, the study dispels the theory that attributes the rise in divorce rates to women’s increased financial independence.
“The fact that divorce rates rose during the second half of the 20th century at the same time when women were moving into the labor force has prompted some speculation that marital stability has declined because women no longer ‘need’ to be married for financial security,” Killewald said.
“For some, this implies that women’s entry into the work force has come at the expense of stable marriages. My results do not suggest any tradeoff of that kind.”
Though changing gender roles have afforded women greater flexibility in terms of labor without jeopardizing their marriages, the study indicates that men have not been granted similar freedom.
“Often when scholars or the media talk about work-family policies or work-family balance, they focus mostly on the experiences of women,” Killewald said.
“Although much of the responsibility for negotiating that balance falls to women, my results suggest one way that expectations about gender and family roles and responsibilities affect men’s lives, too: men who aren’t able to sustain full-time work face heightened risk of divorce.”
In terms of the study’s policy implications, Killewald said her research may help guide policymakers who are considering the societal impact of policies that provide financial support to unmarried women.
“Because I do not find that couples are more likely to divorce when women are better able to sustain themselves financially in the event of a divorce, public financial support — to divorced women and other groups — such as the earned income tax credit (EITC) or the Supplemental Nutrition Assistance Program (SNAP), is unlikely to heighten divorce rates,” Killewald said.
Psychotherapeutic mental exercise modules provided on smartphones can help to quickly improve mood, say researchers from the University of Basel.
An international study found that brief, directed smartphone mental exercises helped participants feel more alert, calmer, and uplifted. The apps consisted of five-minute video tutorials that guided participants through a variety of topics, such as concentrating on their bodies.
The subjects could choose between various established or more modern psychotherapeutic exercise modules termed micro-interventions. Some of the participants, for example, recalled emotional experiences during the exercise, while other test subjects repeated short sentences or number sequences in a contemplative manner, or played with their facial gestures.
The subjects recorded their mood on their smartphones, answering short questions by marking a six-step scale both before and after the exercise.
Those who succeeded in immediately improving their mood through the brief exercises benefited over the longer term as well: Their mood improved overall during the two-week study phase.
The study, conducted by researchers in associate professor Marion Tegethoff’s team at the University of Basel, included 27 healthy young men as part of a larger research program.
The use of modern communication technology to improve psychological health is a current topic of research referred to as “mobile health”, or “mHealth” for short. Complex internet-based therapy programs have been studied in depth in recent years.
However, to date researchers have paid somewhat less attention to the study of smartphone-aided micro-interventions.
“These findings demonstrate the viability of smartphone-based micro-interventions for improving mood in concrete, everyday situations,” explains Tegethoff. Such applications could represent a useful addition to the psychotherapeutic options currently available.
“Now we need to carry out more extensive studies to help us understand the extent to which smartphone-based micro-interventions are responsible for the improvement in mood, and also perform studies on patients with psychological disorders,” says Tegethoff.
She also notes that such help options, which are available anytime, anywhere, are also in keeping with the idea of personalized medicine — a step along the path towards a health-care system that will one day be able to provide exactly the right treatment at the right time and in the right place.
The videos are available free of charge to anyone who is interested, allowing them to be used for future studies as well.
Investigators caution that the videos should not replace treatment by a qualified professional for people suffering from depression or other psychological disorders.
Source: University of Basel
Emerging research provides evidence that medications used to treat attention-deficit/hyperactivity disorder (ADHD) offer long-term benefits.
The finding is one of the first to show how ADHD medication in childhood can reduce risky behavior in adolescence.
Based on an analysis of Medicaid claims for nearly 150,000 children diagnosed with ADHD in South Carolina between 2003 and 2013, researchers found that treatment with ADHD medication made children less likely to suffer the consequences of risky behaviors, such as sexually transmitted diseases, substance abuse, and injuries, during their teen years.
The finding is salient, as eleven percent of children ages four to 17 in the United States have been diagnosed with ADHD, and almost 70 percent of them are treated with medications.
Children who are diagnosed with ADHD — a chronic condition characterized by attention difficulty and/or hyperactivity and impulsiveness — are known to be at higher risk for risky behaviors such as dangerous driving, drug use, and risky sexual behavior.
“ADHD is such a major issue, but no one seemed to be able to give a very definite answer to the long-term effect of the medication,” says Princeton University postdoctoral associate Dr. Anna Chorniy, who conducted the research with Leah Kitashima, a Ph.D. candidate at Clemson University.
“For our sample population, we were able to see everyone who had an ADHD diagnosis and track their health over time to identify any potential benefits of the medication, or the lack thereof.”
Researchers compared children who were diagnosed with ADHD but did not receive medication with those who took medication.
They discovered those who took the medication were 3.6 percentage points less likely to contract a sexually transmitted disease, 7.3 percentage points less likely to have a substance-abuse disorder, and 2.3 percentage points less likely to be injured.
In absolute numbers, in a sample of about 14,000 teens diagnosed with ADHD, that translates into 512 fewer teens contracting an STD and 998 fewer having a substance-abuse disorder. There would also be 6,122 fewer yearly injury cases for children and teens under 19 years old.
The research is described in an article published online this month by the journal Labour Economics.
While previous research has established the effectiveness of medications in treating the core symptoms of ADHD, little has been known about the effects of pharmacological treatment on health, behavioral and educational outcomes in the long run.
Evidence so far points to positive effects on some outcomes but not others. A 2014 paper by Princeton economist Dr. Janet Currie and other researchers found such treatment was actually associated with a decrease in academic performance, a deterioration in relationships with parents, and an increased likelihood of depression. Other work has shown some reduction in hospital visits and police interactions.
“Many professionals and parents still doubt the existence of beneficial long-term effects of ADHD medication,” said Dr. Helena Skyt Nielsen, a professor at Aarhus University in Denmark who has studied ADHD treatment in children but wasn’t involved in this research.
“Therefore, it is extremely important to collect more hard evidence on the impact of ADHD medication. Chorniy’s paper is a great example that non-experimental impact assessments are very informative about the consequences of ADHD medication.”
The current paper is the first of several research projects in which Chorniy paints a clearer picture of how ADHD is diagnosed and treated, as well as the associated short- and long-term effects of medication. One paper in the works seeks to provide an explanation for the rise in ADHD diagnoses and treatment, and look at the effects of recently approved medications for ADHD.
“I think all these papers together will give us a clearer picture of the reasons behind ADHD’s explosion and the effects of ADHD medication,” Chorniy said.
“Given that disadvantaged children and teens enrolled in Medicaid, a public insurance program, are disproportionately diagnosed with ADHD, these are important policy questions to address: why are there more children taking ADHD drugs today than a decade ago, what benefits do they deliver, and at what cost?”
Source: Princeton University
For survivors of acute respiratory distress syndrome (ARDS), poor lifestyle factors, such as obesity and smoking, are more closely tied to a subsequently poorer quality of life than is the actual severity of their illness, according to a new multi-university study.
ARDS is a progressive condition in which patients have difficulty breathing due to a fluid leak in the lungs. It typically occurs in people who are already critically ill or who have suffered traumatic injuries. Most ARDS patients are unable to breathe on their own without support from a ventilator.
For the study, critical care researchers from Intermountain Medical Center in Salt Lake City evaluated 616 patients who had been treated for ARDS to find out which factors played the most significant role in their quality of life six months following discharge from the hospital.
The findings show that the patient’s acuity, or level of illness, was not a significant marker in their subsequent quality of life, but rather it was lifestyle factors, specifically obesity and smoking, that were tied to a worse quality of life.
Researchers from the National Institutes of Health ARDS Network, Johns Hopkins University, Brigham Young University and the University of Utah School of Medicine, also participated in the study.
With survival rates improving for ARDS patients, understanding and improving their quality of life outcomes is a clinical and research priority, according to the study’s lead researcher Samuel M. Brown, M.D., MS, FASE, director of the Center for Humanizing Critical Care at Intermountain Medical Center.
“The ICU and the critical care environment are so focused on life-and-death issues, and we’re so busy as clinicians, that we often don’t have time to think about lifestyle factors, such as obesity and smoking and the role they play in our patient’s long-term quality of life. Our study emphasizes the need for us to do more of that,” said Brown.
Another significant finding from the study was that patients’ level of acuity in the hospital was not a significant predictor of a poor quality of life after being discharged from the hospital.
“We see patients who we’re treating for ARDS who are very sick and, who at the time, may not look like their quality of life will be great, but our study shows that their level of acuity is not a marker of whether they will experience a high quality of life once they leave the hospital,” said Brown.
“We found that quality of life for ARDS survivors is more influenced by lifestyle choices, such as smoking and obesity.”
Brown added that these findings suggest that smoking cessation education should be incorporated into the critical care setting.
“Evidence from our study, and other evidence, suggests that there is an urgent need to better support these patients who survive ARDS because they’re confronting some difficulties and unique challenges,” said Brown.
Next, the researchers plan to study specific interventions that would benefit vulnerable patients at risk for a poor quality of life after hospital discharge, he added.
The study findings are published online in the journal Thorax.
Source: Intermountain Medical Center
Being a couch potato and bingeing on TV series can literally be hazardous to your health.
So says the American Heart Association, as a new study found that watching a lot of television every day may increase your risk of dying from a blood clot in the lung.
A lung blood clot, known medically as a pulmonary embolism, usually begins as a clot in the leg or pelvis as a result of inactivity and slowed blood flow. If the clot breaks free, it can travel to a lung and become lodged in a small blood vessel, where it is especially dangerous.
In the study, from 1988-1990, Japanese researchers asked 86,024 participants, age 40-79, how many hours they spent watching TV. Over the next 19 years, 59 participants died of a pulmonary embolism.
Researchers found that compared to participants who watched TV less than 2.5 hours each day, deaths from a pulmonary embolism increased by:
- 70 percent among those who watched TV from 2.5 to 4.9 hours;
- 2.5 times among those who watched TV five or more hours; and
- 40 percent for each additional two hours of daily TV watching.
“Pulmonary embolism occurs at a lower rate in Japan than it does in Western countries, but it may be on the rise,” said Hiroyasu Iso, M.D., Ph.D., professor of public health at Osaka University Graduate School of Medicine and study corresponding author.
“The Japanese people are increasingly adopting sedentary lifestyles, which we believe is putting them at increased risk.”
Authors noted that the risk is likely greater than the findings suggest.
Deaths from pulmonary embolism are believed to be underreported because diagnosis is difficult. The most common symptoms of pulmonary embolism — chest pain and shortness of breath — are the same as other life-threatening conditions, and diagnosis requires imaging that many hospitals are not equipped to provide.
Researchers accounted for several factors that might have influenced findings, including obesity, diabetes, cigarette smoking, and hypertension. After the number of hours spent watching TV, obesity appeared to have the next strongest link to pulmonary embolism.
Toru Shirakawa, M.D., study first author and a research fellow in public health at Osaka University Graduate School of Medicine, said the findings may be particularly relevant to Americans. Other studies indicate U.S. adults watch more television than Japanese adults.
“Nowadays, with online video streaming, the term ‘binge-watching’ to describe viewing multiple episodes of television programs in one sitting has become popular,” Shirakawa said. “This popularity may reflect a rapidly growing habit.”
Authors said people who watch a lot of TV can take several easy steps to reduce their risk of developing blood clots in their legs that may then move to their lungs.
“After an hour or so, stand up, stretch, walk around, or while you’re watching TV, tense and relax your leg muscles for five minutes,” said Iso, noting this advice is similar to that given to travelers on long plane flights. He added that drinking water may also help and, in the long run, shedding pounds if overweight is likely to reduce risk.
The study, published in the journal Circulation, recorded participants’ viewing habits before computers, tablets and smartphones became popular sources of information and entertainment.
Sadly, the new technologies have probably increased the risk of pulmonary embolism, although additional studies are needed to confirm this.
Source: American Heart Association
Using brain imaging technology, scientists have found similarities in white matter impairments in children with autism spectrum disorder (ASD), attention-deficit hyperactivity disorder (ADHD), and obsessive compulsive disorder (OCD).
A team of Toronto-based researchers used brain imaging to examine the white matter in 200 children with ASD, ADHD, OCD, or no diagnosis. White matter is made up of bundles of nerve fibers that connect cell bodies across the brain, and enable communication between different brain regions.
“We found impairments in white matter in the main tract connecting the right and left hemispheres of the brain in children with either ASD, ADHD, or OCD, when compared to healthy children in the control group,” said Dr. Stephanie Ameis, first author on the study.
This particular white matter tract, the corpus callosum, is the largest in the brain and among the first to develop.
The research team also found children with ASD and ADHD showed more severe impairments affecting more of the brain’s white matter than those with OCD.
This finding may reflect the fact that both autism and ADHD typically have an onset at a much younger age than OCD, and at a time when a number of different white matter tracts are going through rapid development, said Ameis.
Autism, ADHD, and OCD have common symptoms and are linked by some of the same genes. Yet historically they have been studied as separate disorders. Together, these three neurodevelopmental disorders affect roughly 15 percent of children and youth.
The study is part of a major Ontario initiative, the Province of Ontario Neurodevelopmental Disorders Network (POND) that is examining various childhood brain-related disorders collectively, to better understand their similarities and differences, and develop more effective and targeted therapies.
The study has been published in the American Journal of Psychiatry.
Investigators explain that many of the behaviors that contribute to impairment in autism, ADHD, and OCD, such as attention problems or social difficulties, occur across these conditions, and differ in severity from person to person.
The researchers found that the brain’s white matter structure was associated with a spectrum of behavioral symptoms present across these diagnoses. Children with greater brain impairment also had higher impairments in functioning in daily life, regardless of their diagnosis, said Ameis.
This finding has implications for our understanding of the nature of brain-related disorders, notes senior author Dr. Evdokia Anagnostou.
The new research provides biological evidence that brain structure relates to a spectrum of behavioral symptoms that cut across different developmental conditions. As such, it highlights the shared biology among such conditions.
Investigators also believe their finding suggests that treatments targeting a spectrum of behaviors may be relevant for all three conditions.
New research suggests children with autism often have an inner ear deficiency that may impact their ability to recognize speech.
Investigators believe the finding could ultimately be used as a way to identify children at risk for the disorder at an early age.
“This study identifies a simple, safe, and non-invasive method to screen young children for hearing deficits that are associated with autism,” said Anne Luebke, Ph.D., an associate professor at the University of Rochester Medical Center and a co-author of the study.
“This technique may provide clinicians a new window into the disorder and enable us to intervene earlier and help achieve optimal outcomes.”
Study findings appear in the journal Autism Research.
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairments in social-communication skills and restricted and repetitive behaviors.
The disorder is difficult to identify in very young children. Although many signs of ASD are present before age two, the majority of children with ASD are not diagnosed until after age four.
Early detection of ASD would allow corrective therapies to begin before symptoms fully develop, enhancing the impact of the interventions.
One of the challenges to early detection of ASD is finding ways to identify children at risk for the disorder sooner, including children with speech delays.
Some of the earliest and most consistent signs of ASD involve auditory communication. This presents a conundrum, as most tests rely on speech and are often ineffective in children who are very young or who have communication delays.
In the new study, researchers used a technique that measures what are called otoacoustic emissions. The test is similar to the screening that many newborns must undergo before leaving the hospital to check for hearing problems.
Using miniature speaker/microphone earplugs, the researchers were able to measure hearing deficiencies by listening for signs that the ear is having difficulty processing sounds.
Specifically, the device’s highly sensitive microphone can detect the minute sound emissions made by the inner ear’s outer hair cells in response to certain tones or clicking sounds. If these cells are not functioning properly, the device fails to detect an emission, which indicates that inner ear — or cochlear — function is impaired.
In the current study, researchers tested the hearing of children between the ages of six and 17, roughly half of whom had been diagnosed with ASD. They found that the children with ASD had hearing difficulty in a specific frequency range (1-2 kHz) that is important for processing speech. They also found a correlation between the degree of cochlear impairment and the severity of ASD symptoms.
“Auditory impairment has long been associated with developmental delay and other problems, such as language deficits,” said Loisa Bennetto, Ph.D., an associate professor of clinical and social sciences in psychology and a co-author of the study.
“While there is no association between hearing problems and autism, difficulty in processing speech may contribute to some of the core symptoms of the disease. Early detection could help identify risk for ASD and enable clinicians to intervene earlier.
Additionally, these findings can inform the development of approaches to correct auditory impairment with hearing aids or other devices that can improve the range of sounds the ear can process.”
Researchers believe use of the test is appealing because it is non-invasive, inexpensive, and does not require the subject to respond verbally. As such, this technique could be adapted to screen infants, and enhance early detection of ASD.
Source: University of Rochester
Can you look at someone and tell if they are honest? Many of us believe we can and a new Canadian study explains why we have this perception (accurate or not).
Researchers determined that certain facial features, not the expression, influence whether people think someone is trustworthy. That is, some people may “look” honest.
University of British Columbia’s psychology professor Dr. Stephen Porter, and Ph.D. student Alysha Baker, recently completed two studies determining that people often make judgments of trustworthiness based solely on the face.
“Our findings in this and our past studies suggest that your physical appearance can have major implications for your assumed credibility and other character traits, even more powerful than the manner in which you behave and the words you speak,” said Porter.
“The implications in social, workplace, corporate, and criminal justice settings are enormous.”
In their studies, the researchers asked participants to watch a video, listen to audio-only pleas, or examine a photo of people publicly asking for the return of a missing relative. They then asked for their personal perceptions of general trustworthiness and honesty.
“A lot of information that feeds into our impressions about one’s trustworthiness is deduced from the face,” said Baker, who conducted much of the research.
“More specifically, there are certain facial features that are considered to make an individual look more trustworthy — higher eyebrows, more pronounced cheekbones, a rounder face — and other features that are perceived as untrustworthy-looking — downturned eyebrows, or a thinner face.”
The studies cited two real criminal cases, one with an 81-year-old woman and one with a father of a missing nine-year-old girl. People believed the elderly woman’s public appeal for justice, even though it was later determined she had killed her husband.
Many judged the father to be lying, based on his facial features, even though he later proved to be innocent.
“When encountering a person in any given situation, we automatically and instantaneously form an impression of whether a target is worthy of our trust because, evolutionarily, this kind of assessment has helped our survival. For example, assessing ‘friend or foe’,” said Baker.
“We’re typically not aware of this quick decision and it may be experienced as ‘intuition’, but this can be particularly problematic in the legal system because these first impressions are often unfounded and can lead to biased decision-making.”
Baker cautions that in some legal settings those who are untrustworthy-looking may be judged more harshly and receive different outcomes than those deemed to be trustworthy-looking.
This has occurred in the United States where untrustworthy-looking men are more likely to receive the death penalty than trustworthy-looking men convicted of similar crimes.
This study appears in the journal Psychology, Crime & Law.
New research now shows that even a brief period of stress can cause part of the brain involved in memory to start shrinking — even before changes are evident in behavior and memory itself.
The region in question is the hippocampus, a pair of curved structures at the base of our brains. This brain region encodes memories of facts and events — names, phone numbers, dates, and daily events that we need to run our lives.
“Until now, no one actually knew the evolution of these changes. Does the hippocampus shrink before or after memory loss? Or do the two happen hand-in-hand?” said Dr. Sumantra Chattarji, one of the main investigators in this study.
To address this, an international collaborative project involving Chattarji’s group from the National Centre for Biological Sciences (NCBS) in Bangalore, India, and Dr. Shane O’Mara’s lab at Trinity College, Dublin, used rats as a model system.
The researchers used rats as a model because they react to stress much as humans do. That is, they develop anxiety-related behaviors, and their ability to form memories is affected.
Years of research have established methods to test rats’ memories and responses to various forms of stress, making rats widely used models for studying questions about the brain and behavior.
In the current study, rats were subjected to stress for two hours every day over ten days. The rats’ brains were examined with MRI scans on several days over the course of the study, and their ability to form memories was assessed repeatedly using two different tests.
Striking results emerged in the first set of MRI scans taken after just three days of stress — the hippocampus of every stressed rat had shrunk.
“It was a totally unexpected result. Normally, structural changes are seen in the brain after a long time — say 10 to 20 days. Three days doesn’t even count as chronic stress,” said Chattarji.
Five days after stress exposure, the rats’ hippocampus-based ability to make memories was tested. Here again, the researchers were in for a surprise.
Stressed rats performed almost as well as unstressed rats.
“Volume loss and shrinkage has already happened, yet spatial memory is still holding up,” said Chattarji.
At the end of the chronic stress regime, the hippocampus of stressed rats had shrunk even more. Further, a second, different memory test performed after this scan showed stark differences: stressed rats performed poorly compared to unstressed rats.
The finding that loss of brain volume may precede memory loss, along with details on other aspects of how the brain’s structure changes during stress, is published in the journal Scientific Reports.
In the early days of stress, shrinkage in the left hippocampus is more pronounced, but at the end of 10 days, the right hippocampus loses the most volume.
“Right now, we don’t really know the functional significance of this. There is some evidence that in mice undergoing social stress, only the left hippocampus shrinks. If there is any inherent difference between the left and right hippocampus, that needs to be studied,” said Mohammed Mostafizur Rahman, a Ph.D. student with Chattarji and the lead author of the study.
Another discovery was that the rats differed individually in how much the chronic stress regime affected them. The amount of shrinkage in a rat’s hippocampus on day three predicted the shrinkage seen at the end of the 10-day stress period, and the greater the shrinkage, the worse the rat’s performance in memory tests at the end of the stress.
“This makes it even stronger that volume loss is a pretty good predictor of what the behavioral consequences will be at a much later stage,” said Chattarji.
Many different groups, including Chattarji’s, have studied stress in rodent models for a long time. “What comes out in our study is that there are individual differences between rats,” said Mostafizur.
“In today’s world, with so much talk about personalized medicine, these results could have huge implications for future studies on human disease,” he said.
New web-based technology is helping pediatricians manage growing caseloads of children with attention deficit hyperactivity disorder (ADHD).
The web-based solution helps reduce ADHD behavioral symptoms in children receiving care at community pediatric practices by coordinating care and ensuring patients get the most effective ADHD medications.
The benefit of the software was recently validated by a multi-institutional study that found the software helped to improve the quality of ADHD care and patient outcomes.
Community-based pediatric practices are often a busy and chaotic environment with providers carrying high volume caseloads. In this setting, providers have historically had a difficult time managing medications and monitoring children, resulting in substandard care for children with ADHD.
“Our data show the software not only helped improve the quality of medication care received by children treated at community-based pediatric practices, but it also improved treatment outcomes for these children,” said Jeffery Epstein, Ph.D., the study’s principal investigator.
“As a result of the improved quality of ADHD care, children treated by pediatricians using this new technology had significantly less ADHD symptoms than children treated by pediatricians who were not given access to this web-based technology.”
The study is discussed online in the journal Pediatrics.
The ADHD care quality improvement (QI) software was developed by Epstein and research colleagues at Cincinnati Children’s Hospital. The American Academy of Pediatrics (AAP) has selected this software for use in pediatric practices that are participating in a five-state QI learning collaborative to improve care for children with ADHD.
Available through a web-based portal, the software helps community practices collect, score, and interpret reports from parents and teachers regarding children’s ADHD symptoms. This allows pediatricians to better gauge whether medications are working with their patients.
Providers at community practices can customize the schedule of collection of these ratings for each patient. When ratings are completed, automated algorithms score and interpret data.
Physicians then receive text and graphs charting patient response to medication and other related information, allowing them to determine if ADHD symptoms are improving in response to the prescribed medication and dosage.
The current study involved a randomized clinical trial coordinated through Cincinnati Children’s and Nationwide Children’s Hospital in Columbus, Ohio, where study co-author Kelly Kelleher, M.D., serves as director of the Center for Innovation in Pediatric Practice.
The trial was conducted at 50 community-based pediatric practices involving 199 providers, who were randomized to provide ADHD care either with the technology-assisted QI intervention or without it.
A total of 373 children with ADHD included in the study were prescribed ADHD medications for their condition (165 children at practices using the software intervention and 208 at control practices not using the software). A standard rating scale (the Vanderbilt ADHD Parent Rating Scale) was used before and after treatment to rate ADHD symptoms.
Medicated children cared for at control practices (which did not use the software) experienced an average 10.19-point reduction on the parent-rated symptom scale. Children at pediatric practices using the technology-based intervention experienced an average symptom reduction of 13.19 points.
Use of the technology enabled significantly more treatment contacts with clinical staff and a greater number of parent and teacher ratings to monitor the effectiveness of medications. Moreover, researchers discovered treatment effectiveness and outcomes were more quickly assessed at practices using the software.
A limitation of the study is that community settings often lack a standardized method for collecting data. This made it difficult to generalize the findings to all community practices and providers, according to the authors.
The study also focused only on the primary outcome of ADHD symptoms. It did not evaluate functional impairments (such as school performance), which are often why families seek treatment for ADHD.
Source: Cincinnati Children’s Hospital
New research from Columbia University Medical Center (CUMC), New York State Psychiatric Institute, and New York-Presbyterian suggests an odor identification test may prove useful in predicting cognitive decline and detecting early-stage Alzheimer’s disease.
The investigators performed two studies showing that the test, the University of Pennsylvania Smell Identification Test (UPSIT), offers a practical, low-cost alternative to other detection methods, which are often more invasive and expensive.
In one study, researchers administered UPSIT to 397 older adults (average age of 80 years) without dementia from a multiethnic population in northern Manhattan. Each of the participants also had an MRI scan to measure the thickness of the entorhinal cortex, the first area of the brain to be affected by Alzheimer’s disease.
Four years later, 50 participants (12.6 percent) had developed dementia, and nearly 20 percent had signs of cognitive decline.
The researchers found that low UPSIT scores, but not entorhinal cortical thickness, were significantly associated with dementia and Alzheimer’s disease. (Low UPSIT scores indicate decreased ability to correctly identify odors.)
Low UPSIT scores, but not entorhinal cortical thickness, also predicted cognitive decline, although entorhinal cortical thickness was significantly associated with UPSIT score in those who transitioned to dementia.
“Our research showed that odor identification impairment, and to a lesser degree, entorhinal cortical thickness, were predictors of the transition to dementia,” said Seonjoo Lee, Ph.D., presenting author.
“These findings support odor identification as an early predictor, and suggest that impairment in odor identification may precede thinning in the entorhinal cortex in the early clinical stage of Alzheimer’s disease.”
In another study, researchers from CUMC evaluated the usefulness of UPSIT and tests that measure the amount of amyloid in the brain (in higher amounts, the protein forms plaques in the brains of those with Alzheimer’s disease) in predicting memory decline.
The researchers administered UPSIT and performed either beta amyloid PET scanning or analysis of cerebrospinal fluid in 84 older adults (median age of 71 years). Of these, 58 participants had mild cognitive impairment. The researchers followed the participants for at least six months.
At follow-up, 67 percent of the participants had signs of memory decline. Testing positive for amyloid with either method, but not UPSIT score, predicted cognitive decline. However, participants with a score of less than 35 were more than three times as likely to have memory decline as those with higher UPSIT scores.
“Our research suggests that both UPSIT score and amyloid status predict memory decline,” said William Kreisl, M.D., a neurologist at New York-Presbyterian/Columbia.
“Younger age, higher education, and shorter follow-up may explain why UPSIT did not predict decline as strongly in this study as in previous studies. Although more research is needed, odor identification testing, which is much less expensive and easier to administer than PET imaging or lumbar puncture, may prove to be a useful tool in helping physicians counsel patients who are concerned about their risk of memory loss.”
Current methods are only capable of clinically detecting Alzheimer’s disease in the later stages of its development, when significant brain damage has already occurred.
“Our study adds to the growing body of evidence demonstrating the potential value of odor identification testing in the detection of early-stage Alzheimer’s disease,” said D.P. Devanand, M.D., professor of psychiatry at CUMC and senior author of both studies.
University of Pittsburgh research scientists discovered that participation in a community-based behavioral lifestyle intervention program helped individuals increase their health-related quality of life by an average of nearly 10 percent.
Community based behavioral lifestyle intervention programs help individuals lose weight, increase their physical activity levels, and reduce their risk of diabetes and heart disease.
The finding that these programs concomitantly improve quality of life and health demonstrates the emotional and mental benefits of living a healthy lifestyle.
The analysis appears in the journal Quality of Life Research.
“These community-based lifestyle intervention programs have additional valuable benefits, beyond the improvement of risk factors for type II diabetes and heart disease,” said lead author Yvonne L. Eaglehouse, Ph.D., a postdoctoral researcher at Pittsburgh Public Health.
“Our study demonstrates that these programs, delivered in diverse community settings such as senior centers and worksites, simultaneously and significantly improved the quality of life of the participants.”
Eaglehouse and colleagues investigated the impact of the Group Lifestyle Balance program, modified from the lifestyle intervention program used in the highly successful U.S. Diabetes Prevention Program (DPP).
The DPP was a national study demonstrating that people at risk for diabetes who lost a modest amount of weight and increased their physical activity levels sharply reduced their chances of developing diabetes and metabolic syndrome and outperformed people who took a diabetes drug instead.
Group Lifestyle Balance is a 22-session program administered over a one-year period aimed at helping people make lifestyle changes to improve their risk for diabetes and heart disease. The goals of the program are to help participants reduce their weight by seven percent and increase their moderate-intensity physical activity (such as brisk walking) to 150 minutes per week.
As part of the Pitt community intervention effort, a total of 223 participants were enrolled to test the effectiveness of the Group Lifestyle Balance program at a worksite and three community centers in the Pittsburgh area. The participants averaged 58 years of age and had pre-diabetes or metabolic syndrome or both.
Before beginning the program, each participant ranked his or her current health on a scale from zero “worst imaginable health state” to 100 “best imaginable health state.” The U.S. average is 79.2, whereas the participants averaged 71.5 at baseline.
After completing the year-long Group Lifestyle Balance program, the participants increased their average health-related quality-of-life score to 78.2.
When looking at only those with baseline health-related quality of life below the U.S. average, there was an even greater magnitude of improvement, from 61.8 at baseline to 74 at the end of the program.
Researchers found that participants who met both the weight loss and physical activity goals increased their health-related quality-of-life score by nine more points than participants who met neither program goal.
“It is exciting that we were able to document an improvement in health-related quality of life in addition to improvement in risk factors for diabetes and cardiovascular disease,” said senior author Andrea Kriska, Ph.D., professor in Pittsburgh Public Health’s Department of Epidemiology and principal investigator of the NIH study.
“This important benefit was most evident in those who started the intervention program having a relatively lower quality of life — in other words, those who needed to improve the most.”
A new international study finds that almost six years typically pass between the onset of bipolar disorder symptoms and a confirmed diagnosis and the initiation of treatment.
Many experts believe crucial opportunities to manage bipolar disorder early are being lost because of the delay.
Researchers from the University of New South Wales and Italian colleagues have published their findings in the Canadian Journal of Psychiatry.
Investigators performed a meta-analysis of 9,415 patients from 27 studies, the largest of its kind.
They discovered many patients experience distressing and disruptive symptoms for several years until receiving proper treatment for bipolar disorder, previously known as manic-depressive illness.
According to lead researcher Dr. Matthew Large, a psychiatrist at Prince of Wales Hospital, the delay is often longer for young people because moodiness is sometimes misperceived by parents.
This is common, as providers may attribute symptoms to the ups and downs of the teenage years rather than the emergence of bipolar disorder. The missed diagnosis is troubling because bipolar disorder can be effectively treated with mood-stabilizing medication.
“This is a lost opportunity because the severity and frequency of episodes can be reduced with medication and other interventions,” Large said. “While some patients, particularly those who present with psychosis, probably do receive timely treatment, the diagnosis of the early phase of bipolar disorder can be difficult.”
“This is because mental health clinicians are sometimes unable to distinguish the depressed phase of bipolar disorder from other types of depression.”
“The diagnosis of bipolar disorder can also be missed because it relies on a detailed life history and corroborative information from caregivers and family, information that takes time and care to gather.
“Clinicians should look more closely at a patient’s history of mood symptoms, looking for distinct changes in mood, and other risk factors, for example, a family history and mood swings caused by external events such as treatment with antidepressants, overseas travel and taking drugs,” Large said.
As a result of the findings, researchers are calling for a consistent approach to the recording of the onset of symptoms of bipolar disorder. Additionally, further studies on the early symptoms and predictors of bipolar disorder and the reasons for treatment delay are indicated.
The study was conducted in collaboration with researchers from St. Vincent’s Hospital in Sydney, and St. John of God Clinical Research Centre and the University of Bari in Italy.
A new study finds that induction of labor is not associated with increased risk of autism spectrum disorders in children.
The large Harvard School of Public Health study should allay concerns about induced labor increasing autism risk and will aid clinical decisions about whether or not to induce labor.
The study appears online in JAMA Pediatrics.
Autism spectrum disorders (ASD) are a group of permanent developmental disabilities characterized by impairments in social interaction and language development, and repetitive behaviors. ASD is estimated to affect roughly one in 90 children in the United States.
Labor induction is recommended when labor doesn’t progress on its own and there’s concern that waiting for it to start could endanger the health of the baby or mother.
Methods to induce labor include rupturing of membranes, mechanical or pharmacological ripening of the cervix, and administration of oxytocin, either used alone or in combination.
The number of induced labors and the incidence of ASD are on the rise in the U.S. Moreover, in 2013, a large study in North Carolina found an association between induction of labor and risk of autism in offspring.
The report gained widespread media attention, and although both the paper’s authors and other experts cautioned that the association may not be a cause and effect relationship, obstetricians began reporting that some of their patients were expressing concern about or opposition to being induced.
As a result, researchers decided to further explore whether induction of labor truly causes increased risk of neuropsychiatric disorders, in order to help in weighing the risks and benefits of this common therapeutic intervention.
“When we used close relatives, such as siblings or cousins, as the comparison group, we found no association between labor induction and autism risk,” said Dr. Anna Sara Oberg, lead author of the study.
“Many of the factors that could lead to both induction of labor and autism are completely or partially shared by siblings, such as maternal characteristics or socioeconomic or genetic factors. Finding no association when comparing siblings suggests that previously observed associations could have been due to some of these familial factors — not the result of induction.”
Working with colleagues from Sweden’s Karolinska Institutet and Karolinska University Hospital, Harvard Medical School, and Indiana University, the researchers studied all live births in Sweden from 1992-2005.
They followed over one million births through 2013, looking for any neuropsychiatric diagnoses and identifying all siblings and maternal first cousins. They also incorporated several measures of the mothers’ health in their analysis.
Nearly two percent of babies in the study population were diagnosed with autism during the follow-up period, the researchers found.
Overall, 11 percent of the deliveries had involved induction of labor, often occurring in conjunction with pregnancy complications such as gestational diabetes, gestational hypertension, and preeclampsia. Twenty-three percent of the induced pregnancies were post-term.
In their initial comparison of individuals who weren’t related to each other, the researchers found an association between labor induction and ASD risk, similar to that previously reported. But when they compared children born to the same mother — in one, labor was induced, in the other, it wasn’t (“induction-discordant” siblings) — they no longer saw an association.
“Overall, these findings should provide reassurance to women who are about to give birth, that having their labor induced will not increase their child’s risk of developing autism spectrum disorders,” said Dr. Brian Bateman, anesthesiologist and senior author of the study.
“It is important to note that the findings pertain to the risks associated with labor induction per se, and not the specific method or medication used in the process, including oxytocin,” said Oberg.
A new study finds that a diagnosis of mild cognitive impairment or early dementia does not necessarily portend a dark prognosis.
Scientists from University of Kentucky’s Sanders-Brown Center on Aging asked 48 men and women with early dementia or mild cognitive impairment (MCI) a series of questions about their quality of life and personal outlook post-diagnosis.
The survey, called the Silver Lining Questionnaire (SLQ), was designed to measure the extent to which people believe their illness has had a positive benefit in certain key life areas.
Study participants responded that the diagnosis has improved personal relationships and fostered a greater appreciation for life.
Moreover, study members reported that the diagnosis had enhanced their personal inner strength, facilitated changes in their life philosophy, and had a positive influence on others.
The SLQ assessment instrument has been administered previously to patients with cancer diagnoses, but had not been given to MCI/dementia patients before, according to Gregory Jicha, M.D., Ph.D., a professor at the Sanders-Brown Center on Aging and the study’s lead author.
“The overall assumption is that this diagnosis would have a uniformly negative impact on a patient’s outlook on life, but we were surprised to find that almost half of respondents reported positive scores,” Jicha said.
Positive responses were even higher on certain scores, such as:
- appreciation and acceptance of life;
- less concern about failure;
- self-reflection, tolerance of others, and courage to face problems in life;
- strengthened relationships and new opportunities to meet people.
“The common stereotype for this type of diagnosis is depression, denial, and despair,” Jicha said. “However, this study, while small, suggests that positive changes in attitude are as common as negative ones.”
The next step, according to Jicha, is to explore the variables that affect outlook in these patients with an eye towards interventions that might help the other half find their “silver lining.”
Jicha presented the study data at the Alzheimer’s Association International Conference in Toronto.
A new Dutch study suggests that fraudulent behavior often occurs when an individual is unhappy about being rejected in a business environment.
Fraud is deliberate deception to secure unfair or unlawful gain, and this false representation of fact is often committed by individuals who are not professional criminals.
In the new study, researchers discovered we are more likely to submit false insurance claims if our original submissions are rejected. Regardless of whether the rejection is fair or unfair, and whether a financial reward is at stake, being rejected makes us feel unhappy, and we react by behaving dishonestly.
For the study, researchers used a mock insurance claim scenario and found that people whose claims were initially rejected were quick to fudge their stories to get their claims settled.
Whilst the odd small claim inflation in the real world may seem harmless enough to the perpetrator, insurance fraud is a very expensive crime. According to the FBI, insurance fraud amounts to around $40 billion per year, costing the average U.S. family between $400 and $700 annually.
This finding is salient given the escalation of health care costs and federal deficits.
“Fraud is a widespread issue that is costing society and thereby each individual large sums of money. The problem with fraud is that it benefits a few people, but as a result harms the rest of a population,” said researcher Dr. Sophie Van Der Zee.
Understanding what drives people to falsify information on their insurance claims could mean huge savings to both the insurance firm and the consumer.
The scientists responsible for this research think they have the answer: clarity and transparency on the part of the insurer. Make the guidelines clear, and make the rejection policy clearer still.
The study looked at the rejection of a person’s efforts, and how it affected their emotions and subsequent behavior.
To do this, researchers used an online platform that allowed participants to fill out and submit mock insurance claims. The format also allowed the participants to report their levels of happiness, sadness, frustration, anxiety, and guilt. The claims were either accepted or rejected by the researchers.
People whose claims were rejected reported more negative emotions. They were also significantly more likely to cheat or lie in the next phase of the study, regardless of whether the rejection was made on objective or subjective grounds, or whether or not there was a financial incentive.
Interestingly, the emotional sting of having a claim rejected appeared to be the key factor: neither the fairness of the rejection nor the prospect of financial gain changed the effect.
According to Van Der Zee, this means that tackling fraud will positively affect the lives of many honest people.
“If we understand when people tend to behave dishonestly and commit fraud, we can construct the environment in a way that people are encouraged to behave honestly rather than deceptively.”