Health Life

An app monitors cancer patients’ health status and rewards participation

Credit: CC0 Public Domain

Close2U, an application for electronic devices, has been developed by researchers at the Complutense University of Madrid (UCM) and the University of Zaragoza (UZA) to monitor cancer patients’ physical and mental health using gamification.

Users answer a series of daily questions about their mood and where they are experiencing pain. In return, the app rewards them with advice or songs, resources intended to increase their motivation.

“The use of gamification enables more continuous monitoring of patients by obtaining frequent information about their mood. Among other things, this lets us know if they are depressed, stressed or in pain,” explained Iván García-Magariño, a researcher in the Department of Software Engineering and Artificial Intelligence at the UCM.

The study was conducted in collaboration with the Spanish Cancer Association (Spanish initials: AECC), primarily at its branch in Teruel, where patients tested the app.

Researchers from both universities have reported the development of the app and the results obtained in the Journal of Biomedical Informatics and the Journal of Healthcare Engineering.

Exchange of resources among patients

For example, for the question “How did you sleep?”, users mark a point on a horizontal line between the two extremes “very badly” and “very well,” while for the question “Where in your body are you experiencing pain?”, the screen displays an image of a body on which patients mark the areas affected by pain.
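As a rough sketch of how such daily check-ins might be represented in software, the Python snippet below models one slider answer and one body-map answer; all class and field names are hypothetical and are not taken from Close2U’s actual code.

```python
# Hypothetical data model for a daily check-in; names and fields are
# illustrative assumptions, not taken from Close2U's code.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SliderAnswer:
    """An answer marked as a point on a line between 'very badly' (0.0)
    and 'very well' (1.0)."""
    question: str
    value: float  # normalized position on the horizontal line, 0.0-1.0

@dataclass
class BodyMapAnswer:
    """Painful areas marked on a body image, stored as normalized
    (x, y) coordinates on the displayed figure."""
    question: str
    points: list[tuple[float, float]] = field(default_factory=list)

@dataclass
class DailyCheckIn:
    """One day's answers, ready to be sent to the monitoring physician."""
    patient_id: str
    day: date
    answers: list

check_in = DailyCheckIn(
    patient_id="p-042",  # hypothetical identifier
    day=date.today(),
    answers=[
        SliderAnswer("How did you sleep?", 0.35),
        BodyMapAnswer("Where in your body are you experiencing pain?",
                      [(0.48, 0.22), (0.51, 0.60)]),
    ],
)
```

Storing the slider as a normalized value and the pain marks as coordinates keeps each day’s record small and easy to track over time, which suits the frequent, lightweight reporting the gamified design relies on.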

The information obtained from their answers is sent to a hospital or association physician.

In return, patients are rewarded with advice or songs, which “are intended to amuse and entertain them, and which they can also share with other patients to provide mutual support,” observed García-Magariño.

He also noted that the researchers were working on incorporating the app into other devices such as smart furniture or watches.

Health Life

Study reveals design flaws of chatbot-based symptom-checker apps

Credit: Pixabay/CC0 Public Domain

Millions of people turn to their mobile devices when seeking medical advice. They’re able to share their symptoms and receive potential diagnoses through chatbot-based symptom-checker (CSC) apps.

But how do these apps compare to a trip to the doctor’s office?

Not well, according to a new study. Researchers from Penn State’s College of Information Sciences and Technology have found that existing CSC apps lack the functions to support the full diagnostic process of a traditional visit to a medical facility. Rather, they said, the apps can only support five processes of an actual exam: establishing a patient history, evaluating symptoms, giving an initial diagnosis, ordering further tests, and providing referrals or other follow-up treatments.

“These apps do not support conducting physical exams, providing a final diagnosis, and performing and analyzing test results, because these three processes are difficult to realize using mobile apps,” said Yue You, a graduate student in the College of Information Sciences and Technology and lead author on the study.
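To make that scope concrete, here is a minimal Python sketch of a consultation flow restricted to the five supported processes; the step names and the `ask` callable are illustrative assumptions, not any real CSC app’s implementation.

```python
# Illustrative sketch only: a linear consultation flow limited to the
# five processes the study says CSC apps can support. Step names and
# the ask() callable are hypothetical, not any real app's API.
from enum import Enum, auto

class Step(Enum):
    PATIENT_HISTORY = auto()      # age, conditions, medications
    SYMPTOM_EVALUATION = auto()   # structured questions about symptoms
    INITIAL_DIAGNOSIS = auto()    # ranked list of possible conditions
    ORDER_FURTHER_TESTS = auto()  # suggest labs or imaging to request
    REFERRAL_FOLLOW_UP = auto()   # point the user to a clinician

# Per the study, physical exams, a final diagnosis, and performing or
# analyzing tests are absent: they require in-person care.

def run_consultation(ask):
    """Walk a user through the supported steps; `ask` is any callable
    that poses a prompt and returns the user's reply."""
    record = {}
    for step in Step:
        record[step.name] = ask(f"[{step.name}] Please answer: ")
    return record

if __name__ == "__main__":
    # Toy run with canned replies instead of a real chat interface.
    print(run_consultation(lambda prompt: "sample reply"))
```

The point of the linear structure is what it leaves out: nothing in the flow can examine the patient or confirm a result, which is exactly the gap the researchers identified.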

In the study, the researchers investigated the functionalities of popular CSC apps through a feature review, then examined user experiences by analyzing user reviews and conducting user interviews. Through their user experience analysis, You and her team also found that users perceive CSC apps to lack support for a comprehensive medical history, flexible symptom input, comprehensible questions, and diverse diseases and user groups.

The findings could inform functional and conversational design updates for health care chatbots, such as improving the functions that enable users to input their symptoms or using comprehensible language and providing explanations during conversations.

“Especially in health and medicine, [another question is] is there something else we should consider in the chatbot design, such as how should we let users describe their symptoms when interacting with the chatbot?” said You.

Health Life

Researchers use artificial intelligence tools to predict loneliness

Credit: CC0 Public Domain

For the past couple of decades, there has been a loneliness pandemic, marked by rising rates of suicide and opioid use, lost productivity, increased health care costs and rising mortality. The COVID-19 pandemic, with its associated social distancing and lockdowns, has only made things worse, say experts.

Accurately assessing the breadth and depth of societal loneliness is daunting, limited by available tools such as self-reports. In a new proof-of-concept paper, published online September 24, 2020 in the American Journal of Geriatric Psychiatry, a team led by researchers at the University of California San Diego School of Medicine used artificial intelligence technologies, specifically natural language processing (NLP), to discern degrees of loneliness in older adults.

“Most studies use either a direct question of ‘how often do you feel lonely,’ which can lead to biased responses due to stigma associated with loneliness, or the UCLA Loneliness Scale, which does not explicitly use the word ‘lonely,'” said senior author Ellen Lee, MD, assistant professor of psychiatry at UC San Diego School of Medicine. “For this project, we used natural language processing or NLP, an unbiased quantitative assessment of expressed emotion and sentiment, in concert with the usual loneliness measurement tools.”
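As a loose illustration of what sentiment-based NLP looks like in practice, the snippet below scores a toy transcript with NLTK’s off-the-shelf VADER analyzer; this is a stand-in, not the pipeline the UC San Diego team used, and the sample utterances are invented.

```python
# A loose stand-in for NLP-based sentiment scoring, using NLTK's
# off-the-shelf VADER analyzer; the study's actual pipeline differs,
# and the sample utterances below are invented for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
sia = SentimentIntensityAnalyzer()

transcript = [
    "I talk to my daughter every week, and that always lifts my spirits.",
    "Most evenings it is just me and the television.",
]

# 'compound' runs from -1 (most negative) to +1 (most positive).
for utterance in transcript:
    scores = sia.polarity_scores(utterance)
    print(f"{scores['compound']:+.2f}  {utterance}")
```

Because the scoring is applied to what people actually say rather than to a direct “are you lonely?” question, it sidesteps some of the stigma-driven bias Lee describes.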

In recent years, numerous studies have documented rising rates of loneliness in various populations of people, particularly those most vulnerable, such as older adults. For example, a UC San Diego study published earlier this year found that 85 percent of residents living in an independent senior housing community reported moderate to severe levels of loneliness.

The new study also focused on independent senior living residents: 80 participants aged 66 to 94, with a mean age of 83 years. But rather than simply asking and documenting answers to questions from the UCLA Loneliness Scale, the researchers also interviewed participants.

Health Life

Deep learning helps explore the structural and strategic bases of autism

Visualization of the classification logic learned by a recurrent attention model (RAM). Credit: The Korea Advanced Institute of Science and Technology (KAIST)

Psychiatrists typically diagnose autism spectrum disorders (ASD) by observing a person’s behavior and by leaning on the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), widely considered the ‘bible’ of mental health diagnosis.

However, there are substantial differences among individuals on the spectrum, and a great deal remains unknown about the causes of autism, or even what autism is. As a result, an accurate diagnosis of ASD and a prognosis prediction for patients can be extremely difficult.

But what if artificial intelligence (AI) could help? Deep learning, a type of AI, deploys artificial neural networks loosely modeled on the brain to recognize patterns in a way that is akin to, and in some cases can surpass, human ability. The technique, or rather suite of techniques, has enjoyed remarkable success in recent years in fields as diverse as voice recognition, translation, autonomous vehicles, and drug discovery.

A group of researchers from KAIST, in collaboration with the Yonsei University College of Medicine, has applied these deep learning techniques to autism diagnosis. Their findings were published on August 14 in the journal IEEE Access.

Magnetic resonance imaging (MRI) scans of the brains of people known to have autism have been used by researchers and clinicians to try to identify brain structures they believed were associated with ASD. These researchers have achieved considerable success in identifying abnormal gray and white matter volume and irregularities in cerebral cortex activation and connections as being associated with the condition.
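For a sense of how such imaging findings can feed a deep learning model, here is a deliberately small PyTorch sketch of a 3D convolutional classifier over MRI volumes; everything in it is an illustrative assumption, and it is far simpler than the recurrent attention model the KAIST researchers describe.

```python
# Deliberately tiny PyTorch sketch: a 3D convolutional network that maps
# an MRI volume to ASD-vs-control scores. Architecture, input shape, and
# labels are illustrative assumptions, far simpler than the recurrent
# attention model (RAM) the KAIST team describes.
import torch
import torch.nn as nn

class TinyMRIClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),  # local volumetric features
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # one 16-dim vector per scan
        )
        self.head = nn.Linear(16, 2)  # two classes: ASD vs. typical control

    def forward(self, x):
        # x: (batch, 1, depth, height, width) structural MRI volume
        return self.head(self.features(x).flatten(1))

model = TinyMRIClassifier()
scan = torch.randn(4, 1, 32, 32, 32)  # toy batch of 4 downsampled volumes
logits = model(scan)                  # shape (4, 2) class scores
```

In practice, models of this kind are trained on labeled scans so that the learned convolutional features pick up the volumetric and connectivity irregularities described above, rather than hand-crafted measurements.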

These findings have subsequently been deployed in studies attempting more consistent diagnoses of patients than has been achieved via psychiatrist observations during counseling sessions. While such studies have reported high levels of diagnostic accuracy,