
Artificial intelligence could help predict future diabetes cases

Credit: CC0 Public Domain

A type of artificial intelligence called machine learning can help predict which patients will develop diabetes, according to an ENDO 2020 abstract that will be published in a special supplemental section of the Journal of the Endocrine Society.

Diabetes is linked to increased risks of severe health problems, including heart disease and cancer. Preventing diabetes is essential to reducing the risk of illness and death. “Currently we do not have sufficient methods for predicting which generally healthy individuals will develop diabetes,” said lead author Akihiro Nomura, M.D., Ph.D., of the Kanazawa University Graduate School of Medical Sciences in Kanazawa, Japan.

The researchers investigated the use of a type of artificial intelligence called machine learning in diagnosing diabetes. Artificial intelligence (AI) is the development of computer systems able to perform tasks that normally require human intelligence. Machine learning is a type of AI that enables computers to learn without being explicitly programmed. With each exposure to new data, a machine-learning algorithm grows increasingly better at recognizing patterns over time.

“Using machine learning, it could be possible to precisely identify high-risk groups of future diabetes patients better than using existing risk scores,” Nomura said. “In addition, the rate of visits to medical institutions might be improved to prevent the future onset of diabetes.”

Nomura and colleagues analyzed 509,153 nationwide annual health checkup records from 139,225 participants from 2008 to 2018 in the city of Kanazawa. Among them, 65,505 participants without diabetes were included.

The data included physical exams, blood and urine tests and participant questionnaires. Patients without diabetes at the beginning of the study who underwent more than two annual health checkups during this period were included. New cases of diabetes were recorded during patients’ checkups.
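
The article does not say which learning algorithm the researchers used, so the following is only a minimal sketch of the general approach: training a gradient-boosted classifier on tabular checkup features to rank participants by their risk of developing diabetes. All file and column names are invented for illustration.

```python
# Illustrative sketch only: the study's actual features, model, and
# preprocessing are not described in the article. Names are invented.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical annual-checkup table: one row per participant.
checkups = pd.read_csv("annual_checkups.csv")  # placeholder file name
features = ["age", "bmi", "fasting_glucose", "hba1c", "systolic_bp",
            "triglycerides", "smoking", "family_history"]
X = checkups[features]
y = checkups["developed_diabetes"]  # 1 if diabetes appeared at a later checkup

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Rank held-out participants by predicted risk of future diabetes.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk))
```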

The researchers identified a total of 4,696 new diabetes patients (7.2%) in


New tool explores different paths the coronavirus pandemic may take

Frank Dignum, professor of computer science specialising in artificial intelligence at Umeå University in Sweden. Credit: Virginia Dignum

Umeå University in Sweden is leading a team of researchers across Europe in the development of a coronavirus simulation framework that can help decision makers experiment with and evaluate possible interventions and their combined effects in a simulated, controlled world.

“No one can predict the future but with the ASSOCC framework you can accurately explore a wide range of possible scenarios, gain an understanding of the connections between health, economy and well-being, and therefore be better prepared to take decisions on the policies to implement,” says Frank Dignum, team leader and professor at the Department of Computing Science at Umeå University in Sweden.

The pandemic is the biggest crisis of our time. In their efforts to limit the spread of the virus, governments are struggling to balance their responses to the health situation with the needs of societies and economies. The interactions are complex and contextual, and short-term steps can have large long-term consequences.

“We have developed a few initial scenarios based on theoretical models of epidemics, and economics as an illustration of the framework. We invite decision makers and researchers to apply ASSOCC to their specific questions and empirical data,” says Frank Dignum.
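
ASSOCC itself is not shown here, but the toy agent-based sketch below illustrates the underlying idea of such a framework: simulate a population of interacting agents, switch an intervention on or off, and compare the resulting epidemic curves. All parameters are invented and far simpler than the framework's actual models of health, economy and well-being.

```python
# Toy agent-based epidemic sketch, not the ASSOCC framework itself: it only
# illustrates comparing interventions in a simulated population.
import random

def simulate(days=120, n_agents=1000, contacts_per_day=8,
             p_transmit=0.05, days_infectious=10, intervention=None):
    """Minimal SIR-style agent simulation; `intervention` scales daily contacts."""
    state = ["S"] * n_agents          # S, I, or R for each agent
    timer = [0] * n_agents            # days remaining infectious
    for seed in random.sample(range(n_agents), 5):
        state[seed], timer[seed] = "I", days_infectious
    history = []
    for day in range(days):
        contacts = contacts_per_day
        if intervention:              # e.g. reduce contacts after day 20
            contacts = max(1, int(contacts_per_day * intervention(day)))
        for i in range(n_agents):
            if state[i] != "I":
                continue
            for j in random.sample(range(n_agents), contacts):
                if state[j] == "S" and random.random() < p_transmit:
                    state[j], timer[j] = "I", days_infectious
            timer[i] -= 1
            if timer[i] == 0:
                state[i] = "R"
        history.append(state.count("I"))
    return history

baseline = simulate()
distancing = simulate(intervention=lambda day: 0.3 if day > 20 else 1.0)
print("peak infections:", max(baseline), "vs", max(distancing))
```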

“We have gathered a group of experienced researchers from all over Europe to work on this project. All are contributing their time and expertise voluntarily and without any funding. It is heart-warming to see the commitment and results from everyone who has been working day and night for the past two weeks,” continues Frank Dignum.

At this moment, the team is already applying the framework to specific questions from the UK, Australia and the Netherlands. At the same time, they are working on the release


Machine learning helps doctors diagnose severity of brain tumors

Credit: CC0 Public Domain

An estimated 18,000 people in the United States will die of brain and spinal cord tumors in 2020. To help doctors differentiate between the severity of cancers in the brain, an international team of researchers led by Dr. Murat Günel, Chair of Neurosurgery at Yale School of Medicine, and Nixdorff-German Professor of Neurosurgery, built a machine learning model that uses complex mathematics to learn how various types of brain tumors look in the brain. The model is designed to “learn” from this gathered data to make predictions and help doctors diagnose the stage of brain cancers faster and more accurately.

To test their machine learning method, the team used data from 229 patients with brain tumors spanning a spectrum of malignancy, from lower-grade gliomas, which are relatively slow-growing tumors that originate from glial cells of the brain, to glioblastomas, their highly aggressive counterpart.

“Our machine learning models used to differentiate the tumor types were very accurate,” said Hang Cao, a researcher from Xiangya Hospital working with Dr. Günel and the lead author of the study, published in European Radiology.

The researchers compiled data from a public tumor magnetic resonance imaging (MRI) database called The Cancer Imaging Archive. Board-certified neuro-radiologists then identified and selected glioma cases, which the researchers used for their model.

The team found significant differences in how the cancers looked, their volumes in various regions of the brain, and their locations. When these features were taken together, the model could predict whether tumors were lower-grade gliomas or glioblastomas with a high degree of accuracy.
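
The article does not specify the model architecture or the exact imaging features, so the sketch below is only a rough illustration of the general task: classifying lower-grade gliomas versus glioblastomas from tabular, MRI-derived measurements such as volumes and locations. Feature and file names are invented.

```python
# Rough illustration only: the paper's actual features and model are not
# described in the article. Feature names are invented stand-ins for the
# kinds of volume and location measurements mentioned above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

gliomas = pd.read_csv("glioma_mri_features.csv")   # placeholder file name
features = ["enhancing_volume", "necrotic_core_volume", "edema_volume",
            "frontal_lobe", "temporal_lobe", "left_hemisphere"]
X = gliomas[features]
y = gliomas["grade"]  # 0 = lower-grade glioma, 1 = glioblastoma

# Cross-validated performance at separating lower-grade gliomas from GBMs.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("mean cross-validated AUC:", scores.mean())
```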

The timeline for using such a model in a clinical setting is not known at this time. Although it would be possible to implement now as a stand-alone evaluation, the process is not yet integrated into


New study uses robots to uncover the connections between the human mind and walking control

Using a robot to disrupt the gait cycle of participants, researchers discovered that feedforward mechanisms controlled by the cerebellum and feedback mechanisms controlled at the spinal level determine how the nervous system responds to robot-induced changes in step length. Credit: Wyss Institute at Harvard University

Many of us aren’t spending much time outside lately, but there are still many obstacles for us to navigate as we walk around: the edge of the coffee table, small children, the family dog. How do our brains adjust to changes in our walking strides? Researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Motion Analysis Laboratory at Spaulding Rehabilitation Hospital used robots to try to answer that question, and discovered that mechanisms in both the cerebellum and the spinal cord determine how the nervous system responds to robot-induced changes in step length. The new study is published in the latest issue of Scientific Reports, and points the way toward improving robot-based physical rehabilitation programs for patients.

“Our understanding of the neural mechanisms underlying locomotor adaptation is still limited. Specifically, how behavioral, functional, and neural mechanisms work in concert to achieve adaptation during locomotion has remained elusive to date,” said Paolo Bonato, Ph.D., an Associate Faculty member of the Wyss Institute and Director of the Spaulding Motion Analysis Lab who led the study. “Our goal is to create a better understanding of this process and hence develop more effective clinical interventions.”

For the study, the team used a robot to induce two opposite unilateral mechanical perturbations to participants as they were walking, which affected their step length over multiple gait cycles. Electrical signals recorded from muscles were collected and analyzed to determine how muscle synergies (the activation of a group of muscles to create a specific movement) change
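
The article does not describe the analysis pipeline in detail. Muscle synergies are, however, commonly extracted by factorizing EMG envelopes with non-negative matrix factorization (NMF), and the sketch below illustrates that general idea with stand-in data; the shapes and names are assumptions, not the study's actual processing.

```python
# General illustration of muscle-synergy extraction via non-negative matrix
# factorization (NMF); the study's actual pipeline is not described in the
# article, so the data here are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical EMG envelopes: rows = muscles, columns = time samples
# (rectified, low-pass-filtered, and normalized, so all values are >= 0).
rng = np.random.default_rng(0)
emg = np.abs(rng.normal(size=(8, 2000)))   # 8 muscles, stand-in data

# Factorize EMG into a small set of synergies:
#   emg ~= W @ H, with W = muscle weightings, H = activations over time.
n_synergies = 4
model = NMF(n_components=n_synergies, init="nndsvd", max_iter=500)
W = model.fit_transform(emg)   # shape: (8 muscles, 4 synergies)
H = model.components_          # shape: (4 synergies, 2000 time samples)

# Comparing W and H across gait cycles (e.g. before vs. during a robot-induced
# perturbation) is one common way to quantify how synergies change.
print("muscle weightings:", W.shape, "activations:", H.shape)
```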