
UC Stroke Experts Advocate for Ethical AI Integration in Research and Patient Care

TLDR: University of Cincinnati stroke specialists, led by Dr. Joseph Broderick, recently convened to discuss the evolving role of artificial intelligence in stroke research and treatment. Their discussions, summarized in the journal *Stroke*, emphasize the critical need for ‘human in the loop’ AI systems, robust data sets, and ethical considerations to ensure responsible and effective application of AI in medicine, from clinical trial design to personalized patient care.

CINCINNATI, OH – As artificial intelligence (AI) continues its pervasive growth across industries, the medical field faces a unique imperative to ensure its ethical and responsible deployment, particularly in areas as critical as stroke research and treatment. This was the central theme of discussions held by University of Cincinnati (UC) stroke experts, led by Joseph Broderick, MD, at the Stroke Treatment Academic Industry Roundtable meeting on March 28. Their insights have since been published in the journal Stroke on September 30.

Stroke physicians are already leveraging AI to enhance clinical decision-making, notably in the analysis of brain and vessel imaging. AI tools also play a role in identifying potential participants for clinical trials. However, Dr. Broderick and his colleagues underscored the necessity of designing ‘human in the loop’ systems, which mandate human input and expertise throughout the training and application of AI models. Broderick is a professor in UC’s College of Medicine, senior adviser at the UC Gardner Neuroscience Institute, and director of the NIH StrokeNet National Coordinating Center.

Dr. Broderick drew an analogy, stating, ‘Think about AI like a toddler learning to ride a bike. It is an amazing feat to ride a bike, but there are a lot of falls (mistakes) in the learning. Having an expert, and even training wheels, to help support the bike while the child is learning is helpful. Eventually children do learn to ride the bike very well.’ This highlights the need for continuous human oversight and refinement in AI’s developmental stages in medicine.

The experts differentiated between machine learning (ML) and generative AI in stroke applications. Machine learning models are trained on structured, human-curated datasets to classify or predict outcomes against a ‘ground truth,’ whereas generative AI produces new content from patterns learned in large, largely unstructured corpora. While curating that ground truth demands significant human effort, it forms the backbone of many current AI applications in stroke care.
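To make the ‘ground truth’ idea concrete, here is a minimal sketch (not the UC team's actual pipeline) of the machine-learning pattern described above: a classifier fit to a tiny, human-curated dataset where experts have already assigned the correct labels. The feature names and values are purely illustrative.

```python
# A minimal machine-learning sketch: the model learns to predict outcomes
# only because humans first curated the features and labeled the ground truth.
from sklearn.linear_model import LogisticRegression

# Hypothetical curated features: [lesion_volume_ml, hours_since_onset]
X = [[5.0, 1.0], [40.0, 6.0], [8.0, 2.0], [55.0, 8.0]]
# Human-assigned ground-truth labels: 1 = good outcome, 0 = poor outcome
y = [1, 0, 1, 0]

model = LogisticRegression().fit(X, y)
# Predict the outcome class for a new, unseen patient profile
print(model.predict([[10.0, 1.5]]))
```

The point of the sketch is the division of labor: the statistical fitting is automatic, but the ‘ground truth’ in `y` exists only because a human expert supplied it.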

Looking ahead, Dr. Broderick outlined several promising applications for AI once robust models are developed and human-validated. These include more efficient identification of suitable clinical trial participants, simplifying complex trial designs into understandable language for patients, translating vital trial information for non-English speaking individuals, and crucially, pinpointing the most effective treatment for each patient. ‘We have been talking about precision medicine for some time, but AI is a major step forward to accomplish this,’ he noted.

However, the discussions also brought to light significant challenges. Researchers must proactively ensure that data sets used for AI training are robust and diverse, encompassing data from various scanner manufacturers, institutions, and patient demographics to improve generalizability. Dr. Broderick warned against the perils of inadequate data: ‘If we use bad or limited data and human experts don’t correct the bad data or classifications, AI can produce inaccurate and wrong recommendations. My biggest concern is when AI is trained on bad data and gives answers that can harm.’
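One common safeguard for the generalizability concern raised above is to validate a model on whole institutions or scanners it never saw during training, rather than on randomly mixed patients. The sketch below illustrates this with scikit-learn's `GroupKFold`; the site names and features are invented for illustration and do not come from the roundtable.

```python
# Sketch: hold out entire sites during validation so the model is always
# tested on data from scanner/institution groups absent from training.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))                 # hypothetical imaging features
y = rng.integers(0, 2, size=12)              # hypothetical outcome labels
sites = np.repeat(["site_A", "site_B", "site_C"], 4)  # group ID per patient

for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=sites):
    # No site contributes data to both the training and the test fold
    assert set(sites[train_idx]).isdisjoint(sites[test_idx])
```

A model that performs well only when its training site appears in the test fold is exactly the kind of narrowly trained system the experts warn about.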


Beyond AI, the roundtable also explored innovative clinical trial designs, such as platform trials, which allow for the simultaneous testing of multiple research questions and the flexible integration of new questions as others are resolved. Another key focus is pragmatic trials, designed to evaluate treatment effectiveness within routine clinical care settings, moving beyond idealized research conditions.

Rhea Bhattacharya (https://blogs.edgentiq.com)
Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday lives, governance, and global equity. Her news coverage is analytical, human-centric, and always ahead of the curve. You can reach her at: [email protected]
