Special report highlights LLM cybersecurity threats in radiology

Press releases may be edited for formatting or style | May 14, 2025
OAK BROOK, Ill. — In a new special report, researchers address the cybersecurity challenges of large language models (LLMs) and the importance of implementing security measures to prevent LLMs from being used maliciously in the health care system. The special report was published today in Radiology: Artificial Intelligence, a journal of the Radiological Society of North America (RSNA).

LLMs, such as OpenAI's GPT-4 and Google's Gemini, are a type of artificial intelligence (AI) that can understand and generate human language. LLMs have rapidly emerged as powerful tools across various health care domains, revolutionizing both research and clinical practice. These models are being employed for diverse tasks such as clinical decision support, patient data analysis, drug discovery and enhancing communication between health care providers and patients by simplifying medical jargon. An increasing number of health care providers are exploring ways to integrate advanced language models into their daily workflows.

"While integration of LLMs in health care is still in its early stages, their use is expected to expand rapidly," said lead author Tugba Akinci D'Antonoli, M.D., neuroradiology fellow in the Department of Diagnostic and Interventional Neuroradiology, University Hospital Basell, Switzerland. "This is a topic that is becoming increasingly relevant and makes it crucial to start understanding the potential vulnerabilities now."

LLM integration into medical practice offers significant opportunities to improve patient care, but these opportunities are not without risk. LLMs are susceptible to security threats and can be exploited by malicious actors to extract sensitive patient data, manipulate information or alter outcomes using techniques such as data poisoning or inference attacks.

AI-inherent vulnerabilities and threats range from data poisoning, in which intentionally wrong or malicious information is inserted into a model's training data, to attacks that bypass a model's internal safety protocols so that it produces restricted, harmful or unethical output.

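To make the data-poisoning threat concrete, the toy sketch below is purely illustrative and is not taken from the report; the data, class names and classifier are invented. It shows how an attacker who silently relabels a fraction of training examples can bias a simple classifier toward missing "abnormal" cases.

```python
# Hypothetical illustration of data poisoning (not from the RSNA report):
# flipping a fraction of training labels biases a toy classifier so that
# it misses more truly "abnormal" cases. All data and names are invented.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D features for two classes, e.g. "normal" (0) vs "abnormal" (1).
X = np.vstack([rng.normal([-2, 0], 1.0, (200, 2)),
               rng.normal([+2, 0], 1.0, (200, 2))])
y = np.repeat([0, 1], 200)

def fit(X, y):
    """Toy nearest-centroid 'model': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes], axis=1)
    return np.asarray(classes)[dists.argmin(axis=1)]

def miss_rate(model):
    """Fraction of truly 'abnormal' cases the model labels 'normal'."""
    return (predict(model, X[y == 1]) == 0).mean()

# 1) Model trained on clean labels.
clean = fit(X, y)

# 2) Poisoned training set: the attacker relabels the 40% of "abnormal"
#    examples closest to the class boundary as "normal".
y_poisoned = y.copy()
borderline = np.where(y == 1)[0][np.argsort(X[y == 1, 0])[:80]]
y_poisoned[borderline] = 0
poisoned = fit(X, y_poisoned)

print(f"missed 'abnormal' cases, clean model:    {miss_rate(clean):.1%}")
print(f"missed 'abnormal' cases, poisoned model: {miss_rate(poisoned):.1%}")
```

A real radiology model and its training data are far more complex, but the mechanism is the same: corrupted training data silently shifts the model's behavior without any visible change to the software around it.
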
Non-AI-inherent vulnerabilities extend beyond the model and typically involve the ecosystem in which LLMs are deployed. Attacks can lead to severe data breaches, data manipulation or loss and service disruptions. In radiology, an attacker could manipulate image analysis results, access sensitive patient data or even install arbitrary software.

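As one illustration of the kind of ecosystem-level safeguard such attacks motivate, the hypothetical sketch below (file names and digests are placeholders, not from the report) checks a deployed model artifact against a known-good SHA-256 digest before loading it, so a silently swapped or tampered file is rejected.

```python
# Hypothetical ecosystem-level safeguard (not from the RSNA report):
# verify a deployed model file against a pinned SHA-256 digest before use,
# so tampered or swapped artifacts are refused. Paths are placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_if_trusted(path: Path, expected_sha256: str) -> bytes:
    """Refuse to load a model artifact whose digest does not match."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(f"integrity check failed for {path}: {actual}")
    return path.read_bytes()

if __name__ == "__main__":
    # Placeholder artifact and digest for demonstration only.
    model_path = Path("segmentation_model.onnx")
    model_path.write_bytes(b"dummy model weights")
    expected = sha256_of(model_path)  # in practice, pinned at release time
    weights = load_if_trusted(model_path, expected)
    print(f"loaded {len(weights)} bytes after integrity check")
```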