Beyond HIPAA: National Security and Data Integrity Take Center Stage in Healthcare AI

At today's CTeL Spring Digital Health Summit, a keynote panel warned that as artificial intelligence becomes woven into the fabric of healthcare, the industry's traditional focus on patient privacy is no longer sufficient. The new, more pressing challenges are national security threats and, above all, the integrity of the data that underpins every AI-driven decision.

The panel, "National Security Requirements: AI Integration into Digital Health Platforms," featured insights from Michael McLaughin, Esq., a cybersecurity and government relations expert, and Andrew Taylor, MD, from the University of Virginia. They painted a picture of a healthcare landscape rapidly adopting powerful AI tools while struggling with the immense security and governance challenges they create.

For anyone in the digital health space, the message was clear: the risks have evolved, and our strategies must evolve faster.

The New Frontier of Risk: From Data Vacuums to Data Poisoning

Michael McLaughin began by drawing a startling parallel between popular consumer apps and the potential vulnerabilities in digital health. Using the example of TikTok, he explained that the national security concern isn't just about dance videos; it's about a foreign adversary's ability to "vacuum up data" from 160 million Americans. More than just a collection tool, such platforms can act as a "megaphone" to target specific demographics with propaganda, potentially influencing everything from elections to public health crises.

The danger for healthcare arises when the data underpinning our health platforms—from wearables to large language models (LLMs)—is collected, shared, or processed by technology providers with ties to adversarial nation-states such as China. Chinese law compels companies to share information with the government upon request, with no warrant or subpoena required. If the data integrity of a health AI model is compromised, the consequences could be devastating for individual patient safety and public trust.

The Ground Reality: Healthcare's "Wild West" of AI Adoption

Dr. Andrew Taylor provided a sobering view from inside healthcare organizations, which he described as "really struggling" to keep up.

Key challenges include:

  • The Infrastructure Gap: Healthcare organizations, often operating in the red, have not invested in the expensive, high-powered computing infrastructure (like GPUs) needed to run advanced AI models internally.

  • The Unsanctioned Workaround: Without internal resources, clinicians who recognize the "profound aspects of these tools" for tasks like diagnosis turn instead to public-facing models like ChatGPT. They do this despite organizational policies, potentially entering sensitive information into platforms that offer no guarantee of privacy; as the panel noted, there is currently "no reasonable expectation of privacy when you're dealing with an AI model".

  • The Rise of "Shadow AI": Echoing the long-standing problem of "Shadow IT," McLaughin warned of "Shadow AI"—well-meaning individuals within an organization using company data to develop their own AI models so they can do their jobs more efficiently, but without proper oversight. This creates significant compliance and privacy issues, as data may be used for purposes for which it was not originally collected.

A Paradigm Shift: From Confidentiality to Integrity

Perhaps the most crucial insight from the panel was the need to shift focus from data confidentiality to data integrity.

While HIPAA primarily centers on preventing the unauthorized disclosure of protected health information (confidentiality), it doesn't adequately address the threat of data being subtly altered or poisoned. McLaughin explained the cybersecurity "CIA Triad": Confidentiality, Integrity, and Availability. He argued that with AI, integrity is the new battleground.

Good AI is predicated on good data. If a bad actor—be it a nation-state or a ransomware group—poisons the training data for a diagnostic AI, it could have "profound and cascading impacts on the delivery of healthcare". This problem is magnified by "black box AI," where models are built upon other models, making it nearly impossible to determine how a decision was made or where the data corruption occurred.
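
To make the integrity point concrete, here is a minimal sketch (not something presented by the panel) of one simple control: a SHA-256 manifest of a file-based training corpus, captured at curation time and re-verified before any retraining run, so silently altered, added, or missing files surface before they can poison a model. The directory layout, file names, and manifest format are illustrative assumptions.

```python
"""
Minimal sketch: detecting tampering in a training dataset by comparing
SHA-256 checksums against a known-good manifest. Paths and the manifest
format are illustrative assumptions, not a prescribed standard.
"""
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large imaging files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(data_dir: Path) -> dict[str, str]:
    """Record a checksum for every file in the training corpus."""
    return {
        str(p.relative_to(data_dir)): sha256_of(p)
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }


def verify_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the files whose contents changed, appeared, or went missing."""
    expected = json.loads(manifest_path.read_text())
    actual = build_manifest(data_dir)
    changed = {k for k in expected if actual.get(k) != expected[k]}
    added = set(actual) - set(expected)
    return sorted(changed | added)


if __name__ == "__main__":
    corpus = Path("training_data")      # hypothetical dataset location
    manifest = Path("manifest.json")    # checksums captured at curation time
    if not manifest.exists():
        manifest.write_text(json.dumps(build_manifest(corpus), indent=2))
    else:
        suspects = verify_manifest(corpus, manifest)
        if suspects:
            print("Integrity check failed for:", *suspects, sep="\n  ")
```

A checksum manifest only proves the data has not changed since it was blessed; it says nothing about whether the blessed data was clean to begin with, which is why provenance and curation controls still matter.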

The Governance Gap: A Lack of Monitoring and Oversight

The panelists agreed that the systems for governing and monitoring AI are dangerously immature.

  • No Central Catalog: Most healthcare organizations don't have a good catalog of all the AI models running in their systems. Different departments like radiology and emergency medicine acquire and use their own sets of models, leading to a lack of standardization in how they are evaluated and monitored.

  • Inadequate Monitoring: Organizations are failing to adequately monitor AI models over time for performance changes or "data drift". They often rely on the vendors to do the monitoring, leaving them blind to whether a model's accuracy is degrading or has been subtly changed. (A minimal in-house drift check is sketched after this list.)

  • Regulatory Lag: The U.S. is falling behind in regulation. The EU recently enacted the EU AI Act, which classifies medical devices as "high-risk" and imposes significant requirements for training, policies, and awareness. Meanwhile, the U.S. has seen deregulation in this area, creating uncertainty for both innovation and privacy.

  • Need for New Roles: Just as the Chief Information Security Officer became a standard role, organizations now need to designate individuals responsible for the implementation and oversight of AI systems, potentially creating roles like a Chief AI Officer.
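
As a minimal illustration of what in-house monitoring can look like (a sketch, not anything prescribed by the panelists), the snippet below computes the Population Stability Index for a single model input, a common first-pass signal of data drift. The feature, threshold, and data are illustrative assumptions.

```python
"""
Minimal sketch: flagging input "data drift" for a deployed model by comparing
the live distribution of a feature against its training baseline using the
Population Stability Index (PSI). Feature, threshold, and data are illustrative.
"""
import numpy as np


def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI over quantile bins of the baseline; values above ~0.2 are often treated as drift."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clip live values into the baseline range so nothing falls outside the bins.
    current = np.clip(current, edges[0], edges[-1])
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the fractions to avoid log(0) and division by zero in empty bins.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_ages = rng.normal(55, 12, 50_000)   # baseline captured at validation time
    live_ages = rng.normal(62, 14, 5_000)        # this month's incoming patients
    psi = population_stability_index(training_ages, live_ages)
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> stable")
```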

Key Takeaways for Digital Health Leaders

For those who couldn't attend, here are the essential takeaways that should inform your work immediately:

  • Rethink Your Risk Framework: Your risk analysis must now extend beyond HIPAA compliance to include national security threats and, most importantly, threats to data integrity.

  • Address "Shadow AI": Assume your employees are using public LLMs and creating their own AI tools. You must develop clear policies and provide sanctioned, secure alternatives to reduce administrative burdens and improve efficiency safely.

  • Invest in Governance and Monitoring: You cannot protect what you cannot see. Prioritize creating a comprehensive catalog of all AI models in use across your organization (a minimal catalog schema is sketched after this list), and develop robust, standardized systems to monitor model performance continuously rather than relying on vendors.

  • Secure Your Training Data: Treat your AI training data with the same security rigor as your live network. This data is the foundation of your AI capabilities; if it's compromised, your entire AI ecosystem is at risk.

  • Establish Clear AI Oversight: Designate a specific person or team responsible for AI governance, even if it’s not yet a C-suite role. This is essential for managing the complex technical, security, and compliance challenges ahead.
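
As a starting point for the catalog recommendation above, here is a minimal sketch of a structured model inventory. The field names, schema, and example entry are illustrative assumptions, not an established standard.

```python
"""
Minimal sketch: a structured inventory of deployed AI models, the kind of
central catalog the panel argued most organizations lack. Field names and
the example entry are illustrative assumptions.
"""
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class ModelRecord:
    name: str                        # e.g. "sepsis-risk-v3"
    owner: str                       # accountable clinical or data-science lead
    vendor: str                      # "internal" for home-grown models
    department: str                  # radiology, emergency medicine, ...
    intended_use: str
    deployed_on: date
    training_data_source: str        # provenance of the training data
    last_performance_review: date | None = None
    monitoring_plan: str = "none documented"
    notes: list[str] = field(default_factory=list)


def to_json(records: list[ModelRecord]) -> str:
    """Serialize the catalog for governance review or reporting."""
    return json.dumps([asdict(r) for r in records], default=str, indent=2)


if __name__ == "__main__":
    catalog = [
        ModelRecord(
            name="ed-triage-llm-assist",
            owner="Dr. Example (hypothetical)",
            vendor="ExampleVendor Inc. (hypothetical)",
            department="emergency medicine",
            intended_use="draft triage notes for clinician review",
            deployed_on=date(2024, 11, 1),
            training_data_source="vendor-curated corpus (details unknown)",
        )
    ]
    print(to_json(catalog))
```

Even a spreadsheet with these fields is a step up from no inventory; the point is that every deployed model has a named owner, a documented data source, and a monitoring plan someone can audit.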

The conversation made it evident that AI in healthcare is not just another IT implementation. It's a fundamental shift that touches everything from clinical diagnosis to patient safety and national security. Getting it right requires a new way of thinking—one that is proactive, security-focused, and deeply aware of the integrity of the data that fuels it all.

Thank you to Nixon Peabody for sponsoring the 2025 CTeL Digital Health Summit.
