Lawmaker Quizzes Google on 'Guardrails' for AI in Healthcare

Sen. Mark Warner Asks Google CEO to Address AI Trust, Privacy, Ethical Practices
Google's Med-PaLM 2 large language model tool is being tested by a handful of healthcare entities, including the Mayo Clinic. (Image: Google)

Citing concerns including patient confidentiality and potential information inaccuracies, Sen. Mark Warner, D-Va., on Tuesday sent a letter quizzing Google CEO Sundar Pichai about how the tech giant is applying privacy, trust and ethical "guardrails" around the development and use of its generative AI product, Med-PaLM 2, in healthcare settings.

Google in April said it had begun rolling out Med-PaLM 2, a large language model designed to answer medical questions, to a "select group" of Google Cloud customers for limited testing.

Among those customers is Arlington, Virginia-based VHC Health, which is a member of the Mayo Clinic Care Network, according to Warner, chairman of the Senate Select Committee on Intelligence. Warner told Pichai he wants more "clarity" about how Med-PaLM 2 is being tested in healthcare settings.

"While artificial intelligence undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors," Warner wrote.

Sen. Mark Warner wants Google to explain how patient privacy is protected in the use of the company's Med-PaLM 2 AI tool by medical entities. (Image: U.S. Senate)

The lawmaker voiced concerns in 2019 that Google was allegedly "skirting health privacy laws" through "secretive partnerships" with a handful of large hospital systems to train diagnostic AI models using patients' sensitive health data without their knowledge or consent (see: Privacy Analysis: Google Accesses Patient Data on Millions).

"The race to establish market share is readily apparent and especially concerning in the healthcare industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in healthcare institutions in recent years, and the sensitivity of health information," Warner wrote to Google on Tuesday.

A Dozen Questions

Warner is urging Pichai to answer a dozen questions relating to Google's practices in the development and use of Med-PaLM 2 in medical settings.

The questions cover topics including how Google is ensuring the accuracy and timeliness of LLM training data to avoid misdiagnosis and other medical errors, safeguarding the privacy of patients' protected health information, and applying ethical "guardrails" to prevent misuse or inappropriate use of Med-PaLM 2.

"It is clear more work is needed to improve this technology, as well as to ensure the healthcare community develops appropriate standards governing the deployment and use of AI," Warner wrote.

A Google spokesperson, in a statement to Information Security Media Group, disputed Warner's description of Med-PaLM 2 in his letter as a "chatbot."

"Med-PaLM 2 is not a chatbot; it is a fine-tuned version of our large language model PaLM 2, and designed to encode medical knowledge," she said.

"We believe AI has the potential to transform healthcare and medicine and are committed to exploring with safety, equity, evidence and privacy at the core. As stated in April, we're making Med-PaLM 2 available to a select group of healthcare organizations for limited testing, to explore use cases and share feedback - a critical step in building safe and helpful technology. These customers retain control over their data."

The Mayo Clinic did not immediately respond to Information Security Media Group's request for comment on Warner's concerns.

Last month, Google was among seven technology companies that the Biden administration said had voluntarily pledged to ensure that the development of their AI products would be safe, secure and trustworthy (see: 7 Tech Firms Pledge to White House to Make AI Safe, Secure).

Other big tech firms making that commitment included Amazon, Anthropic, Inflection, Meta, Microsoft and OpenAI.

In July, the Department of Health and Human Services' Health Sector Cybersecurity Coordination Center issued a threat brief spotlighting the use of AI in healthcare, including for cybersecurity defense purposes, as well as for malicious attacks by threat actors.

"Moving forward, expect a cat-and-mouse game," HHS HC3 wrote. "As AI capabilities enhance offensive efforts, they'll do the same for defense; staying on top of the latest capabilities will be crucial."


About the Author

Marianne Kolbasuk McGee

Executive Editor, HealthcareInfoSecurity, ISMG

McGee is executive editor of Information Security Media Group's HealthcareInfoSecurity.com media site. She has about 30 years of IT journalism experience, with a focus on healthcare information technology issues for more than 15 years. Before joining ISMG in 2012, she was a reporter at InformationWeek magazine and news site and played a lead role in the launch of InformationWeek's healthcare IT media site.