One Expert’s Perspective on AI and Regulation

Jan. 12, 2024
Laura Stark, Ph.D., a Vanderbilt University professor, examines the prospects for AI regulation in a New England Journal of Medicine op-ed

Laura Stark, Ph.D., a professor in the Department of Medicine, Health, and Society at Vanderbilt University, has written a detailed, nuanced op-ed in the Perspective section of The New England Journal of Medicine, entitled “Medicine’s Lessons for AI Regulation.” The op-ed, published last month online and in print, examines the efforts several decades ago to regulate the treatment of human subjects in medical research, and the implications of that history for artificial intelligence (AI) development in the present moment.

Indeed, Professor Stark notes, next year marks the fiftieth anniversary of the National Research Act, and she argues that that law’s history suggests the path AI regulation may follow. She writes: “This is far from the first time the United States has written rules to safeguard the public as science reached new capacities. Next year marks the 50th anniversary of the National Research Act, which created rules for the treatment of human subjects in medicine. Like AI regulations, rules for the treatment of human subjects were put in place swiftly during a time of intense public scrutiny of unethical uses of science. In 1972, the racial injustices of the Tuskegee Study of Untreated Syphilis were revealed in the U.S. mass media. Although this unethical research had been under way for four decades, with results published in scientific journals, Tuskegee’s exposure in the popular press galvanized lawmakers to pass legislation on research with human subjects that had been in the works for years. Moreover, like the use of AI today, human-subjects research in the 1970s was a long-standing practice that held new potential, had innovative applications, received unprecedented levels of funding, and was taking place on a new, larger scale. And like the use of AI today, research using human subjects in the 1970s was both exciting and risky, with many effects unknown — and unknowable.”

Stark notes that “Rules governing the treatment of human subjects have traveled a bumpy road since they were first passed in 1974. Their history holds insights for AI regulation that aims for efficiency, flexibility, and greater justice.” She narrates in significant detail how the governance of clinical trials grew increasingly complicated over the ensuing decades. In that context, referring to Henry Beecher, the Harvard anesthesiologist who exposed unethical studies in the 1960s while arguing that research ethics should rest on professional judgment rather than government rules, she writes, “Debates over AI have raised similar issues about the appropriate relationship between government and professional authority in the regulation of science. In July 2023, leaders of seven top AI companies made voluntary commitments to support safety, transparency, and antidiscrimination in AI. Some leaders in the field also urged the U.S. government to enact rules for AI, with the stipulation that AI companies set the terms of regulation. AI leaders’ efforts to create and guide their own oversight mechanisms can be assessed in a similar light to Beecher’s campaign for professional autonomy. Both efforts raise questions about enforcement, the need for hard accountability, and the merits of public values relative to expert judgement in a democracy.”

Stark then turns to the question of data, writing, “Public concern about AI has emphasized applications, such as the use of medical chatbots, which has drawn attention to effects on people as users of AI tools. But with increases in social-media content, use of personal electronic devices, and techniques such as data scraping, AI systems also have ample access to benchmark and training data generated by people in the course of their everyday lives. The history of human-subjects research shows that rules for AI would do well to prioritize protections and clarify rights regarding the data that underlie generative tools — in addition to protecting people from harmful effects of AI applications.” And she concludes that “[T]he history of human-subjects regulation shows that for any fast-moving area of science, anticipating and planning for rule revision is necessary. AI’s emerging properties and new use cases warrant clear, built-in mechanisms to allow speedy regulatory updates made with meaningful public input to support science, medicine, and social justice.”
