What Clinical Groups Told HHS They Need to Accelerate AI Adoption

MGMA stresses the importance of a transparency framework to provide visibility into model attributes, data sources, and validation methods
Feb. 28, 2026
7 min read

Key Highlights

  • Recommendations include establishing guardrails for AI decision-making, ongoing bias monitoring, and aligning reimbursement models with measurable care improvements.
  • Interoperability enhancements, such as FHIR API standards and provider connectivity, are vital for scalable, safe AI integration into clinical workflows.
  • Liability ambiguity and workflow misalignment are key barriers; clear responsibility frameworks and user-centered AI design are essential for adoption.

Healthcare advocacy groups have weighed in with responses to the Department of Health & Human Services’ Request for Information on “Accelerating the Adoption and Use of Artificial Intelligence as Part of Clinical Care.” Topics raised include data fragmentation, regulatory and liability ambiguity, governance challenges and workflow integration.

In its letter, the Medical Group Management Association (MGMA) said it supports HHS’ goal of establishing federal policy for AI that is predictable, proportionate to risk, and supportive of innovation, noting that a fragmented or inconsistent federal approach to AI oversight could create confusion, duplicative requirements, and additional administrative burden for practices.

The organization expressed concern about recent deregulatory proposals in the HTI-5 Proposed Rule, saying they would affect transparency requirements that enable medical practices to access consistent information about how AI-enabled decision support tools are developed, validated, and intended to be used.

MGMA said HHS should ensure comparable transparency mechanisms remain in place if these requirements are removed or modified as part of future policy. “The lack of a clearly articulated transparency framework could reduce visibility into model attributes, data sources, and validation methods and potentially expose practices to greater direct evaluation and liability burden,” the organization wrote. “At a time when governance and trust in AI are critical, certification-based transparency as a part of the HHS Office of the National Coordinator (ONC) Health IT Certification Program remains important for informed adoption and safe implementation.”

MGMA stressed that transparency is also critical when it comes to payers. It contends that HHS should require payers to be transparent about their use of AI for utilization management, claims processing, and coverage limitations, and should ensure that AI systems used by payers are evidence-based, do not exacerbate administrative burden for medical groups, and do not interfere with physicians’ clinical decision-making.

In terms of challenges facing medical groups, MGMA highlighted organizational readiness and governance hurdles, noting that AI governance readiness remains uneven. A Jan. 20, 2026, MGMA Stat poll (n=328) found that 42% of medical group leaders report having AI governance structures or formal policies either in place (20%) or in development (22%), while 56% report having none and 2% are unsure.

Organizational readiness (including governance capacity, workforce training, infrastructure, and financial resources) and practical limitations can shape decisions about AI tools and their ability to deliver meaningful value in practice settings, MGMA wrote.

In its response, the American Hospital Association wrote that certain statutes and regulations in the healthcare ecosystem, such as the patchwork of state privacy laws and 42 CFR Part 2, have indirectly impacted hospitals and health systems’ ability to develop and deploy certain AI tools.

AHA is encouraging the administration to work with Congress to enact a full HIPAA preemption provision, noting that “varying state laws only add costs and create complications for hospitals and health systems.” AHA also urges the administration to work with Congress to remove remaining requirements under 42 CFR Part 2 governing the sharing of substance use disorder data, which it says hinder care teams’ access to important health information.

AHA also raises the issue of the lack of clarity surrounding liability as a significant barrier to provider adoption of AI tools. “AI systems are often developed and deployed with inputs from a variety of stakeholders, where providers are just one of many sources,” its letter says. “Also, certain algorithm elements may be treated by developers as proprietary, which makes it challenging for hospitals and other AI users to identify model flaws, discrepancies between training data and real-world applications, or any model drift over time.”

While many of these issues may intersect with case law and state-level malpractice statutes, AHA notes that HHS can “play a vital role in supporting reasonable standards for developer transparency and post-deployment monitoring. Some of these issues underscore the importance of policies like post-deployment standards to ensure the ongoing integrity of tools and transparency standards for health IT certification.” As the agency continues to explore novel liability challenges, AHA urges HHS to provide formal mechanisms for provider input.

The Alliance of Community Health Plans (ACHP) recommends that HHS establish clear guardrails for AI-enabled utilization management decisions, including requirements for:
• Human review of adverse determinations;
• Explainability of decision logic to providers and patients;
• Ongoing monitoring for bias, error rates and disparate impact (a minimal monitoring sketch follows this list).
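
As a rough illustration of what such monitoring could look like operationally, the Python sketch below computes per-group denial rates, overturn rates and a disparate impact ratio from logged determinations. The record fields, the use of overturns on human review as an error proxy, and the four-fifths threshold are illustrative assumptions, not anything specified by ACHP or HHS.

```python
# Illustrative sketch only: field names, the overturn-on-review error proxy, and the
# four-fifths (0.8) flag threshold are assumptions, not ACHP or HHS requirements.
from collections import defaultdict

def monitor_determinations(records: list[dict]) -> dict[str, dict[str, float]]:
    """Compute per-group denial and overturn rates from logged AI-assisted
    utilization-management determinations.

    Each record is expected to look like:
    {"group": "region_a", "ai_denied": True, "overturned_on_review": False}
    """
    counts: dict[str, dict[str, int]] = defaultdict(lambda: {"n": 0, "denied": 0, "overturned": 0})
    for rec in records:
        c = counts[rec["group"]]
        c["n"] += 1
        c["denied"] += int(rec["ai_denied"])
        c["overturned"] += int(rec["ai_denied"] and rec["overturned_on_review"])

    report = {}
    for group, c in counts.items():
        report[group] = {
            "denial_rate": c["denied"] / c["n"],
            # Overturns on human review serve here as a rough proxy for AI error rate.
            "overturn_rate": c["overturned"] / max(c["denied"], 1),
        }
    return report

def disparate_impact_ratio(report: dict[str, dict[str, float]]) -> float:
    """Ratio of the lowest to the highest group approval rate; values below
    roughly 0.8 are a conventional signal that the tool warrants closer review."""
    approval_rates = [1.0 - stats["denial_rate"] for stats in report.values()]
    return min(approval_rates) / max(approval_rates)
```
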
ACHP also recommends that HHS align prior authorization modernization efforts with existing interoperability initiatives, including FHIR-based Prior Authorization APIs, to ensure AI tools can operate within standardized, transparent workflows.

Outcomes-based reimbursement models

ACHP also supports advancing outcomes-based reimbursement models that align payment for AI-enabled tools with measurable improvements in care quality, access and total cost of care. To that end, ACHP also recommends HHS:
• Encourage the use of performance-based payment arrangements such as shared savings, performance guarantees or risk corridors.
• Promote standardized performance metrics for AI tools (e.g., impact on avoidable utilization, clinician efficiency, patient adherence or health equity outcomes) to enable consistent evaluation across payers and providers.
• Modify existing medical code sets, ensure EHR capabilities can capture when AI is used in medical care, and adjust payments accordingly.

ACHP recommends HHS establish targeted safe harbors for early adoption of AI tools with demonstrated potential value. Examples include:
• Temporary regulatory or payment flexibilities for pilot programs operating under defined guardrails, transparency requirements and monitoring protocols.
• Protection from retrospective payment recoupment when AI tools are deployed in good faith and consistent with published federal guidance.
• Explicit encouragement of provider-plan collaboration to test AI tools within alternative payment models without triggering fraud and abuse concerns.

Several of the stakeholders joined ACHP in mentioning that HHS could support health AI innovation by addressing interoperability challenges that hinder provider connectivity and widespread health IT adoption. ACHP said it supports efforts to improve data standards and interoperability, recognizing access to accurate, robust data is essential for scaling AI tools safely and effectively. Ensuring data quality, volume and hygiene is a critical element for successful health AI implementation, given the potential for AI models to ingest incomplete, inaccurate or poor-quality healthcare data. Additionally, ACHP member companies recognize the need for a sufficient health IT infrastructure to support the data exchange required to enable value-based, technology-enabled care. 

Among ACHP’s interoperability recommendations to HHS are to improve provider connectivity to FHIR-based APIs, including Prior Authorization and Provider Access APIs, by:
• Creating a national digital endpoint directory that enables reliable discovery of payer and provider endpoints (a minimal discovery sketch follows this list).
• Establishing clearer EHR workflow standards so AI-enabled data exchange is embedded into clinical operations rather than treated as an add-on.
• Advancing provider-focused adoption requirements that prioritize usability and reduce implementation friction.
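
As a rough illustration of what directory-based discovery could look like, the Python sketch below queries a hypothetical national endpoint directory for a payer’s active FHIR Endpoint resources, then checks each server’s CapabilityStatement for the Claim and ClaimResponse resources that FHIR-based prior authorization exchanges build on. The directory URL and payer name are placeholders, not references to any existing service.

```python
# Illustrative sketch only: the directory base URL and payer name are hypothetical
# placeholders, and no specific HHS or ACHP specification is implied.
import requests

DIRECTORY_BASE = "https://directory.example.org/fhir"  # hypothetical national endpoint directory
PAYER_NAME = "Example Health Plan"                      # hypothetical payer organization

def find_payer_endpoints(payer_name: str) -> list[str]:
    """Search the directory's FHIR Endpoint resources for a payer's active endpoints
    and return their connection URLs."""
    resp = requests.get(
        f"{DIRECTORY_BASE}/Endpoint",
        params={"organization.name": payer_name, "status": "active"},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [
        entry["resource"]["address"]
        for entry in bundle.get("entry", [])
        if "address" in entry.get("resource", {})
    ]

def supports_prior_auth_resources(fhir_base: str) -> bool:
    """Fetch a server's CapabilityStatement and check for the Claim and ClaimResponse
    resources that FHIR-based prior authorization exchanges build on."""
    resp = requests.get(
        f"{fhir_base}/metadata",
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    capability = resp.json()
    resource_types = {
        res.get("type")
        for rest in capability.get("rest", [])
        for res in rest.get("resource", [])
    }
    return {"Claim", "ClaimResponse"} <= resource_types

if __name__ == "__main__":
    for url in find_payer_endpoints(PAYER_NAME):
        print(url, "prior auth resources supported:", supports_prior_auth_resources(url))
```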

Interconnected barriers

The Society for Cardiovascular Angiography and Interventions (SCAI) identified several interconnected barriers that impede both the development and responsible deployment of AI in clinical care, including data fragmentation, regulatory and liability ambiguity, and workflow integration.

“The current landscape does not clearly delineate responsibility when AI influences a clinical decision that results in patient harm. The professional accountability framework in medicine is well established: the licensed clinician is responsible. But when an AI tool contributes to a decision, and the clinician had no role in designing, validating, or choosing that tool, the assignment of responsibility becomes unclear,” SCAI wrote. “Technology developers operate under liability protections that were not designed for clinical consequences. This ambiguity discourages adoption by the very professionals who would need to use these tools.”

SCAI also pointed to algorithmic opacity: “Most clinical AI operates as an associative model whose internal logic is not transparent to clinicians. Licensed professionals are trained to reason from evidence, document their rationale, and defend their decisions. Acting on opaque algorithmic recommendations is fundamentally at odds with how professional clinical judgment is exercised and evaluated.”

Another issue is workflow integration. AI systems are often designed without adequate understanding of clinical workflows, SCAI wrote. “Tools that generate excessive alerts, require parallel documentation, or interrupt established care pathways create friction rather than efficiency. In high-acuity settings like the catheterization laboratory, poorly integrated AI could compromise rather than enhance patient safety.”

 

About the Author

David Raths

David Raths is a Contributing Senior Editor for Healthcare Innovation, focusing on clinical informatics, learning health systems and value-based care transformation. He has been interviewing health system CIOs and CMIOs since 2006.

