February 23, 2026
U.S. Department of Health and Human Services
Assistant Secretary for Technology Policy and the Office of the National Coordinator for Health Information Technology, Office of the Secretary
Attention: Request for Information: HHS Health Sector AI RFI
Dear Secretary Kennedy:
On behalf of the Society of Hospital Medicine (SHM), we appreciate the opportunity to provide comments to the U.S. Department of Health and Human Services (HHS) regarding the responsible development, regulation, and adoption of artificial intelligence (AI) in clinical care.
SHM represents the nation’s more than 50,000 hospitalists, who specialize in managing the care of hospitalized patients and are often at the forefront of healthcare innovation and technological advancement. As clinicians who have adapted to and led during times of rapid change, hospitalists bring unique expertise in evaluating and implementing both new clinical practices and new technologies in hospital settings. For example, hospitalists were on the front lines caring for hospitalized patients with a novel disease during the COVID-19 pandemic, and they have long been leaders in the implementation of, and superusers of, inpatient electronic health records (EHRs) following the 2009 HITECH Act.
The implementation and use of AI in healthcare is growing rapidly. We are concerned that the policy and regulatory conversations have not kept up with the pace of change, creating risks for patients and clinicians alike. AI should improve clinician workflow and patient care at the individual level with the goal of improving overall community health. Importantly, physicians must maintain ultimate authority and accountability as experts and decision-makers. Without these goals, there will be resistance to adoption. We applaud this RFI as an important step in gathering perspectives on AI use in healthcare and the policy sphere surrounding it.
We offer the following framework and recommendations in response to the questions posed by HHS.
Potential Framework for Evaluating AI – Risk-Based Categorization and Evaluation
HHS should carefully evaluate AI tools based on their risk level to patients and calibrate regulatory oversight proportionate to that risk. AI tools in hospital medicine exist on the following spectrum of clinical risk, though these examples could be generalized more broadly depending on the care setting.
- Low Risk – Non-Clinical, Non-Patient-Facing Administrative Tasks:
Tools used for creating reports, generating presentations, writing business letters, and other administrative functions. These require minimal clinical scrutiny as they do not directly impact patient care.
- Moderate Risk – Clinical Documentation and Decision Support:
Tools that are assistive in nature through summarization or initial drafting. Specific examples include, but are not necessarily limited to, the following:
- Generative AI for summarizing medical records or drafting discharge summaries.
- AI-enhanced evidence review tools.
- Predictive triage and throughput support systems.
- Ambient documentation technologies.
Such tools, including ambient AI documentation tools and evidence-based search platforms like OpenEvidence®, supplement but do not supplant expert human clinical judgment. Although helpful, these uses require a ‘human-in-the-loop’ to review output for accuracy and guard against AI hallucinations.
- High Risk – Clinical Decision Support and Alerts:
Tools embedded in EHRs that generate alerts, recommend diagnostic or therapeutic interventions, or provide risk stratification. These require the highest level of vetting, individual research, and validation within each unique clinical context. Algorithms must be trained specifically for each clinical setting and cannot be readily transferred across institutions or patient populations without retraining, as AI trained on non-representative datasets can perpetuate or worsen racial, ethnic, and socioeconomic disparities in care. AI at this risk level must also be accompanied by ongoing post-deployment monitoring and operate with clear accountability structures in place. Ongoing operations should also include a transparency requirement: clinicians need to understand why an AI tool is making a recommendation in order to exercise meaningful oversight.
Focus on Core Principles for AI in Healthcare
The overall mission of AI implementation should be to improve patient care and clinician wellbeing. This should be clearly stated and reinforced in all regulatory and implementation efforts.
Human-Centered Care: AI may offer potential diagnoses or medical therapies, but it is not physically present at the bedside. Further, it cannot exercise clinical judgment considering a patient’s full psychosocial and clinical needs. Technology should reduce documentation burden to allow physicians and the care team to build deeper patient connections, not increase clinical workload or reduce reimbursement based on efficiency gains.
Physician Authority and Autonomy: Physicians must maintain ultimate authority and control over all clinical decision-making. AI tools must be viewed as a supplement to human expertise, not a replacement for it.
Address Privacy and Security Concerns
The adoption and use of AI presents patient privacy and security challenges, some new and some longstanding. Regulations and laws may need to be updated to reflect the current reality of patient information and privacy. We urge the administration to carefully consider how to maintain and strengthen patient protections to prevent the misuse and abuse of patients’ private health information.
Informed Consent:
- Patients whose data will be used by AI should be informed with a clear process for opt-out.
- Current workflow challenges include obtaining consent before recording patient encounters. Consent requirements risk adding complexity to clinical workflows and redistributing, or even increasing, administrative burden without producing meaningful reductions elsewhere.
Data Storage Protections:
Video and audio streaming AI devices generate vast amounts of potentially identifiable data. What happens to those recordings after the healthcare interaction takes place is highly variable. Healthcare entities need pragmatic regulation governing use and storage of audio/video recordings to ensure consistency across settings and systems. Protections will be required to guard against disclosure or theft.
Patient and Clinician Privacy Guardrails:
Both patient privacy and clinician privacy must be protected. As such, concerns around ‘AI surveillance’ must be addressed; for example, ambient AI could be used to evaluate clinician performance or to determine compensation without appropriate safeguards.
Legal and Liability Issues
In the event something goes wrong, fault allocation among AI developers, healthcare providers, and institutions is currently unclear. Aligned with the risk categorization outlined above, as the risks to patients increase, so does the importance of addressing legal liability. There are significant concerns about AI tools being integrated into clinical workflows without ample research, policy safeguards, or ongoing monitoring. For example, a discharge summary tool that begins to hallucinate false information could dictate future patient care, and an incorrectly trained or degraded algorithm could produce inaccurate results leading to incorrect treatment. AI tools should have clear guidelines for use and liability, similar to other healthcare technologies.
Barriers to AI Innovation and Adoption
Several barriers to innovation and adoption have become evident over the past few years as AI tools have evolved and their use has become more common. Some of these barriers are structural to the healthcare system, while others have more to do with culture and trust in the tools themselves.
- Weak interoperability for existing tech systems in healthcare, including between EHRs and other databases. Without complete data, AI tools will face the same challenges as human care teams in trying to get as clear a picture of the patient’s needs as possible.
- No predictable reimbursement pathways for AI-enabled workflows. The existing reimbursement system does not compensate for a significant amount of work done by clinicians. As AI creates efficiencies, this “time back” for clinicians should prioritize more face-to-face and bedside time with patients, not financial cuts.
- High integration costs for EHR customization and workflow redesign.
- Misaligned incentives: profits go to vendors and savings accrue to payers while costs of implementation and ongoing operation fall on clinicians and facilities. These financial constraints will continue to slow adoption.
- AI tools are not currently integrated into existing workflows, creating an additional time burden for their use.
- Alert fatigue. This phenomenon has increased as more technology ‘assisted’ tools have been introduced into the healthcare system and clinicians are bombarded with alarms, pop-ups, alerts, and other signals. AI has the potential to expand the number of alerts dramatically.
While AI shows significant promise for improving healthcare systems, operations, and even clinical care, there is reasonable skepticism among clinicians. While humans can make mistakes (“To Err is Human”), mechanisms for accountability and a culture of learning have been built over decades. AI does not have this demonstrated track record of learning and adjusting, nor established systems of accountability. In addition, there is reasonable concern that gains from AI will be dramatically offset by cuts to the human workforce, undermining the necessary human element of delivering patient care.
Conclusion
The Society of Hospital Medicine recognizes the tremendous potential of AI to support hospitalists and improve patient care. However, successful integration requires addressing data quality and interoperability challenges, clarifying regulatory frameworks, establishing clear liability structures, protecting patient and clinician privacy, and ensuring that AI tools genuinely improve workflow efficiency and enhance the quality of patient care. AI should be implemented with the overarching goal of improving overall community health. Without addressing these issues, there will be resistance to adoption.
SHM has general optimism about AI tools that can significantly reduce administrative burdens, allow more face time with patients, and perform accurate encounter-based documentation, coding, and billing. However, adoption remains slow, with most current use limited to resident teaching rounds, private practice ambient documentation, and some sepsis alert systems.
HHS should focus on creating an environment where AI innovations meet healthcare needs rather than prioritizing technology sector interests. The ultimate measure of success must be improved patient outcomes, enhanced clinician wellbeing, and better overall health – not technological sophistication for its own sake, or pure economic efficiency aims such as higher encounter quotas or reduced reimbursement.
We appreciate HHS’s leadership in seeking stakeholder input and stand ready to partner in advancing policies that ensure AI strengthens—not supplants—the human practice of medicine.
Sincerely,
Chad T. Whelan, MD, MHSA, SFHM
President
Society of Hospital Medicine
