HL7 Releases PHR Ballot Model

June 24, 2011
Ann Arbor, Mich.-based Health Level Seven (HL7), a healthcare IT standards development organization, has released a ballot to approve its Personal Health Record System Functional Model (PHR-S FM) as a Draft Standard for Trial Use (DSTU).

According to HL7, the PHR-S FM defines a set of functions that may be present in PHR systems, and offers guidelines that facilitate health information exchange among different PHR systems and between PHR and EHR systems.

While the PHR-S FM is not yet a fully ANSI-accredited standard, the DSTU designation allows the industry to work with a stable standard for up to two years while it is refined into an ANSI-accredited version.

HL7 invites the public to vote on the PHR-S Functional Model. The voting period began Nov. 2 and will continue through Dec. 1. Both members and non-members of HL7 can vote, and the model and ballot package can be downloaded at www.hl7.org/ehr.
