FHIR Bulk Data Standard Gains Momentum

Dec. 16, 2019
Transferring large data sets will aid in population health, value-based care, and research

The initial use cases for the FHIR standard have centered on exchanging clinical data about individual patients. But in early November, 60 stakeholders from across the healthcare ecosystem gathered at Harvard Medical School to continue progress toward a FHIR bulk data implementation. Managing population health, delivering value-based care, and conducting research all require access to large population data sets, creating demand for a standardized FHIR format for exchange.

A report from the November meeting describes progress on the FHIR bulk data front, noting that within six to eight months of the first meeting on this topic in 2017, CMS was already using the standard in pilots, and EHR vendors are getting ready to embrace bulk data. Other federal agencies also have use cases involving bulk data, the report added. To address this need, the SMART Health IT project, run out of the Boston Children’s Hospital Computational Health Informatics Program, and HL7 have jointly developed the SMART/HL7 Bulk Data/Flat FHIR standard and associated tools.
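The "Flat FHIR" name refers to the standard's output format: newline-delimited JSON (NDJSON), with one FHIR resource serialized per line, so very large exports can be processed a line at a time. A minimal sketch of reading such a file with Python's standard library (the sample Patient resources below are illustrative, not drawn from any real export):

```python
import json

# A "Flat FHIR" / NDJSON export file: one FHIR resource per line.
# These sample Patient resources are illustrative only.
ndjson_export = """\
{"resourceType": "Patient", "id": "p1", "gender": "female"}
{"resourceType": "Patient", "id": "p2", "gender": "male"}
"""

# Each line is an independent JSON document, so an arbitrarily large
# export can be streamed without loading the whole file into memory.
patients = [
    json.loads(line)
    for line in ndjson_export.splitlines()
    if line.strip()
]

for p in patients:
    print(p["resourceType"], p["id"])
```

Because each line stands alone, the same loop works whether the export holds two resources or two million.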

In a series of tweets about the meeting, Ken Mandl, M.D., M.P.H., lead and chair of the SMART Advisory Committee, said more than 20 health systems and health plans have committed to moving the HL7 balloted standard into real-world testing.

“The community has access to a suite of free and open-source products to facilitate FHIR bulk data implementation, including the SMART reference implementation, SMART sample client, and the SMART bulk data testing tool to verify server compliance,” Mandl tweeted. He said he was “cautiously optimistic we could see at least a soupcon of Flat FHIR in the final ONC Rule, which implements 21st Century Cures Act provisions.”

The objectives of the meeting were to obtain feedback on the FHIR bulk data API; understand the next steps for payers, federal agencies, and EHR vendors; and discuss how to structure the regulatory environment to promote the implementation and adoption of bulk data APIs. Officials from the Office of the National Coordinator for Health IT were on hand to lend support.

The report lists some common use cases for transferring population-level data:

• Payers need clinical data to assess care quality; today that exchange often happens by emailing spreadsheets around, a time-consuming, manual process.

• Providers want to access claims data to see what care patients are receiving outside their network. To bring that data into the EHR, it would be beneficial to be able to pull in bulk data from claims.

• Many healthcare institutions have a data warehouse that pulls in data from clinical systems to support analytics use cases like finding research cohorts for studies. Frequently, this process uses custom scripts that are brittle and must be updated as the systems they interface with change, the report notes.

• A great deal of work goes on in healthcare institutions around reformatting data to send it to different disease registries.
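All of these use cases follow the same asynchronous pattern defined in the SMART/HL7 Bulk Data specification: a client kicks off an export with a GET to `[base]/Patient/$export` (all patients) or `[base]/Group/[id]/$export` (one cohort), sending a `Prefer: respond-async` header; the server answers with a status URL to poll, and the finished export is delivered as links to NDJSON files. A sketch of building the kick-off request, where the server base URL and group ID are hypothetical placeholders:

```python
from urllib.parse import urlencode

def build_export_request(base_url, group_id=None, resource_types=None, since=None):
    """Build the URL and headers for a FHIR Bulk Data kick-off request.

    Per the SMART/HL7 Bulk Data specification, the client issues a GET to
    [base]/Patient/$export (all patients) or [base]/Group/[id]/$export
    (one cohort) with a Prefer: respond-async header; the server replies
    202 Accepted with a Content-Location status URL to poll.
    """
    path = f"Group/{group_id}/$export" if group_id else "Patient/$export"
    params = {}
    if resource_types:
        # _type limits the export to specific resource types
        params["_type"] = ",".join(resource_types)
    if since:
        # _since limits the export to resources updated after this instant
        params["_since"] = since
    url = f"{base_url.rstrip('/')}/{path}"
    if params:
        url += "?" + urlencode(params)
    headers = {
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",
    }
    return url, headers

# Hypothetical server and group ID, for illustration only.
url, headers = build_export_request(
    "https://ehr.example.org/fhir",
    group_id="aco-members",
    resource_types=["Patient", "ExplanationOfBenefit"],
)
print(url)
```

In the payer and ACO scenarios above, the `Group`-level export is the natural fit: the group resource defines the member panel, and the `_since` parameter keeps recurring pulls incremental rather than re-transferring the full data set.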

The meeting writeup also lists some of the implementations already under way:

• CMS ACO Beneficiary Claims Data pilot. This is a bulk data service for ACOs to retrieve claims data about their members. There is huge enthusiasm for being able to get the data in a standard format and use standard FHIR tools.

• CMS Data at the Point of Care pilot. This uses Medicare’s Blue Button to expose claims data to create a 360-degree view of Medicare patients for providers at the point of care.

• Boston Children’s Hospital Payer Analytics. This project will eventually be open source, using the bulk data mechanism to retrieve data and do quality measures and other types of analytics.