
The Roger C. Lipitz Center for Integrated Health Care

Post date: 22 July 2013


Peter J. Pronovost, MD, PhD, FCCM
Sr. Vice President for Patient Safety and Quality, and Director of the Armstrong Institute for Patient Safety and Quality
Johns Hopkins Medicine 

If you have ever tried to choose a physician or hospital based on publicly available performance measures, you may have felt overwhelmed and confused by what you found online. The Centers for Medicare and Medicaid Services, the Agency for Healthcare Research and Quality, the Joint Commission, the Leapfrog Group, and the National Committee for Quality Assurance, as well as most states and for-profit companies such as Healthgrades and U.S. News and World Report, all offer various measures, ratings, rankings and report cards. Hospitals are even generating their own measures and posting their performance on their websites, typically without validation of their methodology or data.

The value and validity of these measures vary greatly, though their accuracy is rarely publicly reported. Even when methodologies are transparent, clinicians, insurers, government agencies and others frequently disagree on whether a measure accurately indicates the quality of care. Some companies’ methods are proprietary and, unlike many other publicly available measures, have not been reviewed by the National Quality Forum, a public-private organization that endorses quality measures.

Depending on where you look, you often get a different story about the quality of care at a given institution. For example, none of the 17 hospitals listed in U.S. News and World Report’s “Best Hospitals Honor Roll” were identified by the Joint Commission as top performers in its 2010 list of institutions that received a composite score of at least 95 percent on key process measures.

Too often, data about quality are reported by marketing departments, not measurement experts, and they share only the information that makes the hospital look good. There are no rules governing how or what they report, or the accuracy of that information. Indeed, there are stronger safeguards on what a company can say about toothpaste than on what a hospital can say about its quality. A brief review of hospital websites published in the Journal of the American Medical Association (JAMA) several years ago shows egregious examples of misleading reporting. In one instance, a hospital stated that 96 percent of its patients received evidence-based therapies, while a government website that reports hospital performance put the figure at 64 percent.

For performance measures that are standardized and publicly reported, there has been a great deal of sparring among providers and regulators over their accuracy and importance. Regulators argue the measures are good enough, providers argue they are not, but neither side has presented data to support its position or discussed how accurate is accurate enough. For example, the Centers for Medicare and Medicaid Services uses billing data to identify certain complications, called hospital-acquired conditions (HACs), and withholds payment for the costs associated with them. Yet the accuracy of billing data for identifying these complications is either unknown or low. Billing and clinical data match only 25 percent of the time for catheter infections, one type of HAC, and for most other complications we do not know whether the billing data gets it right 5 percent of the time or 95 percent of the time.

Unfortunately, health reform has failed to establish a process to create the thousands of performance measures that patients want and deserve. To date, health care has attempted to develop measures one at a time. At this pace, it will be decades before we have measures for all the diagnoses and procedures for which patients need quality data to make informed decisions. Without a process to develop measures of value, it is unlikely health care will ever improve value. Such a process is also needed for the reporting of charge and cost data, an area that is equally gray for consumers, who are hard pressed to find accurate information on what they will be charged for a procedure.

In a policy paper published this spring, Robert Berenson, a fellow at the Urban Institute, Harlan Krumholz, a professor at the Yale School of Medicine, and I called for dramatic change in measurement. (Thanks to The Health Care Blog for its feature on this analysis.)

We made several recommendations, including focusing more on measuring outcomes, such as mortality and infections, rather than processes (e.g., whether patients received the recommended treatment) or structures of care (e.g., whether ICUs are staffed around the clock with critical care specialists). We urged that measures be reported at the organization level rather than the clinician level, to reflect the fact that safety and quality are as much products of care delivery systems as of individual clinicians. We also proposed investments in the “basic science” of measurement so that we better understand how to design good measures. You can read these and other recommendations in the analysis.

Of the proposals, perhaps the biggest game-changer would be the creation of an entity to serve as the health care equivalent of the U.S. Securities and Exchange Commission. Rather than wading through a bevy of competing and often contradictory measures, patients and others would have one source of quality data that has national consensus behind it. We write:

“Under this model, this entity would set the rules for the development of measures and the transparent reporting of performance of these measures, analyze progress (with input from clinicians, patients, employers, and insurers), and audit publicly-reported quality measure data. Private sector information brokers could then conduct secondary analyses of the reports, much like happens in the financial industry through companies like Bloomberg. This SEC-like model would thus ensure that all publicly-reported quality measure data are generated from a common basis in fact and allow apples-to-apples comparisons across provider organizations.”

Before the SEC was created in the aftermath of the Wall Street Crash of 1929, information provided by one business typically could not be compared with that of another, as there were no common standards for reporting financial performance. This spurred the development of trained professionals, known as Certified Public Accountants (CPAs), skilled in collecting, reporting and certifying financial performance. Training is not yet standardized for those who collect and report health care quality data, but, encouragingly, some schools of public health, including Hopkins, have developed health care quality curricula and, in some cases, advanced degree programs. In the future, such training should be required for those measuring quality.

It has been nearly 80 years since the SEC was created, ushering in greater transparency in the financial arena. Today, health care is stuck in a similar situation, despite great efforts to create measures to drive improvement and inform patients’ decisions. It’s time that we catch up. An SEC-like entity could combine private sector rule-setting, public sector auditing and transparency, and private sector reanalysis, all working from a common book of truth.

Advancing the science of measurement is one of three content tracks in Johns Hopkins’ first Forum on Emerging Topics in Patient Safety, to be held Sept. 23-25 in Baltimore. Experts from a wide range of backgrounds will gather to help generate ideas around this crucial issue. Among the speakers on this track are Patrick Conway, Chief Medical Officer for the Centers for Medicare and Medicaid Services; John Santa, Director of the Consumer Reports Health Ratings Center; Niek Klazinga, Coordinator of the Health Care Quality Indicator Project at the Organisation for Economic Co-operation and Development; and Robert Berenson of the Urban Institute. Aimee Guidera, founder of the Data Quality Campaign, which has encouraged the creation and use of high-quality data in education, will provide perspectives from her field that may translate to health care. If you are interested in this topic and would like to contribute to the recommendations that come from the forum, please join us in September.

Dr. Peter Pronovost is a world-renowned patient safety champion. His scientific work leveraging checklists to reduce catheter-related bloodstream infections has saved thousands of lives and earned him high-profile accolades, including being named one of the 100 most influential people in the world by Time Magazine. Dr. Pronovost is an advisor to the World Health Organization’s World Alliance for Patient Safety and regularly addresses the U.S. Congress on patient safety issues. He is senior vice president of patient safety and quality and director of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins Medicine.