Hospitals 2005: What’s the Best Hospital For You?

There are plenty of numbers available and lots of hospital ratings. Here's how to avoid the hype and find the best information.

Contributing editor John Pekkanen has been writing about medical issues, doctors, and hospitals for three decades.

Here's what DC resident Suzanne Delbanco experienced when looking for the best hospital in which to give birth. "I found I had far less information for deciding which hospital to go to than I did when I had to decide what kind of dishwasher to buy a few months earlier," she says. "Even though healthcare is so important to us, we have access to so little information about which healthcare facilities we should use. This has got to change."

Delbanco is no ordinary consumer. She holds a doctoral degree in healthcare policy and is CEO of the Leapfrog Group, a consortium of more than 150 big corporations seeking to advance hospital quality and safety.

Washingtonians, like millions of other Americans, have faced a dilemma similar to Delbanco's because how well or badly a hospital cares for its patients has long been hidden in a dark corner of medicine. Considering that we spend $1.7 trillion a year on healthcare in this country, the lack of attention to hospital performance seems surprising.

"The healthcare industry has a real hard time being measured because no one likes to come under public scrutiny," says Dr. Kenneth Kizer, president of the Washington-based National Quality Forum, an organization seeking national standards for evaluating hospital quality. "It is critically important to compare hospitals and to make hospital performance transparent. The people want it, health insurers want it, and now healthcare providers want it."

One problem is that right now a hospital's rating depends mostly on who's rating it. The result is that no one can give a definitive answer to one key question–how do similar patients with similar medical problems fare at different hospitals? What we have instead, according to Kizer, is "a Tower of Babel of performance measurers" and no agreement on how best to measure hospital performance.

Kevin Sexton, head of Holy Cross Hospital in Silver Spring, agrees: "Patient-outcome data is now so controversial and so muddy that we don't have anything definitive. We're all seeing different parts of the elephant, but no one's seeing the whole picture."

Our interviews with area hospital CEOs and quality evaluators seem to bear this out. Each was asked, "If you were given all the publicly available patient-outcome data from ten different hospitals, could you determine which was the best?"

Every one of them said no.

"You might be able to pick out an outlier if it was bad enough," one patient-outcome expert said, "but that's about all."

The interest in opening up hospital-performance information got rolling in 1999 when the Institute of Medicine–the issues-research arm of the National Academy of Sciences–issued a report called "To Err Is Human: Building a Safer Health System." The report promised to expose "the often startling truth of medical error," which it estimated caused up to 98,000 preventable hospital deaths each year. According to the IOM, deaths caused by this epidemic of hospital mistakes were the equivalent of "a jumbo jet crashing every day."

Congress and professional and consumer groups, among others, demanded that hospitals be more accountable and improve patient care and safety. But if a study published in the May 18, 2005, issue of the Journal of the American Medical Association is right, this hasn't happened. Conducted by doctors at Harvard Medical School and the Harvard School of Public Health, the JAMA study found the same number of preventable hospital deaths as the Institute of Medicine reported six years earlier.

Mostly what has resulted from the IOM report is a proliferation of hospital-performance data churned out by the federal government and some states, by professional medical organizations, and by a growth industry of for-profit companies that produce hospital report cards for the public. According to a review published last fall by the Delmarva Foundation and the Joint Commission on Accreditation of Healthcare Organizations–the country's largest hospital accreditor–nearly 50 Web sites now offer some kind of performance evaluation of the 5,000 acute-care hospitals in the United States.

Sifting through this data deluge to pick one hospital over another remains a guessing game for laypeople and professionals alike. "There is now so much data from so many different sources," admits one area hospital administrator, "that even as a healthcare provider it's very difficult to figure out what the hell's going on."

There are radical discrepancies in the evaluations of Washington-area hospitals by three of the most widely known and established private hospital-rating organizations: HealthGrades, U.S. News, and Solucient.

For 2004, Solucient rated the Washington Hospital Center's cardiovascular program among the top 100 nationally. In its 2004 "Best Hospitals" issue, U.S. News ranked the WHC cardiovascular program 15th best in the country. Yet for that same year, HealthGrades gave WHC's performance in coronary bypass surgery a three-star or "as expected" rating on the basis of its in-hospital mortality rates; it gave WHC the same rating for its one-month and six-month mortality rates. An "as expected" rating essentially means a hospital's performance is in line with a national average. HealthGrades did not give WHC a five-star or "excellent" rating in 2004 for any aspect of its cardiovascular program.
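What separates "as expected" from "excellent" in a mortality-based rating? In broad terms, a rater asks whether a hospital's observed death count differs from the count a national benchmark would predict by more than chance alone could explain. The sketch below illustrates that logic with a simple normal approximation; the thresholds and numbers are hypothetical, not HealthGrades' actual methodology.

```python
import math

def star_rating(observed_deaths: int, cases: int, benchmark_rate: float) -> str:
    """Compare a hospital's observed deaths with the count a national
    benchmark rate predicts, using a normal approximation to the binomial."""
    expected = cases * benchmark_rate
    sd = math.sqrt(cases * benchmark_rate * (1 - benchmark_rate))
    z = (observed_deaths - expected) / sd  # standardized gap from expectation
    if z < -1.96:   # significantly fewer deaths than chance allows
        return "excellent (5 stars)"
    if z > 1.96:    # significantly more deaths than chance allows
        return "poor (1 star)"
    return "as expected (3 stars)"

# 500 bypass cases at a 4-percent benchmark rate predicts about 20 deaths;
# 18 observed deaths is well within the range chance alone can produce.
print(star_rating(observed_deaths=18, cases=500, benchmark_rate=0.04))
# -> as expected (3 stars)
```

On this kind of test, a hospital can lose noticeably fewer patients than the benchmark predicts and still land in the broad "as expected" middle, because the gap isn't large enough to rule out luck.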

Now take Georgetown University Hospital: In 2004 it received a citation for "clinical excellence" from HealthGrades for being among the "highest scoring of the nation's full service hospitals." Solucient did not include Georgetown among its top 100 hospitals. Georgetown made only two of the 17 U.S. News specialty rankings, coming in 27th for psychiatry and 43rd for orthopedics. The mixed ratings could be a reflection of Georgetown's diminished reputation and financial difficulties over the past several years.

The Johns Hopkins Hospital is regarded as one of the nation's premier academic medical institutions. In 2004, U.S. News ranked Hopkins's cardiovascular program fourth-best in the country, but that same year Solucient left the Hopkins cardiovascular program out of its top 100. Moreover, U.S. News listed Hopkins first on its "honor roll" of 14 hospitals nationally. These select hospitals were singled out because they "excelled not in one or two specialties, but in six or more," according to U.S. News. But in 2004, Solucient did not include Hopkins as a whole among its top 100 hospitals.

How do the raters explain their disparities? Not surprisingly, each one says it has it right and its competitors have it wrong.

Neither HealthGrades nor Solucient weighs a hospital's reputation in its ratings, so both scoff at U.S. News because it does. Sarah Loughran of HealthGrades claims reputations are little more than image contests and adds, "I know of hospitals that strongly lobby physicians to rate the hospital high so it can make the U.S. News top 100 list."

Solucient senior vice president Jean Chenoweth says a hospital's reputation–good or bad–is not relevant because it can linger long after a hospital has undergone a significant change. "Reputations don't change unless you cut the wrong leg off," she says.

While admitting reputation is the "squishiest" part of the U.S. News hospital ratings, Avery Comarow, the editor in charge of the survey since it began in 1990, defends it as a valid way to help identify centers of excellence. He in turn criticizes Solucient's hospital ratings because they are based in part on such things as hospital finances, occupancy rate, market share, and other nonmedical issues.

"None of this is relevant to a hospital's clinical performance," Comarow says.

That may not be completely true, at least according to a study published in June finding that hospitals with financial problems are more likely to commit medical errors than financially sound ones. Researchers from the Agency for Healthcare Research and Quality (AHRQ) compared error rates at 176 Florida hospitals with data on their overall financial well-being. AHRQ found that errors are 12 percent more likely to occur at hospitals that are losing money. The federally funded study appeared in the medical journal Inquiry.

Comarow and Chenoweth do agree on something: that HealthGrades has a conflict of interest because of the fees it charges hospitals to use a HealthGrades excellence award in advertising and publicity.

Chenoweth also says HealthGrades' heavy reliance on Medicare mortality data can be misleading because "it gives you a gross outcome based on a rare event, mortality."

"Only 3 percent of patients die in hospitals," Chenoweth says, "which means 97 percent leave there alive."

Chenoweth concedes that as of now no one really has a handle on measuring hospital performance. "We don't suggest consumers pick a hospital on the basis of what we publish," she says. "Publicly available data is useful, but it's not enough for anyone to make a choice. It's just not definitive."

Another hazard of hospital evaluations is that the quality of hospitals and their departments can change quickly. "There is a constant evolution in medicine, and things move very quickly," says Dr. Bryan Arling, an internist in Northwest Washington and clinical professor at Georgetown and George Washington universities. "What may have been true one year for a hospital, or hospital department, may not be true the next."

There also is the problem of the information source. Although commercial hospital raters vary in their methods and results, most draw virtually all their clinical data from the same source–the Centers for Medicare and Medicaid Services. This federal agency is where healthcare providers submit billing data for Medicare reimbursement, and its records are the only publicly available national databank of medical outcomes for hospital patients, which is why hospital raters tap into it.

Mining Medicare data to rate hospitals has many limitations, according to critics, the most obvious being that children and younger adults are excluded from any calculation of hospital performance. Medicare patients account for about 55 percent of hospital admissions. Medicare data also are dated by the time they're analyzed and published. For its 2004 ratings, for example, HealthGrades used Medicare data from the years 2000 to 2002.

To more fairly evaluate hospital performance, raters try to account for the overall health of a hospital's patients so one hospital won't appear better than another simply because it treats healthier patients. "Risk adjusting" is based on how illnesses and treatments are coded. But as Sexton, CEO of Holy Cross Hospital, says: "Administrative data is coded to get Medicare reimbursement. It is not coded to capture everything that is going on with patients clinically."
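To see why coding matters so much, consider a toy version of risk adjustment: each patient's coded diagnoses feed a predicted risk of death, those predictions are summed into an "expected" death count, and the hospital is judged on its ratio of observed to expected deaths. The weights below are invented for illustration; real risk models are fitted to millions of Medicare claims, not hand-set like this.

```python
# Hypothetical risk weights keyed to a few diagnosis codes (illustration only).
RISK_WEIGHTS = {"diabetes": 0.02, "heart_failure": 0.06, "renal_failure": 0.08}
BASELINE_RISK = 0.01  # predicted death risk for a patient with no comorbidities

def expected_deaths(patients: list) -> float:
    """Sum each patient's predicted mortality risk from their coded diagnoses."""
    total = 0.0
    for codes in patients:
        risk = BASELINE_RISK + sum(RISK_WEIGHTS.get(c, 0.0) for c in codes)
        total += min(risk, 1.0)  # a probability can't exceed 1
    return total

# Two caseloads of 100 patients each: one coded much sicker than the other.
sicker = [["heart_failure", "renal_failure"], ["diabetes", "heart_failure"]] * 50
healthier = [["diabetes"], []] * 50

observed = 9  # suppose both hospitals recorded 9 deaths
for name, patients in [("sicker caseload", sicker), ("healthier caseload", healthier)]:
    ratio = observed / expected_deaths(patients)
    print(f"{name}: observed/expected = {ratio:.2f}")  # below 1.0 beats expectations
# sicker caseload: observed/expected = 0.75
# healthier caseload: observed/expected = 4.50
```

Two hospitals with identical raw death counts can land on opposite sides of "better than expected" purely because of what their billing codes say about patient risk–which is why the integrity of the coding matters so much.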

Risk adjustments can also be skewed because of a widely acknowledged hospital practice called "up-coding." This happens when a hospital inflates billing codes to make a patient's care or medical condition appear more complicated than it is so the hospital will be rewarded with a bigger Medicare reimbursement. No one knows for sure how much up-coding goes on, but it's not insignificant.

There's another problem with gathering hospital data. "We have a 'shame and blame' culture," says Joint Commission on Accreditation of Healthcare Organizations president Dr. Dennis O'Leary, "and the fear of lawsuits puts a chill on the willingness of doctors and hospitals to report adverse events."

A 2002 survey of 831 physicians published in the New England Journal of Medicine found that only 14 percent of them thought information about medical mistakes should be released to the public.

Some hospitals also fail to document essential patient data thoroughly, says O'Leary, a former clinical director at George Washington University Medical Center, because the data is usually entered by hand, a labor-intensive task for many understaffed hospitals. This can make good hospitals that document patient data carefully and are forthcoming about mistakes look worse than weaker hospitals that don't.

These are among the reasons O'Leary and other critics contend publicly available hospital data do not offer a clear picture of hospital performance.

When asked if commercial raters help or hurt consumers, O'Leary said: "They hurt only if you believe them."
