When Atul Gawande of Harvard Medical School went to McAllen, Texas, to find out why health-care costs there were high, he discovered something interesting. The people in McAllen--including the executives who ran the key hospitals there--did not know that their costs were high. They had no clue. Zero.
In McAllen, Gawande reported in The New Yorker, health-care costs were not just high. They were off the charts. But you'd never know that if you had never seen the charts, if no one had ever shown you the charts, if you had never gone looking for the charts.
There are lots of different kinds of charts for displaying data. After all, there are lots of different kinds of data, and depending on how the data are analyzed and displayed, they can offer different insights. Baseball (as I have often noted) collects a multiplicity of data on players, on teams, on match-ups, on circumstances. Yet, compared with medicine and health care, baseball is a simple enterprise.
Still, the basic comparative data on health-care costs aren't all that complicated. You need a numerator: total costs. And you need a denominator, which is often more difficult to select.
The most obvious denominator for comparative health-care costs is population. If you divide total dollars spent on health care in a region by its total population, you get a useful first-order measure with which to compare health-care costs.
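The arithmetic is simple enough to sketch. The regions and dollar figures below are hypothetical, chosen only to illustrate the numerator-over-denominator calculation described above:

```python
# Per-capita health-care cost: total dollars spent (numerator)
# divided by total population (denominator).
# Region names and figures are hypothetical, for illustration only.

regions = {
    "Region A": {"total_cost": 3_000_000_000, "population": 750_000},
    "Region B": {"total_cost": 1_800_000_000, "population": 700_000},
}

for name, data in regions.items():
    per_capita = data["total_cost"] / data["population"]
    print(f"{name}: ${per_capita:,.0f} per person")
```

With figures like these in hand, a region's executives could see at a glance where they stand relative to their peers -- which is exactly what the people in McAllen lacked.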
Obviously, the underlying characteristics of the populations being compared may be different. And such differences may explain the differences in the comparative data. For example, if a region's population is significantly older than another's, it could easily have higher health-care costs. After all, in the United States, people over the age of 65 account for over a third of all health-care spending.
Still, unless the age distribution of two geographic areas is significantly different, it isn't going to drive significant differences in per-capita health-care costs. Whenever data are compared, those at the bottom have a standard defense: "You don't understand. We're different." But this explanation (excuse?) is valid only if this difference is both large and relevant.
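A back-of-the-envelope check shows why a modest age difference is rarely a valid excuse. The per-person cost figures below are hypothetical (illustrative only, not actual U.S. data); the point is that a few percentage points' difference in the share of residents over 65 moves the blended per-capita average only modestly:

```python
# Blended per-capita cost for a given age mix.
# The $12,000 (over-65) and $4,000 (under-65) annual figures are
# hypothetical, chosen only to illustrate the sensitivity.

def per_capita_cost(share_over_65, cost_old=12_000.0, cost_young=4_000.0):
    """Weighted average cost per person for a given share of residents 65+."""
    return share_over_65 * cost_old + (1 - share_over_65) * cost_young

print(per_capita_cost(0.15))  # 5200.0
print(per_capita_cost(0.18))  # 5440.0 -- about 5 percent higher
```

A three-point difference in age mix produces roughly a 5 percent difference in per-capita cost under these assumptions -- nowhere near enough to explain costs that are "off the charts."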
Any large public agency divides its responsibilities among subunits. But does each subunit know how well its performance compares with its peers? Or in the absence of any data, can a subunit dupe itself into believing that it is at least above average--if not truly superior? If subunit managers can assume that...