Battling for control of health care resources.

Author: Morreim, E. Haavi

An extraordinary upheaval in the health care sector has occurred during the past two decades. Skyrocketing increases in expenditures, which had become evident in the mid-1960s, gave way to a parade of attempts to rein in costs, ranging from price controls to restrictions on the proliferation of technology to modifications in the incentives under which providers function (Butler and Haislmaier 1989; Goldsmith 1986; Patricelli 1987; Starr 1982). These early efforts were largely unsuccessful, and national health care expenditures continued to rise rapidly (Aaron and Schwartz 1984; Butler and Haislmaier 1989; Fuchs 1987; Schwartz 1981; Starr 1982). By the late 1980s and early 1990s, as international economic competition and then a domestic recession challenged corporate vitality, employers nationwide finally determined that they could no longer continue to absorb annual double-digit increases in health care costs. First on the West Coast and eventually nationwide, corporations gave health plans an ultimatum: restrain premium prices or lose business. That move ushered in the managed care era of the 1990s, with its gyrations between (temporarily) successful cost containment and public vilification for the tactics by which that success was achieved.

Over time those tactics have evolved. Intensive utilization management and stringent gatekeeping systems, so prominent from the mid-1980s to the mid-1990s, have been giving way to broader profiling of providers and practices; incentives have gone from crude cash rewards for cutting costs to more sophisticated mixes rewarding productivity and quality alongside cost consciousness. Enormous changes are still under way.

Throughout this period, a fundamental battle has raged: Who should control health care resources? Health plans insist that they are contractually entitled to determine the medical services and products for which they will pay, but physicians retort that the plans' denials of payment interfere with medical judgment, and patients complain that they are not receiving the care for which they believe they have paid. The battle will not disappear soon, for two reasons.

First, in contrast with other professions, the practice of medicine often requires considerably more than the practitioner's own knowledge and skill. Although other professions require some broader infrastructure, individuals practicing law, architecture, accounting, and the like can usually work with a fairly modest array of personal tools--computers, libraries, drafting equipment--because the mainstay of their service is the knowledge, skill, and effort they personally provide to the client. Physicians, in contrast, must routinely use costly drugs, devices, diagnostic technologies, and a host of other expensive resources in addition to their personal expertise.

Second, in most cases the costs of these medical tools are not paid directly by the patients who receive them. Third parties write most of the checks and then pass those costs along to employers and taxpayers. Patients ultimately bear the cost, of course, whether through taxes, forgone wages, or reduced job opportunities, but employers, governments, and others generally cover the immediate bills. Here, too, health care differs from other enterprises. In most business transactions, the "consumer" is the one who chooses, pays for, and receives the product, decides whether it meets his expectations, and seeks redress if it does not. In health care, various entities typically fill these roles. The employer, not the employee/patient, commonly chooses the health plan or limits the options. The physician (often with influence from the health plan) chooses the medical services, albeit perhaps with input from the patient. The patient receives the care. The health plan, employer, or, in capitated arrangements, the physician's medical group may pay most of the provider fees. If bills are inaccurate, the health plan, employer, or government, not the patient, must chase down the errors. Restitution for poor-quality service is pursued through the tort system and regulatory mechanisms, not usually through refunds or product replacements, as in other markets. In short, in health care there is no readily identifiable "consumer."

Together, these two factors mean that virtually every medical decision is a spending decision, and third parties can control their costs only by controlling, or at least by influencing, actual decisions about patient care. So long as this condition continues, the battle will rage. Although many combatants are engaged, the two primary parties have been health plans and physicians because most of the medical spending decisions are made in their nexus. Plans regard themselves as entitled to determine what they will pay for, and physicians believe that they themselves, not business managers or even medical directors, should decide what is best for patients.

In this article, I argue that neither plans nor physicians should "win" this battle, in the sense of gaining the power to dictate unilaterally what care will be provided and how much money will be spent, for whom, under what conditions. On the one hand, the guidelines many health plans use to make coverage determinations and to reshape medical practices are seriously flawed. On the other hand, physicians' practices often are not based on existing scientific knowledge. Rather, a balance should be struck, one that ultimately must incorporate patients themselves.

Problems with Health Plan Guidelines

Although practice guidelines have proliferated in health care, many of those by which health plans make benefits determinations and guide medical care have an inadequate scientific basis. The reasons are numerous.

Many important topics in medicine have not been studied adequately. Although new drugs and devices must be proved safe and effective before they can be commercially marketed, surgeries and other invasive procedures are under no such regulatory requirements. Thus, although coronary artery bypass surgery was first performed in 1964, it was not scientifically evaluated until 1977; angioplasty to open clogged arteries in the heart was "performed in hundreds of thousands of patients prior to the first randomized clinical trial demonstrating efficacy in 1992" (Dalen 1998, 2180).

Many medical devices have never been evaluated scientifically because government regulations do not require an evaluation either for devices already in use at the time the regulations were enacted or for later devices that are substantially equivalent to those earlier ones. Hence, devices such as the pulmonary artery catheter, introduced in the 1970s for monitoring the cardiopulmonary function of critically ill patients, have not been studied thoroughly. Recent evidence indicates that this widely used device may do more harm than good, prompting some critics to urge a moratorium on its use pending further evaluation (J. B. Hall 2000).

Approved drugs and devices can be used in whatever ways physicians wish, and a large proportion of clinical practice is off-label. Anticancer drugs, for instance, are often used in ways and in combinations that go beyond approved indications. Similarly, until fairly recently much of the required testing of new drugs did not include either children or women with child-bearing potential as research subjects. The omission was intended to protect children and potential fetuses, and yet the result is that we have only limited knowledge about potentially important differences in the ways drugs affect children and women.

A newer genre of research, "outcomes studies," aims to establish better correlations between what physicians do during clinical care and the results that patients actually experience in both the short and long term. Unfortunately, outcomes studies in general suffer from a lack of standardized methodologies--what counts as an outcome, which costs should be tallied, and the like (Epstein 1995; Feinstein 1994; Soumerai et al. 1993; Task Force 1995). Some studies look scientific yet lack any acceptable methodology at all (Brody 1995), whereas others may be biased by researchers' and sponsors' conflicts of interest, given that drug and device manufacturers and health plans undertake much of this research (Hillman et al. 1991; Perry and Thamer 1999). Among legitimate methodologies, each has distinct advantages and disadvantages. For example, administrative data such as hospital billing records are abundant and easily available, but they are littered with gaps and inaccuracies (Ray 1997).

For these reasons and others, managed-care organizations (MCOs) that seek to make benefits decisions or to shape clinical care may not have scientifically well-founded guidelines available. They may rely on panels of experts, who can bring their own biases. Alternatively, plans simply may rely on the Merck Manual, Medicare guidelines, "an administrator who `asked friends who are doctors,' or an insurance company's employee-physician (usually not a specialist in the field in question) who reads textbooks and discusses the issue with other insurance company physicians" (Holder 1994, 19; see also Perry and Thamer 1999). As several commentators recently have observed, "materials such as the practice guidelines prepared by Milliman and Robertson, a well-known actuarial firm, often rely on insurers' own decisions rather than on well-designed scientific research" (Rosenbaum et al. 1999, 231). Even if an MCO adopts or produces excellent guidelines, keeping those guides up to date may be nearly impossible as new technologies emerge and as knowledge about them keeps evolving.

The problems do...
