Defined contribution: from managed care to patient-managed care.

Author: Morreim, E. Haavi

After more than a decade of extraordinary turbulence in the financing and delivery of health care, it is sobering but probably accurate to anticipate even greater challenges in the near future. Indeed, one commentator has ventured that health care is heading for its own "perfect storm" (Miller 2001). After years of increasingly desperate attempts to centralize control over medical decisions and dollars, the next phase may take us "forward into the past" in ways that will finally reunite patients with their own health, health care, and health care dollars.

A bit of history will suggest how this is likely to play out. Lavish health care funding beginning in the mid-1960s led to decades of unrestrained spending, followed by desperate but unsuccessful attempts to contain costs. In the 1990s managed care introduced business concepts hitherto largely alien to the world of health care. The result was a much-needed taming of expenditures, but at the price of denials, delays, and inconveniences that sometimes were medically, personally, politically, and even economically counterproductive. Although health care clearly needed business discipline, many of the tools of managed care came from people who had considerable experience with businesses such as insurance, but relatively little experience with the clinical nuances of health care.

Managed care's most notorious tactics quickly faded, partly via public backlash and partly as the late 1990s economic boom required employers to lure and keep good workers with generous health care benefits alongside hefty salaries. This phase, too, was short-lived, as the most recent economic slowdown now prompts yet another reexamination of the ways in which health care is financed and delivered.

Promising changes are afoot, particularly via "defined contribution" plans that bring patients into closer contact with the costs of their care and thereby into greater control over the content of their care. This development provides an important opportunity to address longstanding flaws in the U.S. health care system.

History

U.S. health care has passed through several fairly distinct eras that need only brief summaries here (Starr 1982; Butler and Haislmaier 1989). Prior to World War II, health care was not costly because physicians had relatively little to offer. It was the era of Modest Medicine. But during the wartime years of the 1940s and continuing throughout the postwar era, first-dollar insurance coverage became a standard benefit. Workers and their families came to expect that health care should never cost anything out of their own pockets. At the same time, several factors spurred a rise in health care costs that placed health care beyond the reach of most people's pockets.

The 1965 enactment of Medicare and Medicaid brought large additional populations within the fold of the fully insured and, in the process, made standard some insurance practices that ensured ongoing price inflation. Retrospective fee-for-service (FFS) reimbursement paid for virtually any service rendered, as insurers were reluctant to challenge providers' judgments about what care should be provided. At the same time, physicians and hospitals that were now paid according to "usual, customary, and reasonable" (UCR) fee schedules quickly discerned that health care could be very lucrative if they usually, customarily, and ever-so-reasonably charged very high fees (Roe 1981; Delbanco, Meyers, and Segal 1979). As private insurers quickly adopted the same reimbursement practices, health care came to be financed by an Artesian Well of Money in which physicians and patients could do virtually whatever they wanted, safe in the knowledge that money was no obstacle. Moreover, by deeming virtually any new drug, device, or procedure "medically necessary" and thus a covered benefit as soon as it received either government approval or physician acceptance, those insurance policies also fueled the furnaces of technology. Success ensured sales and profits for manufacturers, whose creativity in adding to the armamentarium of costly medical interventions became boundless.

The inflationary effects of such a system were inevitable and enormous. The 1970s and 1980s witnessed a host of efforts to contain costs, ranging from Nixon's wage and price controls in the early 1970s, to legislation attempting to restrain the proliferation and unnecessary duplication of costly technology, to the Carter administration's threat of mandatory price controls until hospitals agreed in 1979 to restrain their revenues voluntarily. In 1983 the federal government added DRGs--the diagnosis-related-group payment system in which hospitals are paid a flat sum for hospital care of a Medicare beneficiary, based on diagnosis and other factors such as gender, age, and co-morbidities. Instead of being rewarded for doing more, hospitals would now do better by doing less. Employers tried their own cost containment measures, such as increasing employees' copayments, encouraging healthier lifestyles, and requiring second opinions for surgeries.

These programs had little success, and national health care expenditures continued to skyrocket. DRGs, for instance, helped to restrain hospital spending but left overall Medicare costs largely intact as hospitals simply shifted numerous inpatient procedures to the outpatient setting, where they would be paid for on the usual FFS basis. The "Artesian" mentality still instructed physicians that it was unethical to consider costs over patient welfare (Morreim 1994: 81-82). (1)

By the late 1980s, as international economic competition and a domestic recession forced widespread downsizing, employers determined that they no longer could absorb annual double-digit-percentage increases in health care costs. Particularly beginning on the West Coast, corporations gave health plans an ultimatum: limit premium prices or lose business. The Artesian era gave way to the Managed Care era.

The Entree of Business Approaches to Health Care

Prior to managed care, health plans were largely cost-plus, pass-through financiers who rarely denied payment (Havighurst 1986; Thurow 1984: 1570; Thurow 1985; Light 1983: 1316). But when corporate employers began to demand financial restraint, health plans were finally impelled to cut costs. The initial savings were not difficult to achieve. Hospitalization was a particularly easy target, as lengthy inpatient stays had become de rigueur. Once plans realized how many routine hospitalizations were not medically justified, their reductions in hospital use generated substantial savings. Specialist services were also targeted because primary care physicians (PCPs) often charged considerably less and used fewer resources, even when treating the same conditions (Kassirer 1994; Azevedo 1995; Robinson and Casalino 1996: 9; Grumbach and Bodenheimer 1995; Gerber, Smith, and Ross 1994; Shea et al. 1992). Thus, gatekeeping systems required the patient's PCP to approve a specialist visit before the plan would cover the cost. Additionally, plans abandoned UCR payment in favor of fee scales and initiated a variety of other controls.

Initially, costs dropped significantly even while premiums remained relatively high, resulting in substantial profit margins. The larger a plan's market share, the higher its profitability, almost regardless of what sort of patient population the plan had. As a result, during the early- to mid-1990s, the health care industry witnessed an unprecedented round of mergers and acquisitions in which smaller plans were purchased by larger plans that, in turn, were purchased by still larger plans. Soon, premium prices leveled as well. Whereas 1990 premiums had risen by nearly 17 percent over the previous year, prices increased only about 1 percent in 1994. From then through 1997, annual health care inflation did not rise much above 2 percent (Wall Street Journal 2001).

The days of high profitability and merger-mania were short-lived, of course. Plans still had to keep premiums down even after the easy cost cuts were taken, and expensive new drugs and technologies steadily added to the costs of care. Additionally, plans found it increasingly difficult to enforce their overt denials of care in court, particularly in jurisdictions where "judge-made insurance" rulings consistently favored plaintiffs who claimed that they "reasonably expected" certain services to be covered (Abraham 1981: 1155; Anderson, Hall, and Steinberg 1993: 1636; Ferguson, Dubinsky, and Kirsch 1993: 2116; Morreim 2001).

Plans progressively tightened the screws. They restricted utilization further, squeezed provider fees ever tighter, (2) decreased inpatient stays to controversially short levels, (3) and sometimes substituted lesser-trained personnel for traditional providers (Anders 1995; Twedt 1996). To encourage physicians' adherence to utilization guidelines and gatekeeper systems, many plans added incentive arrangements that rewarded physicians for cost consciousness (Orentlicher 1996). Many plans transferred financial risk to physicians via capitation contracts that ranged from capitating only the physician's own professional services, to broader risk transfers in which the physician managed costs for lab tests and specialist services, to "full-risk" arrangements in which physicians, in essence, became a health plan by accepting the entire premium in exchange for providing the complete spectrum of care (Woolhandler and Himmelstein 1995; Ogrod 1997).

As more patients were denied care or coverage to which they thought they were entitled, "horror stories" proliferated in the media and even in the...
