“We take it back,” Liran Einav and Amy Finkelstein, two of America’s most prominent health economists, declare in a new book, We’ve Got You Covered, their blueprint for reforming our health care system. After years of preaching “the gospel” that “patients must pay something for their care,” they’ve now abandoned the message.
Instead they propose tax-funded “basic” health coverage—without copayments, deductibles, or other forms of “cost-sharing”—for all Americans. Almost simultaneously, three other leading health economists—Katherine Baicker, the provost of the University of Chicago, who served on George W. Bush’s Council of Economic Advisers, and Amitabh Chandra and Mark Shepard of Harvard—outlined a similar proposal in the Journal of Economic Perspectives. This may signal an encouraging shift in elite opinion, at least among economists, in the debate over health care reform.
Einav and Finkelstein, professors at Stanford and MIT, respectively, spent years teaching that unless patients had skin in the game—that is, unless they had to shell out for each medical appointment—they would “rush to the doctor every time they sneeze,” siphoning resources from other, more beneficial uses. That line of thinking has been dogma among US economists, despite decades of evidence to the contrary in the form of successful single-payer systems in Europe and Canada with little or no such “cost-sharing,” and helps explain why (often hefty) copayments and deductibles have been central to US health insurance.
Health insurance as we know it was pioneered during the Great Depression, when Baylor University Hospital offered coverage, without copays or deductibles, to financially strapped Dallas schoolteachers. This grew into a nationwide federation of nonprofit Blue Cross plans, which dominated the market for the next several decades and likewise avoided cost-sharing. But in the years after World War II, commercial insurance firms began selling lower-cost plans that incorporated cost-sharing in order to discourage frivolous use of care (and insurers’ money), much as car insurers imposed deductibles to deter policyholders from taking hammers to their windshields when they wanted dirty glass replaced. Economists call this behavior “moral hazard.”
The budding field of health economics transformed this business contrivance into gospel. In 1968 an influential paper by Mark Pauly argued that when health care is free, people use it in excess. The following decade Harvard’s Martin Feldstein (who later chaired President Reagan’s Council of Economic Advisers) wrote that “American families are in general overinsured against health expenses” and that higher out-of-pocket medical costs “would increase welfare” for society.
Soon after, a study called the RAND Health Insurance Experiment randomly assigned thousands of families to different insurance plans, some of which imposed cost-sharing and some of which provided free care. In a bombshell 1981 article in The New England Journal of Medicine, the authors reported that people who received free care used more of it. “Moral hazard” became a credo. (Only in 1986 did the RAND team report that cost-sharing curbed both needed and unneeded care in similar measure.)
The consequences were dire. The Heritage Foundation promoted gimmicky “consumer-driven” plans combining sky-high deductibles with tax-advantaged savings accounts that would, they argued, transform patients into prudent consumers. Milton Friedman—the godfather of libertarian economics—pushed the idea to the extreme, recommending that Medicare and Medicaid be abolished and that “every US family unit [be required to] have a major medical insurance policy with a high deductible, say $20,000 a year or 30 percent of the unit’s income.”
Most workers in the US once enjoyed first-dollar coverage (no copays or deductibles) for much of their care: in 1982, for instance, only 30 percent of private plans had a deductible for hospital stays. Today 90 percent of private sector workers with job-based individual coverage face a deductible that averages $1,735 a year. Some states have even sought waivers to increase out-of-pocket payments for impoverished Medicaid recipients. In 2015 Indiana, under Governor Mike Pence, announced that such payments could “preserve dignity among members receiving public assistance and provide them with ‘skin in the game.’”
Yet reams of evidence confirmed what our patients (and physician colleagues) regard as common sense: copays and deductibles cause people to skip needed care, leading to poorer health and even death. Women with breast cancer whose employers shifted them to high-deductible plans wound up delaying diagnostic workups and chemotherapy. Modest prescription copays caused seniors to forgo lifesaving medications.
And while copayments and deductibles discouraged care seeking, they didn’t constrain health care spending, which rose from $1,000 per person in 1980—equivalent to spending in other wealthy nations—to $13,493 in 2022, more than double the Organisation for Economic Co-operation and Development average. (And notably the life expectancy in the US, which was middle of the pack among high-income nations in 1980, lagged 5.8 years behind the average by 2021.)
As “skin in the game” lost its cachet, the big insurers pivoted to other profit-maximizing strategies. For instance, many now focus on privatizing Medicare and Medicaid. Some employ doctors directly and use penalties and bonuses to motivate them to minimize expensive care. Those shifts—along with an ever-growing body of data on medical harms—have perhaps changed the intellectual climate for economists to one more tolerant of jettisoning cost-sharing.
However, if Einav and Finkelstein (like Baicker, Chandra, and Shepard) have finally gotten some important things right, they continue to miss the mark on others: they embrace a two-tiered medical system and ignore (or, in Baicker and colleagues’ case, advocate policies that would accelerate) the corporate takeover of health care. These are related issues. Treating medical care as a market commodity like microchips or microwaves supports the dual falsehoods that copays benefit society and that market-driven corporatization breeds efficiency and quality.
After World War II many nations created national health insurance (NHI) or national health service (NHS) systems covering their entire populations. The UK’s NHS, for instance, enacted in 1946, provided nearly all care for free. The following year Saskatchewan, under Premier Tommy Douglas (voted the “Greatest Canadian” by CBC viewers in 2004), implemented universal hospital insurance. By 1971 this had been expanded to provide all Canadians with first-dollar coverage for both hospital care and regular doctors’ visits. (Additionally, Canada prohibits private insurers from selling plans that duplicate the public coverage, and doctors and hospitals from accepting most private payments, provisions that keep wealthy and poor Canadians in the same health care boat, bolstering political support for the single-payer system they call Medicare.)1
It once seemed possible that the US would follow suit. As the UK and some Canadian provinces implemented their systems, the Truman administration proposed an American version of NHI. A campaign led by the American Medical Association—which employed a pioneering public relations firm that painted the NHI bill as a “communist” act—defeated Truman’s push, and employer-paid health insurance filled the void. It had spread during the war—when wage increases but not additional “fringe benefits” were banned—and burgeoned after the war when Congress codified, in 1954, favorable tax treatment of employer-paid benefits.
By 1955 private plans covered 65 percent of Americans. Among the millions left out were the unemployed, the elderly, those in agricultural or domestic work, and those working for small employers that didn’t offer health benefits. The civil rights–era enactment of Medicare and Medicaid2 trimmed the share of Americans without coverage from about 24 percent in 1963 to 11 percent by the late 1970s.
But as the power of unions waned under President Reagan, private coverage shrank, and the number of uninsured Americans rose. By 1994, 15.8 percent of Americans had no health coverage. The passage in 2010 of the Affordable Care Act (ACA), which expanded Medicaid and offered new subsidies for private insurance, reversed the trend. Emergency pandemic-era measures—mainly liberalized Medicaid eligibility rules—cut the uninsurance rate to an all-time low of 7.4 percent in 2023. Even so, that meant 25 million Americans remained uninsured, a number that the Congressional Budget Office predicts will grow to 28.2 million next year.
These stark figures still understate the precariousness of Americans’ health coverage. Einav and Finkelstein’s research has found that over a two-year period (prepandemic) more than one in five Americans spent at least one month uninsured. The briefest coverage lapses can cause financial ruin and interrupt lifesaving treatment.
Under our patchwork public-private system, people lose coverage for many reasons. Some fail to file paperwork required to continue their Medicaid eligibility. Others fall off their parents’ coverage when they turn twenty-six. Or they lose a job or are unable to afford the premiums required by most private plans, including ACA plans. Many immigrants are barred from Medicaid and Medicare.
Yet for five decades most US health economists have advised merely caulking the gaps. Obamacare, which was largely designed by MIT’s Jonathan Gruber, is a prime example. The complex bureaucracy it erected to avoid displacing private insurers added an estimated quarter-trillion dollars between 2014 and 2022 to Americans’ already exorbitant outlays for health insurance administration.
Einav and Finkelstein now rightly conclude that more patches won’t work; we have to start from scratch. “Basic coverage,” they write, “must be provided automatically, and financed by the taxpayer.” But their support for tax-funded universal coverage comes with a caveat: to be affordable, the public coverage “should be very basic”—worse than current coverage—with “longer wait times, less patient choice.” It should leave “patients and their physicians not at liberty to get any medical care they both want, as they are currently under Medicare.”
But the very concept of “basic coverage” rests on economic and medical fallacies; there is no workable definition of the “basic” or “minimalist” care Einav and Finkelstein propose that would simultaneously slash spending and provide medically adequate care. There is useless care, to be clear, like ineffective drugs or unneeded surgeries. But ineffective treatments (such as the “energy boost” infusions our local urgent care center advertises) shouldn’t be approved by regulators or prescribed by physicians, regardless of costs.
In the realm of effective care, there is no circumscribable “basic” category that can be safely carved out of coverage. For patients with serious symptoms, long waits to see a specialist are often harmful. For those with uncommon cancers requiring complex surgeries, access to a top-level cancer center where such surgeries are routinely performed can be lifesaving. “Basic coverage” might also exclude expensive treatments deemed cost-ineffective, but many such treatments—like organ transplants or gene therapies for sickle cell disease—are also lifesaving. “Minimalist coverage,” in practice, means greater disability and shorter lives.
Recognizing that the affluent wouldn’t abide “basic coverage,” Einav and Finkelstein (like Baicker et al.) would allow “top-up,” or supplemental, coverage in a higher tier. Of course, imposing long lines on working- and middle-class Americans while letting the rich jump the queue isn’t just inequitable; it’s medically hazardous. In the real world of finite health resources—with a limited number of physicians, nurses, PET scanners, and ICU beds—privileging the rich inevitably siphons resources away from everyone else.
But a simpler, more efficient, healthier, and fairer alternative has long been available: universal single-tier coverage. Representative Pramila Jayapal and Senator Bernie Sanders have introduced Medicare for All bills delineating that approach.3 Critics contend that this would be too expensive, drawing on moral hazard theory and the RAND experiment to argue that eliminating cost barriers would cause patients to overuse their health benefits. Yet the RAND experiment only assessed copayments’ effects on individual patients’ use of care, not society-wide effects, which may differ. In health care, supply often drives (or constrains) a society’s overall use of care. Doctors can fill openings in their schedule by advising more frequent return visits; conversely, they may avoid transferring borderline patients to the ICU when beds are tight. Milton Roemer described this phenomenon with the aphorism “A hospital bed built is a hospital bed filled.” Given limited supplies of doctors and hospital beds, cost-sharing that reduces one group’s use of services would likely increase it for those without cost barriers, redistributing care like toothpaste in a closed tube squeezed on one end.
That’s not just a theory. John Wennberg’s pathbreaking analyses of regional variation in care linked the frequency of hospitalizations and operations to the local supply of beds and surgeons. Yet patients’ outcomes in high- and low-use regions were similar, suggesting that some hospitalizations and surgeries were unnecessary. We’ve found that the implementation of both Medicare and Obamacare increased physician visits for the newly covered, but those increases were fully offset by virtually imperceptible decreases for others. A similar phenomenon occurred in other nations when they expanded coverage. For instance, when Quebec implemented its universal health care program in 1970, a study in The New England Journal of Medicine reported that “physician visits per person per year remained constant at about five but were markedly shifted from persons in higher to lower income groups.”
The economy of health care is singular. Care is constrained by a finite “supply,” while “demand” for care is unlike demand for most goods; patients want better health, not more colonoscopies or more trips to the operating room. And the types of care that improve health are determined not by consumers’ preferences but by their health needs—that is to say, by discoverable and objective medical facts. Scientific practice and supply planning can more safely and effectively constrain overuse than copays and two-tiered medicine.
There is bad behavior, to be sure, but it’s not the behavior of patients with excessive demand; it’s corporate providers that raise costs and subvert care to benefit their bottom lines—a blind spot, as noted, in Einav and Finkelstein’s vision. They address one side of the policy coin—the question of who pays for health care—but neglect the equally important question of who owns it. Any workable prescription for reform must reckon with both.
Profit seeking in medicine is not new. Pliny the Elder complained of physicians’ “avarice, their greedy bargains made with those whose fate lies in the balances.” In modern times commercial firms have dominated drug manufacture, and in the early twentieth century small proprietary nursing homes and hospitals were commonplace. But the contemporary takeover of health care provision and financing by mammoth investor-owned firms—what the late Arnold Relman, the New England Journal of Medicine editor-in-chief, described as the “medical-industrial complex” in an influential 1980 editorial4—is unprecedented.
Beginning in the 1980s investor-owned hospital chains rapidly expanded; between 1980 and 2020 the share of US community hospitals that were for-profit doubled, reaching 24 percent. Meanwhile, among nonprofit hospitals, the advantage conferred by size in negotiating prices with insurers, and hospitals’ increasing reliance on borrowing (which must be paid back from operating profits) to finance construction and other capital needs, prompted both consolidation and the ascendancy of a corporate managerial ethos. The resulting sprawling nonprofit hospital systems that dominate entire regions of the country often behave like profit-maximizing entities; between 2018 and 2023 Providence (a Catholic system with fifty-one hospitals and one thousand clinics) used unlawful collection practices to dun 99,446 patients in Washington state alone.
While most hospitals remain at least nominally nonprofit, investor ownership nearly predominates in most other health care sectors. Between 1987 and 2022 for-profit ownership of substance use treatment facilities rose from 14 to 42 percent. About one third of dialysis centers were for-profit in 1980; now 89 percent are, with two firms (both noted for poor care and excess death rates) owning 80 percent of all centers. Even hospice care has evolved into an aggressive corporate-dominated industry, with 75 percent for-profit ownership in 2022.
Horizontal consolidation—one firm acquiring myriad hospitals or dialysis centers—started decades ago. Now vertical integration—one firm owning, say, insurance plans as well as medical providers—is remaking the medical landscape. Increasingly, your doctor is employed by your insurer and risks unemployment if they fight insurers’ restrictions on your care.
UnitedHealth, the nation’s largest health insurer, also employs or is “affiliated” with 90,000 physicians—nearly 10 percent of all US doctors. Its pharmacy division oversees, negotiates prices for, and dispenses 1.4 billion mail-order prescriptions annually, and its billing and data-mining subsidiary handles nearly one in three Americans’ medical records. CVS Health (ranked sixth on the Fortune 500) has similarly expanded beyond its familiar brick-and-mortar stores. It bought the insurance giant Aetna in 2018; owns the nation’s largest mail-order pharmacy (serving 103 million Americans) and an urgent care chain with 1,100 locations; acquired a home health care firm with 10,000 clinicians in 2023; and also bought Oak Street Health, a group of medical practices in twenty-five states. Meanwhile Amazon is building out its One Medical chain of upscale clinics and telehealth services that prescribe drugs for seamless purchase from Amazon’s online pharmacy.
Even more perniciously, private equity firms are invading health care, investing $151 billion in the industry in 2021 alone.5 From 2003 to 2017 those firms—which prioritize short-term profit, often by asset stripping and then dumping their hollowed-out acquisitions—bought 282 hospitals. Private equity now controls more than four hundred hospices, nearly 10 percent of gastroenterologists, 8 percent of dermatologists, and more than 10 percent of oncologists.
Private equity firms have similarly cannibalized hospitals, worsening the quality of care and increasing costs. Cerberus Capital, after buying a string of hospitals from Boston’s archdiocese, paid off investors by selling the land and buildings, leaving the hospitals rent-burdened, unable to pay for essential equipment, and spiraling into bankruptcy. Some of the chain’s hospitals closed, and only enormous government bailouts are keeping others afloat. Nationwide, private equity acquisition causes a 24 percent fall in hospitals’ assets and a 25 percent rise in patients’ hospital-acquired complications, such as infections and falls.
Nursing homes have faced a similar fate. When the Carlyle Group purchased the ManorCare nursing home chain, it sold off the nursing homes’ real estate, distributed the proceeds to investors, and saddled the nursing homes with exorbitant rent payments that forced staffing cuts—leaving its patients to suffer from bedsores, falls, and other complications, as flagged by inspectors. After nursing homes were acquired by private equity, one study found, residents’ death rates increased by 11 percent and billings rose by 8 percent.
And when private equity firms buy doctors’ practices, they raise prices and patients’ costs. One private equity–owned firm, TeamHealth, employs 16,000 clinicians. It and another private equity–owned ER-staffing firm (Envision Healthcare) were responsible for a large portion of an epidemic of surprise ER bills: they dictated that their doctors exit all insurance networks, even those accepted by the hospitals where they worked, enabling the firms to bill patients at exorbitant rates.
Paradoxically, taxpayers have underwritten the corporate takeover of care. Government expenditures (including not just Medicare and Medicaid but tax subsidies for private insurance purchases and payments for government employees’ health benefits) currently account for 69 percent of the $5 trillion spent annually on health care in the US. Private health insurers now derive most of their revenues and up to 90 percent of their profits from Medicare and Medicaid.
Under the Medicare Advantage (MA) program, Medicare pays private insurers to cover seniors who choose an MA plan, lured by the promise of extra benefits not available in traditional Medicare. Today more than half of seniors are covered by such private plans, with UnitedHealth capturing nearly one third of the market. Insurers have profited by exaggerating how sick their enrollees are, garnering higher Medicare payments, and by shedding expensively ill enrollees. MedPAC, Congress’s nonpartisan Medicare advisory panel, estimates that because of such gaming Medicare paid MA firms $78 billion more last year than it would have paid for their enrollees if they had been covered by traditional Medicare. Insurers use a fraction of that overpayment to entice enrollees with extra benefits like eyeglasses and fitness club memberships, but 97 percent of it is eaten up by profits and overhead, which amounted to $2,257 per MA enrollee in 2020. (Traditional Medicare’s overhead was $245 per enrollee that year.) Meanwhile MA insurers often deny payment for needed care. UnitedHealth, for instance, allegedly used an AI algorithm to automatically deny coverage for nursing home stays, although the Biden administration has emphasized that care denials cannot be based on AI algorithms alone.
Einav and Finkelstein leave the door open for MA-type plans to serve as the chassis for their universal coverage system (they describe basic coverage “offered by private firms” as one potential reform structure), while Baicker et al. explicitly cite MA as a model for the reform they have in mind. Both reject comprehensive, fully public coverage. (According to Einav and Finkelstein, Medicare for All can’t accommodate patients’ varying “tastes for medical care.”) Unfortunately the reforms these economists advocate couldn’t realize the vast savings on bureaucracy—about $900 billion annually—that single-payer could, savings needed to expand and improve coverage for everyone.
In 1948, on the eve of the NHS’s launch, a pamphlet sent to each residence in England and Wales captured the egalitarian spirit of that new program: “Your new National Health Service begins on 5th July…. Everyone—rich or poor, man, woman, or child—can use it or any part of it…. There are no charges, except for a few special items.”
Looking back some six decades later, the British GP and epidemiologist Julian Tudor Hart observed how the service seemed to defy the laws of economics. “Economists told us then that, at zero price, demand would be infinite,” Hart wrote in The Political Economy of Health Care. “The new NHS economy was utopian, against human nature and bound to fail.” Rather than fail, Hart continued, the NHS became a “huge popular success.”
Yet today it faces a multifaceted crisis, with outdated and understaffed facilities, underpaid doctors and nurses, growing waits for essential care, and widening health inequalities. These problems stem, however, not from out-of-control costs driven by moral hazard but from government-imposed austerity. Britain’s health spending, and particularly its investments in facilities, has for decades trailed that of almost all other wealthy nations: this was the central finding of an independent review commissioned by the new Labour government. That austerity, in turn, was enabled by the “top-up coverage” strategy economists recommend.
The NHS, created by Clement Attlee’s post–World War II Labour government, didn’t just provide tax-funded universal and comprehensive coverage. It also nationalized most hospitals and directly employed specialists. Those decisions, championed by Minister of Health Aneurin Bevan, a socialist and former coal miner, uncoupled health care from commerce. “There is no reason why the whole of the doctor-patient relationship,” Bevan wrote in his 1948 “Message to the Medical Profession” in the British Medical Journal, “should not be freed from what most of us feel should be irrelevant to it, the money factor.”
Still, from the beginning the NHS also allowed the wealthy to buy care and insurance in a parallel private system very much connected to commerce and, as the historian E.P. Thompson opined in 1987 in the London Review of Books, based on “cons and parasites on the technologies and resources of the NHS.” In ravaged postwar Britain, this seemed a trivial concession to the few Britons able to afford it; in 1955 only 1 percent of the UK population had private health insurance. But growing investments (and elite Harley Street doctors) subsequently expanded private sector options. When Margaret Thatcher assumed office in 1979, 5 percent of the nation—including Thatcher herself, who extolled the virtues of “going private”—purchased their own coverage. By the time she left office that share had more than doubled.
In Our NHS: A History of Britain’s Best-Loved Institution, the historian Andrew Seaton traces Thatcher’s and her allies’ hope for full-scale privatization of the NHS, or at least the imposition of charges on patients. But Thatcher, facing popular resistance to such changes, settled for a move toward market-based incentives—a strategy devised by Alain Enthoven, a Stanford economist who headed systems analysis at the Pentagon during the Vietnam War—that starved the public system of funding.
That stealth privatization approach was appealing to wealthy Britons, for whom private coverage was almost certainly cheaper than the taxes they’d bear to adequately fund the NHS. It is also, of course, the preferred strategy of Republicans in the US who seek to undermine public services. Only widespread popular affection for the NHS—epitomized by the tribute to the NHS performed at the opening ceremony of the 2012 London Olympics—has forestalled further damage.
The NHS experiment underscores two lessons most economists haven’t learned. A rationally planned, publicly financed and owned health care system can provide free and comprehensive care without causing runaway costs. But diluting the system with top-up coverage and market incentives undermines its political, economic, and medical foundations.
Another system of health care—without medical debt, insurance hassles, red tape, corporate predation, copays, punishing deductibles, and paltry care—is possible, and it enjoys broad popular support. Achieving such a system will require a new economics of care, one that moves past outdated economic orthodoxies, divorces medicine from commerce, and provides care on the basis of needs, not means.
1. For more on the Canadian system, see Nathan Whitlock, “Where Health Care Is a Human Right,” The New York Review, November 19, 2020.

2. Dixiecrats insisted on separate programs for the mostly white elderly and the disproportionately Black poor: Medicare, a generous program modeled on Blue Cross coverage, and Medicaid, a welfare-based program whose implementation was ceded to often-racist state governments who were free to restrict and underfund care.

3. We’ve occasionally advised Sanders; Woolhandler and Himmelstein helped draft his initial single-payer bill, as well as Paul Wellstone’s early Senate version, and Gaffney testified on behalf of Sanders’s bill (and on Sanders’s invitation) before the Senate Budget Committee in 2022.

4. Barbara and John Ehrenreich, and others in the left-wing group Health-PAC, used that term earlier, but Relman told two of us (Himmelstein and Woolhandler) he was unaware of it.

5. See Kim Phillips-Fein, “Conspicuous Destruction,” The New York Review, October 19, 2023.