Down With the Copay
We can’t eliminate the profit motive in health care without eliminating copays.
In the week preceding the release of Bernie Sanders’s Medicare for All bill, the Vermont senator’s office was flooded with calls — so many, in fact, that the legislative aides on the other end of the line often guessed callers’ purpose before being prompted. At issue was whether the single-payer health care system Sanders’s bill envisions should include copayments, out-of-pocket payments for health services at the point of care.
For the single-payer advocacy group Physicians for a National Health Program (PNHP), the answer was a resounding “no.” So upon discovering that copays remained in Sanders’s penultimate draft, they sprang into action. After a week of open letters, tweets, and appeals from like-minded organizations, Sanders ultimately struck copays from the bill’s final version.
Earlier versions of Sanders’s bill probably included copays for doctors’ visits and prescription drugs for the same reason that economists like them: they drive down health care usage and costs. After all the attacks branding Sanders’s relatively pedestrian social-democratic platform as fantastical promises of ponies for all, perhaps his legislative aides believed meager copays gave the proposal an air of seriousness.
But the obliteration of copays isn’t a bug in the thinking behind Medicare for All — it’s the feature.
“Cost-sharing” features like copays, coinsurance, and deductibles are major manifestations of market logic in the US health-care system. If we want to overturn for-profit health care’s rebranding of “patients” as “consumers,” we have to eliminate financial barriers at the point of care.
“Medical Loss”
There’s a contradiction at the heart of American health care: the insurance companies that are supposed to help us access care actually have a vested interest in us receiving as little treatment as possible. The history of the American health-care system is the story of a private sector incapable of doing the thing it supposedly exists to do, while the public sector steps in to try to fill the gaps but is far too underresourced to do so. The introduction of copays into the American system is rooted in that contradiction.
As health-care quality and costs began to rise in the early twentieth century, FDR quickly dropped health insurance from his New Deal wish list, apparently out of concern that too ambitious a program could kill the whole thing on arrival. Meanwhile, to stay afloat during the Great Depression, conglomerations of hospitals and physicians tried to ensure their communities could still get medical care through a nonprofit monthly “subscription fee.” As these community plans expanded in scope, their rising monthly premiums were eventually undercut by profit-seeking competitors playing by different rules. These private plans drove down fees by accepting only healthy, desirable enrollees — while the nonprofit model struggled to extend coverage to everyone.
The tight labor markets of World War II gave for-profit plans an additional boost. In the midst of frozen wages and mass overseas deployment, skittish employers saw the plans as a way to entice workers. Before long, these private plans served employers in other ways: by making health care contingent on employment, they disempowered workers while undercutting the nonprofit insurers’ ability to bargain with hospitals and providers.
Once the postwar push for national health insurance collapsed for good in 1948, we were stuck with a workplace-tethered insurance system dominated by commercial players that no one (save for those looking to get rich off of health care) would have purposefully designed. But the nonprofit and commercial plans differed in one more crucial way: while the nonprofit setup covered costs from the first dollar of care, commercial benefits generally kicked in only after patients forked over deductibles or copayments — dissuading them from seeking care to begin with, and limiting companies’ liability for it when they did.
This wasn’t some minor semantic difference. It reflected completely different reasons for existing.
While the nonprofit plans aimed to generate New Deal–style collective security, the commercial plans just wanted to make money. The best way to do that was to get people to pay for their plans while avoiding having to pay for people’s treatment. For private insurance companies, a client going to the doctor and receiving care was not a good thing — it was “medical loss” and needed to be prevented.
The health insurance companies parroted the logic of property insurers, who saw all their customers as potential vandals willing to damage their cars and houses in order to defraud the insurance company. If an auto insurance policyholder has dirty car windows, they asked, what’s to stop them from smashing up the windows to get shiny new ones?
In 1921, Aetna — the same company that, nearly one hundred years later, would exit a struggling ACA exchange market in retaliation against the federal government for blocking its megamerger — devised an innovative method to hedge against the would-be dirty window scammers. Enrollees would pay a deductible before the rest of the coverage kicked in, which would incentivize them to protect their “property,” giving health insurance users “skin in the game.”
The idea that patients are apt to sabotage their own health to get brand-new, upgraded body parts is as condescending as it is gruesome, and the idea that deductibles give people a reason to stay healthy and minimize medical needs is a paternalistic misreading of the dynamics that affect health. It’s no wonder this patchwork of private plans failed to engineer a system of universal care; that’s not what they were designed to do.
A Better Way
By the early 1960s, it was glaringly obvious that the private insurance sector was incapable of funneling elderly, poor, and disabled patients into the health care system. The fact that these companies’ business model was incompatible with covering those who needed care most ought to have been a decisive repudiation of private insurers’ right to run the system in the first place.
Instead, the US government enacted Medicare and Medicaid in 1965, shifting the care of vulnerable populations to the public sector and leaving the easiest and cheapest patients in the same private market whose failures the government had just stepped in to alleviate. Even worse, these new federal programs eventually adopted one of the private sector’s cost-reduction tricks: the copayment.
The clichéd justification for charging copays is to relieve doctors’ offices of the burden of patients showing up at the first sign of a sniffle. (Never mind that we observe no such behavior among the wealthy, for whom copays are no barrier.) But we do know that even meager copays make people seek less care, and that the poor suffer worse health outcomes as a result.
“I spent my entire career taking care of low-income people, and trust me — a copayment will keep people away from a doctor. I’ve seen it again and again in my practice,” Dr. Steffie Woolhandler, the co-founder of PNHP, told me over the phone. “It’s a huge amount of money for a low-income person.”
While several countries with universal health care systems do charge copayments at the point of use, they don’t tolerate the amount of poverty that we do in the United States. No other wealthy country does. In a grotesquely unequal society, a copayment doesn’t create “better consumers” of care — it helps us scrimp by shoving the most powerless out of the system.
Once state Medicaid programs began charging copays in the 1970s, the new fees were associated with patients dropping out of health care plans. In some cases, there was a demonstrable impact on health: in 1975, California’s Medi-Cal program used copays to reduce doctors’ visits, only to see those savings offset by higher hospitalization rates.
Meanwhile, cost-sharing arrangements have continued in the private insurance market, shifting responsibility for systemic dysfunction onto individuals — as if soaring health care costs are caused by patients unnecessarily demanding CAT scans and blood tests like a spiteful customer at the Old Country Buffet piling their plate high in an effort to make the restaurant take a loss.
The burden of high deductibles and copays continues to spawn new profit centers for the industry that imposed them: private Medigap insurance offsets costs not covered by Medicare, and Health Savings Accounts siphon off the public tax base to help holders save cash to cover costs like high deductibles. Insurance regulations, like those introduced by the Affordable Care Act, have left companies once again searching for savings through cost-sharing: in 2016, over 39 percent of Americans ages eighteen to sixty-four held high-deductible health plans, up from 26 percent in 2011. In the past decade, cost-sharing payments have risen at more than twice the rate of wages.
The persistence of cost-sharing isn’t prudent health policy; it’s an indictment of a system whose basic functioning depends on making it as difficult as possible for us to participate. No matter how many people in #StillWithHer shirts beseech legislators to “improve the Affordable Care Act” rather than back “Medicare for All,” the for-profit framework that the ACA does little to challenge is fundamentally incapable of delivering universal care.
Over the decades spent bending over backward to accommodate the internal contradictions of the private health insurance industry, few reformers made the one point that really mattered: we never built an insurance system that strove to guarantee universal health care; we built one that strove to protect capitalists’ profits. We can build one that addresses all people’s health care needs right now.
Medicare for All could create a health-care system designed to provide care for everyone. The logic of the market has no place in such a system. Winning a truly universal system means zero deductibles and zero copays.