Privacy Versus Health Is a False Trade-Off
As tech firms team up with governments to fight the coronavirus pandemic, we’re being asked to accept a trade-off between our digital privacy and our health. It’s a false choice: we can achieve the public health benefits of data without accepting abusive and illicit surveillance.
As the world scrambles to stop the coronavirus pandemic, governments and technology companies have begun exploring new partnerships to track the spread of COVID-19 and target preventative interventions. Emerging reports about these collaborations have sparked a debate: do you want privacy or public health? Despite its beguiling simplicity, this question presents a false choice. Rather than accept this trade-off at face value, we must instead recognize that responding to the pandemic effectively and democratically, protecting health and privacy, requires reimagining how personal data is collected and governed.
Rather than privacy being an inhibitor of public health (or vice versa), our eroded privacy stems from the same exploitative logics that undergird our inadequate capacity to fund and provide public health. Addressing the pandemic requires first addressing these underlying forms of exploitation.
Tracking the Virus
Combating COVID-19 clearly requires good data. The statistics and models that inform the worldwide pandemic response require comprehensive and accurate information about who has been infected and how, and which interventions did or did not work. For instance, South Korea credits widespread testing as a central component of its successful effort to contain the coronavirus. Sharing clinical research data will also be essential to developing effective and safe coronavirus tests and vaccines, allowing public health officials and researchers to assess which interventions hold promise and to disseminate effective treatment rapidly.
In their rush to acquire data that could help curb COVID-19, governments have not stopped at (or, in the case of the United States, even started with) testing. Countries including South Korea, Israel, and Singapore are using mobile phone location or Bluetooth sensor data to conduct “contact tracing” (identifying and testing those who have had contact with infected individuals), and South Korea, Taiwan, and Hong Kong are using location data to monitor and enforce self-isolation restrictions.
Now the rest of the world is exploring similar uses of location tracking or Bluetooth data, typically in partnership with private tech companies. For example, US governments are turning to tech companies like Google, Facebook, Palantir, and Clearview AI to track outbreaks, provide contact tracing, and study mobility patterns. Apple and Google have announced that they are collaborating to help develop contact tracing apps. Several European countries have begun acquiring mobile location data from telecommunications companies to model mobility patterns and monitor confinement.
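These approaches are not all alike in their privacy implications. To make the design space concrete, below is a minimal sketch of how a decentralized Bluetooth scheme can work, loosely modeled on the Apple/Google announcement; the function names and key schedule here are illustrative inventions, not the actual specification. The core idea is that phones broadcast short-lived pseudonymous identifiers and all matching happens on the device, so no central authority learns who met whom.

```python
import hashlib
import hmac
import os

# Illustrative sketch only: names and the key schedule are simplified
# stand-ins, not the actual Apple/Google Exposure Notification protocol.

def new_daily_key() -> bytes:
    """A random per-device key, generated on the phone and rotated daily."""
    return os.urandom(16)

def rolling_ids(daily_key: bytes, intervals: int = 144) -> list[bytes]:
    """Derive short-lived identifiers (e.g., one per ten minutes) from the
    daily key. Phones broadcast these over Bluetooth instead of any stable
    identifier, so passive observers cannot track a device across the day."""
    return [
        hmac.new(daily_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
        for i in range(intervals)
    ]

def exposed(heard_ids: set[bytes], published_keys: list[bytes]) -> bool:
    """If an infected user consents, only their daily keys are published.
    Each phone re-derives the identifiers locally and checks for overlap
    with what it heard, so no server learns who met whom."""
    return any(
        rid in heard_ids
        for key in published_keys
        for rid in rolling_ids(key)
    )

# Example: Alice's phone overheard one of Bob's broadcasts; Bob later
# tests positive and publishes his daily key.
bob_key = new_daily_key()
alice_heard = {rolling_ids(bob_key)[42]}
print(exposed(alice_heard, [bob_key]))  # True
```

Even in this decentralized form, however, the protocol by itself settles nothing about who controls the surrounding infrastructure or how it might later be repurposed, which is precisely the problem the rest of this article takes up.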
These ongoing and proposed projects — alongside critical responses articulating the privacy risks — reinforce the notion that our privacy and our health are at odds. How can we most effectively limit infections, the logic goes, if governments and tech companies are unable to work together with as much information as possible?
A False Trade-Off
There are numerous reasons to be skeptical about these proposals and this line of analysis.
First, these projects contribute to the bait-and-switch that is endemic to the framing of “privacy vs. X” debates. As a range of commentators are highlighting, government and corporate entities are using the crisis to ratchet up biometric surveillance infrastructures that, once the crisis subsides, will become permanent fixtures used for invasive purposes far outside their original public health mandate. This would follow familiar patterns of “surveillance creep” following a crisis.
Most notably, the global panic after 9/11 led to the installation of radical new government data surveillance tools and practices, premised on the idea that the key to counterterrorism was data collection and analysis. As the counterterrorism debate was framed in terms of “privacy versus security,” governments argued that the tangible threat of terrorism outweighed more abstract concerns about privacy, justifying pervasive surveillance as necessary for public safety. Much of that global surveillance constellation persists to this day, even as evidence of its effectiveness is lacking.
We see similar trajectories of surveillance creep in the corporate world. In 2015, the Google-affiliated research lab DeepMind was granted access to certain National Health Service (NHS) data in the United Kingdom to develop its health care applications. Despite assurances to the contrary, these contracts and associated data were transferred to Google in 2019. Today, we see Alphabet’s Verily COVID-19 triaging tool quietly channeling users towards enrollment in its “Project Baseline,” a health data mapping system that connects to users’ Google accounts.
Second, these technological public-private partnerships are likely to be applied through punitive law enforcement practices that will disproportionately harm minorities and the poor. Many technologies being explored for pandemic response are already in use by law enforcement agencies. Palantir uses its Gotham platform — one of the software tools it is offering the United Kingdom’s NHS as part of the coronavirus response — to facilitate secretive surveillance by the Department of Homeland Security. The Trump administration has purchased cell phone app data about people’s movements — precisely the type of data being proposed today as a tool for contact tracing — in order to arrest and deport undocumented immigrants.
Indeed, many governments have turned to law enforcement to enforce social-distancing policies. In Taiwan, less than an hour after a quarantined student’s phone died, several police officers showed up to ensure that the student was following the quarantine. The NYPD is authorized to fine individuals up to $500 for violating social-distancing rules, even as reduced transit service forces low-income workers onto crowded trains to get to work. Across Canada, more than 700 people have already been charged or fined for violating social-distancing measures.
Buying into the notion that effective pandemic response is primarily a problem of technical implementation to aid enforcement obscures and distracts from the conversation we should be having about securing the material conditions people need in order to comply with prolonged shelter-in-place orders. By accepting these technical “solutions,” we diminish the significance of nontechnical social welfare provision and instead pair privately owned digital infrastructure with punitive state enforcement.
Third, we see vast profiteering as both governments and corporate entities use the crisis to extract and commodify personal data, akin to attempts to hoard and mark up hand sanitizer and masks. As data and communications infrastructures become essential resources in the fight against the virus, the entities who have (or claim to have) the capacity to leverage these infrastructures may use their position to extract inordinate economic or political gains. Under these conditions, sacrificing privacy for health means yielding further control to governments and tech companies that have already gained undue power through technological means.
Companies providing critical communications services during the pandemic, such as WhatsApp in Australia and Zoom in Britain, have been shown to monetize user information through ad targeting and data sharing. When big tech performs public services on behalf of governments, it generates not only valuable data but also political surplus that it can later exploit to secure government contracts. When governments partner with tech companies to take advantage of their vast data collection and processing infrastructures, they generate forms of governing surplus that enable redeployment of those surveillance infrastructures in punitive policing applications. Time and again, we have seen the surpluses generated by these tools enable commodification and abuse down the line.
Governments and technology companies have long shown themselves to be wolves in sheep’s clothing when it comes to privacy: promising privacy while conducting widespread and illicit surveillance. It’s a losing proposition to accept the trade-off between privacy and health on these terms, and only serves to walk us toward expanded surveillance while conceding the validity of the existing structures, markets, and laws that govern data collection. The hope that the world’s extensive public-private surveillance infrastructures can perform an about-face toward equitable and democratic public health measures is a pipe dream.
Recognizing this is essential to reorienting us from a false choice to a new agenda for reform. We need to redesign our sociotechnical worlds in response to what the health crisis has exposed about the pre-pandemic reality of our social and political institutions: their ingrained structures of inequality and austerity that exacerbated our vulnerability. The story of COVID-19 is not just one of a novel pandemic or new threats to privacy, but also one of worldwide institutional failure. The disastrous US response to the coronavirus pandemic, and its fallout, have exposed the failures of market logics that premise social welfare on what is profitable rather than what is socially valuable, and of governments that ignored warnings about our incapacity to handle a pandemic.
Democratic Data
The informational infrastructures we build in response to the pandemic present an opportunity to begin building fairer social relations and more democratic institutions. We can reject the notion that abusive or extractive surveillance is an intractable condition of contemporary digital life. We can achieve the public health benefits of data without accepting abusive and illicit surveillance.
To do this, there are three important principles to follow. First, we must move beyond individuals as the locus of control over data. Just as the pandemic has exposed how interconnected we are in terms of health, it has also shown how interconnected we are in terms of privacy: in both cases, individuals making decisions for themselves cannot produce the best collective outcomes, because individual actions always have broader social consequences.
In any debate of “privacy vs. X,” where X is a broader social concern like health or security, an individualistic framing of privacy built around dignity and autonomy always loses. What we need is not individual control over data about us, but collective determination over the infrastructures and institutions that process data and determine how it will be used. This requires moving beyond privacy as the choice to opt in to or opt out of public-private coronavirus surveillance infrastructures, and towards developing democratic mechanisms to shape the structure, applications, and agendas of technological architectures.
Second, we need robust legal mechanisms to ensure that the public has democratic control over new technological infrastructures. Today, the data our phones generate is sucked up by operating system providers, telecommunications companies, and app developers, with limited regulation or accountability. But there is room in this ecosystem for another group of actors operating in the interests of communities.
Proposals such as data cooperatives and trusts can facilitate collective bargaining between users and big tech companies and can create mechanisms for communities to democratically use and govern data. Rather than accepting the existing choice of “privacy” or “health,” in other words, we can restructure the set of choices and trade-offs on offer. To avoid handing expanded surveillance capabilities to corporations and states, any attempts to utilize data toward coronavirus mitigation could be governed through data trusts or other mechanisms (following the model of surveillance oversight ordinances, for example). These systems could facilitate community-based information sharing, targeted communications, and resource management without that data being used for profit, policing, or surveillance.
Third, we must ensure that any systems developed today are incapable of becoming entrenched forms of surveillance and exploitation. Privacy reforms often focus on ameliorating the worst harms of surveillance, invoking the language of international human rights law to urge “necessary and proportionate” applications.
These proposals might curb some of the worst abuses from states and corporations, but even if successfully implemented, they would merely preserve the pre-coronavirus status quo: ample room for profiteering in the form of health data for Google, policing contracts for Palantir, or expanded surveillance for Homeland Security.
Instead, we need data infrastructures that leave little to no room for future surplus-value extraction. One prominent proposal argues that big tech ought to be regulated as public utilities. But this falls far short of avoiding the worst forms of exploitation: it keeps services that ought to be public goods subject to the profit motives of the private sector, rather than to democratic control by the people they serve. Our contact tracing response, for example, could be managed, from data collection to analysis to implementation, by democratically accountable public health authorities, for the sake of public health alone, and be shielded from both market and policing pressures.
These three principles may seem far-fetched, yet a system that embodies them already exists, one many of us are actively participating in right now: the US Census. Rather than centering individual control of data, the US Census is both collectively governed and literally the data of our collective governance, used to apportion representation and distribute federal resources. It is also subject to a strong combination of technical and legal protections against abuse and mission creep: Title 13, which governs the US Census, imposes some of the strongest purpose limitations and confidentiality requirements in US law, punishing anyone who misuses the data or violates the privacy of a respondent with up to five years in prison. While not perfect, these democratic governance mechanisms and rigorous protections shield this high-stakes data infrastructure from exploitation and abuse, highlighting the possibility of creating a similar infrastructure for public health data.
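On the technical side, the Census Bureau’s disclosure-avoidance methods (it adopted differential privacy for the 2020 Census) show what such protection can look like in practice. Here is a minimal sketch of the core idea, with the count and the privacy parameter invented for illustration: calibrated random noise is added to every published statistic, so that no individual’s presence can be inferred from the release while aggregates remain usable.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy by adding Laplace
    noise with scale 1/epsilon. A count has sensitivity 1: adding or
    removing any one person changes it by at most 1, so the noise masks
    any individual's contribution."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical block-level population count: useful in aggregate,
# but the published figure reveals almost nothing about any one person.
print(round(dp_count(1234, epsilon=0.5)))
```

The smaller epsilon is, the more noise is added and the stronger the guarantee; Title 13’s criminal penalties then back these technical protections with legal ones.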
The coronavirus has laid bare the exploitative structures that govern our social and political lives. As we respond to the pandemic, we must be mindful of the world we are building in its wake. Even if we manage to curb the pandemic, we will fail to rise to the challenge this moment presents if we ignore just how democratizing an experience it might be: the way the virus cuts across lines of privilege and power (even if capacities to shield ourselves and access medical care do not) makes clear that our fates are bound up with one another’s. Responding to both the COVID-19 crisis and the broader social breakdowns that got us here will require channeling these hard-won insights into retooling a number of institutions to be more democratic and egalitarian. Our data infrastructures should be no exception.