Understanding the GDPR Through the Lens of COVID-19

The global chaos created by COVID-19 has proven to be a stress test for modern governance. The pandemic and its fallout have been especially revealing in the context of European data policy, which is governed by the General Data Protection Regulation (GDPR), in effect since May 2018. COVID-19 has exposed the regulation’s fault lines. While countries like the United States, Canada and China are using AI to reveal groundbreaking insights into COVID, Europe has little to show, stymied largely by restrictions put in place by the GDPR. These restrictions have had real consequences and are arguably part of the reason Europe is home to seven of the top ten countries measured by fatalities per 100,000 citizens, all of which are higher than fatality rates in the United States. The European devotion to data privacy is noble, but it ought to be analyzed in light of its repercussions. COVID-19 provides a starting point for such an analysis.

The desire for such a sweeping data governance regime stems from the explicitly stated European goal of a human-centric approach to artificial intelligence, of which data is an integral component. President of the European Commission (EC) Ursula von der Leyen said that “Technology is always neutral. It depends on what we make with it. And therefore, we want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible human-centric approach to artificial intelligence.” Von der Leyen equates governing the minutiae of data with human centricity, a stance the EC doubled down on in its comprehensive White Paper on Artificial Intelligence, released in February 2020. But COVID-19 has forced Europeans to reexamine such bold policy.

Today, the EC’s noble goals are undermining the development of vaccines and treatments for COVID-19, which has been hamstrung by the GDPR and the White Paper on AI. Vaccines and therapeutic treatments needed to stymie the virus’s impacts require massive amounts of health data – data that under current and proposed EU legislation is nearly impossible to obtain in a timely manner, if it can be acquired at all. Indeed, a person “with direct knowledge of the European Commission’s thinking” said, “The EU is not backtracking yet on its position, but it is thinking more actively about the unintended consequences of what they have proposed in the white paper on AI.” Such unintended consequences are demonstrated by the scarcity of AI and machine learning being used to combat the virus, since the type of data required would likely be considered “special categories of personal data” under the GDPR, subjecting it to strict compliance requirements. Sadly, European officials insist on waiting until the end of the pandemic to assess how technological measures like AI could be used to fight it. Contrast this reality with the United States, where Cotiviti, an American healthcare analytics company based in Atlanta, can use AI to pinpoint and predict COVID hotspots with up to 80 percent accuracy, thanks to access to medical and health analytics data and a regulatory structure that permits such analysis.

Europe’s failure to use AI to combat COVID exemplifies how the GDPR has damaged the ability to conduct medical research on the continent. First, the GDPR has made companies risk averse. For COVID-19, this means that firms opted not to share sensitive data that could have yielded insights into the virus rather than face potential litigation. Second, it declared that “pseudonymized data is personal data,” thereby subjecting even de-identified data to difficult-to-satisfy consent requirements. If privacy is indeed the purpose of the GDPR, then instead of subjecting all data to onerous and unnecessary standards, the regulation should allow for alternative privacy-preserving means. There are many ways to accomplish this. The EU could, for instance, set requirements for differential privacy, a technique that preserves individual privacy while still allowing broader group trends to be understood, thereby informing scientific efforts. The GDPR, however, took the more problematic route. Indeed, the GDPR will probably cause the most harm to already marginalized groups, whose informed consent can be more difficult to obtain, leaving them underrepresented in GDPR-sanctioned datasets.
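To make the differential privacy alternative concrete, consider a minimal sketch of the Laplace mechanism, the canonical way to release aggregate statistics (such as case counts) with a formal privacy guarantee. The function name and figures below are purely illustrative, not drawn from any actual EU proposal or health dataset:

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for the guarantee.
    """
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical use: publish how many patients in a cohort tested positive
# without revealing whether any particular individual is in the data.
print(laplace_count(true_count=132, epsilon=0.5))
```

The noise masks any one individual’s contribution, yet researchers can still read accurate group-level trends from the released figures – precisely the trade-off the GDPR’s blanket consent requirements forgo.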

It is datasets like these that can contribute to “algorithmic bias,” the phenomenon that occurs when algorithms are trained on data that skews their outputs in ways society considers unjust. For instance, imagine training an algorithm to determine who will be the most successful person out of a random sample of 1,000 Americans. Trained on virtually any historical dataset, the algorithm would likely determine that a white male, probably from the upper class, would be the most successful. Now imagine applying this to college admissions or another consequential decision. Algorithms can create self-fulfilling prophecies, ingraining biases both positive and negative, which should be guarded against. While the GDPR seeks to mitigate such algorithmic biases, which are indeed harming society, the focus on strictly regulating data is misplaced because it does not get to the root of the issue.
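A toy sketch, using entirely synthetic data rather than any real admissions or hiring records, shows the mechanism: a model fit to historically skewed “success” labels learns to score candidates by group membership rather than merit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" records: a group label (0/1) and a merit
# score that, ideally, should be all that matters.
group = rng.integers(0, 2, size=n)
merit = rng.normal(size=n)

# Historical "success" labels were skewed toward group 1 regardless of merit.
success = (merit + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

# A model fit to these labels absorbs the skew...
X = np.column_stack([group, merit])
model = LogisticRegression().fit(X, success)

# ...and reproduces it: two candidates with identical merit receive
# very different scores depending solely on group membership.
probe = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(probe)[:, 1])
```

Nothing in the model is malicious; it simply extrapolates the past, which is how the self-fulfilling prophecy takes hold.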

One can reasonably argue that what Europe needs is more data, since the more data algorithms have, the less biased they can supposedly be, yet the GDPR makes data aggregation difficult and costly. Even if one contends that more data is not necessarily better, putting such stringent requirements on data imposes great costs on business and entrepreneurship, and it is unclear whether a cost-benefit analysis rules in favor of privacy; Europe’s hamstrung responses to COVID-19 exemplify a balance overweighted toward privacy. Whether the GDPR actually mitigates algorithmic bias has been investigated, with studies showing it has not done much to remedy such defects. In fact, the most impactful aspect of the GDPR has not been the data standards it has imposed. Rather, as Bryce Goodman argues, the GDPR’s greatest tool for thwarting algorithmic bias is its support for the legality of data audits – processes in which humans examine datasets and algorithms for bias. Data audits – human oversight – are an effective way, albeit not a panacea, to diminish algorithmic bias. The strict data regulation components of the GDPR are not sufficient to thwart algorithmic bias; indeed, they may even aggravate the problem by making it more difficult to form representative datasets.
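One simple check such an audit might run – a hypothetical sketch, not a procedure the GDPR itself prescribes – is a demographic parity test comparing a model’s positive-prediction rates across groups:

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Gap in positive-prediction rates between two groups.

    A large gap is a red flag that sends the model and its training
    data back to human reviewers; it is not, by itself, proof of bias.
    """
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical audit: binary predictions for eight applicants in two groups.
preds = np.array([1, 1, 1, 0, 0, 0, 0, 1])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(preds, groups))  # 0.5 -> flag for review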

The EU, by creating such an expansive data policy, is trying to connect the dots between innovation policy and data regulation. In one respect, European thinking with regard to the GDPR is sound: legal and regulatory clarity is a necessary condition for optimal economic development. Where Europe goes awry is in its belief that such clarity is also a sufficient condition. Europe is trying to create companies that can rival American tech giants like Microsoft, Google, Amazon, Apple and Facebook, and sees a model of human centricity as a way to attract more entrepreneurs and businesses to the continent. There is also a protectionist aspect to the GDPR. By forcing foreign companies to redouble their compliance efforts and by imposing steep civil penalties, Europe frees up space for homegrown talent. Creating such a dense legal thicket, however, often entrenches incumbents rather than creating fertile ground for new competition. It is much easier for a multinational firm like Microsoft or Apple to deal with the GDPR than it is for a small technology start-up. Indeed, a paucity of data is considered one of the reasons that Europe never matched the American tech boom of the early 21st century. If Europe wants to create more companies and perhaps its own tech giant, the GDPR may be exactly the wrong way to proceed.

The GDPR is a worthwhile attempt at data governance, but it is not without flaws. The question moving forward is whether the EU will continue down the road of stringent data protections or opt for a laxer regulatory system. History suggests Europe is unlikely to change course: the experience of Nazi and Soviet surveillance has made Europeans especially protective of their privacy, and privacy remains personally and politically sensitive on the continent, making a rollback unlikely. Europe’s promulgation of the GDPR was a step into the unknown, and one that the world should examine closely. The greatest benefit of the GDPR may well be what the rest of the world can learn from it in crafting its own data governance policies.

This is the second article in a four-part series exploring comparative data governance regimes. Check out the first article here.

Connor Haaland is a 2020 JURIST Digital Scholar. He graduated from South Dakota State University in 2019, earning degrees in Spanish and Global Studies with minors in Economics and French. Most recently, Connor was a research assistant at The Mercatus Center, where his primary research focused on the intersection of law and emerging technology. He will attend Harvard Law School this fall.


Suggested citation: Connor Haaland, Understanding the GDPR Through the Lens of COVID-19 – Student Commentary, August 27, 2020, https://www.jurist.org/commentary/2020/08/haaland-understanding-the-gdpr.


This article was prepared for publication by Megan McKee, JURIST’s Executive Director. Please direct any questions or comments to her at executivedirector@jurist.org


Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.