JURIST Guest Columnist Michele Gilman writes about how low-income people and other marginalized communities can experience greater data insecurity and argues for their inclusion in debates about data privacy legislation...
Last month, tenants at a rent-stabilized apartment building called Atlantic Plaza Towers in Brooklyn filed a formal protest against their landlord’s plan to replace key fobs with facial recognition technology. The landlord claimed the new system would enhance security. The tenants, primarily women of color, countered that access to their homes should not hinge on giving up biometric information.
Facial recognition technology is particularly inaccurate when it comes to minorities and women, raising the risk that the tenants could be barred from their homes or penalized for the conduct of other individuals. Moreover, the landlord cannot ensure the security of the data and remains free to sell it to police, ICE, and other governmental agencies. One long-time resident, Icemae Downes, was reported to have said: “We should not feel like we’re in a prison to enter into our homes.”
As this tenant protest reveals, low-income communities experience data privacy differently than wealthier Americans do. Most Americans, for instance, express little concern about facial recognition technology, viewing it as a reasonable tradeoff for supposed safety. This gap in perception is precisely why the voices of marginalized communities must be included in ongoing policy discussions about data privacy and emerging legislation.
The Brooklyn tenants are keenly aware that marginalized communities have long been subject to higher levels of government surveillance as a means of social control. And they are part of a history of resistance to this oversight. Consider the welfare rights movement in the 1970s, when poor mothers organized against intrusive state policies such as welfare investigators rummaging through their closets and trash cans to ferret out supposed fraud.
Today, technology has turbo-charged the ability of the state to control its poor citizens, often with disastrous consequences. People seeking government assistance are increasingly subject to automated, algorithmic decision-making in which computers use data to make eligibility decisions previously entrusted to humans and then monitor how people use their benefits.
Yet replacing people with machines has neither erased human error nor ended discrimination: humans program the computers, and they code their mistakes and biases into the software. As a result, thousands of needy people are being cut off from life-sustaining supports without adequate explanation or avenues for recourse.
Middle-class Americans could ignore the privacy deprivations suffered by the poor — until the issue spread to them. For instance, in 2017, hackers stole the data of 145 million Americans held at Equifax, the credit reporting company. In 2018, Americans learned that Cambridge Analytica had harvested the data of about 87 million people from Facebook in order to target people with pro-Trump campaign ads.
Americans are waking up to the reality that a thriving industry of big tech companies and data brokers is collecting, aggregating, and selling consumers’ data for profit, without their knowledge or meaningful consent. Even less visible is that personal data is combined with data from users’ social networks to create highly granular reports about our lives.
Still, the harms of this data ecosystem tend to be worse for people who are materially poor. Based on their digital dossiers, poor people are targeted for predatory financial products and for-profit educational scams. Their access to jobs, education, and housing can hinge on the determinations in these reports, with no way to learn about or correct often misleading or discriminatory data outcomes.
Moreover, while identity theft is a nightmare for any victim, its consequences are particularly harsh for low-income people. They often lack the financial resources, expertise, and time to untangle a stolen identity. Meanwhile, they can face wrongful arrests, utility shutoffs, and aggressive debt collection as a result of someone committing wrongdoing in their name.
Not surprisingly, a major survey on digital inequality found that low-income people report greater concern about their data security but less confidence in their ability to manage their online privacy. They are also at greater risk of data breaches because they tend to access the internet from mobile devices, which are less secure than computers and which give data brokers extensive geolocation tracking.
For all these reasons, the perspectives of low-income people are essential to crafting effective data privacy laws. After years of neglect, Congress is finally considering comprehensive privacy legislation. Our current privacy law regime is scattershot: an existing law protects your video rental history (when was the last time you visited a Blockbuster?), yet no law governs how Facebook uses your data.
Congress trails the European Union, whose comprehensive privacy regime took effect in May 2018, as well as California, whose new privacy law goes into effect in 2020. Big technology companies are jumping on board: they prefer a single federal bill to a patchwork of state rules, even as more states get into the privacy regulation game.
While there are competing bills in Congress, with varying degrees of consumer protections, the proposed laws generally promote greater transparency in data collection and usage, along with stronger enforcement tools to promote data security and user consent.
Yet if we do not include people from all socio-economic backgrounds in the conversation, we will create a regime that makes privacy a luxury for the rich. We need to hear the voices of marginalized groups during lawmaking, and then pass laws that include processes for ongoing dialogue in privacy regulation regimes. For instance, the EU’s data privacy law contains a provision requiring public participation in setting certain data policies established by large companies.
To be sure, there are challenges to generating participation by low-income citizens (or anyone without a lobbyist) in privacy debates. Many of the harms wrought by automated decision-making are invisible. Technological issues can be hard to grasp without a computer science degree. And low-income people can face logistical barriers to civic participation, such as a lack of time, transportation, or childcare.
Yet the activism of the Atlantic Plaza Towers tenants, like other protests against big data surveillance, demonstrates that impacted communities have a sophisticated understanding of the surveillance state and the wherewithal to advocate for their rights.
The key going forward is generating more spaces for hearing the voices of marginalized communities, relying less on “experts” to identify big data harms and solutions, and connecting with other on-the-ground civil rights, human rights, and economic justice activists to explore and unpack how data privacy intersects with those movements.
Michele Gilman is the Venable Professor of Law at the University of Baltimore School of Law. She directs the school’s Saul Ewing Civil Advocacy Clinic, in which student attorneys represent individuals and community groups in a wide array of civil litigation and law reform projects, and also co-directs the Center on Applied Feminism. Professor Gilman’s scholarship focuses on social welfare issues, and her articles have appeared in journals including the California Law Review, the Vanderbilt Law Review, and the Washington University Law Review. She has recently appeared on National Public Radio to discuss data privacy as an economic justice issue and has also written on the topic for The Conversation.
Suggested citation: Michele Gilman, Voices of the Poor Must Be Heard in the Data Privacy Debate, JURIST – Academic Commentary, May 13, 2019, https://www.jurist.org/commentary/2019/05/voices-of-the-poor-must-be-heard-in-the-data-privacy-debate
This article was prepared for publication by Megan McKee, JURIST’s Executive Director. Please direct any questions or comments to her at email@example.com
Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.