Facebook has been in and out of the spotlight for over ten years. It has been a place to connect with old friends, share event photos, and even find a community through one of the millions of Facebook Groups. In order to provide such a service, the company collects, processes, and uses data from its 2.8 billion users. Merely having access to that much data puts Facebook at risk of a data breach, a risk that grows every day that cybersecurity measures are not properly implemented. It also gives the social media company that much more power over how to use, and monetize, that data. Facebook is now asking its iPhone users to allow the app to continue tracking them across other apps on their phones, stating that tracking will “help keep Facebook free of charge.”
Mishandling Data Since Cambridge Analytica
Facebook gained its most recent notoriety in early 2018 through the Cambridge Analytica scandal. Since then, the company has faced at least 22 security and privacy incidents involving the exfiltration or unauthorized use of its users’ personal data.
In June 2018, reporting revealed that Facebook had given device makers like BlackBerry, Microsoft, Amazon, Apple, and Samsung access to nonpublic user data, including relationship status, political leaning, and information about users’ Facebook friends, even friends who had chosen privacy settings forbidding information-sharing with third parties. That same month, Facebook testified that it had acquired the security app Onavo to analyze how users were interacting with other apps; by the time Facebook removed Onavo from the iOS and Android app stores, the app had 33 million downloads. In September, researchers found that the company was allowing advertisers to target users through the phone numbers they provided for two-factor authentication, despite Facebook’s assurance that those numbers are not made public. This was especially problematic because phone numbers were required to set up that account security. The study found that a phone number used for two-factor authentication “became targetable [in PII-based advertising] after 22 days,” even for users who set their privacy settings to the “most restrictive choices.” Two weeks later, hackers exploited a year-old bug in the “View As” feature (which lets Facebook users view their own profile as other users see it) to compromise 30 million accounts.

At the end of the year, Facebook faced three more data-compromising incidents. The company admitted it had been aware of a bug that allowed app developers to view 6.8 million users’ photos. Internal documents showed that Facebook had engaged in data-sharing arrangements, including granting Read, Write, and Delete access to its users’ private messages, with Microsoft, Amazon, Spotify, Yahoo, and other companies without user consent. Finally, a privacy researcher found that no combination of privacy settings would truly allow Facebook users to opt out of location-based targeted advertising.
Facebook started 2019 with children-focused privacy infringements, including knowingly allowing children to ring up credit card charges on their parents’ cards for in-app purchases and using “Facebook Research” to pay users, including minors ages 13-17, in exchange for information about their phone and web activity. The company also collected sensitive data about individuals, both with and without Facebook accounts, from apps that track pregnancy cycles, real estate interests, and weight loss. In March, Facebook and Instagram were found to have stored hundreds of millions of users’ passwords unencrypted on their servers. Facebook then “unintentionally uploaded” 1.5 million users’ email contacts without first obtaining those users’ consent. The next day, it was reported that Facebook knew that posts set to be shared with “Only Me” were accessible to other users through a bug, and that the bug was knowingly shared with apps like Tinder; internally, the flaw was seen as more of a “feature than a bug.” Similarly revealing was the fact that Facebook’s facial recognition privacy setting was missing for some users for 18 months after the company created a setting to let users turn off facial recognition. A separate design flaw allowed thousands of children to join private chat groups with unauthorized adults. In September, 419 million Facebook users’ phone numbers, linked to their unique Facebook IDs, names, genders, and countries, were found in unsecured databases, and another 533 million users’ account details and phone numbers were obtained through an Instagram vulnerability.
In July 2020, Facebook shared users’ personal data with outside developers even after stating that third-party app developers would be blocked from accessing user data if the user had not interacted with the developer’s app for 90 days.
In January 2021, a Telegram bot allowed hackers to look up over 500 million Facebook users’ phone numbers, including numbers marked as “private,” exfiltrated without difficulty from data originally obtained in September 2019. A few months later, a database built from the same 2019 vulnerability that exposed 533 million users’ phone numbers reemerged online for free, even though Facebook had publicly stated that it “fixed [the] issue in August 2019.” As with the Cambridge Analytica scandal, Facebook attempted to reframe the incident as a “breach of its terms of service” rather than a security failure. A former FTC Chief Technologist brought to light that the company also did not disclose any details to the public or to affected users, and had classified the vulnerability as “low risk.”
Where Facebook’s Data Practices Have Inevitably Landed
Despite its history of privacy trespasses, Facebook now wants to normalize the practice of data scraping. Data scraping occurs when a program automatically collects data from the output of another program or website. An internal email indicated that Facebook was not going to address the security incident that affected 500 million users’ phone numbers, and would instead focus on its long-term goal of “fram[ing] [data scraping] as a broad industry issue…that happens regularly” rather than a business practice that Facebook has consciously embraced. Seemingly part of this long-term strategy is a Facebook-sponsored research paper criticizing Apple’s iOS 14.5 privacy features, such as App Tracking Transparency (ATT). ATT requires app developers to ask mobile device users for permission before tracking them across apps and services on their device. By framing ATT not as a privacy-conscious consumer protection feature but as a hindrance to innovation and to the advertising technology industry itself, Facebook is trying to pass off the invasive privacy practices it has always exercised as normal and even beneficial. Finally, by implying that users who decline tracking may have to pay for the app, Facebook is encouraging consumers to act against their own best interests. Given the company’s history of mishandling user data, consumers should be wary and choose the option that best matches their expectations of mobile device privacy.
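To make the concept concrete: data scraping, as defined above, needs nothing more than a short program that parses another site’s output and harvests fields the publisher never exposed through an API. The minimal sketch below uses only Python’s standard-library HTML parser; the “phone” class and the sample markup are hypothetical, purely for illustration of how trivially structured personal data can be extracted at scale.

```python
from html.parser import HTMLParser

class PhoneScraper(HTMLParser):
    """Collects the text of any element carrying a (hypothetical) class="phone"."""

    def __init__(self):
        super().__init__()
        self._in_phone = False  # True while we are inside a phone element
        self.phones = []        # harvested phone numbers

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if ("class", "phone") in attrs:
            self._in_phone = True

    def handle_data(self, data):
        # Text content arrives here; keep it only if it belongs to a phone element
        if self._in_phone:
            self.phones.append(data.strip())
            self._in_phone = False

# Hypothetical page fragment standing in for a scraped profile listing
html = '<div><span class="phone">+1-555-0100</span></div>'

scraper = PhoneScraper()
scraper.feed(html)
print(scraper.phones)  # prints ['+1-555-0100']
```

Pointing a loop like this at millions of public profile pages is essentially how the 533-million-record dataset described above could be assembled, which is why treating scraping as “a broad industry issue…that happens regularly” understates the harm.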