RESPECT YOUR DATA

Limit and protect the data you collect, retain, and use.
Protecting your users’ privacy requires you to be thoughtful about the data you collect and hold, and how you use it. By carefully considering the costs and benefits of collecting data and by properly safeguarding the information that you do collect, you prevent privacy harms and increase consumer trust in your company. 
DO NOT USE DATA SHARED FOR ONE PURPOSE FOR A DIFFERENT PURPOSE.
Don’t use data for purposes other than those for which it was originally shared – like using information users shared with you in the past to train AI today, or harvesting information shared online with another company to train your AI.

Using data for a different purpose can be inconsistent with your privacy policy – which is a contract with your users. Repurposing your own data, or harvesting information users shared with another company, can also violate privacy laws in the United States and other countries, as well as constitutional protections like the California constitutional right to privacy. It can lead to expensive lawsuits, government fines, being forced to delete algorithms built on improperly used information, and loss of customer trust.
Case Study

Never Again, Everalbum

Photo app Everalbum used billions of private, user-uploaded images to train “Ever AI,” a facial recognition product that it marketed to private clients, law enforcement, and the military. The company did all this without obtaining consent from users, and without even telling its millions of users that their personal photos would be used to help build surveillance technology.

In fact, Everalbum updated its privacy policy to reflect the facial recognition side of the business only after questions from media. Then the FTC took notice, and the settlement and consent order required Everalbum to get express consent from users before feeding their images into facial recognition algorithms, as well as to delete all the algorithms Everalbum had illegally built from people’s private photos. As Commissioner Rohit Chopra wrote, “The case of Everalbum is a troubling illustration of just some of the problems with facial recognition.” Following this investigation, a rebrand of its surveillance technology under the name “Paravision,” and months of backlash, Everalbum finally closed its consumer photo storage business in August 2020.

Case Study

LinkedIn Caught Trying to Quietly Loop In User Information to Train AI

LinkedIn became noisy with angry posts when the company was not fully transparent with users and tried to silently opt people into the use of their information for AI. LinkedIn automatically “opted accounts into training generative AI models” before updating its terms of service and privacy policy, forcing users to dig into their settings for a toggle switch to turn it off – and any information collected while people were automatically opted in may be impossible to claw back. User backlash was swift, with people writing publicly, “Thought you couldn't hate LinkedIn more? You were wrong” and “LinkedIn sucks… it’s about to suck even harder with the help of AI.” LinkedIn eventually updated its privacy policy and provided notice to users. But just as the information already collected cannot be taken back, it will be hard for the company to reverse the loss of user trust caused by opting people in rather than being transparent and giving its professional user base a real choice. LinkedIn is now facing a class action lawsuit seeking damages and injunctive relief, alleging that the company's actions violated the Stored Communications Act and California's Unfair Competition Law, and breached the contracts LinkedIn formed with its users.

Case Study

International Regulators Say Meta GenAI Generates Major Privacy Concerns

Meta ended up in hot water with international regulators, faced the threat of substantial daily fines, and was forced to pause its release of generative AI products in the EU and Brazil after data protection authorities raised serious privacy concerns with the company’s use of personal information for its AI service. The Irish Data Protection Commission requested the pause after “uncertainty over how Meta would train its new AI systems using Meta users’ personal data from photos, comments and other content on public posts.” The Brazilian National Data Protection Authority demanded Meta suspend its practice of using personal information published on its platform to train AI, calling the practice an imminent risk of serious harm, and threatened to fine the company almost $9,000 per day if it failed to comply.

ENSURE YOU DO NOT USE DATA IN WAYS THAT HARM USERS.

If you aren’t careful, making decisions based on data can replicate the biases and discrimination that exist in the real world, undermining user trust. This is especially true for decisions that affect employment, health, education, lending, or even policing and public safety, particularly for users in vulnerable communities. You can avoid this by carefully scrutinizing the ways in which your data-driven decisions affect your users and by taking proactive steps to prevent or counteract these potential harms. Such steps protect the users you have and help attract new ones seeking a fair and equitable service.

Case Study

SafeRent’s Algorithms Accused of Discrimination

SafeRent, a company whose algorithms landlords use to screen tenants, was sued in 2022 for creating products that could discriminate against Black and brown renters who use federally funded housing vouchers. The U.S. Department of Justice and the Department of Housing and Urban Development both intervened in the suit, warning SafeRent and similar providers that the Fair Housing Act applies to companies that help landlords make housing decisions.

Case Study

Kochava Hit by FTC Lawsuit After Selling Location Info

The FTC sued data broker Kochava in 2022 for selling geolocation data that could identify and trace people around sensitive locations, such as reproductive health clinics, places of worship, and homeless and domestic violence shelters, potentially resulting in discrimination, harassment, and intimidation. The FTC alleged that Kochava failed “to adequately protect its data from public exposure.” Expensive litigation continues for the company, with a federal judge denying Kochava’s motion to dismiss in February 2024 and allowing the FTC’s lawsuit to proceed.

MINIMIZE THE LINKS BETWEEN YOUR DATA AND INDIVIDUAL USERS.

Tying identifiable data, including IP addresses or account information, to other records can increase the risk of harm to your users if a breach occurs and, as a result, may make your company more vulnerable to expensive lawsuits and government fines. Explore approaches that effectively mask user identity while preserving the business value of collected information, and be particularly careful not to accidentally disclose identifiable data along with other potentially sensitive records.
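
One way to put this into practice, sketched below in Python, is to replace direct identifiers with keyed pseudonyms and drop fields your use case doesn’t need before records are stored or shared. The field names, record layout, and key handling here are illustrative assumptions rather than a recommended design, and keyed pseudonyms can still count as personal data under many privacy laws, so the key itself must be protected.

    # A minimal sketch: pseudonymize direct identifiers with a keyed hash (HMAC)
    # and keep only the fields the use case needs before a record is stored or shared.
    # Field names, the record layout, and the key source are illustrative assumptions.
    import hashlib
    import hmac
    import os

    # In practice the key would live in a secrets manager and be rotated;
    # an environment variable is used here only to keep the sketch self-contained.
    PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "replace-me").encode()

    def pseudonymize(identifier: str) -> str:
        """Return a stable keyed pseudonym for a direct identifier such as an
        account ID. Without the key, the pseudonym cannot be reversed simply by
        hashing guesses of the original value."""
        return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

    def minimize(record: dict) -> dict:
        """Keep only what the analytics need: a pseudonym instead of the raw
        account ID, no IP address, and a coarsened timestamp."""
        return {
            "user": pseudonymize(record["account_id"]),
            "event": record["event"],
            "day": record["timestamp"][:10],  # date only, not the exact time
        }

    raw = {"account_id": "u-12345", "ip": "203.0.113.7",
           "event": "photo_uploaded", "timestamp": "2025-02-01T08:30:00Z"}
    print(minimize(raw))  # the stored record carries no raw ID or IP address

Aggregating, coarsening, and deleting raw identifiers once they are no longer needed all reduce what a breach, a subpoena, or an accidental release can expose.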

Case Study

All Your Base Are Belong to Us: Strava's Heat Map Firestorm

The exercise-tracking app Strava faced a barrage of criticism after a researcher discovered that the company's Heat Map feature, which displayed aggregated exercise data on a map, revealed sensitive user activity – including the potential locations of secret U.S. military installations overseas – to the surprise of many users, even those who had adjusted separate privacy settings. Others pointed out that “anonymized” data incorporated into the Heat Map was, in fact, linkable to individual Strava users. Users seeking to opt out of the Heat Map found Strava’s privacy settings “difficult to find, understand, and use.” In a public statement, Strava bafflingly pointed users to these same controls, but not before prominent commentators called the mess-up a full-blown "data privacy debacle."

Case Study

Netflix Sued After Sending Not-So-Anonymous User Data to Researchers

In 2009, Netflix faced a flood of criticism, a class action lawsuit, and the loss of user confidence when it released a huge set of improperly anonymized data. While the company took some steps to remove personal identifiers, researchers were able to identify customers by comparing reviews with other online sources. The class action suit was filed on behalf of a Netflix customer who was concerned the data would reveal her sexual orientation to her intolerant community. After waves of bad press, the case settled out of court for an undisclosed amount.
