The Federal Trade Commission (FTC) has increasingly turned to equitable remedies in cases involving alleged violations of privacy and data security laws, particularly against companies that use artificial intelligence (AI) platforms.
This trend began with the 2019 case involving Cambridge Analytica, where the FTC mandated the deletion or destruction of improperly obtained consumer data. Notably, the FTC’s decision required not only the deletion of the data but also “any information or work product, including any algorithms or equations, that originated, in whole or in part, from this Covered Information.” This remedy marked a significant shift in enforcement, signaling the agency’s intent to address both the misuse of consumer data and the long-term consequences of building AI systems or analytics based on that data.
Following Cambridge Analytica, the FTC employed similar remedies in a variety of cases involving data collection and analysis between 2021 and 2023. In actions against Everalbum, Kurbo, Ring, and Edmodo, the FTC required companies to delete not just data obtained through deceptive or unlawful means but also models or AI systems developed from that data.
The action against Everalbum, Inc. involved alleged misrepresentations about the company’s use of facial recognition technology and about its deletion of photographs that users had uploaded to the service. In U.S. v. Kurbo, Inc. and U.S. v. Edmodo, the FTC alleged that the defendants improperly collected personal information relating to children without the required parental consent. Ring allegedly committed privacy and security violations by allowing employees and contractors to access consumers’ videos and by failing to employ privacy and security protections, “enabling hackers to take control of consumers’ accounts, cameras, and videos.” These cases illustrate the FTC’s growing concern with how online platforms collect and utilize consumer information.
Perhaps the most prominent example of the FTC’s use of equitable remedies is the order issued against Rite Aid Corporation in late 2023. The case involved allegations concerning both privacy and data security, particularly Rite Aid’s use of facial recognition technology. The FTC’s complaint alleged unfair business practices, a legal standard more difficult to prove than the more common deceptive-practices claims under Section 5 of the FTC Act. The resulting order required the company to overhaul its data protection measures, delete improperly collected biometric data, and dismantle AI tools built using that data.
The FTC continued this approach into 2024, most notably with actions against X-Mode Social, Inc. and Avast Limited. The X-Mode Social consent order mandated the deletion of “Data Products,” which included any model or algorithm developed using improperly collected location data. Similarly, the consent order with Avast Limited prohibited the sale or use of browsing data for advertising purposes and required the destruction of any AI models or algorithms derived from such data. In both cases, the FTC sought not just to stop the improper collection and use of consumer data but also to dismantle tools built from that data.
The FTC’s shift toward equitable remedies may reflect criticism from consumer protection advocates, who argue that monetary fines have not been sufficient to deter large technology companies from engaging in improper data practices. By expanding its use of equitable remedies, the FTC appears to be testing whether non-monetary penalties provide a more effective deterrent.
Companies building AI platforms, or using such platforms to analyze data, should consider strategies to mitigate the risk of data disgorgement or algorithmic deletion penalties. Such strategies could include maintaining multiple, segregated databases: some preserved as iterative versions created throughout the development process, and others that either include or exclude higher-risk data, whether personal information, intellectual property, or other protected data. Data protection impact assessments or similar risk analyses will be necessary to guide AI strategy and the architecture of AI systems and activities, just as they are for other data processing activities. One way to implement this kind of segregation is sketched below.
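As an illustration of the data-segregation approach described above, the following Python sketch tags each training dataset with a risk classification and records which datasets contributed to each model version, so that any model derived from data later deemed improperly collected can be identified for deletion while a clean lineage survives. The class names, risk tiers, and dataset names here are hypothetical assumptions for illustration only; they are not drawn from any FTC order or required compliance framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk classification for training data (assumed categories)."""
    LOW = "low"    # e.g., fully consented, non-personal data
    HIGH = "high"  # e.g., biometric, children's, or precise location data


@dataclass
class Dataset:
    name: str
    risk: RiskTier
    consent_basis: str  # record the legal basis relied on at collection


@dataclass
class ModelVersion:
    name: str
    # Provenance: every dataset that contributed to this model version.
    trained_on: list[Dataset] = field(default_factory=list)

    def tainted_by(self, dataset_name: str) -> bool:
        """True if this model was derived, in whole or in part, from the named dataset."""
        return any(d.name == dataset_name for d in self.trained_on)


def models_requiring_deletion(models: list[ModelVersion], revoked: str) -> list[ModelVersion]:
    """Identify model versions that would fall within a deletion remedy
    covering anything derived from the revoked dataset."""
    return [m for m in models if m.tainted_by(revoked)]


# Example: keep parallel model lineages so a clean version can survive disgorgement.
consented = Dataset("consented_photos", RiskTier.LOW, "express opt-in")
biometric = Dataset("face_templates", RiskTier.HIGH, "implied consent (contested)")

clean_model = ModelVersion("recognizer_v1_clean", trained_on=[consented])
mixed_model = ModelVersion("recognizer_v1_full", trained_on=[consented, biometric])

affected = models_requiring_deletion([clean_model, mixed_model], "face_templates")
print([m.name for m in affected])
# ['recognizer_v1_full'] -- only the lineage built on the contested data is affected
```

The design choice the sketch reflects is simply that provenance is recorded at training time, before any dispute arises; if a regulator later orders deletion of data and everything derived from it, the organization can demonstrate which model versions are within scope and which were built entirely on lower-risk, properly obtained data.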