FTC signals intent to enforce against unfair use of AI
The Federal Trade Commission ("FTC") recently published a blog post recommending best practices for businesses that use Artificial Intelligence ("AI"), while also warning that improper use of AI may result in FTC enforcement.
In a post published on April 19, 2021, the FTC asserted its intention to continue scrutinizing companies' use of AI. Although the post recognizes that AI has the potential to revolutionize a number of disparate sectors, the FTC appears concerned about the development and deployment of discriminatory algorithms, particularly those that may exacerbate existing racial disparities. The post provides proactive guidance to companies looking to deploy AI, but it also warns that the FTC will use its regulatory arsenal to enforce the fair and equitable use of AI.
FTC Enforcement Tools
Although the FTC does not have a specific mandate to regulate the use of AI, three statutes grant it broad regulatory authority and establish its jurisdiction over AI in certain situations.
First, Section 5 of the FTC Act gives the FTC a broad mandate to prevent "unfair or deceptive practices," and the FTC has already used Section 5 to regulate AI. As we wrote about here, the FTC relied on Section 5 to reach a novel settlement with a facial recognition software company, requiring the company to delete the algorithms it had trained using improperly obtained data. In its post, the FTC stated that the use or sale of racially biased algorithms would also violate Section 5.
Second, the Fair Credit Reporting Act comes into play where an algorithm is used to deny people benefits such as employment, housing, credit, or insurance. Lastly, the Equal Credit Opportunity Act prohibits the use of algorithms that result in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance.
FTC Recommendations
In its post, the FTC set out seven principles for businesses to adopt when they deploy AI functions:
- Start with the Right Foundation: companies should consider how complete their data set is from the start, identifying gaps in the data and addressing them accordingly. If gaps or weaknesses are identified, the algorithm's use may need to be limited in scope or utility.
- Watch Out for Discriminatory Outcomes: companies should test their algorithms before deployment and periodically thereafter to ensure that the technology does not discriminate on the basis of race, gender, or another protected class (an illustrative testing sketch follows this list).
- Embrace Transparency and Independence: companies are encouraged to embrace transparency and independence "by conducting and publishing the results of independent audits, and by opening [their] data or source code to outside inspection."
- Don't Exaggerate What Your Algorithm Can Do or Whether It Can Deliver Fair or Unbiased Results: companies should not overpromise the capabilities of their algorithms; overstated claims may amount to deception or result in discrimination, either of which could violate the FTC Act.
- Tell the Truth About How You Use Data: companies should clearly set out how users' data is collected and used; otherwise, they may face FTC enforcement.
- Do More Good Than Harm: under Section 5 of the FTC Act, a practice is unfair if it "causes or [is] likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition." Paraphrasing this definition, the FTC states that an algorithm that causes more harm than good may be challenged as unfair.
- Hold Yourself Accountable – Or Be Ready for the FTC to Do it For You: should companies ignore the FTC's recommendations and fail to hold themselves accountable, the FTC warns that they may face enforcement actions.
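To make the FTC's recommendation on testing for discriminatory outcomes more concrete, the sketch below shows one simple way a company might compare favorable-outcome rates across demographic groups before and after deployment. This example is ours, not the FTC's: the function names, the 0.8 "four-fifths"-style threshold, and the toy data are illustrative assumptions, and a real fairness audit would involve far more rigorous statistical and legal analysis.

```python
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Rate of favorable outcomes (1 = approved/selected) for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for outcome, group in zip(outcomes, groups):
        counts[group][0] += outcome
        counts[group][1] += 1
    return {g: favorable / total for g, (favorable, total) in counts.items()}

def disparity_report(outcomes, groups, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate. The 0.8 default is an illustrative heuristic,
    not an FTC-mandated test."""
    rates = selection_rates(outcomes, groups)
    best = max(rates.values())
    return {
        g: {"rate": r, "ratio_to_best": r / best, "flagged": r / best < threshold}
        for g, r in rates.items()
    }

# Hypothetical model decisions (1 = approved) and demographic group labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group_labels = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparity_report(decisions, group_labels))
# Here group "B" is approved at 0.4 vs. 0.6 for group "A", a ratio of about
# 0.67, so it would be flagged for closer review under this toy threshold.
```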
Takeaway
As the uses of AI become increasingly sophisticated and continue to proliferate, regulators across the globe have turned their attention to this rapidly evolving sphere. In the same week this FTC post was published, the EU unveiled a proposed framework for regulating AI. That proposal will take years to translate into enforceable law, but the FTC's post makes clear that the Commission will use the tools currently at its disposal to regulate AI in the near term. The post and the recent Everalbum settlement demonstrate that the FTC is prepared to take an assertive stance toward algorithms that perpetuate racial disparities. Companies seeking to streamline their processes and decision-making with AI should be aware of this growing regulatory risk.