FTC Recommends Best Practices to Protect Against Discriminatory AI; Elisa Jillson Quoted


The Federal Trade Commission (FTC) has outlined the laws it enforces and the best practices that developers and users of artificial intelligence should follow to promote truth, fairness and equity in the use of AI.

The FTC enforces the Fair Credit Reporting Act, Section 5 of the FTC Act and the Equal Credit Opportunity Act to help protect against the use of biased algorithms, Elisa Jillson, an attorney in the FTC's Bureau of Consumer Protection, wrote in a blog post published April 19.

The commission calls on AI developers to improve their datasets and account for data gaps, test algorithms for discriminatory outcomes, embrace transparency and independence to address potential bias, and be transparent about how data is used.

Developers and companies should also hold themselves accountable for the performance of their AI algorithms.

“But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA,” Jillson wrote.

