On Thursday, May 4th, US President Joe Biden met with top executives from leading artificial intelligence (AI) companies, including Microsoft and Google, to discuss the risks of AI. The two-hour meeting included a “frank and constructive discussion” of the need for companies to be more transparent with policymakers about their AI systems.
Generative AI is a buzzword in the tech world, with the launch of AI-powered tools like ChatGPT capturing the public’s attention. Millions of users have begun testing such tools, which supporters say can make medical diagnoses, write screenplays, create legal briefs, and debug software.
However, the growing use of generative AI has raised concerns about privacy violations, biased employment decisions, and the potential for scams and misinformation campaigns. In response, President Biden told the executives they must mitigate the risks AI poses to individuals, society, and national security.
The meeting focused on the importance of evaluating the safety of AI products, protecting them from malicious attacks, and keeping policymakers informed about how AI systems work. Vice President Kamala Harris stressed that while AI has the potential to improve lives, it also raises safety, privacy, and civil rights concerns. She told the chief executives they have a “legal responsibility” to ensure the safety of their AI products.
Leading AI developers, including Google, NVIDIA, OpenAI, and Stability AI, will participate in a public evaluation of their AI systems. The administration also announced a $140 million investment from the National Science Foundation to launch seven new AI research institutes.
The Biden administration has taken steps to address concerns around AI, including signing an executive order directing federal agencies to eliminate bias in their AI use and releasing an AI Bill of Rights and a risk management framework. The Federal Trade Commission and the Department of Justice’s Civil Rights Division also said they would use their legal authorities to fight AI-related harm.
While the US has not taken as tough an approach to tech regulation as some European governments, the administration is working closely with the US-EU Trade and Technology Council on the issue. As AI technology proliferates and political ads created entirely with AI-generated imagery become more common, it is more important than ever for companies to ensure the safety and transparency of their AI products.