Lawmakers from the European Union (EU) reached a provisional agreement that will pave the way for oversight of artificial intelligence (AI), including measures to mitigate the risks posed by technologies like ChatGPT.
The move mirrors a similar effort in the U.S., where the government is working to minimize the risks of AI while simultaneously establishing new standards to advance innovation.
Member countries reached the provisional agreement on the EU’s Artificial Intelligence Act (AI Act) despite lingering disagreements over police use of facial recognition surveillance and the use of generative AI in politics, education and other areas. Many details are likely to be ironed out in the coming weeks.
The EU touted the act as the first on any continent to set clear rules for the use of AI.
According to a report from the AP, the eventual law would not take effect until 2025 at the earliest and would impose financial penalties for violations.
The AI Act was designed to mitigate dangers from specific AI functions based on their level of risk, but lawmakers expanded it to cover foundation models, such as ChatGPT and Google’s Bard, largely because of those systems’ influence and widespread use.
The goal is to have rules in place so that AI-generated text, photos, songs, videos and other content is monitored for privacy, copyright protection, misinformation and related concerns. The rules would also take into account how AI-generated content affects human life itself.
Sticking points
Foundation models were one of the biggest sticking points in establishing the AI Act, but lawmakers compromised in the talks to help homegrown European generative AI companies compete with rivals from other regions such as the U.S. and China, the AP report said.
However, lawmakers remain worried that generative AI could supercharge online disinformation and manipulation, enable more cyberattacks or be used to create new bioweapons.
The other major sticking point was facial recognition surveillance. Some EU lawmakers wanted it banned outright for public use over privacy concerns, but member states negotiated exemptions so law enforcement could use the technology to tackle crimes such as sexual exploitation and terrorist attacks.