Two European parliamentary committees have overwhelmingly approved the latest version of the draft AI Act, the EU's overarching rules for governing AI systems.
The tougher draft rules now include bans on facial recognition in public spaces, predictive policing AI systems, and emotion recognition, among other technologies. New amendments also add requirements for foundation models and transparency measures for generative AI applications like ChatGPT.
Based on the technology's risk, the law classifies AI systems as unacceptable, high-risk, or largely unregulated. Systems posing unacceptable risks will be banned.
- Lawmakers have now expanded the list of banned practices to include real-time and after-the-fact ("post") remote biometric identification, predictive policing, certain emotion recognition systems, and the indiscriminate scraping of biometric data from social media to build facial recognition databases.
- The law also promotes regulatory sandboxes to test out AI before it's deployed. It orders the creation of a public database of "high-risk" AI systems.
The AI Act faces a plenary vote in June before moving into "trilogues," the three-way negotiations among the European Parliament, Council, and Commission. Final approval is expected before spring 2024.
- Once approved, the law will be the world's first set of rules governing AI. Companies and other affected parties would have a grace period of roughly two years to comply.
- Violators could face hefty fines of up to €30M ($33M), or 6% of their annual global revenue. For tech giants such as Meta or Google, this could potentially translate to billions of dollars.