OpenAI may consider leaving the European Union if the company can't meet the bloc's upcoming rules regulating AI, according to CEO Sam Altman. He is concerned that OpenAI's products, including ChatGPT and GPT-4, could be classified as "high risk" under the proposed rules, which would involve stricter transparency and safety requirements.

- The EU's AI Act, which would ban the riskiest AI applications and regulate high-risk ones, has been expanded to cover general-purpose AI technologies like GPT-4.
- Under the draft, companies would be barred from generating illegal content and would have to disclose summaries of any copyrighted material used in their AI training data.
- Altman suggested changes to the legislation, such as redefining general-purpose AI systems.
- "We will try to comply, but if we can't comply, we will cease operating," he said.
- While Altman called the current draft law overly restrictive, he believes the proposal is "going to get pulled back" as EU lawmakers negotiate the bill's final details.
- Final approval is expected in early 2024 at the latest, followed by a grace period of many months for stakeholders to work out how to comply.