“If we can comply, we will, and if we can’t, we’ll cease operating.”
As the European Union readies its new artificial intelligence rules, the head of OpenAI is already preparing his counter-attack.
OpenAI CEO Sam Altman told a British audience in a recent appearance at University College London that although his firm is "gonna try to comply" with the new EU-wide rules, he has "a lot" to be critical of, Time magazine reports.
During the talk, part of a series of appearances throughout Europe, Altman said that he'd met with EU regulators and that some of the law's language gave him pause. In particular, he objected to its designation of ChatGPT, GPT-4 and similar forms of generative AI as "high risk," a characterization he disagrees with (though it's worth noting that he once told friends he hoards guns, water, and gas masks in case of "AI that attacks us").
While the EU’s proposed AI rules aren’t the first time a governing body has taken on AI — Italy infamously banned both ChatGPT and Replika, the “AI companion app,” this year over data privacy and child welfare concerns — this would be a watershed moment in terms of international AI regulation.
Clearly it’s a big deal for OpenAI, and its CEO isn’t exactly happy about it.
If the law moves forward with its current or similar language, such a designation would compel AI firms to comply with additional sets of requirements that Altman is clearly against.
“Either we’ll be able to solve those requirements or not,” the CEO said. “If we can comply, we will, and if we can’t, we’ll cease operating.”
“We will try,” he continued. “But there are technical limits to what’s possible.”
This is, of course, a cop-out. If Altman were willing to play ball, there would be little doubt that he could comply with the EU's requirements, which, if passed as proposed, would among other things require that the datasets feeding high-risk-designated models be of "high quality" to avoid "risks and discriminatory outcomes," as the European Commission noted in an explainer on the proposed law.
Talking out both sides of his mouth, Altman said during the University College talk that while he doesn’t think the proposed EU law is “inherently flawed,” he believes there are “subtle details here that really matter.”
Interestingly, the OpenAI CEO isn’t against all AI regulation — earlier in the week, the company released a statement saying that in the future, there will need to be an international governing body as current models grow towards human-level knowledge and beyond.
However, it sounds like Altman has strong opinions about exactly how governments should be regulating his business.
More on OpenAI: Weird Trick Breaks ChatGPT’s Brain