OpenAI CEO Sam Altman told ABC News that he is worried AI models could fuel "large-scale disinformation" as well as cyber-attacks.
He acknowledged that AI "is going to eliminate a lot of current jobs," but said people "can make much better ones."
- Altman made the comments during an interview with Rebecca Jarvis, ABC News' chief business, technology, and economics correspondent.
- He told Jarvis that when it comes to AI, "we've got to be careful here," adding, "I think people should be happy that we are a little bit scared of this."
- When asked why he was scared, Altman said that if he wasn't, "you should either not trust me or be very unhappy that I'm in this job."
- Despite the potential harms, he said AI could be "the greatest technology humanity has yet developed."
- The CEO discussed OpenAI's new GPT-4 large language model, acknowledging that it's "not perfect" but noting that it scored high on the bar exam and the SAT math test and can write code in most programming languages.
- Altman said the language model is a "tool that is very much in human control" since it waits for the user to prompt it with an input.
- However, there will be those "who don't put some of the safety limits that we put on," he said, with "we" apparently referring to OpenAI.
- "Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it," he added.
A reasoning engine:
- Altman also cautioned people about the models' "hallucinations," in which they "confidently state things as if they were facts that are entirely made up."
- He said it's more correct to view the models as "reasoning engines," not as fact databases.
- Facts are "not really what's special about them," he said, adding that "what we want them to do is something closer to the ability to reason, not to memorize."