OpenAI CEO calls for AI regulation


OpenAI CEO Sam Altman fielded questions from Congress today, where he expressed support for licensing and safety standards for more advanced AI systems.

During the congressional hearing, the head of the company behind ChatGPT emphasized the importance of government intervention to address the risks tied to increasingly powerful AI systems. The session marked the first in a series of anticipated hearings on AI as lawmakers from across the political spectrum consider future AI regulations.

  • During Tuesday's Senate Judiciary subcommittee hearing, Altman voiced concerns about the potential for substantial harm caused by AI.
  • "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," he said, adding that the industry wants "to work with the government to prevent that from happening."
  • He also acknowledged that GPT-4 would automate some jobs entirely while creating new ones.
  • "GPT4 and other systems like it are good at doing tasks, not jobs, so you see already people that are using GPT4 to do their job much more efficiently."
  • Altman proposed a three-point plan that included creating safety standards to evaluate higher-tier AI models and establishing a government agency to license those models, with the power to revoke licenses for noncompliance.

Risks of AI:

  • To highlight the risks of unregulated AI, Sen. Richard Blumenthal (D-Conn.) opened the hearing by playing a fake recording of his voice, written by ChatGPT and cloned with AI using audio from his floor speeches.
  • Blumenthal expressed fears that ChatGPT could be used to generate content endorsing Ukraine's surrender or expressing support for Vladimir Putin's leadership, for example.
  • He and other lawmakers acknowledged that Congress failed to regulate social media early enough, resulting in toxic content and harm to children and young people, among other problems.

Further concerns:

  • Lawmakers from both political parties questioned Altman and the other witnesses, Gary Marcus, an emeritus professor at NYU, and Christina Montgomery, IBM's vice president and chief privacy and trust officer, about various AI risks.
  • Their concerns about generative AI included job disruption, election misinformation, copyright and liability issues, harmful content, and impersonation.
  • IBM's Montgomery stressed the importance of regulating these risks rather than the technology itself, cautioning against a reckless approach.
  • Marcus said hyper-targeted advertising would become common in the AI industry, despite assurances from OpenAI and IBM. He called for a dedicated Cabinet-level organization in the U.S. to oversee AI, given the risks involved and the complexity of the information.
