Representatives from leading AI firms are expected to meet on Wednesday to discuss AI policy ideas and best practices, a source told Axios.
Attendees are expected from OpenAI, Microsoft, Apple, Google, Nvidia, Anthropic, Stability AI, and Hugging Face, according to Axios.
- The meeting was reportedly convened by SV Angel, the seed-stage venture capital firm led by Ron Conway.
- Topics up for discussion include AI public policy standards, frameworks, best practices, and "responsible AI," Axios reported.
- In the U.S., there are no comprehensive rules or guidelines specifically regulating AI.
- In January, NIST released version 1.0 of its AI Risk Management Framework, a voluntary set of guidelines to help organizations manage their AI risks.
- U.S. lawmakers and agency leaders are reportedly working to apply some existing laws to AI in their respective areas.
- Recently, thousands of people signed an open letter calling for a pause of at least six months in the training of any AI systems more powerful than GPT-4, arguing this would give experts more time to deploy "shared safety protocols" for AI.
- Last week, OpenAI published a blog post titled "Our approach to AI safety," in which it said real-world use of its systems allowed the company to "develop increasingly nuanced policies against behavior that represents a genuine risk to people while still allowing for the many beneficial uses of our technology."