Microsoft is entering the debate over AI regulation, advocating for a new government agency to oversee AI laws and licensing. On Thursday, the company posted a 40-page report outlining its blueprint for AI rules, which includes proposals for AI frameworks and safety brakes for AI technology that controls critical infrastructure.
- In the report "Governing AI: A Blueprint for the Future," Microsoft President Brad Smith wrote that AI guardrails can't be left solely to tech companies.
- The report argues that a legal and regulatory framework is needed to proactively address and mitigate potential problems.
- Among its proposals is a requirement that AI systems used in critical infrastructure have emergency-brake-like capabilities so they can be slowed down or fully shut off.
- Microsoft is also calling for laws that would clarify legal obligations for AI systems and require labels for computer-generated content.
- The company supports creating public-private partnerships to address AI's societal impact, along with greater transparency and funding for research.
- Apart from urging government action, Smith said Microsoft had pledged to follow NIST's voluntary AI Risk Management Framework.
- He called for an executive order requiring the federal government to procure AI services only from companies that make the same commitment.