Companies releasing most LLMs

 

Companies are now releasing far more large language models than universities and other academic institutions, according to the 2023 AI Index from the Stanford Institute for Human-Centered Artificial Intelligence.

Industry was responsible for 32 leading machine learning models last year, compared with only three from academia. According to The Verge, AI may be "entering a new phase of development," with corporations leading the charge as the computing power and other resources needed to build state-of-the-art models continue to grow.

Other key findings from this year's AI Index:

  • The size and training costs of large language and multimodal AI models began to skyrocket in late 2021 and continued into 2022. Google's Pathways Language Model (PaLM) has a whopping 540 billion parameters and cost an estimated $8M to train.

  • OpenAI's GPT-3 produced 502 metric tons of carbon emissions during its training phase, the most of any AI model and 1.4 times as much as DeepMind's Gopher natural language processing model. By comparison, the average American emits around 18 metric tons of carbon a year (a quick scale check follows this list). AI models such as GPT-3 are trained on massive amounts of text data and are tens of gigabytes in size, which makes their carbon footprints very large. A 2022 report found that training and related processes for the BLOOM LLM produced emissions equivalent to about 60 transatlantic flights.

  • Year-over-year private investment in AI fell for the first time in a decade. Global private investment in AI was $91.9B in 2022, a 26.7% decrease from 2021. Worldwide corporate investment in AI, which includes private investment, M&A, minority stakes, and public offerings, also fell in 2022 compared to 2021; even so, last year's total of $189.6B represents a roughly 1,200% increase over 2013.

  • The number of AI bills passed by world governments is growing rapidly. In 2022, legislative bodies across the 127 countries analyzed passed 37 laws containing the phrase "artificial intelligence," up from nearly zero in 2017. The U.S. passed nine AI-related laws last year, the most of any country.

  • The number of AI controversies has been rising every year. Wider deployment of AI, along with greater awareness of how it can be misused, has led to an uptick in AI incidents and controversies. According to the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository, 26 times as many AI incidents were reported in 2021 as in 2012. One example is the 2022 deepfake video of Ukrainian President Volodymyr Zelenskyy appearing to call on citizens to surrender to Russia.
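
To put the emissions figures above in perspective, here is a rough back-of-the-envelope check using only the numbers already cited (and assuming GPT-3's 502 metric tons and the average American's 18 metric tons per year are measured on a comparable CO2-equivalent basis): GPT-3's training footprint works out to roughly 28 years of a single American's emissions.

$$\frac{502\ \text{metric tons}}{18\ \text{metric tons per year}} \approx 27.9\ \text{years}$$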

Source: Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, "The AI Index 2023 Annual Report," AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023.
