Environmental and human cost of AI

By Connie L. Braun and Juliana Saxberg

Law360 Canada (January 22, 2024, 12:41 PM EST) --
The list of commercially available large language models (LLMs) continues to grow. Training any of these LLMs comes at a substantial cost.

Many companies and individuals want to get involved in this latest technology trend, and new product offerings in 2024 give organizations a number of affordable options for incorporating LLM technology into their operations. Adopting LLM technology can unlock a world of potential, but organizations embedding LLMs into their workflows should be aware of the potential impact on their corporate social responsibility (CSR) profile — and their bottom line.

Large language models are incredibly expensive to build and train. The amount of computing power required is immense, due to the billions of calculations that are performed every time the LLM responds to a prompt. Thousands of graphics processing units (GPUs) are required to handle massive datasets from which the LLMs learn.

Depending on what an organization needs and chooses to purchase, the base price for a GPU is around $1,650; a top-of-the-line model suited to this kind of work costs closer to $10,000. For the GPUs alone, each training run can cost upwards of $5 million.
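
A rough back-of-envelope calculation shows how quickly those numbers add up. The GPU count, rental rate and training duration below are illustrative assumptions only, not figures reported in this article.

```python
# Back-of-envelope estimate of GPU costs for a single LLM training run.
# Every figure below is an illustrative assumption, not a reported number.

gpu_unit_price = 10_000          # USD, high-end GPU purchase price
gpu_count = 1_000                # GPUs running in parallel (assumed)
cloud_rate_per_gpu_hour = 2.50   # USD per GPU-hour, assumed rental rate
training_days = 90               # assumed length of one training run

hardware_cost = gpu_unit_price * gpu_count
rental_cost = cloud_rate_per_gpu_hour * gpu_count * training_days * 24

print(f"GPU hardware purchase: ${hardware_cost:,.0f}")    # $10,000,000
print(f"Cloud rental for one run: ${rental_cost:,.0f}")   # $5,400,000
# Either route lands in the millions of dollars, in line with the
# article's estimate of $5 million or more per training run.
```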

The larger the training dataset, the more computational power is required and the longer each training run takes. With the size of LLMs growing faster than hardware can keep up, ever more GPUs are needed to compensate during each power-hungry training iteration. Developing and fine-tuning an LLM involves many training repetitions, so the final cost can be staggering. It is this kind of cost that prevents many LLMs from being trained for languages other than English.

In addition to consuming processing power and capital, generative AI technologies exact a concerning toll on the planet. Early in 2023, scientists estimated that ChatGPT’s monthly electricity consumption was roughly equivalent to that of 175,000 people. With hundreds of millions of daily user requests consuming around one gigawatt-hour each day, this is not surprising.

Mass deployment of LLMs into search engines is expected to increase computer processing demand fivefold, with a commensurate increase in energy use and carbon emissions. And because LLM technologies burn through hardware quickly under heavy use, growing e-waste is another environmental impact of generative AI. Scientists predict that by 2027, AI servers could use between 85 and 134 terawatt-hours of electricity each year, an astonishing amount of energy.
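
To put those figures side by side, the short calculation below uses the cited one gigawatt-hour per day; the daily request count is an assumed midpoint for “hundreds of millions” and is illustrative only.

```python
# The arithmetic behind the energy figures cited above. The per-day energy
# is the cited ~1 GWh; the request volume is an assumed midpoint.

daily_energy_gwh = 1.0           # ~1 GWh per day, as cited
daily_requests = 200_000_000     # assumed request volume

wh_per_request = daily_energy_gwh * 1e9 / daily_requests   # GWh -> Wh
annual_twh = daily_energy_gwh * 365 / 1_000                # GWh -> TWh

print(f"Energy per request: ~{wh_per_request:.0f} Wh")     # ~5 Wh
print(f"Annualized usage: ~{annual_twh:.3f} TWh")          # ~0.365 TWh
# One service at roughly 0.365 TWh per year; the projected 85-134 TWh per
# year for AI servers by 2027 is roughly 230 to 370 times larger.
```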

Due to the sheer cost, resourcing LLM development is out of reach for most organizations. It is not surprising to learn that the early development of the currently available LLMs was partially subsidized by U.S. tax dollars, including critical support from the U.S. Defense Advanced Research Projects Agency (DARPA).

DARPA developed the Internet’s predecessor, the Advanced Research Projects Agency Network (ARPANET), and supported early projects that led to key advances in natural language processing (NLP). Unfortunately, the extraordinary resource consumption demanded by LLM development has squeezed public and academic organizations out. As a result, these powerful technologies are owned and controlled by private companies with access to a steady stream of capital.

Most business users are unaware that big tech companies subsidize their use of LLM technologies. In April 2023, researchers estimated that offering ChatGPT 3.5 to the public cost its owner upwards of US$700,000 per day — a cash burn rate nowhere near offset by subscription fees. In an effort to curb costs, developers have tried to build leaner models that rely on smaller datasets for training and validation, so far without success: in testing, small language models (SLMs) have returned notably poorer outputs than the big-data LLMs.

Commentators focusing on the social impacts of LLM adoption are quick to point out that LLMs could harm the economy by replacing workers with technology. Far less attention has been paid to a downstream impact on workers globally.

To minimize the risk of bad outputs, sometimes called “hallucinations,” LLMs need to be continuously supplied with external inputs that enrich and guide their responses through a process called contextual grounding. While some LLMs (like LexisNexis’s recently launched Lexis+ AI) guard against hallucinations with technologies like retrieval-augmented generation (RAG) that anchor outputs in verified sources, most LLMs also rely on “hand-engineered” feedback from “clickworkers,” who validate outputs against tangible, real-world scenarios. Clickworkers tend to be skilled but inexpensive tech workers, working remotely in countries that can supply cheap labour; an estimated 163 million people worldwide do this work, earning an average of US$2.15 per hour as gig contractors. Because they are gig contractors, they are excluded from most worker protection laws.
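
For readers curious what contextual grounding looks like in practice, the sketch below is a deliberately simplified illustration of retrieval-augmented generation: retrieve passages from a verified corpus, then fold them into the prompt. The toy corpus, keyword-matching retrieval and generic `llm` callable are assumptions for illustration only and do not represent any vendor’s implementation, including Lexis+ AI.

```python
# A minimal, self-contained sketch of retrieval-augmented generation (RAG).
# The toy corpus and keyword-overlap "retrieval" stand in for a real document
# store and search index; no vendor's actual implementation is shown here.

CORPUS = [
    "Verified passage one: background facts on topic A.",
    "Verified passage two: background facts on topic B.",
]

def search_verified_sources(question, top_k=2):
    # Toy retrieval: rank passages by how many words they share with the question.
    q_words = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:top_k]

def answer_with_rag(question, llm):
    # Ground the prompt in retrieved passages so the model answers from
    # verified text rather than from its training memory alone.
    context = "\n\n".join(search_verified_sources(question))
    prompt = (
        "Answer using only the sources below. If the sources do not contain "
        f"the answer, say so.\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)  # `llm` is any text-generation callable

# Example: a stand-in "model" that simply echoes the start of the grounded prompt.
print(answer_with_rag("What are the facts on topic A?", llm=lambda p: p[:120]))
```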

While most Westerners fear that AI will eliminate jobs, the other side of the coin is the growing digital ghetto created by AI developers’ reliance on precarious and underpaid human labour.

The race to develop bigger, better, smarter LLMs shows no signs of slowing down, even as the environmental and social costs of LLM development and resourcing receive increasing attention. Organizations incorporating LLMs into their workflows should be prepared to answer to stakeholders concerned about these costs — and should have a plan in place to cover the risk that a change in the terms and conditions of privately owned LLM technology could disrupt operations.

Connie L. Braun is a product adoption and learning consultant with LexisNexis Canada. Juliana Saxberg is a practice area consultant, corporate & public markets with LexisNexis Canada.
 
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

