Mohammad Hosseini and Kristi Holmes
Generative artificial intelligence, or GenAI, keeps making headlines. Every day, we hear more success stories about how GenAI is transforming sectors such as health care, infrastructure, business and commerce, education and research. We can applaud this technological progress, but is there anything we should be concerned about?
Much has been said about how GenAI systems can threaten the integrity of research, de-skill the workforce, redefine our jobs and abuse our personal data — but these are potential consequences. Regardless of where and how GenAI is employed or what the consequences might be, the development and deployment of these models come with enormous social and environmental impacts — a topic that does not receive enough attention.
Here are three reasons why concern is called for.
First, GenAI requires hardware known as graphics processing units, or GPUs. Nvidia, a multinational corporation based in California, is the leading producer of GPUs. As an example of its impact, Nvidia is building a new production facility in Taichung, Taiwan, a city comparable to Chicago in population, that, once completed, will consume nearly 25% of the city’s electricity and 6% of its water. These GPUs will have to be housed in data centers. In 2022, data centers accounted for 1% to 1.3% of global electricity demand. This consumption is projected to grow by 160% by 2030, leading to an estimated 2.5 billion metric tons of carbon dioxide emissions.
In a recent development, Microsoft and G42, the United Arab Emirates’ top AI firm, agreed to jointly invest $1 billion in building a data center in Kenya’s Olkaria region — a well-known geothermal hot spot that can provide affordable energy for the data center. Nevertheless, locals remain concerned about the pollution caused by geothermal energy production and its negative impacts on air, water and soil quality. The data center itself is expected to have negative impacts on the physical and psychological health of nearby communities, based on reports from places such as Chandler, Arizona.
Second, GenAI must be trained on data, and humans are needed to label that data and supervise the training process. Thousands of workers who train commercial GenAI models are based in low- and middle-income countries, where they are underpaid and work in conditions often described as modern-day slavery.
Furthermore, significant numbers of people are needed for maintenance, prompt engineering and validation of GenAI outputs. These efforts require subject matter experts who can define and judge the quality of what these systems produce. Ongoing maintenance, such as debugging and updates, is also a continuous process that adds to the human hours needed to keep these systems operating. Such extensive human involvement highlights the often-overlooked costs and complexities of deploying and sustaining GenAI technologies.
Third, GenAI development and deployment require huge sums of money. According to estimates, a midsize enterprise needs to invest $80,000 to $190,000 in hardware, development and data preparation just to deploy a GenAI system initially. That figure excludes another $5,000 to $15,000 a year for maintenance and ongoing costs. And yet, this would still be cheaper than using commercial models such as OpenAI’s for business purposes. Some experts even believe that the costs of many current and future GenAI developments may exceed the value these systems add to workflows.
The environmental impacts of developing and employing GenAI are immediate and irreversible, but the human and financial burdens need more time to manifest themselves. In the short term, financial and human resources will have to be diverted from other critical projects, which will shift priorities in companies, industries and the public sector. This redirection could stall or slow down initiatives aimed at corporate social responsibility and workers’ welfare in the private sector or education and health care in the public sector — areas that may be more immediately beneficial to society at large.
Mohammad Hosseini, Ph.D., is an assistant professor in the Department of Preventive Medicine at Northwestern University’s Feinberg School of Medicine.
Kristi Holmes, Ph.D., is a professor of preventive medicine and the director of Galter Health Sciences Library at Northwestern’s Feinberg School of Medicine. Distributed by Tribune Content Agency, LLC.