The Cost of Intelligence: How To Manage The AI Energy Problem
Viral memes created with generative artificial intelligence (AI) have become a popular trend. The most recent is the ‘action figure’, where people use AI to create an image of a toy resembling themselves or a famous celebrity. Before that, everyone was sharing images inspired by the style of the Japanese Studio Ghibli animated films.
This was an interesting trend, because the co-founder of Studio Ghibli, Hayao Miyazaki, once said this when commenting on the use of AI to generate art: “Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted… I strongly feel that this is an insult to life itself.”
I saw the Indian investment management company 7i Ventures sharing some interesting insight into the Studio Ghibli trend – although it applies equally to any of the viral trends that focus on using generative AI to create personalized images.
They calculated the energy needed to create one image. Just one image could keep a lightbulb on in your home for around 19.5 minutes. Multiply that 100 million times – this is a global trend after all – and that lightbulb could be running for 11,142 years. To put that in perspective, the same energy could drive an electric car 1.63 times around the Earth or power an entire town for 1.3 years.
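To see how this kind of back-of-envelope figure is assembled, here is a minimal sketch. The per-image energy and bulb wattage below are illustrative assumptions of my own, not 7i Ventures’ actual inputs; the point is how quickly a small per-image cost compounds at viral scale.

```python
# Back-of-envelope estimate of aggregate energy for a viral image trend.
# All input values are illustrative assumptions, not measured figures.

WH_PER_IMAGE = 3.0      # assumed energy to generate one AI image (Wh)
BULB_WATTS = 10.0       # assumed LED bulb draw (W)
N_IMAGES = 100_000_000  # scale of a global viral trend

# How long one image's energy would run the bulb.
minutes_per_image = WH_PER_IMAGE / BULB_WATTS * 60

# Aggregate energy across the whole trend.
total_mwh = WH_PER_IMAGE * N_IMAGES / 1e6

# Equivalent continuous bulb runtime in years.
bulb_years = WH_PER_IMAGE * N_IMAGES / BULB_WATTS / (24 * 365)

print(f"One image  ~ {minutes_per_image:.1f} bulb-minutes")
print(f"{N_IMAGES:,} images ~ {total_mwh:,.0f} MWh")
print(f"           ~ {bulb_years:,.0f} years of continuous bulb runtime")
```

Change the assumptions and the headline figures move, but the conclusion does not: per-interaction costs that look trivial become industrial-scale loads at viral volume.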
This is just one viral trend.
Think about it. As AI models grow more sophisticated, their hunger for compute power has become voracious, transforming once-innocuous data centers into sprawling industrial complexes with carbon footprints to match. For an industry built on promises of innovation and efficiency, it feels as if the environmental reckoning is only just beginning.
At the heart of this dilemma is the exponential scaling of AI models. OpenAI’s GPT-4, Google’s Gemini, and Meta’s LLaMA are not merely clever – they are colossal. Training these large language models involves processing billions of data points across thousands of graphics processing units (GPUs), often for weeks at a time. According to some estimates, the energy cost of training a single frontier AI model can rival a small country’s annual consumption. And that’s before accounting for inference – the ongoing computation required every time a user asks a question, generates an image, or writes a piece of code.
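To make the orders of magnitude concrete, here is a rough sketch of how such training-energy estimates are typically assembled: GPU count × power draw × duration, scaled by data-center overhead. Every figure below is a hypothetical assumption for illustration, not a published number for GPT-4, Gemini, or LLaMA.

```python
# Rough structure of a training-energy estimate: hardware power x time,
# scaled up by data-center overhead. All values are hypothetical.

N_GPUS = 10_000          # assumed size of the training cluster
GPU_WATTS = 700          # assumed average draw per GPU (W)
TRAINING_DAYS = 60       # assumed wall-clock training time
PUE = 1.2                # power usage effectiveness: cooling/overhead multiplier

hours = TRAINING_DAYS * 24
it_energy_mwh = N_GPUS * GPU_WATTS * hours / 1e6   # compute hardware only
facility_energy_mwh = it_energy_mwh * PUE          # including overhead

print(f"IT load:       {it_energy_mwh:,.0f} MWh")
print(f"Facility load: {facility_energy_mwh:,.0f} MWh")

# Inference is a separate, ongoing cost: even a tiny per-query figure
# compounds across billions of daily requests.
WH_PER_QUERY = 0.3       # assumed energy per inference request (Wh)
DAILY_QUERIES = 1_000_000_000
print(f"Inference:     {WH_PER_QUERY * DAILY_QUERIES / 1e6:,.0f} MWh/day")
```

Training is a one-off bill, however large; inference is a meter that never stops running, which is why embedding AI in everyday applications matters so much to the total.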
The issue is not theoretical. In recent months, Microsoft, Amazon, and Google have faced growing scrutiny over their water usage, carbon intensity, and the physical strain their data centers place on regional power grids. As AI becomes embedded in everyday applications – from customer service chatbots to email composition – usage is exploding. Each interaction is modest in energy terms, but multiplied across billions of daily requests, the aggregate demand is immense.
If regulators start demanding that tech companies disclose the environmental impact of their AI training and queries, then a social responsibility problem could quickly become a compliance issue. Some analysts are already asking whether the training methods used by DeepSeek in China might be more sustainable in the long term – even if they deliver less raw capability in the short term.
Leading chipmakers such as NVIDIA and AMD are racing to produce more efficient processors, reducing the wattage required per operation. Meanwhile, new “sparse” AI architectures, which activate only a subset of their parameters for each input, promise performance similar to dense models at a fraction of the energy cost. Some firms are exploring neuromorphic computing and analog chips, which mimic the brain’s architecture to achieve dramatically lower power consumption.
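To illustrate why sparsity helps, consider a mixture-of-experts layer – the most common sparse design – in which only a few experts run for each token. The layer sizes below are arbitrary, chosen purely to show the ratio between total and active parameters.

```python
# Why sparse (mixture-of-experts) models can cut compute per token.
# Model sizes here are arbitrary illustrations.

D_MODEL = 4096           # hidden width
D_FF = 16384             # feed-forward width per expert
N_EXPERTS = 64           # experts in the sparse layer
TOP_K = 2                # experts activated per token

# Parameters in the feed-forward block (two weight matrices per expert).
params_per_expert = 2 * D_MODEL * D_FF
total_params = params_per_expert * N_EXPERTS     # what a dense run would touch
active_per_token = params_per_expert * TOP_K     # what actually runs

print(f"Total expert parameters: {total_params / 1e9:.1f} B")
print(f"Active per token:        {active_per_token / 1e9:.2f} B "
      f"({TOP_K}/{N_EXPERTS} = {TOP_K / N_EXPERTS:.1%} of the layer)")
```

The saving is in per-token computation, and therefore energy; all parameters still occupy memory. That trade-off is how sparse models can approach dense-model quality while drawing far less power per query.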
Equally important is the shift in where and how AI is run. Flexible training workloads can be scheduled for off-peak hours or periods of renewable surplus – run overnight when demand is low, or during the day when solar generation peaks. Inference can also be offloaded to smaller, decentralized chips in smartphones or embedded devices, reducing the burden on central data centers.
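One simple way to picture carbon-aware scheduling: poll the grid’s carbon intensity and hold a deferrable training job until it drops below a threshold. This is a minimal sketch – get_grid_carbon_intensity is a hypothetical placeholder for whatever grid-data feed a real system would query, and the threshold is an assumption.

```python
import time

# Hypothetical placeholder: a real implementation would query a grid-data
# feed for the local carbon intensity (gCO2/kWh) in the data center's region.
def get_grid_carbon_intensity() -> float:
    return 180.0  # stub value for illustration only

CARBON_THRESHOLD = 200.0   # assumed gCO2/kWh cutoff for "clean enough" power
POLL_SECONDS = 15 * 60     # re-check the grid every 15 minutes

def run_when_grid_is_clean(train_job):
    """Defer a flexible training job until grid carbon intensity is low."""
    while True:
        if get_grid_carbon_intensity() <= CARBON_THRESHOLD:
            return train_job()
        time.sleep(POLL_SECONDS)

# Usage: the job runs immediately if the grid is already clean,
# otherwise it waits for a greener window.
run_when_grid_is_clean(lambda: print("training step running"))
```

The same pattern extends to shifting jobs between regions: route the work to whichever data center currently has the cleanest power.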
Cloud providers, too, are investing heavily in green energy commitments, with Google aiming for 24/7 carbon-free energy in its data centers by 2030.
We need to do something. The infrastructure we are building requires vast amounts of energy, and the industry faces a contradiction. AI promises to make businesses leaner, supply chains smarter, and workflows more efficient. Yet without intervention, the tools we rely on to cut emissions elsewhere could become significant contributors to the climate crisis themselves.
To resolve this, a more deliberate and strategic approach is needed – one that prioritizes efficiency as highly as capability and treats sustainability not as a branding or social responsibility exercise, but as a foundational design principle.
The age of generative AI is here. Data centers will become more efficient and we will find faster and more effective ways to train language models, but eventually there needs to be a more general appreciation of just how much power is needed to keep AI functioning.
Follow IBA Group on LinkedIn for regular updates and comment. For more information on technology strategy and how tech connects to real business solutions, please click here.