By Satyen K Bordoloi
In an age where even politeness emits CO₂ and every AI query becomes part of the bigger climate story, transparency is the key to saving both the planet and AI, writes Satyen K. Bordoloi
When Sam Altman gently suggested that users stop saying “thank you” to ChatGPT because every extra query burns energy, he was acknowledging that, in the age of artificial intelligence, even politeness has a carbon footprint. Does this mean we’ve come to a point where we’ll be forced to ration our gratitude? Not necessarily.
What is necessary, though, is recognising that as AI systems scale up and their energy demands surge, the push for transparency in AI power consumption is no longer just about ethics or efficiency: it is also about the survival of the human race on this planet.
Altman’s refrain also highlights how, despite AI becoming one of the most energy-intensive technologies of our time, most of us remain unaware of its environmental costs. And so far, AI companies haven’t been helping. In recent months, however, a shift toward transparency has begun, with Google becoming the first major AI company to publicly disclose detailed energy consumption data for its AI systems, followed by OpenAI.
This emerging understanding of AI’s environmental footprint comes at a good time, as the technology becomes increasingly embedded in our daily lives.
Breaking Down the Numbers
In August, Google released a seminal technical paper that not only revealed that a typical Gemini AI query consumes approximately 0.24 watt-hours of electricity – equivalent to watching television for less than nine seconds – but also explained how they arrived at these numbers. They reported achieving a remarkable 33-fold decrease in energy consumption per query over 12 months, from May 2024 to May 2025, while carbon emissions per query dropped by an even more impressive 44-fold.
The result: each query now produces ‘just’ 0.03 grams of carbon dioxide equivalent and uses 0.26 millilitres of water – about five drops.
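These equivalences are easy to sanity-check. Here is a minimal back-of-the-envelope calculation, assuming a roughly 100-watt television and about 0.05 millilitres per water drop (illustrative values, not figures from Google’s paper):

```python
# Back-of-the-envelope check of Google's per-query equivalences.
QUERY_ENERGY_WH = 0.24   # Gemini energy per query (Google, August 2025)
QUERY_WATER_ML = 0.26    # Gemini water per query
TV_POWER_W = 100         # assumed television power draw
DROP_VOLUME_ML = 0.05    # assumed volume of a single drop

tv_seconds = QUERY_ENERGY_WH / TV_POWER_W * 3600   # Wh -> seconds of TV time
drops = QUERY_WATER_ML / DROP_VOLUME_ML

print(f"One query ≈ {tv_seconds:.1f} s of TV and ≈ {drops:.0f} drops of water")
# -> One query ≈ 8.6 s of TV and ≈ 5 drops of water
```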
OpenAI CEO Sam Altman disclosed similar figures in June 2025, stating that the average ChatGPT query consumes about 0.34 watt-hours of electricity and 0.000085 gallons of water – roughly one-fifteenth of a teaspoon. However, Altman – unlike Google – didn’t reveal how they reached these numbers. We have to take them at their word. These official figures align closely with independent estimates from research institutions like Epoch AI, which estimated that GPT-4o consumes approximately 0.0003 kilowatt-hours, or 0.3 watt-hours, per query.
What is noteworthy is that these figures are significantly lower than earlier estimates, which were closer to three watt-hours per query. The drop reflects improvements in model efficiency and hardware, as well as more realistic assumptions about response length.
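A quick unit check on those numbers (using the standard US gallon and teaspoon volumes, which are assumed here rather than taken from OpenAI’s disclosure):

```python
GALLON_ML = 3785.41    # one US gallon in millilitres
TEASPOON_ML = 4.93     # one US teaspoon in millilitres

water_ml = 0.000085 * GALLON_ML   # ChatGPT water use per query
print(f"{water_ml:.2f} mL ≈ 1/{TEASPOON_ML / water_ml:.0f} of a teaspoon")
# -> 0.32 mL ≈ 1/15 of a teaspoon

print(f"Earlier ~3 Wh estimates were ≈ {3 / 0.34:.0f}x today's 0.34 Wh figure")
# -> ≈ 9x
```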
Yet, what needs to be remembered is that energy consumption varies dramatically across different AI models and even within a single model, depending on how it is used. Take this research from the University of Rhode Island, which suggests that more advanced models like GPT-5 could consume up to 18 watt-hours per query on average, with some responses requiring as much as 40 watt-hours – roughly 8.6 times more than GPT-4.
The Measurement Challenge
To put it mildly, it isn’t easy to measure AI’s environmental impact accurately, because that impact extends far beyond simply counting electricity consumption. Thankfully, Google’s comprehensive methodology does account for multiple factors often overlooked in estimates: the energy consumed by idle machines kept ready for expected traffic spikes, the power drawn by the CPUs and RAM that provide support functions, and, crucially, the significant energy used by data centre infrastructure such as cooling systems.
Google’s figures thus go beyond many ‘optimistic’ public estimates, which usually account only for the active AI accelerator chips, such as GPUs and TPUs.
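A minimal sketch of how such a full-stack per-query figure can be composed – using illustrative numbers and a simplified structure, not Google’s actual methodology or measurements – is to add the accelerator and host energy, allow a share for idle provisioned capacity, and scale by the facility’s power usage effectiveness (PUE) to capture cooling and other overheads:

```python
def per_query_energy_wh(accelerator_wh: float, host_wh: float,
                        idle_share: float, pue: float) -> float:
    """Compose a full-stack per-query energy estimate.

    accelerator_wh -- energy drawn by GPUs/TPUs while serving the query
    host_wh        -- energy drawn by supporting CPUs and RAM
    idle_share     -- fraction added for machines kept idle for traffic spikes
    pue            -- power usage effectiveness (cooling and facility overhead)
    """
    active = accelerator_wh + host_wh
    provisioned = active * (1 + idle_share)
    return provisioned * pue

# Illustrative values only -- not Google's reported measurements.
naive = per_query_energy_wh(0.10, 0.00, 0.0, 1.0)   # accelerator-only view
full = per_query_energy_wh(0.10, 0.05, 0.3, 1.1)    # add host, idle and cooling
print(f"accelerator-only: {naive:.2f} Wh vs full-stack: {full:.2f} Wh")
```

The gap between the two printed numbers is exactly the gap between the ‘optimistic’ accelerator-only estimates and a comprehensive accounting.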
However, researchers are developing sophisticated tools to address precisely these measurement challenges. A team from Stanford, Facebook AI Research, and McGill University has created an “experiment impact tracker” that measures both electricity consumption and carbon emissions in real time, factoring in variables such as the local power grid’s energy mix. The tool has already been adopted at academic conferences, where researchers have been encouraged to publish energy consumption data alongside their AI research.
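The basic pattern behind such trackers can be sketched in a few lines of Python: sample the machine’s power draw while a workload runs, integrate it into energy, and multiply by the local grid’s carbon intensity. The power-reading callable and the intensity value below are stand-ins for illustration, not the actual experiment impact tracker API:

```python
import time

def track_emissions(run_fn, read_power_watts, grid_gco2_per_kwh):
    """Estimate energy use and CO2 emissions while run_fn() executes.

    read_power_watts  -- callable returning current power draw in watts
                         (stand-in for a real meter such as NVML or RAPL)
    grid_gco2_per_kwh -- carbon intensity of the local grid in gCO2 per kWh
    """
    start = time.time()
    power_before = read_power_watts()
    result = run_fn()
    power_after = read_power_watts()
    elapsed_h = (time.time() - start) / 3600
    energy_kwh = (power_before + power_after) / 2 * elapsed_h / 1000
    return result, energy_kwh, energy_kwh * grid_gco2_per_kwh

# Example: a CPU-bound task on a machine assumed to draw a steady 250 W,
# on a grid assumed to emit 400 gCO2 per kWh.
_, kwh, gco2 = track_emissions(lambda: sum(i * i for i in range(10**7)),
                               lambda: 250.0, 400.0)
print(f"{kwh * 1000:.3f} Wh, {gco2:.3f} g CO2")
```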
There has also been a push to standardise the methodologies used for these measurements. The International Telecommunication Union’s working group released a comprehensive report titled “Measuring what matters: How to assess AI’s environmental impact,” which identified critical gaps in current measurement practices, including over-reliance on estimates, underreported lifecycle phases, and opaque water-use tracking.
Startups Building the Infrastructure for Energy Tracking
In what can only be seen as good news, a new ecosystem of startups is emerging to provide the tools and infrastructure needed to track AI’s environmental impact. Companies like CO2 AI have developed sustainability platforms that help large enterprises measure their carbon footprint, including emissions related to AI.
CO2 AI offers automated, audit-ready corporate footprinting, including Scope 3 emissions and product-level analysis.
Verdigris, on the other hand, is an AI-powered energy monitoring company focused on real-time tracking for commercial buildings and data centres. Its CEO, Mark Chung, has publicly stated that AI data centres can consume 20 to 30 times more energy than traditional CPU-based setups, due to the high compute demands of GPUs and accelerators.
Other innovative startups tackle specific aspects of the measurement challenge. Take Halcyon, which is using its $10.8 million seed funding to build AI tools that ingest and structure regulatory filings from agencies like FERC and DOE, helping renewable energy developers track data centre electricity rates and navigate complex filings using LLMs and natural language search.
Meanwhile, Kraken Technologies has developed an AI-powered operating system that uses machine learning to optimise renewable energy distribution with 90% accuracy. It now serves over 70 million customer accounts across 40-plus utilities worldwide. Then there are platforms like Altruistiq, which provides specialised tools for measuring product-level carbon footprints, and Workiva, which combines carbon and financial data in an AI-powered system for emissions tracking, audit readiness, and regulatory compliance.
Did you notice something? These companies are using AI to measure AI’s energy consumption. It is, as my parents never tired of saying, like using fish oil to fry fish.
Academic Research Leading Environmental Assessment
And let’s not forget the universities and research institutions developing methodologies to assess AI’s environmental impact. Stanford’s Human-Centered AI Institute has developed tools that measure both electricity consumption and carbon emissions for machine learning projects.
Their research has revealed that training an AI language-processing system can produce up to 78,000 pounds of emissions – twice as much as the average American exhales over an entire lifetime.
The Öko-Institut, working with Greenpeace Germany, produced one of the most comprehensive reports on AI’s environmental impacts by evaluating over 95 studies on the topic. Their research projects that global electricity demand for AI computing could be about 11 times higher in 2030 than in 2023, with data centres potentially consuming more electricity than the entire energy-intensive goods production sector in the U.S.
MIT researchers have contributed significant insights as well, with their work highlighting the environmental impacts of decisions made during AI system design and deployment. European research institutions have also made notable contributions. A systematic literature review published in Open Research Europe analysed 97 papers from 2017 to 2022, focusing on the role of AI in environmental, economic, and social development.
These studies revealed a “positivity bias” in academic literature, with 79% of sustainable development objectives viewed as positively affected by AI, while only 35% were seen as negatively affected.
The scale of this challenge is humongous. Google’s 25 terawatt-hours of annual energy use is equivalent to that of 2.3 million U.S. households, while Microsoft, at 23 TWh, could power 48 Disneyland Paris parks for an entire year. Meta follows closely with 15 TWh. Water use is equally staggering. Google’s annual water usage reaches 24 million cubic meters – enough to fill over 9,618 Olympic-sized swimming pools.
Microsoft uses 7.8 million cubic meters annually, enough to fill 9,000 Boeing 747-400 jets. And this is at a time when large parts of the world face increasing water shortages due to the climate crisis.
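A quick sanity check on these equivalences, assuming roughly 10,800 kWh of annual electricity use per U.S. household and 2,500 cubic meters per Olympic-sized pool (assumed reference values, not figures from the companies themselves):

```python
GOOGLE_ENERGY_TWH = 25
GOOGLE_WATER_M3 = 24_000_000
US_HOUSEHOLD_KWH_PER_YEAR = 10_800   # assumed average annual consumption
OLYMPIC_POOL_M3 = 2_500              # assumed pool volume (50 m x 25 m x 2 m)

households = GOOGLE_ENERGY_TWH * 1e9 / US_HOUSEHOLD_KWH_PER_YEAR
pools = GOOGLE_WATER_M3 / OLYMPIC_POOL_M3
print(f"≈ {households / 1e6:.1f} million households, ≈ {pools:,.0f} pools")
# -> ≈ 2.3 million households, ≈ 9,600 pools
```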
The path forward, therefore, requires coordinated action across multiple stakeholders, including tech companies, startups, academic research institutions, civil society, and governments. The transparency shown by Google and OpenAI provides a crucial baseline of data. Yet these are merely first steps in what will require comprehensive action to ensure that AI solves Earth’s problems rather than exacerbating them.
At the end of the day, what’s the point of saying ‘thank you’ to an AI system when that system is contributing to destroying the planet? The choices we make right now to measure, report and regulate AI’s consumption will shape whether this tech becomes a tool for sustainability or one that destroys the planet. The choice is in all of our hands.