In the late 2000s there was a push by a lot of businesses not to print emails, and people used to add a ‘Please consider the environment before printing this email.’ footer.
Considering how bad LLMs/‘AI’ are for power consumption and water usage, a new useless email footer tag should be made.
I don’t know if it would help, but you could also add links to articles about this. You could put hyperlinks behind numbers - like “[see 1, 2, 3, 4…]”.
Here are some if you want to do this: https://www.wired.com/story/new-research-energy-electricity-artificial-intelligence-ai/
https://web.archive.org/web/20240318115424/https://disconnect.blog/ai-is-fueling-a-data-center-boom/
https://www.bloodinthemachine.com/p/ai-is-revitalizing-the-fossil-fuels
https://www.bloomberg.com/graphics/2024-ai-power-home-appliances/
https://www.bloomberg.com/graphics/2025-ai-impacts-data-centers-water-data/
https://www.theguardian.com/technology/2024/sep/15/data-center-gas-emissions-tech
https://www.washingtonpost.com/technology/2024/12/23/arizona-data-centers-navajo-power-aps-srp/
Nah, that’s too useful; the point is to be virtue signalling.
🏞 Please consider the environment before issuing a return-to-office mandate
This! There is no reason to go back to the office for some professions, like programmers, managers, etc.
This is the right response - RTO is much worse for the climate than GenAI.
So I’m not saying RTO is worse than AI or vice versa. But do you have any data to back up that statement? I’ve been seeing nothing but news about AI data centers being an absolute nightmare for the planet. And even more so when corrupt politicians let them be built in places that already have trouble maintaining normal water levels.
I get both are bad for the environment.
Well, real quick, my drive to the office is ~10 miles. My car gets ~3.1 miles/kWh. So let’s say I use 3 kWh per trip; two trips a day makes it 6 kWh. A typical LLM request uses 0.0015 kWh of electricity, so my single-day commute in my car uses ~4,000 LLM queries’ worth of electricity.
Yeah RTO is way worse, even for an EV that gets 91MPGe.
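For anyone who wants to check it, here’s a minimal sketch of that napkin math in Python, using the figures quoted above (which are rough estimates, not measurements):

```python
# Napkin math: daily EV commute vs. LLM queries, figures as quoted above.
miles_per_trip = 10          # one-way commute, miles
miles_per_kwh = 3.1          # EV efficiency
kwh_per_llm_query = 0.0015   # rough per-request estimate used above

kwh_per_day = 2 * miles_per_trip / miles_per_kwh       # two trips a day, ~6.5 kWh
queries_equivalent = kwh_per_day / kwh_per_llm_query   # ~4,300 queries

print(f"Commute: ~{kwh_per_day:.1f} kWh/day ≈ {queries_equivalent:,.0f} LLM queries")
```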
The thing is: those AI datacenters are used for a lot of things. LLM usage amounts to about 3%; the rest is for stuff like image analysis, facial recognition, market analysis, recommendation services for streaming platforms and so on. And even the water usage is not really the big-ticket item.
The issue of placement of data centers is another discussion, and I agree with you that placing data centers in locations that are not able to support them is bullshit. But people seem to simply not realize that everything we do has a cost. The US energy system uses 58 trillion gallons of water in withdrawals each year. ChatGPT uses about 360 million liters/year, which comes down to 0.006% of America’s water usage per year. An average American household uses about 160 gallons of water per day; ChatGPT requests use about 20-50 ml/request. If you want to save water, go vegan or fix water pipes.
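A quick sketch of the household comparison, taking the 160 gal/day and 20-50 ml/request figures above at face value (the gallon-to-liter conversion is the only thing I’m adding):

```python
# How many ChatGPT requests equal one US household's daily water use,
# using the figures quoted above (160 gal/day, 20-50 ml per request).
LITERS_PER_GALLON = 3.785

household_ml_per_day = 160 * LITERS_PER_GALLON * 1000   # ~606,000 ml/day
low_ml, high_ml = 20, 50                                # ml per request

print(f"One household-day of water ≈ "
      f"{household_ml_per_day / high_ml:,.0f}-{household_ml_per_day / low_ml:,.0f} requests")
# -> roughly 12,000-30,000 requests
```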
I’m on my phone so I can’t fully crunch the numbers, but I took a few minutes to poke around and I think I found the stats to put both of these in perspective.
https://www.arbor.eco/blog/ai-environmental-impact
Each query sends out roughly 4.32 grams of CO₂e (MLCO2), which may seem trivial on its own, but it adds up: with millions of queries a day, you’re looking at a staggering daily output. One million messages sent to ChatGPT is equivalent to 11,001 miles driven by an average gasoline-powered passenger vehicle.
https://www.epa.gov/greenvehicles/greenhouse-gas-emissions-typical-passenger-vehicle
The average passenger vehicle emits about 400 grams of CO2 per mile.
So, yikes, that’s without a doubt unsustainable energy usage. But compare this to the Wikipedia article on COVID environmental impacts:
https://en.wikipedia.org/wiki/Impact_of_the_COVID-19_pandemic_on_the_environment?wprov=sfla1
In 2020, carbon dioxide emissions fell by 6.4% or 2.3 billion tonnes globally.
My napkin math says that we would need ~532,407,407 AI queries to match the 2020 work-from-home drop, but unfortunately, ChatGPT alone is estimated at 2.5 billion prompts daily.
I started writing this assuming the opposite was true, but unfortunately AI has a bigger environmental impact than RTO. Which is honestly shocking. I hope someone corrects my math and tells me it isn’t this dire. Work from home should be the norm, but AI is truly just a massive environmental burden.
You got an error in magnitude there: it’s 5.32×10¹⁴ requests, i.e. 532 407 407 407 407 requests.
(2.3 billion tonnes)/(4.32 grams): 2.3 billion tonnes = 2.3 trillion kilos = 2.3 quadrillion grams, so ≈ 532 407 407 407 407 requests would be needed for equivalence to the Covid CO2 drop, versus ≈ 912 500 000 000 requests actually made per year.
Calculator output below:
(2.3 × 10⁹ tonnes) / (4.32 g) ≈ 532 407 407 407 407
(2.5 × 10⁹) × 365 = 912 500 000 000
912 500 000 000 / 532 407 407 407 407 ≈ 0.0017, i.e. about 0.17%
See what I mean? Stop ChatGPT and you achieve about 0.17% of the reduction that Covid brought.
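Same arithmetic as a quick Python check, all figures as quoted in the comments above:

```python
# Annual ChatGPT emissions vs. the 2020 Covid CO2 drop, figures from the thread.
g_per_request = 4.32              # grams CO2e per request
covid_drop_g = 2.3e9 * 1e6        # 2.3 billion tonnes -> grams
requests_per_year = 2.5e9 * 365   # 2.5 billion prompts/day

requests_to_match = covid_drop_g / g_per_request   # ~5.32e14
share = requests_per_year / requests_to_match      # ~0.0017

print(f"Requests needed to match the Covid drop: {requests_to_match:,.0f}")
print(f"Actual annual requests as a share of that: {share:.2%}")   # ~0.17%
```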
To put this into perspective: imagine smoking 4.32 grams of weed every day and imagine how that would add up.
It’s more like 0.17% of 4.32 grams of weed; see my response above.
To be fair, they never cared about the environment. Paper is easy to recycle and certainly not the most polluting material to produce.
It was more about saving money, greenwashing, and pushing a conversion towards digital archiving (which is much more efficient than paper).
Please consider the environment before sending me an email, seriously, I won’t read it.
Ooooh, I’m totally adding that to my email signature.
Aaaaaaand done.
If you don’t mind the additional real estate, it would also be great to have a version where “printing” is still there but struck out, for people who aren’t aware of the original.
Had to shrink the font a bit; looks better all on one line.
Awesome. Thanks. Using this!
o7
Is that good? I’m old and have no idea what that means.😆
There’s an entire page for it.
🫡
Haha it’s good! It’s supposed to be a little guy saluting. The o is the head and the 7 is the arm and hand doing a salute pose.
Text generation uses hardly any energy at all, though. Most phones do it locally these days. In fact, it likely takes less energy to generate an email in 5 seconds than it would take for you to type it out manually in 5 minutes with the screen on the whole time.
I want to see the “cease and desist, you may not use my Facebook posts without my express permission” type footers start showing up, but aimed at AI.
I don’t think regular people really understand the power needed for AI. People tend to assume that we just have it, without thinking about where it comes from.
I wonder how the power usage of running an LLM locally compares to playing a modern game at high settings. Can they be very different?
People keep telling us that ai energy use is very low, but at the same time, companies keep building more and more giant power hungry datacenters. Something simply doesn’t add up.
Sure, a small local model can generate text at low power usage, but how useful will that text be, and how many people will actually use it? What I see is people constantly moving to the newest, greatest model, and using it for more and more things, processing more and more tokens. Always more and more.
Each datacenter is set to handle millions of users, so it concentrates all the little requests into very few physical locations.
The tech industry further amplifies things with ambient LLM invocation. You do a random Google search, and it implicitly runs an LLM query unasked. When a user is using an LLM-enabled code editor, it’s making LLM requests every few seconds of typing to drive the autocomplete suggestions. Often it has to submit a new LLM request before the old one has even completed, because the user typed more while the LLM was chewing on the previous input.
So each LLM invocation may be reasonable, but impact-wise they are being concentrated into very few places, and invocations are amplified by the tech industry being overly aggressive about overuse for the sake of ‘ambient magic’.
I don’t think regular people really understand how little 3W per request is. It’s the energy you take in by eating 3 kcal. Or what your WiFi router uses in half an hour. Or your clothes dryer in 5 seconds.
True, but most people don’t realise how little not printing an email ‘helped’ the environment.
It would have been significant if a lot of people did it.
I’m doing my part. Can’t remember the last time I had to print anything.
My printer died, so I go to the library to print when I need to. For 25¢ per color print or 10¢ per black and white print it’s a lot cheaper than buying another printer, plus I’ll really consider if I actually need to print that thing.
“If everyone is littering, it’s not a big deal if I throw the occasional can on the ground”
More like, “If I focus on being an asshole to people throwing cans on the ground, I don’t have to stop using my car”
I miss the days when climate activists didn’t get distracted by small change like GenAI. The big-ticket issues haven’t changed since the beginning of the climate movement: cars, flights, industry (mainly concrete), meat, and heating/AC are what drive climate change. Any movement that polices individual usage with negligible CO2 emissions will fail, because no one likes to be preached at.
I mean, the difference between pointing out the environmental impact of AI versus that of heating/air conditioning, industry, transportation, etc. is that there’s useful output from one, and the other just creates low-quality slop.
Most of the use of AI right now is entirely pointless. It exists purely because of the AI bubble, and eventually companies won’t be burning queries on annoying sales chatbots that the user isn’t even interacting with, or inaccurate search result summaries that you can’t reasonably turn off.
a) Many people swear by it; is it so useless then? It’s a personal question, and the answer is not the same for everyone. ChatGPT is one of the most downloaded apps worldwide, so to assume that all of those people do not gain something from it cannot be right.
b) People do a lot of useless things that all have a climate cost, but no one bats an eye when someone says they watched 2 hours of 4K video, which uses a lot more resources than chatbots.
c) Chatbots that no one interacts with do not consume resources in a meaningful way, since you have to make a request for that; the greeting will be hardcoded in 99% of cases.
d) I agree that the amount of VC money inflates AI usage, but VC money does this with everything: the dotcom bubble, the 2008 crash (housing bubble), the crypto bubble, the NFT bubble… The difference here is that people actually have personal use scenarios, regardless of VC money.
e) I agree that opt-in should be the default. I’m no fan of Google’s bot, but I actually just don’t use Google; I use Mullvad Leta for most things, and I’m waiting for the rollout of the European search index that Qwant and Ecosia created - it went live in France recently and I’m excited to try it out when it starts here!
You know what’s funny? I don’t even use ChatGPT, I rely on locally running models - and I’m pretty sure my GPU is less efficient than the setup in a data center.
For the climate, sure, though concentrating so much demand in such a small geography does create outsized impacts on that local area in terms of local energy and water.
I mean, depends on the email. If you spend more time answering it yourself than the AI would, you almost certainly emit more greenhouse gases, use more fresh water and electricity, and burn more calories. Depending on the email, you might have also decreased net happiness generally.
Do we care about the environment or not? Please, oppose datacenters in deserts and stop farming alfalfa where water supplies are low. But your friend using AI to answer an email that could have been a Google search is not the problem.
This is the main reason I am reticent about using ai. I can get around its functional limitations, but I need to know they have brought the energy usage down.
It’s not that bad when it’s just you fucking around having it write fanfics instead of doing something more taxing, like playing an AAA video game or, idk, running a microwave or whatever it is normies do. Training a model is very taxing, but running one isn’t, and the opportunity cost might even be net positive if you tend to use your GPU a lot.
It becomes more of a problem when everyone is doing it when it’s not needed, like reading and writing emails. There’s no net positive, it’s very large-scale usage, and brains are a hell of a lot more efficient at it. This use case has gotta be one of the dumbest imaginable, all while making people legitimately dumber over time.
Oh, you’re talking about running locally, I think. I play games on my Steam Deck as my laptop could not handle it at all.
Your Steam Deck at full power (15W TDP by default) equals 5 ChatGPT requests per hour. Do you feel guilty yet? No? And you shouldn’t!
Yup, and the Deck can do stuff at an astoundingly low wattage, in the 3W to 15W range. Meanwhile there are GPUs that can run at like 400W-800W, like when people used to run two 1080s in SLI. I always found it crazy when I saw a guy running a system burning as much electricity as a weak microwave just to play a game, lol. Kept his house warm, tho.
How much further down than 3W/request can you go? I hope you don’t let your microwave run 10 seconds longer than optimal, because that’s exactly the amount of energy we are talking about. Or running a 5W nightlight for a bit over half an hour.
LLMs and image generation are not what kills the climate. What does is flights, cars, meat, and bad insulation of houses leading to high energy usage in winter. Even if we turned off all GenAI, it wouldn’t even leave a dent compared to those behemoths.
Where is this 3W from? W isn’t even an energy unit, but a power unit.
Sorry, it should be 3 Wh, you are correct of course. The 3 Wh figure comes from here:
- Aug 2023: 1.7 - 2.6 Wh - Towards Data Science
- Oct 2023: 2.9 Wh - Alex de Vries
- May 2024: 2.9 Wh - Electric Power Research Institute
- Feb 2025: 0.3 Wh - Epoch AI
- May 2025: 1.9 Wh for a similar model - MIT Technology Review
- June 2025: Sam Altman himself claims it’s about 0.3 Wh
Every source here stays below 3 Wh, so it’s reasonable to use 3 Wh as an upper bound. (I wouldn’t trust Altman’s 0.3 Wh tho lol)
Thanks for linking the sources. I will take a look into that
Sorry, there were two conversations and I’m getting confused. Are you talking local, where I don’t have the overhead? Or using them online, where I am worried about the energy usage?
A local LLM probably costs about as much energy as the online LLM, but that usage is so widely distributed that it’s not as impactful to any specific location.
That’s online, i.e. what is used in the data centers per request. Local is probably not so different, depending on the device - different devices have different architectures which might be more or less optimal, but the cooling is passive. If it cost much more, it wouldn’t be mostly free.
This is a pretty well researched post, he made a cheat sheet too ;-)
Yeah, the thing is it’s not comparing each request to an airline flight, it’s comparing each one to a web search. Its utility is not that much greater; it’s just a convenience. It’s like with Bitcoin, where it’s about energy per transaction compared to a credit card transaction. I mean, I search the web every day a whole bunch, and way more when I’m working.
Whether it’s useful or not is another discussion, but if you use an LLM to write an email in 2 minutes that you would take 10 minutes to write manually (including searches and whatever), you actually generate LESS CO2 than the manual process:
- PC, 200 W for 10 min: ~33 Wh
- Monitor, 30 W for 10 min: 5 Wh
- Google searches, let’s say 3, at about 0.3 Wh/search: ~1 Wh
equals ~40 Wh
compared to:
- PC, 200 W for 2 min: ~7 Wh
- Monitor, 30 W for 2 min: 1 Wh
- ChatGPT, 3 requests (normally you would expect fewer requests than Google searches, but for the sake of argument…): 9 Wh
equals ~17 Wh.
And that is excluding many other factors like general energy costs for infrastructure, which tilt the calculation further in ChatGPT’s favor.
EVERYTHING we do creates emissions one way or another - we create emissions simply by existing too; it’s important to put things into perspective. Both examples above compare to running a 1000 W microwave for either about 2:20 or 1:05 minutes. You wouldn’t be shocked by those values.
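A minimal sketch of that comparison in Python; the wattages and per-search/per-request figures are the same rough estimates used above, not measured values:

```python
# Manual email (10 min) vs. LLM-assisted email (2 min), in watt-hours.
def wh(watts, minutes):
    return watts * minutes / 60

manual = wh(200, 10) + wh(30, 10) + 3 * 0.3   # PC + monitor + 3 Google searches
assisted = wh(200, 2) + wh(30, 2) + 3 * 3.0   # PC + monitor + 3 ChatGPT requests

for label, e in (("Manual", manual), ("Assisted", assisted)):
    print(f"{label}: ~{e:.0f} Wh ≈ {e / 1000 * 60:.1f} min of a 1000 W microwave")
# -> Manual ~39 Wh (~2.4 min), Assisted ~17 Wh (~1.0 min)
```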
This would not save anything, as you would not use your monitor and PC 8 minutes less in that scenario. Or at least I would not. It’s sorta moot, as generating an email is definitely not something I would use ai for. Granted, I really doubt I would spend 10 minutes on an email unless it was complicated and I was keeping it open while doing something else as I put it together. Any savings would assume the ai-generated email did not result in more activity than one you answered yourself. To have savings you would genuinely have to use the resource less that day or week or such.
Well, that depends on the workload and the employer. If you are one of the lucky ones where it’s just important that shit gets done on time, it would result in lower usage. That’s on the employer, not on the LLM.
3 Wh/request (4 Wh if you include training the model) is nothing compared to what we use in our everyday life, and it’s even less when looking at what other activities consume. No one would have an issue with you running a blender for 30 seconds, even tho it’s the same energy usage as a chatbot request.
You can run one on your PC locally so you know how much power it is consuming
I already stress my laptop with what I do, so I doubt I will do that anytime soon. I tend to use pretty old hardware though: 5 years plus, honestly closer to 10.
Can’t remember the last time my hardware was younger than 10 years old 😂…😭
Mine’s actually just shy: 2017 manufacture.
It’s the same as playing a 3d game. It’s a GPU or GPU equivalent doing the work. It doesn’t matter if you are asking it to summarize an email or play Red Dead Redemption.
I mean if every web search I do is like playing a 3d game then I will stick with web searches. 3d gaming is the most energy intensive thing I do on a computer.
But it’s everywhere now, and it’s almost impossible to use mainstream services without it being used. I can’t just go to Google anymore, type a search query, and get a reply without AI bs being involved. How long before it’s baked into the Gmail compose window and it does it without me wanting it to?
Then we stop using it.
I think we need a Rule 34 of open-source programs:
Rule 34: If it exists, there is an open-source version of it
i) If no open-source version exists, it is currently being created
ii) If no open-source version is being created, you must create it yourself
Doesn’t gmail already do this? I seem to remember there being ‘suggested response’ options before I turned it off in the settings that were definitely AI generated. That option being presented to me creeps me out because you can’t know if what you’re receiving was actually written by the person sending it.
Thanks. You reminded me to turn Gemini off. Did that once and it came back on.
You need to generate about 2 million emails (using Qwen) to match the carbon emissions of one transatlantic flight; personal use is definitely not the power-hungry shit you imagine.
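For scale, a rough sketch of what that claim implies per email; the ~1 tonne of CO2e per passenger for a one-way transatlantic flight is my own ballpark assumption, not a figure from the comment above:

```python
# What "2 million locally generated emails = one transatlantic flight" implies per email.
flight_co2_g = 1.0e6          # assumed ~1 tonne CO2e per passenger, one way (ballpark)
emails_per_flight = 2.0e6     # figure from the comment above

print(f"Implied footprint: ~{flight_co2_g / emails_per_flight:.1f} g CO2e per email")
# -> ~0.5 g per locally generated email
```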
I am a small part of the problem, so I am not a problem.
This post linked elsewhere in the thread was pretty insightful.
I don’t know what amount of energy and water LLMs or image generation use, but you vastly overestimate it if you react like this.
If you don’t have your screen on power save after 15 seconds of non-use, rethink it - 8 minutes of screen time assuming a 30W monitor equals about 1 ChatGPT request, and that’s including the training of the model and the production of the hardware it’s running on.
If it cost any real amount of money, don’t you think that people would have to pay for that? ChatGPT has 400m users, and only 11m actually pay for it.
E: Something else to set into relation: Charging your phone for about 40 minutes using slow charging (5W) is one request. Water use? 10-25ml water per request - a 500ml bottle lasts you 20-50 questions.
Gaming? ChatGPT uses the energy of 20,000 households (not bad for serving 400m users and around 1b requests/day). Fortnite alone uses more than 400,000 households’ worth, and no one preaches into my ear to stop playing Fortnite because it’s bad for the climate. (And I don’t play Fortnite lol.)
I never had a car, I have flown 4 times in my life, and I rarely eat meat. I could generate 10,000 requests per day if I cared to and still wouldn’t come close to the baseline wastefulness of an American household.
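A quick sketch of the equivalences above, taking roughly 3-4 Wh and up to ~25 ml of water per request as quoted earlier in the thread (all rough figures):

```python
# Everyday equivalents of one ChatGPT request, using the rough figures from the thread.
wh_low, wh_high = 3.0, 4.0    # Wh per request, without/with training and hardware overhead
ml_per_request = 25           # upper end of the quoted 10-25 ml range

print(f"30 W monitor:  {wh_low / 30 * 60:.0f}-{wh_high / 30 * 60:.0f} min of screen time")
print(f"5 W charging:  {wh_low / 5 * 60:.0f}-{wh_high / 5 * 60:.0f} min of slow phone charging")
print(f"500 ml bottle: ~{500 / ml_per_request:.0f} requests of water use")
# -> ~6-8 min of screen, ~36-48 min of charging, ~20 requests per bottle
```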
The same as every new service over the past 20 years. Start with free, then when they’re hooked add the advertising and paywalls and ramp up the enshittification to the max. You need to grab market share with a loss leader, dominate, and become the de facto standard before you turn your users into money providers.
But even with a loss leader, the price couldn’t stay this low if the cost per request were prohibitive; ChatGPT subscriptions cost $20/month. API pricing for the most expensive option is $10k per 1 million tokens, so it’s a buck per 100 tokens.
An AI data center in Texas was using 460 million gallons of water, so much that residents were told to cut back on showering to accommodate it.
Yeah, it’s stupid that they built it where it’s not supported by the necessary infrastructure. BTW, do you know what happens after they use it to cool the servers? It gets put back into the river; it doesn’t disappear. This situation is an infrastructure issue, not an AI issue.
I think they’re pointing out the 180-degree turn in so-called “priorities.” Companies once claimed to want something done for the “sake of the environment,” but now they have no problem using resource-intensive AI without any acknowledgement of how bad it is for the environment.
Both things (avoiding LLM-written mail and the paper printouts) are meaningless greenwashing gestures in comparison to, for example, the additional car use due to “return to office” bullshit.
That’s true. I don’t disagree with you, I just think we’re reading this post differently.
Companies lie about their reasons all the time, especially when they claim they’re doing something for the environment. I interpreted this post as another example pointing out their hypocrisy, not as “this is the one and only thing companies lie about.”
The basic assumption of the post is that GenAI is particularly energy- and water-hungry, which is not true. The energy my computer used while commenting under this post, which I can estimate at about 300 Wh since my first comment, equals about 300 requests. It would have been more climate-friendly to generate my responses with ChatGPT instead of typing them out.
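To make that last estimate concrete, a rough sketch; the ~100 W draw and ~3 hours are my own hypothetical numbers to arrive at roughly 300 Wh, and the per-request range comes from the sources listed earlier in the thread:

```python
# Typing comments in this thread vs. generating them, rough energy comparison.
computer_draw_w = 100      # hypothetical average draw while reading/typing
hours_commenting = 3       # hypothetical time spent in the thread

typing_wh = computer_draw_w * hours_commenting          # ~300 Wh
wh_per_request_low, wh_per_request_high = 1.0, 3.0      # range from the sources above

print(f"Typing: ~{typing_wh} Wh ≈ "
      f"{typing_wh / wh_per_request_high:.0f}-{typing_wh / wh_per_request_low:.0f} ChatGPT requests")
# -> roughly 100-300 requests, depending on which per-request estimate you use
```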