In the late 2000s there was a push by a lot of businesses to stop printing emails, and people used to add a ‘Please consider the environment before printing this email.’ footer.
Considering how bad LLMs/‘AI’ are with power consumption and water usage, a new useless email footer tag should be made.
This is the main reason I am reticent about using AI. I can get around its functional limitations, but I need to know they have brought the energy usage down.
It’s not that bad when it’s just you fucking around having it write fanfics instead of doing something more taxing, like playing an AAA video game or, idk, running a microwave or whatever it is normies do. Training a model is very taxing, but running one isn’t, and the opportunity cost might even be net positive if you tend to use your GPU a lot.
It becomes more of a problem when everyone is doing it when it’s not needed, like reading and writing emails. There’s no net positive, it’s very large scale usage, and brains are a hell of a lot more efficient at it. This use case has gotta be one of the dumbest imaginable, all while making people legitimately dumber the more they use it.
Oh, you are talking locally, I think. I play games on my Steam Deck as my laptop could not handle it at all.
Your Steam Deck at full power (15 W TDP by default) equals 5 ChatGPT requests per hour. Do you feel guilty yet? No? And you shouldn’t!
Yup, and the deck can do stuff at an astoundingly low wattage, like the 3 W to 15 W range. Meanwhile there are GPUs that can run at like 400 W–800 W, like when people used to run two 1080s in SLI. I always found it crazy when I saw a guy running a system burning as much electricity as a weak microwave just to play a game, lol. Kept his house warm, tho.
How much further down than 3 Wh/request can you go? I hope you don’t let your microwave run 10 seconds longer than optimal, because that’s roughly the amount of energy we are talking about. Or running a 5 W night light for a bit over half an hour.
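The small-appliance equivalents above are just power times time. A minimal sketch using the thread’s rough figures (1000 W microwave, 5 W night light, ~3 Wh per request):

```python
# Energy (Wh) = power (W) * time (h). All figures are the thread's rough assumptions.
def wh(power_watts: float, seconds: float) -> float:
    """Watt-hours used by a device drawing power_watts for `seconds`."""
    return power_watts * seconds / 3600

microwave_extra = wh(1000, 10)    # microwave running 10 s longer than needed
night_light = wh(5, 36 * 60)      # 5 W night light for 36 minutes
print(round(microwave_extra, 2))  # 2.78 -> roughly one ~3 Wh request
print(night_light)                # 3.0
```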
LLMs and image generation are not what kills the climate. What does: flights, cars, meat, and bad insulation of houses leading to high energy usage in winter. Even if we turned off all GenAI, it wouldn’t leave a dent compared to those behemoths.
Where is this 3 W from? W isn’t even an energy unit, it’s a power unit.
Sorry, it should be 3 Wh, you are correct of course. The 3 Wh comes from here:
Every source here stays below 3 Wh, so it’s reasonable to use 3 Wh as an upper bound. (I wouldn’t trust Altman’s 0.3 Wh tho, lol)
Thanks for linking the sources. I will take a look into that
Sorry, there were two conversations and I’m getting confused. Are you talking local, where I don’t have the overhead? Or using them online, where I am worried about the energy usage?
A local LLM probably costs about as much as the online LLM per request, but local usage is so widely distributed that it’s not as impactful to any specific location.
That’s online: what is used in the data centers per request. Local is probably not so different, depending on the device; different devices have different architectures which might be more or less optimal, but the cooling is passive. If it cost more, it wouldn’t be mostly free.
This is a pretty well-researched post, and he made a cheat sheet too ;-)
Yeah, the thing is it’s not comparing each request to an airline flight, it’s comparing each one to a web search. Its utility is not that much greater, it’s just a convenience. It’s like with Bitcoin, where it’s about energy per transaction compared to a credit card transaction. I mean, I search the web every day a whole bunch, and way more when I’m working.
Whether it’s useful or not is another discussion, but if you used an LLM to write an email in 2 minutes that you would otherwise spend 10 minutes on (including searches and whatever), you actually generate LESS CO2 than the manual process:
10 minutes of manual work equals ~40 Wh,
compared to:
2 minutes plus the LLM request, which equals ~17 Wh.
And that is excluding many other factors, like general energy costs for infrastructure, which tilt the calculation further in ChatGPT’s favor.
EVERYTHING we do creates emissions one way or another; we create emissions simply by existing, too. It’s important to put things into perspective. Both examples above compare to running a 1000 W microwave for either 2:20 min or 1:05 min. You wouldn’t be shocked by those values.
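As a sanity check on that microwave comparison (1000 W assumed, as above; the 2:20 and 1:05 figures in the thread are rounded), you can convert the watt-hour estimates back into microwave run time:

```python
# Seconds a microwave must run to burn a given number of watt-hours.
def microwave_seconds(watt_hours: float, microwave_watts: float = 1000) -> float:
    return watt_hours * 3600 / microwave_watts

manual_email = microwave_seconds(40)  # the ~40 Wh manual-email estimate
llm_email = microwave_seconds(17)     # the ~17 Wh LLM-assisted estimate
print(manual_email)  # 144.0 s, i.e. about 2:24
print(llm_email)     # 61.2 s, i.e. about 1:01
```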
This would not save anything, as you would not use your monitor and PC 8 minutes less in that scenario. Or at least I would not. It’s sorta moot, as generating an email is definitely not something I would use AI for. Granted, I really doubt I would spend 10 minutes on an email unless it was complicated and I was keeping it open while doing something else as I put it together. Any savings would also assume the AI-generated email did not result in more activity than one you answered yourself. To have savings you would genuinely have to use the resource less that day or week.
Well, that depends on the workload and the employer. If you are one of the lucky ones where it’s just important that shit gets done on time, it would result in lower usage. That’s on the employer, not on the LLM.
3 Wh/request (4 Wh if you include training the model) is nothing compared to what we use in our everyday life, and it’s even less when looking at what other activities consume. No one would have an issue with you running a blender for 30 seconds, even tho it’s the same energy usage as a chatbot request.
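The blender comparison works out if you assume a ~360 W blender (the wattage is my assumption, a typical rating, not from the thread):

```python
REQUEST_WH = 3                      # per-request upper bound from the linked sources
BLENDER_W = 360                     # assumed typical blender power draw

blender_wh = BLENDER_W * 30 / 3600  # 30 seconds of blending
print(blender_wh)                   # 3.0 -> same ballpark as one chatbot request
```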
See, again you compare something someone might do once a day, and most people do hardly ever: using a blender. With something used many times constantly throughout the day: web searching. Even before AI, data centers were a massive use of energy. Now I’m not the type to say throw it all away, but I will be careful in my usage till I’m sure it’s worth it. This is going to require the vendors to put out data on energy usage. It’s new enough that I’m sure more and better chips will be able to reduce the energy it takes. You have to realize you’re talking to someone who walks if I can, then bikes as a second option, and finally takes public transit. I avoid driving and planes unless I absolutely have to. I’m in tech, so I will be using it, and it will likely follow the same curve as previous technology; but maybe not, given smartphones and apps were the most recent things before AI and I use those only if I absolutely have to.
You can run one on your PC locally, so you know how much power it is consuming.
I already stress my laptop with what I do, so I doubt I will do that anytime soon. I tend to use pretty old hardware though, 5 years plus. Honestly closer to 10.
Can’t remember the last time my hardware was younger than 10 years old 😂…😭
Mine’s actually just shy: 2017 manufacture.
It’s the same as playing a 3D game. It’s a GPU or GPU equivalent doing the work. It doesn’t matter if you are asking it to summarize an email or playing Red Dead Redemption.
I mean, if every web search I do is like playing a 3D game, then I will stick with web searches. 3D gaming is the most energy-intensive thing I do on a computer.