GPU | VRAM | Price (€) | Bandwidth (TB/s) | FP16 TFLOPS | €/GB | €/(TB/s) | €/TFLOPS |
---|---|---|---|---|---|---|---|
NVIDIA H200 NVL | 141GB | 36284 | 4.89 | 1671 | 257 | 7423 | 21 |
NVIDIA RTX PRO 6000 Blackwell | 96GB | 8450 | 1.79 | 126.0 | 88 | 4720 | 67 |
NVIDIA RTX 5090 | 32GB | 2299 | 1.79 | 104.8 | 71 | 1284 | 22 |
AMD RADEON 9070XT | 16GB | 665 | 0.6446 | 97.32 | 41 | 1031 | 7 |
AMD RADEON 9070 | 16GB | 619 | 0.6446 | 72.25 | 38 | 960 | 8.5 |
AMD RADEON 9060XT | 16GB | 382 | 0.3223 | 51.28 | 23 | 1186 | 7.45 |
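For anyone who wants to sanity-check the ratios or drop in their own quotes, here’s a small Python sketch that recomputes the derived columns from the raw specs. The numbers are the ones from the table; street prices especially will drift.

```python
# Recompute the per-euro columns of the table above from the raw specs.
# Prices are the ones used in the table; swap in your own quotes.
cards = [
    # (name, vram_gb, price_eur, bandwidth_tb_s, fp16_tflops)
    ("NVIDIA H200 NVL",               141, 36284, 4.89,   1671),
    ("NVIDIA RTX PRO 6000 Blackwell",  96,  8450, 1.79,   126.0),
    ("NVIDIA RTX 5090",                32,  2299, 1.79,   104.8),
    ("AMD RADEON 9070XT",              16,   665, 0.6446,  97.32),
    ("AMD RADEON 9070",                16,   619, 0.6446,  72.25),
    ("AMD RADEON 9060XT",              16,   382, 0.3223,  51.28),
]

for name, vram, price, bw, tflops in cards:
    print(f"{name}: {price / vram:.0f} €/GB, "
          f"{price / bw:.0f} €/(TB/s), {price / tflops:.1f} €/TFLOPS")
```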
This post is part “hear me out” and part asking for advice.
Looking at the table above, AI GPUs are a pure scam, and it would make much more sense (at least going by these numbers) to use gaming GPUs instead, either through a Frankenstein build of PCIe switches or a high-bandwidth network.
So my question is: has anybody built a similar setup, and what was their experience? What’s the expected overhead/performance hit, and can it be made up for by just having way more raw performance for the same price?
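To put a rough number on the overhead myself: with Megatron-style tensor parallelism there are two all-reduces of the activations per transformer layer, so for single-stream decoding the interconnect latency, not the bandwidth, tends to be the cap. A crude sketch, assuming a Llama-70B-shaped model (80 layers, hidden size 8192) split across 8 cards; the link numbers are guesses, not measurements:

```python
# Back-of-envelope communication cost per decoded token under tensor
# parallelism. Everything here is an assumption for illustration --
# plug in your own model shape and link specs.
layers     = 80        # transformer layers (Llama-70B-ish, assumed)
hidden     = 8192      # hidden size (assumed)
n_gpus     = 8
bytes_fp16 = 2

# Megatron-style TP: one all-reduce after attention, one after the MLP.
allreduces_per_token = 2 * layers
msg_bytes = hidden * bytes_fp16            # activation for one token
ring = 2 * (n_gpus - 1) / n_gpus           # ring all-reduce traffic factor

# (bytes/s, per-all-reduce latency). Counting a single latency hit per
# all-reduce is optimistic -- a real ring pays ~2*(N-1) hops.
links = {
    "PCIe 4.0 x16": (32e9,   5e-6),
    "100 GbE":      (12.5e9, 30e-6),
}
for name, (bw, lat) in links.items():
    t = allreduces_per_token * (ring * msg_bytes / bw + lat)
    print(f"{name}: ~{t * 1e3:.2f} ms comm/token -> at most {1 / t:.0f} tok/s")
```

The message per all-reduce is only ~16 KB during decode, so the latency term dominates and gets worse the more boxes you chain together; that’s where I’d expect the Frankenstein setups to lose most of their paper advantage.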
Efficiency still matters very much when self-hosting. You need to consider power usage (do you have enough amps in your service to power a single GPU? Probably. What about 10? Probably not) and heat (it’s going to force you to run more A/C in the summer; do you have enough capacity in your service to power an A/C on top of your massive pile of GPUs? Not likely).
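To put numbers on the amps point, a quick sketch assuming 230 V EU circuits, a 16 A breaker, and round per-card figures:

```python
# Will a GPU stack fit a household circuit? Round numbers, all assumed --
# check your cards' real TDP and your local electrical code.
volts        = 230    # typical EU circuit
breaker_amps = 16     # common EU breaker
gpu_watts    = 300    # per card, roughly a 9070 XT under load
other_watts  = 300    # CPU, drives, fans, PSU losses

for n_gpus in (1, 4, 10):
    amps = (n_gpus * gpu_watts + other_watts) / volts
    # 80% continuous-load derating is a common rule of thumb.
    verdict = "fine" if amps <= 0.8 * breaker_amps else "trips the breaker"
    print(f"{n_gpus:>2} GPUs: {amps:4.1f} A -> {verdict}")
```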
Homes are not designed for huge amounts of hardware. I think a lot of self-hosters (including my past self) can forget that in their excitement about the hobby. Personally, I’m just fine not running huge models at home. I can get by with models that run on a single GPU, and even if I had more GPUs in my server, I don’t think the results (which would still contain plenty of hallucinations) would be worth the power cost, the strain on my A/C, and the risk of an electrical overload.