ryujin470@fedia.io to Technology@beehaw.org · 26 days ago
OpenAI releases a free GPT model that can run on your laptop (www.theverge.com)
Seefra 1@lemmy.zip · 26 days ago
Isn't that true for most models until someone distils and quantises them so they can run on common hardware?
fuckwit_mcbumcrumble@lemmy.dbzer0.com · 26 days ago
This is the internet, we're only allowed to be snarky here.

Ghoelian@lemmy.dbzer0.com · 26 days ago (edited)
I mean yeah, but that doesn't make the title any more true.

CyberSeeker@discuss.tchncs.de · 25 days ago
Yes, but 20 billion parameters is too much for most GPUs, regardless of quantization. You would need at least 14GB, and even that's unlikely without offloading major parts to the CPU and system RAM (which kills the token rate).
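A rough back-of-the-envelope sketch of where a figure like 14 GB comes from (my own illustrative numbers, not from the thread): weight memory is roughly parameter count times bytes per parameter, and runtimes then add overhead for the KV cache, activations, and framework buffers on top of the weights.

```python
# Back-of-the-envelope VRAM estimate for the weights of a
# 20-billion-parameter model at common precisions.
# These are weights-only figures; real inference adds several
# GiB of overhead (KV cache, activations, runtime buffers).

PARAMS = 20e9  # 20 billion parameters

BYTES_PER_PARAM = {
    "fp16": 2.0,   # half precision
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization
}

for precision, bytes_per_param in BYTES_PER_PARAM.items():
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{precision}: ~{gib:.1f} GiB for weights alone")
```

Even at 4-bit, the weights alone come to roughly 9.3 GiB, so once overhead is included a card in the 12–16 GB class is plausibly the floor, which is consistent with the comment above.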