You don’t need to pirate OpenAI. I’ve built the AI Horde so y’all can use it without any workarounds or shenanigans, and you can use your PCs to help others as well.
Here’s an LLM client you can run directly in your browser: https://lite.koboldai.net
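If you’d rather script it than use the web UI, here’s a rough sketch of talking to the Horde’s async text-generation API from Python. The endpoint path, field names, and the all-zeros anonymous key are my reading of the public docs, so double-check them against the API reference before relying on this:

```python
import json
import urllib.request

# Assumed endpoint; verify against the AI Horde API docs.
HORDE_URL = "https://aihorde.net/api/v2/generate/text/async"

def build_request(prompt, model=None, max_length=120):
    """Build the JSON body for an async text-generation request."""
    body = {
        "prompt": prompt,
        "params": {"max_length": max_length, "max_context_length": 1024},
    }
    if model:
        # e.g. "airoboros-65B-gpt4-1.4-GPTQ", as in the thread above
        body["models"] = [model]
    return body

if __name__ == "__main__":
    req = urllib.request.Request(
        HORDE_URL,
        data=json.dumps(build_request("What is the AI Horde?")).encode(),
        headers={
            "apikey": "0000000000",  # anonymous key (lowest queue priority)
            "Content-Type": "application/json",
        },
    )
    # urllib.request.urlopen(req) should return a JSON body with an "id";
    # you then poll /v2/generate/text/status/<id> until "done" is true.
```

Registered users get their own API key and earn kudos for serving requests, which bumps their queue priority.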
I had an interesting result.
I posed a simple question, like I did with all the other AIs, and got “airoboros-65B-gpt4-1.4-GPTQ for 13 kudos in 369.6 seconds”. It was a bit of a wait, but I understand why.
It gave me a word-for-word comment on what I assume is a blog post from a Melissa. The topic was related, just barely.
Which LLM do you recommend for questions about a subject? I looked in the FAQ to see if there was a guide to the choices.
Unfortunately I’m not an expert in LLMs so I don’t know. I suggest you contact the KoboldAI community and they should be able to point you in the right direction.
Thank you. Will do.
I kept playing and tried the scenarios and was getting closer.
Just tested. Thanks for building and sharing!
Aren’t KoboldAI models on par with GPT-3? Why not just use ChatGPT then?
AI Horde looks dope for image generation though!
Kobold is a program to run local LLMs; some seem on par with GPT-3, but normally you’re gonna need a very beefy system just to run them slowly.
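The “beefy system” requirement is mostly memory: the weights have to fit in RAM or VRAM. A back-of-envelope sketch, assuming the dominant cost is simply parameters times bytes per parameter (quantization schemes like the GPTQ in the model name above cut this to roughly 4 bits per weight):

```python
def weight_memory_gb(params_billion, bits_per_param):
    """Approximate memory for model weights alone, ignoring
    activations and the KV cache (which add further overhead)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# A 65B model like the one mentioned in this thread:
fp16_gb = weight_memory_gb(65, 16)  # ~130 GB: multi-GPU territory
q4_gb = weight_memory_gb(65, 4)     # ~32.5 GB: still over one 24 GB card
```

This is why smaller 7B–13B models are the usual choice for home hardware, and why they trail the giant hosted models in quality.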
The benefit is rather clear: less centralized and free from strict policies. But GPT-3 is also miles behind GPT-3.5; exponential growth ftw. I have yet to see something as good and fast as ChatGPT.
I’ve always wondered how that’s possible. No way they’ve got some crazy software optimisations that nobody else can replicate, right? They’ve gotta just be throwing a ridiculous amount of compute at every request?
Well, there are two things.
First there is speed, for which they do indeed rely on many thousands of super high-end datacenter Nvidia GPUs. And since the $10 billion investment from Microsoft they’ve likely expanded that capacity. I’ve read somewhere that ChatGPT costs about $700,000 a day to keep running.
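That figure allows a quick sanity check on per-request cost. The daily request count below is purely hypothetical (I’m inventing it for illustration; OpenAI doesn’t publish one), but it shows the arithmetic:

```python
daily_cost_usd = 700_000      # the widely repeated estimate from the thread
daily_requests = 10_000_000   # hypothetical load, for illustration only

# At those assumed numbers, each request costs cents, not micro-cents:
cost_per_request = daily_cost_usd / daily_requests  # 0.07 USD
```

Even with the made-up denominator, the point stands: serving a huge model is orders of magnitude more expensive per request than a typical web API call.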
There are a few other tricks and caveats here though, like decreasing the quality of the output when there is high load.
Second, quality of output, for which they do deserve a lot of credit, because they train the models really well and continuously manage to improve their systems to produce even higher-quality and more creative outputs.
I don’t think GPT-4 is the biggest model out there, but it does appear to be the best one available.
I can run a small LLM at home that is much, much faster than ChatGPT… that is, if I want to generate some unintelligent nonsense.
Likewise, there might be a way to redesign GPT-4 to run on a consumer graphics card with high-quality output… if you don’t mind waiting a week for a single character to be generated.
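“A week per character” is hyperbole, but the direction is right: once the weights don’t fit in memory, every token has to stream them from disk. A back-of-envelope with made-up numbers (GPT-4’s real size is not public, and I’m assuming naive dense inference that reads every weight once per token):

```python
model_size_gb = 800       # pure guess; GPT-4's actual size is unpublished
ssd_read_gb_per_s = 0.5   # a modest SATA SSD

# Dense inference touches every weight once per token, so the lower
# bound on latency is simply model size over read bandwidth:
seconds_per_token = model_size_gb / ssd_read_gb_per_s  # 1600 s per token
```

At roughly 27 minutes per token under these assumptions, a single paragraph would take days, which is why nobody seriously runs frontier-scale models off consumer storage.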
I actually think some of the open-source, locally runnable LLMs like LLaMA, Vicuna and Orca are much more impressive if you judge them on quality versus power requirements.
Checking it out. How come I can’t paste my API key into the field on the options tab? I gotta type it out?
Which client? KoboldAI Lite?
https://dbzer0.itch.io/lucid-creations
The embedded browser version is just a demo. Just download and run the local executable and it should work normally.
oh ok, cool! Thanks!
This project looks really interesting.