I’m used to hanging around on !anarchychess@sopuli.xyz, if that tells you anything.
I am really sorry I pissed you all off. I just recently switched on a whim while I was getting super into being a Windows power user, and I swear I have nothing but love <3 I saw a really cool Hyprland interface; it was fast and beautiful. I dig that. I installed it, and apart from work I’ve only used Windows as a virtual desktop 3 times in the month I’ve been doing this.
Ok, guys, I am sorry. I was actually looking for a different meme with more of a “heck yeah” attitude, but then I stumbled onto this template and thought it’d be hilarious. I sort of made the switch recently and I’ve learned a lot. I don’t wanna go back.
I also thought we were doing old memes today or something?
Well, there are two things.
First, there is speed, for which they do indeed rely on many thousands of super-high-end industrial Nvidia GPUs. And since the $10 billion investment from Microsoft, they have likely expanded that capacity. I’ve read somewhere that ChatGPT costs about $700,000 a day to keep running.
There are a few other tricks and caveats here though, like decreasing the quality of the output under high load.
For that quality of output they do deserve a lot of credit, because they train the models really well and continuously manage to improve their systems to produce even higher-quality and more creative outputs.
I don’t think GPT-4 is the biggest model out there, but it does appear to be the best one available.
I can run a small LLM at home that is much, much faster than ChatGPT… that is, if I want to generate some unintelligent nonsense.
Likewise, there might be a way to redesign GPT-4 to run on a consumer graphics card with high-quality output… if you don’t mind waiting a week for a single character to be generated.
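To put a rough number on that, here is a back-of-envelope estimate. Every figure is an illustrative assumption (a rumored parameter count, a typical NVMe read speed), not a measurement:

```python
# Back-of-envelope: why a GPT-4-class model would crawl on one consumer GPU.
# Every number here is an illustrative assumption, not a measurement.
PARAMS = 1e12            # assume ~1 trillion parameters (rumored scale)
BYTES_PER_PARAM = 2      # fp16 weights
VRAM_BYTES = 24e9        # a high-end consumer card (24 GB)
DISK_BW = 3e9            # ~3 GB/s NVMe sequential read

weights_bytes = PARAMS * BYTES_PER_PARAM        # ~2 TB of weights
# Each generated token needs one full pass over the weights; whatever
# doesn't fit in VRAM must be streamed from disk for every single token.
streamed_bytes = weights_bytes - VRAM_BYTES
seconds_per_token = streamed_bytes / DISK_BW    # disk-bound lower bound

print(f"~{weights_bytes / 1e12:.0f} TB of weights, "
      f"~{seconds_per_token / 60:.0f} minutes per token at best")
```

A week per character is hyperbole, but even this optimistic disk-bound floor lands at minutes per token, i.e. unusable in practice.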
I actually think some of the open-source, locally runnable LLMs like LLaMA, Vicuna, and Orca are much more impressive if you judge them on quality versus power requirements.
Kobold (KoboldAI) is a program to run local LLMs; some seem on par with GPT-3, but normally you’re gonna need a very beefy system just to run them slowly.
The benefit is rather clear: less centralized and free from strict policies. But GPT-3 is also miles away from GPT-3.5. Exponential growth FTW. I have yet to see something as good and fast as ChatGPT.
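The “beefy system” requirement mostly comes down to fitting the weights in RAM/VRAM, which scales with parameter count and quantization. A rough sketch (`model_size_gb` is just an illustrative helper, and the numbers ignore activations and KV cache, so real usage is somewhat higher):

```python
# Approximate memory needed just to hold an LLM's weights.
def model_size_gb(params_billion, bits_per_weight):
    """Weight storage only; activations and KV cache add more on top."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# common LLaMA-family sizes at fp16, int8, and 4-bit quantization
for params in (7, 13, 33):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{model_size_gb(params, bits):.1f} GB")
```

This is why quantization matters so much: a 13B model at 4-bit is roughly 6.5 GB and can squeeze onto a consumer card, while the same model at fp16 (~26 GB) needs server-class hardware.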
And here I thought “pretty epic” seemed rather weird to me. I’ll have to check on desktop later.
You should probably add a link. I already know this happened, but it still reads like a joke.
Truly the weirdest timeline.
Time to implement a new checkbox: “remember the state of the remember me checkbox”.
Jokes aside, this probably requires injecting some code or a script into the webpage. Maybe there’s a browser extension that can do this.
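A userscript-style sketch of what that injection could look like: persisting the checkbox’s own state. The selector and storage key are hypothetical; something like Tampermonkey could run this on the actual page:

```javascript
// Persist a checkbox's state and restore it on page load.
// 'storage' is passed in so this works with localStorage or any stand-in.
function bindPersistentCheckbox(checkbox, storage, key) {
  // restore the last saved state (defaults to unchecked)
  checkbox.checked = storage.getItem(key) === "true";
  // save every toggle
  checkbox.addEventListener("change", () => {
    storage.setItem(key, String(checkbox.checked));
  });
}

// hypothetical usage in a userscript on the real page:
// bindPersistentCheckbox(document.querySelector("#remember-me"),
//                        localStorage, "rememberMeState");
```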
Lolwhat on that last line. Such a loophole.
I am a simple man. I check whether the FitGirl site I’m on is the real one, and that’s about it.
For rare music files I’ll try any torrent that promises to have what I’m looking for. I have yet to actually catch something malicious.
I’d love to see a source on how risky this actually is. Is there something in the Steam ToS that says you can’t mod this specific file on a legally bought game? I’ve been using something similar for Cities: Skylines for a while now.
I know people have gotten banned for pirating full games through Steam, but I have yet to see it happen over DLC.
I am in the same boat. I managed to sideload Jellyfin on my Samsung, but Stremio seems a whole different cup of tea.
A major part of how we interact. Not a replacement for human interactions, and definitely not a centralized corporate AI put in charge.
My vision of what interaction could look like on Lemmy with AI tools (with a few more years of progress):
Imagine if everyone had a small Wikipedia genie on their shoulder, telling you, on demand, information about whatever subject you’re writing about. We all know Wikipedia has mistakes and that some expert-level stuff really is best left to experts. I tend to go back and forth with Google a lot if I want to get the details in a post right, and it has the same problems. But in general, Wikipedia and the internet are much more right than the average single person. For some stuff I’d rather have a transparent, trusted AI provide the details than a random internet stranger who may only claim to have done research, or worse, has malicious goals to spread misinformation.
What really strikes me here is that your perspective on this seems so disconnected from the experience I have had working with AI, which is that of a power tool that drastically enhances your capabilities in advanced cognitive tasks.
Since ChatGPT came out last year, I have learned a lot.
In just the last 3 weeks (since I got GPT-4), I have learned:
How to work a Linux command terminal, something I had been struggling with for 2+ years
How to set up and work with both Arch- and Debian-based systems
How to work Docker through the CLI, and how to heavily customize many of the servers catering to the needs of my home network. This includes some advanced reprogramming of how some of my smart devices behave, something I have wanted to do for over 3 years.
I have also gotten many compliments at work for my emerging ability to quickly create scripts that automate tedious tasks, giving us more time to think about and improve our workflow rather than always trying to finish a never-ending backlog.
This thing has supercharged my life as a computer enthusiast. I never had a teacher capable of teaching me in such a customized manner: at my own tempo, in the structure I request, and regardless of how stupid my question might be.
But you are correct that there are clear pitfalls when working with AI. I myself have used it enough that I believe I know how to handle them. Some notes:
The user is always the brain behind the creative process. Like you said, “It has no more understanding of the text it shits out than a toddler who has learned to swear.” The uploader of the post you linked also stated it himself: “it is a tool”, not a genie that does all the work for you.
AI enhances your knowledge, but ten times zero is still zero. Setting up Linux servers on my home network is something I had been trying and failing to do for a while (mostly because I am entirely self-taught), but I understood it well enough to know whether ChatGPT’s output was realistic at all. I am always directing it to do what I planned to do, and I never copy its work without first understanding what it actually does.
Know the limitations. There are some topics current AI is much better at than others; in my experience, that’s coding and computers. Planning a holiday trip? I tried; it’s really not that good.
Break it down, use what you’ve learned, build something better:
Handwrite an email -> have ChatGPT reason about what it thinks I am trying to say -> have ChatGPT rewrite it to better reflect what I am trying to say -> read and understand what it did -> discard the previous emails and write a final one.
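That loop can be sketched as a small function. Here `ask_llm` is a hypothetical stand-in for whatever chat API or local model you use; the point is that the last steps stay human:

```python
# Draft -> "what am I trying to say?" -> rewrite, with the human in charge.
def refine_email(draft, ask_llm):
    """ask_llm(prompt) -> str is any chat model call (API or local)."""
    # step 1: have the model state the intent it reads from the draft
    intent = ask_llm(f"In one sentence, what is this email trying to say?\n\n{draft}")
    # step 2: have it rewrite the draft to match that intent
    rewrite = ask_llm(f"Rewrite this email to clearly say: {intent}\n\n{draft}")
    # steps 3-4 stay human: read the rewrite, understand what changed,
    # then discard it and write the final email yourself
    return intent, rewrite
```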
For someone who was horrible at writing emails pre-ChatGPT: my boss has now started asking me to craft standardized emails to be sent in bulk.
Now to address the original post, which is really just a low-quality, cut-and-dry standard reply from ChatGPT. I am going to go out on a limb and say that OP’s comment that it is just a tool is probably a fairly recent realization. The first week of using these models, they do indeed feel a bit like magic know-it-all boxes, but just like Altman stated, this feeling fades quickly. You realize that if you actually want to create something of real quality (swindlers will swindle), you are going to have to remain in charge and understand which parts of your tools you can and can’t rely on.
I believe there is only one way to learn this, and that is for people to use and learn this technology for themselves. I hope I am wrong about the next line, but I extrapolate that AI is very much a case of “get in the motorboat now, or paddle behind forever”, because things are going to start moving really fast.
While GPT-4 might not churn out top-notch books just yet, this tech is getting better and will be a major part of how we interact with the world and each other in the future.
I’d like Lemmy to still be relevant in a few years. We shouldn’t shy away from new tech.
Use this webui (it’s the Stable Diffusion webui equivalent for LLMs):
https://github.com/oobabooga/text-generation-webui
I am pretty sure it has a server option.
Here is a list of the models it likely supports, including GPT4All: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
The best one I’ve tried is Wizard Vicuna 13B running on an RTX 2070.
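For what it’s worth, getting it running is roughly the following (commands based on the project’s README; assumes git and a working Python environment):

```shell
# fetch the webui and install its dependencies
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# --listen exposes the UI to your network, --api enables the HTTP API,
# which is presumably the "server option" mentioned above
python server.py --listen --api
```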
Every time I see a post like this I ask the same thing, and I have yet to receive an answer.
Why should I care?
There are so many open-source language models, all with different strengths and weaknesses, and there are tools to run them on any OS with all kinds of different hardware requirements.
This has been the case since before ChatGPT came out, and it has blown up exponentially since.
GPT4All is just a single recent model, but in recent weeks it keeps making headlines as “run ChatGPT at home”.
What does it do to stand out? Why would I use this and not one of the Vicuna or LLaMA models?
Hugging Face has a leaderboard for open-source large language models.
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
If you are interested in running this tech at home, familiarize yourself with multiple models, because they will all behave differently depending on your hardware and your needs.
The joke was precisely that I was too dumb to properly understand this post. Thank you very much for your explanation. I definitely see the benefits x265 can have now, and I may actually use that knowledge when I see download files in both codecs.
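If you ever want to try it yourself, re-encoding a file to x265 with ffmpeg is a one-liner (filenames are placeholders; CRF 28 for x265 is the commonly cited rough visual equivalent of x264’s default CRF 23):

```shell
# re-encode the video stream to HEVC/x265, keep the audio untouched
ffmpeg -i input_x264.mp4 -c:v libx265 -crf 28 -c:a copy output_x265.mp4
```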
I take it the images in the post aren’t the greatest reference then, as one of the squares contains both background and a portion of a face.
Makes me wonder what AI will allow us to do in the future, knowing exactly what information can be compressed and which focus points must remain highest quality.
Ah yes, you see, these are number terms that indicate how videos are encoded. I absolutely understand what to feel about this post and am worthy of participating in the smart discussion in the comments.
Imposter syndrome aside: the left is a nice grid, and the right is a really, really bad attempt at drawing a golden ratio. Sure, the left is better for maintaining average quality, but why are people talking about converting to one and then the other? And why is the golden-ratio one not symmetric?!
No, I was just a bit overexcited, and not even high.