I’m a software engineer at a startup with impossible deadlines - I’ve used GPT4 for months to generate huge amounts of app/server code, and much like your IDE, once you learn to use these tools you don’t want to go back to the days without them.
Speed
- Bard is very fast - similar to GPT 3.5 Turbo
- You need to run two GPT4 instances side by side to compensate for how slow GPT4 can be
Reliability
- Bard lies and makes up fake API calls more than GPT4
UI
- Bard UI is garbage - You have to keep manually scrolling down the chat window, and for some reason the largest button on the page is “stop” (???)
- You can tell Bard to modify its response to be longer/shorter and a few other options - I thought this would be useful, but it never ended up helping
Memory
- Bard has a really short memory - it forgets details from the last response!
- GPT4’s memory is also unreliable; any important details have to be repeated
Intelligence
- GPT4 is objectively smarter
Internet Search
- GPT4 Internet search is garbage
- Bard has “Verify with Google” - I had high hopes for this, but never actually had a use for it
Willingness to give full code
- GPT4 is bad, but Bard is worse. Both need to be begged/threatened to return more than 100 lines of directly paste-able code.
Generating Useful Code
- Bard can give more concise medium complexity functions
Adding tougher features
- Bard hallucinates and lies
Dealing with lies
- When you tell GPT something doesn’t work, GPT will try something else
- When you tell Bard something doesn’t work, Bard will lie, claim to fix it, then give back the same code
Following Instructions
- GPT4 sometimes doesn’t follow instructions, but improving the prompt will fix that. Bard will happily ignore instructions, however clear they may be.
Summary:
- GPT4 is still objectively better than Bard. Quite frankly, the prompts Bard couldn’t handle, GPT3.5 could.
- The cons of GPT can be worked around, but with Bard it’s almost faster to do it yourself. Unless Bard were used like Copilot, for short 1-2 line autocompletes, I wouldn’t trust it.
PS: If you’re not using AI yet for development, I highly recommend it - it’s like using an IDE instead of Notepad. AI can easily 2-3x your output, but you have to learn how it works so you can prompt it correctly, and you have to be good at fixing its mistakes.
https://github.com/dginovker, https://gitlab.com/dginovker
@me bro. I’d been writing open source software for 5 years before GPT came along. Keep up with the tooling. I recommend you try it before you knock it.
I have 15+ years of experience as a software engineer, and now I’m making a mid-six-figure income by going into companies who staff their “engineering department” with people who have five years or less of experience and cannot write a line of code without internet access to save their lives. So by all means, go ahead and continue down the road you are going. We thought Stack Overflow would guarantee a stable business, but now that “AI” has come into play, we can’t even keep up with demand. We’ll probably raise prices by 25% next quarter. By the time AI can actually produce decent results, I can probably retire twice over.
Imagine making 500k TC but not contributing to open source
When I started doing open source software, that meant posting tarballs to Usenet and mailing lists, and occasionally mailing someone a physical floppy on request. I don’t have a GitHub profile sparkling with emojis, but I think I’m doing all right.