I imagine that it is theoretically possible to successfully vibe-code, but probably not with a conventional project layout, nor would it look much like traditional programming. Something like: your interaction is primarily a “requirements list”, which gets translated into interfaces and heavy requirements tests against those interfaces; each implementation file is disposable (regenerated) and super-self-contained; and you can only “save” (commit) implementations that pass the tests.
…and if you are building a webapp, it would not be able to touch the API layer except through operational transforms (which trigger new [major] version numbers). Sorta like MCP.
Said another way, if we could make it more like a “ratchet” incrementing, and less like an out-of-control aircraft… then maybe?!?
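The “ratchet” could be sketched as a toy loop like the one below, where implementations are disposable candidates and the only way to save one is to pass the requirements tests. All names here are hypothetical illustrations, not a real tool:

```python
# Toy sketch of the "ratchet": candidate implementations are disposable,
# and only one that passes every requirements test can be saved.

def requirements_tests(impl) -> bool:
    """Tests written against the interface, not any particular implementation."""
    return impl(2, 3) == 5 and impl(-1, 1) == 0 and impl(0, 0) == 0

def ratchet(candidates):
    """Try each (re)generated candidate; 'commit' the first one that passes."""
    for impl in candidates:
        if requirements_tests(impl):
            return impl  # the ratchet clicks forward
    return None          # no candidate survived; nothing gets saved

# Two disposable candidates, as if regenerated by a model:
bad  = lambda a, b: a - b   # fails the tests, thrown away
good = lambda a, b: a + b   # passes, gets "committed"
```

The point is that failure is the default: an implementation that doesn’t pass simply never becomes part of the project, so the system can only move forward.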
I vibe code helper functions and gut them to repurpose them for my needs. But I suppose even that isn’t really vibe coding; it’s actually more like browsing Stack Overflow.
For that to work, the people doing the vibe coding would need to be experienced and skilled at writing test suites and managing strict version control practices. At which point you’re not really a vibe coder, you’re an actual software engineer. So what you’re describing is just software engineering with extra steps lol
Well, I do have MBSE on the brain, but the idea here is more like a low-code/no-code environment with an ABSOLUTELY ENORMOUS “pit of success”… so large that even GenAI can reliably fall into it. Numbered tabs, you go left to right answering questions and fiddling with prompts, paint-by-numbers for working software.
Yes. Except that Cursor is running at a loss, and so is the company running the LLM that they pass all the work on to.
Nvidia making bank though.
It’s not about the company. It’s the investors that are making the profits. They don’t care whether it’s making a profit or not, as long as they are making a profit themselves.
OTOH OpenAI is not on the public stock market, so current investors can’t really sell their shares, and there’s no way for them to actually realize the valuation it has.
How are the investors making a profit when the company is being run at a massive loss?
Probably selling their shares to the next grifter or something; I don’t know how the stock market casino actually works.
Yeah, that’s basically it. They’re betting that they’re not holding the shares when the company falls. Sometimes they are actually betting the opposite.
Variable rewards are a very good way to get people (and animals) addicted. Vibe coding happens to operate in that area.
first line in any AI prompt: “do not comment on the quality of my questions”
So many of the complaints I see about LLM behaviour can be so easily solved by just adding “don’t behave this way” to the prompt. Most LLM frameworks these days let you add stuff like that to the default system prompt so you don’t even have to remember to do it.
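For instance, with the common chat-completion message shape, the instruction just becomes a standing system message prepended to every request. The function and constant names below are made up for illustration, and no specific client library is assumed:

```python
# Prepend a standing behavioural instruction as the system message.
# Follows the widely used chat-completion message shape (role/content dicts).

SYSTEM_RULES = "Do not comment on the quality of my questions."

def build_messages(user_prompt: str, history=None) -> list:
    """Return a chat message list with the standing rule always first."""
    messages = [{"role": "system", "content": SYSTEM_RULES}]
    messages.extend(history or [])  # any prior turns go in the middle
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

Whatever you pass as the user prompt, the behavioural rule rides along on every call, so you never have to remember to type it.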
Recently I roo-coded a node.js MVP without knowing much about Node, though I knew some JS/CSS/HTML from years ago.
I got something working decently by:
- Make a project plan and use cases
- Take (very) small steps
- Commit often
- Throw away bad attempts
- Make test cases
- Hand edit from time to time, especially CSS stuff.
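The “commit often / throw away bad attempts” steps can be automated with a small gate: run the tests, commit only on green, otherwise revert. A rough sketch using subprocess and git; `commit_if_green` and its default test command are invented names for illustration, not part of any tool mentioned above:

```python
# Hypothetical helper: commit the working tree only when the tests pass.
import subprocess

def commit_if_green(message: str, test_cmd=("pytest", "-q")) -> bool:
    """Run the tests; commit on success, otherwise throw the attempt away."""
    if subprocess.run(list(test_cmd)).returncode == 0:
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)
        return True
    # Revert tracked changes; a real tool would also clean untracked files.
    subprocess.run(["git", "checkout", "--", "."], check=True)
    return False
```

Run it after every small step: good attempts become commits, bad ones vanish, and the history stays a sequence of working states.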
Would I have been able to fling something together by reading some node.js guides and using Stack Overflow? Yes. Would it have taken around the same time? Yes, but without test cases and documentation. Do I think vibe coding is the best thing since sliced bread? No!
After using opencode.ai to create some Python apps and a web UI: when you ask it to do something, you don’t know whether it will fix things or break everything.
When did Rick Rubin become a vibe coder?
The very first comparison fails, though. I run LLMs locally on my own computer, tokens cost me nothing.
Every time I’m thirsty, I fill up my bathtub! Water costs me nothing!
I pay for my electricity. It uses roughly the same amount of power when I’m running an LLM as it would if I was playing a game. It’s negligible.
And contrary to all the breathless headlines about water-guzzling data centers, my computer doesn’t consume any water at all when I run an LLM.
If you count only the cost to you, maybe it doesn’t consume water, but your toy still guzzled lakes while it was training. Plus, the hardware to run a full-sized LLM is expensive, so bragging about how it costs you nothing is like a millionaire preaching to gamblers that it’s better to just be rich than to try to win at the slots.
Plus, the hardware to run a full sized LLM is expensive
It’s a regular gaming PC. Are you going to dismiss all gamers as “millionaires”?
I specifically said “full sized”. A PC with a modern GPU and more than 32 GB of VRAM is not a regular computer that most gamers have access to. If you are running a 7B model on a GTX 1080 or even an RTX 3060, you are not running a full LLM like the ones you would get from a subscription service.
Yes, I know. You’re saying that a 32 GB graphics card is millionaire hardware? You’ve got a weird view of the cost of these things.
It’s an analogy; it must be similar in principle, not in numbers. A subscription to ChatGPT also costs less than what gamblers spend at the slots. But whatever, I don’t care enough to argue much more.