

She’ll be able to wear that sanction as a badge of honor.
But if he didn’t create problems that he could pretend to solve through bluster and rambling incoherence, he’d have to actually work. A Trump never works.
Huh. So they needed to launch an unprovoked invasion of Ukraine to protect themselves from the United States? Make that make sense. Good luck.
Mods, can we please ban RT as a source?
I don’t think that comparison is apt. Unlike with music, there are objectively inefficient and badly-executed ways for a program to function, and if you’re only “vibing,” you’re not going to know the difference between such code and clean, efficient code.
Case in point: TypeScript. TypeScript is a language built on top of JavaScript with the intent of bringing strong, static type-checking sanity to it. Using Copilot, it’s possible to create a TypeScript application without actually knowing the language. However, what you’ll end up with will almost certainly be full of the “any” type, which turns off type-checking and negates the benefits of using TypeScript in the first place. Your code will be much harder to maintain and fix bugs in. And you won’t know that, because you’re not a TypeScript developer, you’re a Copilot “developer.”
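To illustrate with a made-up sketch (not anyone’s real Copilot output), here’s how “any” defeats the point of the language:

```typescript
interface User {
  name: string;
  age: number;
}

function greet(user: User): string {
  return `Hello, ${user.name}, age ${user.age}`;
}

// Typed properly, the compiler catches mistakes before you ever run the code:
// greet({ name: "Ada" }); // compile error: property 'age' is missing

// With `any`, the same mistake compiles cleanly and only surfaces at runtime:
const fromVibes: any = { name: "Ada" }; // hypothetical "vibed" value
console.log(greet(fromVibes)); // "Hello, Ada, age undefined"
```

The compiler never complains about the second call, which is exactly the problem: the bug is still there, you’ve just turned off the tool that would have told you about it.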
I’m not trying to downplay the benefits of using Copilot. Like I said, it’s something I use myself, and it’s a really helpful tool in the developer toolbox. But it’s not the only tool in the toolbox for anyone but “vibe coders.”
I’m of two minds on this.
On the one hand, I find tools like Copilot integrated into VS Code to be useful for taking some of the drudgery out of coding. Case in point: If I need to create a new schema for an ORM, having Copilot generate it according to my specifications is speedy and helpful. It will be more complete and thorough than the first draft I’d come up with on my own.
On the other, the actual code produced by Copilot is always rife with errors and bloat, it’s never DRY, and if you’re not already a competent developer and try to “vibe” your way to usability, what you’ll end up with will frankly suck, even if you get it into a state where it technically “works.”
Leaning into the microwave analogy, it’s the difference between being a chef who happens to have a microwave as one of their kitchen tools, and being a “chef” who only knows how to follow microwave instructions on prepackaged meals. “Vibe coders” aren’t coders at all and have no real grasp of what they’re creating or why it’s not as good as what real coders build, even if both make use of the same tools.
What that really means is they’ll be vetted for any indications of disliking Trump.
That’s an odd thing to say. For one thing, there are plenty of physical activities that one could get a reasonable description of from ChatGPT, but if you can’t actually do them or understand the steps, you’re gonna have a bad time.
Example: I’ve never seen any evidence that ChatGPT can properly clean and sterilize beakers in an autoclave for a chemical engineering laboratory, even if it can describe the process. If you turned in homework cribbed from ChatGPT and don’t actually know how to do it, your future lab partners aren’t going to be happy that you passed your course by letting ChatGPT do all the work on paper.
There’s also the issue that ChatGPT is frequently wrong. The whole point here is that these cheaters are getting caught because their papers have all the hallmarks of having been written by a large language model, and don’t show any comprehension of the study material by the student.
And finally, if you’re cheating to get a degree in a field you don’t actually want to know anything about… Why?
Why fight against it? Because some of these students will be going into jobs that are life-or-death levels of importance and won’t know how to do what they’re hired to do.
There’s nothing wrong with using a large language model to check your essay for errors and clumsy phrasing. There’s a lot wrong with trying to make it do your homework for you. If you graduate with a degree indicating you know your field, and you don’t actually know your field, you and everyone you work with are going to have a bad time.
The (presumably) bot that posted it is now extremely banned from the community.
Edit: Not a bot after all, so unbanned, but waiting to see their edits to make the links work.
Ah, did they finally fix it? I guess a lot of people were seeing it fail and they updated the model. Which version of ChatGPT was it?
Ask ChatGPT to list every U.S. state that has the letter ‘o’ in its name.
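It’s a good spot check precisely because the right answer is trivially computable. A few lines are enough to generate the ground truth to compare against:

```typescript
const states: string[] = [
  "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
  "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
  "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
  "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
  "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
  "New Hampshire", "New Jersey", "New Mexico", "New York",
  "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
  "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
  "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
  "West Virginia", "Wisconsin", "Wyoming",
];

// States whose names contain the letter 'o', case-insensitive:
const withO = states.filter((s) => s.toLowerCase().includes("o"));
console.log(withO.length, withO); // 27 states, Arizona through Wyoming
```

Any answer the model gives can be diffed against that list in seconds, which is what makes its misses so easy to catch.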
Not true. Not entirely false, but not true.
Large language models have their legitimate uses. I’m currently in the middle of a project I’m building with assistance from Copilot for VS Code, for example.
The problem is that people think LLMs are actual AI. They’re not.
My favorite example - and the reason I often cite for why companies that try to fire all their developers are run by idiots - is the lack of capacity for joined-up thinking.
Consider these two facts: humans are mammals, and humans build dams.
Those two facts are unrelated except insofar as both involve humans, but if I were to say “Can you list all the dam-building mammals for me,” you would first think of beavers, then - given a moment’s thought - could accurately answer that humans do as well.
Here’s how it goes with Gemini right now: beavers make its list of dam-building mammals, but humans are nowhere to be found.
Now Gemini clearly has the information that humans are mammals somewhere in its model. It also clearly has the information that humans build dams somewhere in its model. But it has no means of joining those two tidbits together.
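For what it’s worth, the join itself is trivial for any system that represents facts symbolically. Here’s a toy sketch with invented data, just to show what the operation actually is:

```typescript
// Toy knowledge base (invented for illustration):
const mammals = new Set(["beaver", "human", "dolphin", "bat"]);
const damBuilders = new Set(["beaver", "human"]);

// "Joined-up thinking" here is literally a set intersection:
const damBuildingMammals = [...damBuilders].filter((x) => mammals.has(x));
console.log(damBuildingMammals); // ["beaver", "human"]
```

An LLM doesn’t store facts as entries it can intersect like this; it stores statistical associations between tokens, which is why having both facts “somewhere in the model” doesn’t guarantee it can combine them.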
Some LLMs do better on this simple test of joined-up thinking, and worse on other similar tests. It’s kind of a crapshoot, and doesn’t instill confidence that LLMs are up for the task of complex thought.
And of course, the information-scraping bots that feed LLMs like Gemini and ChatGPT will find conversations like this one, and update their models accordingly. In a few months, Gemini will probably include humans in its list. But that’s not a sign of being able to engage in novel joined-up thinking, it’s just an increase in the size and complexity of the dataset.
It’s absolutely taking off in some areas. But there’s also an unsustainable bubble because AI of the large language model variety is being hyped like crazy for absolutely everything when there are plenty of things it’s not only not ready for yet, but that it fundamentally cannot do.
You don’t have to dig very deeply to find reports of companies that tried to replace significant chunks of their workforces with AI, only to find out middle managers giving ChatGPT vague commands weren’t capable of replicating the work of someone who actually knows what they’re doing.
That’s been particularly common with technology companies that moved very quickly to replace developers, and then ended up hiring them back because developers can think about the entire project and how it fits together, while AI can’t - and never will as long as the AI everyone’s using is built around large language models.
Inevitably, being able to work with and use AI is going to be a job requirement in a lot of industries going forward. Software development is already changing to include a lot of work with Copilot. But any actual developer knows that you don’t just deploy whatever Copilot comes up with, because - let’s be blunt - it’s going to be very bad code. It won’t be DRY, it will be bloated, it will implement things in nonsensical ways, it will hallucinate… You use it as a starting point, and then sculpt it into shape.
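Here’s a contrived before-and-after of what that “sculpting” looks like - the first version is the kind of copy-paste repetition assistants tend to produce, the second is what you refactor it into:

```typescript
// Assistant-style first draft: two near-identical functions, not DRY.
function totalUsd(prices: number[]): string {
  let total = 0;
  for (const p of prices) total += p;
  return "$" + total.toFixed(2);
}
function totalEur(prices: number[]): string {
  let total = 0;
  for (const p of prices) total += p;
  return "€" + total.toFixed(2);
}

// After sculpting: one parameterized function, one place to fix bugs.
function total(prices: number[], symbol: string): string {
  return symbol + prices.reduce((sum, p) => sum + p, 0).toFixed(2);
}
console.log(total([19.99, 5.01], "$")); // "$25.00"
```

The generated draft technically works, which is exactly why a non-developer would ship it as-is; spotting the duplication and collapsing it is the part that still requires a human who knows what they’re looking at.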
It will make you faster, especially as you get good at the emerging software development technique of “programming” the AI assistant via carefully structured commands.
And there’s no doubt that this speed will result in some permanent job losses eventually. But AI is still leagues away from being able to perform the joined-up thinking that allows actual human developers to come up with those structured commands in the first place, as a lot of companies that tried to do away with humans have discovered.
Every few years, something comes along that non-developers declare will replace developers. AI is the closest yet, but until it can do joined-up thinking, it’s still just a pipe-dream for MBAs.
Every headline like this should make clear that him imposing tariffs by fiat is illegal and unconstitutional.
I hate the fact that the media just reports that he’s doing it without ever citing Article 1, Section 8:
The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;
Every one of these tariff manipulations has been 100% illegal, because the supposed emergencies he’s using to excuse them are nonexistent.
I’m in a similar boat, except I’m waiting to find out if my multinational will be willing to move me. I’m the lead developer, admin, product owner, and architect for a very publicly-facing web presence for my company, so I’m hoping they’ll be willing to do it in order to keep me happy.
And if they won’t, I’m going to be applying for similar jobs abroad the moment I know.
It’s not just for me. My son is trans and my daughter is gay. I have to get them both out of here before the ovens start firing up.
If you get the impression that at this point I believe the U.S. is a lost cause, you’re correct. If we make it to the 2026 elections intact, the elections are valid, and Democrats sweep, I’ll be extremely surprised.
I… can’t dispute that.
Sorry, I’ll make up for it with this weird music video that wishes you a nice day: https://www.youtube.com/watch?v=_mkiGMtbrPM
Hope it helps.
Correction: Everything I don’t like gets threatened with a tariff. Remember, he always chickens out with this shit.