

What I want Supergiant to do is… whatever the fuck they want.
I don’t see the game getting either of those things.
Duos, you can already do; you just have to take on a rando as a third. They could scale the difficulty down for 2 players, sure, but Elden Ring’s multiplayer scaling is notoriously terrible, in part because no amount of scaling can account for the lost potential for splitting aggro, in a game where splitting aggro is king.
Voice chat is something that FromSoft has INTENTIONALLY never included in any prior game, despite there being co-op in all of them. Making players coordinate with each other with very limited communication tools is one of FromSoft’s signature design choices. The fast pace of this game compared to prior games makes the lack of communication tools hurt a lot more, for sure, but it’s still very much playable. Anyone who dislikes this design choice is absolutely free to, but it’s not gonna change.
Normally, I’d be on board with you, but it does strike me as notable that Coffee Stain has apparently ALREADY been under the umbrella of shareholders this whole time, and is still fucking THRIVING. I’ll also note that Coffee Stain is based in Sweden, where all the things that make them great (i.e. the way devs are treated, which lets them thrive and make great shit) aren’t about to change.
So, I think it’s worth tempering the pessimism a bit, for now. We’ll have to see how it plays out.
What makes you say that? Do you say “everything else” to mean all the studios that aren’t splitting off along with Coffee Stain? From what’s here, I don’t see why Coffee Stain is in any different boat than everyone else underneath “Coffee Stain Group”.
C, C++, C#, to name the main ones. And quite a lot of languages are compiled similarly to these.
To be clear, there are a lot of caveats to the statement, and it depends on architecture as well, but at the end of the day, it’s rare for a byte or bool to be mapped directly to a single byte in memory.
Say, for example, you have this function…
public void Foo()
{
    bool someFlag = false;
    int counter = 0;
    ...
}
The someFlag and counter variables are getting allocated on the stack, and (depending on architecture) that probably means each one is aligned to a 32-bit or 64-bit word boundary, since many CPUs require that for whole-word load and store instructions, or only support a stack pointer that increments in whole words. If the function were to have multiple byte or bool variables allocated, it might be able to pack them together, if the CPU supports single-byte load and store instructions, but the next int variable that follows might still need some padding space in front of it, so that it aligns on a word boundary.
A very similar concept applies to most struct and object implementations. A single byte or bool field within a struct or object will likely result in a whole word being allocated, so that other fields can be word-aligned, or so that the whole object meets some optimal word-aligned size. But if you have multiple less-than-a-word fields, they can be packed together. C# does this, for sure, and has some mechanisms by which you can customize field packing.
I’ll take another look, but I didn’t see any such setting when I was trying to diagnose. And I haven’t changed any Plex settings since the last time we had an internet outage and it worked properly, just a month or two ago.
I recently discovered that Plex no longer works over the local network if you lose internet service. A) You can’t log in without internet access. B) Even if you’re already logged in, apps do not find and recognize your local server without internet access. So, yeah, Plex is already there.
Is the BlueSky OP here not a native English speaker? Cause, BOY that was tough to follow.
That’s a good analogy.
It’s far more often stored in a word, so 32 or 64 bits (4 or 8 bytes), depending on the target architecture. At least in most languages.
This isn’t just a horrifically-misleading headline, it’s straight-up false.
The bill was originally written to directly establish personhood for a fetus, but Democrats got an amendment in that keeps the “pregnant mothers get to use the carpool lane” part without the language that establishes personhood for a fetus. They literally called the Republicans’ bluff on “this bill is about supporting mothers” by making the bill do exactly that, and only that. This caused one Republican to retract his vote, because the amendment “guts the pro-life purpose of the bill”.
Entire final hour of Tears of the Kingdom.
Ethnic cleansing could turbocharge ethnic cleansing? I mean, I guess, but that’s a really weird way to put it.
You know what we, in the industry, call a specification of requirements detailed enough to produce software? Code.
The REAL problem is that the industry collectively uses JS almost exclusively for shit it was never meant to do. Like you say, it’s designed not to throw errors and kill your whole web page, because it was only ever intended to be used for minor scripts inside mostly-static HTML and CSS web pages. Then we all turned it into the most popular language in the world for building GUI applications.
Honestly, if you’re having trouble finding stuff for vanilla JS, I’d recommend looking at jQuery. Not that you should USE jQuery, necessarily, but the library is basically a giant wrapper around all the native JS APIs, so the approach to building stuff is essentially the same: it all focuses on tracking and manipulation of DOM elements.
I do vanilla JS (actually TypeScript) dev at work, daily, and that was my big takeaway from spearheading our team’s migration from jQuery to vanilla TypeScript: I honestly don’t know what benefit jQuery provides, over vanilla, because all the most-common jQuery APIs that we were using have a 1:1 native equivalent.
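To make that concrete, here’s a rough, hypothetical sketch of the kind of 1:1 mapping I mean. The selectors, endpoint, and handler names are all made up, with the jQuery versions shown in the comments:

// Hypothetical examples: every selector, endpoint, and handler here is invented.

// jQuery: const panels = $(".panel");
const panels = document.querySelectorAll<HTMLElement>(".panel");

// jQuery: $(panel).addClass("active").attr("title", "expanded");
panels.forEach(panel => {
    panel.classList.add("active");
    panel.setAttribute("title", "expanded");
});

// jQuery: $("#save").on("click", onSave);
document.getElementById("save")?.addEventListener("click", onSave);

// jQuery: $.getJSON("/api/items").done(render);
fetch("/api/items")
    .then(response => response.json())
    .then(render);

function onSave(event: MouseEvent): void {
    console.log("save clicked", event.target);
}

function render(items: unknown): void {
    console.log("items", items);
}

The only real difference is that the native calls are spread across document, Element, and fetch, instead of all hanging off one $ object.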
We do also use 2 third-party libraries alongside vanilla, so I’ll mention those: require.js and rx.js. Require you probably don’t need, with modern JS having bundling and module support built in, but we still use it for legacy reasons. But rx.js is a huge recommend, for me. Reactive programming is the IDEAL way to build GUIs, in my opinion.
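To give a feel for why, here’s a minimal, hypothetical rx.js sketch (RxJS 7-style imports; the element id and endpoint are invented): a debounced search box expressed as one observable pipeline.

import { fromEvent, debounceTime, map, distinctUntilChanged, switchMap } from "rxjs";

// Hypothetical search box; the element id and the endpoint are made up.
const searchBox = document.getElementById("search") as HTMLInputElement;

fromEvent(searchBox, "input").pipe(
    map(() => searchBox.value.trim()),   // what the user has typed so far
    debounceTime(300),                   // wait for typing to settle
    distinctUntilChanged(),              // skip repeats
    switchMap(term =>                    // drop stale requests, keep only the latest
        fetch(`/api/search?q=${encodeURIComponent(term)}`).then(res => res.json())),
).subscribe(results => {
    console.log("search results", results);
});

Everything the UI does in response to that input reads top to bottom in one place, instead of being scattered across event handlers and ad-hoc state flags. That’s the reactive-programming sell for GUIs, in my opinion.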
It was one of those tall, thin church candles that you normally put out with a long handheld suffocator. Me, I tried to just jump for it. Came down awkwardly on one of the thing’s feet, lost my balance, and my leg crumpled in under me as I fell.
I once fractured my fibula blowing out a candle. I was, like, 17. You’re telling me it’s going to get WORSE?!
You’re right to think that “since it’s open source, people can see what it’s doing and would right away notice something malicious” is bullshit, cause it pretty much is. I sure as hell don’t spend weeks analyzing the source code of every third-party open source package or program that I use. But just like with closed-source software, there’s a much bigger story of trust and infrastructure in play.
For one, while the average Joe Code isn’t analyzing the source of every new project that pops up, there are people whose job is literally that. Think academic institutions and security companies like Kaspersky. You can probably argue that stuff like that is underfunded, but it definitely exists. And new projects that gain enough popularity to matter, and don’t come from existing trusted developers, are gonna be subject to extra scrutiny.
For two, in order for a malicious (new) project to be a real problem, it has to gain enough popularity to reach its targets, and the open source ecosystem is pretty freakin’ huge. There’s two main ways that happens: A) it’s developed, at least partially, by an established, trusted entity in the ecosystem, or B) it catches the eye of enough trusted or influential entities to gain momentum. On point B, in my experience, the kind of person who takes chances on small, unknown, no-name projects is just naturally the “exceptionally curious” type. “Hmm, I need to do X, I wonder what’s out there already that could do it. Hey, here’s something. Is it worth using? I wonder how they solved X. Lemme take a look…”
For three, the open source ecosystem relies heavily on distribution systems, stuff like GitHub, NuGet, NPM, and Docker, and those take on a big chunk of responsibility for the security and trustability of the stuff they distribute. They do things like code scanning, binary validation, identity verification, and, of course, punitive measures against identified bad actors (i.e. banning).
All that being said, none of the above is perfect, and malicious actors absolutely do still manage to implant malware in open source software that we all rely on. The hope is that, with all of the above points, as well as all the ones I’ve missed, the odds of it happening are low, and that when it DOES happen, it’s way easier to identify and correct the problem than when we have to trust a private party to do it behind closed doors.
Great recent example, from last year: https://www.akamai.com/blog/security-research/critical-linux-backdoor-xz-utils-discovered-what-to-know
Me, I see this story as rather uplifting. I think it shows that the ecosystem we have in place does a pretty good job of controlling even the worst malicious actors, cause this story involves just about the worst kind of malicious actor you could imagine. They spent a full 2 years doing REAL open source work to develop that community trust I talked about, as well as maintaining a small army of fake accounts submitting support requests, to put pressure on the project to add more maintainers, resulting in a VERY sophisticated, VERY severe backdoor being added. And they still got found out relatively quickly.
“npm install” in particular is getting me.