• 1 Post
  • 35 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • Yup. RAND() generates a new random float for each row. By default I believe it’s anywhere between 0 and 1. So it may divide the first bill by 0.76, then the second by 0.23, then the third by 0.63, etc… So you’d end up with a completely garbage database, because you can’t even undo it by multiplying all of the numbers by a single value.
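
    A rough Python sketch of why that’s unrecoverable (the billing values here are made up for illustration; RAND() is mimicked with the standard `random` module):

```python
import random

random.seed(42)  # seeded only so this sketch is reproducible

# Hypothetical stand-in for a billing table's amount column.
bills = [100.00, 250.00, 75.50]

# Mimic RAND() being re-evaluated per row: each amount gets its own divisor.
divisors = [random.uniform(0.01, 1.0) for _ in bills]
corrupted = [amount / d for amount, d in zip(bills, divisors)]

# The per-row scale factors all differ, and they were never stored,
# so no single multiplier can restore the original values.
recovery_factors = [b / c for b, c in zip(bills, corrupted)]
print(recovery_factors)  # three different values, not one constant
```

    Without a backup, the only "fix" is multiplying every row by a different unknown number, which is no fix at all.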



  • I actually enjoyed the story. Some of the themes and motifs were heavy handed, but that’s par for the course. Honestly, the biggest issue with the story is that players have come to expect a big plot twist. Bioshock 1’s twist hit first-time players hard, so later games have tried to replicate that. But the issue is that it only hit players hard because they never knew it was coming. They only remember it because it was truly shocking the first time you played through it.

    So now players have come to expect that from the series, which means the series can’t replicate it: when players are looking for a big plot twist, you can’t really hide it anymore, because as soon as you start foreshadowing it, players catch on. And if you’re too subtle with your signals, then players who have been looking for it will say it doesn’t make any sense.









  • It’s because Yuzu was profiting off of their development with a Patreon. Keep emulators FOSS and there’s no profits to claim.

    Also, because it’s a settlement and not a ruling, it’s not setting a precedent for future lawsuits. Courts historically put a lot of weight on legal precedent, to help make rulings consistent. If one court interprets a new case in a certain way, similar cases in the future will likely look to that first case’s ruling for guidance.

    So if one ruling had decided that emulation is illegal, then subsequent lawsuits would have been much, much easier for Nintendo, because Nintendo could basically argue “we already proved emulation is illegal in that previous case, so now we don’t need to do that part again.”



  • Yes, they retracted the original policy changes with one of those boilerplate “we’re listening to the community” apologies. But the fact remains that they did it once and could just as easily decide to do it again in the future. One of the biggest reasons people shifted to Godot is that it’s free and open source. Godot (like many other free and open-source programs) had struggled with adoption until now. But now that Godot has exploded in popularity and game devs have begun learning it, the hardest hurdle has already been cleared and there isn’t much incentive to switch back to Unity.

    It’d be like if there was a mass exodus from Windows to Linux, and then Microsoft apologized for whatever caused the exodus, but everyone had already installed Linux and learned the basics. There would be very little incentive for everyone to switch back to Windows, because as Linux gets more popular and development progresses, it gets easier to use and more robust.

    The biggest hurdle for switching to a new platform is overcoming user apathy. After all, users will choose to use what they already know, even if it’s slightly inconvenient. That’s why the first phase of pretty much any software launch is making it look similar to something that already exists. If you can greet users with a familiar UI, they’ll be more likely to consider adoption. But Unity managed to actively drive users away from their platform (and into the arms of an open-source competitor) so the biggest hurdle has already been jumped.


  • Yeah, I can almost guarantee that the original plan was always for him to leave. He was going to be the scapegoat with a golden parachute, allowing the company to keep the unpopular changes while deflecting the bad publicity. It’s exactly what he did at EA, too.

    Basically reddit’s Ellen Pao plan. Bring in someone unpopular to make the unpopular changes, then let them go with a massive payout while keeping the unpopular changes.

    But then Unity realized that the companies weren’t going to forget about the unpopular changes, and it wasn’t going to blow over. Companies started bailing left and right and switching to other engines. At that point Unity realized the smoke was actually a full-blown fire, and started doing whatever it could to try and regain some trust. But by then it was too late, because companies had already seen the potential for abuse. And as the saying goes, when someone shows you who they are, believe them. So now companies are unwilling to go back to Unity, and Unity is grasping at straws.






  • Pretty much. PDF was specifically designed to retain the same look across any device. The goal was that if you designed a document to look a certain way, opening it on another device wouldn’t fuck up your entire design. That’s also why editing PDFs is so damned frustrating: they’re designed not to change. It largely started as a frustration with the “move an image 3 pixels to the left, and now all your text is in the wrong place” issue. But Microsoft’s EEE (embrace, extend, extinguish) strategy directly contributed to PDF becoming the de facto way to share documents.


  • It isn’t compressible at all, really. As far as a compression algorithm is concerned, it just looks like random data.

    Imagine trying to compress a text file. Each letter normally takes 8 bits to represent: the computer looks at 8 bits at a time and knows which character to display. Normally, the computer needs to look at all 8 bits even when some of those bits are “empty,” simply because there’s no way of marking where one letter stops and the next begins. It’s all just 1s and 0s, so it’s not like you can insert “next letter” flags into that. But we can cut that down.

    One of the easiest ways to do this is to count all the letters, then sort them from most to least common. Then we build a tree, with a character at each fork. You start at the top of the tree and follow it down: a 0 moves you down one fork, and a 1 reads out the letter at your current fork. So for instance, if the letters are sorted “ABCDEF…” then “0001” would be D. Now D is represented with only 4 bits instead of 8. After reading the 1, you return to the top of the tree and start over. So “01000101101” would be “BDBAB”. Normally that sequence would take 40 bits to represent (because each character would be 8 bits long), but we just did it in 11 bits total.
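
    The tree walk described above can be sketched in a few lines of Python. (This is the simplified toy code from the example, not full Huffman coding, and it assumes plain alphabetical order “ABCDEF…” is the frequency ranking.)

```python
import string

# Assume the letters, sorted most-to-least common, happen to be A, B, C, ...
alphabet = string.ascii_uppercase

def encode(text):
    # Each letter's code is N zeros (N = its rank in the ordering)
    # followed by a single 1. So A="1", B="01", C="001", D="0001", ...
    return "".join("0" * alphabet.index(ch) + "1" for ch in text)

def decode(bits):
    out, depth = [], 0
    for bit in bits:
        if bit == "0":
            depth += 1                   # walk one fork down the tree
        else:
            out.append(alphabet[depth])  # a 1 reads the current letter
            depth = 0                    # return to the top of the tree
    return "".join(out)

print(encode("BDBAB"))        # 01000101101 (11 bits instead of 40)
print(decode("01000101101"))  # BDBAB
```

    Note that `encode("I")` is 9 bits and `encode("J")` is 10, matching the point below: rare letters get longer codes, and the scheme only wins if they really are rare.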

    But notice that this also has the potential to produce letters that are MORE than 8 bits long. If we follow that same pattern I listed above, “I” would be 9 bits, “J” would be 10, etc… The reason we’re able to achieve compression is because we’re using the more common (shorter) letters a lot and the less common (longer) letters less.

    Encryption undoes this completely, because (as far as compression is concerned) the data is completely random. And when you look at random data without any discernible pattern, counting the characters and sorting by frequency is basically a lesson in futility. All the letters will be used about equally, so even the “most frequent” characters are only more frequent by a tiny margin due to random chance. So now, even if the frequency still corresponds to my earlier pattern, the number of Z’s is so close to the number of A’s that the file will end up even longer than before. Remember, the compression only works when the most frequent characters are actually used most frequently. Since there are a lot of characters that are longer than 8 bits, and those characters are being used just as much as the shorter ones, our compression method fails and actually produces a file that is larger than the original.
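
    You can see this in a few lines of Python, using zlib as a stand-in for any general-purpose compressor and os.urandom as a stand-in for ciphertext (which is statistically indistinguishable from random bytes):

```python
import os
import zlib

# Highly repetitive plaintext: lots of exploitable patterns.
text = b"the quick brown fox jumps over the lazy dog " * 100

# Random bytes of the same length: no patterns to exploit,
# just like the output of a good cipher.
random_ish = os.urandom(len(text))

print(len(text), len(zlib.compress(text)))        # shrinks dramatically
print(len(random_ish), len(zlib.compress(random_ish)))  # doesn't shrink
```

    The compressed random data actually comes out slightly larger than the input, because the compressor still has to add its own headers and bookkeeping. This is also why you should compress first and encrypt second, never the other way around.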