

Was it before or after Oracle acquired Sun that the fork happened? I’m fairly sure it was Oracle that passed the project across to Apache and I have no idea why the Apache foundation accepted it.
Yeah, I don’t think this is an ncdu issue; something is broken with the OP’s system.
I’ve long avoided npm, but attacks on PyPI are a worry.
For context, watching South Park?
There is a difference between reviewing code and giving feedback when someone already has the job, and doing it during an interview when they are trying to get one. I’m not saying you should never expect to be pulled up on mistakes, just that the interview experience is very different from the work experience.
Maybe there are ways to ameliorate the stress during the interview to get a better view of how a candidate will perform once hired, but I think it’s a tricky balance to strike.
I think there is a difference in setting. Pair coding is a useful exercise but demands a degree of trust that the two of you are working together on a solution rather than one of the pair judging the other.
If you’re expecting to be stressed all the time at work then that is a red flag. Some professions may involve a degree of stress from time to time, which should be mitigated, but software engineering shouldn’t.
In my first interview they put me in a room with a PC running Borland C, a copy of K&R, and a sheet with a simple problem to solve plus some extra enhancements if I had time. They said they would be back in half an hour and left me to it. That one I passed fine.
Some twenty-ish years later I was asked to write a C function to reverse a string on a whiteboard, and I failed because I’d misformatted the for loop. I don’t think it’s because I’ve become a worse C coder in the intervening years.
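For what it’s worth, a minimal sketch of the sort of answer that question is after (a reconstruction, not what I actually scribbled on the whiteboard that day):

    #include <string.h>

    /* Reverse a NUL-terminated string in place by swapping characters
     * from both ends until the indices meet in the middle. */
    static void reverse_string(char *s)
    {
        for (size_t i = 0, j = strlen(s); i + 1 < j; i++, j--) {
            char tmp = s[i];
            s[i] = s[j - 1];
            s[j - 1] = tmp;
        }
    }

It’s exactly the kind of loop where an off-by-one creeps in under pressure, which is rather the point.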
When I’m actually coding I’m sat with my editor configured Just So, with completion, compilation and unit tests at my fingertips, my favourite coding music blasting from my speakers, and a handy browser window for looking up anything I’m unsure of. This is my most productive setting, and expecting the same performance in a stressful interview setting is foolish in my opinion.
Working through problems on a white board can work well but you are looking for the problem solving approach, not an encyclopedic knowledge of regex syntax. Those same problems get immeasurably harder when explained over a phone call.
My personal preference when evaluating a candidate’s ability to code is reading their actual production code: the breakdown of commits, the commit messages, and the sort of unit tests they add with a feature. The interview is then more focused on their soft skills, what about the work excites them, and what they are looking to get out of the role.
You could, if you wanted, fork from when it was still GPLv3: https://github.com/stenzek/duckstation/commit/7f4e5d55dbdef5a50e0aa4994f667fb03d854928
From the linked comment it sounds like there was a licence change in the project’s history. I’m surprised the various distro packagers didn’t just collaborate on a renamed fork, unless there are more actively developed emulators still under a FLOSS licence?
Edit: yep, it was GPLv3 about 11 months ago: https://github.com/stenzek/duckstation/commit/7f4e5d55dbdef5a50e0aa4994f667fb03d854928
I’ve got an Ampere workstation (AVA) which from a firmware point of view works fine. They may even fix the PCIe bus on later versions.
Asahi is a powerful example of what a small, well-motivated team can achieve. However they still face the Sisyphean task of reverse engineering entirely undocumented hardware and getting that work upstream.
If you love Apple’s hardware then great. Personally when I have Apple hardware I just tweak the keys to make it a little more like a Linux system and use brew for the tools I’m used to. If I need to I can always spin up a much more hackable VM.
Arm has been slowly pushing standardisation for the firmware, which solves a lot of the problems. On the server side we are pretty much there. For workstations I’m still waiting for someone to ship hardware with non-broken PCIe. On laptops the remaining challenges are power usage parity with Windows and the insistence of some manufacturers on trying to lock off EL2, which makes virtualization a pain.
What do the inputs and configuration drop down menus say?
So back in the days of the Atari ST we had compact disks (sic).
Most games shipped on a single floppy disk (so 720KB or 1.44MB) and rarely used compression, given the base system only had 512KB of RAM. The crackers would strip the protection, repack the data and patch the loading routines to handle that. Depending on the games they could fit 3 or 4 on a single disk.
Nowadays the dynamics are different - games on consoles do use compression but they have to favour speed because they are streaming assets just in time. The PS5 even has dedicated decompression hardware to keep up with the data rate of its fast SSD.
Why would you? Effectively you are storing the address of the address at the address. It would get more complicated if there were post/pre increments or index offsets involved.
I remember the old ADSL modems were effectively winmodems. I had to keep a Windows ME machine as my household router until the community had reverse engineered them enough to get them working on Linux.
At least they were USB based rather than some random card. I think the whole driver could work in user space.
VirtIO was originally developed as a device para-virtualization standard as part of KVM, but it is now an OASIS standard: https://docs.oasis-open.org/virtio/virtio/v1.3/virtio-v1.3.html which a number of hypervisors/VMMs support.
The line between what a hypervisor (like KVM) does and what is delegated to a Virtual Machine Monitor - VMM (like QEMU) is fairly blurry. There is always an additional cost to exiting from the hypervisor to the VMM, so that path tends to be reserved for configuration and lifetime management. However VirtIO is fairly well designed, so the bulk of VirtIO data transactions can be processed by a dedicated thread which just gets nudged by the kernel when it needs to do something, leaving the VM cores to continue running.
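To make that “nudge” concrete, here is a minimal sketch assuming a KVM-based VMM (the doorbell address is hypothetical): the VMM registers an eventfd via KVM_IOEVENTFD so a guest write to the virtio queue’s notify register wakes a dedicated worker thread, instead of bouncing the vCPU out to userspace.

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Hypothetical guest-physical address of a virtio queue's notify register. */
    #define QUEUE_NOTIFY_ADDR 0x10008000ULL

    /* Ask KVM to signal the eventfd whenever the guest writes to the doorbell,
     * rather than taking a full exit out to the VMM's vCPU thread. */
    static int register_doorbell(int vm_fd)
    {
        int fd = eventfd(0, EFD_CLOEXEC);
        struct kvm_ioeventfd io = {
            .addr  = QUEUE_NOTIFY_ADDR,
            .len   = 4,     /* the guest does a 32-bit write */
            .fd    = fd,
            .flags = 0,     /* MMIO address space, no data matching */
        };
        if (fd < 0 || ioctl(vm_fd, KVM_IOEVENTFD, &io) < 0)
            return -1;
        return fd;
    }

    /* The dedicated I/O thread: each wake-up means "descriptors are waiting". */
    static void *queue_worker(void *arg)
    {
        int fd = *(int *)arg;
        uint64_t hits;
        while (read(fd, &hits, sizeof(hits)) == sizeof(hits)) {
            /* ... pop available descriptors, do the I/O, push used entries,
             *     then inject a completion interrupt (e.g. via an irqfd) ... */
        }
        return NULL;
    }

The reverse direction works similarly: the worker can inject a completion interrupt back into the guest via an irqfd, again without a round trip through the main VMM loop.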
I should add that HVF (Apple’s Hypervisor.framework) tends to delegate most things to the VMM rather than deal with them in the hypervisor. It makes for a simpler hypervisor interface, although not quite as performance-tuned as KVM can be for big servers.
There are large areas of open source that don’t rely on volunteer labour because companies with a vested interest pay people to work on them. They tend to be the obvious large projects that are continuously developed and gain new features. The trouble with something like xz is that it was mostly “done” (as in it did the thing it was intended to do) but still needed maintenance to address the minor niggles, bug reports and updates to tooling and dependencies.
The foundations could do a better job here of supporting the maintainers. After Heartbleed the Linux Foundation started the Core Infrastructure Initiative to help fund those under-recognised projects. I would hope the people running that could be more proactive in identifying critical, understaffed components.
Edit: I think it’s now called the Open Source Security Foundation: https://openssf.org/