

Alt account of @Badabinski
Just a sweaty nerd interested in software, home automation, emotional issues, and polite discourse about all of the above.
This script is why I ended up learning how to use OnShape. It’s probably much better nowadays, but I could not get it working a few years ago. I needed CAD and OnShape was close enough to Inventor that it was almost frictionless.
Honestly, I don’t think the tooling would be too terrible. Ron Covell has been making crazy shit out of sheet metal using nothing but hammers and simple wooden forms for many years, and I think it might be possible for OP to do the same. Granted, it would be hideously time consuming and would require great skill, but I think it’d be doable.
It also lacks any form of dependency management AFAICT. I don’t think there’s any way to say you depend on another service. I’m guessing you can probably order things lexically? But that’s, uh, shitty and bad.
I wrote and maintained a lot of sysvinit scripts and I fucking hated them. I wrote Upstart scripts and I fucking hated them. I wrote OpenRC scripts and I fucking hated them. Any init system that relies on one of the worst languages in common use nowadays can fuck right off. Systemd units are well documented, consistent, and reliable.
From my 30 seconds of looking, I actually like nitro a bit more than OpenRC or Upstart. It does seem like it’d struggle with daemons the way sysvinit scripts used to. Like, you have to write a process supervisor to track when your daemonized process dies so that it can then die and tell nitro (which is, ofc, a process supervisor), and it looks like the logging might be trickier in that case too. I fucking hate services that background themselves, but they do exist and systemd does a great job at handling those. It also doesn’t do any form of dependency management AFAICT, which is a more serious flaw.
Nitro seems like a good option for some use cases (although I cannot conceive why you’d want to run a service manager in a container when docker and k8s have robust service management built into them), but it’s never touching the disk on any of the tens of thousands of boxes I help administrate. systemd is just too good.
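To make the comparison concrete, here's roughly the shape of a unit file for both of the things mentioned above. The service name, PID file path, and the postgresql dependency are made-up examples, but Requires=/After= is how you declare that you depend on another service, and Type=forking plus PIDFile= is how systemd tracks a daemon that backgrounds itself without anyone hand-rolling a supervisor:

```ini
# /etc/systemd/system/mydaemon.service  (hypothetical service, for illustration)
[Unit]
Description=Example self-backgrounding daemon
# Dependency management: start postgresql first, and don't start at all if it can't.
Requires=postgresql.service
After=postgresql.service network-online.target

[Service]
# The daemon forks and the parent exits; systemd follows the child via the PID file.
Type=forking
PIDFile=/run/mydaemon.pid
ExecStart=/usr/sbin/mydaemon --daemonize
# stdout/stderr still end up in the journal
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```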
Just journalctl | grep and you're good to go. The binary log files contain a lot of metadata per message that makes it easy to do more advanced filtering without breaking existing log file parsers.
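For example (the unit name here is just a placeholder), the stored metadata lets you skip grep entirely when you want to:

```sh
# plain grep over the text output still works
journalctl | grep "connection reset"

# but the per-message metadata lets you filter by unit, priority, and time
journalctl -u nginx.service -p warning --since "1 hour ago"

# or dump messages as JSON, one object per line, for structured tooling
journalctl -u nginx.service -o json --since today
```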
Anubis has worked if that’s happening. The point is to make it computationally expensive to access a webpage, because that’s a natural rate limiter. It kinda sounds like it needs to be made more computationally expensive, however.
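For anyone unfamiliar with the mechanism, this is a toy sketch of the general idea behind a hash-based proof-of-work challenge; it is not Anubis's actual scheme, and the challenge string and difficulty are made up, but it shows why each request costs the client real CPU time while the server's check stays cheap:

```python
import hashlib
import itertools

# Toy proof-of-work: find a nonce so that SHA-256(challenge + nonce) starts
# with N zero hex digits. The client burns CPU searching; the server verifies
# the winning nonce with a single hash.
challenge = "example-challenge-token"  # placeholder
difficulty = 5                          # leading zero hex digits required (illustrative)

for nonce in itertools.count():
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    if digest.startswith("0" * difficulty):
        print(f"nonce={nonce}, hash={digest}")
        break
```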
Do you have any sources for the 10x memory thing? I’ve seen people who have made memory usage claims, but I haven’t seen benchmarks demonstrating this.
EDIT: glibc-based images wouldn’t be using service managers either. PID 1 is your application.
EDIT: In response to this:
"There's a reason a huge portion of docker images are alpine-based."
After months of research, my company pushed thousands and thousands of containers away from alpine for operational and performance reasons. You can get small images using glibc-based distros. Just look at chainguard if you want an example. We saved money (many many dollars a month) and had fewer tickets once we finished banning alpine containers. I haven’t seen a compelling reason to switch back, and I just don’t see much to recommend Alpine outside of embedded systems where disk space is actually a problem. I’m not going to tell you that you’re wrong for using it, but my experience has basically been a series of events telling me to avoid it. Also, I fucking hate the person that decided it wasn’t going to do search domains properly or DNS over TCP.
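As a rough sketch of what a small glibc-based image looks like in practice (the binary name and path are placeholders; the base image is Google's distroless Debian image, which is on the order of tens of megabytes and ships glibc rather than musl):

```dockerfile
# Hypothetical prebuilt binary dropped onto a glibc-based distroless image.
FROM gcr.io/distroless/base-debian12
COPY ./myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```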
Debian is superior for server tasks. musl is designed to optimize for smaller binaries on disk. Memory is a secondary goal, and CPU time is a non-goal. musl isn't meant to be fast, it's meant to be small and easily embedded. Those are great things if you need to run in a network/disk constrained environment, but for a server? Why waste CPU cycles using a libc that is, by design, less time efficient?
EDIT: I had to fight this fight at my job. We had hundreds of thousands of Alpine containers running, and switching them to glibc-based containers resulted in quantifiable cloud spend savings. I'm not saying musl (or alpine) is bad, just that it's horses for courses.
Is it? I thought the thing that musl optimized for was disk usage, not memory usage or CPU time. It’s been my experience that alpine containers are worse than their glibc counterparts because glibc is damn good. It’s definitely faster in many cases. I think this is fixed now, but I remember when musl made the python interpreter run like 50-100x slower.
EDIT: musl is good at what it tries to be good at. It’s not trying to be the fastest, it’s trying to be small on disk or over the network.
When the filament goes through the hotend, any moisture in the filament will boil and make the filament all bubbly and not extrude well.
It’s supposed to help make you better at games by giving you an easy way to practice.
Open source isn’t good enough, I want my software to use a strong copyleft license with no ability to relicense via a CLA (CLAs that don’t grant the ability to relicense software are rare, but acceptable). AGPL for servers, GPL for local software, LGPL for libraries when possible, and Apache, MIT, or BSD ONLY when LGPL doesn’t make sense.
I’m positive it has the same issues as any other Windows VM setup. If you’ve got two GPUs, you can probably pass one of them through to the VM and get good graphical performance.
I wish the virtio-gpu stuff hadn’t died on Windows…
EDIT: It might not be dead? That’s cool if so.
True! I just wonder how much energy they’d realistically be able to store for a given amount of resources. Like, does this have the same issues as Lifted Weight Storage? Where the energy density just doesn’t really make sense once you get right down to it. I don’t know the relevant math to determine how much water and at what pressures might be required to scale this up to the 500MWh/1GWh range. It might be perfectly fine.
EDIT: fuck man I’m not writing well today. edited to make me sound like less of a cretin
I wonder if this suffers from the same energy density issue as most alternatives to pumped hydro systems. It's REALLY hard to do better than megatons of water pumped 500 meters up a hill.
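For a rough sense of scale, here's the back-of-the-envelope math, assuming one megaton means 10^9 kg of water and 500 m of head:

```python
# Gravitational potential energy: E = m * g * h
m = 1.0e9              # kg, one megaton of water (assumed metric, 10^9 kg)
g = 9.81               # m/s^2
h = 500                # meters of head
E_joules = m * g * h
E_mwh = E_joules / 3.6e9   # 1 MWh = 3.6e9 J
print(f"{E_mwh:.0f} MWh")  # ~1360 MWh, i.e. roughly 1.4 GWh per megaton at 500 m
```

So hitting the 500 MWh to 1 GWh range takes something like a megaton of water at that height, which is the bar any compressed-water or lifted-weight scheme has to clear.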
If the game uses Unity and the mods are posted on Thunderstore, then Gale works perfectly.
I thought I remembered reading that saltwater electrolysis is far more efficient than freshwater electrolysis. It’s probably not orders of magnitude different, but I imagine it might help a bit.
I learned to program by shitting out God awful shell scripts that got gently thrashed by senior devs. The only way I’ve ever learned anything is by having a real-world problem that I can solve. You absolutely do NOT need a CS degree to learn software dev or even some of compsci itself, and I agree that tools like Bolt are going to make shit harder. It’s one thing to copy stack overflow code because you have people arguing about it in the comments. You get to hear the pros and cons and it can eventually make sense. It’s something entirely different when an LLM shits out code that it can’t even accurately describe later.
The f stands for file. The C manpage has some details on how it works: https://www.man7.org/linux/man-pages/man2/flock.2.html
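If you want to poke at it from a script, Python's fcntl.flock() is a thin wrapper around that same flock(2) call; the lock file path here is just an example:

```python
import fcntl

# Take an exclusive advisory lock on a file. Other processes calling flock()
# on the same file will block until it's released.
with open("/tmp/example.lock", "w") as f:   # path is only for illustration
    fcntl.flock(f, fcntl.LOCK_EX)           # blocks until the lock is free
    # ... do work that must not run concurrently ...
    fcntl.flock(f, fcntl.LOCK_UN)           # released anyway when f is closed
```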