A person with way too many hobbies, but I still keep learning new things.

  • 5 Posts
  • 342 Comments
Joined 2 years ago
Cake day: June 7th, 2023


  • Agreed on Debian stable. Long ago I tried running servers under Ubuntu… that was all fine until the morning I woke up to find all of the servers offline because a security update had destroyed the network card drivers. Debian has been rock-solid for me for years, and buying “commercial support” basically means paying someone else to do Google searches for you.

    I don’t know if I’ve ever tried flatpaks, I thought they basically had the same problems as snaps?


  • I’m not sure about other distros; I’ve just heard a lot of complaints about snaps under Ubuntu. Cura was the snap I tried on my system that constantly crashed until I found a .deb package. Now it runs perfectly fine without sucking up a ton of system memory. Thunderbird is managed directly by Debian, and firefox-esr is provided by a Mozilla repo, so they all get installed directly instead of through 3rd-party packaging (although I think I tried upgrading Firefox to a snap version once and it was equally unstable). Now I just avoid anything that doesn’t have a direct installer.


  • That’s what I was thinking too… If they’re running Ubuntu then they’re probably installing packages through snaps, and that’s always been the worst experience for me. Those apps bog down my whole system, crash or lock up, and are generally unusable. I run Debian but have run into apps that wanted me to use a snap install. For one package I managed to find a direct installer that is rock-solid compared to the snap version; the rest of the programs I abandoned.

    Firefox (since it was mentioned) is one of those things I believe Ubuntu installs as a snap, despite there being a perfectly usable .deb package. I applaud the effort behind snap and the other attempts to make a universal installation system, but it is so not there yet and shouldn’t be the default on any distro.


  • But why doesn’t it ever empty the swap space? I’ve been using vm.swappiness=10 and I’ve tried vm.vfs_cache_pressure at both 100 and 50. Checking ps, I’m not seeing any services that would be idling in the background, so I’m not sure why the system thought it needed to put anything in swap. (And FWIW, I run two servers with identical services that I load balance between, but the other machine has barely used any swap space – which adds to my confusion about the difference.)

    Why would I want to reduce the amount of memory in the server? Isn’t all that cache memory being used to help things run smoother and reduce drive I/O?
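
    For what it’s worth, ps won’t show swap usage directly; the per-process numbers live in /proc/<pid>/status under VmSwap. A rough sketch of the kind of check I mean (Linux-only; run as root to see every process):

      #!/usr/bin/env python3
      """Sketch: list processes holding pages in swap, largest first."""
      import glob

      usage = []
      for path in glob.glob("/proc/[0-9]*/status"):
          try:
              with open(path) as f:
                  fields = dict(line.split(":", 1) for line in f if ":" in line)
          except (FileNotFoundError, PermissionError):
              continue  # process exited, or belongs to another user and we're not root
          # kernel threads have no VmSwap entry, so default to zero
          swap_kb = int(fields.get("VmSwap", "0 kB").split()[0])
          if swap_kb > 0:
              usage.append((swap_kb, fields["Name"].strip()))

      for kb, name in sorted(usage, reverse=True):
          print(f"{kb:>9} kB  {name}")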


  • And how does cache space figure into this? I have a server with 64GB of RAM, of which 46GB is being used by system cache, but I only have 450MB of free memory and 140MB of free swap. The only ‘volatile’ service I have running is slapd, which can run in bursts of activity; otherwise the only things of consequence running are webmin and some VMs which collectively can use up to 24GB (though they actually use about half that), but there’s no reason those should hit swap space. I just don’t get why the swap space is being exhausted here.
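
    One thing worth checking is MemAvailable versus MemFree: cache counts against “free” but most of it is reclaimable, so MemFree alone understates the real headroom. A minimal sketch reading /proc/meminfo:

      #!/usr/bin/env python3
      """Sketch: free vs. available memory from /proc/meminfo (values in kB)."""

      info = {}
      with open("/proc/meminfo") as f:
          for line in f:
              key, value = line.split(":", 1)
              info[key] = int(value.split()[0])

      gib = 1024 * 1024  # kB -> GiB
      # MemFree is truly idle RAM; MemAvailable adds cache the kernel can reclaim.
      for key in ("MemTotal", "MemFree", "MemAvailable", "Cached", "SwapTotal", "SwapFree"):
          print(f"{key:13} {info[key] / gib:7.1f} GiB")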



  • Even the older versions work pretty well, depending on the features you need. I use it for all my 3D modeling; I could never get the hang of other CAD software, but this one just “makes sense” to me. I even used it last year to create a model of a trailer I wanted to build, worked out the finer details of how everything would fit together along with some options like adding ramps, and once we got to the point of building the trailer it was just a matter of copying the dimensions and cutting out all the steel.



  • But is it decentralized? Do the results from multiple spiders get added to give everyone the same quality searches or do I need to scan the whole internet myself?

    [edit] I was looking at this earlier and couldn’t find the info. Started searching again just now and found it immediately… of course… (The answer is YES)



  • Yep, that’s exactly what I was looking at (https://github.com/searx/searx). As I said, it was a QUICK dive but the wording was enough to make me shy away from it. For all the years I’ve been running servers, I won’t put up anything that requires the latest/greatest of any code because that’s where about 90% of the zero-days seem to come from. Almost all the big ones I’ve seen in the last few years were things that made me panic until I realized that oh, if your updates are more than a year old then none of this affects you. And the one that DID affect me had already been patched through a security release.



  • I just did a quick dive into this and have some concerns. SearX appears to no longer be maintained and was last updated three years ago. SearXNG was forked from it to use more recent libraries, but there were concerns that those libraries are not always stable or fully vetted, and that SearXNG does not hold to the same standard for user privacy. It’s a shame that SearX shut down; that one actually sounds like a project I would have jumped on.




  • So to start with, you mentioned the underextrusion on a previous print. That seems like a good starting point: when was the last time you checked your E-steps? Basically you want to disconnect the Bowden tube from the hotend, extrude a short amount of filament and mark its position, then extrude 100mm of filament and measure how much actually came out. From there, the correction is a simple ratio: scale your current E-steps by the requested length over the measured length (see the sketch below). Ideally you should have exactly 100mm come out, but there’s a good chance you’ll get less than this. You can also make some adjustment from your slicer (in the material flow section), but that can cause various other problems, so ultimately you’ll want to get this value corrected in the printer itself.
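
    A minimal sketch of that math, using made-up numbers (on Marlin-based firmware like the Ender’s, read your actual steps/mm with M503, set the new value with M92, and save it with M500):

      # E-steps correction: scale current steps/mm by requested vs. measured length.
      def corrected_esteps(current: float, requested_mm: float, measured_mm: float) -> float:
          return current * requested_mm / measured_mm

      # Hypothetical numbers: firmware set to 93 steps/mm, asked for 100mm,
      # but only 95mm actually came out of the extruder.
      new_value = corrected_esteps(93.0, 100.0, 95.0)
      print(f"M92 E{new_value:.2f}")  # -> M92 E97.89, then M500 to save to EEPROM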

    While the Bowden tube is disconnected, this is a good time to try doing a cold-pull. Heat the hotend up to around 200C again, stick some scrap filament into it so it just starts to push filament out the bottom, then let the hotend cool back down to near 160C (or maybe even cooler, but this is a good start). Pull the blob of filament out of the hotend, and you should have a bullet-shaped plug on the end of it. Look this over to see if there is any burnt filament, contaminants, or anything else that looks weird. If you see obvious contaminants then this is likely the cause of your underextrusion. After doing this, you should also check the nozzle itself; sometimes as they wear out, a bit of the brass gets pushed over and blocks the flow. Always keep spare nozzles on hand; they wear out faster than anything else.

    And one more thing before reassembling… check the extruder itself. After some time it is common for the brass gear to get clogged up with filament or simply have the teeth wear down, especially from some of the fancy filaments like wood, glow-in-the-dark, or even the metallics. The results of these problems should be fairly obvious as a clicking from the extruder while printing. Clean out any obvious filament remains, or you can get a pack of replacement gears pretty cheap.

    When you are ready to reassemble the Bowden tubing, check the fittings at both ends. These wear out easily, so you may see signs that the tubing has been shifting back and forth. The fittings really need to prevent any movement in the Bowden tubing, so if you’re going to order parts anyway, get a pack of these to have on hand. Bad fittings can cause serious underextrusion any place the extruder reverses direction, like at the end of a wall, but the wall itself should lay down fairly cleanly.

    Hope that gives you some ideas to run with. Some of this will depend on the specific model of Ender you have, but if it was working fine and just suddenly started having problems then something blocking the filament flow is at the top of the list of possibilities.


  • I read this as a sign that his lack of action during COVID was in fact intentional. He also confirmed his intent for the 25% tariffs on Mexico (which will raise food prices) and on Canada (continuing the problem with new housing being unaffordable). Plus he signed another bill today demanding that US history taught in schools be white-washed of those “troublesome” notions like slavery. Taken all together, it’s obvious that the oligarchy wants the working class to be broke, hungry, homeless, and stupid enough to vote for his type again.

    Your king has spoken. Long live the king.

    [Edit] More news on the health care front… He’s blaming WHO for his own gross mismanagement of the COVID-19 pandemic, and he has pretty much gutted every program that aims to keep health care and medication prices manageable. Isn’t it funny how someone who ran for office on the idea of making things more affordable for everyone has already in his first day made costs skyrocket? https://www.statnews.com/2025/01/20/trump-executive-orders-health-care-drug-pricing-aca-covid-gender-discrimination/



  • 22:57:20 up 70 days, 16:04, 21 users, load average: 1.10, 1.14, 1.02

    Honestly, if you were expecting a drive failure in three years, you probably have some other problem. The SSD in my desktop is clocking 7.3 years and I never shut down my machines except to reboot. On my servers, I have run used HDDs from eBay for up to ten years (only retired for upgrades). My NAS is currently running a mixture of used drives from eBay and some refurbs from Amazon, and I don’t anticipate seeing any issues for at least a few more years.


  • More drives also means higher power consumption, so you would need a larger battery backup.

    It also means more components prone to failure, which increases your chance of losing data. More drives means more moving parts and electrical connections (data and power cables, backplanes), plus more generated heat that you need to cool down.

    I’d be more curious about how many failures you’re seeing that make you think smaller drives would be the better option. I have historically used old drives from eBay or manufacturer refurbs, and even the worst of those have been reliable enough that I only have to replace a drive once every year or two. With RAID6 or raidz2 you should have plenty of protection against data loss during a rebuild (see the sketch below). I wouldn’t consider using a lot of little drives unless it was the only option I had or someone gave them away for free.
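
    To put rough numbers on the “more drives, more failures” tradeoff, here’s a back-of-the-envelope sketch. The 3% annual failure rate, 24-hour rebuild window, and drive counts are assumed values, and it treats failures as independent:

      # Chance of losing a raidz2/RAID6 array during a rebuild: the array survives
      # two failures, so after one dead drive, data loss needs two MORE failures
      # before the rebuild finishes.
      from math import comb

      def p_loss_during_rebuild(n_drives: int, afr: float, rebuild_hours: float) -> float:
          p = afr * rebuild_hours / (365 * 24)  # per-drive failure odds in the window
          survivors = n_drives - 1              # one drive has already failed
          def p_k(k: int) -> float:
              return comb(survivors, k) * p**k * (1 - p)**(survivors - k)
          return 1 - p_k(0) - p_k(1)            # P(two or more also fail)

      # Same raw capacity, two hypothetical layouts: 6x8TB vs. 24x2TB.
      for n in (6, 24):
          print(f"{n} drives: {p_loss_during_rebuild(n, afr=0.03, rebuild_hours=24):.1e}")

    With those assumptions the 24-drive layout comes out roughly 25x more likely to lose data during a rebuild, simply because there are more survivors left to fail.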



  • Are you sure about that? Ever hear about these supposedly “predictable” network names in recent Linux versions? Yeah, those can change too. I was trying to set up a new firewall with two internal NICs plus a 4-port card, and they kept moving around. I finally figured out that if I cold-booted, the NICs would come up in one order, and if I warm-booted they would come up in a completely different order (the ports on the card would reverse the order they were detected in). This was completely the fault of systemd, because when I installed an older Linux and used udev to map the ports, it worked exactly as predicted. These days I trust nothing.
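
    These days the first thing I’d do on a box like that is snapshot which MAC sits behind which interface name, so you can at least prove the reordering between boots (and feed stable udev rules from it). A minimal sketch, Linux-only:

      #!/usr/bin/env python3
      """Sketch: print each interface's MAC so reordering between boots is provable."""
      import os

      SYS_NET = "/sys/class/net"
      for iface in sorted(os.listdir(SYS_NET)):
          try:
              with open(os.path.join(SYS_NET, iface, "address")) as f:
                  mac = f.read().strip()
          except OSError:
              continue
          if mac and mac != "00:00:00:00:00:00":  # skip loopback / MAC-less interfaces
              print(f"{iface:12} {mac}")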