  • What you are trying to do is called P2V, for Physical to Virtual. VMware used to have tools specifically for this. I haven’t used them in a decade or more, but they likely still work. That should let you spin up the virtual system in VMware Player (I’d test this before wiping the drive), and you can likely convert the resulting VM to other formats (e.g. VirtualBox). Again, test it out before wiping the drive; nothing sucks like discovering you lost data because you just had to rush things.
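
    If the VMware tooling doesn’t pan out, a rough fallback sketch using standard dd/qemu-img tooling (device and file names are placeholders, and you need scratch space at least the size of the source disk):

    # Image the physical disk to a raw file (replace /dev/sdX with the real device).
    sudo dd if=/dev/sdX of=olddrive.img bs=4M status=progress

    # Convert the raw image to a VirtualBox VDI (qemu-img can also emit vmdk, qcow2, etc.).
    qemu-img convert -f raw -O vdi olddrive.img olddrive.vdi

    Boot the converted disk in a new VM and expect some driver/bootloader fiddling; one more reason to test before wiping anything.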



  • If the goal is stability, I would have likely started with an immutable OS. This provides assurances that the base OS remains in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list; it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but that has risks. By using something like docker-compose and tying services to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to tag things to specific versions and update those manually (see the sketch below).
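
    For example, a pinned service in a docker-compose.yaml might look like this (service and image names are placeholders):

    services:
      someservice:
        # Pinned to a specific version; bump this deliberately rather than
        # riding ":latest" and risking surprise breaking changes across the stack.
        image: someimage:1.2.3
        restart: unless-stopped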

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is not a flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple (a minimal sketch follows). Sure, they can get super complex and do some amazing stuff, but you don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
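
    As a minimal sketch of that kind of single-purpose Dockerfile (the package name and paths are hypothetical placeholders):

    # Single-purpose container; "someapp" stands in for whatever package
    # lacks a flatpak, AppImage or pre-made image.
    FROM debian:stable-slim
    RUN apt-get update \
        && apt-get install -y --no-install-recommends someapp \
        && rm -rf /var/lib/apt/lists/*
    # Keep state on a mount point so the container itself stays disposable.
    VOLUME /data
    CMD ["someapp", "--data-dir", "/data"]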


  • Traditions exist to pass on learned knowledge and for social cohesion. Prior to widespread education, many local groups had to learn the same lessons and find a way to pass those on from person to person and generation to generation. Given that this also tended to coincide with societies not having the best grasp on reality (germ theory is not that old), the knowledge being passed on was often specious. But, it might also contain useful bits which worked.

    For example, some early societies would pack honey into a wound. Why? Fuck if they knew, but that was what the wise men said to do. It turns out that honey is a natural antiseptic and helps to prevent infection. They had no knowledge of this, but had built up a tradition around it, probably because it seemed to work. And so that got passed on.

    The other aspect of traditions is social. When people do a thing together, they tend to bond and become willing to engage in more pro-social behaviors. It isn’t all that important what the activity is, so long as people do it together. The more people feel like they are part of the in-group, the more they will work to protect and sacrifice for that in-group.

    Sure, a lot of traditions are complete crap. They are superstition wrapped in a “that’s the way we’ve always done it” attitude. But it’s important not to overlook their significance to a population. The Christian Church ran headlong into this time and again through European history as it sought to convert various groups. Those groups tended to hold on to old traditions and just blended them into Christianity. This resulted in a fairly fractured religious landscape, but the Church generally tolerated it, because trying to quash it led to too many problems. While stories of various Easter and Christmas traditions being Pagan in origin are likely apocryphal, there are echoes of older religious beliefs hanging about.

    It’s best to be careful when looking at a particular group’s traditions and calling them “backwards” or some other epithet. Yes, they almost certainly have no basis in the scientific method. But the value of those traditions to a people is very real. And so long as they are not harmful to others, you’re likely to do more harm trying to remove them than by simply allowing folks to just enjoy them.


  • It’s going to depend on what types of data you are looking to protect, how you have your wifi configured, what type of sites you are accessing and whom you are willing to trust.

    To start with, if you are accessing unencrypted websites (HTTP), at least part of the communications will be in the clear and open to inspection. You can mitigate this somewhat with a VPN. However, this means that you need to implicitly trust the VPN provider with a lot of data. Your communications to the VPN provider would be encrypted, though anyone observing your connection (e.g. your ISP) would be able to see that you are communicating with that VPN provider. And any communications from the VPN provider to/from the unencrypted website would also be in the clear and could be read by someone sniffing the VPN exit node’s traffic (e.g. the ISP used by the VPN exit node). Lastly, the VPN provider would have a very clear view of the traffic and be able to associate it with you.

    For encrypted websites (HTTPS), the data portion of the communications will usually be well encrypted and safe from spying (more on this in a sec). However, it may be possible for someone (e.g. your ISP) to snoop on what domains you are visiting. There are two common ways to do this. The first is via DNS requests. Any time you visit a website, your browser will need to translate the domain name to an IP address. This is what DNS does, and it is not encrypted by default. Also, unless you have taken steps to avoid it, it’s likely your ISP is providing DNS for you. This means that they can just log all your requests, giving them a good view of the domains you are visiting. You can use something like DNS over HTTPS (DoH), which does encrypt DNS requests and sends them to specific servers; but this usually requires extra setup, and it will work regardless of whether you are on your local WiFi or a 5G/4G network.

    The second way to track HTTPS connections is via a process called Server Name Indication (SNI). In short, when you first connect to a web server, your browser needs to tell that server which domain it wants to connect to, so that the server can send back the correct TLS certificate. This is all unencrypted, and anyone in between (e.g. your ISP) can simply read that SNI request to know what domains you are connecting to. There are mitigations for this, specifically Encrypted Server Name Indication (ESNI), but that requires the web server to implement it, and it’s not widely used. This is also where a VPN can be useful, as the SNI request is encrypted between your system and the VPN exit node. Though again, it puts a lot of trust in the VPN provider, and the VPN provider’s ISP could still see the SNI request as it leaves the VPN network. Though, associating it with you specifically might be hard.
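
    If you want to see the difference yourself, a quick sketch with real tools (lemmy.ml is just an example domain): a plain DNS lookup is readable on the wire, while a DoH query for the same name rides inside HTTPS.

    # Plain DNS: this query to your resolver is readable by anyone on the path.
    dig lemmy.ml A

    # DoH: the same lookup wrapped in HTTPS (Cloudflare's JSON API endpoint).
    curl -s -H 'accept: application/dns-json' \
        'https://cloudflare-dns.com/dns-query?name=lemmy.ml&type=A'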

    As for the encrypted data of an HTTPS connection, it is generally safe. So, someone might know you are visiting lemmy.ml, but they wouldn’t be able to see what communities you are reading or what you are posting. That is, unless either your device or the server is compromised. This is why mobile device malware is a common attack vector for state-level threat actors. If they have malware on your device, then all the encryption in the world ain’t helping you. There are also some attacks around forcing your browser to use weaker encryption, or even the attacker compromising the server’s certificate. Though these are likely in the realm of targeted attacks and unlikely to be used on a mass scale.

    So ya, not exactly an ELI5 answer, as there isn’t a simple answer. To try and simplify, if you are visiting encrypted websites (HTTPS) and you don’t mind your mobile carrier knowing what domains you are visiting, and your device isn’t compromised, then mobile data is fine. If you would prefer your home ISP being the one tracking you, then use your home wifi. If you don’t like either of them tracking you, then you’ll need to pick a VPN provider you feel comfortable with knowing what sites you are visiting and use their software on your device. And if your device is compromised, well you’re fucked anyway and it doesn’t matter what network you are using.


  • sylver_dragon@lemmy.world to Linux@lemmy.ml · Antiviruses? (18 days ago)

    Ultimately, it’s going to be down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. Lockbit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, watch for scam sites trying to convince you to paste random bash commands into the console (Clickfix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a Clickfix/Fake AV page or you’ll get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.
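
    A minimal sketch of the scheduled-scan half (the schedule and path are just examples, and if you run the freshclam daemon you can drop the manual signature update):

    # Example /etc/cron.d entry: Sundays at 03:00, update signatures, then
    # recursively scan /home, printing only infected files.
    0 3 * * 0  root  freshclam --quiet && clamscan --recursive --infected /home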


  • With intermittent errors like that, I’d work through the following test plan:

    1. Check for disk errors - You already did this with the SMART tools.
    2. Check for memory errors - Boot a USB drive to memtest86 and test.
    3. Check for overheating issues - Thermal paste does wear out; check your logs for overheating warnings (see the sketch after this list).
    4. Power issues - Is the system powered straight from the wall or a surge protector? While it’s less of an issue these days, AC power coming from the wall should have a consistent sine wave. If that wave isn’t consistent, it can cause a voltage ripple on the DC side of the power supply. This can lead to all kinds of weird fuckery. A good surge protector (or UPS) will usually filter out most of the AC inconsistencies.
    5. Power Supply - Similar to above, if the power supply is having a marginal failure it can cause issues. If you have a spare one, try swapping it out and seeing if the errors continue.
    6. Processor failure - If you have a spare processor which will fit the motherboard, you could try swapping that in and seeing if the errors continue.
    7. Motherboard failure - Same type of thing. If you have a spare, swap and look for errors.
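
    For the heat check in step 3, a quick sketch (assumes the lm-sensors package is installed; the exact log strings vary by kernel and driver):

    # Current temperatures as the kernel sees them.
    sensors

    # Kernel log entries hinting at thermal throttling or critical temperatures.
    journalctl -k | grep -iE 'thermal|throttl|critical'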

    At this point, you’ll have tested basically everything and likely found the error. For most errors like this, I’ve rarely seen it go past the first two tests (drive/RAM failure), with the third (heat) picking up the majority of the rest. Power issues I’ve only ever seen in old buildings with electrical systems which probably wouldn’t pass an inspection. Though, bad power can cause other hardware failures. It’s one reason to have a surge protector in line at all times anyway.


  • I started self hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things, and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for the AppID, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
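
    As a rough sketch of the shape such a Dockerfile can take (not the exact template; cm2network/steamcmd is a commonly used community base image, 896660 is the Valheim dedicated server AppID, and the paths and launch flags are illustrative, untested guesses):

    # Sketch of a steamcmd-based dedicated server image.
    FROM cm2network/steamcmd:latest
    # The AppID is the main per-game tweak; 896660 is Valheim's dedicated server.
    ARG APPID=896660
    # Fetch the server files via anonymous login.
    RUN ./steamcmd.sh +force_install_dir /home/steam/server \
        +login anonymous +app_update ${APPID} validate +quit
    WORKDIR /home/steam/server
    # Game/query ports; also per-game.
    EXPOSE 2456-2457/udp
    # Save data lives on a mount point (illustrative path).
    VOLUME /home/steam/server/saves
    # Launch flags are illustrative; the real server wants more options.
    CMD ["./valheim_server.x86_64", "-name", "MyServer", "-world", "MyWorld"]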






  • “a Bambu Labs compatible heat sink, an E3D V6 ring heater, and a heat break assembly are required”
    “a fan was sacrificed to mount a Big Tree Tech control board. Most everything ended up connecting to the new board without issue, except for the extruder.”
    “made a custom mount for the ubiquitous Orbiter extruder.”
    “The whole project was nicely tied up with a custom-made screen mount.”

    So, other than the enclosure and print bed, what’s actually left of the original printer? It seems like the way to get a Bambu printer to run FOSS is to open the box from Bambu Labs, toss everything inside the box in the trash, drop a custom built printer in the box, and then proceed with your unboxing.



  • I have to agree with @paf@jlai.lu on this. I’d much rather have those models as part of the ecosystem than not. I do think part of the 3D printing hobby is learning to look at a model and recognize what can be printed on what type of printer, where supports are needed and where modifications may need to be made. For example, I recently purchased a model through TitanCraft, and the models they create are clearly designed with a resin printer in mind. They have some small features which are difficult or impossible to print on an FDM printer. While I knew that mini-figure models can be challenging on FDM, I went ahead with the purchase anyway. And the resulting mini-fig’s staff was so thin my printer just couldn’t print it cleanly. I had to load the STL into Blender and spend an hour or two separating the staff out from the rest of the model, and then I thickened it considerably. Sure, the haft of the staff is a bit thick for the proportions of the model, but not too bad.

    I make a similar evaluation of stuff I see on the various model sharing sites before I try to print it. Does it need supports? Are some of the details going to be very hard or impossible for my printer to make? Should I split the model? And, while I am pretty crap at Blender, I may consider doing some simple edits to make a model easier to print and/or make changes I want. For example, I liked these ghosts but didn’t care for the spring, and just wanted them hollow so I could stuff a UV LED inside them. With glow-in-the-dark PLA, these look neat at night. So, I beat my head against Blender until I had them how I wanted them.

    So, I wouldn’t want to stifle other peoples’ creativity. Let them create and enjoy the fact that people are willing to create and release this stuff for you to print. If it doesn’t work out, fix it and re-release it.


  • “force binary choices that don’t align with household rules or with children’s maturity levels.”

    This has been my main experience with “parental controls”. As soon as they are turned on, I lose any ability to manage the experiences available to my children. So, in areas where I see them as mature enough to handle something, the only way I can allow them access to that experience is to completely bypass the controls. In many ecosystems, if I judge that one of my children could handle a game and the online risks associated with it, I can’t simply allow that game. Instead, I need to maintain a full adult account for them to use. You also run into a lot of situations where the reason a game is banned for children is unclear, or is an obvious “better safe than sorry” knee-jerk reaction. Ultimately, parental controls end up being far more frustrating than empowering. I’d rather have something that just says, “this game/movie/etc. your kid is asking for is restricted based on reasons X, Y and Z. Do you want to allow it?” Log my response and go with it. Like damned near any choice in software settings, quit trying to out-think me on what I want; give me a choice and respect that choice.


  • I’d suggest looking into some sort of auto bed leveling upgrade. My previous printer (Monoprice MP10 Mini) had the bed leveling sensor fail and be non-replaceable. The amount of futzing with first layer settings was a nightmare, even with a glass bed. My new printer (Creality K1C) does automatic bed leveling with a load sensor, and the difference has been night and day. Most prints, I can hit start and not have to fight anything (except TPU, holy hell TPU has been a fight). The sensor won’t guarantee perfect first layers, but goddamn it’s a lot easier to get something reasonable.


  • Fun fact, in some countries the 3.5" floppies were called “stiffy disks”. You know, because the outer casing was “stiff” as opposed to the floppy 5.25" disks. This discovery led to a lot of chuckling among the team I worked with when we opened a new product from one of those countries and read the manual. The instruction to “insert stiffy disk” still leads most of us to chuckling today.


  • “ever had to worry whether you’d parked your hard drive’s heads before moving it, child…?”

    Yes, and you also parked it before shutting down the system, every time. Once the hard drive spun down, the heads would simply settle onto the platters wherever they happened to be. While not instantly fatal, it wasn’t good for the drive. So, you’d park the drive before flipping the power switch.


  • It’s been a few years since I did my initial setup (8 apparently, I just checked); so my info is definitely out of date. Looking at the Ubuntu site, they still list Ubuntu 16.04, but I think the info on setting it up is still valid. Though, it looks like they only list setting up a mirror or a stripe set without parity. A mirror is fine, but you trade half your storage space for complete data redundancy. That can make sense, but usually not for a self hosting situation. A stripe set without parity is only useful for losing data; never use this. The option you’ll want is a raidz, which is a stripe set with parity. The command will look like:

    zpool create zpool raidz /dev/sdb /dev/sdc /dev/sdd
    

    This would create a zpool named “zpool” from the drives at /dev/sdb, /dev/sdc and /dev/sdd.
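
    Afterward, a quick sanity check that the pool came up as expected:

    # Show pool layout and health; the raidz vdev should list all three disks.
    zpool status zpool

    # Confirm the filesystem is mounted and shows the expected usable size.
    zfs list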

    I would suggest spending some time reading up on the setup. It was actually pretty simple to do, but it’s good to have a foundation to work with. I also have this link bookmarked, as it was really helpful for getting rolling snapshots set up. As with the data redundancy given by RAID, snapshots do not replace backups; but they can be used as part of a backup strategy. They also help when you make a mistake and delete/overwrite a file.
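
    Manual snapshots are a single command (the dataset name here is just an example); rolling-snapshot tools wrap this in a schedule:

    # Take a named snapshot of a dataset, then list existing snapshots.
    zfs snapshot zpool/data@before-upgrade
    zfs list -t snapshot

    # Roll back to that point if something goes sideways (discards newer changes).
    zfs rollback zpool/data@before-upgrade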

    Finally, to answer your question about hardware, my recollection and experience has been that ZFS is not terribly demanding of CPU. I ran an Intel Core i3 for most of the server’s life and only upgraded when I realized that I wanted to run game servers on it. Memory is more of an issue. The minimum requirement most often cited is 8GB, but I also saw a rule of thumb that you want 1GB of memory for each TB of storage. In the end, I went with 8GB of RAM, as I only had 4TB of storage (three 2TB disks in a RAIDZ1). But also think about what other workloads you have on the system. When built, I was only running NextCloud, NGinx, Splunk, PiHole and WordPress (all in docker containers), and the initial 8GB of RAM was doing just fine. When I started running game servers, I started to run into issues. I now have 16GB and am mostly fine. Some game servers can be a bit heavy (e.g. Minecraft, because fucking Java), but I don’t normally see problems. Also, since the link I provided mentioned it, skip ECC memory. It’s almost never worth the cost, and for home use that “almost never” gets much closer to “actually never”.

    When choosing disks, keep in mind that you will need a minimum of 2 disks, and you effectively lose the storage space of one of the disks in the pool to parity (assuming all disks are the same size). It is also best for all of the disks to be the same size. You can technically use different size disks in the same pool; but the larger disks get treated as the same size as the smaller disks. So long as the pool is healthy, read speeds are better than a single disk, as the read can be spread out among the pool. But write speeds can be slower, as the parity needs to be calculated at write time. Otherwise, you’re pretty free to choose any disks which will be recognized by the OS. You mention that 1TB is filling up; so you’ll want to pick something bigger. I mentioned using spinning disks, as they can provide a lot more space for the money. Something like a 14TB WD Red drive can be had for $280 ($20/TB). With three of those in a RAIDZ1 pool, you get ~28TB of usable storage for $840 (about $30 per usable TB) and can tolerate one disk failure without losing data. With solid state disks, you can expect costs closer to $80/TB. Though, there is a tradeoff in speed. So, you need to consider what type of workloads you expect the storage pool to handle. Video editing on spinning rust is not going to be fun. Streaming video at 4k is probably OK, though 8k is going to struggle.

    A couple other things to think about are space in the chassis, drive connections and power. Chassis space is pretty obvious: you gotta put the disks in the box. Technically, you don’t have to mount the disks, they can just be sitting at the bottom of the case; but this can cause problems with heat shortening the lifespan of the drives. It’s best to have them properly mounted with fans pushing air over them. Drive connections are one of those things where you either have the headers or you don’t. Make sure your motherboard can support 3 more drives with the chosen interface (SATA, NVMe, etc.) before you get the drives. Nothing sucks more than having a fancy new drive only to be unable to plug it into the motherboard. Lastly, drives (and especially spinning drives) can be power hungry. Make sure your power supply can support the extra power requirements.

    Good luck with whatever route you pick.