Just a basic programmer living in California

  • 2 Posts
  • 84 Comments
Joined 1 year ago
Cake day: February 23, 2024


  • That’s a helpful one! I also have a function that creates a tmp directory and cds into it, which I frequently use to open a scratch space. I use it a lot for unpacking tar files, but for other stuff too.

    (These are nushell functions)

    # Create a directory, and immediately cd into it.
    # The --env flag propagates the PWD environment variable to the caller, which is
    # necessary to make the directory change stick.
    def --env dir [dirname: string] {
      mkdir $dirname
      cd $dirname
    }
    
    # Create a temporary directory, and cd into it.
    def --env tmp [
      dirname?: string # the name of the directory - if omitted the directory is named randomly
    ] {
      if ($dirname != null) {
        dir $"/tmp/($dirname)"
      } else {
        cd (mktemp -d)
      }
    }
    




  • The images probably don’t have to look meaningful as long as it is difficult to distinguish them from real images using a fast, statistical test. Nepenthes uses Markov chains to generate nonsense text that statistically resembles real content, which is a lot cheaper than LLM generation. Maybe Markov chains would also work to generate images? A chain could generate each pixel based on the previous pixel, or based on its neighbors, or some such thing.
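
    Here is a rough sketch of that idea, assuming 8-bit grayscale pixels and a first-order chain trained on a single real image - the function names are just for illustration:

    // Learn how often each pixel value follows each other value in a real image,
    // then emit a stream of nonsense pixels with the same first-order statistics.
    function learnTransitions(pixels: Uint8Array): number[][] {
      const counts = Array.from({ length: 256 }, () => new Array<number>(256).fill(0));
      for (let i = 1; i < pixels.length; i++) {
        counts[pixels[i - 1]][pixels[i]] += 1;
      }
      return counts;
    }

    function generatePixels(counts: number[][], length: number): Uint8Array {
      const out = new Uint8Array(length);
      out[0] = Math.floor(Math.random() * 256);
      for (let i = 1; i < length; i++) {
        const row = counts[out[i - 1]];
        const total = row.reduce((a, b) => a + b, 0);
        if (total === 0) {
          // never saw this value while learning; fall back to a uniform pick
          out[i] = Math.floor(Math.random() * 256);
          continue;
        }
        // roulette-wheel sample the next value in proportion to observed counts
        let r = Math.random() * total;
        let next = 0;
        while ((r -= row[next]) > 0) next += 1;
        out[i] = next;
      }
      return out;
    }

    A neighbor-based version would presumably condition on the pixel above as well as the one to the left, at the cost of a much bigger transition table.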







  • It sounds like you’re including NixOS in this category so I guess I have switched.

    I also tried Fedora Silverblue a bit, and it seemed to me that ostree distros are built on a cool idea that comes with compromises I didn’t like:

    Some stuff doesn’t work in Flatpak sandboxing - at least not yet. One example that comes to mind is Firefox integration with the desktop 1Password app. Maybe I could make this work by tinkering with Flatseal, but when I install the native packages in NixOS this integration just works.

    I don’t want my CLI tools in a container running a different distro. For example, if I’m using Distrobox to set up a dev environment, that means installing a distro with traditional package management to get around not being able to install packages natively in the host OS. I get that Distrobox enables isolated dev environments for different projects. But for that use case I think Nix devshells are more flexible, robust, and performant.

    Nix also has its problems - in particular the usual complaint that the documentation is not comprehensive enough to match the complexity of the system.


  • It comes down to: what can be done or pre-generated at build or publish time, versus what must be done at runtime (such as when a viewer accesses a post)? Stuff that must be done at runtime is stuff you don’t have the necessary information to do at publish time. For example you can’t pre-generate a comments section because you don’t know what the comments will be before a post is published.

    For stuff like email digests and social media posts I might set up a CI/CD system (likely using GitHub Actions) that publishes static content, and does those other tasks at the same time. Or if I want email digests delivered on a set schedule instead of at publish time I might set up a scheduled workflow in the same CI/CD system. Either way you can have automation that is associated with your website that isn’t directly integrated with your web server.

    As you suggest some stuff that must be done at runtime can be done with frontend Javascript. That’s how I implement comments on my static site. I have Javascript that fetches a Mastodon thread that I set up for the purpose, and displays replies under the post.
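
    For illustration, a stripped-down sketch of that approach - the instance URL, status id, and #comments element are placeholders, and it assumes Mastodon’s public statuses/:id/context endpoint:

    // Fetch replies to a Mastodon status and render them as comments under a post.
    const INSTANCE = "https://mastodon.example";  // hypothetical instance
    const STATUS_ID = "123456789";                // the thread set up for this post

    async function loadComments(): Promise<void> {
      const res = await fetch(`${INSTANCE}/api/v1/statuses/${STATUS_ID}/context`);
      if (!res.ok) return;                        // fail quietly on a static page
      const { descendants } = await res.json();   // replies to the status
      const container = document.querySelector("#comments");
      if (!container) return;
      for (const reply of descendants) {
        const item = document.createElement("article");
        const author = document.createElement("strong");
        author.textContent = reply.account.display_name || reply.account.username;
        const body = document.createElement("div");
        body.innerHTML = reply.content;           // Mastodon serves sanitized HTML
        item.append(author, body);
        container.append(item);
      }
    }

    loadComments();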

    I don’t exactly follow your first and fourth requirements so it’s hard for me to comment more specifically. Transforming information from CSVs to HTML sounds like something that could naturally be done at build time if you have the CSVs at build time. But I’m not clear if that’s the case in your situation.


  • This is a big reason for me. Also because if anything breaks - even if my system becomes unbootable - I can select the previous generation from the boot menu, and everything is back to working.

    It’s very empowering, the combination of knowing that I won’t irrevocably break things, and that I won’t build up cruft from old packages and hand-edited config files. It’s given me confidence to tinker more than I did in other distros.




  • It seems to me that you’re asking about two different things: zero-knowledge authentication, and public key authentication. I think you’d have a much easier time using public key auth. And tbh I don’t know anything about the zero-knowledge stuff. I don’t know what reading resources to point to, so I’ll try to provide a little clarifying background instead.

    The simplest way to authenticate a user if you have their public key is probably to require every request to be signed with the corresponding private key. The server gets the request, verifies the signature, and that’s it, that’s an authenticated request. Although adding a nonce to the signed content would be a good idea if replay attacks might be a problem.
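
    As a rough sketch of that flow (the payload shape, nonce field, and Ed25519 choice are assumptions, not a standard), using Node’s built-in crypto:

    import { generateKeyPairSync, randomUUID, sign, verify } from "node:crypto";

    // In a real system the server would already have the user's public key on file.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    // Client: sign the request body plus a fresh nonce so a captured request
    // can't simply be replayed later.
    const payload = JSON.stringify({ nonce: randomUUID(), body: { action: "delete", id: 42 } });
    const signature = sign(null, Buffer.from(payload), privateKey);

    // Server: verify the signature against the stored public key, then check that
    // this nonce hasn't been seen before. If both pass, the request is authenticated.
    const ok = verify(null, Buffer.from(payload), publicKey, signature);
    console.log(ok); // true for an untampered payload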

    If you want to be properly standards-compliant you want a standard “envelope” for signed requests. Personally I would use the multipart/signed MIME type since that is a ready-made, standardized format that is about as simple as it gets.
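
    Roughly, the envelope is just two MIME parts - the content, then a detached signature over it - under a multipart/signed content type. A hand-rolled sketch, where the protocol and micalg values are the common PGP ones purely as an example:

    // Build a multipart/signed (RFC 1847) body around a signed JSON payload.
    // `detachedSignature` stands in for whatever signature scheme you use.
    const boundary = "signed-boundary-42"; // any string not appearing in the parts
    const body = JSON.stringify({ action: "delete", id: 42 });
    const detachedSignature = "<signature over body goes here>";

    const contentTypeHeader =
      `Content-Type: multipart/signed; protocol="application/pgp-signature"; ` +
      `micalg=pgp-sha256; boundary="${boundary}"`;

    const envelope = [
      `--${boundary}`,
      "Content-Type: application/json",
      "",
      body,
      `--${boundary}`,
      "Content-Type: application/pgp-signature",
      "",
      detachedSignature,
      `--${boundary}--`,
    ].join("\r\n");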

    You mentioned JSON Web Tokens (JWTs) which are a similar idea. That’s a format that you might think you could use for signing requests - it’s sort of another quasi-standardized envelope format for signed data. But the wrinkle is that JWTs aren’t used to sign arbitrary data. The data is expected to be a set of “claims”. A JWT is a JSON header, JSON claims, and a signature, all three of which are serialized with base64 and concatenated. Usually you would put a JWT in the Authorization header of an HTTP request like this:

    Authorization: Bearer $jwt
    

    Then the server verifies the JWT signature, inspects the “claims”, and decides whether the request is authorized based on whether it has the right claims. JWTs make sense if you want an authentication token that is separate from the request body. They are more complicated than multipart/signed content since the purpose was to standardize a narrow use case while also supporting all of the features that the stakeholders wanted.
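
    A small sketch of that flow with the widely used jsonwebtoken npm package - the claim names, key type, and RS256 choice are only examples:

    import jwt from "jsonwebtoken";
    import { generateKeyPairSync } from "node:crypto";

    const { publicKey, privateKey } = generateKeyPairSync("rsa", {
      modulusLength: 2048,
      publicKeyEncoding: { type: "spki", format: "pem" },
      privateKeyEncoding: { type: "pkcs8", format: "pem" },
    });

    // Issue a token whose payload is a set of claims about the user.
    const token = jwt.sign({ sub: "user-123", scope: "posts:write" }, privateKey, {
      algorithm: "RS256",
      expiresIn: "15m",
    });

    // The client sends it as `Authorization: Bearer <token>`. The server verifies
    // the signature, then decides whether these claims authorize the request.
    const claims = jwt.verify(token, publicKey, { algorithms: ["RS256"] });
    console.log(claims);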

    Another commenter suggested Diffie-Hellman key exchange which I think is not a bad idea as a third alternative if you want to establish sessions. Diffie-Hellman is used in every https connection to establish a session key. In https the session key is used for symmetric encryption of all subsequent traffic over that connection. But the session key doesn’t have to be an encryption key - you could use the key exchange to establish a session password. You could use that temporary password to authenticate all requests in that session. I do know of an intro video for Diffie-Hellman: https://youtu.be/Ex_ObHVftDg

    The first two options I suggested require the server to have user public keys for each account. The Diffie-Hellman option also requires users to have the server’s public key available. An advantage is that Diffie-Hellman authenticates both parties to each other so users know they can trust the server. But if your server uses https you’ll get server authentication anyway during the connection key exchange. And the Diffie-Hellman session password needs an encrypted connection to be secure. The JWT option would probably also need an encrypted connection.
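
    To make the session-password idea concrete, here is a toy sketch with both sides in one process, using Node’s ECDH; treat it as an illustration of the exchange, not a vetted protocol:

    import { createECDH, createHmac } from "node:crypto";

    // Each side generates a key pair and exchanges only the public halves.
    const client = createECDH("prime256v1");
    const server = createECDH("prime256v1");
    const clientPub = client.generateKeys();
    const serverPub = server.generateKeys();

    // Both sides arrive at the same shared secret without ever sending it.
    const clientSecret = client.computeSecret(serverPub);
    const serverSecret = server.computeSecret(clientPub);
    console.log(clientSecret.equals(serverSecret)); // true

    // Use the shared secret as a temporary session password, e.g. to HMAC each
    // request for the rest of the session (over an encrypted connection).
    const requestTag = createHmac("sha256", clientSecret).update("request body").digest("hex");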


  • hallettj@leminal.space to Linux@lemmy.ml · How do you backup?
    3 months ago

    My conclusion after researching this a while ago is that the good options are Borg and Restic. Both give you incremental backups with cheap point-in-time snapshots. They are quite similar to each other, and I don’t know of a compelling reason to pick one over the other.



  • hallettj@leminal.space to Linux@lemmy.ml · SWAY desktop
    3 months ago

    Are you using swayidle? It’s supposed to automatically keep the screen on when there is full-screen video playing. It’s the same in Gnome: you generally don’t need caffeine if a full-screen video is going.

    How are you playing videos? Maybe the player doesn’t correctly implement the idle inhibit protocol. Or if you’re using sway bindings to make the window fullscreen instead of using the app’s own fullscreen mode then maybe the player doesn’t know it’s fullscreen, and doesn’t set up the idle inhibit.

    If you do want manual idle inhibit control: if you use Waybar, it has an idle inhibitor module that mimics caffeine. If you don’t use Waybar there is a little Python script you can run, and you kill it when you want to stop inhibiting idle. Actually, wib looks like a better option.