• 3 Posts
  • 307 Comments
Joined 4 years ago
Cake day: January 21st, 2021




  • The concern is that it would be nice if the UNIX users and LDAP were automatically kept in sync and managed from a version-controlled source. I guess the answer is just to build up a static LDAP database from my existing configs. It would be nice to have one authoritative system on the server, but I guess as long as both are built from one source of truth it shouldn’t be an issue.
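    For example, each UNIX user could be rendered into an LDIF entry roughly like this (a sketch; the DN layout and attribute values are hypothetical):

      # Hypothetical entry generated from existing UNIX user configs
      dn: uid=alice,ou=users,dc=example,dc=com
      objectClass: inetOrgPerson
      objectClass: posixAccount
      uid: alice
      cn: Alice Example
      sn: Example
      uidNumber: 1000
      gidNumber: 1000
      homeDirectory: /home/alice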


  • Yes, LDAP is a general tool. But many of the applications I am interested in support using it for user information, and that is what I want to use it for. I’m not really interested in storing other data.

    I think you are sort of missing the goal of the question. I have a bunch of self-hosted services like Jellyfin, qBittorrent, PhotoPrism, Metabase … I want to avoid having to configure users in each one individually. I am considering LDAP because it is supported by many of these services. I’m not concerned about synchronizing UNIX users, I already have that solved. (If I need to move those to LDAP as well that can be considered, but isn’t a goal).


  • I do use a reverse proxy, but for various reasons you can’t just put some apps behind it. For example, Jellyfin if you want to play on a Chromecast or similar, or PhotoPrism if you want to use sharing links. Unfortunately these systems are designed around their built-in auth and you can’t just slap a proxy in front.

    I do use nginx with basic auth in front of services where I can, roughly as sketched below. I trust nginx much more than 10 different services of varying quality. But unfortunately not all services play well with it.
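    For reference, that setup looks roughly like this (a sketch; the hostname, port, and file paths are placeholders):

      server {
          listen 443 ssl;
          server_name app.example.com;  # placeholder hostname
          # ssl_certificate/ssl_certificate_key omitted for brevity

          location / {
              # Password file created with: htpasswd -c /etc/nginx/htpasswd alice
              auth_basic "Restricted";
              auth_basic_user_file /etc/nginx/htpasswd;

              # Only requests that pass auth reach the backing service
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
          }
      }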


  • How are you configuring this? I checked for Jellyfin and there are third-party plugins which don’t look too mature, but none of them seem to work with the apps. qBittorrent doesn’t support much (actually, I may be able to put reverse-proxy auth in front… I’ll look into that) and Metabase locks SSO behind a premium subscription.

    IDK why, but it does seem that LDAP is much more widely supported. Or am I missing some method to make it work?







  • Yeah, I can’t believe how hard targeting other consoles is for basically no reason. I love this Godot page that accurately showcases the difference:

    https://docs.godotengine.org/en/stable/tutorials/platform/consoles.html

    Currently, the only console Godot officially supports is Steam Deck (through the official Linux export templates).

    The reasons other consoles are not officially supported are:

    • To develop for consoles, one must be licensed as a company. As an open source project, Godot has no legal structure to provide console ports.
    • Console SDKs are secret and covered by non-disclosure agreements. Even if we could get access to them, we could not publish the platform-specific code under an open source license.

    Who at these console companies thinks that making it hard to develop software for them is beneficial? It’s not like the SDK APIs are actually technologically interesting in any way (maybe some early consoles were; the last “interesting” hardware is probably the PS2). Even if the APIs were open source (the signatures, not the implementation), every console has DRM to prevent running unsigned games, so it wouldn’t allow people to distribute games outside of the console maker’s control (other than on modded systems).

    So to develop for the Steam Deck:

    1. Click export.
    2. Test a bit.

    To develop for Switch (or any other locked-down console):

    1. Select a third party who maintains a Godot port.
    2. Negotiate a contract.
      • If this falls through, go back to step 1.
    3. Integrate your code with their port.
    4. Click export.
    5. Test a bit.

    What it could be (after you register with Nintendo to get access to the SDK download):

    1. Download the SDK to whatever location Godot expects it in.
    2. Click export.
    3. Test a bit.

    All they need to do is grant an open-source license on the API headers. The rest is already done for them, and magically they have more games on their platform.


  • HTTP/1.1 401 Unauthorized
    {
      "error": {
        "status": "UNAUTHORIZED",
        "message": "Unauthorized access"
      }
    }

    I would separate the status from the HTTP status.

    1. The HTTP status is great for reasonable default behaviours from clients.
    2. The application status can be used for adding more specific errors (is the access token expired? is your account blocked? is your organization blocked?).

    Even if you don’t need the application status now, it is nice to have the field in place so you can add more specific codes in the future.

    You can use a string or an integer as the status code; a string is probably a bit more convenient for readability.

    The message should be something that could be shown directly to the user, but it is mostly there to help developers.
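    For example, a more specific error might look like this (a sketch; the TOKEN_EXPIRED code is a hypothetical application status):

      HTTP/1.1 401 Unauthorized
      {
        "error": {
          "status": "TOKEN_EXPIRED",
          "message": "Your access token has expired. Please sign in again."
        }
      }

    A client that only understands HTTP still sees a standard 401, while a smarter client can branch on the application status to, say, refresh the token automatically.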


  • Vista sucked so bad. I got a nice new laptop and it was constant pain. One of the real breaking points was that it would refuse to let me modify or delete some files even as superuser. If I recall correctly they weren’t even system files, maybe a separate partition or something.

    I tried installing XP but there was some sort of driver issue with my CD drive. It would start installing fine, but then once it tried to reboot off of the HDD to finish the installation it couldn’t find the installation CD to finish copying things, so the install just crashed half-way done.

    I installed Ubuntu on a partition, dual booted for a while. After a few months I realized that I never even used the Windows partition anymore so I wiped it.


  • Likely what is happening is that the game is probing audio devices and triggering the mic on your headphones to get picked up. This switches them into the “headset” profile, which has awful audio quality. I don’t know why the UI isn’t showing that; make sure you are checking while the game is running and the audio sounds bad.

    If you want your headphone mic to work, there is not much choice: there isn’t a standard Bluetooth profile with both good audio and a mic. If you never want to use your headphone mic, you can probably configure some advanced settings in your audio manager (probably PulseAudio or PipeWire) to pin the high-quality profile.
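    For example, with PulseAudio (or PipeWire’s PulseAudio compatibility layer) you can force the card onto the output-only A2DP profile; the card name below is a placeholder you would get from the first command:

      # Find the name of your headphones (something like bluez_card.XX_XX_XX_XX_XX_XX)
      pactl list cards short

      # Pin them to the high-quality, output-only A2DP profile (disables the mic).
      # The profile may be named a2dp-sink or a2dp_sink depending on your setup;
      # `pactl list cards` shows the available profiles.
      pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX a2dp-sink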


  • everyone knows that writing assembly is a fool’s errand

    I think this is misrepresenting the advice. I would argue the following:

    1. Writing your whole program in assembly typically won’t result in faster code than C or Rust. This is because well-written, readable, maintainable assembly will usually be slower than what a compiler produces. Even if you try to be fairly clever the compiler will almost always do a better job unless you are taking the time to carefully profile every line that you write.
    2. The compiler will evolve over time; your hand-written assembly will not. So even if your assembly is faster initially, you will need to revisit it as hardware evolves.
    3. Obviously you will need different assembly for every instruction set.

    I don’t think anyone ever said “don’t try to optimize small sections of code; you won’t beat the compiler”. Of course you can beat the compiler. But it requires significant upfront and ongoing maintenance cost to keep beating it over time. That cost isn’t worth it for 99.9% of code, but applied judiciously it can buy improvements where they matter.

    The conclusion should be: start by writing everything in a high-level language. Then optimize your algorithms and eliminate performance bugs. Then, once you have picked the low-hanging fruit, consider spending the time to profile and rewrite your hottest code in assembly.
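    As a sketch of what “judicious” looks like (illustrative only; whether this actually wins depends on your profile, and it assumes a GCC/Clang toolchain and an x86-64 CPU with POPCNT):

      #include <stdint.h>

      /* Keep only the hot kernel in assembly; everything else stays portable. */
      uint64_t popcount_hot(uint64_t x) {
      #if defined(__x86_64__)
          uint64_t r;
          /* Hand-picked instruction for this one ISA (assumes POPCNT support). */
          __asm__("popcnt %1, %0" : "=r"(r) : "r"(x));
          return r;
      #else
          /* Portable fallback: let the compiler do its job everywhere else. */
          return (uint64_t)__builtin_popcountll(x);
      #endif
      }

    Note how point 3 shows up immediately: the assembly path has to be written (and maintained) once per instruction set, while the fallback keeps working everywhere.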


  • This is my dream. However, I think my target market is smaller and less willing to pay (personal rather than business). But maintenance is low effort and I want the product for myself, so even if it doesn’t make much (or anything) I think I will be happy to run it forever.

    The ultimate dream would be to make enough to be able to employ someone else part time, so that there could be business continuity if I wasn’t able to run it anymore.


  • kevincox@lemmy.ml to Selfhosted@lemmy.world · Security and docker

    There is definitely isolation. In theory (if containers worked perfectly as intended) a container can’t see any processes from the host, sees a different filesystem, possibly a different network interface, and so on for basically everything else. Some things are shared, like CPU, memory, and disk space, but these can also be limited by the host, as sketched below.
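    For example, Docker exposes those host-side limits as flags (a sketch; the image and the specific limits are arbitrary):

      # Cap the container's memory, CPU time and process count from the host
      docker run --memory=512m --cpus=1.5 --pids-limit=100 nginx:alpine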

    But yes, in practice the Linux kernel is wildly complex and these interfaces don’t work quite as well as intended. You get bugs in permission checks and even memory corruption and code execution vulnerabilities. This results in unintended ways for code to break out of containers.

    So in theory the isolation is quite strong, but in practice you shouldn’t rely on it for security critical isolation.


  • kevincox@lemmy.ml to Selfhosted@lemmy.world · Security and docker

    where you have decent trust in the software you’re running.

    I generally say that containers and traditional UNIX users are good-enough isolation for “mostly trusted” software. Basically, I know that the software isn’t going to actively try to escalate its privileges, but it may contain bugs that would cause problems without any isolation.

    Of course it always depends on your risk. If you are handling sensitive user data and run lots of different services on the same host, you may start to worry about remote code execution vulnerabilities and want stronger isolation, so that an RCE in any one service doesn’t allow an attacker to access the data processed by other services on the host.