  • Setting aside the cryptographic merits (and concerns) of designing your own encryption, can you explain how a URL redirector requiring a key would provide plausible deniability?

    The very fact that a key is required – and that there’s an option for adding decoy targets – means that any adversary could guess with reasonable certainty that the sender or recipient of such an obfuscated link does in fact have something to hide.

    And this isn’t like encrypted messaging apps, where the payload has to be captured and brute-forced offline later. Rather, an adversary would simply start sniffing the recipient’s network immediately after seeing the obfuscated link pass by in plain text. Their traffic logs would show the subsequent connection to the real link, and even if that’s protected with HTTPS – perhaps https://ddosecrets.com/ – the game is up, because the adversary can correctly deduce the destination from only the IP address, without breaking TLS/SSL.
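    As a quick illustration of how little work that deduction takes, here’s a minimal Python sketch (the hostname is just the example from above):

    ```python
    import socket

    host = "ddosecrets.com"  # the example destination from above

    # Forward resolution gives the same IP the adversary sees in the traffic logs.
    ip = socket.gethostbyname(host)
    print(f"{host} -> {ip}")

    # Because that mapping is public, anyone holding the logged IP can repeat the
    # lookup (or consult passive-DNS data) and recover the destination, with no
    # need to break TLS at all.
    ```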

    This is almost akin to why encrypted email doesn’t substantially protect the sender: all it takes is one person doing a non-encrypted reply-all and the entire email thread is sent in plain text. Use PGP or GPG to encrypt attachments to email if you must, or just use Signal, which Just Works ™ for messaging. We need not reinvent the wheel when it’s already been built. But for learning, that’s fine. Just don’t use it in production or ask others to trust it.


  • litchralee@sh.itjust.works to Selfhosted@lemmy.world · Wifi Portal
    But how do they connect to your network in order to access this web app? If the WiFi network credentials are needed to access the network that hosts the QR code for the network credentials, this sounds like a Catch-22.

    Also, is a QR code useful if the web app is opened on the very phone needing the credentials? Perhaps other phones are different, but my smartphone is unable to scan a QR code shown on its own display.
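    For reference, such a web app is presumably just encoding the standard WIFI: payload into an image; a minimal sketch with the Python qrcode library and made-up credentials:

    ```python
    import qrcode  # pip install "qrcode[pil]"

    ssid = "MyHomeNet"    # hypothetical network name
    password = "hunter2"  # hypothetical passphrase

    # Standard WiFi-config payload that phone camera apps understand.
    payload = f"WIFI:T:WPA;S:{ssid};P:{password};;"
    qrcode.make(payload).save("wifi-qr.png")
    ```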



  • Before my actual comment, I just want to humorously remark about the group which found and documented this vulnerability, Legit Security. With a name like that, I would inadvertently hang up the phone if I got a call from them haha:

    "Hi! This is your SBOM vendor calling. We’re Legit.

    Me: [hangs up, thinking it’s a scam]

    Anyway…

    In a lot of ways, this is the classic “ignore all prior instructions” type of exploit, but with more steps, and it’s harder to scrub for. Which makes it so troubling that GitLab’s AI isn’t doing anything akin to data separation between taking instructions and referencing other data sources. What Legit Security revealed really shouldn’t have been a surprise to GitLab’s developers.

    IMO, this class of exploit really shouldn’t exist, in the same way that SQL injection attacks shouldn’t be happening in 2025 due to a lack of parameterized queries. Am I to believe that AI developers are not developing a cohesive list of best practices, to avoid silly exploits? [rhetorical question]
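    For comparison, the SQL-side fix has been boring and well-understood for decades; here’s a minimal sketch with Python’s sqlite3, where the query text and the untrusted value are never concatenated:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    untrusted = "Robert'); DROP TABLE users;--"  # classic hostile input

    # Parameterized: the driver treats the value strictly as data, never as SQL syntax.
    conn.execute("INSERT INTO users (name) VALUES (?)", (untrusted,))

    # The vulnerable pattern is string concatenation, i.e. mixing instructions and data:
    #   conn.execute("INSERT INTO users (name) VALUES ('" + untrusted + "')")
    ```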


  • Typically, business-oriented vendors will list the hardware that they’ve thoroughly tested and will warranty for operation with their product. The lack of testing larger disk sizes does not necessarily mean anything larger than 1 TB is locked out or technically infeasible. It just means the vendor won’t offer to help if it doesn’t work.

    That said, in the enterprise storage space where disks are densely packed into disk shelves with monstrous SAS or NVMeoF configurations, vendor-specific drives are not unheard of. But if you possessed hardware where that’s even remotely a possibility, it would be readily apparent.

    To be clear, is this a built-in HBA on the mobo that you’re using, or a separate HBA over PCIe that you already have? If the latter, I can’t see how the mobo can dictate what the HBA supports. And if it’s in IT mode, then the OS is mostly in control of addressing the drive.

    The short answer is: you’ll have to try it and find out. And when you do, let us know what you find!
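    And if it helps with the trying-it-out step, here’s a small sketch (assuming a Linux host with lsblk available) that reports the capacity the OS actually sees for each disk:

    ```python
    import subprocess

    # -d: whole disks only, -b: sizes in bytes, -n: no header row
    out = subprocess.run(
        ["lsblk", "-dbn", "-o", "NAME,SIZE,MODEL"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        parts = line.split(None, 2)
        name, size = parts[0], int(parts[1])
        model = parts[2].strip() if len(parts) > 2 else ""
        print(f"{name}: {size / 1e12:.2f} TB  {model}")
    ```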


  • Congrats on the acquisition!

    DL380 G9

    Does this machine have its iLO license? If so, you’re in for a treat, if you’ve never used IPMI or similar out-of-band server management. Starting as a glorified KVM, it adds full power control (power on/off, soft reset, hard reset), either a separate or shared Ethernet connection, virtual CD and USB, SNMP reporting, and other whiz-bang features. Used correctly, you might never have to physically touch the machine after installation, except for parts replacement.
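    As a sketch of that no-touch workflow, assuming the iLO’s IPMI-over-LAN interface is enabled and ipmitool is installed on some management box (the hostname and credentials below are placeholders):

    ```python
    import subprocess

    ILO_HOST = "ilo.example.lan"          # placeholder iLO address
    USER, PASSWORD = "admin", "changeme"  # placeholder credentials

    def ipmi(*args: str) -> str:
        """Run an ipmitool command against the server's BMC over the network."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", ILO_HOST, "-U", USER, "-P", PASSWORD, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    print(ipmi("chassis", "power", "status"))   # query current power state
    # ipmi("chassis", "power", "cycle")         # hard reset without touching the box
    ```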

    What is your go-to place to source drive caddies or additional bays if needed?

    When my Dell m1000e was missing two caddies, I thought about buying a few spares on eBay. But ultimately, I just 3d printed a few and that worked fine.

    Finally, server racks are absurdly expensive of course. Any suggestions on DIY’s for a rack would be appreciated.

    I built my rack using rails from Penn-Elcom, as I had a very narrow space into which I wanted to fit my machines. Building an open-frame 4-post rack is almost like putting a Lego set together, but you will have to take care to make sure it doesn’t become a parallelogram. That is, don’t impart a sideways load.

    Above all, resist the urge to get by with a two-post rack. This will almost certainly end in misery, considering that enterprise servers are not lightweight.


  • A lot of my response was already rendered further down the thread. So I’ll only comment on this part:

    The objective is not to make the most community friendly licence, it is to pay the people who do the actual work.

    If this is the singular or main objective that Futo has, then the basis of OP’s post is entirely dead. The title of the post is very clearly “FUTO License, an alternative to Open Source”. But if we take your submission as fact, then there is no comparison whatsoever.

    Open Source – whether using OSI’s definition or including FSF’s – has almost never focused on the financial aspect, for better or worse. It’s why commercial entities like Canonical and Red Hat are so rare, because software engineers prefer spending their free time working on great things rather than doing admin.

    Futo sounds like they want to be a commercial entity like Red Hat but without the limitations that Open Source or Free Software would impose on them. And they’re welcome to do that, but that endeavor cannot honestly be called comparable to the mostly community-driven projects like BSD, GNU, and Linux, or commercial ventures like RHEL and whatever cloud-thingy that Canonical is selling now.

    If the goal is to pay for professional talent, with revenue from B2B sales, and only non-commercial users get a freebie, then that’s just a shareware company with more steps. Futo trying to dress themselves up like Red Hat remains as disingenuous as when they tried to misinform open-source folks about what open-source is.

    I’ll be frank: my interest in software licensing is about finding licenses that strike a sensible balance. It’s about distributing rights and obligations that are equitable and sustainable, while perpetuating software uptake and upkeep. It’s a tough cookie. But I think the Source First license alienates too many potential audiences and its financial model falls apart under any game theory analysis. So I’m not keen on looking down this avenue anymore.


  • I don’t think that’s the main objective of the FUTO license

    That’s fair. I stated my assumption because perhaps they have different objectives. That said, history is quite clear: the greatest success of open-source software development is that it pools efforts from anyone – truly anyone – that is willing and able to put in the time, be it individuals or workers hired by a corporation.

    When a license is heralded as an alternative to open-source – as the title of this post does – I think said license needs to be evaluated against the historical success story that open-source projects like Linux, BSD, Blender, etc have demonstrated. Not having the quality of attracting community contributions is a negative, but all licenses have some sort of tradeoff and ultimately that’s what people evaluate when picking a license.

    I believe the main objective is to incentivize developers to create great software that respects individual users and fights back against the big tech oligarchy.

    This is a laudable goal, though I think the ACSL is more direct at doing the same. It too is a non-open source license, but IMO, I give them credit for being upfront about that, rather than the pointless muddying of the term “open source” that Futo attempted (and ultimately failed at).

    More dogmatically, I don’t see how elevating Futo Holdings Inc (or any other company that will manage software licensed under Source First v1.1) into a “benevolent dictator company for life” will fight against the tech oligarchy. It might act as a counter to FAANG specifically, but there’s no guarantee that Futo Holdings doesn’t end up joining their side anyway, or gets bought out by the oligopoly. Which would then put us all worse off in the end.

    I don’t quite see the issue here. Can you explain a little more? A third-party would just get a license to sell the software, not to develop it.

    Futo Holdings Inc, as the assigned owner of copyright over a software project, reserves the right to license their software however they choose. They can absolutely issue a license to allow a company to privately develop an in-house fork. In copyright speak, the Source First license being “non exclusive” means Futo Holdings can issue someone else a different license. History shows us examples, such as Microsoft’s non-exclusive license of DOS to IBM, which was quite handy since that allowed MS-DOS to be sold with non-IBM PC clones.

    And for an example of licensing that allows in-house edits and recompiling, see the source code license offered by AT&T Labs to various universities, among them UC Berkeley, which eventually developed BSD Unix.

    Isn’t this currently possible with Open Source™? Like the whole point of Open Source™ is that anyone can use the software for anything, right?

    Use, yes. Distribute? Absolutely not with GPL. If ICE wants to create an OS designed to optimally corral unlawfully-detained people in barbaric conditions, then they – just like you, me, the DPRK, or Facebook – can fork Linux and do that. But if ICE then wanted to distribute that CruelOS to another country’s border patrol or secret intelligence or to a private defense firm, they would be obliged by the GPL terms to also offer whatever source code they modified in the Linux kernel to produce CruelOS.

    GPL is about making sure the same rights perpetuate for all of time, for all future users, always. If Linus Torvalds turned evil today, the remaining kernel devs would just fork. Whereas Futo Holdings makes no guarantees, and they themselves can turn evil one day. This isn’t even a contrived example. See IBM/Hashicorp’s Terraform and the FOSS OpenTofu that spawned after they tried to change the license.

    Google may contribute something to Linux, but my company will never contribute anything. Seems like Google is ok with my company benefiting from their work.

    If Google contributed to Linux, it would be GPL licensed. Google knows that this means the playing field will always be level: no one can build and distribute that code in a way that Google couldn’t later benefit from.

    Think of it like this: Google buys everyone in the tavern a beer. Everyone’s happy. But part of the deal is that if anyone else buys for themselves a beer, they have to buy for everyone as well. Google is fine with this, because it means that Microsoft wearing the dark suit will also have to pony up if he wants another drink. As will Netflix in the skinny jeans sitting at the booth. As would Ericsson, the Swede dancing jovially to a tune.

    With the Source First license, Google has no guarantees that Microsoft won’t use his manly charisma to charm Futo Holdings into giving him a better deal than what Google got. Google is bitter at that prospect, and decides not to buy everyone a beer after all. You, me, and Bob who fell asleep in the corner now need to pay for our own beers, but the bartender won’t give us a group discount anymore. We are now all worse off.

    In closing, I had this to say in an earlier post:

    Using the tools of the capitalist (copyright and licenses) to wage a battle against a corporation is neither an even fight, nor is it even winnable. Instead, strong communities build up their skills and ties to one another to fight in meaningful ways.

    If you’re not building (software) communities, the struggle will not succeed.


    Community audits sound great on paper, but it’s something which the FOSS licenses (eg GPL, MIT) also provide. As a practical matter though, auditing has a two-fold objective: 1) identify risks so they can be quantified, and 2) mitigate them. For non-commercial users in the community, an audit is high-effort with low return. And further, this license disincentivizes mitigation even if the audit does turn up something, because of having to sign the copyright away just to submit a bug fix.

    For commercial users, auditing is more palatable, being part and parcel of risk management. And these commercial operations have the budget to do it, but then this license means the best way to keep improvements out of their nemesis’s hands is to maintain an internal fork that never returns code to the public repo. So commercial users will have to pay more to obtain that sort of license.

    All this seems harder than just using MIT code (or even GPL), if such is available. And that’s exactly why I can’t see myself using source-available software in a personal or professional capacity, when there’s any other choice available. It seems worse off for everyone except the owner of the public repo. The license stinks of vendor lock-in, and even if I’m not the one who will pay the rent, I dogmatically will not support rent-seeking like this.


    To be abundantly clear, “free software” (aka free as in speech) and “open source” are understood as two different categories, and when software falls into both, it is called Free and Open Source Software (FOSS).

    Wikipedia has this to say:

    FOSS stands for “Free and Open Source Software”. There is no one universally agreed-upon definition of FOSS software and various groups maintain approved lists of licenses. The Open Source Initiative (OSI) is one such organization keeping a list of open-source licenses.[1] The Free Software Foundation (FSF) maintains a list of what it considers free.[2] FSF’s free software and OSI’s open-source licenses together are called FOSS licenses. There are licenses accepted by the OSI which are not free as per the Free Software Definition. The Open Source Definition allows for further restrictions like price, type of contribution and origin of the contribution


  • I’m not sure how this license would foster community contributions to the codebase, assuming that was an objective. When I say “contributor” I mean both individuals as well as corporations, in the same way that both might currently contribute to the Linux kernel (GPL) today.

    As written, this license grants the user a non-exclusive license for non-commercial use. But that implies that for commercial users – like a corporation – they’ll have to negotiate a separate license, since Futo Holdings Inc would retain the copyright. So if a corporation (or nation state entity) throws enough money at Futo Holdings Inc, they can buy their way into any sort of license terms they want, and the normie user can’t complain.

    This is kinda like the principal-agent problem, where the userbase and individual developers now have to trust that Futo Holdings won’t do something reprehensible with the copyrights, be it licensing to certain hostile countries or whatever.

    Whereas in the GPL space, individual developers still own their copyright but license their code out under a compatible license. So even Linus Torvalds cannot unilaterally relicense the Linux codebase, because he would need to seek out every copyright owner for every line of code that exists, and some of those people are already dead.

    I’m personally not a fan at all of forcing individual contributors from the community into signing over copyright (or major rights thereto) or other stipulations as a condition for making the codebase better, with the exception of an indemnity that the code isn’t stolen or a work-product for hire. I used GPL in the comparison above, but the permissive licenses like MIT also have similar qualities.

    EDIT

    Thinking about it more, would corporations even want to contribute? Imagine CorpA decides to add code, having already paid for an existing commercial license from Futo Holdings. But then CorpB – who is CorpA’s arch nemesis – pays Futo Holdings an absurd amount of money and in return gets a commercial license that’s equivalent to the WTFPL. That means CorpA’s contributions are available for CorpB to use, but CorpB has zero obligation to ever contribute a line of code which CorpA could later benefit from. It becomes a battle of money, and Futo Holdings sits as the kingmaker. GPL abates this partially, if CorpA is both using and distributing code. But the Source First License v1.1 has zero mitigation for this, apart from “trust me bro”.


  • I agree with this comment, and would suggest going with the first solution (NAT loopback, aka NAT hairpin) rather than split-horizon DNS. I say this even though I have a strong dislike of NAT (and would prefer to see networks using flat IPv6 addresses, but that’s a different topic). It should also be fairly quick to configure the hairpin on your router.

    Specifically, problems arise with split-horizon DNS because the same hostname might resolve to two different results, depending on which DNS nameserver is used. This is distinct from some corporate-esque DNS nameservers that refuse to answer external requests but provide an answer to internal queries. Having no “single source of truth” (SSOT) for what a hostname should resolve to will inevitably make future debugging harder. And that’s on top of debugging NAT issues.
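    To make that debugging pain concrete, here’s a small sketch using the dnspython library that asks an internal and an external resolver (both addresses below are placeholders) for the same name and flags any divergence:

    ```python
    import dns.resolver  # pip install dnspython

    def lookup(nameserver: str, hostname: str) -> set[str]:
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [nameserver]
        return {rr.to_text() for rr in r.resolve(hostname, "A")}

    hostname = "media.example.com"              # placeholder internal service name
    internal = lookup("192.168.1.1", hostname)  # the LAN's resolver
    external = lookup("1.1.1.1", hostname)      # a public resolver

    if internal != external:
        # The split-horizon situation: no single source of truth, and the answer
        # you get depends entirely on where you asked from.
        print(f"divergent answers: internal={internal}, external={external}")
    ```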

    Plus, DNS isn’t a security feature unto itself: successful resolution of internal hostnames shouldn’t increase security exposure, since a competent firewall would block access anyway. Some might suggest that DNS queries can reveal internal addresses to an attacker, but that’s the same faulty argument that suggests ICMP pings should be blocked; they shouldn’t be.

    To be clear, ad-blocking DNS servers don’t suffer from the ails of split-horizon described above, because they’re intentionally declining to give a DNS response for ad-hosting hostnames, rather than giving a different response. But even if they did, one could argue the point of ad-blocking is to block adware, so we don’t really care if SSOT is diminished for those hostnames.


  • which means DNS entries in a domain, and access from the internet

    The latter is not a requirement at all. Plenty of people have publicly-issued TLS certs for domain-named services that aren’t exposed to the public internet, or aren’t using HTTP(S). If using LetsEncrypt, the DNS-01 challenge method would suffice, and can even issue a wildcard certificate for subdomains, so additional certificate issuance is not required.

    After acquiring a domain, it can be pointed at one of many free nameservers that provide an API, which an ACME script can update for automatic renewal of the LetsEncrypt certificate using DNS-01. dns.he.net is one such example.
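    For the curious, DNS-01 boils down to the CA checking a TXT record at _acme-challenge.<domain>; a small sketch (dnspython again, with a placeholder domain) of what the ACME script’s nameserver update has to make visible:

    ```python
    import dns.resolver  # pip install dnspython

    domain = "example.com"  # placeholder; substitute your own domain

    # During a DNS-01 challenge, the ACME client publishes a validation token at
    # this name via the nameserver's API, and LetsEncrypt checks for it before
    # issuing (or renewing) the certificate.
    for rr in dns.resolver.resolve(f"_acme-challenge.{domain}", "TXT"):
        print(rr.to_text())
    ```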

    OP has been given a variety of options, each of which comes with its own tradeoffs. But public access to Jellyfin just to get a public cert is not a tradeoff that OP needs to make.


    Not “insecure” in the sense that they’re shoddy with their encryption, no. But being free could mean their incentives are not necessarily aligned with those of the free users.

    In security speak, the CIA triad stands for Confidentiality, Integrity, and Availability. I’m not going to unduly impugn Proton VPN’s credentials on data confidentiality and data integrity, but availability can be a legit security concern.

    For example, if push comes to shove and Proton VPN is hit with a DDoS attack, would free tier users be the first to be disconnected to free up capacity? Alternatively, suppose the price for IP transit shoots through the roof due to weird global economics and ProtonVPN has to throttle the free tier to 10 Mbps. All VPN operators share these possibilities, but however well-meaning Proton VPN and the non-profit behind them are, economic factors can force changes that aren’t great for the free users.

    Now, the obvious solution at such a time would be to switch to being a paid customer. And that might be fine for lots of customers, if that ever comes to pass. But Murphy’s Law makes it a habit that this scenario would play out when users are least able to prepare for it, possibly leading to some amount of unavailability.

    So yes, a holistic analysis of failure points is precisely what proper security calls for. Proton VPN free tier may very well be inappropriate. But whether it rises to a serious concern or just warrants an “FYI”, that will vary based on individual circumstances.


  • Don’t. OP already said in the previous post that they only need Jellyfin access within their home. The Principle of Least Privilege tilts in favor of keeping Jellyfin off the public Internet. Even if Jellyfin were flawless – and no program is – the only benefit that accrues to OP is that the free tier of ProtonVPN can access Jellyfin.

    Opening a large attack surface for such a modest benefit is letting the tail wag the dog. It’s adding a kludge to work around a different kludge, the latter being ProtonVPN’s very weird paid tier.


  • I previously proffered some information in the first thread.

    But there’s something I wish to clarify about self-signed certificates, for the benefit of everyone. Irrespective of whichever certificate store an app uses – either its own or the one maintained by the OS – the CA/Browser Forum, which maintains the standards for public certificates, prohibits issuance of TLS certificates for reserved IPv4 or IPv6 addresses. See Section 4.2.2.

    This is because those addresses will point to different machines on different networks, whereas a certificate for a global-scope IP address is fine because it should lead to the same destination from anywhere. If certificate authorities won’t issue certs for private IP addresses, there’s a good chance that apps won’t tolerate such certs either. Nor should they, for precisely the reason given above.
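    For anyone curious which addresses fall under that prohibition, Python’s standard ipaddress module gives a quick check:

    ```python
    import ipaddress

    for addr in ["192.168.1.10", "10.0.0.5", "fd00::1", "8.8.8.8", "2606:4700::1111"]:
        ip = ipaddress.ip_address(addr)
        # is_private covers RFC 1918, ULA, and other reserved ranges that public
        # certificate authorities will not issue for.
        print(f"{addr}: {'reserved/private' if ip.is_private else 'globally routable'}")
    ```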

    A proper self-signed cert – either for a domain name or a global-scope IP address – does not create any MITM issues as long as the certificate was manually confirmed the first time and added to the trust store, either in-app or in the OS. Thereafter, only a bona fide MITM attack would raise an alarm, the same as if a MITM attacker tries to impersonate any other domain name. SSH is the most similar, where trust-on-first-connection is the norm, not the outlier.

    There are safe ways to use self-signed certificates. People should not discard that option so wantonly.
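    For the trust-on-first-use piece specifically, here’s a minimal sketch (Python stdlib only; the hostname and port are placeholders) of pinning a server’s certificate fingerprint after confirming it out-of-band:

    ```python
    import hashlib
    import ssl

    HOST, PORT = "nas.example.com", 8443  # placeholder self-hosted service

    # First connection: fetch the presented certificate without validating it.
    pem = ssl.get_server_certificate((HOST, PORT))
    fingerprint = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    # Confirm this fingerprint out-of-band once, then store it. On later
    # connections, recompute and compare; a mismatch is the alarm bell, exactly
    # like SSH's known_hosts behaviour.
    print(f"SHA-256: {fingerprint}")
    ```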


  • Physical wire tapping would be mostly mitigated by setting every port on the switch to be a physical vlan

    Can you clarify on this point? I’m not sure what a “physical VLAN” would be. Is that like only handling tagged traffic?

    I’m otherwise in total agreement that the threat model is certainly not typical. But I can imagine a scenario like a college dorm where the L2 network is owned by a university, and thus considered “hostile” to OP somehow. OP presented their requirements, so good advice has to at least try to come up with solutions within those parameters.


  • I had a small typo where “untrusted” was written as “I trusted”. That said, I think we’re suggesting different strategies to address OP’s quandary, and either (or both!) would be valid.

    My suggestion was for encrypted L3 tunneling between end-devices which are trusted, so that even an untrustworthy L2 network would present no issue. With technologies like WireGuard, this isn’t too hard to do for mobile phone clients, and it’s well supported for Linux clients.

    If I understand your suggestion, it is to improve the LAN so that it can be trusted, by way of segmentation into VLANs which separate the trusted devices from the rest. The problem I see with this is that per-port VLANs alone do not address the possibility of physical wire-tapping, which I presumed was why OP does not trust their own LAN. Perhaps they’re running cable through a space shared with other tenants, or something like that. VLANs help, but MACsec encryption on the wire, paired with 802.1X device certificates for authentication, is the gold standard for L2 security.

    But seeing as that’s primarily the domain of enterprise switches, the L3 solution in software using WireGuard or other tunneling technologies seems more reasonable. That said, the principle of Defense In Depth means both should be considered.
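    If going the WireGuard route, the per-device setup is mostly just generating keypairs and exchanging the public halves; here’s a small sketch wrapping the wg CLI (assumed to be installed on each device):

    ```python
    import subprocess

    def wg_keypair() -> tuple[str, str]:
        """Generate a WireGuard private/public keypair using the wg CLI."""
        private = subprocess.run(
            ["wg", "genkey"], capture_output=True, text=True, check=True
        ).stdout.strip()
        public = subprocess.run(
            ["wg", "pubkey"], input=private, capture_output=True, text=True, check=True
        ).stdout.strip()
        return private, public

    private_key, public_key = wg_keypair()
    # The private key goes in this device's [Interface] section; the public key
    # is what you hand to the peer for its [Peer] section.
    print(public_key)
    ```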