

100%. Exactly the same.


A 404 is a web server response, which suggests a web server is up; something is answering and returning that 404.
The web server just can’t find your page, document, or resource. So one of your web servers (either the reverse proxy or the actual backend) is pointing at the wrong spot for what to serve.
You haven’t tried launching the wrong server on the same port, right? Or misconfigured your nginx proxy rules?
Isolate the issue. Ignore nginx and test just the web server on the destination directly: see whether it gives a 404. If it serves the right document, then it’s your nginx configuration. If it isn’t serving the document, nginx has nothing to serve either.
But either way, keep isolating the problem down to the smallest area, and focus on the configurations and files related to it.
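A rough sketch of that isolation step in Python, assuming nginx on port 80 proxying to a backend on port 8080 (both ports and the path are placeholders, substitute your own):

```python
# Hit the backend directly, then go through nginx, and compare status codes.
import urllib.error
import urllib.request

def status_of(url: str) -> int:
    # Return the HTTP status for a GET without raising on 4xx/5xx.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

print("backend:", status_of("http://127.0.0.1:8080/index.html"))  # assumed backend port
print("proxy:  ", status_of("http://127.0.0.1:80/index.html"))    # assumed nginx port
```

Backend 200 but proxy 404 points at nginx; backend 404 points at the backend’s own root/path config.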


I’m not an expert, but it sounds like if you finish a session of Valorant, the anti-cheat never unloads and continues to monitor memory and files.
Easy Anti-Cheat, though, according to some sources, only runs during gameplay.
Riot’s anti-cheat has a bad history, though. But both are essentially black boxes that send details, both hashes and samples, back to their owners so they can approve what’s on your computer. Opened a medical record? It’s probably been hashed and sent back.
Opened your employer’s accounting files while working from home? You’ve probably sent Riot a copy of the details.
Both can be updated. There’s no guarantee Riot won’t do something nasty against a handful of high-value targets. They know who you are from your payment details; they can identify you, update the module, and take anything they like. They have root.
Anti-cheat also has a history of being a tool for hackers: https://www.vice.com/en/article/hackers-are-using-anti-cheat-in-genshin-impact-to-ransom-victims/
There’s no upside for the user, mostly because they don’t work anyway.
Just compile your kernel from source with the anti-cheat flags and telemetry enabled.


What a great post. I haven’t used PCSX2 since I wrote a Yakuza guide, probably a decade or more ago. But it was a great read.


If DNS resolved, then it’s not blocked at the DNS level. You need to look at your network.
Bypass DNS and connect straight to the IP and port. What happens?
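Something like this is a quick way to test it, bypassing DNS entirely (the IP and port are placeholders; use the address DNS handed back):

```python
# Try a raw TCP connection to the resolved address, skipping name lookup.
import socket

ip, port = "203.0.113.10", 443  # assumed values, substitute your own
try:
    with socket.create_connection((ip, port), timeout=5):
        print("TCP connect OK: the path is open, so look above the network layer")
except OSError as err:
    print(f"TCP connect failed ({err}): something in the network path is blocking you")
```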


This won’t work: your WAN IP isn’t a public address, it’s on the ISP’s NAT network, and the IP you present to public services is shared across many customers. CG-NAT.
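A quick sanity check, assuming your router reports its WAN address (the example address is made up): CG-NAT uses the 100.64.0.0/10 range from RFC 6598.

```python
# Check whether a router-reported WAN address falls in the CG-NAT range.
import ipaddress

CGNAT = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared address space

wan_ip = ipaddress.ip_address("100.72.13.5")  # assumed: copy from your router's WAN status page
print("behind CG-NAT" if wan_ip in CGNAT else "not in the CG-NAT range")
```

Even outside that range, if the router’s WAN address differs from what a “what is my IP” site shows, you’re still behind some form of ISP NAT.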


I don’t know where you work, but don’t access your tailnet from a work device, and ideally not from their network either.
Speaking to the Roku: you could buy a cheap Raspberry Pi and a USB network adapter. One port to the network, the other to the Roku. The Pi can advertise a Tailscale subnet route covering the Roku, and the Roku probably needs no changes, since everything upstream, including the private Tailscale 100.x.y.z addresses, will be captured by your Raspberry Pi device-in-the-middle.
I’d guess that costs 40-ish dollars, one time.
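A rough provisioning sketch for the Pi, assuming the Roku-facing port sits on its own subnet (the subnet is a placeholder, and the advertised route still has to be approved in the Tailscale admin console):

```python
# Turn the Pi into a forwarding box that advertises the Roku-side subnet.
import subprocess

ROKU_SUBNET = "192.168.50.0/24"  # assumed subnet on the Roku-facing interface

# Let the Pi forward packets between its two network interfaces.
subprocess.run(["sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)

# Advertise the Roku-side subnet into the tailnet as a subnet route.
subprocess.run(["tailscale", "up", f"--advertise-routes={ROKU_SUBNET}"], check=True)
```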


Curious whether someone would then add how well the games run on the Steam Deck in comparison… though that’s not exactly legal, I suppose.


Don’t tell competitive gamers that. LoL, CS, Overwatch, CoD, whatever: each is a simple game loop for those who enjoy that loop.


Australia just hopes the countries that handle the waste from the uranium we sell don’t, you know, make nuclear weapons with it. They’re good allies, they wouldn’t do anything with that, right? They certainly would never enrich the original…


I’m trying to figure out what gap in the market you’re trying to fill, other than “for Steam fanboys, it would give us fans of Steam games that already exist in a native place… a non-native place!”
Correct me: what would go into it that isn’t already somewhere else, and who does that appeal to?
Or is this just a thought experiment?
What would you suggest they sell on their Android store that would make users keen enough to install a new store, and then what would they want there?
Steam already has a store app on Android; you just can’t play games from it there, because most games on Steam either already exist on the native Google Play store or aren’t compatible with mobile architectures like arm64. Most phones, unlike an ARM laptop, have no x86/amd64 emulator, and that’s what those games are compiled for by their developers.
So what’s left?


Enterprise applications are often developed by the most “quick, ship this feature” kind of developers in the world. Unless the client is paying for the development, a quick look at the SQL tables often shows unsalted passwords.
I’ve seen this in construction, medical, recruitment, and other industries.
Until cyber security law requires code auditing for how PII is handled and maintained, it’s mostly a “you’re fine until you get breached” approach. Even the ACSC (Australian Cyber Security Centre) has limited guidelines, practically worthless; at most they suggest MFA for web-facing services. Most cyber security insurers require something, but it’s also practically self-reported, with no proof. So when someone gets breached because everyone’s passwords were left in a table, largely unguarded, the world becomes a worse place and the list of usernames and passwords on haveibeenpwned grows.
Edit: if a client pays, and therefore has the leverage to demand code auditing and security auditing, SAML, and so on, then it’s a different story. But in the construction industry, say, I’ve seen the same garbage-tier software used at 12 different companies, warts and all. The developer is semi-local to Australia, ignoring the offshore developers…
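For contrast, a minimal sketch of the bare minimum those tables should be doing instead: a per-user random salt plus a slow KDF, standard library only (the iteration count is illustrative, not a recommendation):

```python
# Store (salt, digest) per user; never the raw or unsalted-hashed password.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```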


I’m far from an expert, sorry, but my experience has been so far so good (literally wizard-configured in Proxmox, set and forget), even through the loss of a single disk. Performance for VM disks was great.
I can’t see why regular files would be any different.
I have 3 disks, one in each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after.
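In Ceph terms, that replication rule boils down to two pool settings; a rough sketch driving them through the ceph CLI (the pool name and PG count are assumptions, and the Proxmox wizard does the equivalent for you):

```python
# Create a pool that keeps 2 replicas and stays writable with 1 replica left.
import subprocess

POOL = "vmstore"  # assumed pool name

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "pool", "create", POOL, "128")         # 128 placement groups (assumed)
ceph("osd", "pool", "set", POOL, "size", "2")      # 2 copies of every object
ceph("osd", "pool", "set", POOL, "min_size", "1")  # serve I/O with 1 copy remaining
```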
I’m not sure about seeing the file system while the hosts are all offline, but if you’ve got any one system with a valid copy online, you should be able to see it. I do. My emphasis, though, is generally on getting the host back online.
I’m not 100% sure what you’re trying to do, but a mix of Ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might even work without Ceph.
I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and a media server to pull it off. That’s just TrueNAS Scale, so it handles the data similarly. ZFS is also very good, but until Scale came out it wasn’t really possible to do “add a compute node to expand your storage pool”, which is how I want my VM hosts to work. Scaling ZFS out that way looks much harder than Ceph.
Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware and seeing how it goes on dummy data, then blowing it away and trying something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you learned works best.


3x Intel NUC 6th gen i5 (2 cores), 32 GB RAM each. Proxmox cluster with Ceph.
I once ignored the official limit and tried a single 32 GB SODIMM (out of a laptop) and it worked fine, but I went back to 2x 16 GB DIMMs since the real limit was still the 2 CPU cores. Lol.
I’ve been running that cluster for 7 or so years now, since I bought them new.
I suggest running off exactly this kind of shit-tier hardware, since three nodes give redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers and a full RD Gateway, broker, and session host setup with FSLogix, back when MS had only just bought that tech. Meanwhile my home “arr” stack just plugs along in Docker containers. Even my OPNsense router runs virtualised on them; just get a proper managed switch and bring the internet in on a VLAN to the guest VM on a separate virtual NIC.
Point is, it’s still capable today.


I thought this was an Onion article.


It’s solving a real problem in a niche case. Someone called it gimmicky, but it’s actually just a good tool that currently comes from an unknown quantity. Hopefully that gets sorted, or someone else takes up the reins and creates an alternative that works perfectly for all my different ISOs.
For the average home punter, maybe even up to the home-lab enthusiast, it’s probably not saving much time. For me it’s on my keyring, and I use it to reload Proxmox hosts, Nutanix hosts, and individual Ubuntu VMs running ROS Noetic, not to mention reimaging test devices. Probably a thrice-weekly thing.
So yeah, cumulatively it’s saving me a lot of time, and it trivialises the process.
If this were a spanner I’d just go Sidchrome or Kincrome instead of my Stanley. But it’s a bit niche, so I don’t know what else allows such simple multi-ISO booting. Always open to options.
IIRC, whenever I check online, whatever else I had configured before turns out to be dead or no longer the cool choice.
Whatever it is, I barely touch it and it works great. Very happy.


Omg, there are too many cars; I can’t buy them all.