Is anybody running IPv6-only in their home lab? I keep running into weird problems where some services bind only to IPv6 and are "invisible" to everyone else (I'm looking at you, Java!). I end up disabling IPv6 to force everything onto the same protocol, but I started wondering, "why not disable IPv4 instead?" I'd have half as many firewall rules, routes, and configurations. What are the risks?
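For what it's worth, the "invisible service" symptom usually comes down to the `IPV6_V6ONLY` socket option: an IPv6 listener with it set accepts IPv6 clients only, while with it cleared the same socket is dual-stack and IPv4 clients arrive as mapped addresses like `::ffff:192.0.2.1`. (Java exposes related knobs via the documented `java.net.preferIPv4Stack` / `java.net.preferIPv6Addresses` system properties.) A minimal Python sketch of the socket-level behavior, assuming a dual-stack Linux host:

```python
import socket

def make_listener(v6only: bool) -> socket.socket:
    """Bind a TCP listener on the IPv6 wildcard address (::).

    With IPV6_V6ONLY = 1 the socket accepts IPv6 clients only --
    IPv4 clients can't reach it, which is the "invisible service"
    symptom above. With IPV6_V6ONLY = 0 the socket is dual-stack.
    """
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY,
                 1 if v6only else 0)
    s.bind(("::", 0))  # port 0: let the OS pick a free port
    s.listen(1)
    return s

srv = make_listener(v6only=False)
port = srv.getsockname()[1]

# An IPv4 client can reach the dual-stack listener over IPv4 loopback:
c = socket.create_connection(("127.0.0.1", port), timeout=2)
c.close()
srv.close()
```

With `v6only=True` that same `create_connection` over `127.0.0.1` would fail with connection refused, which is exactly how a service can be up and healthy yet unreachable from IPv4-only clients.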
As someone mentioned above, there are some SOHO devices I have run into that plainly just won't work with IPv6 even though they claim to and will pull a lease. One or two I have seen seem to stop working simply by having v6 enabled. Hours of troubleshooting. It's worth making sure someone in internet-land at least claims v6 works on any persistently troublesome devices.
I'm trying to be progressive, but after thinking outside of my little network and reading the posts here, it seems like there's still a long way to go before I should consider it. I don't have a split network at home, so it would potentially affect everyone in the house. Additionally, I don't have serious needs for production-grade network equipment, so that cheap USB-to-Ethernet adapter with more Chinese characters than English on the instruction sheet has a high probability of biting me.
This was sort of a wild-hare thought: disable IPv4 instead of IPv6 to solve a problem that's more of an inconvenience. I am probably not ready for this undertaking. Maybe I'll revisit it when I get around to partitioning my network.