I want to move away from Cloudflare tunnels, so I rented a cheap VPS from Hetzner and tried to follow this guide. Unfortunately, the WireGuard setup didn’t work. I’m trying to forward all traffic from the VPS to my homeserver and vice versa. Are there any other ways to solve this issue?
VPS Info:
OS: Debian 12
Architecture: ARM64 / aarch64
RAM: 4 GB
Traffic: 20 TB
You don’t want to forward all traffic. You can do SNAT port forwards across the VPN, but that requires the clients in your LAN to use the VPS as their gateway (I do this for a few services that I can’t run through a proxy; it’s clunky but works well).
Typically, you’ll want to proxy requests to your services rather than forwarding traffic.
(I use ufw on Debian, but you can use iptables if you want.) I’ve done this since ~2013 (before CF tunnels were even a product) and it has worked great.
My original use case was to set up direct connectivity between a Raspberry Pi with a 3G dongle and a server at home on satellite internet. Both ends were behind CG-NAT, so this was the solution I came up with.
Out of curiosity, why not a simple reverse proxy on the VPS (one that only adds the client’s real IP to the headers), tunneled to a full reverse proxy on the home server (which does host routing and everything else) through an SSH tunnel?
What would that kind of setup look like?
Variant 1: the VPS forwards the encrypted connections as-is through the SSH tunnel, and the home proxy terminates TLS and does host routing.
Pro: very secure. The VPS doesn’t store any sensitive data (no TLS certificates, only an SSH public key) and the client connections pass through the VPS double-encrypted (TLS between the client browser and the home proxy, wrapped inside SSH).
Con: you don’t get the client’s IP. When the home apps receive the connections, they appear to originate from the home end of the SSH tunnel, which is a private interface on the home server.
Variant 2, in case you need client IPs: the simple proxy on the VPS terminates TLS, adds the client’s IP to the headers, and forwards the traffic through the SSH tunnel to the home proxy.
Pro: by decrypting the TLS connection, the simple proxy can add the client’s IP to the HTTP headers, making it available to logs and apps at home.
Con: the VPS needs to store the TLS certificates for all the domains you’re serving, you need to copy fresh certificates to the VPS whenever they expire, and the unencrypted connections are available on the VPS between the exit from TLS and the entry into the SSH tunnel.
Edit: Variant 3? proxy protocol
I’ve never tried this, but apparently there’s a so-called PROXY protocol that can be used to attach information such as the client IP to TLS connections without terminating them.
You would still need a VPS proxy and a home proxy like in variant 2, and they both need to support proxy protocol.
The frontend (VPS) proxy would forward connections in stream mode and use proxy protocol to add client info on the outside.
The backend (home) proxy would terminate TLS and do host routing etc., but it can also extract the client IP from the PROXY protocol header and place it in HTTP headers for apps and logs.
Pro: It’s basically the best of both variant 1 and 2. TLS connections don’t need to be terminated half-way, but you still get client IPs.
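As a rough sketch of variant 3 (assuming nginx on both ends; the port numbers, hostnames and certificate paths here are made-up examples, not part of the thread), the two configs could look something like this:

```nginx
# VPS side: forward the raw TLS stream untouched, prepending a
# PROXY protocol header with the real client address.
stream {
    server {
        listen 443;
        proxy_pass 127.0.0.1:8443;  # assumed local end of the tunnel to home
        proxy_protocol on;          # attach client info outside the TLS stream
    }
}
```

```nginx
# Home side: terminate TLS and read the PROXY protocol header back.
http {
    server {
        listen 443 ssl proxy_protocol;
        server_name app.example.com;              # assumption: your domain
        ssl_certificate     /etc/ssl/fullchain.pem;
        ssl_certificate_key /etc/ssl/privkey.pem;
        set_real_ip_from 127.0.0.1;               # trust only the tunnel endpoint
        real_ip_header proxy_protocol;            # recover the real client IP
        location / {
            proxy_pass http://localhost:8080;     # assumption: your app
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

The key point is that the VPS side uses nginx’s stream module, so the TLS session itself is never opened there; only the PROXY protocol header added around it carries the client IP.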
Please note that it’s up to you to weigh the pros and cons of having the client IPs or not. In some circumstances it may actually be a feature to not log client IPs, for example if you expect you might be compelled to provide logs to someone.
Very interesting… How do I get started?
The SSH tunnel is just one command, but you may want to use autossh to restart it if it fails.
If you choose variant 2 you will need to configure a reverse proxy on the VPS that does TLS termination (using the correct certificates for each domain on port 443). Look into nginx, caddy, traefik or haproxy.
For the full home proxy you will once again need a reverse proxy, but you’ll additionally need to do host routing to direct each (sub)domain to the correct app. You’ll probably want to use the same proxy software as above to avoid learning two different proxies.
I would recommend either caddy (both) or nginx (vps) + nginx proxy manager (home) if you’re a beginner.
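As an illustration of what the home-side host routing might look like (a sketch assuming Caddy, with hypothetical hostnames and ports):

```caddy
# Home Caddyfile sketch: route each (sub)domain to its own app.
app.example.com {
    reverse_proxy localhost:8080
}

git.example.com {
    reverse_proxy localhost:3000
}
```

Caddy handles TLS certificates automatically for each hostname, which is one reason it’s often suggested for beginners.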
How do I make the SSH tunnel forward traffic? It can’t be as easy as just running ssh user@SERVER_IP in the terminal. (I only need variant 1, btw.)
You also add the -R parameter: https://linuxize.com/post/how-to-setup-ssh-tunneling/ (you want the “remote port forwarding” section). The ssh -R, -L and -D options are magical; more people should learn about them.
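A sketch of what the tunnel command could look like for variant 1 (run from the home server; the port numbers are example choices, and autossh is optional but keeps the tunnel alive):

```shell
# Expose the home proxy's port 443 on the VPS via remote forwarding.
# Assumptions: sshd on the VPS has "GatewayPorts yes" (or clientspecified)
# so the forwarded port is reachable from outside, and you connect as a
# user allowed to bind the port. Ports below 1024 need root on the VPS;
# a common workaround is forwarding to a high port like 8443 and
# redirecting 443 to it there.
autossh -M 0 -N \
  -o "ServerAliveInterval 30" \
  -o "ServerAliveCountMax 3" \
  -R 0.0.0.0:443:localhost:443 \
  user@SERVER_IP
```

If the VPS runs ufw, you’d also open the port there, e.g. `ufw allow 443/tcp`.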
You may also need to open access to port 443 on the VPS. How you do that depends on the VPS service, check their documentation.
Hi, whenever I try to enter the ports 80 and 443 at the beginning of the -R parameter, I get this error: “Warning: remote port forwarding failed for listen port 80”. How do I fix this?

The biggest obstacle for me is the connection between the VPS and my homeserver. I tried this today: pinging 10.0.0.2 (the homeserver IP via WireGuard) gives this result:

PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
ping: sendmsg: Destination address required
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
ping: sendmsg: Destination address required
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1019ms
Not sure why though.
Can you post your WG config (masking the public IPs and private key if necessary)?
With WireGuard, the AllowedIPs setting is basically its routing table. Also, you don’t want to set the Endpoint address (on the VPS) for your homeserver peer, since it’s behind NAT; you only want to set that on the ‘client’ side. Since you’re behind NAT, you’ll also want to set PersistentKeepalive in the client peer so the tunnel remains open.
Hi, thank you so much for trying to help me, I really appreciate it!
VPS wg0.conf:

[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = REDACTED
PostUp = iptables -t nat -A PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source SERVER_IP
PostUp = iptables -t nat -A PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -D POSTROUTING -o eth0 -j SNAT --to-source SERVER_IP
PostDown = iptables -t nat -D PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2

[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.2/32
Homeserver wg0.conf:

[Interface]
Address = 10.0.0.2/24
PrivateKey = REDACTED

[Peer]
PublicKey = REDACTED
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
Endpoint = SERVER_IP:51820
(REDACTED would’ve been the public / private keys, SERVER_IP would’ve been the VPS IP.)
On the surface, that looks like it should work (assuming all the keys are correct and 51820/udp is open to the world on your VPS).
Can you ping the VPS’s WG IP from your homeserver and get a response? If so, try pinging back from the VPS after that.
Until you get the bidirectional traffic going, you might try pulling out the iptables rules from your wireguard script and bringing everything back up clean.
I do not get a response when pinging the VPS’s WG IP from my homeserver. It might have something to do with the firewall that my VPS provider (Hetzner) is using. I’ve now allowed port 51820 on both UDP and TCP and it’s still the same as before… This is weird.

I’m not familiar with Hetzner, but I know people use them; I haven’t heard of any kind of block on WG traffic (though I’ve read they do block outbound SMTP).
Maybe double-check your public and private WG keys on both ends. If the keys aren’t right, it doesn’t give you any kind of error; the traffic is just silently dropped if it doesn’t decrypt.
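A few commands that can help with that check (assuming the interface is named wg0; run on either end):

```shell
# Show peers, endpoints and the time of the last handshake.
# If "latest handshake" never appears, packets are either not
# arriving (firewall/port issue) or not decrypting (wrong keys).
wg show wg0

# Derive the public key from a private key and compare it with what
# the *other* side has configured as this peer's PublicKey:
wg pubkey < /etc/wireguard/privatekey

# On the VPS, watch whether WG packets arrive at all while pinging
# from home:
tcpdump -ni eth0 udp port 51820
```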
Hmm, the keys do match on the two different machines. I have no idea why this doesn’t work…
Dumb question: you’re starting wireguard right? lol
In most distros, it’s systemctl start wg-quick@wg0, where wg0 is the name of the config file in /etc/wireguard.
If so, then maybe double/triple check any firewalls / iptables rules. My VPS providers don’t have any kind of firewall in front of the VM, but I’m not sure about Hetzner.
Maybe try stopping wireguard, starting a netcat listener on 51820 UDP and seeing if you can send to it from your homelab. This will validate that the UDP port is open and your lab can make the connection.
### VPS
user@vps: nc -l -u VPS_PUBLIC_IP 51820

### Homelab
user@home: echo "Testing" | nc -u VPS_PUBLIC_IP 51820

### If successful, VPS should show:
user@vps: nc -l -u VPS_PUBLIC_IP 51820
Testing
I do know this is possible as I’ve made it work with CG-NAT on both ends (each end was a client and routed through the VPS).