The naive way to put a self-hosted service on the public internet is: port-forward 443 on the home router, point DNS at the residential IP, hope the ISP does not change it. This works. It also publishes the home IP, paints a target on the router, and breaks the day the ISP rotates the lease.

The shape I converged on instead: a tiny VPS in a data centre with a stable public IP, doing nothing except forwarding traffic over a WireGuard tunnel to OPNsense at home, which then NATs the traffic to the internal proxy. The home IP never appears in DNS. The VPS does not host any of the actual services. The two roles separate cleanly.

The pieces#

public client
    |
    v
relay  65.21.53.234            (1 vCPU VPS, public IP)
  nginx :443 (SNI mux + TLS)
  wg0 :41820/udp        peers <-> 10.200.200.0/24
    |
    | WireGuard
    v
home OPNsense  62.178.189.3    (residential WAN)
  wg1                   10.200.200.250
  pf NAT + masquerade
    |
    v
internal proxy01  10.0.20.21   (the actual reverse proxy)
    |
    v
service VMs and containers

The relay listens on :443 with a tiny nginx that does SNI mux. It does not terminate TLS for most vhosts. Most of the time it sees the ClientHello, reads the SNI, and forwards the raw TCP stream over the WireGuard tunnel to OPNsense. The TLS handshake completes on the inside, not on the relay. The relay never holds the certificate's private key for those services.

OPNsense receives the inbound packets on wg1, masquerades them to look like they came from 10.200.200.250, and routes them to proxy01 at 10.0.20.21. proxy01 is the real reverse proxy: it terminates TLS, splits to backend services by hostname, runs the certbot dance, ships the certs back to the relay over NFS for the few vhosts that the relay does terminate.

What the home side has to do#

The non-trivial part is OPNsense. WireGuard alone gets you "encrypted tunnel between two boxes". You need three more things on top.

Forward NAT for inbound traffic so that a packet arriving on wg1 from 10.200.200.1 can hit proxy01:443 and the response goes back through the tunnel without OPNsense getting confused about reverse-path filtering. The rule shape in pf:

nat on wg1 from 10.200.200.0/24 to 10.0.20.21 -> (wg1)
pass in on wg1 inet proto tcp from any to 10.0.20.21 port 443

The trick is the nat on wg1 line. It rewrites the source of the proxied request to the OPNsense WireGuard IP. Without that, proxy01 would send the response packet directly to the relay's public IP via the WAN default route, the packet would arrive at the relay on the wrong interface, the relay would drop it, and the TLS handshake would hang.

Reply-routing for asymmetric paths. OPNsense's reply-to logic needs to be told that traffic which arrived on wg1 must exit on wg1, even when the destination's normal route says otherwise. This is automatic in newer OPNsense; older versions need an explicit reply-to on the inbound rule.
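
For those older versions, the explicit form looks roughly like this in pf terms; a sketch, with the relay's tunnel address 10.200.200.1 taken from the diagram, and in the UI it corresponds to setting the rule's gateway rather than editing pf.conf by hand:

pass in on wg1 reply-to (wg1 10.200.200.1) inet proto tcp from any to 10.0.20.21 port 443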

Outbound NAT exception for traffic going into the tunnel. By default OPNsense masquerades everything leaving WAN. The tunnel sits behind a different interface, but if you forget to except 10.200.200.0/24 from the outbound NAT rules, return traffic gets double-NATed and stops being routable. Add a hybrid outbound NAT rule that says "do not NAT traffic destined for 10.200.200.0/24".
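
In pf terms the exception is roughly this; a sketch, since in the UI it is a hybrid outbound NAT rule with "Do not NAT" ticked, and the interface it lives on depends on how the WireGuard interface is assigned:

no nat on wg1 from any to 10.200.200.0/24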

The combination of these three is more annoying than it sounds. OPNsense's UI exposes each rule in a different page. The first time you set this up you will spend a long evening with tcpdump -i wg1 and a confused face. Once it works, it stays working.
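
If you end up in that evening, the usual starting points look like this (interface names as in the diagram):

# on OPNsense: watch the forwarded connections arrive on the tunnel and check that replies leave the same way
tcpdump -ni wg1 'tcp port 443'

# on either end: confirm the tunnel itself is alive (recent handshake, transfer counters moving)
wg show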

What the relay side has to do#

Far less. The relay runs:

  • wg-quick@wg0 keeping the tunnel up (config sketched after this list)
  • nginx with two listener types
  • a tiny SNI mux in front
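
A minimal sketch of the relay's /etc/wireguard/wg0.conf, keys elided, addresses and port from the diagram; the home end is the mirror image, configured through the OPNsense UI rather than a file:

[Interface]
Address = 10.200.200.1/24
ListenPort = 41820
PrivateKey = <relay private key>

[Peer]
# OPNsense at home; it dials out to the relay, so no Endpoint is needed on this side
PublicKey = <opnsense public key>
AllowedIPs = 10.200.200.250/32

AllowedIPs stays narrow because OPNsense masquerades everything it forwards to its own tunnel address; widen it if you route more than that through the tunnel.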

The SNI mux is the part worth dwelling on. The relay terminates TLS for archworks.co (the blog you are reading, no secrets to hide) and does not terminate TLS for nextcloud.archworks.co, element.archworks.co, and the rest. The split lives in one stream block:

stream {
  map $ssl_preread_server_name $upstream {
    archworks.co                127.0.0.1:8443;   # local TLS termination
    default                     10.200.200.250:443; # forward into the tunnel
  }
  server {
    listen 443;
    ssl_preread on;
    proxy_pass $upstream;
    proxy_protocol on;
  }
}

ssl_preread lets nginx peek at the ClientHello SNI without doing the TLS handshake itself. The relay does not need the certificate for the forwarded vhosts. They terminate on proxy01 inside the home network.

The proxy_protocol on line is important. Without it, proxy01 sees the source IP as the masqueraded tunnel address (10.200.200.250), and every visitor looks like the relay. With it, nginx prepends a small header containing the real client IP. proxy01 parses it and the access logs are honest.
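
If proxy01 is also nginx (nothing above pins down what it runs), the receiving side is roughly this; certificate paths assume certbot's default layout and the backend address is a made-up placeholder:

# proxy01, inside the http {} context
server {
  listen 443 ssl proxy_protocol;
  server_name nextcloud.archworks.co;

  ssl_certificate     /etc/letsencrypt/live/nextcloud.archworks.co/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/nextcloud.archworks.co/privkey.pem;

  # trust the PROXY protocol header only when it arrives over the tunnel,
  # then use the client address it carries for logs and upstream headers
  set_real_ip_from 10.200.200.250;
  real_ip_header   proxy_protocol;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://10.0.20.30:8080;   # hypothetical backend for this hostname
  }
}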

What this buys#

The home IP is not in DNS. All A records for *.archworks.co point at the relay's public IP. The home line could rotate every six hours and the public services would not notice. It has rotated more than once. They have not noticed.

The home WAN can drop and nothing public moves. If the residential ISP has an outage, the tunnel drops, public requests start failing, but the DNS does not flap. Recovery is "tunnel comes back up". No DNS propagation, no monitoring noise about A-record changes, no third party noticing the IP shifted.

The relay is disposable. I run one. I could run two. If a relay gets DDoSed off the internet, the path forward is "spin up a new VPS, install WireGuard, peer it, point DNS at it". The whole role of a relay is small enough to fit on a 5 EUR per month VPS. Multiple relays just become extra peers on the home OPNsense, each owning their own slice of DNS.

The same tunnel works in reverse. From inside the home network, I can set a route that pushes outbound traffic through wg1 and out the relay's WAN. The home network gets a clean public IP for outbound traffic when I want one. It is not a privacy story. It is a "the relay is the egress point for anything from home that needs a stable origin IP" story. Useful for outbound MTAs, for API integrations that whitelist source IPs, for the occasional region-restricted lookup.
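
The relay-side half of that is two lines; a sketch, assuming the relay's WAN interface is eth0 and that OPNsense masquerades home traffic to its tunnel address before pushing it into wg1:

# on the relay: forward traffic arriving over the tunnel and masquerade it out the WAN
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.200.200.0/24 -o eth0 -j MASQUERADE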

Nothing sensitive sits on the relay. No application data, no databases, no user accounts, no certificates for the services that actually matter. The relay's threat surface is "an attacker can read the TLS traffic for archworks.co". Which, for the blog you are reading, is "the public HTML I just put there".

What it does not do#

The relay is the WAN-side IP for the home network as far as the public is concerned. It is not a privacy hop. The traffic between the visitor and the relay is normal TLS. The traffic between the relay and home is WireGuard. The visitor's IP and traffic patterns are visible to the relay, the same way they would be to any reverse proxy.

The relay is also a single point of failure unless you run more than one. The home side is a single point of failure unless your services are tolerant of "the actual backend is gone for a few minutes". For my use case, that is fine. For something more serious, the pattern extends naturally: two relays in different ASNs, anycast or weighted DNS, two WireGuard tunnels from OPNsense, the outbound NAT exceptions get a bit longer.

There is a small operational cost: certbot can no longer use HTTP-01 challenges for the vhosts the relay does not terminate, because the HTTP challenge would be served from the wrong machine. Switch to DNS-01 and the problem goes away. I use the PowerDNS API on the relay itself; the writeup is in another post.

The rough recipe, if you want to copy it#

  1. Get a VPS with a stable IP. Hetzner CX23 is what I run, 5 EUR per month.
  2. Generate a WireGuard keypair on both ends, peer them, give the relay 10.200.200.1 and the OPNsense 10.200.200.250.
  3. On OPNsense: add the WireGuard interface, add the inbound NAT rule that rewrites the source to the wg IP, add the outbound NAT exception for the wg subnet, allow inbound TCP from the wg subnet to the internal proxy.
  4. On the relay: nginx with stream block doing ssl_preread SNI mux, proxy_protocol on, default upstream is the tunnel IP, exceptions for any vhost you actually want the relay to terminate.
  5. On the internal proxy: configure it to accept PROXY protocol on the inbound listener.
  6. Point all public DNS at the relay.
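
Step 6 in zone-file terms, with the relay address from the diagram (the TTL is an arbitrary choice):

archworks.co.     300  IN  A  65.21.53.234
*.archworks.co.   300  IN  A  65.21.53.234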

There is no single document that walks through all of this end to end. There is OPNsense's WireGuard guide, nginx's stream module docs, and a lot of tcpdump in between. The combination is the interesting part.