Hey guys, so I’ve been self-hosting for 2 years, making small upgrades until I reached the point where I replaced my router with one of those Chinese fanless firewall boxes running an Intel N150, alongside a Proxmox homelab.
I’m self-hosting Headscale with many of my buddies connected, plus my own services. Everything was working great until I set up OPNsense.
The firewall was not easy to set up, and once it was running I started seeing odd behavior from Tailscale.
The firewall was blocking all connections from 100.60.0.0/24; I had to explicitly allow that range and change the firewall state to hybrid.
What happens is that my LXC containers running Tailscale receive requests on the tailscale0 interface but send their replies out via the LAN.
As I understand it, consumer routers are lax enough that this asymmetric routing works fine, but OPNsense’s stateful firewall drops the out-of-state replies.
Every guide I’ve read online talks about installing Tailscale on the OPNsense router directly, but I don’t want to expose the firewall itself to the tailscale network.
As a temporary workaround I added an ip route pointing the tailnet range at tailscale0, which resolved it, but I still haven’t found a solution that doesn’t involve compromising the firewall.
It’s also very cumbersome to repeat this for 50+ LXC containers over and over; even with systemd scripts doing it automatically, something could break in the future.
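For reference, this is roughly what my workaround looks like; just a sketch, assuming the tailnet range above (100.60.0.0/24) and standard iproute2/systemd tooling, with a hypothetical unit name, so adjust to your own setup:

```shell
# Send replies to tailnet clients back out tailscale0 instead of the LAN
# default route (assumption: 100.60.0.0/24 is the tailnet range):
ip route replace 100.60.0.0/24 dev tailscale0

# To persist across reboots, a minimal oneshot unit
# (hypothetical path /etc/systemd/system/tailnet-route.service):
cat <<'EOF' >/etc/systemd/system/tailnet-route.service
[Unit]
Description=Route tailnet replies via tailscale0
After=tailscaled.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip route replace 100.60.0.0/24 dev tailscale0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now tailnet-route.service
```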
If you guys have any experience with this it would help a lot.


I might have a solution for you, by doing what I’m doing. I’m running OPNsense as my firewall as well. I have one NAT: Port Forward rule for torrents (I really am seeding Linux ISO torrents) and that is it. Any services I’m hosting outside the network are exposed using Cloudflare Tunnels, either from a dedicated cloudflared instance or from the LXC itself. This method fixed my issues with Plex outside my network, since I was able to turn off “Remote Access” and make it available to my friends/family through a “Custom server access URL” (in the network settings; it looks like: https://plex.domain.url,http://192.168.1.xx:32400/). No messy NAT rules to complicate things.
I am also using Tailscale, but I don’t terminate it on my firewall. I terminate Tailscale on another host inside my network; you could probably use an LXC container. It’s a Debian system with Tailscale installed, IP forwarding enabled (https://tailscale.com/docs/features/subnet-routers), and set up as an exit node and subnet router. On OPNsense, I set up a gateway on the LAN interface pointing to my Debian Tailscale router node, then added routes in OPNsense pointing my family’s remote networks at that gateway. Fortunately for me (and because I set them up), they are all on different subnets.
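If it helps, the subnet-router side of this is just a few commands. A hedged sketch, assuming a Debian container with Tailscale already installed and 192.168.1.0/24 as a placeholder LAN:

```shell
# Enable forwarding so the node can route for the LAN
# (per Tailscale's subnet router docs):
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise the LAN subnet and offer the node as an exit node:
sudo tailscale up --advertise-routes=192.168.1.0/24 --advertise-exit-node
```

Note that with Headscale (as in your setup) advertised routes still have to be approved server-side before they take effect.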
The benefit to this method is that when I reach my services remotely, the traffic looks to the services on my network as if it is coming from the Tailscale router, so replies return there instead of trying to go out my firewall. Tailscale maintains the tunnel through the firewall, so the firewall itself isn’t a participant in the tailnet. The only real issue I’ve had has been DNS, with Tailscale’s MagicDNS wanting to respond instead of my internal DNS servers. I’ve got MagicDNS disabled now, but it always messed stuff up before that. The way I fixed it was to put Tailscale on my AdGuard container and make its Tailscale IP the first DNS server, followed by the internal (192.168.x.x) addresses of my DNS servers. This has worked pretty well for me.
Please let me know if you want any follow up info. I’ve been doing this for a long time. It’s my main hobby (and directly congruent to my job).
Edit: My security tingle just activated from what I told you. I am the only user on my tailnet. If you use this method, you will want to configure the tailnet ACLs/grants appropriately to restrict access to only what you would want another user to reach, rather than giving them full access inside your network. You can define each internal host they should reach in the policy’s hosts section and then restrict which users can reach it in the rules. I will admit that I did have to use some AI to figure out some of the specific access-control syntax, but I understand it pretty well now.
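To make that concrete, here’s a minimal, hypothetical policy fragment in Tailscale’s ACL format (it’s HuJSON, so comments are allowed). The user names, host name, and IP are all placeholders, so treat this as a starting point rather than a drop-in config:

```json
{
  "hosts": {
    // Placeholder: an internal service reachable over the tailnet.
    "plex-lxc": "100.60.0.10"
  },
  "acls": [
    // The guest user can only reach Plex on its service port.
    { "action": "accept", "src": ["friend@example.com"], "dst": ["plex-lxc:32400"] },
    // The admin keeps full access.
    { "action": "accept", "src": ["admin@example.com"], "dst": ["*:*"] }
  ]
}
```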