Re: PF: nat on ipsec
- In reply to: André S. Almeida : "Re: PF: nat on ipsec"
Date: Tue, 11 Oct 2022 11:08:57 UTC
> IPsec traffic flow is complicated. Have a look at enc. It's been
> instrumental in helping me fix this class of issue in several
> instances.
> YMMV.
>
> https://www.freebsd.org/cgi/man.cgi?query=enc&sektion=4

I have no clue why the host should try to do anything with the packets
except for changing the source and destination addresses (NAT). The
tunnel is set up between AWS and the VM on the host. The SSH connection
from AWS to a client "behind" OPNsense works. However, as soon as I try
to make an SSH connection from the jail ("behind" OPNsense) to AWS, the
packets from my local VPN endpoint (the OPNsense VM) do not get NATed on
the host. The host just tries to forward those UDP port 4500 packets,
with the private IPv4 address of the OPNsense VM as source, out the
egress interface with the public IP. This, of course, should not happen.

Routing problems can be ruled out: the exact same configuration works on
a Linux host running the same OPNsense VM. A simple

  sysctl net.ipv4.ip_forward=1 && \
  iptables -t nat -A POSTROUTING --source 192.168.251.100 -j SNAT --to-source $public_vpn_ip

did the trick there. There is a strange problem here; maybe it is not pf
related, and something else in the stack interferes with those packets.
Does anyone know, or could guess, whether this works with ipfw instead?
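For comparison, a rough pf.conf equivalent of that Linux SNAT rule might
look like the sketch below. It is untested, and "vtnet0", the macro names
and 203.0.113.10 are placeholders for the host's actual egress interface
and public VPN address:

  ext_if        = "vtnet0"            # placeholder: host egress interface
  opnsense_vm   = "192.168.251.100"   # private address of the OPNsense VM
  public_vpn_ip = "203.0.113.10"      # placeholder: public VPN endpoint address

  # rewrite IKE/NAT-T traffic from the VM so it leaves with the public address
  nat on $ext_if inet proto udp from $opnsense_vm to any port { 500, 4500 } -> $public_vpn_ip

together with the FreeBSD counterpart of ip_forward:

  sysctl net.inet.ip.forwarding=1

A quick "tcpdump -ni vtnet0 udp port 4500" on the host should then show
the translated source address; if it still shows 192.168.251.100, pf
never matched the packets on that interface, which is where enc(4) can
help to see where they actually pass.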