LAN-to-LAN VPN using WireGuard

So.

In recent years I’ve been more and more pulled into network administration.  I’ve been involved in small companies with their own infrastructure and have had to learn how to work with VLANs and DHCP and all that jazz.

My current company has offices in two locations and needs to have the internal networks connected via VPN.  Having planned the networks so that we have no overlapping subnet ranges, we initially thought we’d use the built-in VPN facilities of our internet gateways.  But we decided against this, since we wanted to have high throughput (up to 1Gbps) and gateway hardware isn’t typically designed for this.  So, a tunnel between two Linux servers on the LAN, then.

OpenVPN

We first set up OpenVPN on the servers, since we needed a VPN solution for people at home to connect to the office.  There are tutorials on how to do this with OpenVPN and it is reasonably simple to do, but OpenVPN is so full of configuration options, often poorly documented, that it is non-trivial to get right.  There is all the messing about with public key cryptography, certificates, client keys and whatnot.  And then picking sensible crypto configuration options.  We have this running in both offices.  It works for people working from home.

But then we wanted to use the same setup for a VPN tunnel between the offices.  And this turned out to be trickier.  There are frankly not many tutorials showing how to do that.  Keeping the tunnel running reliably was a challenge.  And throughput was disappointing.

I spent some time trying to diagnose why throughput was bad.  In the end, I think I discovered that packets were being dropped on the boundary between the OpenVPN code running in user space and the Linux tunnel interface.  And dropped packets kill TCP performance, since TCP goes into back-off mode.  No amount of increasing network buffers seemed to cure this.

Site-to-site VPN using SSH

In the end, I set up tunnels over SSH.  This turned out to be relatively stable and moderately easy to configure, with acceptable performance.  We were getting some 150Mbps over our link, which has a round-trip latency of some 80ms.  Having set up port forwarding for the ssh connection on the gateway, setting up such a tunnel is not that hard.  I ended up writing systemd units that ran scripts, similar to this:

#! /usr/bin/bash
REMUSER=ec2-user        # login user on the remote tunnel host
REMHOST=myhost.org      # remote tunnel host
LOCTUN=7                # local tun device number (tun7)
REMTUN=0                # remote tun device number (tun0)
LOCADDR=10.8.0.7        # local tunnel endpoint address
REMADDR=10.8.0.8        # remote tunnel endpoint address
REMVPC=172.30.0.0/16    # remote network reachable through the tunnel

# remote tunnel device must exist prior to setting up main connection
ssh $REMUSER@$REMHOST "sudo /usr/sbin/ip tuntap add dev tun$REMTUN mode tun user $REMUSER"

# set up main tunnel
exec ssh \
-o PermitLocalCommand=yes \
-o LocalCommand="\
ifconfig tun$LOCTUN $LOCADDR pointopoint $REMADDR && \
ifconfig tun$LOCTUN txqueuelen 10000 && \
ip route add $REMVPC via $REMADDR" \
-o ServerAliveInterval=30 \
-o ServerAliveCountMax=5 \
-n -w $LOCTUN:$REMTUN $REMUSER@$REMHOST \
"sudo /usr/sbin/ifconfig tun$REMTUN $REMADDR pointopoint $LOCADDR && sudo /usr/sbin/ifconfig tun$REMTUN txqueuelen 10000 && \
sudo /usr/sbin/ip route add 192.168.10.0/24 via $LOCADDR; \
sudo /usr/sbin/ip route add 192.168.11.0/24 via $LOCADDR; \
sudo /usr/sbin/ip route add 192.168.12.0/24 via $LOCADDR; "

What this does is set up the tunnel interface locally (ssh will have created it) and assign endpoint addresses for a peered connection.  It then opens an SSH tunnel connection to the remote server, where the remote tunnel device is configured.  Appropriate routes also have to be set up, both locally and remotely.  And of course, the remote and local networks have to have static routes in place to route packets to the gateway hosts.
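For illustration, such a static route on the LAN-A side might look like the sketch below; the tunnel host address 192.168.10.5 is a made-up example:

# on the LAN-A default gateway (or on individual hosts), send traffic
# for the remote network via the local tunnel server (hypothetical
# address 192.168.10.5)
ip route add 172.30.0.0/16 via 192.168.10.5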

In addition, if you are not logging in as root (not recommended anyway), the remote tunnel device must first be created and its ownership granted to the remote user, which is what the first ssh command in the script does.  When logging in as root, the /dev/tunX device creation and teardown is handled by SSH itself:

sudo ip tuntap add dev tun0 mode tun user ec2-user

This works ok.  systemd will retry the connection if it fails.  The tunnel is much more reliable than OpenVPN, takes seconds to set up and just kind of works.  But twiddling with tunnel interface numbers is a bother.  Also, it is a TCP tunnel, and performance isn’t that great because tunneling TCP over TCP isn’t optimal.

This is the systemd unit file, saved as /etc/systemd/system/sshtun@.service:

[Unit]
Description=SSH tunnel for %I
After=network.target

[Service]
ExecStart=/usr/bin/bash /etc/sshtun/%i

# Wait >2 seconds between restarts to avoid tripping StartLimitInterval
RestartSec=5
Restart=always

[Install]
WantedBy=multi-user.target
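With the script above saved under /etc/sshtun/, the tunnel is then enabled per instance; the instance name office-b is a hypothetical example:

systemctl enable --now sshtun@office-b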

WireGuard

So, recently I was made aware of WireGuard.  A simple VPN encapsulation protocol to be included in the Linux kernel, no less.  I read this article and decided to give it a go.  I had reason to believe it might be better than my SSH solution:

  • It uses UDP packets.
  • It is kernel-based and uses its own interface type, i.e. no messing around with tun/tap interfaces.
  • There should be no user-space/kernel-space bottleneck.
  • There are almost no configuration options.
  • It promises to automatically re-negotiate the tunnel if required.

Setting it up is pretty straightforward, but the use cases shown are typically home-to-office VPNs.  For site-to-site VPNs I have chosen to do things a bit differently.

A tunnel endpoint CIDR range

Selecting an IP address for the interface

When server A, on LAN A, connects to server B on LAN B, what IP addresses should one assign to the WireGuard interfaces?  As a concrete example: server Alpha has the LAN address 192.168.10.100/24, and server Beta lives on a different LAN and has the IP address 192.168.20.60/24.

One approach would be to give the WireGuard interface on server Alpha an address from the remote LAN, and vice versa.  This is fine for two networks, and if you can reserve the addresses out of the DHCP range of each.  But the approach I have used is to reserve a separate private IP range for the tunnel endpoint network.  That way, we can set up more complex topologies.  I use the network 10.8.1.0/24 as the virtual tunnel network.  Each WireGuard interface on each tunnel server gets one address out of this range.
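To make this concrete, the allocation for the setup described below looks like this (the third entry is a hypothetical future addition):

10.8.1.1   wg0 on server Alpha  (LAN 192.168.10.0/24)
10.8.1.2   wg0 on server Beta   (LAN 192.168.20.0/24)
10.8.1.3   wg0 on a possible third server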

Setting up

Based on the instructions here, these are the steps needed to configure server Alpha.  We assume that you have created private and public keys on each server and put the private key in /etc/wireguard/privatekey.  Also, port forwarding on both sides will forward UDP packets on the external ports to the servers.  We also assume that IP forwarding has been enabled on each server.  Notice how we have added the tunnel endpoint address to the interface and allow the remote tunnel address through.
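If you still need to do those prerequisites, here is a minimal sketch; the wg and sysctl commands are standard, but the file locations are simply the conventions used in this post:

# generate the key pair (run once on each server)
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
chmod 600 /etc/wireguard/privatekey

# enable IP forwarding (add to /etc/sysctl.conf to make it permanent)
sysctl -w net.ipv4.ip_forward=1

With that in place, you run the following as root: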

ip link add dev wg0 type wireguard
ip address add dev wg0 10.8.1.1/24
wg set wg0 listen-port 7777
wg set wg0 private-key /etc/wireguard/privatekey
wg set wg0 peer CbX0FSQ7W2LNMnozcMeTUrru6me+Q0tbbIfNlcBzPzs= allowed-ips 192.168.20.0/24,10.8.1.2/32 endpoint networkB.company.com:8888
ip link set up wg0

Now you have your basic settings, but there is no routing yet.  The cool thing is that the utility wg-quick will help you with that.  First, you save the config you have made:

touch /etc/wireguard/wg0.conf
wg-quick save wg0

This will save the config in the above file.  You can edit it if you like, but it should be fine.  You can now bring the interface down and up with wg-quick:

wg-quick down wg0
wg-quick up wg0

That was easy.  wg-quick will have created the interface, assigned the correct address, configured WireGuard and modified the server routing tables.
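For reference, the saved /etc/wireguard/wg0.conf for server Alpha should end up looking roughly like this, matching the wg commands above (private key abbreviated):

[Interface]
Address = 10.8.1.1/24
ListenPort = 7777
PrivateKey = <contents of /etc/wireguard/privatekey>

[Peer]
PublicKey = CbX0FSQ7W2LNMnozcMeTUrru6me+Q0tbbIfNlcBzPzs=
AllowedIPs = 192.168.20.0/24, 10.8.1.2/32
Endpoint = networkB.company.com:8888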

systemd

The final trick is to make this into a service. That’s remarkably easy, due to a systemd unit being included. This is what you do:

systemctl enable wg-quick@wg0 --now

This enables the unit and runs it, in one swell foop.

The other server

On server Beta, you perform the same dance, except that you assign a different local tunnel address and put Alpha’s addresses in the peer section.
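A sketch of what Beta’s wg0.conf would then contain; Alpha’s public key and the networkA.company.com endpoint are stand-ins for your actual values:

[Interface]
Address = 10.8.1.2/24
ListenPort = 8888
PrivateKey = <contents of Beta's /etc/wireguard/privatekey>

[Peer]
PublicKey = <Alpha's public key>
AllowedIPs = 192.168.10.0/24, 10.8.1.1/32
Endpoint = networkA.company.com:7777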

Testing

If all goes well, you should be able to ping between the servers now.
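A quick sanity check from server Alpha could look like this; wg show also tells you whether a handshake has taken place:

ping 10.8.1.2        # Beta's tunnel endpoint address
ping 192.168.20.60   # Beta's LAN address, through the tunnel
wg show wg0          # latest handshake and transfer counters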

Adding a third server

This is when things become simple.  Just allocate a third tunnel interface address to the third server.  Add [peer] sections to the wg0.conf files on the existing servers for the third server.  Set up the third server with the other two as peers.  Start up the interface.  It should just work.
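For instance, the [peer] section added to Alpha’s and Beta’s wg0.conf for a hypothetical third server (Gamma, on an assumed LAN 192.168.30.0/24, with a made-up endpoint) could look like this:

[Peer]
PublicKey = <Gamma's public key>
AllowedIPs = 192.168.30.0/24, 10.8.1.3/32
Endpoint = networkC.company.com:9999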

Conclusion

Using WireGuard I managed to get the throughput up significantly.  Over our inter-office connection with 80ms round-trip time, and with an internet connection of 1000Mbps on both ends, we now get some 350Mbps through the tunnel, using regular consumer workstations serving as tunnel endpoints.  And the tunnels just work.
