Setting up a VPN gateway in Ubuntu using LXC containers and OpenVPN

A simple how-to for setting up a VPN gateway using LXC containers, also covering the setup of a killswitch and a SOCKS5 proxy.


Motivation

I use a VPN to reach certain sites that are not accessible from where I am. Until now I was using a virtual machine that would connect to the VPN and do everything in there. The main issue with that is that I was running it on my laptop, and having a VM running just for the VPN was pretty heavy.

It also meant that I had to keep my laptop on all day if I wanted to keep downloading things through the VPN. Plus, I had to remember to stop the VM whenever I left home with my laptop, because I didn't want to use the VPN from outside my place.

I happened to find my old Acer Aspire One netbook and decided to use it to connect to the VPN and share that connection with my other machines through a SOCKS server. It could also run services that need to use the VPN all the time.

Objective

My objective was to have my Acer netbook running a VPN client whose connection I could share with the LAN, with the following properties:

  • All traffic to the internet must go through the VPN.
  • Have only one open port, used to share the connection through a SOCKS5 proxy.
  • Through the SOCKS5 proxy, one should be able to access services running on the localhost interface of the VPN endpoint.

Idea

As I will probably want to use the Acer netbook for other things apart from the VPN (I want to see if it can handle a media server, or maybe host a wiki or something like that), I didn't want to install the VPN endpoint directly on it.

I also wanted to have an SSH port open on it for easy administration, which I didn't want to have on my VPN endpoint.

Having a VM running on the Acer netbook was out of the question: its 1GB of RAM wouldn't be able to handle it.

So I decided to go for container technology. My first thought was to use docker, but... docker is not available for my poor netbook's 32-bit architecture. That's why I decided to go for LXC.

The base system that I installed in the Acer is Ubuntu Server 14.04 LTS.

As the SOCKS5 server I chose dante, which seems to be the most widely used SOCKS5 server implementation.
I'm quite tempted to try SS5, as it seems to be easier to set up, but I only discovered it once I was halfway through configuring dante (I had a hard time making dante work the way I wanted).

Installing LXC and setting up the container

First things first.
To install LXC and related tools that we are going to need, we use apt-get:

$ sudo apt-get install lxc lxc-templates lxctl

We can verify if everything is ok by issuing:

$ sudo lxc-checkconfig

Once LXC has been installed, we will proceed to create the container using the Ubuntu template:

$ sudo lxc-create -n <container-name> -t ubuntu

This will create a new container identified by the given name based on the ubuntu template.

The first thing I do after installing the container from the template is to remove the default ubuntu user and create my own with sudo privileges.
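
As a rough sketch of how that can look (the Ubuntu template's default credentials are ubuntu/ubuntu, and myuser is just a placeholder name), from the container's console:

$ sudo adduser myuser
$ sudo usermod -aG sudo myuser
# log out and back in as myuser, then:
$ sudo deluser --remove-home ubuntu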

Setting a static IP for the container through DHCP

As we are going to need to forward ports from the host to the container, we require that the container gets assigned the same IP every time.

We can directly configure the container to use a static IP, but a nicer way is to add a rule to the DHCP server in the host so that it always assigns the same IP to the container.

In order to do that we need to create or edit the file /etc/lxc/dnsmasq.conf and make sure that this line exists:

dhcp-hostsfile=/etc/lxc/dnsmasq-hosts.conf

Each line of this file contains the name of an LXC container and the IP to assign to it, separated by a comma.
If we want to give the vpn-container container the IP 10.0.1.10, the file should have the line:

vpn-container,10.0.1.10
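
For the new assignment to take effect, the dnsmasq instance that LXC uses has to re-read this file; on Ubuntu 14.04, restarting the lxc-net job (with the container stopped) should be enough:

$ sudo service lxc-net restart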

Setting up the OpenVPN client

Enabling tunnel interfaces in the container

To set up the OpenVPN client we will first need to enable the tun device in our container.
For that we need to modify our container's config file, which is located at /var/lib/lxc/<container-name>/config.

We need to add the following:

# Allow Tun Device
lxc.cgroup.devices.allow = c 10:200 rwm

# Run an autodev hook to setup the device
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/<container-name>/autodev
lxc.pts = 1024
lxc.kmsg = 0

The second part of the added configuration will execute the autodev script that will allow us to create the tun device node.
This script should have the following:

#!/bin/bash

# Create the tun device node inside the container's /dev so OpenVPN can use it
cd ${LXC_ROOTFS_MOUNT}/dev
mkdir net
mknod net/tun c 10 200
chmod 0666 net/tun

This will create the corresponding device node for the tunnel interface with the correct and necessary permissions.

Note: remember to give the autodev script execution permissions.
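
For example, from the host:

$ sudo chmod +x /var/lib/lxc/<container-name>/autodev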

Setting up OpenVPN in the container

The first thing to do is install OpenVPN using apt-get from the console.
In order to log into the container's console we execute the following command:

$ sudo lxc-console -n <container-name>
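
Once logged in, installing OpenVPN inside the container is just a matter of:

$ sudo apt-get install openvpn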

Once OpenVPN has been installed, we will continue by adding to the container's filesystem the files needed to connect to the VPN.
The container's filesystem is located at /var/lib/lxc/<container-name>/rootfs in the host's filesystem.

We can create the folder /var/lib/lxc/<container-name>/rootfs/root/vpn/, which corresponds to the /root/vpn/ folder inside the container, and put the files required by the VPN there (certificates, keys, etc.).

We also need to create a login.info file where we are going to store the user and password used to log in to the VPN service. It is important that we set its permissions to 600 (i.e. read/write for the owner, nothing for the rest) as it will contain sensitive information.
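
After creating the file, the permissions can be set from the host (the path below just mirrors the container's /root/vpn/ folder as seen from the host):

$ sudo chmod 600 /var/lib/lxc/<container-name>/rootfs/root/vpn/login.info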

The first line of this file has to be the username and the second line the password; nothing else is needed. In order for OpenVPN to connect to the VPN at startup, we need to put the .ovpn file from the provider in the /etc/openvpn directory of the container, but using the .conf extension.
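
For example, assuming the provider's file is called provider.ovpn (a placeholder name), from inside the container that would be something like:

$ sudo cp /root/vpn/provider.ovpn /etc/openvpn/provider.conf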

In the base file we get from the provider, we need to adjust the location of the corresponding files (ca.crt, crl.pem, etc.) and add the following lines:

auth-user-pass /root/vpn/login.info
up /root/vpn/vpn-connected.sh
down /root/vpn/vpn-disconnected.sh

The first line tells OpenVPN to use the file to get the user and password to authenticate against the VPN.

The second and third lines make OpenVPN execute a script after it finishes connecting to the VPN and after it closes the VPN connection, respectively.

For now we will use these scripts to add and remove routes so that we can communicate with the host's LAN without routing through the VPN.

Assuming that the host is located in the subnetwork 10.0.0.0/24, that the container is NATted on the subnetwork 10.0.1.0/24, and that the host is assigned the IP 10.0.1.1/24 on that bridge, the up script should have the following content:

#!/bin/bash
route add -net 10.0.0.0 netmask 255.255.255.0 gw 10.0.1.1

We are basically overriding, for the host's subnetwork, the route that OpenVPN sets to send everything through the VPN.
This way, traffic to the LAN is routed correctly through the host.

Correspondingly, the down script will contain the command to remove the route that was added:

#!/bin/bash
route del -net 10.0.0.0 netmask 255.255.255.0 gw 10.0.1.1

Both files should be given execution permission.

If we start our container by issuing the following from the host:

$ sudo lxc-start -n <container-name> -d

The container will then connect to the VPN automatically.
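
To check that the connection actually came up, we can log into the container's console, verify that the tun0 interface exists, and inspect the routing table to see the routes OpenVPN added:

$ ip addr show tun0
$ route -n
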
We can now move on to hardening the container to guarantee that connections to the internet always go through the VPN.

Blocking non-VPN internet traffic using IPTables

We are going to use iptables to ensure that no traffic leaves the container to the internet except through the VPN.
Why? If the VPN connection drops while we are doing something, the routes might be set back to the defaults, and traffic would no longer be protected by the VPN but routed directly through the host instead.

By placing the following rules in iptables we can be sure that no traffic will leave the container towards the internet except by going through the VPN.

We create a script in the container's filesystem (a good place to put it is inside the /root/ directory) with the following content:

#!/bin/bash
# Clear any existing iptables rules
iptables -F
# Allow anything in and out the vpn interface
iptables -A INPUT -i tun0 -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT
# Allow in and out traffic from the VPN endpoint.
# Replace aaa.bbb.ccc.ddd with the IP endpoint of the VPN
# this can be taken from the ovpn file. 
iptables -A INPUT -s aaa.bbb.ccc.ddd -j ACCEPT
iptables -A OUTPUT -d aaa.bbb.ccc.ddd -j ACCEPT
# Allow in and out traffic to localhost
iptables -A INPUT -s 127.0.0.1 -j ACCEPT
iptables -A OUTPUT -d 127.0.0.1 -j ACCEPT
# Allow DHCP traffic
iptables -A OUTPUT -p UDP --dport 67:68 -j ACCEPT
# Allow in and out traffic to a tcp port from the host's LAN subnetwork 
# iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport XX -j ACCEPT
# iptables -A OUTPUT -d 10.0.0.0/24 -p tcp --sport XX -j ACCEPT
# Drop anything else
iptables -A INPUT -j DROP
iptables -A OUTPUT -j DROP
# Drop all IPv6 traffic
ip6tables -A OUTPUT -j DROP
ip6tables -A INPUT -j DROP
ip6tables -A FORWARD -j DROP

We need to give this script execution permissions, and in order to have it executed before anything else, we are going to tie its execution to the moment the container's main interface comes up.

To do that we open the container's /etc/network/interfaces file.
There we locate the main interface and we append to its configuration the line:

post-up /root/firewall-setup.sh

Now, every time the interface is brought up, the script will execute and we are guaranteed to only have internet access through the VPN.
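
For reference, assuming the container's main interface is eth0 and it is configured via DHCP (the default for the Ubuntu template), the resulting stanza in /etc/network/interfaces would look roughly like this:

auto eth0
iface eth0 inet dhcp
    post-up /root/firewall-setup.sh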

Setting up the SOCKS5 proxy

Now that the VPN service has been installed, we can move forward and install the SOCKS5 proxy to share that VPN connection with other machines on the LAN.

The first thing to do will be to install the SOCKS5 server in the container.
The one that I decided to try is dante.
We can install it through apt-get in Ubuntu:

$ sudo apt-get install dante-server

We can now proceed to configure it. We will need to do two things to make it work:

  • Set up the configuration to make dante handle both traffic going to the internet (using the VPN connection) and traffic directed to localhost (to access the services provided by the container).
  • As dante requires us to state the allowed outgoing IPs and interfaces, and these need to be valid when it starts, we will be forced to start dante when the VPN is up and not just at boot.

Configuring dante

I based my configuration on the minimal configuration given on their page.

This is what my /etc/danted.conf looks like:

# $Id: sockd.conf,v 1.43 2005/12/26 16:35:26 michaels Exp $
#
# A sample danted.conf
#
#
# The configfile is divided into three parts; 
#    1) serversettings
#    2) rules
#    3) routes
#
# The recommended order is:
#   Serversettings:
#               logoutput
#               internal
#               external
#               method
#               clientmethod
#               users
#               compatibility
#               extension
#               connecttimeout
#               iotimeout
#  srchost
#
#  Rules:
# client block/pass
#  from to
#  libwrap
#  log
#
#     block/pass
#  from to
#  method
#  command
#  libwrap
#  log
#  protocol
#  proxyprotocol
#
#  Routes: 

logoutput: /var/log/dante-server.log

# Using interface name instead of the address.
internal: eth0 port = 1080

# External configuration: use both interfaces tun0 and lo. Use the correct
# one based on the routing table.
external: tun0
external: lo
external.rotation: route

# list over acceptable methods, order of preference.
# A method not set here will never be selected.
#
# If the method field is not set in a rule, the global
# method is filled in for that rule.
#

# methods for socks-rules.
method: none

# methods for client-rules. No auth, this can be changed to make it more
# restrictive.
clientmethod: none

#
# An important section, pay attention.
#

# when doing something that can require privilege, it will use the
# userid:
#user.privileged: root

# when running as usual, it will use the unprivileged userid of:
user.notprivileged: nobody

# If you compiled with libwrap support, what userid should it use
# when executing your libwrap commands?  "libwrap".
user.libwrap: nobody

# the "client" rules.  All our clients come from the net 10.0.0.0/24.
#

client pass {
        from: 10.0.0.0/24 to: 0.0.0.0/0
        log: error # connect disconnect
}

# the rules controlling what clients are allowed what requests
#

# Generic pass statement - bind/outgoing traffic
pass {  
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bind connect udpassociate
        log: error # connect disconnect iooperation
}

# Generic pass statement for incoming connections/packets
pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bindreply udpreply
        log: error # connect disconnect iooperation
}

Let's dissect the configuration file part by part:

# Using interface name instead of the address.
internal: eth0 port = 1080

# External configuration: use both interfaces tun0 and lo. Use the correct
# one based on the routing table.
external: tun0
external: lo
external.rotation: route

With the internal directive we state which IPs or interfaces, and which ports, we are going to bind to in order to wait for incoming connections.
That is, the endpoints where we will be providing the service.
In this particular case we are going to bind to the IP(s) associated with the eth0 interface, on port tcp/1080.

With the external directive we indicate which IPs or interfaces the outgoing connections made by the proxy server are allowed to originate from.
If the IP or interface needed to reach a certain destination is not listed here, the proxy server will fail to establish the connection.
As we want the outgoing connections to use the VPN we add the tun0 interface, and as we also want to be able to reach local services we add the lo interface.

By default, dante will use the first external directive as its outgoing source, but we want to use both, each for a different case.
By setting external.rotation to route, dante will pick the interface to use based on the container's routing table.
Thus, connections directed to 127.0.0.1 will use the lo interface while the rest will use the tun0 interface.

# methods for socks-rules.
method: none

# methods for client-rules. No auth, this can be changed to make it more
# restrictive.
clientmethod: none

Here we list the possible authentication methods that can be used in the client and socks rules.
We set both to none as we are not going to use any type of authentication with the SOCKS proxy server.

# the "client" rules.  All our clients come from the net 10.0.0.0/24.
#

client pass {
        from: 10.0.0.0/24 to: 0.0.0.0/0
        log: error # connect disconnect
}

Here we state which clients we accept connections from based on their IP and where they want to establish a connection to.
In our case, we want to accept clients coming from the LAN without any restriction on their destination IP.

# Generic pass statement - bind/outgoing traffic
pass {  
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bind connect udpassociate
        log: error # connect disconnect iooperation
}

# Generic pass statement for incoming connections/packets
pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bindreply udpreply
        log: error # connect disconnect iooperation
}

We could apply restrictions to the actions the SOCKS server is allowed to perform based on source and/or destination, but in this case we don't want to restrict the server's actions in any way.

Starting up dante

By default, the dante service starts on boot. The problem lies in the fact that dante requires the interfaces used in the internal and external directives to exist when it starts. If dante starts before the VPN connection has been established, then tun0 won't exist and dante will fail to start up.

To solve this issue, we are going to remove dante from the startup services and make the VPN start dante after it connects.

In order to remove dante from the startup services we issue the following command:

$ sudo update-rc.d danted disable

And to start dante when the VPN finishes connecting, we are going to use the OpenVPN up script we created earlier in this post.
We append the following command to it:

service danted start

For the sake of clarity and consistency we are going to stop dante if the VPN connection gets closed.
For that we append to the OpenVPN's down script:

service danted stop

This way we can guarantee that the tun0 interface will exist by the time dante starts.
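
After these additions, the up script from the OpenVPN section would look roughly like this (the down script gets the matching route del and service danted stop lines):

#!/bin/bash
# Keep the host's LAN reachable outside the VPN
route add -net 10.0.0.0 netmask 255.255.255.0 gw 10.0.1.1
# Start the SOCKS5 proxy now that tun0 exists
service danted start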

Letting traffic flow from the LAN to dante

Now that we have dante ready to be used we need to be able to reach it from the LAN. This requires two things:

  1. Allow traffic from the LAN directed to dante's port to pass through the firewall, and
  2. forward a port from the host to the container's tcp/1080, where dante is listening, since the container is NATted.

To allow traffic from the LAN to dante's port we need to modify the firewall-setup.sh script in the container and add the following lines under the section # Allow in and out traffic to a tcp port from the host's LAN subnetwork:

iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 1080 -j ACCEPT
iptables -A OUTPUT -d 10.0.0.0/24 -p tcp --sport 1080 -j ACCEPT

In order to forward the port we are going to use iptables on the host.
For that, we need to create an interface post-up script just like the one we created to set up the firewall rules in the container, but this time on the host.

That script should have the following:

#!/bin/bash

iptables -F
iptables -F -t nat
iptables -F -t mangle
iptables -X

iptables -P INPUT DROP
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -P POSTROUTING ACCEPT
iptables -t nat -P PREROUTING ACCEPT

iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 1080 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

iptables -t nat -A POSTROUTING -s 10.0.1.0/24 ! -d 10.0.1.0/24 -j MASQUERADE
iptables -t nat -A PREROUTING -p tcp --dport 1080 -j DNAT --to 10.0.1.10:1080

This script will harden the host by allowing only incoming connections to port 22 (for SSH) and port 1080 (the one we want to forward), but it will also:

  1. NAT the 10.0.1.0/24 network allowing it to reach the internet (POSTROUTING rule)
  2. Port forward port tcp/1080 of the host to port tcp/1080 on the container.

If you need to forward more ports, just append a new PREROUTING rule adjusting the ports.
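
For example, forwarding a hypothetical tcp/8080 to the same container would need one extra line like the following (plus the corresponding INPUT rule on the host and the LAN rules in the container's firewall script):

iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to 10.0.1.10:8080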

Conclusion

After all these steps I have a functional VPN endpoint connected 24/7. The container is really lightweight and runs smoothly even on that old hardware. I just hope I have remembered every step and not missed anything important.

I can use the endpoint from any machine on the LAN through the SOCKS5 proxy, either by adjusting the browser's proxy settings or by using the proxychains utility.
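
A quick way to test it from another machine on the LAN is to ask an IP-echo service for the public IP through the proxy; it should answer with the VPN's address. Assuming the netbook is reachable at 10.0.0.5 (adjust to your setup):

$ curl --socks5-hostname 10.0.0.5:1080 http://ifconfig.me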

I have also set up a couple of daemons in the container that use the VPN, and I am able to manage them through their web interfaces by using the SOCKS5 server.

Note: one thing I would like to add is that I find it important to override the default DNS servers you might get and replace them with Google's or OpenDNS's servers.
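
A simple way to do that inside the container, assuming the stock resolvconf setup of Ubuntu 14.04, is to pin the desired nameservers (Google's public DNS in this sketch) and regenerate resolv.conf:

$ echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
$ echo "nameserver 8.8.4.4" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
$ sudo resolvconf -u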
