Motivation

This is basically an evolution of my previous blog post "Setting up a VPN gateway in Ubuntu using LXC containers and OpenVPN".

I eventually had to upgrade my home server, and with that came re-installing everything I had, including my VPN node setup.

This prompted me to re-think my approach. Docker has become the ubiquitous containerization engine, and LXC has become something pretty obscure. I decided to look into how I could replicate the results I had obtained with LXC, but using Docker containers.

Also, since then I've been learning Ansible in order to keep my personal infrastructure tidy and reproducible. I had been delaying upgrading my home server primarily because of all the things I had installed and configured manually and didn't want to have to redo all over again.

Objective

I set my objective to be basically the same functionality as my previous node. That is, a node connected to an OpenVPN server, with services attached to it that are guaranteed to exit through the VPN and that are accessible only from my LAN. But this time using Docker as my containerization technology.

A second objective, just as important as the first, is to be able to set all this up using Ansible, so it can be replicated and updated in an easy fashion.

Shaping the Solution

The approach I took was to use a primary container for the OpenVPN client node. The rest of the services then run in separate containers, but sharing the network stack of the OpenVPN node.

There are two main reasons behind this approach: (1) it makes it easy for the other service containers to access the internet through the VPN, and (2) if something goes wrong with the OpenVPN container, the rest of the containers lose all connectivity, thus reducing the risk of exposure.

It also allows us to control the firewall in a single place, making sure we don't allow any traffic to the host other than VPN and LAN traffic.

All the setup is driven by an Ansible role; this allows parametrization and makes it easy to reproduce the steps until we get it right.

Implementation

OpenVPN Node

The first challenge was to find an image for the OpenVPN client container. Most OpenVPN images are meant to work as servers, not clients.

Fortunately, I stumbled upon dperson's OpenVPN Client docker image.
It had almost all the things I required. Apart from being an OpenVPN client, it comes with a way of setting up a restricted firewall, and it allows easy configuration for opening up both networks and ports. This is exactly what I needed to simplify everything.

The only thing I don't like about it is that I cannot easily give it an OpenVPN configuration file along with the authentication file to use. Based on what I saw in the ENTRYPOINT script, it might be possible by mounting those files in particular locations, but it seemed easier to bypass this by adding my own launcher script.

The way I decided to approach this was to create a folder on the host where I would place the openvpn.conf I get from my VPN provider, along with a login.info file that contains the credentials for the VPN (see the auth-user-pass option in the OpenVPN reference manual).
In that directory I also add a bash script that executes openvpn with the command-line arguments I want, bypassing the entrypoint's launch method.
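
In the end, the directory on the host looks something like this (using /opt/vpn-node as a hypothetical base directory; the playbook below copies the provider's configuration as config.ovpn):

```
/opt/vpn-node/
└── openvpn/
    ├── config.ovpn   # configuration obtained from the VPN provider
    ├── login.info    # credentials used by auth-user-pass
    └── launch.sh     # custom launcher script
```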

Then we mount that directory inside our container and instruct the container to delegate the launch of openvpn to our script.

Doing all of the above using Ansible is fairly straightforward. The playbook to do this looks something like this:

- name: Create base directory for OpenVPN node configs
  file:
    path: "{{ vpn_node__base_dir }}"
    owner: root
    group: root
    mode: '755'
    state: directory

- name: Create itemized directories
  file:
    path: "{{ vpn_node__base_dir }}/{{ item }}"
    owner: root
    group: root
    mode: '755'
    state: directory
  loop:
    - openvpn

- name: Copy OpenVPN configuration file
  copy:
    src: "{{ vpn_node__openvpn_config_file }}"
    dest: "{{ vpn_node__base_dir }}/openvpn/config.ovpn"
    owner: 'root'
    group: 'root'
    mode: '644'

- name: Generate login file
  template:
    src: files/login.info.j2
    dest: '{{ vpn_node__base_dir }}/openvpn/login.info'
    owner: 'root'
    group: 'root'
    mode: '600'

- name: Generate launch script
  template:
    src: files/openvpn-launch.j2
    dest: '{{ vpn_node__base_dir }}/openvpn/launch.sh'
    owner: 'root'
    group: 'root'
    mode: '700'

# Setup OpenVPN docker container
- name: Setup OpenVPN docker container
  docker_container:
    image: dperson/openvpn-client:{{ vpn_node__openvpn_container_version }}
    name: '{{ vpn_node__containers_base_name }}-openvpn'
    state: started
    restart_policy: unless-stopped
    networks_cli_compatible: yes
    networks: '{{ vpn_node__docker_network | default(omit) }}'
    capabilities:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    mounts:
      - source: "{{ vpn_node__base_dir }}/openvpn"
        target: /vpn/
        type: bind
        read_only: no
    command:
      - '-f'
      - '{{ vpn_node__openvpn_port }}'
      - '-r'
      - '{{ vpn_node__lan_cidr }}'
      - /vpn/launch.sh

login.info.j2 template:

{{ vpn_node__openvpn_username }}
{{ vpn_node__openvpn_password }}

openvpn-launch.j2 template:

#!/bin/bash

iptables -A OUTPUT -p tcp -m tcp --dport {{ vpn_node__openvpn_port }} -j ACCEPT
iptables -A OUTPUT -p udp -m udp --dport {{ vpn_node__openvpn_port }} -j ACCEPT
ip6tables -A OUTPUT -p tcp -m tcp --dport {{ vpn_node__openvpn_port }} -j ACCEPT
ip6tables -A OUTPUT -p udp -m udp --dport {{ vpn_node__openvpn_port }} -j ACCEPT

exec openvpn --config /vpn/config.ovpn --auth-user-pass /vpn/login.info

The key things to note here are:

  1. we need to give the container the NET_ADMIN capability and attach the /dev/net/tun device to it
  2. we mount the directory we created with the files on the /vpn directory
  3. by passing those arguments in the command, we are telling the entrypoint script to set up a restrictive firewall, to not route the vpn_node__lan_cidr subnetwork through the VPN, and to use the /vpn/launch.sh script rather than launching openvpn on its own

Now let's talk about the variables we are using:

  • vpn_node__base_dir: this is the directory where all the configuration and files related to our VPN gateway are going to be located. There's going to be a subdirectory per service, openvpn being one of them
  • vpn_node__openvpn_config_file: location of the VPN's configuration file from ansible's perspective
  • vpn_node__openvpn_username: username to use to authenticate against the VPN server
  • vpn_node__openvpn_password: password to use to authenticate against the VPN server
  • vpn_node__openvpn_port: port the VPN server is listening on. The default value is 1194
  • vpn_node__openvpn_container_version: the version of the image to use. Defaults to latest
  • vpn_node__containers_base_name: prefix to use for all the containers that will be launched by this playbook. It is useful to identify all related containers
  • vpn_node__docker_network: the network the container will be attached to. It is recommended to override this so the container is not attached to the default bridge network. To override it, the network to be used needs to be created before calling this role
  • vpn_node__lan_cidr: this variable is used to configure the container so that packets destined to that network are not sent through the VPN. This should be setup to the LAN subnetwork you are using. E.g. 10.0.10.0/24
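
To make the parameters concrete, here is a sketch of how the role could be invoked. The role name, host, and all values are illustrative, and the vault_* variables are assumed to be defined elsewhere (e.g. in an Ansible vault):

```yaml
- hosts: homeserver
  roles:
    - role: vpn_node
      vars:
        vpn_node__base_dir: /opt/vpn-node
        vpn_node__openvpn_config_file: files/provider/openvpn.conf
        vpn_node__openvpn_username: '{{ vault_vpn_username }}'
        vpn_node__openvpn_password: '{{ vault_vpn_password }}'
        vpn_node__openvpn_port: 1194
        vpn_node__containers_base_name: vpn-node
        vpn_node__docker_network:
          - name: vpn_net
        vpn_node__lan_cidr: 10.0.10.0/24
```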

Once this is applied we'll have a container connected to the VPN, but not much to use it for: unless you connect to it directly, it provides little value on its own. That takes us to the next step: setting up services that make this VPN gateway useful.

SOCKS5 Proxy

The first and most important service to install for a VPN gateway is a SOCKS5 proxy that can be used from the LAN and provides access to the internet through the VPN.

I wanted to keep using dante, as it has proven to work well in the time I've been using it.
The search for an image was even harder than for the OpenVPN client one, but I managed to find a decent one: vimagick/dante.
It has the simplicity I was looking for: the only thing it needs is to have its configuration file overridden by an appropriate mount.

A simple-yet-effective configuration file for this occasion would be something like this:

debug: 0
logoutput: stderr
internal: eth0 port = 1080
external: tun0
clientmethod: none
socksmethod: none
user.privileged: root
user.unprivileged: nobody

client pass {
    from: 0.0.0.0/0 port 1-65535 to: 0.0.0.0/0
    log: error
}

socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: error
}

The key parts are the internal and external configurations. They need to match the interfaces you want dante to use for listening for new connections and for establishing the proxied connections, respectively: tun0 is going to be the VPN interface, and eth0 should be the bridge interface in the container. The rest of the configuration is pretty self-explanatory. In my case I'm not applying any restrictions, as this will only be reachable from within my LAN and I don't want to bother configuring it in a more restrictive manner.

In order to integrate this into our VPN gateway, we need to launch the container using the OpenVPN container's network stack, remembering to (1) publish the port for the SOCKS5 proxy in the OpenVPN container and (2) configure the port forwarding in the OpenVPN container. The reason the port publishing needs to happen in the OpenVPN container and not in the danted container is that publishing is a network stack operation, and danted's container doesn't have a network stack of its own; it uses OpenVPN's.
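
For reference, the same two-container arrangement can be sketched with the plain docker CLI (container names, host paths, and the LAN subnet here are illustrative):

```shell
# OpenVPN container: owns the network stack and publishes the proxy port
docker run -d --name vpn-node-openvpn \
  --cap-add NET_ADMIN --device /dev/net/tun \
  --restart unless-stopped \
  -p 1080:1080/tcp \
  -v /opt/vpn-node/openvpn:/vpn \
  dperson/openvpn-client -f 1194 -r 10.0.10.0/24 -p 1080 /vpn/launch.sh

# dante container: no network stack of its own, it joins the OpenVPN one
docker run -d --name vpn-node-socks5 \
  --restart unless-stopped \
  --network container:vpn-node-openvpn \
  -v /opt/vpn-node/socks5-proxy/sockd.conf:/etc/sockd.conf:ro \
  vimagick/dante
```

Note that the dante container takes no -p flag at all; the port mapping lives entirely on the OpenVPN container.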

What needs to be done in the ansible playbook is something like:

- name: Create itemized directories
  ...
  loop:
    - openvpn
    - socks5-proxy

...

- name: Copy SOCKS5 proxy configuration file
  copy:
    src: "{{ vpn_node__socks5_proxy_config_file }}"
    dest: "{{ vpn_node__base_dir }}/socks5-proxy/sockd.conf"
    owner: 'root'
    group: 'root'
    mode: '644'

# Setup OpenVPN docker container
- name: Setup OpenVPN docker container
  docker_container:
    published_ports:
      - '1080:1080/tcp'
    command:
      - '-f'
      - '{{ vpn_node__openvpn_port }}'
      - '-r'
      - '{{ vpn_node__lan_cidr }}'
      - '-p'
      - '1080'
      - /vpn/launch.sh

- name: Setup SOCKS5 Proxy docker container
  docker_container:
    user: root
    image: vimagick/dante:latest
    name: '{{ vpn_node__containers_base_name }}-socks5'
    state: started
    restart_policy: unless-stopped
    networks_cli_compatible: yes
    network_mode: 'container:{{ vpn_node__containers_base_name }}-openvpn'
    mounts:
      - source: "{{ vpn_node__base_dir }}/socks5-proxy/sockd.conf"
        target: /etc/sockd.conf
        type: bind
        read_only: yes

Once these changes have been applied to the host, voilà! There's a VPN gateway that can be used by any application through the SOCKS proxy.

To use it, just point your SOCKS5 configuration to the address of the host running the Docker containers, port 1080.
Remember that you need to be in the LAN specified by vpn_node__lan_cidr in order to have access.
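
For example, curl supports SOCKS5 natively. Assuming the Docker host sits at 192.168.1.10 (an illustrative address), the following should report the VPN's exit IP rather than your ISP's:

```shell
# --socks5-hostname makes curl resolve DNS through the proxy as well,
# so name lookups also happen on the VPN side of the tunnel
curl --socks5-hostname 192.168.1.10:1080 https://ifconfig.me
```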

The same methodology that I used for adding the SOCKS5 proxy can be used to add any other service.
The only thing needed is launching a container with the desired service and using the OpenVPN's container network stack.
If the service provides a frontend or API that needs to be accessible from the LAN, the corresponding port needs to be both published and forwarded in the OpenVPN container.
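
As an illustrative sketch only, a hypothetical extra service exposing port 8080 would be attached with a task like the following (the image name and port are made up):

```yaml
- name: Setup extra service docker container
  docker_container:
    image: example/some-service:latest    # hypothetical image
    name: '{{ vpn_node__containers_base_name }}-some-service'
    state: started
    restart_policy: unless-stopped
    network_mode: 'container:{{ vpn_node__containers_base_name }}-openvpn'
```

The OpenVPN container's task would additionally need 8080:8080/tcp in its published_ports and a '-p', '8080' pair appended to its command list, exactly as was done for the SOCKS5 proxy.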

Conclusion

Using all these bits and pieces I built an Ansible role that sets up a VPN node: the OpenVPN client container along with all the services I wanted to set up alongside it.
This allows me to regenerate everything from scratch in a really easy and convenient manner.

I haven't made that Ansible role public yet: I don't know how useful people would find it, and I would need to put in some more effort to make it more flexible than it currently is, as I embedded my specific requirements into it.