Pi-hole on OpenBSD

The problem

I don’t like ads for the following reasons:

  1. They are obnoxious
  2. They are a security risk
  3. They waste bandwidth (and my internet connection is often fragile)
  4. They collect data and violate my privacy

Some options

  1. Use a DNS ad blocker on the network, such as a Pi-hole
  2. Use a browser-based application firewall such as uBlock Origin or uMatrix
  3. Use a DNS ad blocker such as the Pi-hole on the host machine, in a VM, or in a Docker container

I have done all three of these things (sometimes in combination), and while Raymond Hill’s uBlock Origin and uMatrix are excellent, they only work in the browser they are installed in, and they do nothing to prevent telemetry requests from the operating system or from other applications.

While a network-level Pi-hole is excellent on a network that you control, it does not help when you are using a network somewhere else. An alternative is to provision a VPS with a VPN such as Algo or Streisand plus some sort of DNS ad blocker, but it is nice to be able to collect statistics about blocked domains locally. Running everything locally also means that your DNS requests can be sent using DNSCrypt, a great security feature made all the easier by jedisct1’s dnscrypt-proxy.

The solution

OpenBSD has been making significant improvements to its native virtualization platform, vmm. Running an Alpine Linux guest with a static IP address is easy to set up using interface { switch "switch_name" } networking, meaning that networking will persist across reboots for both the guest VM and the OpenBSD host. We need Linux because we are going to install the official Pi-hole Docker container, and Docker requires Linux kernel features such as namespaces and control groups. The final setup will be an Alpine Linux virtual machine running a Pi-hole in a Docker container under the vmm hypervisor on an OpenBSD host, with dnscrypt-proxy running as a daemon on the host. The host will resolve its DNS queries through the virtual machine, benefiting from encrypted DNS and system-wide ad blocking.
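Concretely, a DNS query from the host will take the following path (the addresses are the ones configured later in this article):

application on the OpenBSD host
    |  plain DNS
    v
Pi-hole in Docker on the Alpine VM (10.0.0.3:53), which drops ad and tracker domains
    |  plain DNS over the virtual switch
    v
dnscrypt-proxy on the OpenBSD host (10.0.0.1:53), which encrypts everything else
    |  DNSCrypt / DNS-over-HTTPS
    v
upstream resolver (Cloudflare in this example)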

Step 1

Install OpenBSD from a snapshot. Many improvements have been steadily making their way into vmm in -current, and while OpenBSD 6.4 was released on October 18, 2018, it’s nice to benefit from the security mitigations that the OpenBSD developers are constantly adding and refining. Just make sure to read the docs before updating.

Next we want to get encrypted DNS working on OpenBSD. We can install dnscrypt-proxy from packages and configure it to encrypt all of our DNS traffic:

$ doas pkg_add -v dnscrypt-proxy

We need to edit /etc/dnscrypt-proxy.toml with the relevant options, making sure that the daemon listens on the gateway address for the Alpine VM. The setup below is very secure, but adds extra overhead by negotiating new cryptographic keys for each DNS query (dnscrypt_ephemeral_keys). The default configuration file is very well commented, so we can adjust the parameters as necessary.

server_names = ['cloudflare']
listen_addresses = ['127.0.0.1:53', '10.0.0.1:53', '[::1]:53']
max_clients = 250
user_name = '_dnscrypt-proxy'

ipv4_servers = true
ipv6_servers = false
dnscrypt_servers = true
doh_servers = true

require_dnssec = true
require_nolog = true
require_nofilter = true

force_tcp = false
# proxy = "socks5://127.0.0.1:9050" # only if routing through a local SOCKS proxy such as Tor

timeout = 2500
keepalive = 30

log_level = 2
log_file = '/var/log/dnscrypt-proxy.log'
use_syslog = false

cert_refresh_delay = 240
dnscrypt_ephemeral_keys = true
tls_disable_session_tickets = true

fallback_resolver = '9.9.9.9:53'
ignore_system_dns = false

Start and enable the daemon:

$ doas rcctl start dnscrypt_proxy &&
    doas rcctl enable dnscrypt_proxy
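
Before pointing anything else at it, we can check that the proxy answers queries (dig is available in the OpenBSD base system):

$ dig openbsd.org @127.0.0.1

If the query times out, the log file configured above (/var/log/dnscrypt-proxy.log) is the first place to look.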

Update the OpenBSD /etc/resolv.conf file to take advantage of encrypted DNS:

$ cat <<_EOF | doas tee /etc/resolv.conf
nameserver 127.0.0.1
lookup file bind
_EOF

We are assuming bridged networking with a virtual Ethernet interface for the Alpine virtual machine in this case. This requires that IP forwarding is enabled, and we want that to persist across reboots.

$ doas sysctl net.inet.ip.forwarding=1 &&
    echo "net.inet.ip.forwarding=1" | doas tee -a /etc/sysctl.conf

We need to configure a virtual switch for the network. As usual, it helps to read the official OpenBSD documentation, but the basic setup is to create a bridge device and a vether interface. We want these to persist across reboots, so we use hostname.if(5) configuration files. First, we create the virtual Ethernet interface; in this case, we’ll call it vether0:

$ echo 'inet 10.0.0.1 255.255.255.0' | doas tee -a /etc/hostname.vether0 &&
    doas sh /etc/netstart vether0

Then we create an Ethernet bridge called bridge0:

$ echo 'add vether0' | doas tee -a /etc/hostname.bridge0 &&
    doas sh /etc/netstart bridge0
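
At this point both interfaces should be up, with vether0 listed as a member of the bridge:

$ ifconfig vether0
$ ifconfig bridge0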

Next we need to edit our pf.conf to allow our packets to get to and from the virtual machine. The simplest addition to the default rules is the following, although more specific rules are certainly possible:

match out on egress from vether0:network to any nat-to (egress)
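
Reload the ruleset so the new rule takes effect:

$ doas pfctl -f /etc/pf.conf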

Now we configure the switch in our /etc/vm.conf. It is possible to set this up manually, but we need this file for persistence anyway. We can name the switch whatever we want, in this case alpine_switch. At the top of the file add something such as:

switch "alpine_switch" {
    interface bridge0
}

Download an Alpine Linux ISO. Choose the “Standard” or “Extended” option, as the “Virtual” kernel seems to be having some difficulty with the serial console at the moment.
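
For example (the mirror and version here are illustrative; grab the current release from the Alpine downloads page):

$ ftp https://dl-cdn.alpinelinux.org/alpine/v3.8/releases/x86_64/alpine-standard-3.8.1-x86_64.iso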

Create a virtual disk for the alpine installation:

$ vmctl create alpine.img -s 6G

6GB should be plenty of space. After completing this process, the Alpine VM shows the following disk usage:

alpine:~# df -h | fold -sw 80
Filesystem                Size      Used Available Use% Mounted on
devtmpfs                 10.0M         0     10.0M   0% /dev
shm                    1001.1M         0   1001.1M   0% /dev/shm
/dev/vg0/lv_root          7.2G      1.4G      5.4G  21% /
tmpfs                   200.2M    172.0K    200.1M   0% /run
/dev/vda1                92.8M     32.1M     53.7M  37% /boot
cgroup_root              10.0M         0     10.0M   0% /sys/fs/cgroup
/dev/vg0/lv_root          7.2G      1.4G      5.4G  21% /var/lib/docker
overlay                   7.2G      1.4G      5.4G  21% 
/var/lib/docker/overlay2/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxx/merged
shm                      64.0M         0     64.0M   0% 
/var/lib/docker/containers/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxx/mounts/shm

Boot the Alpine ISO in vmm to install Alpine to disk:

$ doas vmctl start alpine -c \
    -r "/path/to/alpine.iso" \
    -d "/path/to/alpine.img" \
    -m 2048M \
    -n "alpine_switch"

Install Alpine to the virtual disk using the serial console, and enable SSH.

$ doas vmctl console alpine
Connected to /dev/ttyp0 (speed 115200)

Welcome to Alpine Linux 3.8
Kernel 4.14.79-0-vanilla on an x86_64 (/dev/ttyS0)

localhost login: root
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

localhost:~# setup-alpine

Follow the prompts from the setup-alpine script, making sure to set up the network, change the root password, and pick an installation mirror. Install and enable openssh when prompted.

An example network configuration is below.

Enter system hostname (short form, e.g. 'foo') [alpine]:
Available interfaces are: eth0.
Enter '?' for help on bridges, bonding and vlans.
Which one do you want to initialize? (or '?' or 'done') [eth0]
Ip address for eth0? (or 'dhcp', 'none', '?') [dhcp] 10.0.0.3/24
Gateway? (or 'none') [10.0.0.1]
Configuration for eth0:
  type=static
  address=10.0.0.3
  netmask=255.255.255.0
  gateway=10.0.0.1
Do you want to do any manual network configuration? [no] 
DNS domain name? (e.g 'bar.com') [] 
DNS nameserver(s)? [] 10.0.0.1

Choose the virtual disk when prompted, and then pick either sys for a plain disk installation or lvmsys to set up LVM.

Available disks are:
  vda   (6.4 GB 0x0b5d )
Which disk(s) would you like to use? (or '?' for help or 'none') [none] vda
The following disk is selected:
  vda   (6.4 GB 0x0b5d )
How would you like to use it? ('sys', 'data', 'lvm' or '?' for help) [?] lvmsys
WARNING: The following disk(s) will be erased:
  vda   (6.4 GB 0x0b5d )
WARNING: Erase the above disk(s) and continue? [y/N]: y
Creating file systems...
  Physical volume "/dev/vda2" successfully created.
  Logical volume "lv_swap" created.
  Logical volume "lv_root" created.
 * service lvm added to runlevel boot
Installing system on /dev/vg0/lv_root:
/mnt/boot is device /dev/vda1
100% ############################################
==> initramfs: creating /boot/initramfs-virt
/boot is device /dev/vda1

Installation is complete. Please reboot.
alpine:~# poweroff

Edit /etc/vm.conf on OpenBSD with the options required to set up your virtual machine automatically on boot:

vm "alpine" {
	cdrom "/path/to/alpine.iso"
	disk "/path/to/alpine.img"
	owner <user>
	memory 2G
	interface { switch "alpine_switch" }
	enable # this is the default / change to "disable" to not start the vm at boot
}

Enable vmd to start at boot, then start the daemon to boot the newly installed Alpine VM:

$ doas rcctl enable vmd && doas rcctl start vmd
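
We can check that the VM came up:

$ doas vmctl status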

Connect to the serial console again to install an SSH key on the Alpine VM (an example follows below). Also set up the apk repositories to use HTTPS, enable the community and testing repositories, and update the system:

alpine:~# cat >/etc/apk/repositories<<_EOF && apk upgrade -U
https://alpine.mirror.wearetriple.com/edge/main
https://alpine.mirror.wearetriple.com/edge/community
https://alpine.mirror.wearetriple.com/edge/testing
_EOF
fetch https://alpine.mirror.wearetriple.com/edge/main/x86_64/APKINDEX.tar.gz
fetch https://alpine.mirror.wearetriple.com/edge/community/x86_64/APKINDEX.tar.gz
fetch https://alpine.mirror.wearetriple.com/edge/testing/x86_64/APKINDEX.tar.gz
OK: 290 MiB in 67 packages
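
For the SSH key, one minimal approach is to append the public key from the OpenBSD host to root’s authorized_keys (the key below is a placeholder; paste your own):

alpine:~# mkdir -p ~/.ssh && chmod 700 ~/.ssh
alpine:~# echo 'ssh-ed25519 AAAA... user@openbsd' >> ~/.ssh/authorized_keys
alpine:~# chmod 600 ~/.ssh/authorized_keys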

Install, enable, and start Docker (along with whatever other nice-to-haves) on Alpine:

alpine:~# apk add docker tmux iproute2 tcpdump tshark wireguard-tools &&
    rc-update add docker &&
    /etc/init.d/docker start
(1/18) Installing libmnl (1.0.4-r0)
(2/18) Installing jansson (2.11-r0)
(3/18) Installing libnftnl-libs (1.1.1-r0)
(4/18) Installing iptables (1.6.2-r0)
(5/18) Installing libltdl (2.4.6-r5)
(6/18) Installing libseccomp (2.3.3-r1)
(7/18) Installing docker (18.06.1-r0)
Executing docker-18.06.1-r0.pre-install
(8/18) Installing docker-openrc (18.06.1-r0)
(9/18) Installing libelf (0.8.13-r3)
(10/18) Installing iproute2 (4.19.0-r0)
Executing iproute2-4.19.0-r0.post-install
(11/18) Installing libpcap (1.8.1-r1)
(12/18) Installing tcpdump (4.9.2-r4)
(13/18) Installing ncurses-terminfo-base (6.1_p20180818-r1)
(14/18) Installing ncurses-terminfo (6.1_p20180818-r1)
(15/18) Installing libevent (2.1.8-r6)
(16/18) Installing ncurses-libs (6.1_p20180818-r1)
(17/18) Installing tmux (2.7-r0)
(18/18) Installing wireguard-tools (0.0.20181018-r0)
Executing busybox-1.29.3-r2.trigger
OK: 293 MiB in 71 packages
 * service docker added to runlevel default
 * Starting docker ...                                                    [ ok ]

Pull the Pi-hole Docker image onto the VM:

alpine:~# docker pull pihole/pihole
latest: Pulling from pihole/pihole
f17d81b4b692: Pull complete 
f173a7e32ba0: Pull complete 
789a21c8d73f: Pull complete 
18b9c4527d4c: Pull complete 
fb59b1419096: Pull complete 
1579ff407b87: Pull complete 
a177c6f65516: Pull complete 
5e9feae54ea7: Pull complete 
Digest: sha256:1f0e73d50ef5d824f24f90ccf71a4039ecd23aa18d9b6a329f2e6f78d407e859
Status: Downloaded newer image for pihole/pihole:latest

We need a directory on the Alpine VM that we can map into the Pi-hole Docker container to hold its persistent configuration. An obvious choice is /etc/pihole:

alpine:~# mkdir /etc/pihole

Create a script with the appropriate parameters for launching the Pi-hole Docker container. We can put it in /usr/local/bin/launch_pihole or so. A good explanation of all of the options and environment variables is in the pi-hole/docker-pi-hole GitHub repository.

#!/bin/sh

# persistent Pi-hole configuration, bind-mounted into the container below
DOCKER_CONFIGS="/etc/pihole"
# the static address of this Alpine VM
IP="10.0.0.3"

docker run -d \
    --name pihole \
    -p 53:53/tcp -p 53:53/udp \
    -p 80:80 \
    -p 443:443 \
    -v "${DOCKER_CONFIGS}/pihole/:/etc/pihole/" \
    -v "${DOCKER_CONFIGS}/dnsmasq.d/:/etc/dnsmasq.d/" \
    -e ServerIP="${IP}" \
    -e WEBPASSWORD= \
    --restart=unless-stopped \
    --dns=127.0.0.1 \
    -e DNS1=10.0.0.1 \
    -e DNS2=no \
    -e IPv6=False \
    pihole/pihole:latest

We have disabled IPv6 and the password for the web admin interface, and the Pi-hole’s built-in DHCP server is unreachable since port 67 is not published. Additionally, we are setting the upstream DNS server (DNS1) to the instance of dnscrypt-proxy running on the OpenBSD host. Now we want to bring up this new ad blocker:

alpine:~# chmod +x /usr/local/bin/launch_pihole && launch_pihole

Now we can check if it’s running:

alpine:~# docker ps --format 'table {{.Image}}\t{{.Names}}\t{{.Status}}' &&
    docker ps --format  'table {{.Ports}}'
IMAGE                  NAMES               STATUS
pihole/pihole:latest   pihole              Up 2 minutes (healthy)
PORTS
0.0.0.0:53->53/tcp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:53->53/udp, 67/udp
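
We can also confirm that the container answers DNS queries locally; the Pi-hole serves the name pi.hole for itself:

alpine:~# nslookup pi.hole 127.0.0.1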

Next we want to check if the web interface is working from our OpenBSD host:

$ chrome --enable-unveil "10.0.0.3/admin"

Finally, we must change the DNS configuration on OpenBSD so that we benefit from the Pi-hole. We can edit /etc/dhclient.conf so that it overrides the DNS servers advertised by various WiFi access points. Keep in mind that this may cause connectivity issues in certain instances, such as captive portals that rely on their own DNS.

$ echo 'supersede domain-name-servers 10.0.0.3;' | doas tee -a /etc/dhclient.conf &&
    doas sh /etc/netstart

Now we can verify that we are using Cloudflare’s DNS servers, that we are blocking ads without a browser extension, and that we are benefiting from DNSSEC.
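
A few quick checks from the OpenBSD host, assuming dig: doubleclick.net stands in for any domain on the blocklists, and sigok/sigfail.verteiltesysteme.net are public DNSSEC test records. Cloudflare’s own https://1.1.1.1/help page will also confirm whether their resolver is in use.

# a blocklisted domain should resolve to 0.0.0.0
# (or to the Pi-hole's own address, depending on the blocking mode)
$ dig +short doubleclick.net

# a correctly signed test record should resolve normally
$ dig +short sigok.verteiltesysteme.net

# a record with a deliberately broken signature should be rejected with SERVFAIL
$ dig sigfail.verteiltesysteme.net | grep status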