# Ansible
Infrastructure-as-code playbooks for provisioning Debian-based home servers.
## Overview
The `/ansible` directory contains standalone Ansible playbooks that configure Debian hosts from scratch. All playbooks target `localhost` with `become: true` — no inventory files, no `ansible.cfg`, no orchestration wrapper. Each playbook is self-contained and idempotent.
Two machine profiles exist, each in its own subdirectory:
| Directory | Host | OS | Notes |
|---|---|---|---|
| `sepia` | sepia.uitgeest.veenboer.xyz | Debian 13 (Trixie) | Current machine. Chint inverter, 2x4TB HDD RAID-ish |
| `server` | server (other host) | Debian 12 (Bookworm) | GoodWe inverter, multi-drive DAS6 array |
## Playbooks
All playbooks share the same structure: `hosts: localhost`, `become: true`, straightforward idempotent tasks.
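As a minimal sketch, that shared skeleton looks like the following (the task shown is illustrative, not one taken from the actual playbooks):

```yaml
# Common skeleton: every playbook in this repo targets the local
# machine and escalates to root for system configuration.
- hosts: localhost
  become: true
  tasks:
    # Illustrative task — real playbooks contain host-specific work.
    - name: Ensure SSH server is installed
      ansible.builtin.apt:
        name: openssh-server
        state: present
```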
### setup_base.yml
Base Debian OS setup:
- APT sources with contrib/non-free/non-free-firmware
- SSH server
- Locale (`en_US.UTF-8`)
- System users: `user` (uid 1000), `bram` (uid 1001), `rik` (uid 1002)
- Samba file shares
- dnsmasq (DHCP + DNS) with local network config
- Mount point directories
**sepia** — 3 Samba shares (black, scratch, music), dnsmasq on enp2s0, DHCP range 192.168.2.1–250, DNS forwarders 192.168.2.151 + public resolvers.

**server** — 7 Samba shares (helios, jupiter, mercury, neptune, nubes, scratch, luna), dnsmasq on enp3s0 (1 GbE), DHCP range 192.168.2.10–110, plus DHCP reservations for nas (192.168.2.200) and kratos (192.168.2.250).
### setup_fstab.yml
Filesystem table configuration:
- Creates all mount point directories
- Sets the `/etc/fstab` block for OS volumes, data volumes, and NFS bind mounts
**sepia** — LVM volumes: root, home (btrfs), opt, docker, swap on SSD. Data: 2x4TB HDD on LVM (btrfs subvolumes: data, backup, seafile) + scratch (ext4). NFS exports via bind mounts.

**server** — LVM volumes: root, home, docker, swap, opt, seafile on OS disk. Data: 4 btrfs volumes (helios, neptune, jupiter, mercury) + 1 misc btrfs (nubes, scratch). Luna (shared namespace) implemented via bind mounts across helios/neptune/mercury. NFS backup mount to nas at 192.168.2.200.
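A hedged sketch of how such an fstab block might be managed with `blockinfile` (the device paths, mount points, and mount options here are hypothetical placeholders, not taken from the playbooks):

```yaml
# Manage one marked block in /etc/fstab so repeated runs replace the
# block instead of appending duplicate entries (idempotent).
- name: Ensure data-volume entries in /etc/fstab
  ansible.builtin.blockinfile:
    path: /etc/fstab
    marker: "# {mark} ANSIBLE MANAGED: data volumes"
    block: |
      /dev/vgdata/data  /media/data    btrfs  subvol=data,noatime    0 0
      /dev/vgdata/data  /media/backup  btrfs  subvol=backup,noatime  0 0
```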
### setup_network.yml
Static network interface configuration.
**sepia** — enp2s0: 192.168.2.150 + alias 192.168.2.151.

**server** — enp3s0: 192.168.2.150 + aliases 192.168.2.151, 192.168.2.152.
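On sepia this could look roughly like the task below, assuming ifupdown-style `/etc/network/interfaces.d/` fragments (the gateway address and file path are assumptions; the addresses come from the description above):

```yaml
# Drop a static-interface fragment; ifupdown picks up files in
# /etc/network/interfaces.d/ via the default "source" directive.
- name: Configure static address with alias on enp2s0
  ansible.builtin.copy:
    dest: /etc/network/interfaces.d/enp2s0
    content: |
      auto enp2s0
      iface enp2s0 inet static
          address 192.168.2.150/24
          gateway 192.168.2.1
      # Alias address used by the local DNS forwarder
      auto enp2s0:1
      iface enp2s0:1 inet static
          address 192.168.2.151/24
```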
### setup_docker.yml
Docker engine installation from Docker's official APT repository:
- Adds Docker GPG key and APT repo
- Installs `docker-ce`, `docker-ce-cli`, `containerd.io`, `docker-buildx-plugin`, `docker-compose-plugin`
- Adds users (`bram`, `rik`, `user`) to the `docker` group
- Enables and starts the `docker` service
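The repository setup can be sketched with standard Ansible modules; the keyring path and repo line below follow Docker's documented Debian install flow but are assumptions about this playbook:

```yaml
# Fetch Docker's signing key and register the APT repo before
# installing the engine packages.
- name: Add Docker's official GPG key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/debian/gpg
    dest: /etc/apt/keyrings/docker.asc
    mode: "0644"

- name: Add Docker APT repository
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian {{ ansible_distribution_release }} stable"
    state: present

- name: Install Docker engine and plugins
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    update_cache: true
```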
### setup_systemd_services.yml
Custom systemd units:
**sepia:**
- `docker-dns-ad-blocker.service` — oneshot to bring up the DNS ad-blocker container
- `sysfs-chmod-powercap.service` — allows non-root access to powercap sensors
- `inverter.service` — runs `/opt/inverter/inverter.pl` for the Chint solar inverter (serial)
**server:**
- `das6-proxy.service` — SSH SOCKS proxy tunnel to DAS6
- `gw2pvo.service` — reads GoodWe inverter data and pushes to PVOutput.org
- `jupyter.service` — Jupyter Lab server
- `sysfs-chmod-powercap.service` — powercap permissions
- `ser2net.service` — serial-to-network proxy (for DAS6 UPS/serial console)
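As an illustration, deploying and enabling one such unit might look like this (the unit-file body is a hypothetical sketch; only the `ExecStart` path comes from the description above):

```yaml
# Install a unit file, then reload systemd and enable + start it.
- name: Install inverter unit file
  ansible.builtin.copy:
    dest: /etc/systemd/system/inverter.service
    content: |
      [Unit]
      Description=Chint solar inverter logger
      After=network.target

      [Service]
      ExecStart=/opt/inverter/inverter.pl
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target

- name: Enable and start inverter.service
  ansible.builtin.systemd:
    name: inverter.service
    enabled: true
    state: started
    daemon_reload: true
```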
### setup_ser2net.yml
ser2net serial-to-network proxy configuration.
**sepia** — FTDI DCSD USB UART at 115200 baud, exposed on TCP port 2001.

**server** — FTDI FT232R USB UART at 115200 baud, exposed on TCP port 2001.
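A sketch of the configuration task, assuming ser2net 4's YAML config format (the device path `/dev/ttyUSB0` is a placeholder; only the baud rate and TCP port come from the description above):

```yaml
# Write /etc/ser2net.yaml so the UART is reachable over TCP 2001.
- name: Configure ser2net serial-to-network proxy
  ansible.builtin.copy:
    dest: /etc/ser2net.yaml
    content: |
      connection: &uart
        accepter: tcp,2001
        connector: serialdev,/dev/ttyUSB0,115200n81,local
```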
### setup_crontab.yml
Root cron jobs for:
**sepia:**
- Btrfs snapshots on `/home` (retain 7 daily, 12 weekly)
- Btrfs snapshots on `/media/data` (retain 30 daily, 52 weekly)
- Btrfs snapshots on `/media/seafile` (retain 7 daily, 12 weekly)
- Midnight ownership fix on `/media/data/Music/`
**server:**
- Btrfs snapshots on helios, neptune, mercury, nubes (retain 7 daily, varying weekly retention)
- Midnight ownership fixes on nubes/downloads/, neptune/Video/Movies/, neptune/Video/Shows/
- Collectd data copy scripts (sepia, shuttle, kratos)
- Home Assistant database copy script
- Borg backups: helios (Saturday), neptune (Sunday)
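One of the snapshot jobs could be expressed with the `cron` module like so (the snapshot script path and schedule are hypothetical):

```yaml
# The "name" field is the marker the cron module uses to keep the
# entry idempotent across runs.
- name: Daily btrfs snapshot of /home
  ansible.builtin.cron:
    name: "btrfs snapshot /home (daily)"
    minute: "0"
    hour: "3"
    job: "/usr/local/bin/btrfs-snapshot /home daily 7"
```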
### install_software.yml
Extra APT packages beyond base install:
**sepia** (61 packages): System tools (htop, ncdu, powertop, s-tui, stress, smartmontools, lm-sensors, vnstat, fio, likwid, powerstat, iperf3), networking (net-tools, nmap, nethogs, tcpdump, traceroute, rfkill, bind9-dnsutils), storage/filesystem (sshfs, borgbackup, unison, mdadm, lvm2, btrfs-progs, ethtool, gdisk, parted, hdparm), build (build-essential, cmake-curses-gui, git), editors (vim, tmux), util (figlet, wget, zip, p7zip, firmware-realtek, rsync, sudo, apt-file, man, firmware-linux-free), inverter-specific (libappconfig-perl, libdevice-serialport-perl, rrdtool, php, php-curl, python3-serial, netcat-traditional). Also installs uv via astral.sh.

**server** (63 packages): Similar base + Python dev tools (python3-pip, python3-pandas, python3-pytest, python3-jupyterlab-server, black, yamllint, pre-commit), inverter (python3-termcolor, python3-paramiko, python3-zombie-telnetlib, python3-icecream, sqlite3).
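The installation itself reduces to a single `apt` task; a trimmed sketch (the list below shows only a few of the packages named above):

```yaml
# One apt call for the whole list is faster than one task per package
# and still reports a single changed/ok status.
- name: Install extra packages
  ansible.builtin.apt:
    name:
      - htop
      - smartmontools
      - borgbackup
      - btrfs-progs
      # ... remainder of the per-host package list
    state: present
```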
### setup_hermes.yml (sepia only)
Hermes Agent system dependencies:
- Runtime: `nodejs`, `npm` (Hermes TUI is a Node.js app)
- Tools: `ripgrep` (file searching), `ffmpeg` (media skills)
- Build: `build-essential`, `python3-dev`, `libffi-dev` (Python subagent native deps)
- Browser: `xvfb`, fonts (Playwright/browser tool)
- Also installs `uv` (astral.sh) as the Python package manager
Run this after `setup_base.yml` on any host that runs Hermes Agent.
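A sketch of the dependency install (the `uv` install path used in `creates:` for idempotency is an assumption):

```yaml
# System packages for the Hermes runtime, tools, and native builds.
- name: Install Hermes Agent system dependencies
  ansible.builtin.apt:
    name:
      - nodejs
      - npm
      - ripgrep
      - ffmpeg
      - build-essential
      - python3-dev
      - libffi-dev
      - xvfb
    state: present

# astral.sh installer; "creates" skips the task once uv exists
# (assumed install location).
- name: Install uv
  ansible.builtin.shell: curl -LsSf https://astral.sh/uv/install.sh | sh
  args:
    creates: /root/.local/bin/uv
```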
### setup_gw2pvo.yml (server only)
Configures `/etc/gw2pvo.cfg` for the GoodWe-to-PVOutput bridge.
## Quality Tooling
- `.yamllint` — custom config (2-space indent, no line-length limit, relaxed document-start/empty-lines)
- `.pre-commit-config.yaml` — yamllint + yamlfmt hooks (google/yamlfmt v0.17.2)
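A sketch of what the described `.yamllint` config might contain (rule names are from yamllint's documented rule set; "relaxed" is interpreted here as disabling or loosening those rules, which is an assumption):

```yaml
# Hypothetical reconstruction of the described settings.
extends: default
rules:
  indentation:
    spaces: 2
  line-length: disable
  document-start: disable
  empty-lines:
    max: 2
```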
## Usage
Run any playbook directly on the target host:
```shell
cd /ansible/<hostname>
ansible-playbook setup_base.yml
ansible-playbook setup_docker.yml
ansible-playbook install_software.yml
ansible-playbook setup_fstab.yml
ansible-playbook setup_network.yml
ansible-playbook setup_systemd_services.yml
ansible-playbook setup_crontab.yml
ansible-playbook setup_ser2net.yml
```
No `-i` inventory flag is needed — all playbooks run against `localhost`.
## Design Decisions
- **No inventory / no ansible.cfg** — each host has its own playbook directory with localhost-only playbooks. This avoids the complexity of a multi-host inventory while still keeping configuration version-controlled and idempotent.
- **Playbooks are standalone** — no `import_playbook` chain. Run them individually in dependency order (base → docker/network/fstab → services → crontab).
- **Per-host directories** — `sepia/` and `server/` hold slightly different configurations for different hardware. Shared logic could be extracted into roles but hasn't been needed yet.
- **No secrets in playbooks** — credentials (gw2pvo, etc.) are visible in the playbook content. This is a known weakness; these should be externalized to vault or `.env` files.