Wildcard TLS for a self-hosted homelab — Cloudflare DNS-01, end to end
I spent a weekend giving every service in my homelab a real green padlock. Not self-signed-and-click-through-the-warning green. Real, browser-trusted, auto-renewing Let’s Encrypt green. One wildcard cert covers eight services across *.lab.securebytes.net, nothing is exposed publicly to make it work, and renewal happens via cron on its own.
Here’s the full walkthrough. If you’ve got services running internally and you’re tired of self-signed warnings, this is the path.
The shape of the solution
The standard ACME flow is HTTP-01: Let’s Encrypt hits your domain on port 80, you serve back a token, you get a cert. That doesn’t work for internal services because they’re not reachable from the public internet. You’d have to expose every service through a port forward, which defeats the point of internal-only.
The DNS-01 challenge sidesteps that. Instead of proving you control the server by serving a file, you prove you control the domain by writing a TXT record. If you control DNS for the domain, you can issue a cert for any subdomain — including *.something.example.com wildcards.
Cloudflare runs my DNS, so the path is:
- Get an API token from Cloudflare scoped to only edit DNS on this one zone
- Tell acme.sh (the ACME client) to use Cloudflare’s API for DNS-01
- Issue a wildcard cert for *.lab.securebytes.net
- Install the cert into nginx
- Set up auto-renewal
That’s the whole thing. The complexity is in the plumbing.
Step 1: the Cloudflare API token
Go to Cloudflare → My Profile → API Tokens → Create Token → Custom token. Set:
- Permissions: Zone → DNS → Edit
- Zone Resources: Include → Specific zone → securebytes.net
The instinct to give it broader permissions (“just in case”) is wrong. This token will sit on a machine in your homelab. If it leaks, the blast radius should be one zone, not your whole Cloudflare account.
Save the token to a secrets file with locked-down perms:
mkdir -p /root/.secrets
cat > /root/.secrets/cloudflare.ini <<EOF
CF_Token="your-token-here"
EOF
chmod 600 /root/.secrets/cloudflare.ini
Also save it in a password manager (I use Vaultwarden) so you have a recovery copy.
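Later steps read the token back out of that file, so a small helper keeps the read honest by refusing to run if the permissions have drifted. This is my own sketch, not part of acme.sh; the function name and the GNU stat flag are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical helper: read CF_Token from the ini, but refuse if the
# file is readable by anyone other than the owner.
load_cf_token() {
  local ini="$1"
  local mode
  mode=$(stat -c '%a' "$ini") || return 1   # GNU stat; BSD/macOS uses stat -f '%Lp'
  if [ "$mode" != "600" ]; then
    echo "refusing: $ini has mode $mode, expected 600" >&2
    return 1
  fi
  # Pull the value between the double quotes on the CF_Token line.
  grep '^CF_Token=' "$ini" | cut -d'"' -f2
}
```

Then any script that needs the token can do `CF_Token=$(load_cf_token /root/.secrets/cloudflare.ini)` and fail loudly instead of silently running with a world-readable secret.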
Step 2: install acme.sh
acme.sh is a single-file shell script that handles the ACME flow. No Python, no Go runtime, no Docker; just bash, which makes it perfect for a small LXC.
curl https://get.acme.sh | sh -s [email protected]
That installs to ~/.acme.sh/ and sets up a cron entry for auto-renewal.
Step 3: issue the wildcard cert
export CF_Token=$(grep CF_Token /root/.secrets/cloudflare.ini | cut -d'"' -f2)
~/.acme.sh/acme.sh --issue --dns dns_cf \
-d "lab.securebytes.net" \
-d "*.lab.securebytes.net" \
--server letsencrypt
The --dns dns_cf tells acme.sh to use the Cloudflare DNS plugin. The two -d flags request both the apex (lab.securebytes.net) and the wildcard (*.lab.securebytes.net) — the wildcard alone doesn’t cover the apex.
If everything’s right, you’ll see acme.sh write a TXT record, wait for propagation, validate, and pull down the cert. Two to three minutes. If it fails with a Cloudflare permission error, the token scope is wrong — recheck it.
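Before wiring up nginx, it’s worth confirming the cert really carries both names. One quick way is to print its SAN list with openssl (requires OpenSSL 1.1.1+; the example path below assumes acme.sh’s default ECC storage layout, which may differ on your install):

```shell
#!/usr/bin/env bash
# Print the subjectAltName entries of a PEM certificate.
cert_sans() {
  openssl x509 -in "$1" -noout -ext subjectAltName
}

# Example (path assumes acme.sh's default ECC layout):
#   cert_sans ~/.acme.sh/lab.securebytes.net_ecc/fullchain.cer
# You should see both DNS:lab.securebytes.net and DNS:*.lab.securebytes.net.
```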
Step 4: install the cert into nginx
mkdir -p /etc/nginx/certs
~/.acme.sh/acme.sh --install-cert -d "lab.securebytes.net" --ecc \
--key-file /etc/nginx/certs/lab.securebytes.net.key \
--fullchain-file /etc/nginx/certs/lab.securebytes.net.crt \
--reloadcmd "systemctl reload nginx"
The --reloadcmd is the key piece for renewal. Every time acme.sh renews the cert (every ~60 days), it’ll re-install it and reload nginx automatically. No manual step.
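For reference, the renewal is driven by the cron entry the installer added back in step 2. On my install it looks something like this (the minute is randomized per install, so yours will differ):

```
# crontab -l
56 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null
```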
In each nginx site config, point at the wildcard cert files:
ssl_certificate /etc/nginx/certs/lab.securebytes.net.crt;
ssl_certificate_key /etc/nginx/certs/lab.securebytes.net.key;
Same cert, different server_name per site. Eight sites, one cert, one renewal cycle.
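Put together, a per-service site block looks like the sketch below. The hostname, upstream address, and port are placeholders for your own services; only server_name and the proxy_pass target change between sites:

```nginx
server {
    listen 443 ssl;
    server_name proxmox.lab.securebytes.net;   # changes per service

    # Shared wildcard cert, identical in every site config
    ssl_certificate     /etc/nginx/certs/lab.securebytes.net.crt;
    ssl_certificate_key /etc/nginx/certs/lab.securebytes.net.key;

    location / {
        proxy_pass https://10.0.10.5:8006;     # placeholder upstream
        proxy_ssl_verify off;                  # upstream cert is self-signed
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```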
The gotchas
A few things bit me along the way that I wouldn’t have predicted:
pfSense DNS Rebind protection blocks hostname access by default. The first time I tried to hit proxmox.lab.securebytes.net from inside the network, pfSense quietly dropped it as a rebind attempt. Fix: System → Advanced → Admin Access → Alternate Hostnames, add your hostname. Also disable the HTTP_REFERER enforcement check in the same menu.
Pi-hole v6 doesn’t use custom.list anymore. Older guides will tell you to add custom DNS entries by editing /etc/pihole/custom.list. That’s gone in v6. The new way is pihole-FTL --config dns.hosts '[...]' — and the call replaces the entire list each time, so always send all your records together.
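Since the call replaces the whole list, it helps to keep every record in one place and rebuild the JSON array each time. A sketch (the helper function and the host/IP pairs are mine, not Pi-hole’s):

```shell
#!/usr/bin/env bash
# Keep every internal DNS record here; pihole-FTL replaces the full list,
# so a partial call would silently drop the records you left out.
records=(
  "10.0.10.5 proxmox.lab.securebytes.net"
  "10.0.10.6 vaultwarden.lab.securebytes.net"
)

# Build the JSON array pihole-FTL expects: ["ip host","ip host",...]
build_dns_hosts() {
  local out="[" sep="" r
  for r in "$@"; do
    out+="${sep}\"${r}\""
    sep=","
  done
  printf '%s]' "$out"
}

payload=$(build_dns_hosts "${records[@]}")
# Apply (uncomment on the Pi-hole host):
# pihole-FTL --config dns.hosts "$payload"
```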
Vaultwarden’s upstream is HTTPS, not HTTP. Most services proxy from http:// upstream. Vaultwarden runs HTTPS internally with a self-signed cert. The nginx config needs proxy_pass https://... and proxy_ssl_verify off (because the upstream cert is self-signed).
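The relevant part of the Vaultwarden site config, as a sketch (the upstream address and port are placeholders):

```nginx
location / {
    proxy_pass https://10.0.10.6:443;   # note https://, not http://
    proxy_ssl_verify off;               # upstream cert is self-signed
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
}
```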
UniFi needs proxy_ssl_server_name on. Without SNI, UniFi OS rejects the upstream connection. Add the directive alongside proxy_ssl_verify off.
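The UniFi upstream block, as a sketch (the address is a placeholder):

```nginx
location / {
    proxy_pass https://10.0.10.7:443;
    proxy_ssl_verify off;
    proxy_ssl_server_name on;   # send SNI upstream; UniFi OS rejects connections without it
    proxy_set_header Host $host;
}
```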
Chrome caches old/bad certs aggressively. If you swapped a self-signed cert for a real one, Chrome will sometimes keep showing the old “Not Secure” warning. Test in an incognito window first. If incognito is clean, hard refresh in the regular window or quit Chrome fully and reopen.
What I’d do differently
Two things, on the next iteration:
Move the Cloudflare token to a secrets manager instead of a file on disk. Vault or Bitwarden Secrets Manager, fetched at runtime. The current setup is fine for a homelab, but it’s not a habit I’d want to carry anywhere with real stakes.
Codify all of this in Ansible. Right now the proxy LXC is a snowflake — if I lose it, recreating it means running these commands again from memory or from notes. An Ansible playbook would make the rebuild a one-liner. That’s the next project.
Why this is worth doing
Self-signed certs with click-through warnings are one of those things that seem fine until you realize you’ve trained yourself to ignore browser security warnings. That’s a habit worth not building. Real certs internally also means the apps actually work properly — webhooks fire, OAuth flows complete, browsers don’t downgrade features that require secure contexts.
And once it works, it just keeps working. Renewal happens silently, services stay up, and the lab feels like real infrastructure instead of a science experiment. That alone makes the weekend worth it.