# SecureBytes Homelab
A two-node Proxmox cluster running real network and security infrastructure as a learning environment for hands-on platform engineering. Built and operated single-handedly with the constraint that every service runs the same way it would in a small enterprise — TLS everywhere, internal DNS, observability, and selective public exposure.
## What’s running
The cluster lives on a pair of Lenovo Tiny / SFF nodes — a P920 workstation as the heavy compute box and a ThinkCentre M710q as the lightweight node. Networking comes from a Netgate 2100 running pfSense Plus, a UniFi USW-Lite-8-PoE switch, and a UniFi U7 Pro AP, all managed through a self-hosted UniFi controller on the cluster itself.
On top of that, nine internal services run inside LXC containers and VMs:
- Vaultwarden — self-hosted password manager
- Pi-hole — internal DNS resolution and ad blocking
- Gitea — self-hosted Git (mirrored to public GitHub for sanitized content)
- nginx — central reverse proxy fronting everything (a minimal server-block sketch follows this list)
- Grafana — observability dashboards (internal + a separate public one)
- Security Onion — network IDS / SIEM
- Cisco Modeling Labs — network simulation for CCNP-level practice
- Uptime Kuma — uptime monitoring with a public status page at status.securebytes.net
- Buffalo NAS — shared storage
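
To make the reverse-proxy role concrete, here is a minimal sketch of one vhost as it might look on this setup; the backend address, port, and certificate paths are hypothetical placeholders, not the actual config.

```sh
# Hypothetical sketch: one nginx vhost terminating TLS with the
# wildcard cert (described in the TLS section below) and proxying to
# a backend. The backend IP:port and file paths are placeholders,
# not values from the real config.
cat > /etc/nginx/conf.d/grafana.lab.conf <<'EOF'
server {
    listen 443 ssl;
    server_name grafana.lab.securebytes.net;

    # Wildcard cert covering *.lab.securebytes.net
    ssl_certificate     /etc/nginx/ssl/lab.securebytes.net.fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/lab.securebytes.net.key.pem;

    location / {
        proxy_pass http://192.0.2.20:3000;  # placeholder backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -t && systemctl reload nginx  # validate config before reloading
```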
## The interesting parts
### Wildcard TLS via Cloudflare DNS-01
Every internal service gets a hostname under *.lab.securebytes.net and a green padlock. The cert is a Let’s Encrypt wildcard provisioned through Cloudflare’s DNS-01 challenge using acme.sh. Renewal is automated — no manual steps, no scrambling when something expires. The Cloudflare API token is scoped strictly to edit DNS on the securebytes.net zone, nothing else.
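
A sketch of what that flow looks like with acme.sh; the token value and install paths here are placeholders, and the real setup may differ in detail.

```sh
# Hypothetical sketch of the acme.sh DNS-01 flow. The token value and
# install paths are placeholders. acme.sh's Cloudflare hook (dns_cf)
# reads the scoped API token from the CF_Token environment variable.
export CF_Token="dns-edit-scoped-token"  # placeholder

# Issue a Let's Encrypt wildcard via Cloudflare DNS-01
acme.sh --issue --server letsencrypt --dns dns_cf \
  -d lab.securebytes.net -d '*.lab.securebytes.net'

# Install for nginx; acme.sh re-runs --reloadcmd on each automated renewal
acme.sh --install-cert -d lab.securebytes.net \
  --key-file       /etc/nginx/ssl/lab.securebytes.net.key.pem \
  --fullchain-file /etc/nginx/ssl/lab.securebytes.net.fullchain.pem \
  --reloadcmd      "systemctl reload nginx"
```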
### Selective public exposure via Cloudflare Tunnel
Two services are exposed publicly through a Cloudflare Tunnel — a public Grafana ops dashboard at ops.securebytes.net (gated by a custom PIN) and the Uptime Kuma status page at status.securebytes.net/status/lab (read-only, public). Admin paths on the Kuma instance are gated by Cloudflare Access with email-OTP. No port forwards, no public IP exposure on the Proxmox box itself — the tunnel daemon dials out to Cloudflare’s edge.
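
As a sketch, the tunnel’s ingress rules might look like this; the tunnel UUID and backend ports are placeholders, and the PIN gate and Cloudflare Access policies live on the Cloudflare side, not in this file.

```sh
# Hypothetical sketch of the cloudflared ingress config; the tunnel
# UUID and backend ports are placeholders. The daemon dials out to
# Cloudflare's edge, so no inbound port is ever opened.
cat > /etc/cloudflared/config.yml <<'EOF'
tunnel: 00000000-0000-0000-0000-000000000000
credentials-file: /etc/cloudflared/00000000-0000-0000-0000-000000000000.json
ingress:
  - hostname: ops.securebytes.net
    service: http://localhost:3000   # public Grafana dashboard
  - hostname: status.securebytes.net
    service: http://localhost:3001   # Uptime Kuma status page
  - service: http_status:404         # everything else gets a 404
EOF
cloudflared tunnel run
```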
### Internal DNS that actually works
Pi-hole v6 holds the records for every *.lab.securebytes.net hostname, pointing them all at the nginx reverse proxy. pfSense’s DNS Rebind protection had to be loosened with an Alternate Hostnames entry — a gotcha worth documenting since the default config silently breaks hostname access.
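
One way to express "every lab hostname points at the proxy" is a single dnsmasq wildcard rather than per-host records. A minimal sketch, assuming Pi-hole v6 is set to load custom dnsmasq files (misc.etc_dnsmasq_d in pihole.toml) and using a placeholder address for the proxy:

```sh
# Hypothetical sketch: a dnsmasq wildcard so every *.lab.securebytes.net
# name resolves to the nginx reverse proxy. Assumes Pi-hole v6 is
# configured to load /etc/dnsmasq.d (misc.etc_dnsmasq_d = true in
# pihole.toml); 192.0.2.10 is a placeholder, not the real proxy IP.
cat > /etc/dnsmasq.d/99-lab-wildcard.conf <<'EOF'
address=/lab.securebytes.net/192.0.2.10
EOF
pihole restartdns
```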
### Public-from-day-one repository discipline
The full operational reference (with internal IPs, hardware identifiers, and credentials) lives in private Gitea. The public version at github.com/otengg/securebytes-platform is sanitized — same architecture, runbooks, and design decisions, no operational secrets. After a May 2026 audit found internal IPs leaking through commit history of older repos, those repos were deleted and the dual-repo workflow was tightened. Trust comes from the discipline, not from never making the mistake.
## What I’m working on next
- Migrating service deployment from manual `pct create` commands to Ansible roles (a sketch of the current manual step follows this list)
- Proxmox Backup Server on the NAS with scheduled backups and tested restores
- Lynis hardening pass on every LXC, capturing before/after scores
- VLAN segmentation: management, lab, IoT, DMZ
- MFA on Proxmox / pfSense / UniFi / Vaultwarden / Gitea
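
For context on the first item, this is roughly the manual step being replaced; the VMID, template name, and resource values are placeholders, not the real ones.

```sh
# Hypothetical sketch of the manual provisioning an Ansible role would
# replace: creating and starting an unprivileged LXC container with pct.
# VMID, template, and resource values are placeholders.
pct create 110 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname uptime-kuma \
  --cores 1 --memory 512 --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 110
```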
## What this is, honestly
This isn’t production infrastructure with five-nines uptime requirements. It’s a learning lab where I get to design, build, and break things at small scale — the kind of hands-on practice that’s hard to come by inside a job role with strict change controls. The real value is in the failures: the DNS rebind issue, the IP collision when an address I thought was free turned out to belong to a host I’d forgotten about, the SSH host key mismatch after recreating a container, the audit that caught my own commit history leaking IPs. Every one of those is an artifact in the writeup folder.
## Stack