homelab history

in the beginning

I’m going to draw a semi-arbitrary line to demarcate the beginning of my homelab. It’s really a continuation of my long practice of “tinkering with computers”, but I suppose it’s more specific than that. For me, it’s a place to practice, test, and explore technology in support of my software engineering career. Sometimes AV and home projects sneak in there, but the lab itself exists so I can learn and practice for work.

The earliest version is probably the 2011 Mac Mini and the Drobo. That was pretty successful! I ran Plex for the first time on that box; it ran Time Machine server for the other Macs in the house and did OS and App Store caching. That was the tail end of Mac OS X Server (it moved from a dedicated OS installation to an app you installed), so I never got super into LDAP, but I think I did play with that Wiki service. This stage of my homelab wound down as Drobo imploded (or, more accurately, began to implode - I got off the Drobo train before the 5N2 launched, somewhere around when the original company spun the storage business off). The Drobo 5N was pretty underpowered as a NAS, but it was a fun way to dip my toes into all things network attached storage.

That 2011 Mac Mini soldiered on as a utility server (running Mac-specific stuff) until I finally replaced it with the M1 Mac Mini in 2021. An AirPort Extreme was the single networking device in my apartment for most of the 2011-2014ish period. I swapped to the Ubiquiti EdgeRouter and whatever AP was current right around 2014; it was very cool to be able to split routing, switching, and WiFi into separately configurable/maintainable/upgradable devices. I was picking up IT certs around this time (Network+ and CCNA first, then RHCSA and Linux+ as my focus moved from networking to system administration), so my home setup started to get more complicated.

I started switching (lol) over to the UniFi line when we moved out of an apartment and into our first house in 2016. They briefly offered hardware discounts on early access products, so I got a 16-port 10 gig switch half off - it was years before 10 gig switches finally reached that price at retail, and that switch served as the core of my network for the better part of 10 years. I ran fiber between the basement and attic of that house, spreading access points and ethernet drops wherever I might need them. I’m still using the UniFi line for routing, switching, and wireless access points.

When I upgraded my gaming PC, sometime around 2017, I put Ubuntu Server on the old one and chucked a bunch of drives in it. I was working with an Ubuntu derivative at work after many years of Red Hat and CentOS, and our primary storage tech for my project was ZFS. So that box ran Ubuntu Server and ZoL, acting as a fileserver and running Docker containers and libvirt for VMs. I moved the Plex instance over to it (it had an Intel iGPU for hardware acceleration).

the modern era

I made a big move in 2020 to Proxmox and FreeNAS. I like my storage boxes to just do storage, so I wasn’t worried about the instability and uncertainty around the FreeNAS-to-TrueNAS transition (nor was I bothered by the BSD-to-Linux move from CORE to SCALE). I do my virtualization in a three-node Proxmox cluster: two beefier nodes that run multiple VMs, and a smaller Intel NUC that primarily runs Plex (and any tiebreakers, like a third Kubernetes control plane node). They’re all custom built, mostly on the Ryzen desktop platform, which gives (in my opinion) great performance per watt. I’ve run up against the PCIe lane limitations a few times, but given that all of these machines are on 24/7, it’s really important to me that they don’t guzzle power like server-grade chips tend to.

I’ve been using Ansible to wrangle the lab since 2017ish, although I took two detours. First, I moved everything to self-hosted GitOps via drone.io CI/CD flows in 2022. That was fine, but Gitea/Forgejo moving to support GitHub Actions made drone.io way less appealing, so I let that die when I dumped Gitea for Forgejo in 2024. I took a brief run at converting to Puppet (we use Puppet at my day job, so I was interested in learning more about it from first principles), but I ultimately didn’t think the juice was worth the squeeze (and that project appears to be getting killed by the folks at Perforce). I’m back to a nice mixture of my old method and my GitOps flows - I’m automating everything with Ansible, invoked via actions in my Forgejo instance. I’ll probably write more about the details in my state of the homelab, but I’ve really enjoyed this method.

The basement of the house we lived in from 2016-2023 had physical constraints that meant I couldn’t fit a full 42U rack. See home rack rebuild plan for the final(ish) configuration of the lab in that space. When we moved in 2023, the new basement had enough headroom for a full 42U rack; I moved all of my lab stuff into that single rack. I again ran fiber from the basement to the attic, letting me drop cable down for more access points and ethernet runs on the second floor, and this time I also ran four strands of single-mode fiber to my office, giving me tons of plumbed connectivity back to the server rack.