Hello
I’ve seen many videos and posts that mention using Proxmox in a homelab setup, and I’ve been wanting to try it.
I have two servers running in my homelab: an OrangePi 5 for HA AdGuard, Omada Controller, and Home Assistant, and a Dell Inspiron 3268 repurposed as a server. The latter is the one I’m considering for Proxmox.
What is the server config?
- Dell Inspiron 3268 desktop, Core i3-7100 with 16 GB DDR4.
- 3 SATA drives: a 250 GB Crucial MX500 for the system, and 2 KingSpec P3-1TB drives in a ZFS pool used for backups.
- PCIe-to-M.2 adapter with a KingSpec NX-2TB drive, also in a ZFS pool.
- 1 Gigabit Ethernet port (motherboard) and one 2.5 GbE NIC connected via PCIe v1.
What does the server currently run and what is it used for? (see outputs below)
- Primarily it’s a NAS / Media server
- Ubuntu server 23.10
- Samba server
- lighttpd (reverse proxy for ALL the containers except for Plex and flaresolverr)
- xrdp with XFCE desktop environment (server is headless)
- A Docker container running both an OpenVPN client and Deluge, for privacy
- Plex (docker)
- Flaresolverr/ jackett (docker)
- Radarr / sonarr / bazarr (docker)
- Portainer to manage containers (docker)
Why I’m considering Proxmox:
- Kernel updates require a full bare-metal restart
- Lack of a web-based monitoring tool for the server that doesn’t add a lot of overhead
- From time to time, I like to explore other distros or new appliances, and I can’t do that easily without disrupting the current setup
- You tell me… there may be benefits I’m missing…
I did install Proxmox, but I think I used it wrong or was unable to realize all the benefits:
- My media is on the data-pool/media dataset, but I was unable to share it with multiple LXCs
- All my Docker configs are mapped as volumes on the host and are stored on data-pool/apps. I would like to reuse them
- I was not sure if I should run an LXC container for each Docker container, or have a single LXC with everything (except xrdp/XFCE). I don’t know what the good practices would be…
- On networking, I wanted to make sure I can dedicate a NIC to a specific guest… but had challenges figuring out how to set up a NAT-like environment
- I was planning to have a Windows VM that I can start on demand, but stopped because of the challenges above
Given my opportunities and challenges, what would you all suggest? Keep running on bare metal as is, or change to Proxmox?
me@jupiter:~$ lsblk -o MODEL,SERIAL,SIZE,STATE --nodeps
MODEL            SERIAL         SIZE    STATE
                                73.9M
                                152.1M
                                40.9M
CT250MX500SSD1   2219E6302473   232.9G  running
P3-1TB           0010159006480  953.9G  running
P3-1TB           0010159006489  953.9G  running
NX-2TB 2280      0010174003295  1.9T    live
me@jupiter:~$ zfs list
NAME                    USED   AVAIL  REFER  MOUNTPOINT
backup-pool             291G   631G   26K    /backup-pool
backup-pool/computers   121G   631G   121G   /backup-pool/computers
backup-pool/documents   24K    631G   24K    /backup-pool/documents
backup-pool/photos      166G   631G   166G   /backup-pool/photos
backup-pool/servers     4.36G  631G   4.36G  /backup-pool/servers
data-pool               520G   1.29T  26K    /data-pool
data-pool/apps          6.90G  1.29T  6.90G  /data-pool/apps
data-pool/home          27.6G  1.29T  27.6G  /home
data-pool/media         486G   1.29T  486G   /data-pool/media
me@jupiter:~$ docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Size}}"
CONTAINER ID   IMAGE                                      SIZE
46ffbe41beee   ghcr.io/flaresolverr/flaresolverr:latest   16.1MB (virtual 618MB)
9c6eb995b729   lscr.io/linuxserver/jackett:latest         106MB (virtual 277MB)
ffe5ccd4aae1   portainer/portainer-ce:latest              0B (virtual 294MB)
b28afe33a106   lscr.io/linuxserver/radarr:latest          23.6kB (virtual 196MB)
0f7744daf9d6   lscr.io/linuxserver/sonarr:latest          35.6MB (virtual 335MB)
574c625933b1   binhex/arch-delugevpn:latest               16.5MB (virtual 1.29GB)
8a7f84e40f84   lscr.io/linuxserver/plex:latest            24.3kB (virtual 340MB)
37e8580600ef   lscr.io/linuxserver/bazarr:latest          23.2kB (virtual 422MB)
Picture of the rack… a very cheap 8U rack, only 350 mm deep. GPON and ER-605 at the top, an unloaded patch panel, TL-SG2210MP switch, Orange Pi 5, and the Dell server.
Well, you said it yourself: get Proxmox and take it forward from there.
I run multiple LXCs hosting Portainer, where I can manage all my Docker containers in one web UI. It’s amazing. You can share files with Samba or CIFS in unprivileged containers (issue 1). I use that for Plex because it’s easier to use the iGPU in an LXC than in a VM. In Portainer you can bind-mount your Docker volumes to existing folders in your LXC and then share them with Samba/CIFS/NFS (issue 2). Note that NFS mounting in unprivileged containers is not (easily) doable. Portainer fixes issue 3. I can’t help with issue 4, but aren’t NICs already hardware-bound?
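As a rough sketch of getting the host dataset into a container in the first place (the container ID and paths below are placeholders, not from your setup), on the Proxmox host it looks something like:
# bind-mount a host directory into container 101 (ID and paths are examples)
pct set 101 -mp0 /data-pool/media,mp=/mnt/media
# equivalent line in /etc/pve/lxc/101.conf:
#   mp0: /data-pool/media,mp=/mnt/media
# for an unprivileged container, remember the default UID/GID shift of 100000
# when fixing permissions on the host side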
You probably want to run all of that on bare metal in containers with Docker or Kubernetes.
Containers let you easily share resources between them, because they all share the same kernel. VMs are harder to share hardware resources with, as you’re finding out.
I was not sure if I should run an LXC container for each Docker container, or have a single LXC with everything (except xrdp/XFCE). I don’t know what the good practices would be…
LXC is a container. I don’t think you would want to run Docker inside LXC. That’s running a container inside a container. I’m a noob though.
Normally, you run one app per container, or one set of apps per container if they are closely related. You could run all the Plex suite apps inside a single LXC container and Windows alongside it in Proxmox. Or you could run each app inside its own LXC container.
Alternatively, you could run them all in individual Docker containers on bare metal Ubuntu, but not have the ability to install Windows or other OSes.
The Proxmox kernel is optimized with fixes specific to Proxmox scenarios. For example, it is currently impossible to do my setup https://gist.github.com/scyto/76e94832927a89d977ea989da157e9dc on Debian or Ubuntu without compiling your own kernel, which would then break Ceph… The Proxmox team do a lot of due diligence on their kernel, patches, etc. For your Docker scenario, consider using VMs as your Docker hosts. Lightweight VMs don’t add a lot of overhead. Something like this: https://gist.github.com/scyto/f4624361c4e8c3be2aad9b3f0073c7f9
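If it helps, spinning up a small Docker-host VM from the CLI looks roughly like this (the VM ID, name, bridge, storage, and sizes are made up for the example):
# create a lightweight VM to act as a Docker host (all values are placeholders)
qm create 200 --name docker-host --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci --scsi0 local-lvm:32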
I use Proxmox with Ceph and CephFS.
I have two Ceph pools: one with SSD/NVMe, and the other with slow-RPM HDDs.
The Ceph pool with SSDs holds the virtual disks for my VMs. I only install the OS on that drive… and any necessary programs… Docker, Portainer, etc. Then I mount my two CephFS filesystems into each VM.
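Inside a VM, mounting CephFS looks roughly like this (the monitor address, client name, and paths are placeholders):
# mount a CephFS filesystem inside the VM (needs ceph-common installed)
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret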
I don’t use LXC… they can’t be live-migrated between Proxmox nodes. I just install Ubuntu Server and Docker.
Since you mentioned a Windows VM… CephFS can be mounted in Windows as well… no need to set up Samba. No need to have data trapped in different virtual disks. Useful for mapping drives on laptops and desktops in your house too.
CephFS is my central storage that all servers and containers have access to. If you want a NAS GUI for users, just map your container to your existing files. I do this for Nextcloud. I also run containers like SFTPGo, Filebrowser, Syncthing, etc., all mapped to the two CephFS pools, so I can expose and share a single data pool via different methods and not have a single VM running Samba as a single point of failure.
CephFS and Proxmox support clustering, so you can grow or upgrade in the future. If you use Proxmox for the second server, Ceph and CephFS can extend the pool to it, and moving VMs is seamless because the storage is one big pool. Run some VMs on the new server and some on the old… with a single storage pool.
Ceph can grow while online and is fault-tolerant like RAID and ZFS.
Proxmox will also provide a web interface and console access to your VMs.
I group my Docker containers by function. So I’ll have one Ubuntu Docker server for the Starr apps… then another with Paperless, Nginx, the Ubiquiti controller, Filebrowser, SFTPGo… then another Ubuntu Docker host for Nextcloud, because it spins up like 10 containers on its own. Another for Plex and Jellyfin. Just trying to keep the chaos organized, and because I have more than one server… I can move one of the Docker servers to another Proxmox node in my cluster to manually balance CPU and compute.
Then when it’s time to upgrade, I’m not rebuilding everything…I’m just adding a small node with a small cost (small IT budget at home too 😂)
- My media is on the data-pool/media dataset, but I was unable to share it with multiple LXCs
I’ve never transitioned to Proxmox from your specific setup, but here’s what you could do in Proxmox: connect your pools to some kind of SATA controller, pass that controller through to a NAS virtual machine, share those drives out as NFS exports, and add those NFS exports as storage from the Proxmox web GUI. Then you can edit your LXC conf files to mount directories from the Proxmox host into the LXCs as needed.
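For reference, the CLI equivalents would be something along these lines (the storage ID, server IP, export path, and container ID are placeholders):
# add the NAS VM's NFS export as Proxmox storage
pvesm add nfs nas-media --server 192.168.1.20 --export /export/media --content images,backup
# then bind the mounted path into a container by adding a line like this to /etc/pve/lxc/<CTID>.conf:
#   mp0: /mnt/pve/nas-media,mp=/mnt/media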
- All my Docker configs are mapped as volumes on the host and are stored on data-pool/apps. I would like to reuse them
I have successfully migrated Docker environments from one host to another before simply by copying the folders and running docker compose up -d. Then again, sometimes it has not worked for me, but there’s only one way to find out.
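Roughly what that copy-and-restart approach looks like, assuming the configs live under /data-pool/apps with one compose folder per app (the "newhost" name and "radarr" subfolder are just examples):
# copy the config folders to the new host, preserving permissions and attributes
rsync -aAX /data-pool/apps/ newhost:/data-pool/apps/
# then bring each stack back up on the new host
ssh newhost 'cd /data-pool/apps/radarr && docker compose up -d'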
- I was not sure if I should run an LXC container for each Docker container, or have a single LXC with everything (except xrdp/XFCE). I don’t know what the good practices would be…
I don’t know what good practice is, but I have a couple of LXCs with 12+ Docker containers each and they work great for me 🤷♂️
Proxmox has a learning curve but I love it. It is so easy to spin up another VM to try out something new, and you don’t have to interrupt the services that you already have working well because of how isolated everything can be from each other. Need to reboot your Linux environment? You no longer have to take down your entire server, just the part of your server that you’re tinkering with.
Also, the UI makes it so easy to manage backups and snapshots; you can create templates of prebuilt images so that you aren’t starting from scratch each time, and easily clone or migrate your images to different nodes. It’s very powerful.
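If you ever want to script those same actions, the CLI equivalents look roughly like this (the guest IDs and storage names are placeholders):
vzdump 101 --mode snapshot --storage local   # back up guest 101 using a snapshot
qm template 9000                             # convert VM 9000 into a template
qm clone 9000 110 --name test-vm --full      # full clone of the template to a new VM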