• Nat@apollo.town
    1 year ago

    I’ve been surprised not to see this with any of the fediverse platforms I’ve browsed. Instead, they’re all using Docker Compose. Any idea why that is?

    • CoderKat@lemm.ee
      1 year ago

      K8s is amazing for big, complicated services. For small things, it quite honestly can be overcomplicated. If you’re running something massive, like, say, Spotify, then k8s will make things simpler (because the alternative for running such a massive and complicated service is… gross lol). That’s not to say that k8s can’t be used for something like Lemmy, just that it might not be worth the complexity.

      For the fediverse, I think a lot of the development is written for small, mostly monolithic single servers. K8s is meant for when you have an entire cluster running some service. You wouldn’t typically run a single server with k8s, but rather you’d have many “nodes” and you’d run many instances of your binary (“pods”) across those nodes for the redundancy.
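      For illustration, the “many pods across nodes” idea boils down to a Deployment manifest like this (a minimal sketch; the image name and port are assumptions, not Lemmy’s official deployment):

      ```yaml
      # Hypothetical example: ask k8s to keep 3 copies of a "lemmy"
      # container running, scheduled across whatever nodes exist.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: lemmy
      spec:
        replicas: 3            # the redundancy: 3 pods, ideally on different nodes
        selector:
          matchLabels:
            app: lemmy
        template:
          metadata:
            labels:
              app: lemmy
          spec:
            containers:
              - name: lemmy
                image: dessalines/lemmy:latest   # illustrative image tag
                ports:
                  - containerPort: 8536          # assumed app port
      ```

      If a node dies, k8s reschedules its pods elsewhere, which is exactly the property a single-server Compose setup can’t give you.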

      I’m not very familiar with the backends of fediverse servers or with Docker Compose, but I’m under the impression Compose is meant for a single host, and I’ve seen many Lemmy instances talk about their hosting as if they only have one physical server. That’s probably fine for a FOSS social media site run by hobbyists, but major commercial software would never want a single server. Heck, they wouldn’t even want to run servers in just one location. The big cloud providers all offer ways to run k8s clusters with nodes spread across multiple data centers, usually ones with isolated failure zones, all to maximize uptime. But that’s also expensive. For a big business, downtime means millions of dollars lost, so it’s a no-brainer. For Lemmy? As annoying as downtime is, users will live.

    • deejay4am@lemmy.world
      1 year ago

      k8s has a comparatively steep learning curve. With Docker you just install the Docker package and you’re off to the races. With k8s you need to know basically how Docker works, know how the layers it adds on top work, and define everything in YAML config files to get things up and running. The networking is complicated (but flexible), and the storage isn’t straightforward (it’s designed to work with large-scale solutions like S3 or Ceph, so setting it up even for local “folder” storage requires more moving parts). Even bootstrapping a new installation requires many steps to install all the pieces you need.

      Don’t get me wrong, it’s awesome, but if you don’t already know it, it doesn’t have many advantages over Docker for small installations, where Docker is very much “run docker-compose on this file you downloaded and the thing you want sets itself up”.
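      That “one file, one command” workflow looks roughly like this (a minimal sketch; the image names and credentials are placeholders, not an official Lemmy config):

      ```yaml
      # Hypothetical docker-compose.yml: everything on one host,
      # and a single `docker-compose up -d` brings it all up.
      version: "3"
      services:
        lemmy:
          image: dessalines/lemmy:latest   # illustrative image tag
          ports:
            - "8536:8536"                  # assumed app port
          depends_on:
            - postgres
        postgres:
          image: postgres:15
          environment:
            POSTGRES_PASSWORD: changeme    # placeholder credential
      ```

      Compare that to k8s, where the same two services would typically need Deployments, Services, a PersistentVolumeClaim for the database, and a cluster to run them on.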

      While there are tools like Helm or Portainer to assist you, you still have to understand it to make it work.