Status update July 4th

Just wanted to let you know where we are with Lemmy.world.

Issues

As you might have noticed, things still don’t work as desired… we see several issues:

Performance

  • Loading is mostly OK, but sometimes things take forever
  • We (and you) see many 502 errors, resulting in empty pages etc.
  • System load: the server sits at roughly 60% CPU usage and around 25 GB of RAM. (That is, if we restart Lemmy every 30 minutes; otherwise memory climbs to 100%.)

Bugs

  • Replying to a DM doesn’t seem to work. When you hit reply, you get a box containing the original message, which you can edit and save (but saving does nothing).
  • 2FA seems to be a problem for many people. It doesn’t always work as expected.

Troubleshooting

We have many people helping us with (site) moderation, sysadmin work, troubleshooting, advice etc. There are currently 25 people in our Discord, including admins of other servers. The Sysadmin channel has 8 people; we run troubleshooting sessions with them, and sometimes with others. One of the Lemmy devs, @[email protected], is also helping with the current issues.

So, all is not yet running as smoothly as we hoped, but with all this help we’ll surely get there! Also, thank you all for the donations; they help us afford the hardware and tools needed to keep Lemmy.world running!

      • donalonzo@lemmy.world

        Rust protects you from segfaulting and trying to access deallocated memory, but doesn’t protect you from just deciding to keep everything in memory. That’s a design choice. The original developers probably didn’t expect such a deluge of users.
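
        To illustrate: the sketch below (hypothetical, not Lemmy code) compiles without any unsafe and never violates memory safety, yet its memory use grows without bound. The two properties are independent:

            // Perfectly safe Rust: no segfaults, no use-after-free,
            // but nothing ever drains `pending`, so resident memory
            // grows until the OS kills the process.
            fn main() {
                let mut pending: Vec<String> = Vec::new();
                loop {
                    // Imagine each iteration is an incoming federation event.
                    pending.push(String::from("activity"));
                }
            }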

      • SomeOtherUsername@lemmynsfw.com

        I’m calling it - if there’s actually a memory leak in the Rust code, it’s gonna be the in-memory queues, because the DB’s IOPS can’t cope with the number of users.

        • SomeOtherUsername@lemmynsfw.com

          I think I found what eats the memory. DB IOPS aren’t the cause - it looks like the server doesn’t reply until all the database operations are done. The problem is the unbounded queue in the activitypub_federation crate, spawned when creating the ActivityQueue struct. The point is, this queue holds all the “activities” - events to be sent to federated instances. If, for whatever reason, the events aren’t delivered to all the federated servers, they are retried with exponential backoff for up to 2.5 days. If even a single federated instance is unreachable, all events remain in memory. For a large instance, this will eat up memory for every upvote/downvote, post or comment.
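
          A minimal sketch of that pattern - not the actual activitypub_federation code, just an illustration with assumed names and numbers - shows why one dead peer pins memory:

              use std::time::{Duration, Instant};
              use tokio::sync::mpsc;

              struct Activity {
                  payload: String, // serialized event, held in RAM while queued
              }

              // Hypothetical stand-in for an HTTP POST to a federated inbox.
              async fn try_deliver(_a: &Activity) -> bool {
                  false // an unreachable instance never succeeds
              }

              #[tokio::main]
              async fn main() {
                  // Unbounded: senders are never pushed back on, so a slow or
                  // unreachable receiver lets the queue grow without limit.
                  let (tx, mut rx) = mpsc::unbounded_channel::<Activity>();

                  tokio::spawn(async move {
                      while let Some(activity) = rx.recv().await {
                          // One retry task per activity; each task keeps its
                          // activity alive in memory for the whole retry window.
                          tokio::spawn(async move {
                              let deadline = Instant::now() + Duration::from_secs(216_000); // ~2.5 days
                              let mut delay = Duration::from_secs(60);
                              while Instant::now() < deadline {
                                  if try_deliver(&activity).await {
                                      return; // delivered; memory finally freed
                                  }
                                  tokio::time::sleep(delay).await; // exponential backoff
                                  delay *= 2;
                              }
                              // Only after ~2.5 days of failures is the activity dropped.
                          });
                      }
                  });

                  // Every upvote/downvote, post or comment enqueues more activities.
                  let _ = tx.send(Activity {
                      payload: String::from("{\"type\":\"Like\"}"),
                  });
              }

          A bounded channel would apply backpressure instead, but then one stalled peer could block the request path - hence the suggestion below to persist the queue.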

          Lemmy needs to figure out a scalable eventual-consistency algorithm. Most importantly, it needs to store the messages in the DB, not in memory.
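
          A minimal sketch of that idea - an “outbox” table that a worker drains - assuming a made-up schema and using rusqlite for brevity (Lemmy itself uses PostgreSQL, so this is only an illustration):

              use rusqlite::{params, Connection, Result};

              // Hypothetical delivery function; a real one would POST the payload.
              fn try_deliver(_inbox: &str, _payload: &str) -> bool {
                  false
              }

              fn main() -> Result<()> {
                  let conn = Connection::open_in_memory()?; // stand-in for Postgres

                  // Hypothetical outbox table: queued activities survive a
                  // process restart because they live in the DB, not in RAM.
                  conn.execute(
                      "CREATE TABLE activity_outbox (
                           id        INTEGER PRIMARY KEY,
                           inbox_url TEXT NOT NULL,
                           payload   TEXT NOT NULL,
                           attempts  INTEGER NOT NULL DEFAULT 0
                       )",
                      [],
                  )?;

                  // Enqueue: an upvote/post/comment writes one row per inbox.
                  conn.execute(
                      "INSERT INTO activity_outbox (inbox_url, payload) VALUES (?1, ?2)",
                      params!["https://example.org/inbox", "{\"type\":\"Like\"}"],
                  )?;

                  // Drain: a worker loads a small batch, tries delivery, then
                  // either deletes the row or bumps its attempt counter.
                  let mut stmt = conn.prepare(
                      "SELECT id, inbox_url, payload FROM activity_outbox
                       ORDER BY id LIMIT 100",
                  )?;
                  let batch: Vec<(i64, String, String)> = stmt
                      .query_map([], |row| Ok((row.get(0)?, row.get(1)?, row.get(2)?)))?
                      .collect::<Result<_>>()?;

                  for (id, inbox, payload) in batch {
                      if try_deliver(&inbox, &payload) {
                          conn.execute("DELETE FROM activity_outbox WHERE id = ?1", params![id])?;
                      } else {
                          conn.execute(
                              "UPDATE activity_outbox SET attempts = attempts + 1 WHERE id = ?1",
                              params![id],
                          )?;
                      }
                  }
                  Ok(())
              }

          A crashed or restarted process then picks up where it left off, and memory stays bounded by the batch size rather than by the backlog.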