Hello! On my server, which runs only Lemmy, I don’t understand why disk space keeps filling up. It grows by about 1GB every day, so it risks hitting the limit before long.

It is not the images’ fault, because there is a size limit for uploads and they are also converted to .webp.

The docker-compose file is the one from Ansible 0.18.2, with the logging limits already in it (max-size 50m, max-file 4).
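For reference, those logging limits correspond roughly to the standard Docker json-file options below. This is a sketch of the general shape, not the exact Ansible template:

```yaml
services:
  lemmy:
    logging:
      driver: json-file
      options:
        max-size: "50m"   # rotate each log file at 50MB
        max-file: "4"     # keep at most 4 rotated files per container
```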

What could it be? Is there anything in particular that I can check?

Thanks!

28% - 40G (3 July)
29% - 42G (4 July)
30% - 43G (5 July)
31% - 44G (6 July)
36% - 51G (10 July)
37% - 52G (11 July)
37% - 53G (12 July)
39% - 55G (13 July)
39% - 56G (14 July)
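A quick triage sketch to narrow down which directory is actually growing. The paths are common defaults for a Docker-based host, not taken from this thread; adjust to your layout:

```shell
# Hypothetical triage for a Docker-based Lemmy host; paths are common
# defaults, not specific to this setup.
df -h /                                      # overall filesystem usage
du -sh /var/lib/docker 2>/dev/null || true   # images, volumes, json-file logs
du -sh /var/log 2>/dev/null || true          # system logs
docker system df 2>/dev/null || true         # per-category Docker usage, if installed
```

Comparing these numbers day over day should point at whichever tree accounts for the ~1GB/day growth.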
  • hitagi (ani.social)
    30 · 1 year ago

    Might want to check out this issue. Honestly, I’m not sure what exactly the activity table consists of or what it does but it’s been eating through everyone’s disk space pretty fast.

    • Muddybulldog
      8 · 1 year ago

      The activity table is every message that is sent or received between your instance and the rest of the fediverse: posts, comments, votes, admin actions, etc.
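To confirm whether the activity table really is the culprit, you can ask Postgres which tables are largest. A sketch, assuming the default service name (`postgres`) and database/user name (`lemmy`) from the Ansible compose file; adjust to your setup:

```shell
# Sketch: list the ten largest tables in the Lemmy database.
# Service, user, and database names are assumed defaults.
docker compose exec postgres psql -U lemmy -d lemmy -c "
  SELECT relname,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;" 2>/dev/null || echo "run this on the Lemmy host"
```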

    • @[email protected]
      4 · 1 year ago

      Extra config options always result in more complexity, so I would strongly prefer to change the hardcoded pruning interval instead.

      Why would that be the case?

      • hitagi (ani.social)
        2 · 1 year ago

        I’m not too sure either. Most people running their own Lemmy instance probably want more config options to fit their needs. I like how much I can configure with pictrs straight from the docker-compose file, for example.

  • @[email protected]
    16 · 1 year ago

    It warms my heart to see lemmy instance owners learning about the nuances of db administration 🥰

    • @[email protected]
      8 · 1 year ago

      I don’t think you should be enjoying the fact that there are some problems that could realistically cause a large portion of Lemmy instances to become unsustainable. We should be working towards a way that we can ensure the Lemmy ecosystem thrives.

      • @[email protected]
        17 · edited · 1 year ago

        I mean… it was kind of a joke. At the same time, if someone is hosting a database on their own hardware, it is important to understand when and how the database actually releases disk.

        That said, I completely agree that this is an issue that does need to be addressed.

    • Salamander
      1 · 1 year ago

      At this point it might actually be worth adding it to my CV 😂

  • @WanderA
    10 · 1 year ago

    You can delete old entries from the table. The space won’t be released back to the filesystem automatically, but you won’t have to worry about it again until the table has grown back by the same amount that was freed.
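A sketch of such a deletion. The table and column names (`activity`, `published`) match Lemmy 0.18.x but should be verified against your schema; the service and database names are assumed defaults, and the three-month cutoff is an arbitrary example:

```shell
# Sketch: drop activity rows older than ~3 months.  Verify table and
# column names against your schema first; names here are assumptions.
docker compose exec postgres psql -U lemmy -d lemmy -c "
  DELETE FROM activity
  WHERE published < now() - interval '3 months';" 2>/dev/null \
  || echo "run this on the Lemmy host"
```

As the comment says, a plain DELETE only marks the space as reusable inside Postgres; shrinking the files on disk requires a VACUUM FULL.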

    • Kit Sorens
      5 · 1 year ago

      I’m sure it couldn’t be difficult to do a rolling purge to keep the file at a fixed size?

        • @WanderA
          2 · 1 year ago

          You’d need to take your site down for a while, since Lemmy needs write access to that table to avoid duplicates. But yeah, once you’ve done a VACUUM FULL you could find a way to trim old entries each day.
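A rolling purge could look something like the script below, run daily from cron. All names here (the script name, table and column names, service and database names, the three-month cutoff) are illustrative assumptions; the periodic DELETE plus autovacuum keeps the table from growing further, while only an occasional VACUUM FULL (which locks the table) returns already-allocated space to the filesystem:

```shell
# Sketch of a daily rolling purge.  Install the script under
# /etc/cron.daily/ (or a systemd timer) on the Lemmy host.
cat > lemmy-trim-activity <<'EOF'
#!/bin/sh
# Trim federation activity older than 3 months (names are assumed defaults).
docker compose exec -T postgres psql -U lemmy -d lemmy \
  -c "DELETE FROM activity WHERE published < now() - interval '3 months';"
EOF
chmod +x lemmy-trim-activity
```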

  • chiisana
    8 · 1 year ago

    Bear in mind that posts and comments from communities your users are subscribed to flow into your instance not as references but as copies. So all those “seeding” scripts are terrible ideas: they bring in content you don’t care about and fill up space for the heck of it. If you’re hosting a private instance, you can unsubscribe from things that don’t interest you, thereby slowing down the accumulation of irrelevant content that’s just wasting space.

    • @[email protected]OP
      3 · 1 year ago

      Yes, I had considered that, but since ours is a moderate instance rather than a giant one (about a thousand subscribers), filling 1GB every day seemed excessive.

      • @[email protected]
        7 · 1 year ago

        • The growth is not about user count… not directly, anyway. Rather, it’s about the number and activity of subscribed communities. When your users sub to big, highly active meme communities on lemmy.world, the post activity on world determines your storage requirements. I don’t really know, but I could imagine that a 1k-user instance might have 80% of the federation storage that a 5k-user instance has: 1k users is enough to sub most big communities, whereas the next 4k users “mostly” sub the same big communities plus a few low-traffic niche ones. So the next 4k users cause much less federated storage load than the first 1k did.
        • But for comparison: a month ago, the largest Lemmy instance in the world had just over a thousand active users. I’m not sure 1k is as small as you think it is.
  • @[email protected]
    6 · edited · 1 year ago

    Back in the early 2000s, Usenet servers could store posts in a fixed-size database with the oldest posts expiring as new posts came in. This approach would mean that you can’t index everything forever, but it does allow you to provide your users with current posts without having infinite storage space.

  • RoundSparrow
    4 · edited · 1 year ago

    > What could it be? Is there anything in particular that I can check?

    lemmy_server creates a ton of system logs. Look at usage under the /var/log/ paths.

    > The docker-compose file is the one from Ansible 0.18.2 with the limits for logging already in it (max-size 50m, max-file 4).

    I still suggest verifying the total size of the /var/log tree.
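That verification could look like this. A sketch; `journalctl` only exists on systemd-based distros:

```shell
# Total size of the /var/log tree, plus the largest entries inside it.
du -sh /var/log 2>/dev/null || true
du -sh /var/log/* 2>/dev/null | sort -hr | head -n 10 || true
# Size of the systemd journal specifically, where available.
journalctl --disk-usage 2>/dev/null || true
```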