The virtual disk for my Lemmy instance filled up, which caused Lemmy to throw a lot of errors. I resized the disk and expanded the filesystem, but now the pictrs container is constantly restarting.
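For reference, the resize itself was roughly along these lines (device and partition names here are just examples; they will differ per setup):

growpart /dev/vda 1    # grow the partition to fill the enlarged virtual disk (from cloud-guest-utils)
resize2fs /dev/vda1    # grow the ext4 filesystem to fill the partition
df -h                  # confirm the extra space is actually visible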
root@Lemmy:/srv/lemmy# le
less lessecho lessfile lesskey lesspipe let letsencrypt lexgrog
root@Lemmy:/srv/lemmy# ls
leemyalone.org
root@Lemmy:/srv/lemmy# cd leemyalone.org/
root@Lemmy:/srv/lemmy/leemyalone.org# docker-compose ps
Name                        Command                          State        Ports
-------------------------------------------------------------------------------------------------------------------------
leemyaloneorg_lemmy-ui_1    docker-entrypoint.sh /bin/ ...   Up           1234/tcp
leemyaloneorg_lemmy_1       /app/lemmy                       Up
leemyaloneorg_pictrs_1      /sbin/tini -- /usr/local/b ...   Restarting
leemyaloneorg_postfix_1     /root/run                        Up           25/tcp
leemyaloneorg_postgres_1    docker-entrypoint.sh postgres    Up           5432/tcp
leemyaloneorg_proxy_1       /docker-entrypoint.sh ngin ...   Up           80/tcp, 0.0.0.0:3378->8536/tcp,:::3378->8536/tcp
Might this be related?
In some cases, pict-rs might crash and be unable to start again. The most common reason for this is the filesystem reached 100% and pict-rs could not write to the disk, but this could also happen if pict-rs is killed at an unfortunate time. If this occurs, the solution is to first get more disk for your server, and then look in the sled-repo directory for pict-rs. It’s likely that pict-rs created a zero-sized file called snap.somehash.generating. Delete that file and restart pict-rs.
https://git.asonix.dog/asonix/pict-rs#user-content-common-problems
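If that is what happened here, something like this should turn up the leftover file (host path assumed from the usual Lemmy docker-compose layout, where ./volumes/pictrs is mounted at /mnt inside the container):

find /srv/lemmy/leemyalone.org/volumes/pictrs/sled-repo -name 'snap.*.generating' -size 0   # zero-byte, half-written snapshot
# if one shows up: delete it, then docker-compose restart pictrs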
Have you checked the logs for the pictures container to see why it’s restarting?
Could it be permissions?
What do the logs say?
Can’t see the pictrs log because it never fully starts.
root@Lemmy:/srv/lemmy/leemyalone.org# docker-compose logs leemyaloneorg_pictures_1
ERROR: No such service: leemyaloneorg_pictures_1
root@Lemmy:/srv/lemmy/leemyalone.org#
If the pictrs container doesn’t start, check the Docker logs:
journalctl -fexu docker
It’ll typically tell you why a container isn’t starting, usually a broken bind mount.
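If journalctl doesn’t show anything useful, Docker can also report how the container’s last run ended (container name as shown by docker-compose ps above):

docker inspect --format 'exit={{.State.ExitCode}} err={{.State.Error}}' leemyaloneorg_pictrs_1   # exit code and error from the last attempt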
To prevent this from happening again, try migrating to an S3 backend; DigitalOcean have one that’s fixed-price and includes egress, so you can’t accidentally end up with a ridiculous bill one month!
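If you go that route, the pict-rs side is just a few environment variables on the pictrs service. Roughly this shape, but double-check the exact variable names against the pict-rs README; the endpoint, bucket, and keys below are placeholders:

PICTRS__STORE__TYPE=object_storage
PICTRS__STORE__ENDPOINT=https://ams3.digitaloceanspaces.com   # placeholder endpoint
PICTRS__STORE__BUCKET_NAME=my-pictrs-bucket                   # placeholder bucket
PICTRS__STORE__REGION=ams3
PICTRS__STORE__ACCESS_KEY=...
PICTRS__STORE__SECRET_KEY=...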
You can still see the logs using
docker logs <container_name>
To get the container name you can use
docker ps -a
It should list the pictrs container there. The container name is usually the last column of the output.
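Putting those two together, assuming only one container name matches “pictrs”, this tails its log even while it is restart-looping:

docker logs --tail 100 "$(docker ps -aqf name=pictrs)"   # last 100 log lines from the pictrs container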
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
2023-08-26T20:46:43.679371Z  WARN sled::pagecache::snapshot: corrupt snapshot file found, crc does not match expected
Error:
   0: Error in database
   1: Read corrupted data at file offset None backtrace ()

Location: src/repo/sled.rs:84

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ SPANTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   0: pict_rs::repo::sled::build with path="/mnt/sled-repo" cache_capacity=67108864 export_path="/mnt/exports"
      at src/repo/sled.rs:78
   1: pict_rs::repo::open with config=Sled(Sled { path: "/mnt/sled-repo", cache_capacity: 67108864, export_path: "/mnt/exports" })
      at src/repo.rs:464

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
2023-08-26T20:47:44.343239Z  WARN sled::pagecache::snapshot: corrupt snapshot file found, crc does not match expected
Error:
   0: Error in database
   1: Read corrupted data at file offset None backtrace ()

Location: src/repo/sled.rs:84

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ SPANTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   0: pict_rs::repo::sled::build with path="/mnt/sled-repo" cache_capacity=67108864 export_path="/mnt/exports"
      at src/repo/sled.rs:78
   1: pict_rs::repo::open with config=Sled(Sled { path: "/mnt/sled-repo", cache_capacity: 67108864, export_path: "/mnt/exports" })
      at src/repo.rs:464

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
root@Lemmy:~#
Seems like your pictrs database is corrupted.
Is there a way to reset the pictrs DB without affecting the posts, comments, and users DB?
The pictrs database is completely separate from the Lemmy database. If you want, you can just delete everything in the pictrs volume and start afresh. You will lose all images, though.
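A minimal sketch of that reset, assuming the stock docker-compose layout where the pictrs volume lives at ./volumes/pictrs:

docker-compose stop pictrs     # stop the restart loop first
rm -rf volumes/pictrs/*        # wipes all stored media and the sled metadata
docker-compose up -d pictrs    # pict-rs recreates an empty sled-repo on start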
There are only two local posts on my instance, so I’m not worried about losing those.
Will the pictrs images from subscribed communities on other instances be restored after a DB reset?
OK. I just deleted the pictrs folder from /srv/lemmy/leemyalone.org/volumes but I am still having the same issue.
root@Lemmy:/srv/lemmy/leemyalone.org/volumes# sudo rm -r pictrs pictrs_corrupt/
root@Lemmy:/srv/lemmy/leemyalone.org/volumes# LS
-bash: LS: command not found
root@Lemmy:/srv/lemmy/leemyalone.org/volumes# ls
lemmy-ui  postgres
root@Lemmy:/srv/lemmy/leemyalone.org/volumes# mkdir pictrs
root@Lemmy:/srv/lemmy/leemyalone.org/volumes# docker-compose stop && docker-compose rm && docker-compose up -d --build
Stopping leemyaloneorg_proxy_1    ... done
Stopping leemyaloneorg_lemmy-ui_1 ... done
Stopping leemyaloneorg_lemmy_1    ... done
Stopping leemyaloneorg_pictrs_1   ... done
Stopping leemyaloneorg_postfix_1  ... done
Stopping leemyaloneorg_postgres_1 ... done
Going to remove leemyaloneorg_proxy_1, leemyaloneorg_lemmy-ui_1, leemyaloneorg_lemmy_1, leemyaloneorg_pictrs_1, leemyaloneorg_postfix_1, leemyaloneorg_postgres_1
Are you sure? [yN] y
Removing leemyaloneorg_proxy_1    ... done
Removing leemyaloneorg_lemmy-ui_1 ... done
Removing leemyaloneorg_lemmy_1    ... done
Removing leemyaloneorg_pictrs_1   ... done
Removing leemyaloneorg_postfix_1  ... done
Removing leemyaloneorg_postgres_1 ... done
Creating leemyaloneorg_postgres_1 ... done
Creating leemyaloneorg_pictrs_1   ... done
Creating leemyaloneorg_postfix_1  ... done
Creating leemyaloneorg_lemmy_1    ... done
Creating leemyaloneorg_lemmy-ui_1 ... done
Creating leemyaloneorg_proxy_1    ... done
root@Lemmy:/srv/lemmy/leemyalone.org/volumes# docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS                         PORTS                                               NAMES
2e9a781c0136   nginx:1-alpine               "/docker-entrypoint.…"   15 seconds ago   Up 12 seconds                  80/tcp, 0.0.0.0:3378->8536/tcp, :::3378->8536/tcp   leemyaloneorg_proxy_1
cc68969ca802   dessalines/lemmy-ui:0.18.4   "docker-entrypoint.s…"   17 seconds ago   Up 14 seconds                  1234/tcp                                            leemyaloneorg_lemmy-ui_1
f47b59d3b801   dessalines/lemmy:0.18.4      "/app/lemmy"             20 seconds ago   Up 17 seconds                                                                      leemyaloneorg_lemmy_1
a6eda85afa9d   mwader/postfix-relay         "/root/run"              23 seconds ago   Up 20 seconds                  25/tcp                                              leemyaloneorg_postfix_1
fa066bad4327   asonix/pictrs:0.4.0          "/sbin/tini -- /usr/…"   23 seconds ago   Restarting (1) 3 seconds ago                                                       leemyaloneorg_pictrs_1
04fad001a73f   postgres:15-alpine           "docker-entrypoint.s…"   23 seconds ago   Up 20 seconds                  5432/tcp                                            leemyaloneorg_postgres_1
root@Lemmy:/srv/lemmy/leemyalone.org/volumes# ls
lemmy-ui  pictrs  postgres
root@Lemmy:/srv/lemmy/leemyalone.org/volumes# ls -lah
total 20K
drwxr-xr-x  5 root root 4.0K Aug 26 21:40 .
drwxr-xr-x  3 root root 4.0K Aug 26 20:41 ..
drwxr-xr-x  3 root root 4.0K Aug 17 08:31 lemmy-ui
drwxr-xr-x  2 root root 4.0K Aug 26 21:40 pictrs
drwx------ 19   70 root 4.0K Aug 26 21:41 postgres
root@Lemmy:/srv/lemmy/leemyalone.org/volumes#
You can try mounting a new folder as the pictrs volume. I assume your other data will be safe, since it is in the database.
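One thing to double-check when recreating the folder by hand: the pictrs image does not run as root, so a root-owned empty directory can itself keep the container crashing. A rough sketch of a clean re-mount (the 991:991 owner is what the standard Lemmy install docs use for the pictrs volume; the folder name is just an example):

mkdir -p volumes/pictrs-new                  # brand-new, empty directory
chown -R 991:991 volumes/pictrs-new          # let the pict-rs user write to it
# point the pictrs service's volume at ./volumes/pictrs-new:/mnt in docker-compose.yml
docker-compose up -d --force-recreate pictrs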