The biggest gotcha with 0.19.4 is the required upgrades to postgres and pict-rs. Postgres requires a full DB dump, delete and recreate. Pict-rs requires its own backend DB migration, which can take quite a bit of time.
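For anyone planning the same jump, the flow is roughly: dump the old cluster, wipe it, start the new postgres, restore, then let pict-rs run its own migration on first start (that’s the slow part). A minimal sketch below, assuming a docker-compose deployment with a service named `postgres`, DB user `lemmy` and a data volume called `lemmy_postgres_data` — all assumptions, adjust to your own compose setup:

```python
# Rough outline of the postgres major-version upgrade for 0.19.4.
# Service name, DB user and volume name are assumptions; adjust as needed.
import subprocess

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# 1. Dump everything while the old postgres is still running.
run("docker compose exec -T postgres pg_dumpall -U lemmy > /tmp/lemmy_dump.sql")

# 2. Stop the stack and remove the old data volume (the "delete" step).
run("docker compose down")
run("docker volume rm lemmy_postgres_data")  # volume name is an assumption

# 3. Bring up the new postgres image and restore the dump into it.
run("docker compose up -d postgres")
run("docker compose exec -T postgres psql -U lemmy < /tmp/lemmy_dump.sql")

# 4. Start the rest of the stack; pict-rs migrates its own DB on startup.
run("docker compose up -d")
```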
I also have backup accounts on these instances:
https://beehaw.org/u/lodion
https://sh.itjust.works/u/lodion
https://lemmy.world/u/lodion
https://lemm.ee/u/lodion
https://reddthat.com/u/lodion
It was removed deliberately during the Reddit exodus to direct new Lemmy users elsewhere, rather than overload lemmy.ml further.
I’m locking this post. There has been some good discussion within the comments, but I don’t believe the linked article is in good faith, and the discussion here is heading off the rails.
I disagree. It’s deliberately inflammatory and offensive. I’m only leaving the post up because responses such as this one have value.
Better late than never :)
https://aussie.zone/c/austech
AEST? That would be the daily server snapshot. I’ll see if I can bring it forward; the trouble is there are daily Lemmy DB tasks etc. running earlier that I don’t want to disrupt.
I’ll see what can be done …
Appears legit now. Did you grab a screenshot?
0.19.3 still has the memory leak, so I’ve left Lemmy restarting once a day to mitigate it. Federation has been fine for the last week after some minor changes and the upgrade.
Cool, we’re on 0.19.3 now.
Excellent, hopefully that is included in 0.19.3 that came out overnight… will upgrade shortly.
Ok, there’s more to this than I first thought. It seems there is a back-end task set to run at a set time every day; if the instance is restarting at that time, the task doesn’t run… this task updates the instances table to show remote instances as “seen” by AZ. With the memory leaks in 0.19.1, the instance has been restarting while this task is running… leading to this situation.
I’ve updated the server restart cronjob to not run around the time this task runs… and I’ve again manually updated the DB to flag all known instances as alive rather than dead.
Will keep an eye on it some more…
For anyone curious, two of the bugs that are related to this:
https://github.com/LemmyNet/lemmy/issues/4288
https://github.com/LemmyNet/lemmy/issues/4039
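For reference, the “flag all known instances as alive” step mentioned above boils down to a single update on the instance table. A minimal sketch, assuming direct access to the Lemmy postgres DB; the table and column names are assumptions from the 0.19-era schema, so verify against your own DB (and take a backup) before running anything like it:

```python
# Sketch only: bump the "last seen" timestamp on every known remote instance
# so the dead-instance check treats them all as alive and requeues federation.
# Table/column names (instance.updated) are assumptions; check your schema.
import psycopg2

conn = psycopg2.connect("dbname=lemmy user=lemmy host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("UPDATE instance SET updated = now();")
    print(f"marked {cur.rowcount} instances as recently seen")
conn.close()
```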
Ok, something is busted with the lemmy API endpoint that shows current federation state. It is currently showing nearly all remote instances as dead:
But “dead” instances are still successfully receiving content from AZ, and sending back to us.
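If you want to see what your own instance is reporting, the public endpoint is enough for a rough count. A quick sketch; the response field names (`federated_instances`, `linked`, `updated`, `domain`) and the three-day “dead” window are assumptions based on 0.19-era behaviour, so check the actual JSON your instance returns:

```python
# Count how many linked instances look "dead" based on their last-seen time.
# Field names and the 3-day cutoff are assumptions, not API guarantees.
from datetime import datetime, timedelta, timezone

import requests

INSTANCE = "https://aussie.zone"  # any Lemmy instance works here

resp = requests.get(f"{INSTANCE}/api/v3/federated_instances", timeout=30)
resp.raise_for_status()
linked = resp.json()["federated_instances"]["linked"]

cutoff = datetime.now(timezone.utc) - timedelta(days=3)  # assumed dead window
dead, alive = [], []
for inst in linked:
    seen = inst.get("updated")
    if seen and datetime.fromisoformat(seen.replace("Z", "+00:00")) >= cutoff:
        alive.append(inst["domain"])
    else:
        dead.append(inst["domain"])

print(f"{len(alive)} alive, {len(dead)} apparently dead of {len(linked)} linked")
```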
Seems to have sorted it for the most part… not sure what caused it, will do some more digging.
Ok, for some reason practically all instances were flagged as “dead” in the database. I’ve manually set them all to be requeued… the server is now getting smashed as it attempts to update the ~4000 instances I’ve told it are no longer dead. See how this goes…
☹️
I’ll see what I can see…
You’ve got a way with words… are you a writer? :)
Prior to restarting the lemmy service, this showed over 2k instances as “lagging”. Shortly after the restart, it dropped down to single digits.
I’ll leave the hourly restart going for now; it should help with the federation issues… and should help with the memory leaks too.
Hey Dave, yeah I know about the issues with high-latency federation. I’ve been in touch with Nothing4You, but we haven’t discussed the batching solution.
Yes, losing LW content isn’t great… but I don’t really have the time to invest in implementing a workaround. I’m confident the lemmy devs will have a solution at some point… hopefully when they do, the LW admins upgrade pronto 🙂