Do not ever push to production without testing things first. I went and moved us to the beta branch because this commit caught my eye and I wanted us to have the emoji picker fixed: https://github.com/LemmyNet/lemmy-ui/commit/ae4c37ed4450b194719859413983d6ec651e9609

The beta branch was on Docker Hub, so I thought it had at least been minimally tested. I was sorely mistaken. Lemmy (the backend) itself would load, but lemmy-ui (the actual website that renders everything) kept crashing on load. I couldn't roll back, either: the database had already been migrated to the beta schema, and the old version couldn't migrate it back when I tried to launch it.

I had no choice but to restore from backup. We've lost a whole day's worth of posts (anything after 3 AM CST). I'm really, really sorry… blobcatsadpleading

I was just so excited to be able to unveil this that I didn't take my time actually testing it.

  • Nazrin@burggit.moe · 1 year ago

    Daily backups aren't too bad, though.

    I would recommend a backup right before an upgrade, too. Make a sticky note for it if you always forget.

    • Burger@burggit.moe (OP) · 1 year ago

      See, I deliberately didn't do it before the upgrade because it'd cause 10-15 minutes of downtime: I stop the server so Postgres can be in a consistent state, and then the backup starts. And of course we all know how that turned out, rofl. I clearly wasn't thinking.

        • Burger@burggit.moe (OP) · 1 year ago

          There's a whole separate server that's in charge of storing images (pict-rs), and it uses its own database that isn't anything *SQL (sled). I just think it's easier to use this solution. Everything's intact, and the backup task runs in the wee hours of the morning. Besides, I couldn't get cron to call docker to execute a pg_dump if my life depended on it. Some shell environment fuckery is my guess, and I just don't want to mess with troubleshooting it, because it's a hassle testing why something doesn't work under cron. This works, I'd rather not change it, and it backs up everything, including all the images.

          For a time I was running this on my home box, and Proxmox has a nifty backup tool that freezes the filesystem in place, takes a snapshot, and then backs up said snapshot as a compressed tarball. It's deduplicated, too, if you run Proxmox Backup Server. This is a VPS though, not a dedi. There's, of course, LVM for taking snapshots, but since this is running raw ext4 with no volume management whatsoever, I don't want to rip everything up and start over.
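
          The command form of that tool, for anyone curious, is roughly just vzdump in snapshot mode, along these lines (the VM ID and storage name here are made up):

            # illustrative only: 100 is a placeholder VM ID, backup-store a placeholder storage
            vzdump 100 --mode snapshot --compress zstd --storage backup-store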

          • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

            Makes sense. I just don’t like folks having to do tedious work, if it isn’t needed (also my job). For the cron, I might suspect path or permissions (most often these, in my experience). I find the easiest way to diagnose is to wrap the intended command in a bash script that writes stdout and stderr to files, acting like basic logs.
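
            Something like this, just as a sketch (the container name, database, user, and paths are placeholders, not your actual setup):

              #!/usr/bin/env bash
              # backup-wrapper.sh: wrapper for cron; everything it prints, and every
              # error, ends up in plain log files you can read afterwards
              set -euo pipefail

              # cron runs with a minimal PATH, so spell paths out
              DOCKER=/usr/bin/docker
              LOGDIR=/var/log/backup-debug
              mkdir -p "$LOGDIR"

              {
                  echo "=== started $(date) ==="
                  # placeholder command: dump the DB from the postgres container
                  "$DOCKER" exec lemmy-postgres pg_dump -U lemmy lemmy \
                      > "/srv/backups/lemmy-$(date +%F).sql"
                  echo "=== finished $(date) ==="
              } >>"$LOGDIR/stdout.log" 2>>"$LOGDIR/stderr.log"

            Then the crontab entry is just the absolute path to that script (e.g. 30 3 * * * /usr/local/bin/backup-wrapper.sh), and whatever cron is unhappy about shows up in stderr.log.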

            Glad you’re back up and running!

            • Burger@burggit.moe (OP) · 1 year ago

              I guess I didn't clarify. I have a cron job that automatically turns the server off via docker compose and runs the Borg backup, which seems to work perfectly. There's no manual intervention at all; I'm not manually turning it off and manually doing the backup.
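
              Roughly, the job looks like this (the directories and repo path are just stand-ins, not the real layout):

                #!/usr/bin/env bash
                # nightly job cron kicks off: stop the stack, back it up, start it again
                set -euo pipefail
                cd /srv/lemmy

                # stop everything so postgres and pict-rs (sled) are consistent on disk
                /usr/bin/docker compose down

                # borg archives the whole directory, images included
                /usr/bin/borg create --compression zstd \
                    /srv/backups/borg::lemmy-{now:%Y-%m-%d} \
                    /srv/lemmy

                # bring the site back up
                /usr/bin/docker compose up -d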

              I really appreciate your suggestions, though. I just don't want to touch something that already saved my bacon and risk having some workflow somewhere screw up without my being aware of it when I need to restore from backup. This will probably all be moot anyway, since moving to a dedi is a possibility in the future; then I'd be able to use a hypervisor and just back up full VM images.

              • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

                Yeah. Makes sense - best to have something that you absolutely know works. Having the dedi will be really nice - having control of the hypervisor should let you avoid a lot of issues and make testing new updates easier (clone prod, update the clone, test on the clone, swap LB backend to point to the clone and drain old backend, hold old prod VM for a bit to make rollback quick, if needed).
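
                On Proxmox, for instance, that flow would look something like this (VM IDs and names invented for the example):

                  # make a full staging copy of the prod VM and boot it
                  qm clone 100 101 --name lemmy-staging --full
                  qm start 101

                  # run the update inside 101 and test it there; once it looks good,
                  # point the load balancer backend at the clone, drain the old one,
                  # and keep VM 100 untouched for a while so rollback is just a swap back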