I think that installation was originally 18.04 and I installed it when it was released, so a while ago anyway, and I've been upgrading it as new versions roll out. With the latest upgrade and the snapd software it has become more and more annoying to keep the operating system happy and out of my way so I can do whatever I need to do on the computer.
Snap updates have been annoying and they randomly (and temporarily) broke stuff while some update process was running in the background, but as a whole reinstallation is a pain in the rear I have just swallowed the annoyance and kept the thing running.
But today, when I had planned to spend the day on paperwork and other “administrative” things I've been pushing off because life has been busy, I booted the computer and the primary monitor was dead, the secondary had a resolution of something like 1024x768, the nvidia drivers were absent and usability in general just wasn't there.
After a couple of swear words I thought, ok, I'll fix this: I'll install all the updates and make the system happy again. But no. That's not going to happen, at least not very easily.
I'm running LUKS encryption and thus I have a separate /boot partition, 700MB of it. I don't remember if the installer recommended that or if I just threw in some reasonable-sounding amount. Wherever it originally came from, it should be enough (the other Ubuntu I'm writing this on has 157MB stored on /boot). I removed older kernels, but the installer still claims that it needs at least 480MB (or something like that) of free space on /boot, while a single kernel image, initrd and whatever else it includes consumes 280MB (or so). So apt just fails on upgrade as it can't generate the new initrd or whatever it tries to do.
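For reference, the usual way to see what is eating /boot and to purge leftover kernels on Debian/Ubuntu looks roughly like this (the kernel version below is purely illustrative):
uname -r                                         # note the currently running kernel first
du -sh /boot/*                                   # see which images and initrds take the space
dpkg --list 'linux-image-*' 'linux-headers-*'    # list installed kernel packages
sudo apt autoremove --purge                      # drop kernels apt considers no longer needed
sudo apt purge linux-image-5.15.0-79-generic     # or purge a specific leftover (never the running one)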
So I grabbed my ventoy-drive, downloaded the latest Mint ISO onto it, and instead of doing the productive things I had planned I'll spend a couple of hours reinstalling the whole system. It'll be quite a while before I install Ubuntu on anything again.
And it's not just this one broken update; like I mentioned, I've had a lot of issues with this setup and at least the majority of them are caused by Ubuntu and its package management. This was just the tipping point to finally leave that abusive relationship with my tool and set it up so that I can actually use it, instead of figuring out what's broken now and what will break next.
In general, consider setting up some kind of rollback functionality; it will let you get right back to work without any downtime when you're time-restricted. This can be achieved by configuring your system with (GRUB-)Btrfs + Timeshift/Snapper. Please bear in mind that it's likely you'll have to come back and solve the underlying problem eventually, though. (Perhaps it's also worth thinking about what can be done to ensure that you don't end up with a broken system in the first place. *cough* 'immutable' distro *cough*)
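A minimal sketch of what that looks like with Snapper on a Btrfs root (package and config names vary per distro, and the snapshot number is illustrative; Timeshift offers roughly the same through a GUI):
sudo snapper -c root create-config /                      # register the root subvolume with snapper
sudo snapper -c root create --description "pre-upgrade"   # take a manual snapshot before risky changes
sudo snapper -c root list                                 # see what you can fall back to
sudo snapper -c root undochange 42..0                     # revert files changed since snapshot 42
# with grub-btrfs installed, snapshots also appear as boot entries, so a broken system can be booted around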
If this seems too troublesome to set up, then consider distros that have it properly set up by default from the get-go, like (in alphabetical order) Garuda Linux, Manjaro, Nobara, openSUSE Aeon/Kalpa/Leap/Slowroll/Tumbleweed, siduction and SpiralLinux. Furthermore, so-called 'immutable' distros also offer rollback functionality without relying on the aforementioned (GRUB-)Btrfs + Timeshift/Snapper; this applies to e.g. blendOS, Fedora Kinoite/Sericea/Silverblue, Guix, NixOS and Vanilla OS.
If you feel absolutely overwhelmed by the amount of choice, then you should probably consider the ones elaborated on below; not because I think they're necessarily better, but:
- openSUSE’s offerings are generally speaking very polished, therefore being highly suitable to replace Linux Mint or Ubuntu. It’s its own thing though, therefore you might not be able to access packages that are exclusively found in Debian’s/Ubuntu’s repos (though Distrobox solves that trivially). Tumbleweed if you like rolling release, Slowroll if you prefer updates only once every 1-2 months and finally Leap if you lean more towards Stable/LTS releases.
- siduction for being based on Debian; but it’s strictly on the Unstable(/Sid) branch.
- SpiralLinux for being based on Debian; this one -however- has proper support for switching branches.
- Vanilla OS for being based on Debian; this one is very ambitious. But, because it’s an ‘immutable’ distro, it might require the biggest changes to your workflow.
the nvidia drivers were absent
While any of the aforementioned distros does a decent job of 'supporting' Nvidia, perhaps you might be best off with uBlue's Nvidia images. As these images rely on the same technology as Fedora's immutable distros, rollback functionality and all the other good stuff we've come to love -like automatic upgrades in the background- are present as well. In case you're interested in how these actually provide improved Nvidia support:
"We’ve slipstreamed the Nvidia drivers right onto the operating system image. Steps that once took place on your local laptop are now done in a continuous integration system in GitHub. Once they are complete, the system stamps out an image which then makes its way to your PC.
No more building drivers on your laptop, dealing with signing, akmods, third party repo conflicts, or any of that. We’ve fully automated it so that if there’s an issue, we fix it in GitHub, for everyone.
But it’s not just installation and configuration: We provide Nvidia driver versions 525, 520, and 470 for each of these. You can atomically switch between any of these, so if your driver worked perfectly on a certain day and you find a regression you just rebase to that image.
Or switch to another desktop entirely.
No other desktop Linux does this, and we’re just getting started."
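For what it's worth, moving an existing Silverblue/Kinoite installation onto one of those images is a single rebase; the exact image reference below is illustrative, so check uBlue's documentation for the current one:
rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-nvidia:latest
systemctl reboot                 # the new image only takes effect on the next boot
rpm-ostree rollback              # and if it misbehaves, this points you back at the previous deployment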
Great piece of information. I personally don't see the benefits of an immutable distribution, or at least (without any experience) it feels like I'll spend more time setting it up and tinkering with it than actually recovering from the rare cases where things just break. Or at least that's the way it used to be for a very long time, and even if something broke, fixing the problem used to be pretty much as fast as reverting a snapshot. Sure, you need to be able to work on a bare console and browse through log files, but I'm old enough that that was the only option back in the day if you wanted to get X running.
However, the case today was something that I just couldn't easily fix, as the boot partition simply didn't have enough space (since when isn't 700MB enough…), so even a rollback wouldn't have helped to actually fix the installation. I might have had the option to move the LVM partition on the disk to grow the boot partition, but that would've required shrinking the filesystem first (which isn't trivial on an LVM PV), and given the experience Ubuntu has lately provided I just took the longer route and installed Mint with ZFS. It should be pretty stable as there are no snap packages updating at random intervals, and it's a familiar environment for me (dpkg > rpm).
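For the record, the resize route I skipped would have looked roughly like this from a live USB (a sketch only: device names and sizes are made up, and the LUKS container under the PV adds yet another resize step):
e2fsck -f /dev/vg/root                                        # the filesystem must be clean before shrinking
resize2fs /dev/vg/root 400G                                   # 1. shrink the filesystem
lvreduce -L 410G /dev/vg/root                                 # 2. shrink the LV, leaving headroom above the FS size
pvresize --setphysicalvolumesize 420G /dev/mapper/cryptlvm    # 3. shrink the PV inside the LUKS container
# 4. shrink the LUKS container (cryptsetup resize) and the partition itself, then move it to grow /boot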
Even if immutable distros might not be for my use case, your comment has spawned a good thread of discussion and that’s absolutely a good thing.
Ah, I had misunderstood your /boot situation previously. There's an easy way to fix it: back up the current contents of /boot, unmount it, create a directory somewhere with enough space (/tempboot was my choice last time), bind mount it to /boot and go through the apt process. Then unmount the bind mount, mount the real boot partition, delete everything except the currently booted kernel's files, copy everything back from /tempboot and update the initrd and grub. Et voila!
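In command form that's roughly the following (the /tempboot path is just the example from above; adjust as needed):
sudo mkdir /tempboot && sudo cp -a /boot/. /tempboot/   # back up the current /boot somewhere with space
sudo umount /boot
sudo mount --bind /tempboot /boot                       # /boot now has room for the upgrade to finish
sudo apt full-upgrade                                   # the initramfs generation succeeds this time
sudo umount /boot && sudo mount /boot                   # back to the real, small partition
# keep only the running kernel's files on the real /boot, then bring the freshly built ones over
# (prune old kernels from /tempboot too if space is tight):
sudo cp -a /tempboot/. /boot/
sudo update-initramfs -u && sudo update-grub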
Why didn't I think of that. It would have fixed the immediate problem pretty fast. I would still have the issue of a too-small boot partition, but it would've been faster to fix the problem at hand. In either case, I'm pretty happy I got a new distro installed and hopefully it'll fulfil my needs better for years to come.
Thinking straight is rare in stressful situations.
Broken computers aren't really stressful to me anymore, but it surely plays a part that I had kinda-sorta been waiting for a reason to wipe the whole thing anyway, and as I could still access all the files on the system, in the end it was a somewhat convenient excuse to take the time to switch distributions. Apparently I didn't have a backup of ~/.ssh/config even though I thought I did, but those dozen lines of configuration aren't a big deal.
Thanks anyway, a good reminder that with linux there are always options to work around the problem.
Great piece of information.
Thank you for your kind words 😊!
at least (without any experience) it feels like I'll spend more time setting it up and tinkering with it than actually recovering from the rare cases where things just break
That might be the case, depending on your proficiency and on the degree to which the 'immutable' distro allows you to configure your system declaratively. On e.g. NixOS you can define (most of) your system declaratively. As such, reinstalling your entire setup is done through some config files. You can even push this further with the (in)famous Impermanence module, popularized by the Erase your darlings blog post, in which your system is wiped every time you shut the machine off and rebuilt (basically from scratch) every time you boot into it.
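As a small taste of that workflow, the stock commands look like this (using the default /etc/nixos/configuration.nix path):
sudo nixos-rebuild switch                # build and activate the system described in /etc/nixos/configuration.nix
sudo nixos-rebuild switch --rollback     # jump back to the previous generation if the new one misbehaves
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system   # see every state you can return to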
I might have had the option to move the LVM partition on the disk to grow the boot partition, but that would've required shrinking the filesystem first (which isn't trivial on an LVM PV)
I haven't worked with LVM yet. Defaulting to Btrfs (as Fedora -amongst others- does) has so far provided me with a reliable experience, even though I'm aware that I'm missing out on performance. Hopefully, Bcachefs will prove to be a vast improvement over Btrfs in a relatively short time-span. You've pointed out that you installed Linux Mint with ZFS. Would I be correct to assume that you were hurt by Btrfs in its infancy and chose not to rely on it since? Or is it related to the lacking proper support for RAID 5/6? Or perhaps something else? Please feel free to inform me, as I don't feel confident on this topic!
and given the experience Ubuntu has lately provided I just took the longer route and installed Mint with ZFS.
Understandable. Though, I can’t stop myself from being very interested in their upcoming Ubuntu Core Desktop. But I imagine you couldn’t care less 😜.
Would I be correct to assume that you were hurt by Btrfs in its infancy and chose not to rely on it since?
I have absolutely zero experience with btrfs. Mint doesn't offer it by default and I'm just starting to learn bits'n'bobs of zfs (and I like it so far), so I chose it with the idea that I can learn it in a real-world situation. I already have a zfs pool on my proxmox host, though for that one I wish I'd gone with something else, as it's pretty hungry for memory and my server doesn't have a ton to spare. But reinstalling that with something else is a whole other can of worms, as I'd need to dump a couple of terabytes worth of data somewhere else in order to make a clean install. I suppose it might be an option to move data around on the disks and convert the whole stack to LVM one drive at a time, but that's something for the future.
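(Side note on the memory appetite: on a Debian-based host like Proxmox, capping the ZFS ARC is the usual mitigation; the 4 GiB value below is just an example.)
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf   # cap the ARC at 4 GiB
sudo update-initramfs -u                                                        # so the option applies at boot
arc_summary | head                                                              # check current ARC usage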
But I imagine you couldn’t care less 😜.
I was a debian-only user for a long time, but when woody/sarge (back in 2005-2006) had pretty old binaries compared to upstream and ubuntu started to gain popularity, I switched over. Especially the PPA support was really nice back then (and has been pretty good for several years), so particularly for a desktop it was pretty good, and if I'm not mistaken you could even switch from debian to ubuntu just by editing the sources list and running dist-upgrade with some manual fixes.
So, coming from a mindset where everything just works and switching from one release to another is just a slightly longer and more complex update, the current trend rubs me very much the wrong way.
So, basically the tl;dr is that life is much more complex today than it was back in the day when I could just tinker with things for hours without any responsibilities (and there's a ton more to tinker with; my home automation setup really needs some TLC to optimize electricity consumption), so I just want an OS which gets out of my way and lets me do whatever I need to, whenever I need it. An immutable distro might be an answer, but currently I don't have spare hours to actually learn how they work. I just want my sysVinit back, with distributions which can go on for a decade without any major hiccups.
‘immutable’ distro
Are there even immutable distros old enough to have compatibility issues between a 5 year old installation and the latest version?
NixOS has been around since 2003, thus making it older than Ubuntu (2004). Even Silverblue has been out for more than 5 years (October 2018). Finally, we can't forget about Guix, which had its first release over 10 years ago (January 2013).
What is an immutable distro? I’m just now learning this is a thing
It's often used to describe a distro in which (at least some) parts of the system are read-only at runtime. Furthermore, features like atomicity (i.e. an upgrade either happens or doesn't; no in-between state), reproducibility[1] and improved security against certain types of attacks are its associated benefits, which can (mostly) only exist due to said 'immutability'. This allows a higher degree of stability and (finally) rollback functionality, which are often associated with 'immutability' but aren't inherently/necessarily tied to it, as other means to gain these do exist.
The reason why I've been careful with the term “immutable” (which is literally a fancy word for “unchanging”) is that the term doesn't quite apply to what these distros offer (most of them aren't actually unchanging in an absolute sense), and that people tend to import associations from other ecosystems that have their own rules regarding immutability (like Android, SteamOS etc.). A more fitting term would be atomic (which has been used to some degree by distros in the past). That term actually applies to all distros that are currently referred to as 'immutable', it's descriptive, and it's the actual differentiator between these and the so-called 'mutable' distros. Further differentiation can be had with descriptions like declarative, image-based, reproducible etc.
[1] That is, two machines that have the exact same software installed should be identical, even if one was installed a few years ago while the other has been freshly installed (besides the contents of the home folder etc). So stuff like cruft, bitrot and (to a lesser degree) state are absent on so-called 'immutable' distros.
I really appreciate this thorough response. Are there arguments against immutability? Besides that it’s probably a challenge to maintain…
Are there arguments against immutability?
Initially I was typing out a very long answer, but it quickly got unwieldy 😅. So instead, this one will be oversimplified 😜.
Currently:
- Package management on the native system just takes considerably longer on most atomic[1] distros (see the sketch after this list). The exceptions would be Guix and NixOS, but unfortunately their associated learning curves are (very) steep compared to the other atomic distros.
- The learning curve in general is steeper.
- Documentation is lacking.
- Big shifts occur more frequently[2].
- Some things simply don’t work (yet).
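To make the first point concrete, here is the typical flow on an rpm-ostree based distro (the package name is illustrative):
rpm-ostree install htop      # stages a whole new deployment instead of changing the running system
systemctl reboot             # the layered package only shows up after booting into that deployment
# rpm-ostree apply-live can sometimes skip the reboot, but stage-and-reboot is the default experience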
One might (perhaps correctly) point out that most of these are actually more related to the technology lacking maturity, and that atomic distros would otherwise (already) be a net positive. Therefore, I'd argue, the transition to atomic distros is perhaps more akin to a natural evolution. I believe (at least) Fedora has already mentioned the possibility of sunsetting the non-atomic variant in favor of the atomic one when the time is there (or at least switching focus). Which is why I believe that atomicity will probably leave a lasting impact on the Linux landscape, similarly to what systemd has done in years prior.
Besides that it’s probably a challenge to maintain…
If your use-case is supported and you've acquired the knowledge needed for setup/configuration and maintenance, then I'd argue it's probably even easier than a non-atomic distro, simply by virtue of atomicity, increased stability and rollback functionality. But, as established previously, the learning curve is steeper in general, so getting there is probably harder. The exception being those whose needs are easily satisfied by the software found in the main package 'storefront', which makes distros like Endless OS very suitable for people whose primary interaction with 'computers' has been mobile phones and tablets, as the transition is -perhaps surprisingly to some- near flawless.
[1] Yes, that's how I'll be referring to them.
[2] Fedora Silverblue switching to OCI container images for delivery of installations and upgrades. openSUSE's offerings switching to image-based. Vanilla OS switching from Ubuntu to Debian and to a model that's a lot more similar to where Silverblue is headed. NixOS switching to flakes. Etc.
Great post. However, I will add my opinion about Debian Sid and its lineage: just don’t use them for production. Sid is an unstable distribution that looks like a rolling release distribution and most of the time it’s fine, but it is fundamentally different since it’s okay if it gets broken.
I’m guessing the idea behind Siduction is to use this rollback functionality to counter its innate instability, but with solid alternatives like openSUSE or the already installed Linux Mint + Timeshift, I wouldn’t recommend Siduction. Also, Manjaro is unstable by design, wouldn’t recommend that one either.
I personally agree with your assessments regarding Debian Sid and Manjaro. However, I didn’t want to force my (potential) ‘bias’ in a comment that tries to be otherwise neutral. Thank you for bringing up the ‘asterisks’ associated with both of these!
Honestly, for long-term usage like this a rolling release distro is better. I've never once upgraded Ubuntu from release to release without massive issues, but I've only ever had minor ones on arch and pretty much nothing on gentoo. Arch is bleeding edge, so I can't recommend it to you all that much, and gentoo has some learning curve initially. But I've heard good things about the rolling offerings from fedora and opensuse, whatever they're called.
People think “updates are time-consuming” and therefore prefer LTS because it's supported for longer. I've been preaching for quite some time that LTS has no place in private use and that rolling release is the right way.
I haven't been paying attention to the rolling release scene, but I'm pretty sure there was no mature option other than Debian Sid back when I installed that thing in 2019 or so (and daily driving that used to be an adventure in itself, but it's been years since I last had a system like that). With ubuntu, since at least version 14, upgrading from one stable release to another was a pretty stable experience, but that's not the experience I'm having today.
Debian sid is not a distro, it is a staging area for Debian testing. It is not meant for use other than testing new packages.
But regardless of that, you can still daily drive it as your distribution, and many do. That's why I said it's an adventure of its own, but if you know what you're getting into and accept the reality of Sid, it can work. Personally I don't want to use it at this point in my life, but I used to run it for several years when woody was getting a bit old on packages and sarge wasn't out yet (and I think I just continued with sid after the sarge release).
I've been using Tumbleweed for close to 10 years now and it was pretty mature from the start. You can't go wrong with rolling release + perfectly configured btrfs + snapper by default.
LTS does have a place on the desktop: Learning how to daily drive linux. I started with kubuntu non-LTS and didn’t know you needed to manually start a full-upgrade to not get moved to backport repos. Of course that came crashing down on me at the worst time and I took a break from linux. But I did learn enough that I can use arch now and it’s been great.
I don't understand what it means to "not get moved to backport repos", but this seems ubuntu-specific. What you need is a proper rollback/snapshot mechanism in place. Look at Tumbleweed, which offers it out of the box. For Arch you can set it up yourself or use something community-made like EndeavourOS.
LTS has no place on personal desktops.
I can’t stand rolling releases (for personal use) and I never recommend them to anyone. To me it feels like being in drift sand.
I need fixed releases to test my documentation (shell scripts) against something. With a rolling release those scripts can break at any time, unless you read the changelog of every package update.
But I also want and use fully automatic updates, so reading changelogs for every update would be the direct opposite of what I am looking for in an OS. I am ok with reading release notes every couple of months for a distribution upgrade though.
I want my systems to be reproducible and that's impossible with 'drift sand' rolling releases. In my opinion Fedora or Ubuntu have a decent release cycle; I would never consider Arch or Tumbleweed or Solus.

Uhm, you obviously never actually used a rolling release distro. Why would you have to read changelogs? Also, what are you referring to with “test my documentation (shell scripts)”? Why would those not work if package xyz is updated? You are not making much sense, but maybe I am lacking the experience in UNIX to understand your point of view.
Your package manager should tell you about conflicts and even if it doesn’t and something is not working like it did before, you do a simple snapshot rollback and wait another week to update or actually read what might cause the issue. And those incidents are like super rare, at least on Opensuse Tumbleweed (e.g. 2-3 times in a year).
I’ve used Arch Linux and openSUSE Tumbleweed in the past and I have been using Linux for over 10 years …
With each new version of an application there's the chance that configuration files or functionality change. Packages might even get replaced with others.
You would be surprised how much changes between Ubuntu LTS versions … My archived Ubuntu installation script had lots of if-statements for different versions of Ubuntu, since stuff got moved around. Such things can be as simple as gsettings schemas (keys might get renamed), but even these minor changes make documentation, and therefore reproducible reinstallations, troublesome.
With a fixed release all these changes are nicely bundled in one large upgrade every couple of months/years, which makes it easy to document and to plan when to do the upgrade.
With each new version of an application there's the chance that configuration files or functionality change. Packages might even get replaced with others.
Even if that were true, how would it impact your configuration? It doesn't, full stop. If you want to access new features you simply need to check how to activate them in your config file. Or are you making config edits in /etc/ ?!
Your next paragraph I don't understand; it seems specifically aimed at some kind of self-“maintained” script, which has nothing to do with rolling release or distributions.
how would that impact your configuration?
It impacts my documentation. If, for example, "gsettings set org.gnome.software allow-update false" no longer works because they changed the key from allow-update to updates-allowed, then my documentation no longer works correctly. Same when new technology is introduced, e.g. a switch from Pulseaudio to Pipewire. With a rolling release distribution these changes can happen at any time, whereas with a fixed release they only occur when a new release of the distribution is made and I upgrade to it.

I don't have the time to continuously track these changes and modify my documentation accordingly. Therefore I appreciate it if people bundle all those changes for me into one single distribution upgrade and write release notes with a changelog. Then I can spend a day reading the release notes, adjusting the documentation and applying the upgrade on all devices, and then move on for the next couple of months/years.
which has nothing to do with rolling release or distributions.
I tried to explain to you why I dislike rolling release distributions. That’s why I tried to give you one example where a fixed release distribution is more suitable in my opinion.
I understand that these things might not matter to you, if you only have one computer (or so) to maintain at home or maintaining home computers is your hobby. But I have four personal computers and multiple devices from the family to maintain and system administration is no longer my hobby …
But I have four personal computers and multiple devices from the family to maintain and system administration is no longer my hobby …
but you are writing documentation for scripts?
There is no problem with using a point release system long term. The problem is using Ubuntu. I’ve never once successfully upgraded it from one release to the next without issues, errors, things breaking or loss of functionality. It’s the main reason why I’ll never use Ubuntu again.
I've upgraded several Ubuntu LTS versions to a newer LTS and they have kept running fine. The problems come up when you wait too long and the repos no longer have the needed packages. You can still fuddle your way through even that scenario and retain a fully working system.
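For anyone in that spot: once a release has gone EOL its packages move to the old-releases mirror, so pointing apt there is usually the first step (Ubuntu-specific, and back up before trying):
sudo sed -i -e 's|//[a-z.]*archive.ubuntu.com|//old-releases.ubuntu.com|g' \
            -e 's|//security.ubuntu.com|//old-releases.ubuntu.com|g' /etc/apt/sources.list
sudo apt update && sudo do-release-upgrade    # then walk forward one release at a time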
Ubuntu changes the entire underlying technology too often cause they always try to introduce their own system in place of something that’s already established (Upstart, Unity, Snap, etc.)
My last experiences with Ubuntu were one upgrade that failed to boot after following all the recommended steps, one upgrade where the release notes themselves recommended a fresh install to enable all functionality, and a fresh install where the first thing I saw after booting was an error message by Gnome about a crashed service.

I left the distro after that and haven't looked back. Admittedly, that was quite some time ago. It's likely they've improved since then (but so have all other distros).
I’m glad we have companies helping to push the envelope and try new things. I may not always like the direction they take things, e.g., the Unity desktop turned me off for a few releases, and I always seem to run KDE since gnome went off the rails (imo), but it doesn’t hurt anything and the whole ecosystem is probably better for it. If it hurts then people move to alternatives and hopefully Canonical backpedals, or people move on and Ubuntu withers.
You can still fuddle your way through even that scenario and retain a fully working system.
Or at least you used to have that option without too much of a headache. I'm pretty sure you can still do it tho, but the steps required to 'rescue' an old installation tend to be more complex than they used to be.
For a desktop system, I think something like NixOS is probably the way to go. Keep your home partition, and if there are ever any issues just blow away the system and boot partitions, reinstall from your backed-up system config file and you're golden.
I just had pacman uninstall itself the other day during a routine -Syu. I was finally able to figure out how to fix it: untar the pkg to / and then tell pacman to install pacman with --overwrite.
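Roughly, for anyone who ends up in the same hole (mirror URL and package version are illustrative):
curl -LO https://geo.mirror.pkgbuild.com/core/os/x86_64/pacman-6.1.0-3-x86_64.pkg.tar.zst
sudo tar -xf pacman-*.pkg.tar.zst -C /       # unpack the files so the pacman binary exists again
sudo pacman -S pacman --overwrite '*'        # then reinstall it properly so the package database agrees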
That sounds fun
My first arch system and so far haven’t completely borked it yet haha
You won't. Arch has very little glue holding it together and the components are quite robust. The Buntus of this world, on the other hand, have plenty of glue to enforce their way. That might be good for first-timers, but it definitely gets in the way once you start learning the system. My last annoyance like this was disabling gdm; it just kept coming back. Some script somewhere was making sure the service was running no matter what.
I’ve only ever had minor ones on arch and pretty much nothing on gentoo.
My biggest complaint with Arch was that downgrading wasn’t officially supported.
With Gentoo I have pretty much nothing to complain about. But I get that it's not for everyone.
That said, I haven't run many different distros as my main one. I went with mkLinux --> Gentoo --> Arch --> Gentoo.
You might be correct, but I haven't found one that I'd like (not that I've really looked for one either). Maybe you know if there are any Debian derivatives which do rolling releases?
I like cinnamon, and I've been running mint on my laptop for quite a while and liked it, so I'm going with it right now and will plan more carefully for my next distro-hopping needs.
But in general I'd say that Ubuntu is far from what it used to be, and the TLC the latest version wants is just something I'm not willing to put up with. If something breaks on an update then it breaks, but at least give me the option to choose when that happens.
Maybe you know if there are any Debian derivatives which do rolling releases?
No need for derivatives. Just use Debian Unstable. It’s the most stable rolling release distro I’ve used so far.
There is MocaccinoOS, which is based on gentoo; I used it when it was still Sabayon and it was a great experience overall.
Same here. Ubuntu almost made me believe that linux is a pain in the ass to use and you need to fix some shit after every update.
Now I use arch and it’s great. Nvidia is very annoying because they constantly publish drivers that break things, but you can just roll those back and wait until they fix it again. And that gets worse as GPUs age. Apart from nvidia, I’ve had exactly one update issue (telepathy-kde being removed and causing the pacman dependency resolver to get confused) that was fixed in about 2 minutes of googling.
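In case it helps someone, rolling a driver back is just reinstalling the cached packages (the version below is made up) and pinning them until a fix lands:
sudo pacman -U /var/cache/pacman/pkg/nvidia-545.29.06-1-x86_64.pkg.tar.zst \
               /var/cache/pacman/pkg/nvidia-utils-545.29.06-1-x86_64.pkg.tar.zst
# then add nvidia and nvidia-utils to IgnorePkg in /etc/pacman.conf until the fixed driver lands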
My biggest complaints with Ubuntu lately: Firefox is a snap package, and when it updates it yells at me to close Firefox so it can update it, gives me countdown notifications and, if I wait too long, forces it closed. Annoying, and something out of Windows 10 forced-reboot type shit. The other is that the automatic apt upgrades break the cuda/nvidia drivers, forcing me to reboot the whole system. Pain in the ass.
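For the Firefox part there's at least a partial remedy if snapd is recent enough (the values below are only examples):
sudo snap refresh --hold firefox                       # stop snapd from auto-refreshing Firefox entirely
sudo snap refresh --hold=72h firefox                   # or just postpone it for a while
sudo snap set system refresh.timer=fri,23:00-01:00     # or confine all automatic refreshes to a window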
Automatic updates that need reboots but run at any time other than when shutting down?
Sounds like something microsoft would do, but even they get that part right.
Yep it’s very annoying. Suddenly my system doesn’t have cuda anymore and it’s because of an update. Only fix that I’ve found is to reboot.
So I grabbed my ventoy-drive, downloaded the latest Mint ISO onto it, and instead of doing the productive things I had planned I'll spend a couple of hours reinstalling the whole system.
With Mint, you should be able to get to a working system that lets you do your paperwork within less than half an hour.
You can set up all your customizations again when you have more time. It should also be no issue to just copy your old /home folder to the new system between Mint and Ubuntu; then the only step after installation would be to install the programs you had before.

Yes, I know. The existing drive layout however means that I need to repartition the whole thing, which means I need to copy a couple of hundred GBs over to something else before reinstalling, and so on, so it's not a half-hour job. And while I'm at it, it's better to do it right than half-ass it over a long, long period of time.
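(For the /home carry-over itself, with the old drive mounted somewhere, it's basically a one-liner; the mount point below is illustrative:)
sudo rsync -aHAX --info=progress2 /mnt/oldroot/home/ /home/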