Flatpaks aren’t perfect, but I think they’re a good solution to the fragmentation problem that is inherent to Linux.
Precisely. Flatpaks solve an important problem. Perfect should not be the enemy of good.
Binary compatibility is a sad story on Linux, and we cannot expect developers — many of whom work for free — to package, test, debug, and maintain releases for multiple distributions. If we want a sustainable ecosystem with diverse distributions, we must answer the compatibility question. This is a working option that solves the problem, and it comes with minor security benefits because it isolates applications not just from the system but from each other.
It’s fair to criticize a solution, but I think it’s not fair to ignore the problem and expect volunteers to just work harder.
Also, companies are lazy, and if we don’t want to be stuck on Ubuntu for proprietary app stability, we should probably embrace something like Flatpak. And when companies neglect their apps, they’ll have a better chance of working down the road thanks to support for multiple dependency versions on the same install.
Great point! At the end of the day, the apps I want to use will decide which distro I main. Many FOSS fanatics are quick to critique Ubuntu, so they should support solutions that let our distros be diverse while still running all the killer apps.
I like Flatpak just because it isn’t Snap
The enemy of my enemy, eh?
…is my enemy’s enemy, no more, no less. (Maxims of Maximally Effective Mercenaries #29)
Fair. Also, flatpak does not try to break everything by default, which is a plus.
Laughs in AUR
Laughs in nixpkgs
Laughs in confusion
(I don’t know how I got here)
Laughs in support
Support laughs in you
Cracks up in ebuilds
Snickers in pipx
I like the AUR too, but a proprietary app that isn’t updated to support newer dependencies most likely won’t run anyway. At that point it’s either a broken app, a broken system, or you just don’t have anything else installed that uses that library (yet).
Not an issue on NixOS, you can ship old deps with it
Sounds neat! I don’t really care much for messing with config files for hours - and this is from someone who uses Arch on all his systems. I’ve been in config hell for a while; I use KDE now.
It’s not great to laugh at the mess Linux is in due to people paddling in different, incompatible directions. Users can’t choose the package format; they have to take what they are given, good or bad. I don’t care which format, as long as it works. But this is a good way to scare more people off of Linux.
laughs at people scared of choice and “mess” . . .
If they’re switching to Linux, they should first come to know about open source forking around - arguably one of the most important features of the whole thing.
If they don’t want that choice and all that inevitable open source forkery, they probably should go for an Apple Mac or Windows or something like that. And maybe they will have to pay for some software for the privilege, because it takes work to do those things. They can of course try plain old Ubuntu and do stuff the way Canonical wants; that removes quite a bit of choice if it is otherwise too terrifying for them.
But in general, I don’t think it’s a good idea to try to sell pig carcasses to vegans by painting them the colours of broccoli.
Lol who the fuck is blaming app devs? Also something something arch
The AUR is the only thing I miss. I do like Fedora with i3 very much, but RPM can be a pain in the ass sometimes.
Could you perhaps run them with Distrobox? I was always wondering that.
Yes! This is something I do on 3 of my machines. My ArchLinux Distrobox with paru works like a charm. (so far)
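Roughly, the setup is just a couple of commands (a sketch using the stock Arch image, not my exact config):
distrobox create --name arch --image docker.io/library/archlinux:latest
distrobox enter arch
# inside the box: bootstrap an AUR helper such as paru, then use it as usual
sudo pacman -S --needed base-devel git
git clone https://aur.archlinux.org/paru.git && cd paru && makepkg -si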
That’s very good to hear. I’ll probably try out Silverblue with Distrobox, and that’ll be the end of my distrohopping.
It was for me; check out ublue for a simpler setup.
Most likely, but I’m super lazy with my PC unless I’ve got hyperfocus going, so I don’t know.
Distrobox? I also installed Nix and it works perfectly.
…btw
Flatpak is nice, but I really would like to see a way to run flatpakked applications transparently, e.g. not have to run
flatpak run org.gnome.Lollypop
and instead just launch the app via
Lollypop
You could make aliases for each program, but I agree, there should be a way to set it up so they resolve automatically.
You could possibly also make a shell script that does this automatically. I believe most flatpak ids follow a pattern such as com.github.user.package, for github projects for example. So you could loop through all installed flatpaks, extract the name, and then add the alias.
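A rough sketch of that idea (untested; assumes the last component of the app ID makes a sensible alias):
# generate a shell alias for every installed Flatpak app
flatpak list --app --columns=application | while read -r id; do
    name=$(echo "$id" | awk -F. '{print tolower($NF)}')   # org.gnome.Lollypop -> lollypop
    echo "alias $name='flatpak run $id'"
done >> ~/.bash_aliases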
Agreed, but I also feel like such a thing should be included with Flatpak by default instead of leaving it to the users to solve.
You can symlink /var/lib/flatpak/exports/bin/org.gnome.Lollypop (if you are using a system installation) or ~/.local/share/flatpak/exports/bin/org.gnome.Lollypop (if you are using a user installation) to ~/.local/bin/lollypop and run it as lollypop.
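Concretely, for a user installation that’s just (swap in the /var/lib/flatpak path for a system installation):
mkdir -p ~/.local/bin
ln -s ~/.local/share/flatpak/exports/bin/org.gnome.Lollypop ~/.local/bin/lollypop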
Well, Flatpak installs aliases, so as long as your distribution - or you yourself - adds the <installation>/exports/bin path to $PATH, then you’ll be able to use the application IDs to launch them. And if you want the Flatpak available under a different name than its ID, you can always symlink the exported bin to whatever name you’d personally prefer.
I’ve got Blender set up that way myself, with the org.blender.Blender bin symlinked to /usr/local/bin/blender, so that some older applications that expect to be able to simply interop with it are able to.
Is there some way to set an install hook that automatically makes those symlinks when you install a flatpak?
Well, Flatpak always builds the aliases, so as long as the <installation>/exports/bin folder is in $PATH there’s no need to symlink.
If you’re talking specifically about having symlinks with some arbitrary name that you prefer, then that’s something you’ll have to do yourself; the Flatpak applications only provide their canonical name, after all.
You could probably do something like that with inotify and a simple script though: just point it at the exports/bin folders for the installations that you care about, and set up your own mapping between canonical names and whatever names you prefer.
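A rough sketch of that idea (assumes inotify-tools and a hand-maintained name mapping; untested):
#!/bin/sh
# watch the user installation's exports/bin and symlink new apps under friendlier names
EXPORTS="$HOME/.local/share/flatpak/exports/bin"
inotifywait -m -e create,moved_to --format '%f' "$EXPORTS" | while read -r app; do
    case "$app" in
        org.gnome.Lollypop)  name=lollypop ;;
        org.blender.Blender) name=blender ;;
        *)                   name="$app" ;;
    esac
    ln -sf "$EXPORTS/$app" "$HOME/.local/bin/$name"
done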
Put the flatpak exports directory in your PATH and you can use the app name like normal.
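For example, something like this in your shell profile (using the exports locations mentioned above):
export PATH="$PATH:/var/lib/flatpak/exports/bin:$HOME/.local/share/flatpak/exports/bin"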
I just run them raw, like just
org.gnome.Lollypop
Not ideal, but it’s what I do
It’s fecking raw!
[Honk Honk]
Sewer Count: 999
Nice fucking model! #FreeCivvie11
Flatpak haters hate new apps anyway.
glibc 2.36 is all you’ll ever need, okay? Go away with those goddamn backports!
If I can choose between a Flatpak and a distro package, the distro package wins hands down.
If the choice then is Flatpak vs. compile your own, I think I’ll generally compile it, but it depends on the circumstances.
Why?
Because it’s easier to use the version that’s in the distro, and why do I need an extra set of libraries filling up my disk?
I see Flatpak as a last resort, where I trade disk space for convenience, because you end up with a whole OS’s worth of Flatpak dependencies (10+ GB) on your disk after a few upgrade cycles.
Is compiling it yourself, with the time and effort that costs, worth more than a few GB of disk space?
If so, your disk is very expensive and your labor very cheap.
For a lot of projects, “compiling it yourself”, while obviously more involved than running some magic install command, is really not that tedious. Good projects have decent documentation in that regard and usually streamline everything down to a few things to configure before you’re done.
What’s aggravating is projects that explicitly go out of their way to make building them difficult, removing existing documentation and helper tools and replacing them with “use whatever we decided to use”. I hate these.
I should have noted that I’ll compile it myself when we’re talking about something that should run as a service on a server.
99% of the time it’s just “make && sudo make install” or something like that. Anything bigger or more complicated typically has a native package anyway.
They didn’t say anything about compiling it themselves, just that they prefer native packages to flatpak
edit: I can’t read
Two comments up they said:
If the choice then is Flatpak vs. compile your own, I think I’ll generally compile it, but it depends on the circumstances.
I mean it’s 2024. I regularly download archives that are several tens or even over 100 GB and then completely forget they’re sitting on my drive, because I don’t notice it when the drive is 4TB. Last time I cared about 10GB here and there was in the late-2000s.
Great that you have 4 TB on your root partition; then by all means use Flatpak.
I have 256 GB on my laptop; as I recall I provisioned about 40-50 GB to root.
I’m sorry. I didn’t realize people were still regularly using such constrained systems. Honest. I’ve homebuilt my PCs for the last 15 years.
deleted by creator
🤣
Why not upgrade your hdd?
Flatpak has dedup, so no.
Yep, that’s all well and good, but what Flatpak doesn’t do automatically is clean up unused libs/dependencies; over time you end up with several versions of the same libs. When the apps are upgraded they get the latest version of their dependency and leave the old one behind.
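For what it’s worth, the stale runtimes can at least be pruned manually; it’s not automatic, which is the complaint, but it is a one-liner:
flatpak uninstall --unused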
TEN WHOLE GIGABYTES!! OMG WHAT ARE WE TO DO??
10 out of 40 is 25%
10 out of 4000 is 0.25%
I don’t know what dependencies he has, but my 3-year-old system that is constantly being updated is full of flatpaks, and all of the dependencies combined are only around 3 GB. People see 1 GB of dependencies and lose their minds.
Stubbornness
Based
deleted by creator
I’m 100% in this camp.
I change my opinion depending on which app it is. I use KDE, so any KDE app will be installed natively for perfect integration. Stuff like Grub Customizer etc. is all native. Steam, Lutris, GIMP, Discord, Chrome, Firefox, Telegram? Flatpak, all of those. They don’t need perfect integration, and I prefer the stability, easy upgrades, and ease of uninstall of Flatpak. Native is used when OS integration is a must; Flatpak for everything else. Especially since sometimes the distro’s package is months or years old… preferring distro packages for everything should be a thing of the past.
Haters aren’t worth listening to. Doesn’t matter if it is flatpak, systemd, wayland, or whatever else. These people have no interest in a discussion about merits and drawbacks of a given solution. They just want to be angry about something.
I know, right!? Add gimp to that list as well. I can go on and on about shortcomings of gimp despite being a happy user. The average gimp hater, on the other hand, doesn’t have anything to say besides “the UI is dumb and I can’t figure out how to draw a circle”
“The UI is unintuitive” is a legitimate complaint
“Intuitive UI” results in Gnome.
Is it really intuitive if I have to open dconf-editor to change the system font?
They call it “intuitive UI”, Linus calls it “‘users are idiots, and are confused by functionality’ mentality of Gnome”
It’s not always a zero-sum game.
Elaborate? Most of good UI comes from KDE.
What I mean is, making a UI more intuitive does not necessarily make it more… Gnome-ey? It can still be effective, customizable, etc.
“Intuitive UI” crowd usually means Gnome-ey/Apple-ey design.
In reality customizable design is more intuitive, because you can customize it to your intuition.
The Kate editor would like to have a word… They did my lady Kate dirty with the latest updates :( The top-level File menu was so much better, and now I don’t know where to find the setting to get it back; on my work computer I have a stupid single button in the top right corner which opens the “menu bar”, except vertically…
Wayland gets the hate because compositors are authoritative, so you cannot e.g. install your own window manager, taskbar, etc. It would be good if there were specifications governing these, but there aren’t.
I’m a Debian fan, and even I think it’s absolutely preferable that app developers publish a Flatpak over the mildly janky mess of adding a new APT source. (It used to be simple and beautiful, just stick a new file in APT sources. Now Debian insists we add the GPG keys manually. Like cavemen.)
Someone’s got to say it…
There is no Debian if everything is a pile of Snaps/Flatpaks/Docker/etc. Debian is the packaging and the process that packaging is put through, plus their FOSS guidelines.
So sure, if it’s something new and dev’y, it should isolate the dependencies mess. But when it’s mature, sort out the dependencies and get it into Debian, and thus all downstream of it.
I don’t want to go back to app-folders. They end up with a mishmash of duplicate, old, or wacky libs. It’s bloaty, insecure, and messy. Gift-wrapping the mess in containers and VMs mitigates some of the security issues, but brings more bloat and other issues.
I love FOSS package management. All the dependencies, in a database, with source and build dependencies. All building so there is one copy of a lib. All updating together. It’s like an OS ecosystem utopia. It doesn’t get the appreciation it should.
And then change where we put them.
Now Debian insists we add the GPG keys manually. Like cavemen.
Erm. Would you rather have Debian auto-trust a bunch of third-party people? It’s up to the user to decide whose keys they want on their system and whose packages they would accept if signed by which key.
Not “auto trust”, of course, but rather make adding keys a bit smoother. As in: “OK, there’s this key on the web site with this weird short hex cookie. Enter this simple command to add the key. Make sure the signature it spits out is the same as on the web page. If it matches, hit Yes.”
And maybe this could be baked somehow into the whole APT source adding process. “To add the source to APT, use apt-source-addinate https://deb.example.com/thingamabob.apt. Make sure the key displayed is 0x123456789ABC by Thingamabob Team, with received key signature 0xCBA9876654321.”
For the keys - do you mean something like
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 00000000
where 00000000 is replaced with the fingerprint of the key you want to fetch?
I do agree - the apt-key command is kinda dangerous because it imports keys that will be generally trusted, IIRC. So a similar command to fetch a key by fingerprint and have it available to choose as the signing key for repositories that we configure for a single application (suite) would be nice.
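For what it’s worth, newer apt setups can already scope a key to a single repository via signed-by instead of apt-key’s global trust; a rough sketch using the made-up repo from above (URLs and filenames are purely illustrative):
sudo install -d -m 0755 /etc/apt/keyrings
wget -qO- https://deb.example.com/thingamabob.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/thingamabob.gpg
# the key is only trusted for this one source, not apt-wide
echo "deb [signed-by=/etc/apt/keyrings/thingamabob.gpg] https://deb.example.com/ stable main" | sudo tee /etc/apt/sources.list.d/thingamabob.list
sudo apt update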
I always disliked that signing keys are available for download from the same websites that have the repository. What’s the point in that? If someone can inject malicious code in the repository, they sure as hell can generate a matching signing key & sign the code with that.
Hence I always verify signing keys / fingerprints against somewhat trustworthy third parties.
What we really need though is a crowdsourced, reputation-based code review system. Where open source code is stored in git-like versioning history, and has clear documentations for each function what it should and should not do. And a reviewer can pick as little as an individual function and review the code to confirm (or refute) that the function
- does exactly what the interface documentation claims it does
- does nothing else
- performs input validation (range checks etc)
- is well-written (in terms of performance)
Then, your reputation score would increase according to other users concurring with your assessment (or decrease if people disagree), and your reputation can be used as a weighting factor in contributing to the “review thoroughness” of a code module that you reviewed. E.g.: a user with a reputation of 0.5 confirms that a module does exactly what it claims to do: Module gets review count +1, module gets new total score of +0.5, new total weight of ( combined previous weights + 0.5 ) and the average review score is “reviews total score” / “total weight”.
Something like that. And if you have a reputation of “0.9”, the review count goes +1, total score +0.9, total weight +0.9 (so the average score stays between 0 and 1).
Independent of the user reputation, the user’s review conclusion is stored as “1” (= performs as claimed) or “0” (= does not perform as claimed) for this module.
Reputation of reviewers could be calculated as the sum of all their individual review scores (at the time the reputation is needed), where the score they get is 1 minus the absolute difference between the average review score of a reviewed module and their own review conclusion.
E.g.:
User A concludes the module does what it claims to do: User A’s assessment is 1 (score for the module).
User B concludes the module does NOT do what it claims to do: User B’s assessment is 0 (score).
Module score is 0.8 (most reviewers agreed that it does what it claims to do)
User A’s reputation gained from their review of this module is 1 - abs( 1 - 0.8 ) = 0.8.
User B’s reputation gained from their review of this module is 1 - abs( 0 - 0.8 ) = 0.2.
If both users have previously gained a reputation of 1.0 from 10 reviews (where everyone agreed on the same assessment, thus full scores):
User A’s new reputation: ( 1 * 10 + 0.8 ) / 11 = 0.982
User B’s new reputation: ( 1 * 10 + 0.2 ) / 11 = 0.927
The basic idea being that all modules in the decentralized review database would have a review count which everyone could filter by, and find the least-reviewed modules (presumably weakest links) to focus their attention on.
If technically feasible, a decentralized database should prevent any given entity (secret services, botfarms) from falsifying the overall review picture too much. I am not sure this can be accomplished - especially with the sophistication of the climate-destroying large language model technology. :/
If you’re separating your application from the core system package manager and shared libraries, there had better be a good and specific reason for it (e.g. the app needs to be containerized for stability/security/weird dependency). If an app can’t be centrally managed I don’t want it on my system, with grudging exceptions.
Chocolatey has even made this possible in Windows, and lately for my Windows environments if I can’t install an application through chocolatey then I’ll try to find an alternative that I can. Package managers are absolutely superior to independent application installs.
Typically Windows applications bundle all their dependencies, so Chocolatey, WinGet and Scoop are all more like installing a Flatpak or AppImage than a package from a distro’s system package manager. They’re all listed in one place, yes, but so’s everything on FlatHub.
This is true, the only shared libraries are usually the .NET versions, but so many apps depend on specific .NET versions that frequently the modularity doesn’t matter.
I’m not sure where you’re getting the idea that Flatpak aren’t centrally managed…
Can I sudo apt upgrade my installed flatpak apps?
No, because they’re not apt packages. You can, however, flatpak update them, and you don’t even need sudo since they’re installed in the user context rather than the system one.
I think containerization for security is a damn good reason for virtually all software.
Definitely. I’d rather have a “good and specific reason” why your application needs to use my shared libraries or have access to my entire filesystem by default.
Using your shared libraries is always a good thing, no? Like your distro’s packages should always have the latest security fixes and such, while flatpaks require a separate upgrade path.
Access to your entire filesystem, however, I agree with you on.
I only use rolling releases on my desktop and have run into enough issues with apps not working because of changes made in library updates that I’d rather they just include whatever version they’re targeting at this point. Sure, that might mean they’re using a less secure version, and they’re less incentivized to stay on the latest version and fix those issues as they arise, but I’m also not as concerned about the security implications of that because everything is running as my unprivileged user and confined to the flatpak.
I’d rather have a less secure flatpak than need to downgrade a library to make one app I need work and end up with a less secure system overall.
emerge sec-policy/selinux-*
I think stability is a pretty good reason
If an app can’t be centrally managed
Open Discover, Gnome Software etc -> Click update?
flatpak update
And with topgrade you can even upgrade flatpaks and your distro’s repos in one go.
I’m now confused about whether they’re saying that flatpak is centrally managed or not. To me it seems centrally managed - both the flatpak ecosystem and your whole machine (repo packages, firmware, flatpak) if you use those app stores. I might’ve misunderstood what they said.
We’re both saying that it’s centrally managed
Fuck, I took both the wrong way. Sorry about that
Oh no, no GUI nonsense. Single, simple shell command update for the whole system so that it can be properly remotely managed, please. Something equivalent to
sudo apt upgrade
I’ve written a small script that does all the updates (repo, flatpak, docker), verifies the packages, does cleanup, and shows if anything needs a reboot. Handy. That way I can do everything from one short command.
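A minimal sketch of that kind of wrapper, assuming an apt-based distro, Flatpak, and Docker, would be something like:
#!/bin/sh
set -e
sudo apt update && sudo apt full-upgrade -y    # distro packages
flatpak update -y                              # flatpaks
flatpak uninstall --unused -y                  # drop stale runtimes
docker image prune -f                          # clean dangling images
# Debian/Ubuntu convention for signalling a needed reboot
[ -f /var/run/reboot-required ] && echo "Reboot required" || echo "No reboot needed"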
Flatpak can be centrally managed; it’s just like a parallel distribution scheme, where apps have dependencies and are centrally updated. If a flatpak is made reasonably, then it gets library updates independently of the app developer doing it.
“AppImage” and “install from tarball” violate those principles, but not Snap or Flatpak.
Um, if it’s “parallel” (e.g. separate from the OS package manager) then it’s not centrally managed. The OS package manager is the central management.
There might be specific use cases where this makes sense, but frankly if segregating an app from the OS is a requirement then it should be fully containerized with something like Docker, or run in an independent VM.
If a flatpak is made reasonably, then it gets library updates independently of the app developer doing it.
That feels like a load-bearing “if”. I never have to worry about this with the package manager.
Define “the OS package manager”. If the distro comes with Flatpak and dnf equally, and both are invoked by the generic “get updates” tooling, then both could count as “the” update manager. They both check all apps for updates.
It’s odd to advocate for Docker containers; they put the app provider on the hook for all dependencies too, because dependencies are always inherently bundled. If a library gets a critical bug fix, then your Docker-style containers will be stuck without the fix until the app provider gets around to it, and app providers are highly unreliable on Docker Hub. Besides, update discipline among docker/podman users is generally atrocious, and given the relatively tedious nature of following updates in that ecosystem, I’m not surprised. Even in the best case, the Docker style uses more disk space and more memory than any other option, apart from a VM.
With respect to never having to worry about bundled dependencies with rpm/deb: third-party packages bundle or statically link all the time. If they don’t, then they sometimes overwrite the OS-provided dependency with an incompatible one that breaks OS packages, if the dependency is obscure enough for them not to notice other usage.
If you really hate flatpak just make an arch distrobox and download off the AUR. Or install Nix or something
I do sort of wish Nix was a more popular distro agnostic solution
That’s what I’ve done with my deck. Some things just aren’t available through discover, and the Firefox build on there has behavior that I don’t like or know how to correct. Distrobox gives me access to the Arch repos + AUR with persistence that you can’t get on SteamOS without it.
SteamOS is an arch derivative, so you could also just install arch, add the SteamOS repos, and set the steam UI in gamescope to launch on login
Or just use Arch… only for half of your AUR packages to be broken and end up still using flatpaks anyways.
Install Gentoo and put the package on GURU, it’s really easy (and .ebuild > PKGBUILD)
I’m new to Linux. Every time I’ve had a major issue with an application it turned out to be due to a flatpak. I’ll stick with other options for the time being.
Also at least let me compile it myself if not in a repo 😩
laughs in appimage.
And this, this is why I love the AUR
I think no one said it needs to be IN a distro’s repos. That’s a straw man.
A package should be available in a native package format in a way that doesn’t cause conflict with what’s in the official repo. The reasons for a single source of truth on installed status should be obvious; but given the format of some packaging and the signed assurance of provenance, the advantages of a native format can be leagues ahead of even that.
Wow, this meme is a really naive take that is contradicted by - oh god - everything. Can someone know about enterprise Linux and also be this naive?
The responsibility to figure out the dependencies and packaging for distros, and then maintain those going forwards, should not be placed on the developer. If a developer wants to do that, then that’s fine - but if a developer just wants to provide source with solid build instructions, plus a flatpak and maybe an appimage, that’s also perfectly fine.
In a sense, developers shouldn’t even be trusted to manage packaging for distributions - it’s usually not their area of expertise, maintainers of specific distributions will usually know better.
While I agree that developers (like myself) are not necessarily experts at packaging stuff, concluding that it’s fine for a developer to provide only a flatpak is promoting shitty software. Whether software should run in a jail or in user space is a decision that - for most use cases - should be made by the user.
There is absolutely no reason not to provide software as a tar.gz source code archive with a proper makefile & documentation of dependencies - or automake configuration if that’s preferred.
From that kind of delivery, any package maintainer can easily build a distro-package.
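A generic example of what that kind of delivery enables (the package name is made up):
tar xzf thingamabob-1.0.tar.gz
cd thingamabob-1.0
./configure          # or consult the documented dependencies / build options
make
sudo make install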
I think you’re actually agreeing with me here. I was disputing the claim that software should be made available in “a native package format”, and my counterpoint is that devs shouldn’t be packaging things for distros, and instead providing source code with build instructions, alongside whatever builds they can comfortably provide - primarily flatpak and appimage, in my example.
I don’t use flatpak, and I prefer to use packages with my distro’s package manager, but I definitely can’t expect every package to be available in that format. Flatpak and appimage, to my knowledge, are designed to be distro-agnostic and easily distributed by the software developer, so they’re probably the best options - flatpak better for long-term use, appimage usable for quickly trying out software or one-off utilities.
As for tar.gz, these days software tends to be made available on GitHub and similar platforms, where you can fetch the source from git by commit, and releases also have autogenerated source downloads. Makefiles/automake isn’t a reasonable expectation these days, with a plethora of languages and build toolchains, but good, clear instructions are definitely something to include.
Makefiles/automake isn’t a reasonable expectation these days, with a plethora of languages and build toolchains, but good, clear instructions are definitely something to include.
As for the Makefiles, I meant that for whatever build toolchain the project uses - because the rules to build a project are an essential part of the project, linking the source code into a working library or executable. Whether it is CMake, or GNU Make, or whatever else there is - that’s not so important as long as those build toolchains are available cross-platform.
I think what is really missing in the open source world is a distribution-agnostic standard for how to describe application dependencies, so that package maintainers can auto-generate distro packages with the distribution-specific dependencies based on that “dependencies” file.
Similar to Debian dependencies like
Depends: libstdc++6 (>= 10.2.1)
but in a way that identifies code modules, not packages, so that distributions that package software together differently will still be able to resolve findPackageFor( dependency ).
I would really like to add this kind of info to my projects and have a tool that can auto-build a repo-package from those.
“oh this is a flatpak or hell even a windows exe…” proceeds to search for it on AUR “ah there it is, wonderful!”
Hell I’ve found a god damn windows gaming cheat trainer on AUR and it worked.
The AUR is basically just a script that describes the best-case scenario for building something under Arch. There aren’t any specific quality rules it has to meet.
It’s super easy to make and publish an AUR script compared to a regular distro package (including Arch packages).
Usually they work well enough, especially things that just involve repacking binaries (e.g. printer drivers)
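For illustration, a bare-bones PKGBUILD is about this much (names and URLs are made up):
# Maintainer: someone <someone@example.com>
pkgname=thingamabob
pkgver=1.0
pkgrel=1
pkgdesc="Minimal example package"
arch=('x86_64')
url="https://example.com/thingamabob"
license=('GPL')
source=("https://example.com/thingamabob-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}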
They do? I’ve always seen that as being up to distro maintainers, and outside the devs’ control.