Friendly Ace Lobster 💜

  • 6 Posts
  • 10 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Fedora is not enterprise grade. That would be RHEL. And enterprise grade mostly just means stable (some would say stale) packages anyway if you don’t pay for support.

    Installing Nvidia drivers on Fedora Workstation is as easy as enabling RPM Fusion non-free and then installing a few packages (a rough sketch is at the end of this comment). The issue here comes from OP running an OSTree-based immutable system, which makes layering stuff on top a bit more difficult.

    OP’s already running something fedora based, might as well stay where they feel comfortable and just add a few drivers and gaming tweaks on top.

    Nothing against openSUSE though. I’m currently running Aeon because their approach to immutability is more modular than Fedora’s.
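
    For reference, here’s roughly what that looks like on regular (non-immutable) Fedora Workstation. The URL pattern and package names below follow the RPM Fusion docs, but double-check them there before running anything:

    ```sh
    # Enable the RPM Fusion non-free repository for the running Fedora release
    sudo dnf install \
      https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

    # Install the Nvidia kernel module packages (the -cuda package is optional, for CUDA/nvenc)
    sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda
    ```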


  • If you find yourself wanting to game on your distro again, layering Nvidia drivers on top of immutable Fedora is doable. If you want a more hands-off approach you can use Bazzite (https://bazzite.gg/), which has an Nvidia-compatible version and is just a Kinoite-based OCI image with gaming-oriented tweaks and extra apps.

    You can even just rebase to it if you’re already using Kinoite (and rebase back to Kinoite if you don’t like it), no need to reinstall your system. The download page has a one-command example of how to do that; a rough version is sketched below.
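
    Something along these lines, assuming rpm-ostree and an image reference taken from the Bazzite download page (the exact image name, tag, and Fedora release below are illustrative, not gospel):

    ```sh
    # Rebase the running system onto the Bazzite image (pick the Nvidia variant if you need it)
    rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/bazzite:stable
    systemctl reboot

    # To go back to stock Kinoite later, rebase to the regular Fedora ref (adjust the release number)
    rpm-ostree rebase fedora:fedora/41/x86_64/kinoite
    systemctl reboot
    ```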


  • I’ll confess that I only tried GPT-3.5 (and the Mistral one, but it was actually consistently worse) given that there’s no way in the world I’m actually giving OpenAI any money.

    Having said that, I don’t think it fundamentally changes the way it works. Basically I think it’s fine as some sort of interactive man/Stack Overflow parser. It can reduce the friction of having to read the man pages yourself, but I do think it could do things a lot better for new user onboarding, as you seem to suggest in the comments that this is one of the useful aspects.

    Basically it should drop the whole “intelligent expert” thing and just tell you straight away where it got the info from (and actually link the bloody man pages. At the end of the day the goal is still for you to be able to maintain your own effing system). It should also learn to tell you when it actually doesn’t know instead of inventing some plausible answer out of nowhere (but I guess that’s a consequence of how those models work, being optimized for plausibility rather than correctness).

    As for the quality of the answers, they’re usually kind of good for saving you from googling how to do simple one-liners. For scripts it actually shat the bed every single time I tried it. In some instances it gave me 3 ways to do slightly different things all in the same loop, in others straight-up conflicting code blocks. Maybe that part is better in GPT-4, I don’t know.

    It also gives you outdated answers without specifying the versions of the packages it targets, which can be really problematic.

    Basically where I’m going with this is that if you’re coding, or maintaining any server at all, you really should learn how to track the state of your infra (including package versions) and read man pages anyway. If you’re just a user, nowadays you don’t really have to get your hands in the terminal.

    At the end of the day, it can be useful as some sort of interactive meta search engine that you have to double check.

    I’m really not getting into the whole “automated garbage that’s filling up the internet, including bug reports and pull requests” debate. I do think that, all things considered, those models are a net negative for the web.


  • It’s the long-term support version of openSUSE that is binary compatible with the enterprise version provided by SUSE (SLES). Kind of the same relation as between RHEL and CentOS (before the Stream controversy).

    In layman’s terms, it’s a stable desktop and server Linux distribution. But it’s in a weird spot right now, as openSUSE has stated that this will be the last major version following this release format. The next main openSUSE distro will be something based on modular immutable images.

    Edit: Apparently there will also be a non-immutable version of Leap 16




  • For PDF export, you can just use org-latex-export-to-pdf. In the background it translates your doc to a LaTeX file and then compiles that (I know you stated you didn’t like TeX, but in case you can bear a few commands, this is actually super useful as it gives you more control over the doc: you can just insert random LaTeX parts in your doc and it will handle them nicely). Same for publishers: you can just translate your file to TeX and that will fit most publication processes. Otherwise you can convert your doc to pretty much anything with pandoc (including .docx); a couple of example commands are at the end of this comment.

    Keep in mind however that this is basically just saying: I like the idea of LaTeX (fine granularity at compile time, raw text and reproducibility) but I prefer org markup for common marks like headers, bold and refs, and I like having a somewhat pretty editor. If your issue with LaTeX is that writing and formatting are not synchronous, then yeah, this is not for you.
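
    For what it’s worth, here’s roughly what the export side looks like; the file names and the bibliography flag are just illustrative:

    ```sh
    # Inside Emacs: M-x org-latex-export-to-pdf (or C-c C-e, then l p in the export dispatcher)

    # From a shell, pandoc covers most other targets, e.g. a .docx for a publisher
    pandoc manuscript.org -o manuscript.pdf
    pandoc manuscript.org --citeproc --bibliography=refs.bib -o manuscript.docx
    ```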


  • Depends on what you’re looking for. If you’re dead set on WYSIWYG editors, then yeah, OnlyOffice is as good as it gets if you want to keep it FOSS and don’t like LibreOffice. Otherwise people seem to like the many scientific markdown editors. But honestly, if you already know Emacs then just… Emacs. I’m in academia too, and with the right set of packages it can fit an academic workflow pretty nicely. I write in org-mode with org-superstar, olivetti-mode to center text in org buffers, varying fonts and font sizes for headers, and citar for references (which syncs with a real-time BibTeX export from my Zotero library). With the added bonus of having all the usual goodness (magit, projectile, you name it).





  • The proposal explicitly goes against “more fingerprinting”, which is maybe the one area where they are honest. So I do think that it’s not about more data collection, at least not directly. The token is generated locally on the user’s machine and it’s supposedly the only thing that needs to be shared. So the website vendor does potentially get some info (in effect: that you pass the test used to verify your client), but I don’t think that’s the major point.

    What you’re describing is the status quo today. Websites try to run invasive scripts to get as much info about you as they can, and if you try to derail that, they deem that you aren’t human and throw you a captcha.

    Right now though, you can absolutely configure your browser to lie at every step about who you are.

    I think the proposal has much less to do with direct data collection (there are better ways to do that) than with control over the content-delivery chain.

    If Google gets its way, it would effectively shift control over how you access the web from you to them. That enables all the stuff people have been talking about in the comments: the end of edge-case browsers and operating systems, the prevention of ad blocking (and with it, indeed, the extension of data collection), the consolidation of Chrome’s dominant position, etc.