

If you just want the list of packages saved in a text file and then use that file with apt/dnf/…, you could just
sudo dnf install $(< list.txt)
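For completeness, one way to produce that list.txt in the first place (assuming dnf 4’s repoquery flags; on Debian/Ubuntu, apt-mark showmanual does the same job):
dnf repoquery --userinstalled --qf '%{name}' > list.txt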
The US as a whole is a third world country. Low average income, nearly non-existent health insurance, incredibly bad education (system), high wealth gap, fascism, insanely bad labor laws, the list goes on and on…
That might be the case. But more often than not it’s WAY too easy to see that a decision is bad for anyone to seriously argue that we can’t implement any measures against it.
In this case we “just” need laws that prohibit any infrastructure from depending on a few foreign entities and require it to be completely independent where reasonably possible. Diversification or elimination of dependencies, as a law.
You can’t rely on foreign proprietary software like Teams for public facilities and infrastructure if there are reasonable alternatives.
You can’t rely only on Russian oil if other countries are available for trade.
We should start making laws and frameworks that prevent us from making bad decisions in the future. Using Microsoft and their products was always a bad decision, and fixing that now is way more expensive than whatever the arguments against Linux and FOSS software were over the last two decades. It was just easy and convenient at the time.
Being dependent on Russia for oil didn’t turn out great either.
But I just see people talking about how to change things for the better, never how to prevent silly things in the future. I’d rather be in a situation where we don’t have to fix things.
And they follow the orders…
Correct, but if they need that much time to come to that conclusion they’re basically useless.
Why do I still see articles with the headline “X claims Israel is committing genocide”? “Claims”? Really? And how is that “news”? If we can’t stop pretending that it isn’t entirely clear, nothing will change.
I know this is quite easy to say from the comfort of my couch in Europe, but guys you need to shoot this fucker in the face already.
Means they aren’t competing. They’re working in a completely different field. Nvidia isn’t producing anything sensible.
Competition in what? Producing bullshit?
But it’s 2⁵² addresses for each star in the observable universe. Or in other words: if every star in the observable universe had a planet in the habitable zone, each of those planets would still get 2²⁰ times as many IPs as there are IPv4 addresses in total.
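Back-of-the-envelope, assuming the usual estimate of roughly 10²³ stars in the observable universe:
2¹²⁸ / 10²³ ≈ 3.4·10³⁸ / 10²³ = 3.4·10¹⁵ ≈ 2⁵² addresses per star,
and 2⁵² / 2³² = 2²⁰ full IPv4 address spaces each.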
“How a shitty company with an even shittier codebase full of bad design decisions uses AI” isn’t going to get me hooked on using AI.
Just when you thought Nvidia couldn’t get worse, they praise Trump.
But spending a lot of processing power to gain smaller sizes matters mostly in cases where you want to store things long term. And you probably wouldn’t want to have to keep the exact same LLM with the exact same weights around for that whole time.
How the hell are you confused when EA does something shitty?
Yeah, but that would limit the use cases to very few. Most of the time you compress data either to transfer it to a different system or to store it for some time; in both cases you wouldn’t want to be tied to the exact same LLM. Which leaves almost no use case.
I mean… cool research… kinda… but pretty useless.
Ok so the article is very vague about what’s actually done. But as I understand it, the “understood content” is transmitted and the original data is reconstructed from that.
If that’s the case I’m highly skeptical about the “losslessness”, i.e. that the output is exactly the input.
But there are more things to consider, like de-/compression speed and compatibility. A scheme like this only works if the decompressor’s predictions match the compressor’s bit for bit, so I would guess it’s pretty hard to reconstruct data with a different LLM or even a newer version of the same one. You’d have to make sure you can still run a compatible LLM when you decompress your data some years later.
And when it comes to speed I doubt it’s anywhere near as fast as using zlib (which is neither the fastest nor the best-compressing option…); see the quick baseline below.
And all that for a high risk of bricked data.
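For a rough sense of that baseline, assuming some large file named testdata lying around (gzip uses the same DEFLATE algorithm as zlib):
time gzip -c testdata > testdata.gz
time gunzip -c testdata.gz > /dev/null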
The only field where I see LLMs enhancing the productivity of competent developers is front-end stuff, where you really have to write a lot of bloat.
In every other scenario, for software developers who know what they’re doing, the simple or repetitive things are mostly solved by writing a fucking function, class or library. In today’s world developers are mostly busy designing and implementing rather complex systems or managing legacy code, where LLMs are completely useless.
We’re developing measurement systems and data analysis tools for the automotive industry and we tried several LLMs extensively in our daily business. Not a single developer was happy with the results.