  • 0 Posts
  • 45 Comments
  • Joined 4 months ago
  • Cake day: February 20th, 2025

  • The only field where I see LLMs enhancing the productivity of competent developers is front-end stuff, where you really have to write a lot of bloat.

    In every other scenario, for software developers who know what they’re doing, the simple or repetitive things are mostly solved by writing a fucking function, class, or library (see the sketch below). In today’s world developers are mostly busy designing and implementing rather complex systems or managing legacy code, where LLMs are completely useless.

    We’re developing measurement systems and data analysis tools for the automotive industry, and we tried several LLMs extensively in our daily business. Not a single developer was happy with the results.
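
    To make the “just write a function” point concrete, here’s a trivial sketch in Python; the validation scenario and field names are made up for illustration, not taken from our actual tooling:

    ```python
    # Repetition that tempts people to reach for an LLM usually just wants an
    # abstraction. Instead of generating a near-identical check per field:
    def require(record: dict, field: str, expected: type):
        """Return record[field], raising a uniform error if missing or mistyped."""
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], expected):
            raise TypeError(f"{field} must be {expected.__name__}")
        return record[field]

    # One helper replaces dozens of copy-pasted (or LLM-generated) blocks.
    sample = {"sensor_id": 42, "unit": "m/s^2"}
    sensor_id = require(sample, "sensor_id", int)
    unit = require(sample, "unit", str)
    ```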


  • That might be the case. But more often than not, it’s WAY too easy to see that a decision is bad for anyone to credibly argue that we can’t implement measures against it.

    In this case we “just” need laws that prohibit any infrastructure from depending on a few foreign entities and require it to be completely independent where reasonably possible. Diversification or elimination of dependencies, as a law.

    You can’t rely on foreign proprietary software like Teams for public facilities and infrastructure if there are reasonable alternatives.

    You can’t rely only on Russian oil if other countries are available for trade.


  • We should start making laws and frameworks that prevent us from making bad decisions in the future. Using Microsoft and its products was always a bad decision, and fixing it now is way more expensive than whatever the arguments against Linux and FOSS software amounted to over the last two decades. It was just easy and convenient at the time.

    Being dependent on Russia for oil didn’t turn out great either.

    But I just see people talking about how to change things for the better, never about how to prevent silly things in the future. I’d rather be in a situation where we don’t have to fix things.


  • OK, so the article is very vague about what’s actually being done. But as I understand it, the “understood content” is transmitted and the original data is reconstructed from that.

    If that’s the case, I’m highly skeptical about the “losslessness”, i.e. that the output is exactly the input.

    But there are more things to consider, like de-/compression speed and compatibility. I would guess it’s pretty hard to reconstruct data with a different LLM, or even a newer version of the same one, so you have to make sure you decompress your data some years later with a compatible LLM (see the sketch after this comment).

    And when it comes to speed, I doubt it’s anywhere near as fast as using zlib (which is neither the fastest nor the best-compressing option out there…); the second snippet below puts rough numbers on that.

    And all that for a high risk of bricked data.
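
    The article doesn’t say how the reconstruction actually works, so the following is a hedged illustration only, not the article’s method. Schemes like LLMZip get genuine losslessness by having compressor and decompressor run the exact same deterministic model and transmitting only the rank of each true symbol in the model’s predictions. The toy below stands in for the LLM with a simple character-count model; what it demonstrates is the compatibility point: any drift between the two models corrupts everything from the first mismatch on.

    ```python
    # Toy rank-based compressor in the spirit of LLMZip (an assumption; the
    # article doesn't specify its method). Both sides replay the SAME
    # deterministic model, so only symbol ranks need to be transmitted.
    from collections import Counter, defaultdict

    class ToyModel:
        """Deterministic order-1 character model, updated as the text streams by."""
        def __init__(self):
            self.counts = defaultdict(Counter)

        def ranked(self, context: str) -> list:
            # All 256 byte values, most frequent after `context` first; ties
            # broken by code point so the ordering is fully deterministic.
            seen = self.counts[context]
            return sorted(map(chr, range(256)), key=lambda c: (-seen[c], c))

        def update(self, context: str, char: str) -> None:
            self.counts[context][char] += 1

    def compress(text: str) -> list:
        model, ranks, context = ToyModel(), [], ""
        for char in text:  # assumes code points < 256
            ranks.append(model.ranked(context).index(char))
            model.update(context, char)
            context = char
        return ranks  # a real codec would entropy-code these (frequent = rank 0)

    def decompress(ranks: list) -> str:
        model, out, context = ToyModel(), [], ""
        for rank in ranks:
            char = model.ranked(context)[rank]
            model.update(context, char)  # identical updates keep models in sync
            out.append(char)
            context = char
        return "".join(out)

    data = "abracadabra abracadabra"
    assert decompress(compress(data)) == data  # lossless only while models match
    ```

    Replace the model on one side with one whose counts differ even slightly (say, a “newer version of the same one”) and decompress() silently returns garbage from the first mismatching rank onward: bricked data.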
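    And for the speed point, a quick baseline you can run yourself; the input is synthetic and the timings are obviously machine-dependent:

    ```python
    import time
    import zlib

    data = b"measurement systems and data analysis tools " * 100_000  # ~4.5 MB

    t0 = time.perf_counter()
    packed = zlib.compress(data, 6)         # zlib's default-ish level
    t1 = time.perf_counter()
    assert zlib.decompress(packed) == data  # lossless, trivially verifiable
    t2 = time.perf_counter()

    # On ordinary hardware this finishes in well under a second; an LLM
    # decoding at the typical tens-to-hundreds of tokens per second would
    # need minutes for the same few megabytes.
    print(f"{len(data)} -> {len(packed)} bytes; "
          f"compress {t1 - t0:.3f}s, decompress {t2 - t1:.3f}s")
    ```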