Here in the USA, you have to be afraid for your job these days. Layoffs are rampant everywhere due to outsourcing, and now we have AI on the horizon promising to make things more efficient, but we all know what it's actually going to be used for: they want to automate away everything. People packaging up goods for shipping, white-collar jobs like analytics, business intelligence, customer service, chat support. Any job that takes a low or moderate amount of effort or intellectual ability is threatened by AI. But once AI takes all these jobs away and shrinks the amount of labor required, what are all these people going to do for work? It's not like you can easily retrain a business intelligence engineer to go do something else like HVAC or nursing. So you have the entire tech industry basically folding in on itself, trying to win the rat race and grab the few jobs left over…

But it should be pretty obvious that you can't run an entire society with no jobs. Because then people can't buy groceries, groceries don't sell, so grocery stores start hurting, and then they can't afford to employ cashiers and stockers, and the entire thing starts crumbling. This is the future of AI, basically. The more we automate, the less there is for people to do, so no jobs, no income, no way to survive…

Like, how long until we realize how detrimental AI is to society? 10 years? 15?

  • Cocodapuf@lemmy.world · 4 days ago (edited) · +11/−2

    I can answer that. We won’t.

    We’ll keep iterating and redesigning until we have actual working general intelligence AI. Once we’ve created a general intelligence, it will be a matter of months or years before it’s a superintelligence that far outshines human capabilities. Then you have a whole new set of dilemmas. We’ll struggle with those ethical and social dilemmas for some amount of time, until the situation flips and the real ethical dilemmas are shouldered by the AIs: How long do we keep these humans around? Do we let them continue to go to war with each other? Do they own this planet? Etc.

    • LANIK2000@lemmy.world · 3 days ago (edited) · +3

      Assuming we can get AGI. So far there’s been little proof we’re any closer to getting an AI that can actually apply logic to problems that aren’t popular enough to be spelled out a dozen times in the dataset it’s trained on. Ya know, the whole scoring perfectly on well-known and respected college tests, but failing to solve slightly altered riddles for children? Being literally incapable of learning new concepts is a pretty major pitfall if you ask me.

      I’m really sick and tired of this “we just gotta make a machine that can learn and then we can teach it anything” line. It’s nothing new; people have been saying this shit since fucking 1950, when Alan Turing wrote it in a paper. A machine looking at an unholy amount of text and evaluating, based on a new prompt, which word is most likely to follow IS NOT LEARNING!!! I was sick of this dilemma before LLMs were a thing, but now it’s just mind numbing.
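
      To be concrete about what “most likely word to follow” means, here’s a deliberately dumb toy sketch: count word bigrams in a tiny made-up corpus, then predict the most frequent follower. Real LLMs use neural networks over subword tokens and vastly more data, but the training objective has the same shape.

      ```python
      # Toy next-word prediction via bigram counts (illustration only,
      # nothing like a real transformer; corpus is made up).
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat . the cat ate the fish .".split()

      # Count how often each word follows each other word.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def predict_next(word):
          """Return the word most often seen after `word` in the corpus."""
          return follows[word].most_common(1)[0][0]

      print(predict_next("the"))  # "cat" (follows "the" twice; "mat", "fish" once each)
      ```

      The point of the toy: the model never “understands” cats or mats, it just reproduces the statistics of its training text, which is exactly the objection above.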

      • Cocodapuf@lemmy.world · 3 days ago · +2

        AI developers are like the modern version of alchemists. If they can just turn this lead into gold, this one simple task, they’ll be rich and powerful for the rest of their lives!

        Transmutation isn’t really possible, not the way they were trying to do it. Perhaps AI isn’t possible the way we’re trying to do it now, but I doubt that will stop many people from trying. And I do expect it will be possible somehow; we’ll likely get there someday, just not soon.