The logical end of "the solution to bad speech is better speech" has arrived in the age of state-sponsored social-media propaganda bots versus AI-driven bots arguing back

  • Touching_Grass@lemmy.world · 1 year ago

    to wipe out humanity

    Does it? Doesn’t that threat exist even without AI? In its current state, it’s a glorified chatbot. Get rid of it, and we still have every think tank filled with quants, statisticians, social scientists, and marketing teams pushing all that propaganda. It’s not AI doing it. It’s humans.

    But AI also has the potential to develop new medicines and new materials. It has the potential for a lot more good.

    It also has a lot of potential to give people powerful pocket access to basic services they normally wouldn’t have. Imagine an AI trained to help people sort out their finances, act like an r/askdocs, or help with questions about new hobbies.

    So where you see panic, other people see hope. And it isn’t the inventor’s job to tell you or others how to use something.

    If we destroy ourselves with every bit of advancement, then we deserve it. It would be an inevitability.

        • etuomaala@sopuli.xyz · 1 year ago

          I’m saying that the world is facing many threats, and all of them need to be addressed, including some posed by AI.

            • etuomaala@sopuli.xyz · 1 year ago

              Well, a lot of threats from AI are popularised today. Some of them are fake, crazy, or stupid, but others are real. Here are the ones that, in my opinion, are real.

              In trying to use AI for good, we may give too much power to an AI that we do not understand well enough, and whose motivations are not clear. Worse still, this AI might have motivations that we humans could not understand, even if we wanted to. Such an AI may not value humans at all.

              Another threat is that a country like China creates an AI and puts it in charge of their soft power program. It may be possible for an AI to be so intelligent that it could manipulate the world in ways that humans physically could not understand. This could allow China to not only literally take over the world, but also ensure that everybody with any kind of power is absolutely thrilled about it. (Anybody without the power to stop this AI would be disregarded in its calculations, no matter how much they hate the AI. This could easily include most humans.)

              One thing AI has already been doing for a decade, to our great detriment, is optimising the ad revenue of search engines and social networks with total disregard for all other consequences.

              None of these threats are new. People have been talking about them for years now. You should pay more attention.

    • HandwovenConsensus@lemm.ee · 1 year ago

      Not to mention that even if one inventor decides not to release their creation, eventually someone else will make something similar.

      • etuomaala@sopuli.xyz · 1 year ago

        Would you say then that our efforts to hinder access to dangerous information aren’t working?