• snooggums@kbin.social
    10 months ago

    Exactly.

    AI moderation is just word and phrase filtering; the latter wasn’t done earlier because it is really complicated, given the vast number of possible combinations of words and contexts. It also has the same failure modes as word filtering: it ends up either overly restrictive to the point of hilarity, or it quickly shows that no matter what you filter, someone will find a way around it.
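    Both failure modes show up even in the simplest possible filter. A minimal sketch (the banned list and comments are hypothetical, just for illustration): naive substring matching flags innocent words while trivially spaced-out spellings sail through.

    ```python
    # Hypothetical banned list for illustration only.
    BANNED = ["ass"]

    def is_blocked(comment: str) -> bool:
        """Naive substring filter: block if any banned word appears anywhere."""
        text = comment.lower()
        return any(word in text for word in BANNED)

    # Overly restrictive: "classic" contains the banned substring.
    print(is_blocked("a classic mistake"))  # True (false positive)

    # Trivially evaded: spacing out the letters defeats the filter.
    print(is_blocked("a s s"))  # False (false negative)
    ```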

    • admiralteal@kbin.social
      10 months ago

      I mean, suppose the LLM bot is actually good at avoiding false positives/misunderstandings – doesn’t that simply remove one of the biggest weaknesses of old-fashioned keyword identification? I really just see this as a natural evolution of the technology and not some new, wild thing. It’s just an incremental improvement.

      What it absolutely does NOT do is replace the need for human judgement. You’ll still need an appeals process and a person at the wheel to deal with errors and edge cases. But it’s pretty easy to imagine an LLM bot doing at least as good a job as the average volunteer Reddit/Discord mod.

      Of course, it’s kind of a moot point. Running a full LLM bot as your automoderator, parsing every comment against some custom-designed model, would be expensive. I really cannot see it happening routinely, at least not with current tech costs. Maybe in a few years the prices will have come down enough, but not right now.

      • snooggums@kbin.social
        10 months ago

        suppose the LLM bot is actually good at avoiding false positives/misunderstandings

        No, I don’t think I will.