Not the best news in this report. We need to find ways to do more.

  • AnonTwo@kbin.social · 1 year ago

    Is this the same report that was brought up earlier, where it turned out Twitter has the exact same issue?

    • Kbin_space_program@kbin.social · 1 year ago

      Or that Reddit has had this issue in spades.

      Frankly, listing the Fediverse's ability to cut off problem servers as a drawback rather than an advantage is, in my opinion, wrong.

  • blazera@kbin.social · 1 year ago

    Basically, we don't know what they found, because they just looked up hashtags and then didn't look at the results, for ethics reasons. They don't even say which hashtags they searched.

    • Aesthesiaphilia@kbin.social · 1 year ago

      We do know they only found, what, 112 actual images of CP? That’s a very small number. I’d say that paints us in a pretty good light, relatively.

      • blazera@kbin.social · 1 year ago

        It says 112 instances of known CSAM. But that's based on their methodology, right? And their methodology isn't actually looking at the content; it's looking up hashtags and checking whether Google SafeSearch thinks the result is explicit, which I'm pretty sure doesn't differentiate what the subject of the explicit content is. It's just going to try to detect breasts or genitals, I imagine.

        Though they do give a few damning examples of things like actual CP trading, but they also note that those have been removed.
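
        A rough sketch of what that kind of automated pipeline looks like is below. The helper names are placeholders of mine, not the report's actual tooling; the point is only that nothing in the flow ever tells you who is depicted, just whether something matches a known hash or trips a generic nudity classifier.

        ```python
        # Hypothetical sketch of an automated scan like the one described above:
        # gather posts by hashtag, hash-match media against a database of known
        # material, and run anything unmatched through a generic explicit-content
        # classifier (e.g. SafeSearch). No human ever views the media.
        from dataclasses import dataclass


        @dataclass
        class ScanResult:
            known_matches: int     # media matching a known-CSAM hash database
            classifier_flags: int  # media a generic nudity classifier flagged


        def scan_hashtags(hashtags, posts_for_hashtag, matches_known_hashes, looks_explicit):
            """Count flags without inspecting content; callers supply the detectors."""
            result = ScanResult(known_matches=0, classifier_flags=0)
            for tag in hashtags:
                for media in posts_for_hashtag(tag):
                    if matches_known_hashes(media):
                        result.known_matches += 1
                    elif looks_explicit(media):
                        # A generic classifier only says "explicit", not whose
                        # body is depicted -- exactly the limitation above.
                        result.classifier_flags += 1
            return result
        ```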

    • Takatakatakatakatak@lemmy.dbzer0.com · 1 year ago

      There are communities on NSFW Lemmy where people intentionally present as children engaged in sexual abuse scenarios. They're adults, and this is their kink, which everyone is supposed to tolerate and pretend is okay.

      See defederation drama over the last couple of days. What I’m saying is, the hashtags mean nothing.

      • hightrix@kbin.social · 1 year ago

        > There are communities on NSFW Lemmy where people intentionally present as children engaged in sexual abuse scenarios.

        If you are referring to the community that was cited as the reason for defederation, this is completely false. The community in question is adorableporn, which is extremely similar to the subreddit of the same name. No one, in any manner, in either community, presents as a child. Yes, the women who post there tend to be on the shorter and thinner side, but calling short, thin adults 'children' is not being honest.

        To be clear, this community is about petite women. This community is NOT about women with a kink to present as a child.

          • hightrix@kbin.social · 1 year ago

            What other bait communities? We can’t just accept “think of the children” as an excuse. That doesn’t work.

            Yes, no one wants actual CSAM to show up in their feed; we can all completely agree on that. But just because some middle-aged woman can't tell the difference between a 20-year-old and a 15-year-old doesn't make images of the 20-year-old CSAM.

  • shrugal@lemm.ee · edited · 1 year ago

    Why do they just mention absolute numbers, instead of comparing them to similar platforms? All they said was that there is CSAM on the Fediverse, but that’s also true for centralized services and the internet as a whole. The important question is whether there is more or less CSAM on the Fediverse, no?

    This makes it look very unscientific to me. The Fediverse might have a CSAM problem, but you wouldn’t know it from this study.
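
    To make the denominator point concrete, here is a toy comparison with entirely made-up numbers (the function and figures are mine, not the study's); it only illustrates the rate calculation the study omits.

    ```python
    # Toy illustration: absolute counts mean little without a denominator.
    # All figures below are invented purely for the example.
    def per_million(flagged: int, total_posts: int) -> float:
        """Flagged items per million posts."""
        return flagged / total_posts * 1_000_000


    samples = {
        "federated sample": (100, 2_000_000),       # (flagged, posts scanned) -- made up
        "centralized sample": (4_000, 60_000_000),  # made up
    }

    for name, (flagged, total) in samples.items():
        print(f"{name}: {per_million(flagged, total):.1f} flagged per million posts")
    ```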

  • MyFairJulia@lemmy.world · 1 year ago

    Why would someone downvote this post? We have a problem and it’s in our best interest to fix that.

    • chaogomu@kbin.social · 1 year ago

      The report (if you can still find a working link) said that the vast majority of material that they found was drawn and animated, and hosted on one Mastodon instance out of Japan, where that shit is still legal.

      Every time that little bit of truth comes up, someone reposts the broken link to the study, screaming about how the entire Fediverse is riddled with child porn.

        • ZILtoid1991@kbin.social · 1 year ago

          It's Pawoo, formerly Pixiv's own instance, which is infamous for this kind of content, and those are still "just drawings" (unless some artists are using illegal real-life references).

          • dustyData@lemmy.world · 1 year ago

            They're using generative AI to create photorealistic renditions now, causing everyone who finds out about it to have a moral crisis.

              • Derproid@lemm.ee · 1 year ago

                … I mean … idk … If the argument is that the drawn version doesn't harm kids and gives pedos an outlet, is an AI-generated version any different?

                • brain_pan@infosec.pub · 1 year ago

                  IMO, the dicey part of the matter is how much of the AI's training dataset is made up of actual images of children.