• luciole@beehaw.org

    The actual research page is so awkward. The TLDR at the top goes:

    single portrait photo + speech audio = hyper-realistic talking face video

    Then a little lower comes the big red warning:

    We are exploring visual affective skill generation for virtual, interactive characters, NOT impersonating any person in the real world.

    No siree! Big “not what it looks like” vibes.

  • natural_motions@lemmynsfw.com

    In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

    …or open a dystopian hellscape of disinformation, scams, and corporate police-state control of information at an unprecedented scale! Three cheers for the reckless pursuit of technology for technology’s sake!

  • P03 Locke@lemmy.dbzer0.com

    Sigh, not this article again. No, they can’t “deepfake a person with one photo”. They can create a bad, uncanny-valley, 75%-accurate version of one.

  • esaru@beehaw.org

    I think this has an effect most people don’t consider: media will just lose its value as a trusted source of information. We’ll lose broadcast media as a trusted channel entirely, since anything could be faked. Humanity is back to “word of mouth”, I guess.

  • some_guy@lemmy.sdf.org

    The eyes still have uncanny-valley vibes, but that’s because I’m looking for it. If I weren’t watching demo videos about generated video, I might not have noticed.

  • casmael@lemm.ee

    Why would you develop this technology? I simply don’t understand. All involved should be sent to jail. What the fuck.

    • Even_Adder@lemmy.dbzer0.com

      They worded the headline that way to scare you into that reaction. They’re only interested in telling you about the negative uses because that drives engagement.

      • BolexForSoup@kbin.social

        I understand AI evangelists (which you may or may not be, idk) look down on us Luddites who have the gall to ask questions, but can you seriously not see any potential issue with this technology without some sort of restrictions in place?

        You can’t see why people are a little hesitant in an era where massive international corporations are endlessly scraping anything and everything on the Internet to dump into LLMs et al. to use against us to make an extra dollar?

        You can’t see why people are worried about governments and otherwise bad actors having access to this technology at scale?

        I don’t think these people should be locked up or all AI usage banned. But there is definitely a middle ground between absolute prohibition and no restrictions at all.

        • barsoap@lemm.ee

          None of those concerns are new in principle: AI is the current thing that makes people worry about corporate and government BS, but corporate and government BS isn’t new.

          Then: the cat is out of the bag, and you won’t be able to put it back in. If those things worry you, the strategic move isn’t to hope that, suddenly and out of pretty much nowhere, capitalism and authoritarianism will fall never to be seen again, but to a) try our best to get sensible regulations in place (the EU has done a good job, IMO), and b) own the tech. As in: develop and use tech and models that can be self-hosted, that give people control over AI instead of leaving them beholden to whatever corporate or government actors deem we should be using. It’s FLOSS all over again.

          Or, to be an edgelord to some of the artists out there: if you don’t want your creative process to end up dependent on Adobe’s AI stuff, then help train models that aren’t owned by big CGI. No tech knowledge necessary; this would be about providing a trained eye, as well as data (i.e. pictures) that let the model understand what it did wrong, according to that eye.

          • BolexForSoup@kbin.social

            I said:

            I don’t think these people should be locked up or all AI usage banned. But there is definitely a middle ground between absolute prohibition and no restrictions at all.

            I have used AI tools as a shooter/editor for years, so I don’t need a lecture on this, and I did not say any of these concerns are new. Obviously, the implication is that AI greatly enables all of these actions to a degree we’ve never seen before. Just like cell phones didn’t invent distracted driving but made it exponentially worse and necessitated more specific direction/intervention.

    • some_guy@lemmy.sdf.org

      They mentioned one potential use that I thought has value and that I hadn’t considered. For video conferencing, this could render people’s faces locally instead of transmitting video, greatly reducing the bandwidth needed. I don’t think that outweighs the massive harms this technology will unleash, but at least there was some use that would be legitimate and beneficial.
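      That bandwidth claim can be sanity-checked with rough arithmetic: sending only speech audio plus facial keypoints (rendered into a face locally) instead of an encoded video stream. All figures below are illustrative assumptions, not measurements from the paper.

```python
# Back-of-envelope comparison: assumed full-video call bitrate vs.
# an "avatar" stream of audio + facial keypoints rendered locally.
# All numbers are illustrative assumptions.

video_kbps = 1500  # assumed bitrate of a typical 720p video call
audio_kbps = 24    # assumed speech-quality audio bitrate

# Assume ~70 facial keypoints, 2 coordinates each,
# 16 bits per coordinate, sent at 30 frames per second.
keypoints = 70
keypoint_kbps = keypoints * 2 * 16 * 30 / 1000  # 67.2 kbps

avatar_kbps = audio_kbps + keypoint_kbps        # 91.2 kbps
savings = 1 - avatar_kbps / video_kbps
print(f"avatar stream: {avatar_kbps:.1f} kbps, "
      f"~{savings:.0%} less than full video")
```

      Even with generous assumptions for the keypoint stream, the avatar approach comes out an order of magnitude cheaper than raw video, which is presumably the use case they had in mind.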

      I’m someone who has a moral compass and I don’t like that scammers will abuse this shit so I hate it. But there’s no keeping it locked away. It’s here to stay. I hate the future / now.

      • Lem Jukes@lemm.ee

        Also, I would argue that sending actual video of what is happening in front of the camera is kind of the entire point of a video call. I don’t see any utility in a simulated face-to-face interaction where neither of you is looking at an actual image of the other person.

      • flora_explora@beehaw.org

        Wouldn’t you then have to run the AI locally on a machine (which probably draws a lot of power and memory) or use it via the cloud (which depends on bandwidth, just like a video call)? I don’t really see where this technology would actually be useful. Sure, if it were only a minor computation, like taking a picture or video with any modern smartphone. But computing an entire face and voice seems much more demanding than that, and not really feasible for the usual home device.

        • barsoap@lemm.ee

          A model that can only generate frontal-to-profile views of heads would be quite small; I can totally see that kind of thing running on current consumer GPUs in real time. Near real time is already possible with SDXL-based models with some speedup tricks applied, as long as you have a mid-range gaming GPU, and those models are significantly more general. It’s not like this model would need to generate spaghetti and sports cars alongside the head.
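          A rough frame-budget calculation shows why real time seems plausible for a narrow head-only model. The step count and per-step timings below are assumed for illustration, not benchmarks of any actual model.

```python
# Back-of-envelope frame budget for real-time local generation.
# Step counts and timings are assumed, not measured.

fps = 30
budget_ms = 1000 / fps  # ~33.3 ms available per frame

# Assume a distilled, head-only model needs ~4 denoising steps
# at ~6 ms each on a mid-range GPU, plus ~5 ms decode overhead.
steps, step_ms, decode_ms = 4, 6.0, 5.0
frame_ms = steps * step_ms + decode_ms  # 29 ms

verdict = "real-time" if frame_ms < budget_ms else "too slow"
print(f"budget {budget_ms:.1f} ms, estimated {frame_ms:.1f} ms -> {verdict}")
```

          Under those assumptions the estimate fits inside a 30 fps budget; a more general model needing 20+ steps would not, which is exactly why narrowing the domain matters.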