In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow generative AI systems to scrape the internet.

  • FaceDeer@kbin.social · 11 months ago

    Copyright law already allows generative AI systems to scrape the internet. You need to change the law to forbid something; it isn’t forbidden by default. Currently, if something is published publicly, it can be read and learned from by anyone (or anything) that can see it. Copyright law only prevents making copies of it, which a large language model does not do when trained on it.

      • FaceDeer@kbin.social · 11 months ago

        It is not a derivative work; the model does not contain any recognizable part of the original material it was trained on.

            • frog 🐸@beehaw.org · 11 months ago

              The point is that if the model doesn’t contain any recognisable parts of the original material it was trained on, how can it reproduce recognisable parts of the original material it was trained on?

              • ricecake@beehaw.org · 11 months ago

                That’s sorta the point of it.
                I can recreate the phrase “apple pie” in any number of styles and fonts using my hands and a writing tool. Would you say that I “contain” the phrase “apple pie”? Where is the letter ‘p’ in my brain?

                Specifically, the AI contains the relationships between sets of words, and sets of relationships between lines, contrasts, and colors.
                From there, it knows how to take a set of words and make an image that proportionally replicates those line, contrast, and color relationships.
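
                To make that concrete, here is a deliberately tiny sketch of what “containing relationships rather than copies” can look like. It is not any real image or language model; the corpus, window size, and vector size are all invented for illustration.

                ```python
                # Toy example only: learn word "relationships" from a
                # tiny corpus via co-occurrence counts + SVD. Real
                # models are far larger, but what gets stored is the
                # same kind of thing: arrays of numbers, not the text.
                import numpy as np

                corpus = [
                    "the quick brown fox jumps over the lazy dog",
                    "a lazy dog sleeps while the quick fox runs",
                ]
                sents = [s.split() for s in corpus]
                vocab = sorted({w for s in sents for w in s})
                idx = {w: i for i, w in enumerate(vocab)}

                # Count how often each pair of words appears within
                # two positions of each other.
                counts = np.zeros((len(vocab), len(vocab)))
                for s in sents:
                    for i, w in enumerate(s):
                        for j in range(max(0, i - 2), min(len(s), i + 3)):
                            if j != i:
                                counts[idx[w], idx[s[j]]] += 1

                # Compress the counts into 4 numbers per word. These
                # vectors are what the "model" contains.
                u, sv, _ = np.linalg.svd(counts)
                vectors = u[:, :4] * sv[:4]

                # Just four floats; the sentences cannot be read back
                # out of them.
                print(vectors[idx["fox"]])
                ```

                A real text-to-image model does something of the same flavor at vastly larger scale, pairing statistics over words with statistics over lines, contrast, and color.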

                You can probably replicate the Getty Images watermark close enough for it to be recognizable, but you don’t contain a copy of it in the sense that people typically mean.
                Likewise, because you can recognize the artist who produced a piece, you contain an awareness of the same relationships between color, contrast, and line that the AI does. I could show you a Picasso you were unfamiliar with, and you’d likely know it was him based on the style.
                You’ve been “trained” on his works, so you have internalized many of the key markers of his style. That doesn’t mean you “contain” his works.

      • BlameThePeacock@lemmy.ca · 11 months ago

        A human is a derivative work of its training data, thus a copyright violation if the training data is copyrighted.

        The difference between a human and AI is getting much smaller all the time. The training process is essentially the same at this point: show them a bunch of examples, then have them practice and give them feedback.
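
        As a rough illustration of that loop (toy numbers, not any particular model or framework), the whole “show examples, practice, feedback” cycle can fit in a few lines:

        ```python
        # Toy "training": the model is just two numbers (w, b) and the
        # examples come from y = 2x + 1. Everything here is invented
        # purely to show the example/practice/feedback structure.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 100)      # the examples shown to the model
        y = 2 * x + 1                    # the behaviour we want it to pick up

        w, b = 0.0, 0.0
        for _ in range(500):
            pred = w * x + b             # practice: attempt an answer
            err = pred - y               # feedback: how far off was it?
            w -= 0.1 * np.mean(err * x)  # adjust using that feedback
            b -= 0.1 * np.mean(err)

        print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
        ```

        Real generative models differ enormously in scale and architecture, but the loop has the same basic shape.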

        If that human is trained to draw on Disney art and then goes on to create similar-style art for sale, that isn’t a copyright infringement. Nor should it be.

        • Phanatik@kbin.social · 11 months ago

          This is stupid and I’ll tell you why.
          As humans, we have a perception filter. This filter is unique to every individual because it’s fed by our experiences and emotions. Artists make great use of this by producing art that leverages their view of the world; it’s why Van Gogh or Picasso is interesting: they had a unique view of the world that shows through their work.
          These bots do not have perception filters. They’re designed to break down whatever they’re trained on into numbers and decipher how the style is constructed so they can replicate it. They have no intention or purpose behind any of their decisions beyond straight replication.
          You would be correct if a human’s only goal were to replicate Van Gogh’s style, but that’s not every artist. With these art bots, that’s the only goal they will ever have.
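
          For what it’s worth, the “break it down into numbers” step looks roughly like this (invented caption and pixel values, purely illustrative):

          ```python
          # Text becomes integer token IDs, images become arrays of
          # pixel values; everything a model "sees" is numbers.
          caption = "starry night over the river"
          vocab = {w: i for i, w in enumerate(sorted(set(caption.split())))}
          token_ids = [vocab[w] for w in caption.split()]
          print(token_ids)   # [3, 0, 1, 4, 2] -- the words are now just integers

          # A tiny 2x3 "image" as grayscale values in [0, 1]:
          image = [
              [0.10, 0.85, 0.42],
              [0.07, 0.93, 0.51],
          ]
          # Style, contrast, and composition are all computed from
          # arrays like this, never from the painting as a painting.
          ```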

          I have to repeat this every time there’s a discussion on LLMs or art bots:
          The imitation of intelligence does not equate to actual intelligence.

          • frog 🐸@beehaw.org · 11 months ago

            Absolutely agreed! I think if the proponents of AI artwork actually had any knowledge of art history, they’d understand that humans don’t just iterate the same ideas over and over again. Van Gogh, Picasso, and many others did work that was genuinely unique and not just derivative of what had come before, because they brought more to the process than just looking at other artworks.

  • Gutless2615@ttrpg.network · 11 months ago

    It’s not turning copyright law on its head; in fact, asserting that copyright needs to be expanded to cover training a data set IS turning it on its head. This is not a reproduction of the original work; it’s learning about that work and making a transformative use of it. A generative work using a trained dataset isn’t copying the original; it’s learning about the relationships that original has to the other pieces in the data set.

    • phillaholic@lemm.ee · 11 months ago

      The line between learning and copying is being blurred by AI. Imagine if you could replay a movie any time you like in your head just from watching it once. Current copyright law wasn’t written with that in mind. It’s going to be interesting to see how this goes.

      • ricecake@beehaw.org · 11 months ago

        Imagine being able to recall the important parts of a movie, its overall feel, and its significant themes and attributes after watching it only once.

        That’s significantly closer to what current AI models do. It’s not copyright infringement that I can play back significant chunks of some movies precisely in my head: first, because memory being owned by someone else is a horrifying thought, and second, because it’s not a distributable copy.

        • jarfil@beehaw.org · 11 months ago

          my head […] not a distributable copy.

          There has been an interesting counter-proposal to that: make all copies “non-distributable” by replacing 1:1 copying with AI:AI learning, so the new AI would never hold a 1:1 copy of the original.

          It’s in part embodied in the concept of “perishable software”: instead of having a 1:1 copy of an OS installed on your smartphone/PC, neural network hardware would “learn how to be a smartphone/PC”.

          Reinstalling would mean “killing” the previous software and training the device again.
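
          One concrete way to picture that “AI:AI learning” is distillation: a new model is fitted only to an existing system’s input/output behaviour, so it never holds a bit-for-bit copy of the original. A toy sketch, where the “teacher” function, the polynomial student, and all numbers are made up:

          ```python
          # Toy distillation: the student only ever sees the teacher's
          # answers, never its internals, yet ends up behaving the same.
          import numpy as np

          def teacher(x):
              # Stands in for the original software/model being replaced.
              return 2 * x**3 - x

          x_probe = np.linspace(-1, 1, 200)   # questions put to the teacher
          y_probe = teacher(x_probe)          # all the student ever sees

          # "Reinstalling" = refitting the student from scratch on those
          # answers; nothing is copied bit for bit.
          coeffs = np.polyfit(x_probe, y_probe, 3)

          print(teacher(0.5), np.polyval(coeffs, 0.5))   # essentially equal
          ```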

          • MachineFab812@discuss.tchncs.de · 11 months ago

            Right, because the cool part of upgrading your phone is trying to make it feel like it’s your phone, from scratch. Perishable software is anything but desirable, unless you enjoy having the very air you breathe sold to you.

            • jarfil@beehaw.org · 11 months ago

              Well, that depends on desirable “by whom”.

              Imagine being a phone manufacturer and having all your users run a black box that only you have the means to re-flash or upgrade, with software developers having to go through you so you can train users’ phones to “behave like they have the software installed”.

              It’s a dictatorial phone manufacturer’s wet dream.