Apparently, stealing other people’s work to create a product for money is now “fair use”, according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • luciole (he/him)@beehaw.org · 1 year ago

    There’s this linguistic problem: when one word is used for two different things, it becomes difficult to tell them apart. “Training” or “learning” is a very poor choice of word to describe the calibration of a neural network. The actor and the action are both fundamentally different from the accepted meaning. To start with, human learning is active whereas machine learning is strictly passive: it’s something done by someone, with the machine as a tool. Teachers know very well that’s not how it happens with humans.

    When I compare training a neural network with how I trained to play clarinet, I fail to see any parallel. The two are about as close as a horse and a seahorse.
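    The “calibration” framing above can be made concrete. A minimal sketch (pure Python, illustrative only; the one-parameter “network” and the `train` helper are invented for this example): the model is just a number, and “learning” is an external optimization procedure repeatedly adjusting it. Nothing in the loop is done *by* the model; it is done *to* it.

```python
# "Training" as passive calibration: the model is a single weight w,
# and the training loop is an outside procedure that nudges it.
def train(pairs, lr=0.1, epochs=200):
    w = 0.0  # initial weight of a one-parameter "network": y = w * x
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x
            grad = 2 * (pred - y) * x  # gradient of the squared error
            w -= lr * grad             # the only "learning" that happens
    return w

# Calibrate against data generated by y = 3x; w converges toward 3.
w = train([(1, 3), (2, 6), (3, 9)])
print(round(w, 3))  # → 3.0
```

    Real neural-network training is the same loop at scale: millions of weights instead of one, but still arithmetic applied by an external procedure, with no will or activity on the machine’s part.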

    • intensely_human@lemm.ee · 1 year ago

      Not sure what you mean by passive. It takes a hell of a lot of electricity to train one of these LLMs so something is happening actively.

      I often interact with ChatGPT 4 as if it were a child. I guide it through different kinds of mental problems, having it take notes and evaluate its own output, because I know our conversations become part of its training data.

      It feels very much like teaching a kid to me.

      • luciole (he/him)@beehaw.org · 1 year ago

        I mean passive in terms of will. Computers want nothing and do nothing on their own; they’re machines that execute commands.

        The sense that you’re teaching a child when you feed natural-language input to an LLM until you’re satisfied with the output is known as the ELIZA effect. To quote Wikipedia:

        In computer science, the ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface. The effect is a category mistake that arises when the program’s symbolic computations are described through terms such as “think”, “know” or “understand.”