"LLMs learn the same way a person does, it's not plagiarism"

Glyph (#1)

    "LLMs learn the same way a person does, it's not plagiarism"

    This is a popular self-justification in the art-plagiarist community. It's frustrating to read because it's philosophically incoherent but making the philosophical argument is annoyingly difficult, particularly if your interlocutor maintains a deliberate ignorance about the humanities (which you already know they do). But there is a simpler mechanical argument you can make instead: "learning" is inherently mutual.

Glyph (#2)

A teacher “learning more from their students” is such a common observation that it is a cliché. Colleagues mutually learn from each other in professional settings. Actual artists are in conversation with one another, not just learning from a static historical canon. Etc., etc.

LLMs cannot do this. The output that an LLM produces carries a sort of poisonous residue that degrades the reasoning capacity of any other LLM trained on it; this is a well-known problem in the field, known as "model collapse".
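
To make the mechanical point concrete, here is a toy sketch (everything in it is made up for illustration; it is not code from the model-collapse literature): treat a "model" as nothing but a word-frequency table, and "train" each generation only on a finite sample of the previous generation's output. Any rare word that fails to appear in a sample is lost for good, so the model's diversity can only ratchet downward:

```python
import random
from collections import Counter

random.seed(0)

# Generation 0 is "trained" on human writing: a Zipf-like vocabulary
# with a long tail of rare words.
vocab = [f"word{i}" for i in range(100)]
weights = [1.0 / (i + 1) for i in range(100)]

for generation in range(10):
    print(f"gen {generation}: distinct words the model can still produce: {len(vocab)}")
    # Each successor model is trained on only 500 tokens of its
    # predecessor's output...
    sample = random.choices(vocab, weights=weights, k=500)
    counts = Counter(sample)
    # ...so any word that happened not to be generated is gone forever;
    # nothing outside the model ever adds a word back.
    vocab = list(counts)
    weights = [counts[w] for w in vocab]
```

A human learner, by contrast, can reintroduce genuinely new material into the conversation; the frequency table has no mechanism for that, which is the point.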

Glyph (#3)

Thus, when an LLM absorbs some stolen data, what is happening cannot be "learning"; it's something else. When we call it "training", that's a metaphor, not a description. In reality, it is a parasitic activity that requires fresh non-LLM-generated information from humans in order to be sustainable.

Q.E.D. <https://en.wikipedia.org/wiki/Model_collapse>

Glyph (#4)

(This is not an original thought. Although I've expanded on it a bit here, I have sadly lost the reference to the original post I wanted to cite, and search on Mastodon is intentionally dysfunctional; if you know who I'm paraphrasing here, feel free to link it up in a reply.)

jwz (#5)

@glyph I hate that they have also taken the phrase "model collapse" from us. That should only be used to describe what happens when you party too hard with Duran Duran. https://www.youtube.com/watch?v=sSMbOuNBV0s

acb (#6)

@glyph Or, more colourfully, the “Habsburg Singularity”
