"LLMs learn the same way a person does, it's not plagiarism"
-
"LLMs learn the same way a person does, it's not plagiarism"
This is a popular self-justification in the art-plagiarist community. It's frustrating to read because it's philosophically incoherent, but making the philosophical argument is annoyingly difficult, particularly if your interlocutor maintains a deliberate ignorance of the humanities (which you already know they do). But there is a simpler, mechanical argument you can make instead: "learning" is inherently mutual.
-
"LLMs learn the same way a person does, it's not plagiarism"
This is a popular self-justification in the art-plagiarist community. It's frustrating to read because it's philosophically incoherent but making the philosophical argument is annoyingly difficult, particularly if your interlocutor maintains a deliberate ignorance about the humanities (which you already know they do). But there is a simpler mechanical argument you can make instead: "learning" is inherently mutual.
A teacher “learning more from their students” is such a common observation that it's a cliché. Colleagues learn from each other in professional settings. Actual artists are in conversation with one another, not just learning from a static historical canon. Etc., etc.
LLMs cannot do this. The output an LLM produces carries a sort of poisonous residue that degrades the reasoning capacity of any other LLM trained on it; this is a well-known problem in the field, known as "model collapse".
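A minimal sketch of the dynamic, assuming Python; the toy Gaussian setup and every number in it are illustrative assumptions, not anything from this thread. Each generation of "model" is fitted only to the previous generation's output, so estimation error compounds instead of being corrected by fresh human data:

import random
import statistics

random.seed(0)

# Generation 0: fresh "human" data with mean 0.0 and standard deviation 1.0.
data = [random.gauss(0.0, 1.0) for _ in range(200)]

for generation in range(20):
    # "Train" a toy model: estimate the mean and stdev of the current data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: mean={mu:+.3f} stdev={sigma:.3f}")
    # The next generation trains only on this model's output; with no fresh
    # human data, each generation's sampling error accumulates unchecked.
    data = [random.gauss(mu, sigma) for _ in range(200)]

Running it shows the estimates wandering away from the original 0.0 and 1.0, with nothing to pull them back; at LLM scale, the same recursive-training loop is what the model-collapse literature studies, with rare "tail" behaviours reported to vanish first.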
-
Thus, when an LLM absorbs some stolen data, what is happening cannot be 'learning'; it's something else. When we call it 'training', that's a metaphor, not a description. In reality, it is a parasitic activity that requires fresh non-LLM-generated information from humans in order to be sustainable.
-
(This is not an original thought. Although I've expanded on it a bit here, I have sadly lost the original citation I wanted to use, and search on Mastodon is intentionally dysfunctional; if you know who I'm paraphrasing here, feel free to link it up in a reply.)
-
@glyph I hate that they have also taken the phrase "model collapse" from us. That should only be used to describe what happens when you party too hard with Duran Duran. https://www.youtube.com/watch?v=sSMbOuNBV0s
-
@glyph Or, more colourfully, the “Habsburg Singularity”