Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

119 Posts 91 Posters 36 Views
  • James Thomson

    Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

    Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

    Developers: Wheeeeeeeeee!

    #?.info :commodore:
    wrote last edited by
    #99

    @jamesthomson Yeah, I don't think so. Some are; a tiny minority brag about supposedly being 10x more productive without showing anything of value. But literally every dev I'm seeing is:

    1. Complaining about AI being everywhere and being forced to use it
    2. Complaining about slop bug reports
    3. Worried about layoffs that will also destroy the company that's laying them off, making a bad economy even worse.

    The minority cheering for it happens to be loud, but it's the same people, and many of them aren't devs to begin with, as evidenced by their LinkedIn style of writing.

    • Thomas Brand

      @jamesthomson DragThing now by ChatGPT.

      macfixer
      wrote last edited by
      #100

      @Eggfreckles @jamesthomson

      • James Thomson

        Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

        Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

        Developers: Wheeeeeeeeee!

        Ken Franqueiro
        wrote last edited by
        #101

        @jamesthomson Competent developers: *too aghast at how many of their dependencies now accept LLM slop in PRs to say anything*

        • tschenkel

          @owlex @the_other_jon

          I follow you on the informed vs ignorance argument.

          But, given that you list many of the ethical reasons against AI, there is little "informed use" that will also stand up to the ethical razor.

          The Luddites were not ignorant. They were the technically able, who knew how to operate the machines, but fought against using them BECAUSE they understood them.

          In my work I use deterministic scientific models, but I work with machine learning models as well. And all my colleagues (who are real experts in how neural networks work) are opposed to generative AI.

          Alexandra
          wrote last edited by
          #102

          @tschenkel @the_other_jon

          I appreciate this pushback.
          You're right that I could be more informed, and I'm actively working on it.

          For example: I recently learned about OpenAI's ties to ICE and have since switched to other models (local models) because of it. That's exactly what 'informed use' looks like to me. I am trying to learn about specific harms and adjusting accordingly.

          But here's where I still disagree with the Luddite comparison: The Luddites had a real choice to reject the machines. I don't have that choice anymore: I’m required to use AI at work, and personally, it helps me function with ADHD in ways that nothing else does.

          So my question remains: If I can't opt out entirely, isn't 'informed use and demanding regulation' better than 'uninformed use and silence'? I'm genuinely trying to navigate this topic, not to justify myself.

          Also I am really curious why your colleagues are all against generative AI. Would you please expand on that?

          • Vítor

            @owlex You haven’t asked me, but your questions appear to me to be in such good faith that I’ll try to provide a response. Specifically to:

            > Why is it wrong to use something critically while being aware of its problems? […] And when capitalism is forcing it into everything anyway, isn't informed usage better than ignorance?

            I don’t think your description fits the current state of ATP. Marco in particular¹ has become a bit of a mouthpiece for LLMs. He’s now actively spouting the fear mongering of “use it or you’re going to be left behind” and in general is profoundly focused on what the technology does *for him* while summarily ignoring the negative impact to others and society in general.

            Informed usage does not mean advocacy. What ATP is doing now is closer to the latter than the former. It has much praise, little criticism.

            ¹ Whom I agree with and publicly applaud on pretty much every Tim Cook criticism.

            Alexandra
            wrote last edited by
            #103

            @vitor

            Thank you for your goodhearted response ☺️

            It’s maybe just me, but I don’t feel like they are cheerleading AI exclusively, as they are also covering the problems with it (like the Anthropic book piracy). So for me it's still balanced; it can be different for you. (Also, sometimes I may zone out a bit.)

            Btw I agree about the 'left behind' aspect which Marco is talking about. I see this every day at work. My employees who are genuinely anxious about being replaced or left behind by AI. As a team lead, I'm trying to navigate that: helping people adapt while also acknowledging the real fear and harm. That's where regulation becomes critical. We can't just leave people to fend for themselves in this shift.

            • Stephen 🌈 (he/him)

              @owlex @the_other_jon An ethical position on something often requires sacrifice. We aren’t doing this to be mean to the podcast. We are doing it to attempt to influence the industry in another direction.

              The complexity of the situation doesn’t really have anything directly to do with what is ethical. It only has to do with how hard it is to see it. Are you arguing that the complexity makes it ok or that it is hard for you to see? Some of us can see the harm and are trying our best to make it visible.

              Those who provide the counterpoint don’t say anything about whether the harm will stop or somehow be mitigated really — they mostly just say, “Don’t be left behind.” Does that sound like a rational actor or an addict?

              My belief: it is absolutely wrong to feed this technological vampire that threatens to erase humanity. Don’t become a thrall. It doesn’t end well for them. 😊

              Alexandra
              wrote last edited by
              #104

              @firepoet @the_other_jon

              Thank you for this answer 😊

              I respect that position, and you're right that ethical stances often require sacrifice. But I think we're drawing different lines here. I don't see AI as something we can just starve by not using it. It's already everywhere, being used by corporations, governments, everyone. So for me, the question is: do I abstain entirely while others use it uncritically, or do I use it thoughtfully and keep pushing for better regulation?

              I am trying to navigate the reality I am living in.
              As I said in other posts it helps me navigate with my ADHD and also I’m kind of forced to use it at work.
              What I am doing is to advocate for using different models at work and talking with people about the problems of AI. All while being really excited how we can use it to make lives better, because it has cool use cases.

              Also, I am pragmatic: a singular boycott never helped much. We need regulations in place; that’s the most important thing.

              • John

                @airisdamon @owlex @the_other_jon It's not gonna fly. Apple doesn't release their source code. People still pay them money for some reason. Knowing what the code does is an infinitely easier step than (and a prerequisite to) controlling what code does via legislation. It doesn't matter what 'society should do'. Society will keep paying Apple. Apple will keep paying government to make sure it's never compelled to reveal what its code does to its users.

                Airis Damon
                wrote last edited by
                #105

                @mrkeen @owlex @the_other_jon There could be extralegal methods for democratization. I don't know. I'm not real enthused with the direction all this is heading toward.

                • James Thomson

                  Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                  Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                  Developers: Wheeeeeeeeee!

                  Mansour
                  wrote last edited by
                  #106

                  @jamesthomson The taboo against plagiarism in the arts doesn't exist in the software world. Some of the biggest names regularly copy code without attribution.

                  I remember when GitHub first launched support for Markdown. It was based on the work of a female Russian developer. Her code was copied, her name and attribution stripped, and replaced with a generic open source license.

                  • Alexandra

                    @firepoet @the_other_jon

                    Thank you for this answer 😊

                    I respect that position, and you're right that ethical stances often require sacrifice. But I think we're drawing different lines here. I don't see AI as something we can just starve by not using it. It's already everywhere, being used by corporations, governments, everyone. So for me, the question is: do I abstain entirely while others use it uncritically, or do I use it thoughtfully and keep pushing for better regulation?

                    I am trying to navigate the reality I am living in.
                    As I said in other posts it helps me navigate with my ADHD and also I’m kind of forced to use it at work.
                    What I am doing is to advocate for using different models at work and talking with people about the problems of AI. All while being really excited how we can use it to make lives better, because it has cool use cases.

                    Also, I am pragmatic: a singular boycott never helped much. We need regulations in place; that’s the most important thing.

                    Stephen 🌈 (he/him)
                    wrote last edited by
                    #107

                    @owlex @the_other_jon I wish you the best. While we are on opposite sides of this particular struggle I can respect your need to fix the broken system from the inside. Just be careful out there.. https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163

                    • James Thomson

                      Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                      Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                      Developers: Wheeeeeeeeee!

                      Juliette
                      wrote last edited by
                      #108

                      @jamesthomson Not all developers....

                      • Alexandra

                        @tschenkel @the_other_jon

                        I appreciate this pushback.
                        You're right that I could be more informed, and I'm actively working on it.

                        For example: I recently learned about OpenAI's ties to ICE and have since switched to other models (local models) because of it. That's exactly what 'informed use' looks like to me. I am trying to learn about specific harms and adjusting accordingly.

                        But here's where I still disagree with the Luddite comparison: The Luddites had a real choice to reject the machines. I don't have that choice anymore: I’m required to use AI at work, and personally, it helps me function with ADHD in ways that nothing else does.

                        So my question remains: If I can't opt out entirely, isn't 'informed use and demanding regulation' better than 'uninformed use and silence'? I'm genuinely trying to navigate this topic, not to justify myself.

                        Also I am really curious why your colleagues are all against generative AI. Would you please expand on that?

                        tschenkel
                        wrote last edited by
                        #109

                        @owlex @the_other_jon

                        I'd say with local models you are on the right track. The issue with most genAI is that there is no truly open source option. Even open-weight models are trained on closed training data and can only be trained by opaque entities (Meta).

                        'Informed use' of models that you control and 'demanding regulation' of the companies pushing the models is what I'm for as well.

                        However, I would expect the AI bubble to burst in the next few years. The ratio of investment to potential revenue is just too large for it to be economically viable in a sustainable fashion. That's why OpenAI et al. are pushing so hard. They need to keep the investment capital to pour money in and keep the overvaluation going up.

                        After that we'll have a new AI winter (I remember the end of the first and lived through the second, having done some AI work before it) - which will mean all the real AI applications will suffer.

                        I actually was enthusiastic when the LLMs came out, because they solved the natural language processing problem we tried to work out in the 80s. I really hoped we'd get the Star Trek computer, but we got MaaS (Mansplaining as a Service).

                        We'd need LLMs combined with a reliable knowledge engine. Scaling the NN won't lead to any emergent AGI - only to a collapse of personal computing (see the shortages in RAM, HDDs, ...) and of the environment.

                        You may be the avant-garde, running a local LLM on your old cobbled-together hardware, Gibson-style.

                        I read the same books the tech-bros reference as their inspiration, but I read them as warnings.

                        • James Thomson

                          Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                          Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                          Developers: Wheeeeeeeeee!

                          Chief Twat
                          wrote last edited by
                          #110

                          @jamesthomson OTOH, we give away tons of our results for free some of which you're using daily.

                          • Alexandra

                            @tschenkel @the_other_jon

                            I appreciate this pushback.
                            You're right that I could be more informed, and I'm actively working on it.

                            For example: I recently learned about OpenAI's ties to ICE and have since switched to other models (local models) because of it. That's exactly what 'informed use' looks like to me. I am trying to learn about specific harms and adjusting accordingly.

                            But here's where I still disagree with the Luddite comparison: The Luddites had a real choice to reject the machines. I don't have that choice anymore: I’m required to use AI at work, and personally, it helps me function with ADHD in ways that nothing else does.

                            So my question remains: If I can't opt out entirely, isn't 'informed use and demanding regulation' better than 'uninformed use and silence'? I'm genuinely trying to navigate this topic, not to justify myself.

                            Also I am really curious why your colleagues are all against generative AI. Would you please expand on that?

                            tschenkel
                            wrote last edited by
                            #111

                            @owlex @the_other_jon

                            Btw, the Luddites didn't really have a choice, either. As history shows, the machines were forced on them after all.

                            It wasn't the machines they were against, it was the abuse of them to increase/create a power imbalance that allowed the owners of the machines to create something similar to the slavery that was moving out.

                            Some parallels to how the means of computing are now seized and used against the same people who built them in the first place.

                            • James Thomson

                              Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                              Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                              Developers: Wheeeeeeeeee!

                              Bastet 魔王様 😈
                              wrote last edited by
                              #112

                              @jamesthomson
                              Now a question that comes to mind, if AI generated content, be it graphics, music, plot lines or the code that binds this all together, can't be copyrighted, can we, theoretically speaking, copy modern games that use AI in development freely? 🤔

                              • Stephen 🌈 (he/him)

                                @owlex @the_other_jon I wish you the best. While we are on opposite sides of this particular struggle I can respect your need to fix the broken system from the inside. Just be careful out there.. https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163

                                Alexandra
                                wrote last edited by
                                #113

                                @firepoet Thank you, Stephen. While we still don’t share the same position, this post captures what I am looking out for in my employees and myself.

                                • Colin Cornaby

                                  @jaredwhite @jamesthomson All LLM generated code is in the public domain. The commercial companies just protect it all behind private repos. If you could force them to release it that would be what you’d need.

                                  Ur Ya'ar
                                  wrote last edited by
                                  #114

                                  @colincornaby
                                  @jaredwhite @jamesthomson

                                  A post suggesting precisely this:
                                  https://zomglol.wtf/@jamie/116059523957674208

                                  • James Thomson

                                    Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                                    Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                                    Developers: Wheeeeeeeeee!

                                    PointlessOne :loading:
                                    wrote last edited by
                                    #115

                                    @jamesthomson I’m not sure what exactly surprises you. If you look at cultural norms of the trades this attitude appears to be downright inevitable.

                                    Writers basically invented copyright to legally prevent others from using their works. Nowadays writers don’t edit others’ work or lift parts of others’ works. All this is relegated to fanfics which are deemed extremely unserious, a training exercise at best.

                                    Visual artists are similarly cagey about ownership. Copying is somewhat allowed only in training. Even remote similarities in the final work would immediately be pointed out. They even have a concept of a forgery—an exact copy, which is an absolute no-no.

                                    Meanwhile programmers from the earliest days felt very little attachment to the code they produced.

                                    Bob: my dudes, look what I came up with over the weekend!
                                    Dave: very cool! There was a bug, here’s a patch.

                                    Programmers are much more collectivist about the code. They invented a license that legally binds others to give away their code.

                                    As an example of the difference in attitude, let’s take id Software. They open-sourced engine code for their games fairly quickly, while assets—which are mostly art: graphics, music—remain restricted to this day.

                                    So I don’t see what’s so surprising about them not caring much about the plagiarism issue now given that they never really did.

                                    • James Thomson

                                      Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                                      Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                                      Developers: Wheeeeeeeeee!

                                      𝑀𝒶𝓀𝑒𝔸𝕧𝕠𝕪🦀
                                      wrote last edited by
                                      #116

                                      Speak for yourself, I have the same message as artists and writers. AI-generated code feels super gross, uncomfortable, and stolen. Not worshipping the ground LLMs walk on is likely one of the reasons I was laid off a year ago. LLMs were directly the reason my non-coder friend didn't hire me to fix his Wix site. I feel like I'm forced to use them against my will or risk never getting a paycheck again.

                                      • Jonathan Polley

                                        @owlex Are the training sets licensed, or just strip-mined from the web/Reddit/GitHub/SourceForge? This was the cause for their “AI is theft” statement.

                                        From a technical standpoint: are these training sets free from bugs? If you use an AI tool to generate tests, are they useful tests? A useful test is one that tries to break the code instead of showing that the code “works”. Tests that merely exercise the interfaces or cover the code tend not to be “useful” tests.

                                        Felipe Cepriano
                                        wrote last edited by
                                        #117

                                        @the_other_jon @owlex uh, I definitely disagree with your stance on tests: a test that checks that something is working is really useful when you need to refactor something and be sure the changes haven't affected existing behaviour.
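                                        Both notions of a "useful" test in this exchange can be sketched side by side. This is a minimal illustration only; the `clamp()` helper and both test names are hypothetical, not anything from the thread:

```python
# Two kinds of tests for a small, hypothetical clamp() helper.

def clamp(x, lo, hi):
    """Clamp x into the inclusive range [lo, hi]."""
    return max(lo, min(x, hi))

def test_pins_existing_behaviour():
    # Useful when refactoring: records the happy path so changes
    # that alter existing behaviour are caught.
    assert clamp(5, 0, 10) == 5

def test_tries_to_break_it():
    # Useful in the "try to break it" sense: probes boundaries and a
    # degenerate range, where bugs usually hide.
    assert clamp(-1, 0, 10) == 0    # below the range
    assert clamp(11, 0, 10) == 10   # above the range
    assert clamp(0, 0, 10) == 0     # exactly on the lower edge
    assert clamp(5, 7, 7) == 7      # degenerate range: lo == hi

test_pins_existing_behaviour()
test_tries_to_break_it()
```

                                        The two aren't mutually exclusive: the first guards refactors, the second hunts bugs; a suite that only does the former is what the "merely exercises the interfaces" criticism is aimed at.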

                                        • James Thomson

                                          Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                                          Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                                          Developers: Wheeeeeeeeee!

                                          Azuaron
                                          wrote last edited by
                                          #118

                                          @jamesthomson As a developer who hates AI, the one pushback I would make against this framing is that it was a mistake to grant computer code "literary copyright protection" in the first place. It's literally computer instructions, and just like a recipe's instructions are not copyrightable, computer instructions should not be copyrightable. Patentable, sure, but not copyrightable.
