Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

119 Posts 91 Posters 51 Views
  • Leeloo

    @jamesthomson
    I suspect that these developers are the same ones who were cut'n'pasting their code from stackoverflow before.

    I know I personally wouldn't want to leave the fun part to AI.

    Turre (#87)

    @leeloo @jamesthomson Yeah, there are developers who are going wheeeeeeeeeee but most that I know are just as disgusted by it as the other creative folks.

  • softmaus

      @fabienmarry @amyinorbit @jamesthomson It has always been mind-boggling to me how they mock Casey for being 'frugal' and how it's 'funny' that Marco has some form of a money-spending disorder.

      Fabien (#88)

      @softmaus It's all relative, isn't it… Frugal means thinking of replacing a top-of-the-line laptop because it's two years old… while the other host buys a restaurant. I still find that interesting, but less and less relatable.

    • Airis Damon

        @owlex @the_other_jon If it is incredibly complex, then shouldn't the technology be democratically controlled? Shouldn't all tech that has such a massive impact on our lives be democratically controlled? I believe it should.

        John (#89)

        @airisdamon @owlex @the_other_jon It's not gonna fly. Apple doesn't release their source code. People still pay them money for some reason. Knowing what the code does is an infinitely easier step than (and a prerequisite to) controlling what code does via legislation. It doesn't matter what 'society should do'. Society will keep paying Apple. Apple will keep paying government to make sure it's never compelled to reveal what its code does to its users.

      • Alexandra

          @the_other_jon I'm aware of these problems, and many more (energy waste, OpenAI's exploitation of workers in Africa for manual training, copyright theft, data mining).

          My question stands: Why is it wrong to use something critically while being aware of its problems? Especially when we're in the middle of such a massive technological shift that we should understand it. And when capitalism is forcing it into everything anyway, isn't informed usage better than ignorance?

          And it's not even just about American companies anymore. We're in a global race for AI dominance now. This whole topic is incredibly complex.

          I respect you for having these principles, but I think taking it out on a podcast that reports on technology is a little weird. Though it's your decision 😊

          Vítor (#90)

          @owlex You haven’t asked me, but your questions appear to me to be in such good faith that I’ll try to provide a response. Specifically to:

          > Why is it wrong to use something critically while being aware of its problems? […] And when capitalism is forcing it into everything anyway, isn't informed usage better than ignorance?

          I don’t think your description fits the current state of ATP. Marco in particular¹ has become a bit of a mouthpiece for LLMs. He’s now actively spouting the fear mongering of “use it or you’re going to be left behind” and in general is profoundly focused on what the technology does *for him* while summarily ignoring the negative impact to others and society in general.

          Informed usage does not mean advocacy. What ATP is doing now is closer to the latter than the former. It has much praise, little criticism.

          ¹ Whom I agree with and publicly applaud on pretty much every Tim Cook criticism.

        • Alexandra

            @airisdamon @the_other_jon

            I'm not sure we need to democratically control the technology itself, but we absolutely need to hold companies accountable for their methods. And since these models are built on OUR collective knowledge, we should demand open weight models and not be forbidden from using them.

            The true impact of LLMs is still unfolding. If they turn out to be like the telephone or internet, then yes, strong regulatory control is needed. But if they're more like one compiler among many, maybe not.

            What's clear: We need to close the legal loopholes that let companies profit parasitically from society without giving back. Democratic control means informed engagement, not avoidance.

            illogical (#91)

            @owlex @airisdamon @the_other_jon

            One criticism on your choice of words: large language models are not built on knowledge, but on data.

            I think that is a very crucial distinction to keep in perspective what the tools can and cannot do.

          • softmaus (#92)

              @fabienmarry Same. In the meantime, I manage to cut them some slack by remembering that consumerism still is a commonly accepted virtue in the US.

            • Stephen 🌈 (he/him) (#93)

                @owlex @the_other_jon An ethical position on something often requires sacrifice. We aren’t doing this to be mean to the podcast. We are doing it to attempt to influence the industry in another direction.

                The complexity of the situation doesn’t really have anything directly to do with what is ethical. It only has to do with how hard it is to see it. Are you arguing that the complexity makes it ok or that it is hard for you to see? Some of us can see the harm and are trying our best to make it visible.

                Those who provide the counterpoint don’t say anything about whether the harm will stop or somehow be mitigated really — they mostly just say, “Don’t be left behind.” Does that sound like a rational actor or an addict?

                My belief: it is absolutely wrong to feed this technological vampire that threatens to erase humanity. Don’t become a thrall. It doesn’t end well for them. 😊

              • James Thomson

                  Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                  Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

                  Developers: Wheeeeeeeeee!

                  blurk (#94)

                  @jamesthomson Not all of us.

                • Michael B. Johnson

                    @jamesthomson I’ve been struggling with this cognitive dissonance for years.

                    w (#95)
                    I'm torn on whether this is cognitive dissonance or a result of an industry containing a significant fraction of people who apparently hate the field they work in.

                    CC: @jamesthomson@mastodon.social
                  • Paul Tichonczuk (#96)

                      There are a few out there. But most developers are using it because if they don't, they lose their job.

                    • Richard Smith (#97)

                        @jamesthomson I wonder if that’s because for writers, the writing is the work. For artists, the art is the work. For many developers, the code is not the work - the result of the code is. My customers don’t look at, or even care about my code. They care about its outcomes.

                        This is harder for some developers, who saw coding as a craft in itself. Like woodworking. But many purchasers of beautiful furniture also just want it to hold up their drink.

                      • Claudio Zizza 🦜 (#98)

                          @jamesthomson It's kinda like Stackoverflow on Demand.

                        • #?.info :commodore: (#99)

                            @jamesthomson Yeah, I don't think so. Some are; a tiny minority is bragging about supposedly being 10x more productive while not showing anything of value. But literally every dev I'm seeing is:

                            1. Complaining about AI being everywhere and being forced to use it
                            2. Complaining about slop bug reports
                            3. Worried about layoffs that will also destroy the company that's laying them off, making a bad economy even worse.

                            The people cheering for it are a minority that happens to be loud, but it's the same people, and many of them aren't devs to begin with, as evidenced by their LinkedIn style of writing.

                          • Thomas Brand

                              @jamesthomson DragThing now by ChatGPT.

                              macfixer (#100)

                              @Eggfreckles @jamesthomson

                            • Ken Franqueiro (#101)

                                @jamesthomson Competent developers: *too aghast at how many of their dependencies now accept LLM slop in PRs to say anything*

                              • tschenkel

                                  @owlex @the_other_jon

                                  I follow you on the informed vs ignorance argument.

                                  But, given that you list many of the ethical reasons against AI, there is little "informed use" that will also stand up to the ethical razor.

                                  The Luddites were not ignorant. They were the technically able, who knew how to operate the machines, but fought against using them BECAUSE they understood them.

                                  In my work I use deterministic scientific models, but I work with machine learning models as well. And all my colleagues (who are real experts in how neural networks work) are opposed to generative AI.

                                  Alexandra (#102)

                                  @tschenkel @the_other_jon

                                  I appreciate this pushback.
                                  You're right that I could be more informed, and I'm actively working on it.
                                  For example: I recently learned about OpenAI's ties to ICE and have since switched to other models (local models) because of it. That's exactly what 'informed use' looks like to me. I am trying to learn about specific harms and adjusting accordingly.

                                  But here's where I still disagree with the Luddite comparison: The Luddites had a real choice to reject the machines. I don't have that choice anymore: I’m required to use AI at work, and personally, it helps me function with ADHD in ways that nothing else does.

                                  So my question remains: If I can't opt out entirely, isn't 'informed use and demanding regulation' better than 'uninformed use and silence'? I'm genuinely trying to navigate this topic, not to justify myself.

                                  Also I am really curious why your colleagues are all against generative AI. Would you please expand on that?

                                  • Alexandra (#103)

                                    @vitor

                                    Thank you for your goodhearted response ☺️

                                    It’s maybe just me, but I don’t feel like they are cheerleading AI exclusively as they are also covering the problems with it(like the Anthropic Book Piracy). So for me its still balanced, can be different for you(Also sometimes I may zone out a bit)

                                    Btw, I agree about the 'left behind' aspect Marco is talking about. I see this every day at work: my employees are genuinely anxious about being replaced or left behind by AI. As a team lead, I'm trying to navigate that: helping people adapt while also acknowledging the real fear and harm. That's where regulation becomes critical. We can't just leave people to fend for themselves in this shift.

                                    • Alexandra (#104)

                                      @firepoet @the_other_jon

                                      Thank you for this answer 😊

                                      I respect that position, and you're right that ethical stances often require sacrifice. But I think we're drawing different lines here. I don't see AI as something we can just starve by not using it. It's already everywhere, being used by corporations, governments, everyone. So for me, the question is: do I abstain entirely while others use it uncritically, or do I use it thoughtfully and keep pushing for better regulation?

                                      I am trying to navigate the reality I am living in.
                                      As I said in other posts, it helps me navigate my ADHD, and I'm also kind of forced to use it at work.
                                      What I am doing is advocating for using different models at work and talking with people about the problems of AI, all while being genuinely excited about how we can use it to make lives better, because it has cool use cases.

                                      Also, I'm pragmatic: a singular boycott never helped much. We need regulations in place; that's the most important thing.

                                      • Airis Damon (#105)

                                        @mrkeen @owlex @the_other_jon There could be extralegal methods for democratization. I don't know. I'm not real enthused with the direction all this is heading toward.

                                        • Mansour (#106)

                                          @jamesthomson The taboo against plagiarism in the arts doesn't exist in the software world. Some of the biggest names regularly copy code without attribution.

                                          I remember when GitHub first launched support for Markdown. It was based on the work of a female Russian developer. Her code was copied, her name and attribution stripped, and replaced with a generic open source license.
