This is a damning article from the Wikipedia editors on GenAI articles written for Wikipedia: https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipedia-editing-what-we-learned-in-2025/

Shafik Yaghmour (OP):

This is a damning article from the Wikipedia editors on GenAI articles written for Wikipedia: https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipedia-editing-what-we-learned-in-2025/

#ai

Martin Rundkvist (#8):

@shafik Yep, there's no intelligence there. A reference is just yet another sequence of words that is statistically probable.

#aibubble

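(To make the "statistically probable sequence of words" point concrete, here is a minimal sketch; the next-token distribution is invented for illustration. The model samples each continuation in proportion to how plausible it looks, and nothing in that process checks whether the resulting citation exists.)

```python
import random

# Toy, invented next-token distribution for the context "Smith et al.,".
# Nothing in it models whether a citation exists; it only encodes which
# continuations look statistically plausible after this context.
next_token_probs = {
    "2019": 0.35,      # a plausible-looking year
    "2021": 0.30,
    "(Nature)": 0.20,  # a plausible-looking venue
    "ibid.": 0.15,
}

def sample_next_token(probs):
    """Sample one token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# A "reference" emerges one token at a time; fluency, not truth,
# drives every choice.
print("Smith et al.,", sample_next_token(next_token_probs))
```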

Cybermatron (#9):

@shafik Welcome to the life of every University lecturer marking essays right now.


Stephen Bannasch (316 ppm) (#10):

@shafik
Interesting!
“Our early interventions with participants who were flagged as using generative AI for exercises that would not enter mainspace seemed to head off their future use of generative AI. We supported 6,357 new editors in fall 2025, and only 217 of them (or 3%) had multiple AI alerts. Only 5% of the participants we supported had mainspace AI alerts. That means thousands of participants successfully edited Wikipedia without using generative AI to draft their content.”


Shafik Yaghmour (#11):

@TheCybermatron

I am sorry


Shafik Yaghmour (#12):

@stepheneb

Yes, education definitely helps. There have been several reports that when people use LLMs in a way that makes their flaws obvious, they learn to use them in more appropriate ways.

If the editors can keep up, it should not be a big problem. Sounds like they have a handle on it.

Shafik Yaghmour (#13):

@Susan60

For sure, tools are useful when you use them appropriately.

Shafik Yaghmour (quoted):

@arthfach if the editors cannot keep up, then yes.

Fucking bitch Martin Vermeer (#14):

@shafik @arthfach As a footnote, it seems that Pangram is remarkably effective at what it does. And I am inclined to trust it as the technical report is typeset in LaTeX 😏

https://arxiv.org/pdf/2402.14873


Martin Escardo (#15):

@shafik Yes. Indeed.

Today I opened Claude to try to find a reference for something I know is true, but is not original with me, to cite in a paper I am writing.

The first answer was a proof, which (in this particular case) was correct.

But then I told it that I didn't want a proof, only a reference to cite. I had told it in advance that I already knew it is true.

So it gave me a reference. When I looked at it, there was nothing in there stating or proving what I wanted.

So I complained and I got an "apology" (I am not sure machines can or are even entitled to apologize - at best, they should apologize on behalf of their creators).

Then it tried again, and it again gave me a reference that didn't have what I wanted.

The third time I tried, it said it gave up, that what I wanted is nowhere to be found in the literature. But this is wrong. I've seen it before, and I know it is true because I can prove it (and Claude itself can prove it, correctly this time, though of course not out of nothing).

Don't ever trust a reference given by genAI unless you check it yourself. The references I got after explicitly asking for a reference, and nothing else, didn't have what I asked for.

The machine just makes things up in a probabilistic way. When it starts "apologizing", you can be sure it is rather unlikely that you will get anything useful from it.

Even more concerning is when it doesn't apologize: you may suppose that the answer is right and use it for whatever purpose you had in mind. Good luck with that.

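(One way to act on "check it yourself", as a minimal sketch: when a genAI-supplied reference carries a DOI, its existence can be checked against Crossref's public REST API, which serves metadata at /works/{doi} and returns HTTP 404 for unknown DOIs. The DOI below is made up for illustration; and note that existence is only half the check, since the source must still be read to confirm it actually states the claim.)

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi):
    """Return True if Crossref has metadata for this DOI, False on a 404."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)["message"]
            # Print the recorded title so a human can still confirm that
            # the paper actually says what the citation claims.
            print(doi, "->", (meta.get("title") or ["<no title>"])[0])
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Hypothetical DOI for illustration; replace with the one to verify.
print(doi_exists("10.1234/not-a-real-paper"))
```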

The Turtle (#16):

@MartinEscardo @shafik Google Gemini can be browbeaten into an apology and then will go right back and do exactly the same shit over again.


Dr Jess Birch (#17):

@shafik This certainly mirrors my personal experience (and why I'll keep calling them lie-bots): all these tools attach "citations" and "links" (in the Google, Bing, etc. "AI summary"), but they aren't "real", because the tool is just looking for a plausible real link, not one that actually says what it has imagined as a summary.
If you ask for specific lists of things, you can get all sorts of links attached (the lists are wrong), or just the weather (the link will usually contradict the claim).


Luc (#18):

@shafik the few times I've looked up references on Wikipedia, the experience was similar, and I don't think LLMs were to blame: the thing I wanted to learn more about turned out not to be mentioned at all in the alleged source.

A comparative study might be better than a blanket "it's frequently wrong". Of course I expect LLMs to perform worse than humans, but that context would help put the numbers into perspective.


Susan from NeuStudio (#19):

@MartinEscardo @shafik “The machine just makes things up in a probabilistic way.”

Well said!

At the risk of giving a machine human character, I find it useful to think of AI as a charming liar.
