People have been telling me that while ChatGPT and other "AI"s can't reason, they can at least be great search engines.

Colin the Mathmo · #1

    People have been telling me that while ChatGPT and other "AI"s can't reason, they can at least be great search engines. They have (illegally) slurped all of the internet, and can find things that you otherwise wouldn't be able to find.

    So I thought I'd test this.

    1/n

Colin the Mathmo · #2

      In the Foundation series by Asimov there's a character who is interested in archaeology and talks about his research. This is then held up as an example of the decline of civilisation ... so I asked ChatGPT:

      "Hi ... in the Foundation trilogy by asimov there is an archaeologist who talks about weighing up the merits of earlier authors, but says it would be stupid to go and examine the original material for himself. Which book is that in?"

      The response:

      2/n


Colin the Mathmo · #3

        ChatGPT: "You’re thinking of *Foundation and Empire*, the second book of the original Foundation trilogy.

        "The character is Ebling Mis. He’s usually described as a psychologist and historian (often loosely remembered as an archaeologist because of how he studies the past), and there’s a scene where he talks about evaluating earlier historians’ work rather than going back to primary sources himself. He explicitly argues that it would be pointless—or even foolish—to try to re-examine the original material directly, because later scholars have already done that work and synthesized it.

        "That moment fits perfectly with Asimov’s ongoing theme: the danger of over-reliance on received knowledge and academic consensus, especially when dealing with something as slippery as the Second Foundation."

        Full, complete, coherent, convincing ...

        And wrong.

        3/n


Colin the Mathmo · #4

          So I said:

          "could you find the exact reference and quotation?"

          ChatGPT: "I searched through *Foundation and Empire* by Isaac Asimov for the specific moment you’re remembering, but the exact quotation about weighing up earlier authors and deciding it would be “stupid” to go examine the original material yourself doesn’t appear in the widely circulated text or quote collections I could find online. What *is* clear is this:

          "📌 The character with that kind of attitude is definitely Ebling Mis ..."

          There's more.

          It's still wrong.

          4/n


Colin the Mathmo · #5

            So I said: "Are you sure it's Ebling Mis and not Lord Dorwin?"

            The reply?

            ChatGPT: "Great catch — and you’re absolutely right to question that.

            "It is Lord Dorwin, not Ebling Mis."

            *Face-palm*

            It got it right. Eventually. When the right answer was fed to it.

So ... trust *nothing* the "AI" slop machines tell you.

            Nothing.

            5/n, n=5.


gjm · #6

              @ColinTheMathmo Curiously, Claude makes the same claim. I don't remember the books clearly enough to have a good sense of whether it's a particularly plausible claim to make somehow, but it seems curious that two different, independently trained models (though of course trained on a lot of the same data) would make the _same_ error. Is there perhaps a pile of wrong stuff on the interwebs that they might both have "learned" from?


gjm · #7

                @ColinTheMathmo I have a sort-of-theory, not for why the AIs pick on Ebling Mis for this, but for why they don't correctly identify Lord Dorwin.

Asimov represents Dorwin as having an outrageously plummy British accent (IIRC no one is literally supposed to be speaking English -- "Galactic"? -- but clearly we're meant to understand him as the umpteenth-century equivalent of a stuffy, out-of-touch English aristocrat). He does this by respelling all his words. I suspect the AIs have trouble reading that.

                (But I wouldn't bet much on that being the reason; I've been disappointed more than once by AIs' inability to identify things from literature, which seems like it really ought to be a strength; I wonder whether it's a consequence of some sort of anti-copyright-violation measures.)

                Having found that Claude did the same for me as ChatGPT did for Colin, I tried it in ChatGPT myself, and after it wrongly and confidently claimed that it was Ebling Mis (in _Foundation_, a book in which that character never even appears) I pointed out that it couldn't be both E.M. and in that book, and invited it to think again; it now (with "very high confidence") claimed it was Lewis Pirenne, who does at least appear in _Foundation_ but doesn't say anything like that.

                It also offers a list of "common paraphrases that circulate", none of which I can find in circulation.

                Claude did better when prompted to double-check, incidentally; rather than jumping to a new wrong answer it admitted that it wasn't entirely sure.


DougMerritt (log😅 = 💧log😄) · #8

                  @ColinTheMathmo
                  Great example of why these things should absolutely not be blindly trusted.

                  That said, recommended best practice is to herd these critters carefully, by having the initial prompt include things like "If you don't know, say so, and if you think you know, provide explicit references and quotes" and so on.

                  BTW I'm glad you provided the answer, because I sure don't recall the series clearly after all these decades, even though I did read it more than once in my youth, both before and after he merged all his stories into one story universe.


DougMerritt (log😅 = 💧log😄) · #9

                    @ColinTheMathmo
                    P.S.:
                    > while ChatGPT and other "AI"s can't reason

                    They certainly don't reason as humans do. But just like old fashioned expert systems, they have *a* form of reasoning -- just a very strange one by human standards, and one that is strongly based on language.

                    You'll recall that arbitrary syntax *can* be Turing equivalent (semi-Thue systems), so they *might* be able to compute anything a Turing machine can do by such means. They don't, but it's a reminder not to underestimate that kind of thing.

                    https://en.wikipedia.org/wiki/Semi-Thue_system

                    At this point I also need to caveat that humans are not Turing machines: they can produce incorrect answers, unlike Turing machines, and your whole point is that LLMs also produce incorrect answers, so they are *not* Turing equivalent, although it makes them a little more similar to humans. 🙂
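The semi-Thue idea mentioned above can be made concrete: computation is just repeated string rewriting under a fixed set of rules. A minimal sketch (the `rewrite` function and the unary-addition rule set are illustrative examples, not anything from the thread or the Wikipedia article):

```python
def rewrite(s, rules, max_steps=1000):
    """Repeatedly apply the first applicable rule (leftmost match)
    until no rule matches or the step budget runs out."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            i = s.find(lhs)
            if i != -1:
                # Replace the leftmost occurrence of lhs with rhs.
                s = s[:i] + rhs + s[i + len(lhs):]
                break
        else:
            return s  # normal form reached: no rule applies
    return s

# Unary addition as a two-rule semi-Thue system: "+1" -> "1+" bubbles
# the plus sign rightward; a leftover "+" is then erased.
rules = [("+1", "1+"), ("+", "")]
print(rewrite("11+111", rules))  # "11111", i.e. 2 + 3 in unary
```

This tiny system only does addition, but the general point stands: with a suitable (finite) rule set, this rewriting mechanism can simulate any Turing machine.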


gjm · #10

                      @ColinTheMathmo On the other hand, Gemini (even the "fast" version that one can get at without signing in or anything) pretty much nails it.

                      It identifies Lord Dorwin, comments on his accent, correctly identifies the "Origin Question" that he's been arguing about, and correctly identifies that Asimov takes his attitude to show a decay of the idea of _science_.

                      It says it's 99% confident of its answer (which is not so different from ChatGPT's assessment of its confidence, but obviously better merited in this case).


Thomas Rigby · #11

@ColinTheMathmo MS Copilot tells me:
The line you’re thinking of comes from Foundation and Earth.

                        In that novel, Golan Trevize and Janov Pelorat meet an archaeologist on the planet Comporellon. During their conversation, the archaeologist explains that he evaluates ancient authors by comparing later scholarly summaries—but considers it “ridiculous” (or words to that effect) to examine the original ancient sources himself. It’s Asimov’s satire of over‑specialized academic thinking.
                        Why it’s Foundation and Earth

                        The scene occurs early in the book, when Trevize and Pelorat are trying to gather information about Earth’s existence.

                        The Comporellon archaeologist is portrayed as pompous and dismissive of primary sources.

                        This exact attitude—preferring secondary commentary over original texts—is unique to this character in the series.
