@mike On ones with a "Deny all" option, if you go into the individual settings you will often find that lots of them have an _extra_ little toggle-switch, labelled "Legitimate Interest", that is _on_ by default and that I bet does not get turned off when you click "Deny all".
-
Over the last few weeks I have noticed a behaviour in myself.
-
@ColinTheMathmo On the other hand, Gemini (even the "fast" version that one can get at without signing in or anything) pretty much nails it.
It identifies Lord Dorwin, comments on his accent, correctly identifies the "Origin Question" that he's been arguing about, and correctly identifies that Asimov takes his attitude to show a decay of the idea of _science_.
It says it's 99% confident of its answer (which is not so different from ChatGPT's assessment of its confidence, but obviously better merited in this case).
-
@ColinTheMathmo I have a sort-of-theory, not for why the AIs pick on Ebling Mis for this, but for why they don't correctly identify Lord Dorwin.
Asimov represents Dorwin as having an outrageously plummy British accent (IIRC no one is literally supposed to be speaking English -- "Galactic"? -- but clearly we're meant to understand him as the umpteenth-century equivalent of a stuffy out-of-touch English aristocrat). He conveys this by respelling all of Dorwin's words, and I suspect the AIs have trouble reading that.
(But I wouldn't bet much on that being the reason; I've been disappointed more than once by AIs' inability to identify things from literature, which seems like it really ought to be a strength; I wonder whether it's a consequence of some sort of anti-copyright-violation measures.)
Having found that Claude did the same for me as ChatGPT did for Colin, I tried it in ChatGPT myself. After it wrongly and confidently claimed that the character was Ebling Mis (in _Foundation_, a book in which that character never even appears), I pointed out that it couldn't be both Ebling Mis and in that book, and invited it to think again; it then (with "very high confidence") claimed it was Lewis Pirenne, who does at least appear in _Foundation_ but doesn't say anything like that.
It also offers a list of "common paraphrases that circulate", none of which I can find in circulation.
Claude did better when prompted to double-check, incidentally; rather than jumping to a new wrong answer it admitted that it wasn't entirely sure.
-
@ColinTheMathmo Curiously, Claude makes the same claim. I don't remember the books clearly enough to have a good sense of whether it's somehow a particularly plausible claim to make, but it seems curious that two different, independently trained models (though of course trained on a lot of the same data) would make the _same_ error. Is there perhaps a pile of wrong stuff on the interwebs that they might both have "learned" from?
-
@petersuber Strongly agree.