@FediTips asks "Can we please chill out and be a bit more friendly?"
I highly recommend that.
Always.
But also particularly when somebody is trying to help without compensation, just for the sake of the community.
@shafik Yes. Indeed.
Today I opened Claude to try to find a reference for something I know is true, but is not original with me, to cite in a paper I am writing.
The first answer was a proof, which (in this particular case) was correct.
But then I told it that I didn't want a proof, only a reference to cite. I had told it in advance that I already knew it is true.
So it gave me a reference. When I looked at it, there was nothing in there stating or proving what I wanted.
So I complained and got an "apology" (I am not sure machines can, or are even entitled to, apologize; at best, they should apologize on behalf of their creators).
Then it tried again, and it again gave me a reference that didn't have what I wanted.
The third time I tried, it gave up, saying that what I wanted is nowhere to be found in the literature. But this is wrong. I've seen it before, and I know it is true because I can prove it (Claude itself can prove it too, correctly this time, though of course not out of nothing).
Don't ever trust a reference given by genAI unless you check it yourself. The references I got, after explicitly asking for a reference and nothing else, didn't contain what I asked for.
The machine just makes things up in a probabilistic way. Once it starts "apologizing", you can be fairly sure you won't get anything useful from it.
Even more concerning is when it doesn't apologize: you may assume the answer is right and use it for whatever purpose you had in mind. Good luck with that.