Queried an LLM in my knowledge base and it came up with stuff I was pretty sure I had never seen before. So, I asked it where it got that data from. And it said:
"I made it up as a plausible example metric to illustrate impact; it did not come from XX data or any xx source."
Again: don't trust LLMs with facts, or ask them to write reports.
PS: I'm not anti-AI. But this demonstrates how writers and researchers are still needed, and companies should not blindly trust these tools to confidently create presentations and reports without triple-checking the work!
-
@liztai Definitely take AI with a pinch of salt. I tested it for research I was doing last year: the links were useful, but the summaries were insane.
-
@liztai I've experimented with AI to write some of my blog posts. I always have to double check all statements of fact. Besides, it uses a flat, neutral voice like a bored college prof who's given the same lecture for 30 years.
-
@liztai I'm regularly presented with LLM output to ask my opinion of it, or fix the code it generates.
It gets high-level, amateur-level stuff mostly correct. Ask for details? Garbage. Ask it to fix a problem? Garbage. Ask for the rationale or explanation? Garbage.
A former client presented me with some code and said that a team of four people had been working on it for two weeks and couldn't get it working. In the first few minutes I saw weird values for command-line options. Then I saw command-line options that didn't exist. Then I saw they were trying to cram CSV output from one command into a tool that only accepted XML. Turns out someone started with LLM output, then tried to get it working, burning probably 100+ hours of human labour. I rewrote the tool in 6 hours.
-
@liztai I’ve never once seen an LLM directly state that it made something up.
-
@dandylyons this really did. I can always send screenshots lol
-
@liztai Sure yeah. Sorry. I don’t think you’re lying. I just think that’s very unusual.
-
@dandylyons maybe it's more truthful than most lol.
-
@liztai There’s ongoing debate in research about whether LLMs even know their own internal “thoughts”.
-
@dandylyons I'd be shocked if they do lol