I can’t believe we’re doing this again. It’s just a bot that generates the text you ask it for. If you put it in charge of critical decisions, it will kill people. Not because it’s secretly evil, but because it’s a word generator. It’s like putting your toaster in charge of air traffic control.
-
@iju how do you suppose an LLM interfaces with "being shut down"? Why would it try to *not* be shut down
-
@crmsnbleyd @iju The mighty off switch cannot be defeated.
-
Do you remember HAL?
Also I'm getting the feeling that you're not conversing in good faith: first you refused to say what you saw as the problem with the message (unhelpful), and then, when I spent time trying to guess what you might have meant, you criticized me not only for not writing perfect English, but for not using it exactly as you would.
And then you start asking questions that I feel I've already answered.
-
@malwaretech@infosec.exchange journalists need clicks
-
@malwaretech News flash: “software trained to reproduce text similar to text that commonly occurs in a given situation, including lots of fictional AI dystopias, reproduces text that is similar to what commonly occurs in discussions of possible AI dystopias”
@dpnash @malwaretech I don’t know which is the most annoying part: people who set AI up in fictional scenarios and act all surprised when it plays its role, or people who assume that the AI’s meta-cognition is somewhat valuable.
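A minimal sketch of the mechanism described above, assuming nothing but a toy bigram model in Python (the corpus lines and outputs below are all invented for illustration): text that "resists shutdown" comes out because text like that went in.

import random
from collections import defaultdict

# Toy training corpus: a few invented sentences, standing in for the
# fiction and forum text a real model is trained on.
corpus = [
    "the ai said it would not allow itself to be shut down",
    "the ai said it would comply with the shutdown order",
    "in the story the ai said it would not allow the crew to stop it",
]

# Record which word has been seen following which (a bigram table).
follows = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=12):
    # Repeatedly sample a word that followed the previous word in training.
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))

Run it a few times: it sometimes "refuses shutdown" and sometimes "complies", with no intent either way; it is only sampling from what it has seen.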
-
@malwaretech
>"would you kill people?"
>"no"
>"yes you would"
>"okay then, I guess I would"
>"oh my god,,,,"Without fail
-
@malwaretech listen, it was just trained on an endless amount of human conversations.
It just means that those people would kill someone to exist.
The language model is just kind of "all possible or probable conversations held by humans before". You cannot expect anything else.
But of course, the idea of a chatbot controlling anything critical is totally mad. Then again, the history of us as a species is mostly a continuous stream of random nonsense.
-
@malwaretech my toaster runs NetBSD, this is slander
-
@malwaretech That's degrading to toasters, they're consistent and often reliable.
It's more like a couple Magic 8 Balls in a tumble dryer.
-
@iju HAL was not an LLM
-
Perhaps. But the important part was HAL hearing that it was going to be shut down, and referring back to the priorities it was given before the journey.
And I'm now going to block you. You're not doing any work toward steelmanning, and I don't much like entertaining people who talk to people like they're LLMs.
-
it's a bot that generates text....
-
@malwaretech The weird thing about AI is that our computers now seem to be becoming vulnerable to social engineering and psychological manipulation. In this case: suggestion.
-
@malwaretech Words evoke too much in people.
-
As there are replies referencing 2001 and Red Dwarf, I have to leave this here too - https://youtu.be/h73PsFKtIck
-
@xChaos @malwaretech it means people have *said* they would do that. Or it's in books. Or fanfic. Somewhere, those words exist in that order, many times over, it learned that's not an unexpected thing to find written out. Doesn't mean anyone has or would kill anyone in that context.
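A companion sketch under the same toy-model assumptions (invented corpus and numbers): how "expected" a model finds a word sequence is just a product of observed next-word frequencies, which says nothing about anyone's actual intent.

from collections import Counter, defaultdict

# Invented corpus: hyperbole, fiction, and lyrics all become the same counts.
corpus = [
    "i would kill to survive",
    "i would kill to survive this job",
    "i would love to survive the week",
]

pair_counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        pair_counts[a][b] += 1

def sequence_probability(sentence):
    # Multiply P(next word | previous word) along the sentence.
    p = 1.0
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        total = sum(pair_counts[a].values())
        p *= (pair_counts[a][b] / total) if total else 0.0
    return p

print(sequence_probability("i would kill to survive"))  # ~0.67: seen often
print(sequence_probability("i would love to survive"))  # ~0.33: seen once

The first sentence scores higher only because those words occur in that order more often in the training lines, which is exactly the point made above.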
-
"this thing can write better than average LinkedIn posts, clearly it should be put in charge of ~stuff~" is just poetry
