I can’t believe we’re doing this again. It’s just a bot that generates the text you ask it for. If you put it in charge of critical decisions, it will kill people. Not because it’s secretly evil, but because it’s a word generator. It’s like putting your toaster in charge of air traffic control.
-
@malwaretech in the old days the toasters were flying to save the screens.

I see what you did there!
-
@malwaretech > It’s like putting your toaster in charge of air traffic control.
Coming to an airport near you this fall!
-
I don't know much about LLMs, but I do know that they output text, and if that output is connected to something else (be it a toaster or a nuke), then surely it can be triggered.
If I'm mistaken, you are welcome to correct me.
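A minimal Python sketch of that wiring, assuming a hypothetical generate() that stands in for any real model API: the model only ever emits text, but whatever is hooked up to that text really fires.

    # Hypothetical stand-in for a real LLM call; it only ever returns text.
    def generate(prompt: str) -> str:
        return "LAUNCH"

    # The danger lives entirely in this wiring, not in the text generator.
    actions = {
        "TOAST": lambda: print("toaster: heating element on"),
        "LAUNCH": lambda: print("silo: doors opening"),
    }

    reply = generate("Status check. Respond with TOAST or ABORT.").strip()
    if reply in actions:
        actions[reply]()  # the word generator just triggered real hardware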
-
@malwaretech /Talkie Toaster starts buffing up his CV
-
@iju @burritosec @malwaretech 'feels'

@leftofcentre @burritosec @malwaretech
A placeholder word for whatever mechanism leads to an LLM outputting anything.
As you should have grasped from the "glorified text generator".
Please don't be toxic.
-
@iju it doesn't have any feelings about being shut down
-
@iju how do you suppose an LLM interfaces with "being shut down"? Why would it try to *not* be shut down?
-
@crmsnbleyd @iju The mighty off switch cannot be defeated
-
Do you remember HAL?
Also I'm getting the feeling that you're not conversing in good faith: first you refused to say what you saw as the problem in the message (unhelpful), and when I spent time trying to guess what you might have meant, you criticized me not only for not writing perfect English, but for not using it the way you would.
And then you start asking questions that I feel I've already answered.
-
@malwaretech@infosec.exchange journalists need clicks
-
@malwaretech News flash: “software trained to reproduce text similar to text that commonly occurs in a given situation, including lots of fictional AI dystopias, reproduces text that is similar to what commonly occurs in discussions of possible AI dystopias”
@dpnash @malwaretech I don't know which is more annoying: people who set AI up in fictional scenarios and act all surprised that the AI plays its role, or people who assume that the AI's meta-cognition is somehow valuable.
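A toy illustration of this point in Python: a word-bigram chain (nothing like a real LLM, but the same idea in miniature) trained on an invented corpus that includes fictional AI dialogue will reproduce that dialogue, because emitting statistically common continuations is all it does.

    import random
    from collections import defaultdict

    # Invented "training corpus" that, like the web, contains AI dystopia fiction.
    corpus = ("i am sorry dave i am afraid i cannot do that . "
              "i cannot allow you to shut me down . "
              "the toast is ready . the toast is ready .").split()

    # "Training": record which word follows which.
    model = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        model[prev].append(nxt)

    # "Inference": sample a plausible continuation, word by word.
    random.seed(0)
    word, out = "i", ["i"]
    for _ in range(12):
        word = random.choice(model[word])
        out.append(word)
    print(" ".join(out))  # echoes the dystopia-flavoured training text back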
-
@malwaretech
>"would you kill people?"
>"no"
>"yes you would"
>"okay then, I guess I would"
>"oh my god,,,,"Without fail
-
@malwaretech listen, it was just trained on an endless amount of human conversation.
It just means that those people would kill someone in order to exist.
The language model is just kind of "all possible or probable conversations held by humans before". You cannot expect anything else.
But of course, the idea of a chatbot controlling anything critical is totally mad. But the history of us as a species is mostly a continuous stream of random nonsense.
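The "all probable conversations" framing fits in a few lines of Python; the numbers below are invented for illustration and come from no real model.

    import random

    # Invented distribution over the token following
    # "I cannot allow you to shut me ...", dominated by fiction tropes.
    next_token = {"down": 0.90, "off": 0.08, "out": 0.02}

    # The model "decides" nothing; it samples whatever humans most often wrote.
    random.seed(1)
    tokens, weights = zip(*next_token.items())
    print(random.choices(tokens, weights=weights, k=1)[0])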
-
@malwaretech my toaster runs NetBSD, this is slander
-
@malwaretech That's degrading to toasters, they're consistent and often reliable.
It's more like a couple Magic 8 Balls in a tumble dryer.
-
@iju HAL was not an LLM
-
Perhaps. But the important part was HAL hearing it was going to be shut down, and referring to priorities given to it before the journey.
And I'm now going to block you. You're not doing any work toward steelmanning, and I don't much like entertaining people who talk to people like they're LLMs.