New blogpost: AI will compromise your cybersecurity posture
https://rys.io/en/181.html
The way “AI” is going to compromise your cybersecurity is not through some magical autonomous exploitation by a singularity from the outside, but by being the poorly engineered, shoddily integrated, exploitable weak point you would not have otherwise had on the inside.
LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.
1/🧵
-
An important aspect of pushing AI hype is inflating expectations and generating fear of missing out, one way or another. What better way to generate it than by using actual fear?
I look at three notorious examples of such fear-hyping:
PassGAN cracking "51% of popular passwords in seconds"
that paper about ChatGPT "exploiting 87% of one-day vulnerabilities"
and of course Anthropic's "first AI-orchestrated cyber-espionage campaign"
tl;dr: don't lose sleep over them.
2/🧵
-
Anthropic does make an important point, though, even if they try to bury it:
> [The attackers] had to convince Claude—which is extensively trained to avoid harmful behaviors—to engage in the attack. They did so by jailbreaking it (…) They also told Claude that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing.
The real story is how hilariously unsafe Claude is, and how a company valued at $180bn refuses to take responsibility for that.
3/🧵