New blogpost: AI will compromise your cybersecurity posture
https://rys.io/en/181.html
Uncategorized · infosec
Michał "rysiek" Woźniak · 🇺🇦
#1

    New blogpost: AI will compromise your cybersecurity posture
    https://rys.io/en/181.html

    The way “AI” is going to compromise your cybersecurity is not through some magical autonomous exploitation by a singularity from the outside, but by being the poorly engineered, shoddily integrated, exploitable weak point you would not have otherwise had on the inside.

    LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.

    1/🧵

    #AI #InfoSec

    Michał "rysiek" Woźniak · 🇺🇦R 1 Reply Last reply
    0
      Michał "rysiek" Woźniak · 🇺🇦R This user is from outside of this forum
      Michał "rysiek" Woźniak · 🇺🇦R This user is from outside of this forum
      Michał "rysiek" Woźniak · 🇺🇦
      wrote last edited by
      #2

      An important aspect of pushing AI hype is inflating expectations and generating fear of missing out, one way or another. What better way to generate it than by using actual fear?

      I look at three notorious examples of such fear-hyping:
      👉 PassGAN cracking "51% of popular passwords in seconds"
      👉 that paper about ChatGPT "exploiting 87% of one-day vulnerabilities"
      👉 and of course Anthropic's "first AI-orchestrated cyber-espionage campaign"

      tl;dr: don't lose sleep over them.

      2/🧵

      Michał "rysiek" Woźniak · 🇺🇦R 1 Reply Last reply
      0
        Michał "rysiek" Woźniak · 🇺🇦R This user is from outside of this forum
        Michał "rysiek" Woźniak · 🇺🇦R This user is from outside of this forum
        Michał "rysiek" Woźniak · 🇺🇦
        wrote last edited by
        #3

Anthropic does make an important point, though they try to bury it:

        > [The attackers] had to convince Claude—which is extensively trained to avoid harmful behaviors—to engage in the attack. They did so by jailbreaking it (…) They also told Claude that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing.

        The real story is how hilariously unsafe Claude is, and how a company valued at $180bn refuses to take responsibility for that.

        3/🧵

Powered by NodeBB Contributors