Agentic AI-based services are the new Shadow IT.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs let's be honest tho shadowit.ai sounds pretty bad ass
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other stuff into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication or limited (no 2fa) auth.
@briankrebs I am also really curious how many people have aggressively violated various privacy laws by feeding stuff into various LLMs for "summary" and "analysis".
Frankly it should be a much larger compliance nightmare than it is. (Or, I suppose, it *is* a ginormous compliance nightmare and just right now everyone's thinking it isn't. Incorrectly)
-
@briankrebs@infosec.exchange
On the plus side, step #1 of setting up things like an #AWS/#Azure/#GCP account, especially production ones, is to disable the ability to create IAM users (forcing the use of IAM roles that are 2FA-authenticated via a service like #Okta), and the role-based authentication tokens are typically TTLed to a couple of hours.
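As an illustration of that couple-hour TTL, here's a minimal sketch of checking how long a temporary credential has left. It assumes an STS-style credential dict with an `Expiration` field; the field names and shape here are assumptions for illustration, not any particular SDK's API:

```python
from datetime import datetime, timedelta, timezone

def seconds_until_expiry(credentials: dict) -> float:
    """Return seconds remaining before a temporary credential expires.

    `credentials` is assumed to resemble the dict an STS-style token
    service hands back, i.e. it carries an "Expiration" timestamp
    (either a datetime or an ISO-8601 string).
    """
    expiry = credentials["Expiration"]
    if isinstance(expiry, str):
        # e.g. "2024-05-01T12:00:00+00:00"
        expiry = datetime.fromisoformat(expiry)
    return (expiry - datetime.now(timezone.utc)).total_seconds()

# Hypothetical short-lived role credentials, TTLed to two hours:
creds = {
    "AccessKeyId": "ASIA...",  # placeholder, not a real key
    "SecretAccessKey": "...",
    "SessionToken": "...",
    "Expiration": (datetime.now(timezone.utc) + timedelta(hours=2)).isoformat(),
}

print(seconds_until_expiry(creds) > 0)  # still valid
```

The short window limits the blast radius of a snarfed token, but as noted below, an agent sitting next to the token service can just keep refreshing its copy.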
Still, a "good" (suspicious quotes) agent setup would be pretty trivial to configure to snarf credentials from the relevant token services. That triviality likely applies more broadly.
-
@briankrebs In several pen tests I've done across the last 18 months, one of the most interesting trends has been the sudden increase in the number of examples I've found of people who have thrown those API keys, and in some cases raw data, into accidentally public GitHub repos while attempting to glue AI to things to 'see what it can do'.
A few weeks ago I found a GitHub repo where a developer had trained a model on a dump of their own corporate emails, and all those emails were just sitting in public on GitHub, full of things like vendor SFTP creds. It's a free-for-all.
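The kind of scan that catches these leaks before a push fits in a few lines. This is a hedged sketch of a pre-commit-style check; the patterns below are illustrative assumptions, not a complete ruleset:

```python
import re

# Illustrative secret patterns, roughly the kinds of leaks described
# above. Real scanners (git-secrets, gitleaks, etc.) ship far larger
# rulesets; these three are just a sketch.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(
        r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"
    ),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\b\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any secret patterns found in `text`."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

leaky = 'aws_key = "AKIAIOSFODNN7EXAMPLE"'  # AWS's documented example key ID
print(scan_text(leaky))  # -> ['aws_access_key_id']
```

Wired into a pre-commit hook (or GitHub's own push protection), a check like this would have flagged both the API keys and the private keys before they ever hit a public repo.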
-
@SecureOwl @briankrebs Wheeee
-
@SecureOwl @briankrebs I will confess to playing random songs on a coworker's Alexa after they checked their personal home Alexa key into a corporate git repository.
-
@wordshaper@weatherishappening.network @briankrebs@infosec.exchange
It took Enron before we got Sarbanes-Oxley. Data privacy will have its Enron moment eventually.
-
@wordshaper @briankrebs Unfortunately, I don't think the people doing this care or will ever care. Privacy laws tend to be a joke anyway, and there is very little incentive for most people or companies to change. I don't think most governments even want that to change. It's better for them, allows more data collection, etc.
I wish I didn't have such a negative and cynical outlook on it all.
-
@briankrebs oh we don't even have 2FA, because... because. Have I mentioned we have a gigantic bloated mess of IT bureaucracy, but nobody cares that we don't have a secure image repo?
But somebody had the idea to write safe-dev guidelines, because paper is what keeps us safe, not patching vulns.
-
@grumpasaurus@infosec.exchange @briankrebs@infosec.exchange This is definitely what we all need: autonomous AI running IaC deployments. I mean, what could go wrong??