A few days ago, a client’s data center (well, actually a server room) "vanished" overnight.
-
@uriel sure - we tend to use "data center" for a specific place inside the company that hosts the servers (with A/C, etc.). Maybe a little inappropriate here.
Well, not "a little". The one you described is - at best - a server room, not even a hosting center, since according to the blueprints there was no redundancy....
-
@stefano I must repeat this: never trust onsite backups either. Fire will destroy those. And RAID is not backup. You know this, but it bears repeating!
@Dianora absolutely! No local backup is a safe backup.
-
Well, not "a little". The one you described is - at best - a server room, not even a hosting center, since according with the blueprints, there was no redundancy....
@uriel You're right. I've updated the original post to clarify it. Thank you for pointing it out!
-
@javensbukan @thegaffer suuure...fun...

@stefano @thegaffer
Yeeeeeaaaaaaaaah hahahahha -
A few days ago, a client’s data center (well, actually a server room) "vanished" overnight. My monitoring showed that all devices were unreachable. Not even the ISP routers responded, so I assumed a sudden connectivity drop. The strange part? Not even via 4G.
I then suspected a power failure, but the UPS should have sent an alert.
The office was closed for the holidays, but I contacted the IT manager anyway. He was home sick with a serious family issue, but he got moving.
To make a long story short: the company deals in gold and precious metals. They have an underground bunker with two-meter thick walls. They were targeted by a professional gang. They used a tactic seen in similar hits: they identify the main power line, tamper with it at night, and send a massive voltage spike through it.
The goal is to fry all alarm and surveillance systems. Even if battery-backed, they rarely survive a surge like that. Thieves count on the fact that during holidays, owners are away and fried systems can't send alerts. Monitoring companies often have reduced staff and might not notice the "silence" immediately.
That is exactly what happened here. But there is a "but": they didn't account for my Uptime Kuma instance monitoring their MikroTik router, installed just weeks ago. Since it is an external check, it flagged the lack of response from all IPs without needing any alert to be triggered from the inside.
The team rushed to the site and found the mess. Luckily, they found an emergency electrical crew to bypass the damage and restore the cameras and alarms. They swapped the fried server UPS with a spare and everything came back up.
The police warned that the chances of the crew returning the next night to "finish" the job were high, though seeing the systems back online would likely make them move on. They also warned that thieves sometimes break in just to destroy servers to wipe any video evidence.
Nothing happened in the end. But in the meantime, I had to sync all their data off-site (thankfully they have dual 1Gbps FTTH), set up an emergency cluster, and ensure everything was redundant.
Never rely only on internal monitoring. Never.
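For anyone wondering what that external check amounts to in practice, here is a minimal sketch of the idea (not the actual setup: the target IPs, interval, and alert webhook below are placeholders, and a real deployment would simply use Uptime Kuma or similar):

```python
#!/usr/bin/env python3
"""Minimal external reachability check (a sketch of the idea, not the
real setup). Runs OUTSIDE the monitored site; targets, interval and the
alert webhook are placeholders."""
import subprocess
import time
import urllib.request

TARGETS = ["203.0.113.10", "203.0.113.11"]      # public IPs of the site (placeholders)
ALERT_URL = "https://example.invalid/notify"    # webhook of an external notifier (placeholder)
CHECK_EVERY = 60                                # seconds between rounds
FAILS_BEFORE_ALERT = 3                          # consecutive all-down rounds before alerting

def is_up(ip: str) -> bool:
    """One ICMP echo request; Linux iputils flags, adjust for BSD/macOS."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def alert(message: str) -> None:
    """POST a notification through the external webhook."""
    urllib.request.urlopen(ALERT_URL, data=message.encode(), timeout=10)

def main() -> None:
    down_rounds = 0
    while True:
        if any(is_up(ip) for ip in TARGETS):
            down_rounds = 0
        else:
            down_rounds += 1
            if down_rounds == FAILS_BEFORE_ALERT:
                alert("Site unreachable from outside: all targets silent")
        time.sleep(CHECK_EVERY)

if __name__ == "__main__":
    main()
```

The key property is that it alerts only when every public IP goes silent at once, which is exactly the pattern an internal system can never report on its own.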
@stefano
even my new home alarm is coupled with an external monitoring alarm center that recognizes tampering/sabotage in addition to the "normal" sensor-based alarms. It costs a yearly subscription, but having had a break-in in the past, we considered it worthwhile when we renovated our home. -
@stefano
I just want to say, this is one of those long, esoteric, fascinating, entertaining threads like you used to see on Reddit, and it's great to see here on the Fedi, minus all the Reddit bullshit. Good job everyone! -
@nuintari Indeed. A local backup is very nice to have, but not something you should count on having when you *truly* need a backup.
My personal first layer is in-place ZFS snapshots on redundant pools. Amazing when they work. Not something I can count on to restore from if the computer PSU blows up because of a lightning strike, whether natural or deliberately induced by an act of Human.
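For illustration, that first snapshot layer can be as small as a cron-driven rotation script; this is only a sketch, with the dataset name and retention count as placeholder assumptions:

```python
#!/usr/bin/env python3
"""Tiny snapshot-rotation sketch for a 'first layer' of in-place ZFS
snapshots. Dataset name and retention count are placeholder assumptions;
needs ZFS delegation or root to run."""
import subprocess
from datetime import datetime, timezone

DATASET = "tank/home"   # placeholder dataset
KEEP = 48               # snapshots to retain

def zfs(*args: str) -> str:
    """Run a zfs subcommand and return its stdout."""
    return subprocess.run(
        ["zfs", *args], check=True, capture_output=True, text=True
    ).stdout

def take_snapshot() -> None:
    stamp = datetime.now(timezone.utc).strftime("auto-%Y%m%d-%H%M%S")
    zfs("snapshot", f"{DATASET}@{stamp}")

def prune() -> None:
    # '-s creation' sorts oldest first; only touch our own 'auto-' snapshots.
    names = zfs("list", "-H", "-t", "snapshot", "-o", "name",
                "-s", "creation", "-d", "1", DATASET).splitlines()
    for old in [n for n in names if "@auto-" in n][:-KEEP]:
        zfs("destroy", old)

if __name__ == "__main__":
    take_snapshot()
    prune()
```

Of course, snapshots on the same pool share the pool's fate, which is the whole point being made here.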
-
@JSteven Thank you!
-
@mkj @stefano @Dianora This is basically what I do.
I take this, encrypt it, and upload it to Backblaze.
I have not needed a full restore since 1999. Since long before BB existed. My backup policy at one point involved shuffling HDDs around à la sneakernet. B2 has been a lifesaver I have never needed.
But I have done several local restores, and many simulated remote restores in the interim years.
I'm good to go.
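The "encrypt it and upload it" step can be as simple as the sketch below; everything named here (bucket, endpoint, key handling) is a placeholder assumption. It uses B2's S3-compatible API, and a real setup would more likely lean on a purpose-built tool such as restic or rclone:

```python
#!/usr/bin/env python3
"""Archive, encrypt, upload: a sketch of the 'encrypt it and upload it to
Backblaze' step. Bucket, endpoint and credentials are placeholders; the
upload uses B2's S3-compatible API via boto3."""
import tarfile

import boto3                              # pip install boto3
from cryptography.fernet import Fernet    # pip install cryptography

SOURCE_DIR = "/srv/data"                              # what to back up (placeholder)
ARCHIVE = "/tmp/backup.tar.gz"
ENCRYPTED = "/tmp/backup.tar.gz.enc"
BUCKET = "my-offsite-backups"                         # placeholder bucket
ENDPOINT = "https://s3.us-west-004.backblazeb2.com"   # region-specific (placeholder)

def make_archive() -> None:
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(SOURCE_DIR, arcname="data")

def encrypt(key: bytes) -> None:
    # Fernet reads the whole archive into memory -- fine for a sketch,
    # but large backups want a streaming tool (age, gpg, restic).
    fernet = Fernet(key)
    with open(ARCHIVE, "rb") as fin, open(ENCRYPTED, "wb") as fout:
        fout.write(fernet.encrypt(fin.read()))

def upload() -> None:
    s3 = boto3.client(
        "s3",
        endpoint_url=ENDPOINT,
        aws_access_key_id="KEY_ID",        # B2 application key ID (placeholder)
        aws_secret_access_key="APP_KEY",   # B2 application key (placeholder)
    )
    s3.upload_file(ENCRYPTED, BUCKET, "backup.tar.gz.enc")

if __name__ == "__main__":
    # A real setup reuses one persistent key, stored away from the backups;
    # generating a fresh key here only keeps the sketch self-contained.
    key = Fernet.generate_key()
    make_archive()
    encrypt(key)
    upload()
```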
-
@nuintari I did at one point not long ago look at what it would cost me to store an encrypted backup with some cloud provider. (I'm still at the sneakernet for offsite backup stage, but I do have an obvious place for that.)
It comes out to *per year* roughly the equivalent of one HDD that can hold all my hot data plus some history.
So even if I assume a very aggressive hardware replacement schedule, still a good bit more expensive than my current setup.
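As a back-of-the-envelope check on that comparison (every number below is an assumption for illustration, not a quote):

```python
# Back-of-the-envelope for the cloud-vs-HDD comparison; all assumptions.
hot_data_tb = 2                # hot data plus some history (assumption)
cloud_per_tb_month = 6.0       # USD per TB/month for B2-class storage (assumption)
hdd_price = 150.0              # one HDD big enough for all of it (assumption)

cloud_per_year = hot_data_tb * cloud_per_tb_month * 12
print(f"cloud: ~${cloud_per_year:.0f}/year  vs  one HDD: ~${hdd_price:.0f}")
# With these numbers the yearly cloud bill is roughly the price of one drive,
# so even aggressive drive replacement keeps the local setup cheaper.
```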
-
In the first sentence you mention a "data center", but such an attack would not work on a data center: to be one, you need two buildings with independent power supplies, at a safe distance, etc. I think this was at best a hosting room, not a data center.
@uriel @stefano I worked for years for an ISP/datacenter whose primary datacenter space was on the first level of our office building. We had only one electrical service for the building. It's technically possible to get two, but it would be from the same power company... so when the drunk driver crashed into the transformer and took out our power in winter, it would have taken out both anyway. That actually caused a power surge that destroyed our transfer switch, which is another problem that having two services wouldn't have solved. We did have diesel backup generators, though.
We didn't even have diverse entrances into the building for our fiber for a long, long time either. But we were definitely a datacenter. (My brother still works there; nothing has really changed except increased bandwidth.)
I have never heard of any rules or regulations that require a "datacenter" to have two buildings and independent power. Sounds like something someone made up... -
@stefano I wasn't aware of this kind of problem with internal monitoring and the importance of external monitoring. However, I think it is even more important to monitor the monitoring server, or to have a heartbeat for the monitoring system (external or internal), because the external monitoring system could also fail without anyone being aware of it.
-
@zako sure. Monitoring the monitor is more important than monitoring the services.
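In practice, the heartbeat @zako describes is a dead-man's switch: the monitoring host pushes a periodic ping to an independent receiver, and silence is what triggers the alert. A minimal sketch, with the push URL and interval as placeholder assumptions (Uptime Kuma's "push" monitors and services like healthchecks.io work on this principle):

```python
#!/usr/bin/env python3
"""Dead-man's-switch heartbeat: the monitoring host pings an independent
receiver; if the pings stop, the receiver raises the alarm. URL and
interval are placeholders."""
import time
import urllib.request

HEARTBEAT_URL = "https://example.invalid/ping/MONITOR-ID"  # placeholder push URL
INTERVAL = 60  # seconds; the receiving side alerts if no ping arrives in time

def beat() -> None:
    try:
        urllib.request.urlopen(HEARTBEAT_URL, timeout=10)
    except OSError:
        pass  # failing to ping is fine: the resulting silence IS the alarm

if __name__ == "__main__":
    while True:
        beat()
        time.sleep(INTERVAL)
```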
-
@ricardo@mastodon.bsd.cafe @stefano@mastodon.bsd.cafe @mkj@social.mkj.earth No SPD can protect you from intentional saboteurs (or a faulty grid or wiring) feeding a sustained (not momentary) 380 V into a 230 V system. That easily fries everything electrical in the building when it happens.
-
@stefano knowledge to take out a security system: acquired
-
@niconiconi If my memory serves me well, a couple of years ago we installed some Schneider SPDs at a clinic in the countryside that combined types 1–3 for lightning protection
@stefano @mkj -
@uriel Who officially sets that definition of a datacenter, I wonder?
