@lorenzo @stefano
I think Stefano, the mild-mannered barista of the BSD Cafe who posts pictures of sunsets and of his walks in nature, is just a cover, and in reality he is a tough-as-nails secret military agent chasing cybercriminals around the globe.
See also his comment on my blog post about "just telling people to call the Barista" to make them crap their pants... this Barista has a secret!
-
@gumnos @mwl @Dianora @stefano Both may, for sure, be present at the table. #devicedrivers
-
@EnigmaRotor @gumnos @mwl @stefano Only if the plugboard is also set up right.
-
A few days ago, a client’s data center (well, actually a server room) "vanished" overnight. My monitoring showed that all devices were unreachable. Not even the ISP routers responded, so I assumed a sudden connectivity drop. The strange part? Not even via 4G.
I then suspected a power failure, but the UPS should have sent an alert.
The office was closed for the holidays, but I contacted the IT manager anyway. He was home sick with a serious family issue, but he got moving.
To make a long story short: the company deals in gold and precious metals. They have an underground bunker with two-meter thick walls. They were targeted by a professional gang. They used a tactic seen in similar hits: they identify the main power line, tamper with it at night, and send a massive voltage spike through it.
The goal is to fry all alarm and surveillance systems. Even if battery-backed, they rarely survive a surge like that. Thieves count on the fact that during holidays, owners are away and fried systems can't send alerts. Monitoring companies often have reduced staff and might not notice the "silence" immediately.
That is exactly what happened here. But there is a "but": they didn't account for my Uptime Kuma instance monitoring their MikroTik router, installed just weeks ago. Since it is an external check, it flagged the lack of response from all IPs without needing any alert to be triggered from inside the site.
The team rushed to the site and found the mess. Luckily, they found an emergency electrical crew to bypass the damage and restore the cameras and alarms. They swapped the fried server UPS with a spare and everything came back up.
The police warned that the chances of the crew returning the next night to "finish" the job were high, though seeing the systems back online would likely make them move on. They also warned that thieves sometimes break in just to destroy servers to wipe any video evidence.
Nothing happened in the end. But in the meantime, I had to sync all their data off-site (thankfully they have dual 1Gbps FTTH), set up an emergency cluster, and ensure everything was redundant.
Never rely only on internal monitoring. Never.
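The lesson above is the external vantage point: the check that caught this ran from outside the site. As a rough, minimal sketch of that idea (not the actual Uptime Kuma setup; the IP addresses and webhook URL below are made-up placeholders), an off-site probe can be as small as a cron-driven script like this:

```python
#!/usr/bin/env python3
"""Minimal external reachability probe.

Runs from a host OUTSIDE the monitored site (a VPS, a home box, anywhere
off-site) and alerts when *all* public endpoints stop answering at once:
the "whole site went dark" pattern described above.

The IPs and the webhook URL are placeholders, not real values.
"""
import subprocess
import urllib.request

# Public-facing addresses of the site: ISP routers, the edge router, etc.
ENDPOINTS = ["203.0.113.10", "203.0.113.11", "198.51.100.20"]

# Hypothetical alerting webhook (ntfy, Telegram, Slack... anything external).
WEBHOOK_URL = "https://alerts.example.net/hook"


def is_reachable(host: str) -> bool:
    """Return True if the host answers a single ICMP echo within 5 seconds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "5", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def send_alert(message: str) -> None:
    """POST a plain-text alert to the webhook."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=message.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
    )
    urllib.request.urlopen(req, timeout=10)


def main() -> None:
    down = [host for host in ENDPOINTS if not is_reachable(host)]
    if len(down) == len(ENDPOINTS):
        # Every public IP silent at the same time: treat it as an incident
        # (power cut, sabotage, fibre cut), not a routine blip.
        send_alert(f"ALERT: site unreachable from outside ({', '.join(down)})")
    elif down:
        send_alert(f"WARNING: some endpoints down: {', '.join(down)}")


if __name__ == "__main__":
    main()
```

The only requirement that really matters is that it runs from a machine sharing neither power nor uplink with the monitored site; everything else (ping vs. HTTP checks, webhook vs. SMS) is interchangeable.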
@stefano Have to integrate this story into the pitch for our monitoring service

-
@stefano Good for you. If, next time, you could solve your problems without involving people who are sick at home with a serious family issue on top, that would be great.
-
@fennek Calling these 'my' problems is inaccurate; I am simply providing services to this company and I have no formal contract or obligation regarding this specific issue. I could have easily ignored the alert, especially since I wasn't aware the person in charge was out sick. Despite this, I offered to step in and handle it myself - even though it’s hours away - to help out and allow them to stay home.
-
@EnigmaRotor @gumnos @mwl @Dianora is this board powered by a BSD?
-
@tkr I will - but it's too fresh and still not totally over. When I have all the final details, this will become a blog post.
-
@fennek@cyberplace.social Pretty harsh. As an external provider, @stefano@bsd.cafe likely would have no idea their primary point of contact was sick and/or had a family issue.
Really, if the primary point of contact was out of action, it would be up to the business itself to arrange alternatives, allowing the sick person to stay out of action.
-
@paul @fennek I am familiar with that organization - and I know that the person (the one who was home sick, even if I didn't know he was home sick) has a deep sense of loyalty, but he is not reckless. If he hadn't been well enough, he wouldn't have gone. I even offered to go in his place myself. It is a healthy environment, not "that typical company" that exploits its employees. For obvious reasons, I cannot disclose details (and I work with several similar companies in different areas), but I can guarantee that everyone acted with the utmost respect for human decency. Fortunately, not all businesses operate like malicious entities that only think about harming their employees and collaborators.
I always strive to distance myself from such organizations, as they do not align with my outlook on life and the world.
-
Damn you Stefano.
You just spoiled a future Netflix movie.
Instead of watching `The Power Surge Heist` in 2027, we will get `The Uptime`, with Stefano as the sysadmin.
Following you so I can keep up with all the movies I will be missing.
-
@stefano Thank you for the clarification.

-
@fennek No problem! Indeed, it may not have been entirely clear from the original text.
-
@EnigmaRotor @stefano @gumnos @mwl Is his name Laplace, perchance?
-
@mwl @EnigmaRotor @stefano Was that an offer to buy us all a round of beers at BSDCan? *whistles innocently*
@Dianora @mwl @EnigmaRotor @stefano try Gelato, just sayin'
-
@Keltounet @mwl @EnigmaRotor @stefano Grand Marnier Gelato!
-
@Dianora @Keltounet @mwl @stefano Not bad at all. Must be tasty… I vote for it
