Your Robot Vacuum Is Watching You: The $30K Hack That Exposed Thousands of Smart Homes
$30,000. That's what a security researcher earned. Not for finding a flaw in a bank's API or cracking an enterprise firewall. For proving they could remotely hijack a robot vacuum cleaner, access its camera, download the complete floor plan of your home, and watch you through a device you bought to clean your floors.

Check Point Research found these vulnerabilities in Roborock robot vacuums, one of the most popular smart home brands in the Xiaomi ecosystem. The flaws weren't theoretical. They were fully exploitable, remotely triggerable, and affected thousands of devices sitting in living rooms and bedrooms right now.
This isn't a story about one brand having a bad day. It's about IoT security being broken in ways the industry keeps refusing to address.
What Check Point Actually Found
The vulnerabilities were severe by any reasonable measure. Researchers demonstrated full remote control of the device. Steering it around your home, sure. But the real damage: accessing the camera feed and exfiltrating the detailed room maps the vacuum builds with its LiDAR sensors.

Think about what your robot vacuum knows about you. It has a centimeter-accurate map of your entire home. It knows which rooms you use, how your furniture is arranged, where your doors and windows are. Models with cameras can literally see inside your home. All of this data was accessible to anyone who could exploit the flaws Check Point identified.
Oded Vanunu, Head of Products Vulnerabilities Research at Check Point, said it plainly: users need to be aware of the privacy and security risks of connecting smart devices to their home networks, because even common household appliances can be exploited by attackers.
The $30,000 bounty Xiaomi paid is telling. That's not a token payout. Bug bounty amounts reflect severity, and $30K says the company understood how bad this could have been. To their credit, Xiaomi confirmed and patched the vulnerabilities before Check Point went public. Responsible disclosure worked here. But the fact that these flaws existed at all is the real problem.
The IoT Security Problem Nobody Wants to Fix
I've spent over fourteen years building software systems, and one pattern keeps repeating: security gets treated as a feature rather than a foundation. In IoT, this is dramatically worse.

Traditional software companies have decades of hard-won security practices. Bug bounties, dedicated security teams, automated patch pipelines. IoT manufacturers? Many are hardware companies that bolted on a cloud backend as an afterthought. The firmware update story is often terrible. Authentication is frequently weak. And the attack surface is massive because you're combining embedded systems, cloud APIs, mobile apps, and local network protocols into a single product.
The Roborock case is actually one of the better outcomes. Xiaomi has a mature bug bounty program. They responded. They shipped a patch. Now think about the hundreds of no-name smart home devices on Amazon with zero security infrastructure behind them. The $20 smart camera from a brand you've never heard of. The Wi-Fi air purifier that hasn't received a firmware update since it shipped.
I wrote about how Wikipedia's security resilience prevented a mass hack, and the lesson there was straightforward: security culture matters more than any single technical control. The IoT industry doesn't have a security culture. It has a shipping culture.
Your Smart Home Is an Attack Surface
If you have more than five smart devices on your home network, you are running an attack surface that most small businesses would find concerning. That's not hyperbole.
Every smart device is a computer. Your robot vacuum is a computer with wheels, a camera, LiDAR, Wi-Fi, and a persistent connection to a cloud backend. Your smart thermostat is a computer on your wall. Your video doorbell is a computer pointed at your front door. Each one runs firmware that was probably written under deadline pressure by a team that may or may not have had a security review.
The Roborock vulnerability specifically allowed remote code execution. That's the worst category of security flaw. An attacker doesn't need to be on your network. They don't need physical access. They can reach your device from anywhere on the internet. And once they have code execution on one device inside your home network, lateral movement to everything else gets a lot easier.
I've seen this pattern play out in production systems. One compromised endpoint becomes the beachhead for a much larger attack. The difference is that enterprise environments typically have monitoring, segmentation, and incident response. Your home network has a consumer router running default firmware from 2019.
A robot vacuum with a camera and an internet connection isn't a cleaning tool. It's a surveillance device that happens to also clean your floors.
We keep thinking of these devices as appliances. They're not. They're networked computers with sensors, and we need to evaluate them with the same rigor we'd apply to any other networked computer.
What Engineers Should Actually Do About This
If you're building IoT products, the Roborock disclosure should be a case study your team reviews. The specific technical failures — authentication, API security, firmware integrity — matter less than the systemic question: does your organization treat security as a first-class concern in the product development lifecycle?
A few things that actually move the needle:
Network segmentation is not optional. Put your IoT devices on a separate VLAN. Most modern consumer routers support guest networks at minimum. Your robot vacuum should not be on the same network segment as your laptop with your SSH keys and source code.
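Before segmenting, it helps to know what's actually exposed. Here's a minimal audit sketch in Python; the subnet prefix and port list are assumptions for illustration (a real audit would use a proper tool like nmap):

```python
# Minimal sketch: check which hosts on a hypothetical IoT subnet accept TCP
# connections on ports commonly left open by smart devices. The subnet and
# port list are illustrative assumptions, not from any vendor documentation.
import socket

RISKY_PORTS = [23, 80, 1883, 5555]  # telnet, http, mqtt, adb


def open_ports(host, ports, timeout=0.3):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            # Connection refused or timed out: treat the port as closed.
            continue
    return found


def audit_subnet(prefix="192.168.50.", ports=RISKY_PORTS):
    """Yield (host, open_ports) for every responsive host on a /24."""
    for last_octet in range(1, 255):
        host = f"{prefix}{last_octet}"
        found = open_ports(host, ports)
        if found:
            yield host, found
```

Run something like this from a machine on the IoT VLAN and you'll quickly see how many devices are listening on ports they have no business exposing.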
Firmware updates need to be automatic and verified. If your device can't update itself securely, it will eventually become a liability. The Roborock patch reached users because Xiaomi has OTA update infrastructure. Many IoT vendors don't, and that's inexcusable in 2025.
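What "verified" means in practice: the device refuses to flash any image whose digest doesn't match the manifest. A minimal sketch of just the integrity check, in Python for clarity; real OTA pipelines verify an asymmetric signature (e.g. Ed25519) over the manifest, and every name here is illustrative rather than any vendor's actual API:

```python
# Sketch of the integrity check in a verified OTA update: compare the image
# digest against the manifest value before handing it to the flash writer.
# A production pipeline would additionally verify a signature over the
# manifest itself -- this shows only the digest comparison.
import hashlib
import hmac


def verify_firmware(image: bytes, expected_sha256_hex: str) -> bool:
    """Compare the image digest to the manifest value in constant time."""
    actual = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)


def apply_update(image: bytes, expected_sha256_hex: str) -> None:
    """Refuse to proceed unless the image matches the manifest digest."""
    if not verify_firmware(image, expected_sha256_hex):
        raise ValueError("firmware digest mismatch -- refusing to flash")
    # ...hand the verified image to the flash writer here...
```

The point is the failure mode: a corrupted or tampered image is rejected before it ever touches flash, not discovered after the device is bricked or backdoored.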
Minimize data collection at the source. Does your vacuum actually need to upload a detailed map of your home to a cloud server? Does it need a camera that streams over the internet? The principle of least privilege applies to data collection, not just access control. Having worked with systems that process sensitive data, I can tell you the best security for data is not collecting it in the first place.
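To make that concrete, here's a hedged sketch of on-device minimization: the centimeter-accurate occupancy grid never leaves the vacuum, and only coarse aggregates get uploaded. The grid format and field names are assumptions for illustration, not any vendor's real schema:

```python
# Sketch of data minimization for a mapping device: reduce a detailed
# occupancy grid (1 = obstacle, 0 = free) to aggregate statistics before
# anything leaves the device. Grid format and output fields are assumptions.

def summarize_map(grid, cell_cm=5):
    """Return coarse area totals; the detailed geometry stays on-device."""
    cells = sum(len(row) for row in grid)
    occupied = sum(sum(row) for row in grid)
    cell_m2 = (cell_cm / 100) ** 2  # area of one grid cell in square meters
    return {
        "total_area_m2": round(cells * cell_m2, 2),
        "free_area_m2": round((cells - occupied) * cell_m2, 2),
    }
```

A cloud backend that only ever receives area totals can't leak your floor plan, no matter how badly it's breached.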
The parallel to prompt injection remaining OWASP's top LLM vulnerability is hard to ignore: we rush to ship AI-powered features while security consistently lags behind capability. Same dynamic in IoT. The camera, the mapping, the cloud integration all ship on day one. The security audit happens after the breach.
The $30K That Should Have Been $300K
Here's my honest take: $30,000 is too little for what Check Point found. A vulnerability giving an attacker remote access to the cameras and floor plans of thousands of homes is worth far more than that to a malicious actor. Responsible disclosure paid $30K. The black market for this kind of access would pay multiples. That math is a problem the entire industry needs to sit with.
Bug bounty programs work, but only when the payouts reflect the actual value of the vulnerabilities. Xiaomi running a bounty program at all puts them ahead of most IoT manufacturers. The economics of vulnerability disclosure still favor the wrong side, though.
As I explored when looking at the security nightmares lurking in vibe-coded applications, we keep building faster without securing what we've built. IoT is the most extreme version of this. Billions of devices, thin margins, pressure to ship, security as an afterthought.
The next Roborock-scale disclosure won't be about vacuums. It'll be about the AI-powered home assistant robot with microphones, cameras, and the ability to physically move through your house. We're adding more sensors, more compute, and more connectivity to our homes every year. The attack surface isn't shrinking.
If you're an engineer working on connected devices, the question isn't whether your product has vulnerabilities. It does. The question is whether you'll find them before someone else does. And whether your organization can respond in hours instead of months when that day comes.
Photo by Vika Strawberrika on Unsplash.


