Congressional Democrats on the Joint Economic Committee released a report this week pinpointing more than $20.9 billion in consumer losses from identity theft linked to four major breaches of data broker firms. US senator Maggie Hassan launched the inquiry in August after an investigation by The Markup and CalMatters, copublished by WIRED, found that some data brokers were hiding opt-out tools from Google and other search engines.
The US Department of Justice’s recent release of 3 million documents related to convicted sex offender Jeffrey Epstein included grand jury subpoenas to Google that shed light on how federal investigators interact with tech companies and how those companies respond to government requests for information.
The Mexican drug cartel CJNG may survive the killing of its longtime leader Nemesio “El Mencho” Oseguera Cervantes in part thanks to its prolific use of technologies like drones, social media, and AI. Meanwhile, the Mexican Navy announced on Thursday that it had seized a semi-submersible vessel carrying nearly 4 tons of cocaine as part of a recent initiative to deter drug trafficking in the Pacific Ocean. The effort comes as the US has launched its own purported campaign against maritime trafficking via a series of deadly attacks on boats in the Caribbean.
Meanwhile, as AI assistant agents like OpenClaw explode in popularity—and sow chaos around the web—a new open source project called IronCurtain is using a unique design to secure and constrain agentic AI before it can go rogue.
And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
Setting an autonomous internet-enabled robot loose in your house should give anyone a moment’s pause. When that robot is a roving vacuum cleaner equipped with a camera and microphone that could be hijacked from anywhere in the world with nothing more than its serial number, it becomes an actual privacy horror story.
One such robovac owner, Sammy Azdoufal, discovered that absurd security vulnerability while attempting an experiment in piloting his DJI Romo robot vacuum cleaner with a PS5 controller. He found that he could instead control 6,700 of the robots in 24 countries around the world, with full access to the floor plans they generated of their owners’ homes and their video and audio feeds. When The Verge contacted Azdoufal, he was able to instantly access a Romo owned by a staffer at the tech news outlet just by knowing its 14-digit serial number. DJI has now fixed the vulnerability in response to Azdoufal essentially live-tweeting his findings. But the story nonetheless raises serious questions about the security of other audio- or video-enabled internet-of-things gadgets—not to mention ones capable of freely roaming your home.
While the Department of Homeland Security has been hugely empowered under the Trump administration in its mission to deport millions of immigrants, the organization within DHS that serves as the United States’ primary cyber defender, the Cybersecurity and Infrastructure Security Agency, has been neglected. Now its acting director, Madhu Gottumukkala, has been replaced as CISA seeks to find new footing.
Even before that news, CyberScoop this week reported on the crises that have plagued the agency in the year since Trump’s inauguration: A third of the staff has been laid off, and entire divisions of the agency have been closed. Nominations for a permanent director have been blocked in Congress. Its capabilities have withered, and organizations that had sought out CISA for assistance and partnerships have looked elsewhere. Gottumukkala has suffered scandals of his own, such as ousting security personnel after he failed a polygraph test and uploading sensitive contracts to ChatGPT. Now Nick Andersen, who has served as CISA’s executive director for cybersecurity, will replace Gottumukkala at the beleaguered agency.
A researcher at King’s College London pitted three popular large language models against each other in simulated war game scenarios and found that, 95 percent of the time, at least one of the models opted to deploy tactical nuclear weapons. The researcher also found that when an AI model deployed a tactical nuclear weapon, its AI opponent deescalated only a quarter of the time. None of the companies behind the three models—OpenAI, Google, and Anthropic—responded to New Scientist’s request for comment.
AI’s role in war-fighting has lurched into the spotlight this week. Anthropic and the Department of War are embroiled in a contract dispute over whether Anthropic’s AI models can be used to power fully autonomous weapons and mass domestic surveillance. Dario Amodei, Anthropic’s CEO, wrote in a statement that these types of use cases “can undermine, rather than defend, democratic values.” In turn, President Donald Trump has threatened to ban the use of Anthropic products, including its Claude chatbot, within the US government. Meanwhile, hundreds of Google and OpenAI employees have signed an open letter asking their bosses to “put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”
A new app for Android phones called Nearby Glasses lets users scan for smart glasses in their vicinity, revealing the presence of the wearable gadgets, which are sometimes indistinguishable from ordinary glasses and let wearers record people without their knowledge. The app scans for the unique Bluetooth signatures that the glasses emit and sends users a notification if it detects a nearby source.
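The core idea behind that kind of detection can be sketched in a few lines. This is a hypothetical illustration, not Nearby Glasses’ actual code: Bluetooth Low Energy advertisements carry a 16-bit company identifier assigned by the Bluetooth SIG, and a scanner can flag any advertisement whose company ID matches a watchlist of smart-glasses vendors. The vendor IDs in the watchlist below are made-up placeholders, not real assignments.

```python
# Hypothetical sketch of smart-glasses detection via BLE advertisements.
# BLE scanning libraries typically expose, per advertisement, a mapping
# from the 16-bit Bluetooth SIG company ID to vendor-specific bytes.

# Placeholder watchlist -- NOT real Bluetooth SIG company IDs.
SMART_GLASSES_VENDOR_IDS = {0x1234, 0xABCD}

def is_smart_glasses(manufacturer_data: dict) -> bool:
    """Return True if any advertised company ID is on the watchlist.

    manufacturer_data maps a company ID (int) to that vendor's
    advertisement payload (bytes).
    """
    return any(company_id in SMART_GLASSES_VENDOR_IDS
               for company_id in manufacturer_data)

# An advertisement from a watchlisted vendor triggers a match;
# one from an unrelated device (0x004C is Apple's real company ID) does not.
print(is_smart_glasses({0x1234: b"\x01\x02"}))  # True
print(is_smart_glasses({0x004C: b"\x10\x05"}))  # False
```

In practice an app like this would run a continuous BLE scan, feed each advertisement through a check like the one above, and raise a notification on a match; it may also fingerprint service UUIDs or device names rather than company IDs alone.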
The developer told 404 Media that he was inspired to build the app after reading about several incidents involving smart glasses. Over the summer, 404 Media reported that a Customs and Border Protection agent had donned a pair during an immigration raid, and this fall the outlet also reported that men were using smart glasses to film massage parlor workers, seemingly without their knowledge or consent. In February, The New York Times reported that one smart-glasses developer, Meta, had plans to integrate face recognition into its glasses, spurring fresh concerns among privacy experts.