A design flaw – or expected behavior based on a bad design choice, depending on who is telling the story – baked into Anthropic’s official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers.
AI a security risk? Can’t be! 🙄
It’s even worse than that. The server software (released by Anthropic) that lets an AI connect to a web service has a critical remote code execution bug. So if you so much as let an AI connect to your service, you’ve allowed anyone to access your whole server.
There is no excuse for this other than wild incompetence.
Wait, but Mythos is the revolution in the software security world; it found 0-days in all popular OSes, including FreeBSD. I’m sure it would have found the critical bugs in its own code! /s
AI isn’t a security risk if you know how to use the tool. Just add the line “Make no mistake” to the prompt. Not even a “please” is needed.
Modern problems require modern solutions.
I think the biggest thing that blows my mind about this whole AI rush is that we were finally starting to get security ingrained in people’s minds: getting them to understand the risks of data exfiltration and reputational damage, even holding companies responsible for data breaches. And then… everything gets thrown out the window because AI.
I can’t understand what this article is talking about.
When I create and run a simple MCP server, I decide what commands it’s able to run. I can decide whether the interface is stdio or HTTP with SSE. So I can’t see how someone could send me a request for “rm -rf /” that would actually run, unless running it is part of the intended features.
Maybe the protocol design leaves that open, but I don’t think even negligence would be enough to introduce this flaw, because it’s easier NOT to do it.
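The commenter’s point can be sketched in plain Python. This is a hypothetical, minimal MCP-style stdio dispatcher (not the official MCP SDK): the server author explicitly registers each tool, and a request naming anything else, including “rm -rf /”, only gets an error back, because nothing ever routes the string to a shell.

```python
import json
import sys

# Hypothetical registry: only tools the server author explicitly
# registers are callable. Requests for anything else are rejected.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    """An example of an intentionally exposed feature."""
    return a + b

def handle(request: dict) -> dict:
    """Dispatch one JSON request; unknown tool names never reach a shell."""
    name = request.get("tool")
    if name not in TOOLS:
        return {"error": f"unknown tool: {name!r}"}
    return {"result": TOOLS[name](**request.get("args", {}))}

def serve_stdio():
    """stdio transport: one JSON request per stdin line, reply on stdout."""
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

Under this (assumed) design, a malicious caller sending `{"tool": "rm -rf /"}` hits the `name not in TOOLS` branch; the flaw in the article would require the implementer to deliberately wire untrusted input into command execution.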