
Most of us use AI conversationally. Models like Claude and ChatGPT are great for asking questions about unfamiliar topics, making comparisons, and getting advice. However, there are other types of AI, like AI agents that can actually do things for you. Recently, OpenClaw has been gaining a lot of attention on that front, but unfortunately, its AI extensions are proving to be a security nightmare.
OpenClaw AI extensions are a security issue
For those unfamiliar, OpenClaw is the same project previously known as Moltbot and Clawdbot, having been rebranded a couple of times since launch. It is an open-source agentic AI, a type of AI that can complete tasks for you beyond searching the web and generating text. What’s great about it is that you can self-host it, in case you’re worried about privacy and about companies like Anthropic or OpenAI having access to your conversations.
Also, thanks to its open-source design, it lets users submit their own “skills,” which you can think of as something like Google Chrome extensions. Unfortunately, this is where things get a bit dicey. In a recent blog post, 1Password product VP Jason Meller claims that OpenClaw’s AI extensions have become a security issue.
Basically, OpenClaw’s open-source design is both its strength and its weakness. According to OpenSourceMalware, the tracking platform has found at least 28 malicious skills published on the ClawHub marketplace, along with 386 malicious add-ons uploaded between late January and the first week of February.
Working swiftly
The good news is that OpenClaw is aware of this. In a post on X, OpenClaw’s creator, Peter Steinberger, said he is working on ways to make the platform more secure. This includes a new GitHub requirement: OpenClaw still allows anyone to upload skills, but uploaders must have a GitHub account that is at least a week old.
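To illustrate how lightweight that kind of gate can be, here is a minimal TypeScript sketch, assuming the check is made against GitHub’s public REST API (GET /users/{username}, which returns the account’s created_at timestamp). The function name and threshold constant are illustrative assumptions, not OpenClaw’s actual code.

```typescript
// Minimal sketch of a GitHub account-age gate (illustrative, not OpenClaw's code).
// Assumes the public GitHub REST API: GET /users/{username} returns `created_at`.

const MIN_ACCOUNT_AGE_MS = 7 * 24 * 60 * 60 * 1000; // one week

async function isAccountOldEnough(username: string): Promise<boolean> {
  const res = await fetch(`https://api.github.com/users/${username}`);
  if (!res.ok) {
    return false; // unknown or unreachable accounts fail the check
  }
  const user = (await res.json()) as { created_at: string };
  const ageMs = Date.now() - new Date(user.created_at).getTime();
  return ageMs >= MIN_ACCOUNT_AGE_MS;
}
```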
Next, OpenClaw is introducing a reporting system. Signed-in users can report a skill; each report must include a reason and is recorded. Skills that receive more than three unique reports are automatically hidden. Moderators can unhide a skill if they deem the reports false or inaccurate, but they can also delete the skill and ban the offending users.
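Here is a minimal sketch of how a report-threshold rule like that could work, counting unique reporters rather than raw report submissions. The type names and threshold constant are illustrative assumptions, not OpenClaw’s actual implementation.

```typescript
// Illustrative sketch of a report-threshold auto-hide rule (not OpenClaw's code).

interface Report {
  reporterId: string; // the signed-in user who filed the report
  reason: string;     // every report must include a reason
}

interface Skill {
  id: string;
  hidden: boolean;
  reports: Report[];
}

const AUTO_HIDE_THRESHOLD = 3; // hide once more than 3 unique users report

function applyReport(skill: Skill, report: Report): void {
  skill.reports.push(report); // every report is recorded
  const uniqueReporters = new Set(skill.reports.map((r) => r.reporterId));
  if (uniqueReporters.size > AUTO_HIDE_THRESHOLD) {
    skill.hidden = true; // moderators can still unhide it after review
  }
}
```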
To be fair to OpenClaw, it is still a fairly new platform, so some teething issues are to be expected. These are not small issues, though, so it’s reassuring to see that action is being taken.