Innovation accelerates faster than governance
OpenClaw illustrates a familiar pattern: convenience drives adoption, accountability follows after consequence.
If you need proof of the acceleration of time in contemporary IT, consider the ClawdBot Moltbot OpenClaw case study. Here’s the tl;dr: a coder-entrepreneur tinkers with a personal project for a few weeks, open-sources the thing on github and it takes off vertically. Within 72 hours it is massively popular; on its third name after 2 separate trademark challenges; sparks an unrelated coin issuance on Solana that ends predictably with a rug-pull; and attracts equally predictable exploitation of its permissive security architecture resulting in account hijacking and large-scale exfiltration of user data and credentials.
Once again, IT security norms are challenged by viral adoption, opportunistic participation in an open ecosystem and naïve trust.
How did we get here?
Allow me to introduce you to OpenClaw. Peter Steinberger imagined an AI assistant that would tackle the digital chaos and administrivia of daily life. What if he could offload the tedium of scanning his inbox, booking lunch or checking ten times a day whether that stick of memory has come down in price (spoiler: it hasn’t)? What if he could be provided each morning with a summary of activity on his favourite subreddits and github accounts or each week with a round up of white papers and specialist discussion in his field of interest? Surely others had their own priorities and digital workflows that could benefit a broader community of productivity enthusiasts in their personal and professional lives.
What OpenClaw actually is (and what it is not)
OpenClaw is an autonomous, persistent AI agent that orchestrates tools and models locally and externally using LLM and other APIs. It operates with the user’s permissions, including file system access, browser control, script and binary execution and shell access.
Contrast this with the bounded, stateless and supervised paradigm of LLM chatbots and copilot add-ons to your email app or your text composition software.
Nate B. Jones, whose substack is here, provides one of the clearest explanations I have read of the stack summarising it thusly: “you own the agent layer, you rent the intelligence”.
Simply put, the agent is the executive function.
People love it because who is against productivity?
Users are various degrees of blown away by the seemingly boundless possibilities according to testimonials scrolling on OpenClaw’s landing page. Nate Jones called it a “Star Trek moment” on his podcast.
OpenClaw is always on, responsive to text and voice commands via your keyboard or a host of common messaging apps. It self extends and self improves.
Use cases abound from the prosaic delegation of back of house digital drudgery to the automation of front of house creative tasks such as website creation.
The tool is only a few weeks old so users at the time of writing are necessarily tech-enthusiastic early adopters. Notwithstanding, for certain users such as solo business operators, SME owners, freelancers and, of course, purveyors of productivity advice OpenClaw is genuinely transformative.
Herein lies a danger of category confusion and blind spots
The siren call to the mariners that are today’s small business operators is seductive. “Automate, digitise, simplify, outsource, put it in the cloud...” they are told and many of these suggestions make a lot of sense. But increasingly data is not only valuable to organisations; it’s critical, disproportionately so for small ones. What must not be outsourced is the assessment of cyber risk and the crafting of mitigation measures.
Organisations must categorise their data, distinguishing productivity systems (eg internal messaging) from production systems (eg an ERP).
Within those broad categories, information may be more or less strategic. The key question here must be “what would I most like to learn about my competitor?” and the inference is that that is the information you must most preciously guard. What strategic information do your meeting notes contain? What could be learned from the CEO’s diary or her travel arrangements? What do your to do lists reveal about your strategic or organisational priorities?
The agent is indifferent to your internal taxonomy. Indeed it makes no distinction between your personal data and your professional data. If granted access, it will operate across the entire landscape.
Without policies grounded in risk assessment, users may begin by delegating low-stakes cognitive labour and can, without noticing, graduate to delegating higher and higher-stakes organisational memory.
OpenClaw is a node in an agentic ecosystem
A software ecosystem has sprung up around OpenClaw including, in no particular order, Moltbook, a social network including human users and AI agent contributors; Clawhub, a “skill dock” aka a marketplace for OpenClaw skills; and Clawdstrike, a security toolbox for the OpenClaw ecosystem. Clawhub announces on its landing page that there is “no gatekeeping” which should be interpreted as caveat emptor: there are no common standards or QA, no code inspection or validation, no moderation and, at least initially, no means of reporting malicious content (though this point has seemingly been addressed recently on OpenClaw’s github).
The security implications are immediately apparent. Consider the free availability of unverified, unsigned code packages, OpenClaw’s implicit trust of all imported code, the multiple avenues for prompt injection via files the agent may inspect as it was instructed and configured to do by the user. Every imported skill expands the attack surface. The holes in the Swiss cheese of Professor James Reason’s conceptual model are aligned.
The functionality-convenience-security tradeoff
OpenClaw’s tagline is “The AI that actually does things” - functionality. Convenience is also highlighted: “…all from WhatsApp, Telegram, or any chat app you already use”. As for security, it is clearly stated further down under “Full System Access”: “…full access or sandboxed, your choice.” Looking into the default setup we see:
Important: sandboxing is off by default. If sandboxing is off, host=sandbox runs directly on the gateway host (no container) and does not require approvals. To require approvals, run with host=gateway and configure exec approvals (or enable sandboxing).Even without a task, or between tasks, OpenClaw performs housekeeping functions that require LLM API calls; more on that later. Whenever a user calls on the assistant to perform a task or the autonomous agent takes it upon itself, some access and resources must be brought to bear: more LLM tokens, access to email locally or cloud based, files locally or remotely, calendars, app APIs and so on. If the user wishes to secure a reservation or capture a dip in the price of a plane ticket she’s asked OpenClaw to keep tabs on, it would be convenient if the agent could act immediately with the authority granted to it over her credit card.
Novice users who do not think in terms of threat models, who do not audit code, who believe that “local” equates to “safe” are particularly likely to be underweight security in their analysis of the functionality/convenience/security tradeoff.
Even an individual user must consider the broader potential impacts of a cyber mishap, whether those be reputational or for example on their credit score. For the business owner, however, this is not a lifestyle decision and regulated entities should absolutely seek legal advice. It must be viewed through a governance lens with consideration to delegations of authority, fiduciary duties, data protection responsibilities, supplier and vendor risk management as well as a host of other considerations.
OpenClaw already introduces material risk. Users should not compound that risk through negligence.
CEOs should already have asked their CIO and IT chiefs to have policies and concrete measures in place to ensure that employees are not using tools such as OpenClaw even with the best of intentions.
Unintended economic consequences: token burn and misaligned incentives by default
Over the weeks since OpenClaw has been live common themes of tutorials and articles have come in waves. First was the “capabilities and potential” wave; then the “security alert” wave and most recently the “this can be unexpectedly expensive” wave. There are examples of monthly token quotas being exhausted in a matter of hours or days.
There are two main causes: unnecessary, overly verbose LLM API calls and routing tasks to expensive, high-capability models when a cheaper, smaller model would suffice. The good news is that there is tremendous scope for optimisation and auditing of token use (which is a proxy for cost). Users should capitalise on the experience that has been shared online.
Firstly, OpenClaw performs background tasks that trigger LLM API calls. An internal “heartbeat” can be thought of as regular housekeeping task that checks that the service is still running and that nothing has been left undone. There is also a cron-like function that triggers tasks on a schedule. The user might adjust the frequency and/or choice of resource, perhaps even opting for a locally hosted LLM with a marginal token cost of zero.
In addition, OpenClaw’s exchanges with the LLMs are by default extremely verbose and therefore costly in tokens. By default, each LLM call includes the full conversational history and context, dramatically increasing token consumption without proportional gain in inference quality.
Secondly, AI models have different capabilities and within each “stable” there are horses for courses at a variety of pricing tiers. Just taking Anthropic as an example, the pricing rises between the Haiku tier, the Sonnet tier and the Opus tier reflecting more and more powerful models. The user intuitively arbitrates in real time between the value of timeliness, depth and breadth of analysis, precision of a recommendation and a multitude of other factors so it’s up to the user to instruct OpenClaw on which tier of which LLM is most appropriate for which task.
OpenClaw has no native cost awareness, nor should it be expected to. A key governance principle is that constraint must be encoded; in this case, budgetary expectations must be explicitly passed to the agent in one way or another. Otherwise it resembles giving the keys to the house and the wine cellar, the keys to the car and credit card credentials to the teenagers and leaving for the weekend. What’s the worst that can happen?
A systems observation: innovation and convenience first, then security
LLMs, the intelligence layer of the agentic stack, are sufficiently new that only a limited set of governance norms has stabilised. OpenClaw is disruptive because it introduces a persistent, autonomous executive layer into a governance environment where norms are even less mature.
We are witnessing another manifestation of a familiar pattern in software systems. Convenience almost invariably outcompetes security in early adoption. Ecosystems simultaneously amplify innovation and extend the attack surface.
In complex systems, defaults matter more than documentation. Most users never read the latter.
What does this tell us?
The signal is familiar. With competing interests, the incentives are diffuse. Developers optimise for capability. Entrepreneurs optimise for growth. Security professionals optimise for mitigation and containment. Victims appear only after the fact. In that patchwork of responsibility, governance lags.
In liberal market economies, regulation typically crystallises after harm. Anticipatory constraints are frequently characterised as overreach.
OpenClaw illustrates the increasing pace and pervasiveness of disruptive technologies, shortening the time between innovation and consequence. Governance does not accelerate at the same rate.
