Stop Open Sourcing Shit

We are living through a strange moment in software. Agentic coding, autonomous systems, and near-zero cost creation have collapsed the distance between idea and execution. What once took teams now takes an afternoon. What once demanded coordination now happens in isolation. Software is everywhere, constantly produced, endlessly shipped, and increasingly disposable.

That alone is not the problem.
The problem is what we are doing with that power.

In It’s the End of SaaS as We Know It (And I Feel Fine), I argued that the traditional SaaS model is losing relevance in a world where software can be shaped continuously and individually. In AI Killed Product Management, I argued that the rituals we used to justify and control software creation no longer make sense when execution is cheap and fluid.

This piece is the uncomfortable continuation of that line of thought. Not everything needs to be a product, and not everything should be open source.

SaaS Is Dead, but the Reflex Remains

For a long time, building software meant building a product. You needed a roadmap, a backlog, and a story explaining why this thing deserved to exist. Product management became the gatekeeper. If it did not scale, it was not worth doing. If it could not be monetized, it was not serious.

That world is fading.

Individual software now occupies a different space. Bespoke tools, personal systems, and narrowly scoped utilities solve real problems without aspiring to become companies. They do not need growth strategies or pricing pages. They simply need to work.

But as the SaaS reflex weakens, another reflex takes its place. If something is not turned into a product, it gets open sourced. Openness becomes the default outcome rather than a deliberate choice. This feels generous. Often, it is just a way of avoiding responsibility.

Open Source Was Built on Friction

Open source has always carried expectations. It invites use, contribution, and reliance. Historically, those expectations were balanced by friction. Contributing required effort, familiarity, and time. That friction filtered intent and created a baseline of trust.

AI removed that filter.

Today, anyone can contribute instantly and at scale. Contributions can look correct while carrying little signal about understanding, judgment, or long-term care. Code is easy. Validation is not. Maintenance is not. Security is not.

Publishing something as open source under these conditions is not a neutral act. It creates surface area for failure. It invites people into systems that may not be stable, intentional, or safe to rely on.

Trust erodes not through malice, but through indifference.

When Openness Meets Autonomous Agents That “Work”

OpenClaw is a useful example of this shift, not just in how software is built, but in how work itself is imagined.

As an agent-coded chatbot project, OpenClaw captured attention by embodying a compelling idea: autonomous agents that can operate, extend themselves, and meaningfully “work” in the world. Its rapid spread reflected genuine excitement around that possibility.

That same dynamic introduced new challenges. Contributions arrived faster than any human team could reasonably review. Alongside thoughtful additions came auto-generated patches, speculative changes, and loosely integrated behaviors produced by other agents. Over time, security issues and scam-like patterns began to surface, not as a moral failure, but as a structural consequence of openness combined with autonomy.

When agents are allowed to act rather than just assist, the surface area grows non-linearly. Review becomes harder. Accountability becomes blurrier. The cost of care concentrates on fewer people.

This does not negate OpenClaw’s apparent success. It highlights a tension.

Excitement and Quality Do Not Scale the Same Way

Agentic systems are very good at generating momentum. Autonomous behavior is compelling. Distribution is fast. Participation becomes nearly frictionless.

Quality is different. Quality requires consistency, shared understanding, and ongoing judgment. These are slow, human processes that do not automatically benefit from scale.

Whether OpenClaw can translate early momentum into long-term reliability remains an open question, and it is one many projects in this space will face. Over time, novelty fades. Expectations rise. What remains is whether the system is understandable, dependable, and safe.

In the long run, quality tends to outlast excitement.

Care Is the Real Cost of Open Source

Open sourcing something is a statement. It says this matters. It says others can depend on this. It says someone is willing to stand behind it.

That does not mean perfection or infinite support. But it does mean responsibility. It means defining scope. It means deciding what belongs and what does not. It means accepting that every additional contributor, human or agent, carries a cost.

“Move fast and break things” was already fragile advice inside companies. In open source, it becomes destructive. Breaking your own system is learning. Breaking a shared system externalizes the cost to people who trusted you.

If you are not willing to absorb that cost, openness is not generosity. It is offloading.

Trust Is Now a System Design Problem

AI eliminated the natural barrier to entry that once allowed open source projects to trust by default. Some people ignore this shift. Others complain without changing anything.

A more interesting response is to design for reality.

Mitchell Hashimoto’s vouch is one such attempt. It introduces explicit trust management into open source projects by making participation conditional rather than assumed. Contributors vouch for contributors. Projects decide how trust works for them.
https://github.com/mitchellh/vouch

This is not a solution. It introduces governance, exclusion, and new forms of friction. But that is precisely the point. The fact that such systems are necessary at all tells us the old assumptions no longer hold.

What once emerged organically now requires boundaries.
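The core idea behind vouching can be illustrated with a toy trust-graph check. This is a hypothetical sketch of the general concept, not vouch's actual data model, file format, or API: a contribution is trusted only if its author is reachable from the maintainers through a chain of vouches.

```python
# Hypothetical illustration of contributor vouching (not vouch's real API):
# an author is trusted if they are a maintainer, or if someone already
# trusted has vouched for them, directly or transitively.
from collections import deque


def is_trusted(author: str, maintainers: set[str],
               vouches: dict[str, set[str]]) -> bool:
    """Breadth-first walk of the vouch graph starting from maintainers."""
    seen = set(maintainers)
    queue = deque(maintainers)
    while queue:
        person = queue.popleft()
        if person == author:
            return True
        for vouched in vouches.get(person, set()):
            if vouched not in seen:
                seen.add(vouched)
                queue.append(vouched)
    return False


# alice (maintainer) vouches for bob, bob vouches for carol.
vouches = {"alice": {"bob"}, "bob": {"carol"}}
print(is_trusted("carol", {"alice"}, vouches))    # reachable via alice -> bob
print(is_trusted("mallory", {"alice"}, vouches))  # no vouch chain exists
```

Even this toy version makes the trade-off visible: every accepted contributor expands the trust frontier, and revoking one vouch can cut off an entire subtree. Governance is now part of the system, not an afterthought.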

The Hard Question No One Wants to Ask

The uncomfortable possibility is that open source, as we know it, may not survive this transition.

Most contributions will no longer be original. They will be agent-mediated, synthesized, and assembled rather than authored. The cost of producing code approaches zero, while the cost of reviewing, understanding, and securing it continues to rise.

At that point, openness stops being an obvious virtue. If contributions are cheap and indistinguishable, participation becomes noise. Trust systems can mitigate this, but they also narrow the space. They trade openness for survivability.

If fewer people are contributing understanding, judgment, and long-term care, then we need to ask what openness actually means.

What Might Remain

This does not mean collaboration disappears. It means it changes shape. Smaller circles. Explicit trust. Narrower scope. More closed by default, more open by intention.

Some projects will survive by becoming highly curated. Others will retreat into private or semi-private spaces. Many will simply stop accepting external input, because the cost of openness exceeds its value.

That is not a moral failure. It is an adaptation.

Care Is the Constraint

AI makes code abundant. Trust, understanding, and responsibility remain scarce.

Open source cannot be sustained on abundance alone. Tools like vouch make the fracture visible, but they do not close it. They force us to confront trade-offs that were previously hidden by friction.

We are no longer deciding how to manage open source better. We are deciding whether the old assumptions still hold at all, and that is a much harder problem than tooling will ever solve.
