
Vibe Coding and the Open Source Crisis from AI Code

Jamin Mahmood-Wiebe



In January 2026, Daniel Stenberg — the creator and sole full-time maintainer of cURL, the networking library embedded in roughly ten billion installations worldwide — closed his project's bug bounty program. The program had run for six years. According to Stenberg, he shut it down because one in five submissions was AI-generated garbage: fabricated vulnerability reports that read plausibly enough to demand manual review, but fell apart under any real scrutiny. The overall validity rate of reports had dropped to 5%.

This was not an isolated incident. Across the open source ecosystem, a wave of low-quality AI-generated contributions was overwhelming maintainers who already worked on razor-thin margins of time, energy, and funding. The phenomenon acquired a name: AI Slopageddon.

1.7x more bugs in AI-generated code — CodeRabbit study, Dec 2025
20% of cURL submissions were AI garbage — Daniel Stenberg, Jan 2026
80% revenue drop at Tailwind CSS — InfoQ, Feb 2026

AI Slopageddon: When Open Source Drowns in AI Contributions

The responses from major open source projects have been swift and increasingly drastic.

cURL closed its HackerOne bug bounty entirely. Stenberg documented how AI-generated reports were wasting hundreds of hours of maintainer time — hours that should have gone into actually improving the software. The fake reports were sophisticated enough to bypass initial screening but fundamentally nonsensical: invented CVE numbers, hallucinated code paths, plausible-sounding exploit chains that targeted functions that did not exist.

Ghostty, the terminal emulator by Mitchell Hashimoto (founder of HashiCorp), implemented one of the most explicit AI contribution policies in open source. Contributors must disclose AI tool usage. Drive-by AI-generated pull requests result in permanent bans. Hashimoto published the reasoning in detail: every PR, regardless of quality, requires maintainer time to review. When the ratio of low-quality to genuine contributions tilts too far, the maintainers drown.

tldraw, the collaborative whiteboard library, went further. Creator Steve Ruiz closed all external pull requests — not just AI-generated ones. The reason was darkly ironic: Ruiz discovered that AI scripts he had built himself to manage issues were creating poor-quality issue descriptions, which external contributors then fed into their own AI tools to generate pull requests. The AI was feeding itself in a loop of degrading quality.

Godot Engine, the open source game engine, was drowning under 4,681 open pull requests. Maintainer Remi Verschelde described the situation as "draining and demoralizing." Each PR, even the obviously bad ones, required someone to look at it, assess it, and close it with an explanation. The sheer volume was burning out the people the project depended on.

Kate Holterhoff, an analyst at RedMonk, coined the term "AI Slopageddon" to describe this phenomenon: the systematic overwhelming of open source maintenance capacity by low-effort, AI-generated contributions that impose review costs without delivering value.


What Is Vibe Coding — and Why Did It Explode?

The term "vibe coding" was coined by Andrej Karpathy in February 2025. It describes a way of programming where you describe what you want in natural language, the AI generates the code, and you accept it without deeply understanding or reviewing it. You go with the vibe. The term stuck: Wikipedia now has an article on it, and it went mainstream within weeks.

We covered the productive side of AI-assisted programming in detail in Vibe Coding: 5 Ways AI Is Transforming How We Program. That article explores the tools, the productivity gains, and the workflow patterns that make AI coding genuinely powerful. This article is about what happens when vibe coding escapes the boundaries of a private project and hits the open source ecosystem.

The tools have accelerated dramatically since Karpathy's tweet. Cursor launched Cloud Agents in February 2026 — spinning up virtual machines that write code, run tests, and deliver merge-ready pull requests. A single user can run ten to twenty parallel agents simultaneously. The same AI agent architecture we describe as a productivity lever for enterprises becomes an instrument of uncontrolled, industrial-scale submission when deployed without oversight. Claude Code passed $2.5 billion in annualized revenue, more than doubling since January 2026. GitHub Copilot, OpenAI Codex, and a dozen other tools made it possible for anyone to generate and submit code at industrial scale.

The barrier to creating a pull request dropped to zero. The barrier to reviewing one did not change at all.


The Numbers: AI Code Is Measurably Worse

The intuition that AI-generated code is lower quality is now backed by hard data.

CodeRabbit's December 2025 study analyzed 470 pull requests — 320 AI-coauthored and 150 human-only — across production codebases. The findings were stark:

Metric: AI-coauthored vs. human-only
Major issues: 1.7x more common (CodeRabbit)
Critical issues: 1.4x more common (CodeRabbit)
Security vulnerabilities: 2.74x more common (CodeRabbit)
Performance issues (excessive I/O): 8x more common (CodeRabbit)
Logic errors (incorrect dependencies, flawed control flow): 75% more common (CodeRabbit)

These are not toy projects or student assignments. These are production pull requests in real codebases.

The METR study, published in mid-2025, added a disorienting psychological dimension: according to METR's findings, experienced open source developers using AI tools were 19% slower at completing tasks — despite predicting beforehand that they would be 24% faster, and believing afterward that they had been 20% faster.

Read that again. The developers thought they were faster. They felt faster. They were measurably slower. The perception-reality gap is not a rounding error. It is a swing of nearly 40 percentage points between subjective experience and objective measurement.
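The arithmetic behind that swing can be checked directly. This trivial sketch uses the METR percentages quoted above; the variable names are ours:

```python
# METR study figures for experienced open source developers using AI tools
predicted = 0.24    # expected beforehand to be 24% faster
perceived = 0.20    # believed afterward they had been 20% faster
measured = -0.19    # actually measured 19% slower

# Gap between subjective experience and objective measurement,
# in percentage points
gap_pp = (perceived - measured) * 100
print(f"perception-reality gap: {gap_pp:.0f} percentage points")
```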

⚠️ The quality gap is measurable

According to CodeRabbit's study, AI-generated code contains 1.7x more major bugs. Security vulnerabilities are 2.74x more common. Performance issues are 8x more frequent. And according to the METR study, experienced developers using AI tools are actually 19% slower — while believing they are 20% faster.


The Real Problem: Unsupervised Vibe Coding

Let me be clear about where I stand, because this matters for everything that follows: the problem is not AI coding. At IJONIS, we use Claude Code and Cursor every single day. We built GEO Lint, an open source linter with 92 rules, using AI tools extensively. We ship production software with AI assistance. We know from firsthand experience that AI-assisted development, done well, is transformative.

The problem is unsupervised, no-review, spray-and-pray vibe coding. The distinction matters enormously.

There is a fundamental asymmetry at the heart of this crisis: generating a pull request now takes seconds. Reviewing one still takes hours. The cost of creation collapsed. The cost of evaluation did not. Every unsupervised AI submission imposes a tax on maintainers who never agreed to pay it.

GitHub's own blog called this the "Eternal September" of open source — referencing September 1993, when AOL opened Usenet to millions of new users who overwhelmed the existing community's norms and capacity. The parallel is precise: the norms of open source contribution — read the docs, understand the codebase, test your changes, explain your reasoning — are being bypassed by people who let AI do the work without applying any of that diligence.

The Tailwind CSS Collapse

The Tailwind CSS case study illustrates a different dimension of the same crisis. According to InfoQ's reporting, npm downloads of Tailwind kept climbing. But documentation traffic dropped 40%. Revenue dropped 80%. Adam Wathan, Tailwind's creator, laid off 75% of his engineering team.

What happened? AI tools had absorbed Tailwind's documentation and were generating Tailwind code directly, without users ever visiting the docs, reading the guides, or engaging with the project's community. The value was being extracted without any of the attention, engagement, or revenue flowing back to the maintainers who created it.

When someone submitted a pull request to add /llms.txt — a file specifically designed to make content more accessible to AI systems — Wathan rejected it. The reasoning was clear: making Tailwind even more machine-readable would accelerate the extraction of value without returning anything to the project. This is the same pattern we described in SaaS Is Dead — At Least as a Competitive Advantage: when the technology becomes a commodity, the business model collapses.

An arXiv paper titled "Vibe Coding Kills Open Source" formalized this as a negative feedback loop: AI intercepts user attention that used to flow from users to maintainers. No documentation visits. No bug reports. No community engagement. No funding. The project decays. The AI trains on the decaying project. The output gets worse. The loop tightens.


For Businesses: Your Software Supply Chain Is Affected

If you are running a business that depends on software — and in 2026, that is every business — this is not an abstract open source community problem. It is a supply chain risk.

Every enterprise application depends on dozens, often hundreds, of open source packages. When maintainers burn out, close contributions, or abandon projects entirely, the question becomes immediate and concrete: who patches your security vulnerabilities?

Consider the projects mentioned in this article. cURL is embedded in virtually every internet-connected device. Tailwind CSS is used by Claude.ai, Vercel, Shopify, OpenAI, Cursor, and GitLab. These are not niche libraries. They are infrastructure.

The Log4Shell crisis of 2021 showed what happens when a critical open source dependency is undermaintained: a catastrophic vulnerability in a logging library used by millions of applications. Now imagine that dynamic at scale, across dozens of projects simultaneously, because the maintainers who would catch and fix these vulnerabilities are instead spending their time triaging AI-generated garbage.

ℹ️ Supply chain risk checklist

Ask these questions about your software dependencies today:

  • Which open source packages does your product depend on?
  • Are those projects actively maintained, or showing signs of maintainer burnout?
  • Do those maintainers have sustainable funding?
  • What is your plan if a critical dependency goes unmaintained?
  • Are you contributing back — financially or through quality contributions — to the projects you depend on?

AI-Assisted vs. AI-Generated: The Critical Distinction

The conversation around AI coding suffers from a collapsed distinction. "AI-assisted" and "AI-generated" get treated as synonyms. They are not. The difference between them is the difference between a power tool and a loaded weapon left on a playground.

AI-assisted development means the human understands the code, reviews it critically, tests it thoroughly, knows the codebase, and takes responsibility for what gets submitted. The AI is a collaborator. The human is the quality gate.

AI-generated development means prompt in, code out, submit without understanding. No review. No testing. No knowledge of the codebase. No sense of responsibility. The AI is the author and the auditor and the decision-maker, and the human is a relay between the AI and the submit button.

At IJONIS, we practice AI-assisted development daily. We have written about this extensively — in our posts on multi-agent coding teams, and on the question of whether it matters if a human or machine wrote the code. We also maintain GEO Lint as an open source project, so we experience the maintainer side of this equation firsthand.

The future is not about using AI less. It is about using AI better — with context, domain knowledge, and human oversight.


What Companies Should Do Now

The AI Slopageddon is not going to reverse itself. The tools are getting cheaper, faster, and more accessible. The volume of AI-generated contributions will increase, not decrease. Companies that depend on open source — which is all of them — need to act.

1. Audit Your Dependencies

Map every open source package your software relies on. Identify the critical ones: those where a security vulnerability or abandonment would directly impact your product. Check whether those projects are showing signs of maintainer burnout — declining response times, growing issue backlogs, hostile or exhausted communication from maintainers.
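A first pass at the staleness check can be automated. The sketch below is illustrative: the package names and dates are made up, and in practice you would pull the latest release dates from the npm or PyPI registry APIs rather than hard-coding them:

```python
from datetime import date, timedelta

def flag_stale_dependencies(last_release, today, max_age_days=365):
    """Return dependencies whose most recent release is older than max_age_days.

    `last_release` maps package name -> date of the latest published release.
    Staleness is only one burnout signal; combine it with issue-backlog growth
    and maintainer response times for a fuller picture.
    """
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, released in last_release.items() if released < cutoff)

# Hypothetical example data, not real release dates
deps = {
    "left-pad-ish": date(2023, 1, 10),
    "well-maintained-lib": date(2026, 1, 5),
}
print(flag_stale_dependencies(deps, today=date(2026, 2, 1)))  # ['left-pad-ish']
```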

2. Fund the Maintainers

Remi Verschelde's answer to the Godot crisis was unambiguous: "More funding so we can pay more maintainers is the only viable solution." If your business generates revenue from software built on open source foundations, contributing financially to those foundations is not philanthropy. It is supply chain security.

3. Establish AI Code Review Standards

Never submit AI-generated code without human review. Never accept AI-generated contributions without testing. Make disclosure of AI tool usage a standard part of your contribution guidelines — not as a stigma, but as a quality signal. Code that was reviewed by a knowledgeable human is worth more than code that was not, regardless of who or what wrote the first draft.
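A disclosure requirement like this can be enforced mechanically in CI. The sketch below assumes a hypothetical "AI tools used:" line in the pull request description; the marker text and wording are placeholders for whatever your own contribution guidelines define:

```python
import re

# Hypothetical disclosure marker; adapt to your contribution guidelines
DISCLOSURE_PATTERN = re.compile(r"^AI tools used:\s*(.+)$", re.MULTILINE | re.IGNORECASE)

def check_ai_disclosure(pr_body: str) -> str:
    """Return the declared AI tool usage, or raise if the PR omits the disclosure."""
    match = DISCLOSURE_PATTERN.search(pr_body)
    if match is None:
        raise ValueError("PR description is missing the 'AI tools used:' disclosure line")
    return match.group(1).strip()

body = """Fixes the retry logic in the HTTP client.

AI tools used: Claude Code (first draft), reviewed and tested by hand
"""
print(check_ai_disclosure(body))
```

In CI, a missing disclosure would fail the check and block the merge until the author adds the line, treating disclosure as a quality signal rather than a stigma.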

4. Distinguish AI-Assisted from AI-Generated

In your internal development processes, make the distinction explicit. AI-assisted work — where the developer understands, reviews, and takes responsibility for the output — should be encouraged. AI-generated work — where code is produced and submitted without understanding — should be flagged, reviewed with extra scrutiny, or rejected outright.

5. Invest in Quality AI Workflows

The answer is not to stop using AI tools. The answer is to use them well. Our Trust Spectrum for agent safety describes how companies can define security boundaries for autonomous AI agents — exactly the kind of control that unsupervised vibe coding lacks. That means context engineering: providing AI tools with architecture documentation, coding standards, test requirements, and domain context before they generate code. It means CLAUDE.md-driven development, not one-shot prompting. It means treating AI output as a first draft, not a finished product.
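What such a context file contains varies by project. A minimal sketch might look like the following, where the paths, commands, and section names are hypothetical examples, not a standard:

```markdown
# CLAUDE.md — project context for AI coding tools

## Architecture
- Monorepo: `api/` (backend service), `web/` (TypeScript/React front end)
- All database access goes through `api/internal/store`; never query directly

## Coding standards
- Follow the existing linter configuration; do not disable rules to pass checks
- New code requires unit tests; run `make test` before proposing changes

## Review expectations
- Treat generated code as a first draft: a human reviews every diff
- Disclose AI involvement in the pull request description
```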

If you are building AI workflows for your development team, we can help.


What Is Vibe Coding?

Vibe coding is a term coined by AI researcher Andrej Karpathy in February 2025. It describes a programming approach where the developer describes what they want in natural language, the AI generates the code, and the developer accepts it without fully understanding or reviewing the output. The term reflects the informal, intuitive nature of the interaction — you describe the "vibe" and the AI handles the implementation. For a deep dive into the productive applications of vibe coding, see our article Vibe Coding: 5 Ways AI Is Transforming How We Program.

What Does AI Slopageddon Mean?

AI Slopageddon is a term coined by RedMonk analyst Kate Holterhoff to describe the flood of low-quality, AI-generated contributions overwhelming open source projects. The word combines "AI slop" — a colloquial term for low-effort AI output — with "Armageddon" to convey the scale of the crisis. Major projects including cURL, Ghostty, Godot Engine, and tldraw have responded with bans, disclosure requirements, and outright closure of external contributions.

Is AI-Assisted Coding Bad for Open Source?

No. Unsupervised AI coding is. There is a critical difference between AI-assisted development — where a knowledgeable human reviews, tests, and takes responsibility for AI-generated output — and purely AI-generated contributions submitted without understanding or review. Quality AI-assisted development can improve open source. CodeRabbit's findings about 1.7x more bugs apply specifically to code that lacks adequate human review, not to all AI-involved development.

How Do I Protect My Company from Open Source Risks Caused by AI Code?

Start with a dependency audit: identify which open source packages your product relies on and assess whether those projects are sustainably maintained. Fund the maintainers of critical dependencies — this is supply chain security, not charity. Establish internal standards that require human review of all AI-generated code before submission. And invest in quality AI workflows — context engineering, thorough testing, and clear distinction between AI-assisted and AI-generated work in your development processes.


Navigating AI-assisted development without compromising code quality? At IJONIS, we use AI tools daily to ship production software — and we maintain open source projects, so we understand both sides of this equation. Talk to us about building responsible AI development workflows, or explore our deep dives on vibe coding and multi-agent coding teams.
