geo-lint: The Open-Source Linter for AI Search Visibility

Jamin Mahmood-Wiebe


The problem was simple to state and irritating to ignore: everyone was talking about Generative Engine Optimization (GEO), but nobody had built a tool that would tell you, unambiguously, whether your content was actually ready for AI search.

SEO has had linters for years. You run a tool, it flags broken links, missing meta descriptions, heading hierarchy violations. You fix them. You re-run. You ship. That feedback loop is fast, deterministic, and automatable.

GEO had nothing like it.

Why I Built This

When I started building the GEO content strategy at IJONIS — a Hamburg-based AI agency — every GEO audit was manual. The checklist was always the same: check if there is a FAQ section, verify that comparison tables exist, assess citation density. Work that should take seconds was taking hours — and producing inconsistent results depending on who was doing the audit. So I built the linter I wanted to have.

What geo-lint Does

@ijonis/geo-lint is a CLI tool that validates content files against 92 rules across five categories. The breakdown below shows what each category covers and why it matters for AI search visibility:

The 92 rules break down as follows:

| Category | Rules | Focus area | Example |
| --- | --- | --- | --- |
| GEO | 35 | AI citation readiness | Lead sentence, FAQ presence, direct answer patterns |
| SEO | 32 | Traditional search signals | Title length, heading hierarchy, broken links |
| Content quality | 14 | Readability & structure | Passive voice density, transition words |
| Technical | 8 | Crawler & schema signals | llms.txt presence, canonical validity |
| i18n | 3 | Locale completeness | Translation key pairing |

  • 35 GEO rules — AI citation readiness: lead sentence structure, FAQ presence, question headings, citation density, direct answer patterns, entity disambiguation, structured answer blocks
  • 32 SEO rules — title length, description length, heading hierarchy, duplicate detection, broken internal links, image alt text, word count
  • 14 content quality rules — readability, passive voice density, transition structure, paragraph length variance
  • 8 technical rules — schema markup hints, crawler access signals, llms.txt presence, canonical link validity
  • 3 i18n rules — locale completeness, translation key pairing, hreflang consistency
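The article notes further down that the rule format is simple TypeScript. As a rough illustration of how such a rule could be shaped — the `Rule` and `Violation` interfaces, the rule ID, and the check signature below are assumptions for this sketch, not the actual @ijonis/geo-lint API:

```typescript
// Illustrative sketch of a lint rule; not geo-lint's real API.

interface Violation {
  ruleId: string;
  severity: "error" | "warning";
  line: number;
  message: string;
}

interface Rule {
  id: string;
  category: "geo" | "seo" | "quality" | "technical" | "i18n";
  check: (lines: string[]) => Violation[];
}

// Example: flag documents with no question-style heading, a pattern
// the GEO category cares about for FAQ readiness.
const questionHeading: Rule = {
  id: "geo/question-heading",
  category: "geo",
  check: (lines) => {
    const hasQuestion = lines.some(
      (l) => l.startsWith("#") && l.trimEnd().endsWith("?")
    );
    return hasQuestion
      ? []
      : [{
          ruleId: "geo/question-heading",
          severity: "warning",
          line: 1,
          message: "No question-style heading found; add an FAQ-style heading.",
        }];
  },
};

const doc = ["# Title", "Some intro.", "## What is GEO?", "An answer."];
console.log(questionHeading.check(doc).length); // 0 — the doc has a question heading
```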
# Install
npm install -g @ijonis/geo-lint

# Human-readable output
npx geo-lint

# Machine-readable JSON (for AI agents)
npx geo-lint --format=json

# List all 92 rules
npx geo-lint --rules

Zero peer dependencies. Node >= 18. MIT licensed.

92 validation rules · 0 peer dependencies · 5 rule categories (GEO, SEO, content quality, technical, i18n)

The Agentic Loop

The human-readable output is useful for spot-checking. The JSON output is where things get interesting.

npx geo-lint --format=json

The output is a structured array of violations: rule ID, severity (error or warning), file path, line number, and a plain-English description of what is wrong and how to fix it.
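A single violation entry could look roughly like this — the field names are inferred from the description above, not the tool's exact output schema:

```json
[
  {
    "ruleId": "geo/direct-answer",
    "severity": "error",
    "file": "content/blog/geo-guide.mdx",
    "line": 12,
    "message": "Section does not open with a direct answer; state the answer in the first two sentences."
  }
]
```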

An AI agent — Claude Code, Cursor, whatever you are using — can read that JSON, fix every violation, and re-run the linter. The loop continues until violations hit zero.

geo-lint → violations.json → agent fixes → geo-lint → violations.json → ...

This is the workflow we use internally. Content goes into the pipeline. The agent lints, fixes, re-lints. A human reviews the diff and the final zero-violation report. No manual audit. No inconsistency between reviewers.
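The lint → fix → re-lint loop can be sketched as follows, with the linter and the fixer stubbed out. In a real pipeline the lint step would shell out to `npx geo-lint --format=json` and the fix step would be an AI agent; both are simulated here:

```typescript
// Sketch of the agentic loop described above; the lint and fix
// functions are stand-ins, not real geo-lint or agent calls.

interface Violation { ruleId: string; file: string; message: string }

type Lint = () => Violation[];
type Fix = (violations: Violation[]) => void;

function agenticLoop(lint: Lint, fix: Fix, maxRounds = 10): number {
  let rounds = 0;
  let violations = lint();
  while (violations.length > 0 && rounds < maxRounds) {
    fix(violations);     // agent applies fixes file by file
    violations = lint(); // re-run the linter on the result
    rounds++;
  }
  return violations.length; // 0 means the content is clean
}

// Simulated run: one violation is "fixed" per round until none remain.
let remaining = ["geo/faq-missing", "seo/title-length"];
const lint: Lint = () =>
  remaining.map((ruleId) => ({ ruleId, file: "post.mdx", message: "..." }));
const fix: Fix = () => { remaining = remaining.slice(1); };

console.log(agenticLoop(lint, fix)); // prints 0 once the loop converges
```

The `maxRounds` guard matters in practice: an agent that cannot fix a violation would otherwise loop forever.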

💡 Why JSON output matters

Human-readable terminal output is useful for developers. Machine-readable JSON is what makes the tool composable with any agent framework. The two output modes are not redundant — they serve different audiences in the same workflow.

Why GEO Rules Are Different from SEO Rules

"Adding statistics, citations, and quotations to text led to the most significant boosts in generative engine visibility." — Aggarwal et al., "GEO: Generative Engine Optimization" (Princeton / Georgia Tech, 2023)

SEO rules are mostly structural: does the title exist, is it the right length, is the heading hierarchy intact. These are easy to validate because they map to discrete, measurable properties.

GEO rules are about how your content answers questions — which is harder to validate deterministically but not impossible. The key insight from the Princeton GEO research is that specific structural patterns correlate with higher citation rates: direct answer in the first two sentences of a section, presence of comparison tables, FAQ sections with schema-appropriate structure, citation of external sources with verifiable claims.

These patterns are measurable. They are not perfect proxies for citation rate — nothing is — but they are consistent, automatable, and they reflect what the research actually shows about how AI retrieval systems select content.

How We Use It at IJONIS

Every blog post and service page on this site runs through geo-lint before publishing. The workflow has four steps:

  1. Write — content created by a human or agent, saved as MDX
  2. Lint — npx geo-lint --format=json runs; violations written to a JSON file
  3. Fix — the agent reads the JSON, applies fixes file by file, re-runs the linter
  4. Review — a human reviews the diff and the zero-violation final report before publish

The linter is also part of the build step:

bun run build  # geo-lint runs automatically; errors block the deploy

Warnings are reported but do not block; errors do. This enforces a quality baseline on every content push without requiring a human to run a manual checklist.
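A minimal sketch of that gating logic, with geo-lint stubbed by a shell function so only the warning-vs-error decision is shown. The summary JSON shape is an assumption; in a real pipeline, replace `run_lint` with a call to `npx geo-lint --format=json`:

```shell
#!/usr/bin/env sh
# Hypothetical CI gate: errors block the deploy, warnings do not.

run_lint() {
  # Stub: pretend the linter found 0 errors and 2 warnings.
  echo '{"errors": 0, "warnings": 2}'
}

report=$(run_lint)
errors=$(echo "$report" | sed -n 's/.*"errors": \([0-9]*\).*/\1/p')

if [ "$errors" -gt 0 ]; then
  echo "geo-lint: $errors error(s) — blocking deploy"
  exit 1
fi
echo "geo-lint: clean (warnings do not block)"
```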

This is what agentic SEO looks like in practice: not replacing human judgment, but removing the deterministic parts from the human's plate entirely.

What the Tool Does Not Do

geo-lint will not tell you whether you will rank. It will not tell you whether ChatGPT will cite you. It will not replace a full content strategy. However, it will remove the "I think this is probably fine" ambiguity that slows down every content team working on GEO.

The tool replaces subjective judgment with deterministic checks. Content either meets the structural criteria for AI citation readiness or it does not. The FAQ section either exists and uses the right heading pattern or it does not. The intro paragraph is either under 150 words or it is not.
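One of the checks just mentioned — the intro paragraph being under 150 words — shows how mechanical these criteria are. The 150-word threshold comes from the text above; the function itself is an illustrative sketch, not geo-lint's implementation:

```typescript
// Deterministic check: is the intro paragraph under 150 words?
// Illustrative sketch, not geo-lint's actual rule code.

function introUnder150Words(markdown: string): boolean {
  // Treat the first non-heading, non-empty block as the intro paragraph.
  const blocks = markdown
    .split(/\n\s*\n/)
    .map((b) => b.trim())
    .filter((b) => b.length > 0 && !b.startsWith("#"));
  const intro = blocks[0] ?? "";
  const wordCount = intro.split(/\s+/).filter(Boolean).length;
  return wordCount < 150;
}

const sample =
  "# Title\n\nA short, direct intro that answers the question immediately.\n\nMore body text.";
console.log(introUnder150Words(sample)); // true — the intro is 9 words
```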

The bottom line: that determinism is the entire point of the tool.

Try It and Contribute

The tool is open source, MIT licensed, and available now.

If you have rules you want added — especially domain-specific GEO patterns for particular industries or content types — open an issue or a PR. The rule format is simple TypeScript; adding a new rule takes about 15 minutes once you understand the structure.

The goal is for this to become the canonical tool for content validation before AI search. If you are doing GEO work, this should be in your pipeline.


Frequently Asked Questions About geo-lint

What is a GEO linter?

A GEO linter is a tool that validates content against a set of rules designed to improve visibility in AI-powered search engines (ChatGPT, Perplexity, Google AI Overviews). Unlike SEO linters, which focus on structural and technical signals, a GEO linter checks whether content is structured to be cited by large language models: direct answer patterns, FAQ presence, comparison tables, citation density, and entity clarity.

How is geo-lint different from existing SEO tools?

Most SEO tools focus on rankings in traditional search engines. geo-lint is the first tool specifically designed around GEO validation — the subset of content signals that research links to citation selection by AI retrieval systems. It also includes a JSON output mode designed explicitly for AI agent workflows, where the agent reads violations, applies fixes, and re-runs the linter in an automated loop.

Does geo-lint work with any content format?

The current version validates MDX and Markdown files. Support for HTML and plain text is on the roadmap. The rule engine is format-agnostic, which means adding new format parsers is straightforward — it is also a good first contribution opportunity for anyone wanting to extend the tool.
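A format-agnostic rule engine usually means each format parser normalizes its input into one shared shape that the rules consume. The interface names below are assumptions for this sketch, not geo-lint's actual API:

```typescript
// Illustrative parser boundary: every format produces the same
// normalized document shape for the rule engine. Names are assumed.

interface ParsedDoc {
  headings: { level: number; text: string }[];
  paragraphs: string[];
}

type Parser = (source: string) => ParsedDoc;

const markdownParser: Parser = (source) => {
  const lines = source.split("\n");
  const headings = lines
    .filter((l) => /^#{1,6}\s/.test(l))
    .map((l) => ({
      level: (l.match(/^#+/) as RegExpMatchArray)[0].length,
      text: l.replace(/^#+\s*/, ""),
    }));
  const paragraphs = source
    .split(/\n\s*\n/)
    .map((b) => b.trim())
    .filter((b) => b.length > 0 && !b.startsWith("#"));
  return { headings, paragraphs };
};

const parsed = markdownParser("# Hello\n\nFirst paragraph.\n\n## What is GEO?\n\nAnswer.");
console.log(parsed.headings.length, parsed.paragraphs.length); // 2 2
```

Under this design, an HTML parser would only need to emit the same `ParsedDoc` shape; no rule would change.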

Can I integrate geo-lint into my CI/CD pipeline?

Yes. Run npx geo-lint --format=json and parse the exit code: 0 means clean, 1 means violations. Pipe the JSON output to any downstream system — Slack alerts, PR comments, agent pipelines. The JSON format makes it straightforward to build custom integrations that read specific rule categories or severity levels.

Is geo-lint only useful for AI-generated content?

No. geo-lint validates the structure of content regardless of how it was written. Human-written content often fails GEO rules just as much as AI-generated content — sometimes more, because human writers optimize for narrative flow rather than citation readiness.

Where can I report bugs or request new rules?

Open an issue on GitHub. For new rule requests, describe the pattern you want validated, the research or rationale behind it, and an example of passing and failing content.
