AI-powered test generation

One test engine, for people, CI, and coding agents.

TestGen is a multi-language tool for inspecting code, generating tests, and validating coverage, built to fit cleanly into terminal workflows, automation, and agent tooling. Start with analysis, preview dry-run artifacts, then write files only when you are ready.

$ testgen generate --file ./src/utils.py --type=unit --dry-run --emit-patch --output-format json
results: per-file generation status
artifacts: generated test path + code
patches: structured write operations for review-first flows
validate: write files later with validation when ready
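As a sketch of what a reviewer or agent might do with that contract: the top-level keys below mirror the labels above (results, artifacts, patches), but the exact field names inside them are assumptions, not TestGen's documented schema — inspect the real `--output-format json` output for your version.

```python
import json

# Hypothetical dry-run payload. The top-level keys mirror the labels
# above, but the nested field names are assumptions -- check the real
# --output-format json output for your TestGen version.
payload = json.loads("""
{
  "results":   [{"file": "src/utils.py", "status": "generated"}],
  "artifacts": [{"path": "tests/test_utils.py", "code": "def test_utils(): ..."}],
  "patches":   [{"op": "write", "path": "tests/test_utils.py"}]
}
""")

# Review what would be written before any file is touched.
planned_writes = [p["path"] for p in payload["patches"]]
print(planned_writes)  # → ['tests/test_utils.py']
```

Because nothing is written during a dry run, this inspection step can gate the later `--validate` pass in a script or an agent loop.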

Choose the surface that fits the job.

TestGen stays high-level and predictable: one engine underneath, multiple ways to use it.

🖥️

TUI

Use testgen tui when you want a guided, keyboard-friendly flow.

⌨️

CLI

Use analyze, generate, and validate in scripts, local workflows, and CI.

🤖

Agent wrappers

Let coding agents inspect dry-run JSON and patch artifacts before writing tests.

🔌

MCP

Run testgen mcp when your client prefers tool calls over shell execution.

One shared model behind every integration.

Keep wrappers thin. Keep orchestration in TestGen. Use the same review-first JSON contract across tools.

🧠

Codex / oh-my-codex

Use the repo-local skill and keep TestGen as the source of truth for scanning, generation, and validation.

✍️

Claude Code

Ship the repo-local command so Claude can review structured dry-run artifacts before materializing tests.

⚙️

OpenCode

Use the command wrapper or connect through MCP when your setup prefers tool calling.

🔌

MCP clients

Expose testgen_generate, testgen_analyze, and testgen_validate over stdio.
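Over MCP, those tools are invoked as JSON-RPC 2.0 tool calls. A minimal request sketch: the tool name comes from the list above, but the argument names (`file`, `type`, `dry_run`) are assumptions that mirror the CLI flags, not the server's published input schema.

```python
import json

# JSON-RPC 2.0 "tools/call" request, as an MCP client would send it
# over stdio. The tool name comes from the list above; the argument
# names are assumptions mirroring the CLI flags.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "testgen_generate",
        "arguments": {"file": "./src/utils.py", "type": "unit", "dry_run": True},
    },
}
line = json.dumps(request)  # stdio transport: one JSON message per line
print(line)
```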

Multi-language by default.

TestGen currently supports JavaScript/TypeScript, Python, Go, Rust, and Java.

JavaScript TypeScript

JS / TS

Jest, Vitest

Python

Python

pytest

Go

Go

testing, testify

Rust

Rust

cargo test

Java

Java

JUnit

Review-first workflow

Inspect first.
Write later.

The clearest workflow is also the safest one. Analyze the codebase, generate dry-run output, inspect the structured results, then write files with validation only when the output looks right.

1. Run testgen analyze to inspect scope and estimate cost.
2. Generate dry-run artifacts with --dry-run --emit-patch --output-format json.
3. Re-run with --validate when you want files written.
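The three steps above can be scripted. This sketch only assembles the command lines (flags are taken from the examples on this page) and executes them only if TestGen is actually on PATH:

```python
import shutil
import subprocess

target = "./src/utils.py"

# The review-first sequence, using flags shown elsewhere on this page.
commands = [
    f"testgen analyze --file {target}",
    f"testgen generate --file {target} --type=unit --dry-run "
    "--emit-patch --output-format json",
    f"testgen generate --file {target} --type=unit --validate",
]

# Execute only when TestGen is installed; otherwise just preview.
if shutil.which("testgen"):
    for cmd in commands:
        subprocess.run(cmd.split(), check=True)
else:
    print("\n".join(commands))
```

In practice you would pause between steps 2 and 3 to review the dry-run artifacts rather than running the whole sequence unattended.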
Install TestGen
curl -fsSL https://raw.githubusercontent.com/princepal9120/testgen/main/install.sh | bash
Install agent wrappers into another repo
./scripts/install-agent-integrations.sh /path/to/repo copy
Run the MCP server
testgen mcp

Know where to go next.

The website stays high level. The docs explain which source to read for which job.

🧭

Docs overview

Use docs.html as the high-level guide to installation, workflow, and doc ownership.

📘

Current docs

The current sources of truth live in the repo README, CLI reference, integrations docs, and architecture docs.

🗂️

Historical docs

Older PRD and tech-spec documents are preserved for context, not for day-to-day implementation truth.

🛠️

Contributors

Contributors get a cleaner split between onboarding, references, architecture, integrations, and maintenance docs.