
Your AI tools. Your tests. One workflow.

TestPlan doesn't bundle its own AI or charge per token. It plugs into the tools you already use via MCP (Model Context Protocol) — so you get AI-powered testing without paying twice.

TestPlan is the system of record

Features, test cases, runs, releases, issues — all tracked in one place with full audit history.

AI is the assistant

Claude Code, Copilot, Cursor — whatever you use. They connect via MCP and manage your test suite.

No vendor lock-in

Switch AI providers whenever you want. TestPlan doesn't care which model you use — it exposes the same 80+ MCP tools to all of them.
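Under the hood, provider-agnostic just means every client speaks the same protocol. Here's a rough sketch, using the official MCP TypeScript SDK, of what any client does when it connects (the URL below is a placeholder, not TestPlan's real endpoint):

Example sketch (TypeScript)
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint — use the URL from your TestPlan settings.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.testplan.example/mcp")
);

// The client's identity doesn't matter: Claude Code, Copilot, Cursor, or
// your own script all get the same tool list back.
const client = new Client({ name: "any-mcp-client", version: "1.0.0" });
await client.connect(transport);

const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));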

Why we don't bundle AI

How other tools do it

  • Bundle a specific AI model into the product
  • Charge per token on top of your subscription
  • Build bespoke integrations that break at 2am
  • Lock you into their model and their pricing
  • Maintain auth flows, sync processes, webhooks

How TestPlan does it

  • Expose the full API as an MCP server (see the sketch below)
  • Zero AI costs — use the tools you already pay for
  • One protocol, any AI tool, always up to date
  • Switch providers anytime, no migration needed
  • No maintenance burden, no sync to break
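Here's that sketch: the general shape of an API exposed as an MCP server, using the official MCP TypeScript SDK. The tool name, parameter schema, and REST endpoint are illustrative guesses, not TestPlan's actual implementation:

Example sketch (TypeScript)
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "testplan", version: "1.0.0" });

// Illustrative tool: each MCP tool is a thin, typed wrapper over the API.
server.tool(
  "create_test_case",
  {
    featureId: z.string(),
    title: z.string(),
    steps: z.array(z.string()),
    expectedResult: z.string(),
  },
  async (args) => {
    // Hypothetical REST endpoint, for illustration only.
    const res = await fetch("https://api.testplan.example/test-cases", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(args),
    });
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

await server.connect(new StdioServerTransport());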

From code to tested release in 5 steps

Here's how teams get the most out of TestPlan with AI. No extra subscriptions, no extra cost.

1

Bootstrap your test suite from code

One-time setup

Point Claude Code at your codebase with the TestPlan MCP connected. Ask it to analyse your code and generate features and test cases directly into TestPlan.

Example prompt
Analyse this codebase. For each user-facing feature, create a feature in TestPlan with a description, then create test cases with clear steps and expected results. Focus on what a QA tester would actually test — ignore infrastructure.

From zero to a full test suite in minutes instead of days. Review what it created, tweak anything that's off, and you've got a baseline.
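Behind that prompt, the AI is just making MCP tool calls. A minimal sketch of the kind of calls involved (tool names and argument shapes are illustrative; check the tool list your client shows you):

Example sketch (TypeScript)
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "bootstrap-script", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://mcp.testplan.example/mcp")) // placeholder URL
);

// Illustrative tool names and fields.
await client.callTool({
  name: "create_feature",
  arguments: { name: "Checkout", description: "Cart review, payment, confirmation." },
});

await client.callTool({
  name: "create_test_case",
  arguments: {
    feature: "Checkout",
    title: "Apply a valid discount code",
    steps: ["Add an item to the cart", "Enter code SAVE10 at checkout", "Pay"],
    expectedResult: "Total is reduced by 10% before payment.",
  },
});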

2

Before each release, analyse what changed

Each release

Pull your completed tickets from your task tracker. Ask Claude to compare what changed against existing test coverage and flag anything that needs attention in the Plan Inbox.

Example prompt
Look at the completed tasks for the v2.4 milestone. For each one, check if there are existing test cases in TestPlan that cover the change. If not, create an inbox item describing what changed and suggesting test cases.

A human still reviews the inbox and decides what becomes a real test case. The AI does the legwork of figuring out what changed and what's not covered.
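Mechanically, each flagged gap is a single tool call. Reusing the connection from the step 1 sketch (the inbox tool name and fields here are hypothetical):

Example sketch (TypeScript)
// `client` is connected as in the step 1 sketch above.
await client.callTool({
  name: "create_inbox_item", // hypothetical tool name
  arguments: {
    title: "TASK-412: coupon validation moved server-side",
    description: "No existing test case covers rejection of expired coupons.",
    suggestedTestCase: "Apply an expired coupon at checkout; expect a clear error.",
  },
});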

3

Run your regression tests

Each release

Create a release in TestPlan, start a test run, and work through it. This is the bit that's deliberately not AI — a human clicks through the app, follows the steps, and records what passed and what didn't. That's the whole point.

4

Track pre- and post-release issues

Each release

Log issues found during testing as pre-release issues. After release, anything reported by users goes in as post-release. Over time, this data tells you where your testing is strong and where it's missing things.

Example prompt
Look at the post-release issues in TestPlan for the last 3 releases. Are there patterns? Suggest new test cases that would have caught these issues earlier and create them as inbox items.

5

Continuously improve

Ongoing

Periodically ask Claude to review your test suite against the codebase. Features get added, code gets refactored, tests go stale. AI is good at spotting the gaps.

Example prompt
Review the test cases in TestPlan for the "Payments" feature. Compare them against the current codebase. Are any tests outdated? Are there new code paths that aren't covered? Send findings to the inbox.

80+ tools at your AI's fingertips

Everything your AI tool needs to manage your test suite, exposed via a single MCP server.

Feature Management

Create features, set categories, track status, view coverage stats, publish to your help centre.

8 tools

Test Cases

Create test cases with steps, expected results, priorities. Update, list, count, and filter.

6 tools

Test Runs

Start runs, record results, pause/resume, get reports, rerun failures, view team-wide stats.

10 tools

Releases

Create releases, manage the Draft → Testing → Released workflow, track issue stats.

7 tools

Issue Tracking

Create pre/post-release issues, resolve, reopen, and analyse coverage gaps.

9 tools

Plan Inbox

Create inbox items for changes that need testing. Track from flagged to addressed.

8 tools

Documentation

Create help centre pages, manage content, publish/unpublish, search documentation.

9 tools

Dashboard & Search

Get team-wide summaries, instructions, and an overview of your entire test suite.

2 tools

Other things you can do

All of these work through Claude Code (or any MCP client) with the TestPlan MCP — no extra tools or costs.

Generate documentation

Ask Claude to create help centre pages from your features and test cases. It already knows what the product does.

PR-triggered analysis

Add a hook that asks Claude to check if a merged PR affects any existing test cases. If it does, flag it in the inbox.
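One plausible wiring is a post-merge CI step that shells out to Claude Code's non-interactive mode. The script and prompt below are illustrative; adapt them to your CI:

Example sketch (TypeScript)
import { execFileSync } from "node:child_process";

// Illustrative post-merge step: pass the PR's changed files as arguments.
const changedFiles = process.argv.slice(2).join("\n");

const prompt =
  `A PR just merged. Changed files:\n${changedFiles}\n` +
  "Check TestPlan for test cases covering these areas. " +
  "If any look affected, create an inbox item describing what changed.";

// `claude -p` runs Claude Code non-interactively (TestPlan MCP must be connected).
console.log(execFileSync("claude", ["-p", prompt], { encoding: "utf8" }));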

Test run analysis

After a test run, ask Claude to look at the failures and suggest whether they're likely bugs, test issues, or environment problems.

Exploratory test suggestions

Starting an exploratory session? Ask Claude what areas of the app are under-tested based on your TestPlan data.

Cross-tool sync

Connect multiple MCPs and let Claude coordinate between them. Close a task? Claude updates test coverage. Merge a PR? Claude checks if tests need updating.

Release readiness check

Before shipping, ask Claude to summarise the release: what's been tested, what hasn't, what issues are outstanding, and whether it looks ready.

Ready to connect your AI tools?

Start free. Connect MCP in under 5 minutes. No AI tokens to buy.

Start Free