Your AI tools. Your tests. One workflow.
TestPlan doesn't bundle its own AI or charge per token. It plugs into the tools you already use via MCP (Model Context Protocol) — so you get AI-powered testing without paying twice.
TestPlan is the system of record
Features, test cases, runs, releases, issues — all tracked in one place with full audit history.
AI is the assistant
Claude Code, Copilot, Cursor — whatever you use. They connect via MCP and manage your test suite.
No vendor lock-in
Switch AI providers whenever you want. TestPlan doesn't care which model you use — it exposes the same 80+ MCP tools to all of them.
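That provider-agnosticism comes from MCP itself: clients like Claude Code and Cursor read a shared server definition, typically a few lines of JSON. A minimal sketch, assuming a hypothetical `@testplan/mcp-server` package and `TESTPLAN_API_KEY` variable (the real package name and auth variable are whatever TestPlan's setup docs specify):

```json
{
  "mcpServers": {
    "testplan": {
      "command": "npx",
      "args": ["-y", "@testplan/mcp-server"],
      "env": { "TESTPLAN_API_KEY": "<your-api-key>" }
    }
  }
}
```

Point any MCP-capable client at the same definition and each one sees the identical tool set, so swapping providers is a config change, not a migration.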
Why we don't bundle AI
How other tools do it
- Bundle a specific AI model into the product
- Charge per token on top of your subscription
- Build bespoke integrations that break at 2am
- Lock you into their model and their pricing
- Maintain auth flows, sync processes, webhooks
How TestPlan does it
- Expose the full API as an MCP server
- Zero AI costs — use the tools you already pay for
- One protocol, any AI tool, always up to date
- Switch providers anytime, no migration needed
- No maintenance burden, no sync to break
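The "one protocol" point rests on MCP's discovery mechanism: every MCP client asks a server what it can do with the same JSON-RPC 2.0 `tools/list` request, so there is nothing bespoke to maintain per provider. A sketch of what comes back (the `create_test_case` tool shown is illustrative, not TestPlan's actual identifier):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "create_test_case",
        "description": "Create a test case with steps and expected results",
        "inputSchema": {
          "type": "object",
          "properties": {
            "title": { "type": "string" },
            "steps": { "type": "array", "items": { "type": "string" } }
          },
          "required": ["title"]
        }
      }
    ]
  }
}
```

Because each tool's name, description, and JSON Schema travel in the protocol itself, a new AI client needs no custom integration work to start using everything the server exposes.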
From code to tested release in 5 steps
Here's how teams get the most out of TestPlan with AI. No extra subscriptions, no extra cost.
Bootstrap your test suite from code
One-time setup
Point Claude Code at your codebase with the TestPlan MCP connected. Ask it to analyse your code and generate features and test cases directly into TestPlan.
Analyse this codebase. For each user-facing feature, create a feature in TestPlan with a description, then create test cases with clear steps and expected results. Focus on what a QA tester would actually test — ignore infrastructure.
From zero to a full test suite in minutes instead of days. Review what it created, tweak anything that's off, and you've got a baseline.
Before each release, analyse what changed
Each release
Pull your completed tickets from your task tracker. Ask Claude to compare what changed against existing test coverage and flag anything that needs attention in the Plan Inbox.
Look at the completed tasks for the v2.4 milestone. For each one, check if there are existing test cases in TestPlan that cover the change. If not, create an inbox item describing what changed and suggesting test cases.
A human still reviews the inbox and decides what becomes a real test case. The AI does the legwork of figuring out what changed and what's not covered.
Run your regression tests
Each release
Create a release in TestPlan, start a test run, and work through it. This is the bit that's deliberately not AI — a human clicks through the app, follows the steps, and records what passed and what didn't. That's the whole point.
Track pre- and post-release issues
Each release
Log issues found during testing as pre-release issues. After release, anything reported by users goes in as post-release. Over time, this data tells you where your testing is strong and where it's missing things.
Look at the post-release issues in TestPlan for the last 3 releases. Are there patterns? Suggest new test cases that would have caught these issues earlier and create them as inbox items.
Continuously improve
Ongoing
Periodically ask Claude to review your test suite against the codebase. Features get added, code gets refactored, tests go stale. AI is good at spotting the gaps.
Review the test cases in TestPlan for the "Payments" feature. Compare them against the current codebase. Are any tests outdated? Are there new code paths that aren't covered? Send findings to the inbox.
80+ tools at your AI's fingertips
Everything your AI tool needs to manage your test suite, exposed via a single MCP server.
Feature Management
Create features, set categories, track status, view coverage stats, publish to your help centre.
8 tools
Test Cases
Create test cases with steps, expected results, priorities. Update, list, count, and filter.
6 tools
Test Runs
Start runs, record results, pause/resume, get reports, rerun failures, view team-wide stats.
10 tools
Releases
Create releases, manage the Draft → Testing → Released workflow, track issue stats.
7 tools
Issue Tracking
Create pre/post-release issues, resolve, reopen, and analyse coverage gaps.
9 tools
Plan Inbox
Create inbox items for changes that need testing. Track from flagged to addressed.
8 tools
Documentation
Create help centre pages, manage content, publish/unpublish, search documentation.
9 tools
Dashboard & Search
Get team-wide summaries, instructions, and an overview of your entire test suite.
2 tools
Other things you can do
All of these work through Claude Code (or any MCP client) with the TestPlan MCP — no extra tools or costs.
Ask Claude to create help centre pages from your features and test cases. It already knows what the product does.
Add a hook that asks Claude to check if a merged PR affects any existing test cases. If it does, flag it in the inbox.
After a test run, ask Claude to look at the failures and suggest whether they're likely bugs, test issues, or environment problems.
Starting an exploratory session? Ask Claude what areas of the app are under-tested based on your TestPlan data.
Connect multiple MCPs and let Claude coordinate between them. Close a task? Claude updates test coverage. Merge a PR? Claude checks if tests need updating.
Before shipping, ask Claude to summarise the release: what's been tested, what hasn't, what issues are outstanding, and whether it looks ready.
Ready to connect your AI tools?
Start free. Connect MCP in under 5 minutes. No AI tokens to buy.
Start Free