Your projects are saved in this browser only.
Clearing site data, switching browsers, or losing the device wipes them. Sign in and upgrade to Pro to sync across devices.
Test Dossier
Workspace•
Welcome to Test Dossier
Three things to know — that's all you need to get started.
1. Fill in the basics: Add a ticket key and a quick summary so you'll find this test later.
2. Add steps: Each step is one thing you do, one thing you expect. Paste screenshots — they auto-save.
3. Export or share: One click to export to your test tool, or generate a shareable link (Pro).
Your test data lives in this browser only. Clearing site data, switching browsers, or browser storage eviction can erase it.
Sign in to back up to the cloud (free).
New: brand your reports. Add your logo, brand color, and company name. Your clients see your brand on every export and share link.
You're in a workspace.
Tabs and runs you create here are visible to all team members.
Your personal tabs are saved separately and waiting when you switch back.
1–5: set step result (PASS / FAIL / BLOCKED / SKIPPED / NOT TESTED)
Ctrl+/: filter / search
▾ Sign-off & Notes (0 filled)
Output
Columns:
✓ Copied!
Export to a test management tool:
Jira-native plugins
Zephyr Scale · CSV
Xray Cloud (Manual) · CSV
Standalone test management
TestRail · CSV
TestRail · XLSX
PractiTest · CSV
PractiTest · XLSX
Qase · CSV
Azure DevOps Test Plans · CSV
TestLink · CSV
Don't see your tool?
Custom mapping… (build it)
My saved presets (manage)
Each export ships cases.csv or cases.xlsx in the target tool's format + a screenshots/ folder + a README. Column names match each vendor's documented import format — vendors occasionally rename columns, so if import fails, open Custom mapping and tweak.
Fastest workflow for Jira tickets: Click "ZIP Bundle" → unzip → drag images into the Jira ticket's attachment area → paste from evidence.jira (Server/DC) or evidence.md (Cloud). For other test tools, use the Export to Tool menu above.
Rendered preview — shows how the markup will look once images are attached at the destination.
▾ All Images
Discussion
0
Discussion is a workspace feature. Comments sync to every member of a workspace so the team can collaborate on each test.
Save & share this diff
A permalink anyone can open. Inputs are ≤5 MB each and count toward your Pro storage.
Saved. Anyone with the link can open it.
Compare
All tests
Every test case in this project. Currently-open tabs are marked open — click to switch. Closed tabs (those you closed with content in them) can be restored.
Import Test Cases
Drop a CSV or JSON exported from your test management tool. We auto-detect Zephyr Scale, Xray, Qase, Azure DevOps, and TestLink. Each row becomes a step; each test case becomes a tab.
File
Drop a file here, or click to choose
.csv or .json — up to 10 MB
Source tool
Paste a markdown table of test cases. Each row becomes a step in a new tab, with fields pre-filled.
Works with spec docs, exports, or AI-generated tables.
Expected format
# Optional H1 or H2 title becomes the Summary
### Optional section headers group related cases
| TC | Scenario | Test data | Expected result |
|----|----------|-----------|-----------------|
| 1 | Click login with valid creds | user: admin | Dashboard loads |
| 2 | Click login with wrong pw | user: admin, pw: xxx | Error shown |
Column names are matched flexibly — the parser recognises Action/Scenario/What/Step, Data/Test Data/Input/Precondition, Expected/Result/Outcome. The TC or # column is optional and ignored (steps are numbered automatically). You can paste a single table or multiple tables with H3 section headers between them.
✓ Copied
If you want AI to generate this for you, copy this prompt into ChatGPT or Claude:
---
Generate test cases in this exact markdown format. No JSON, no prose before or after — just the H1 title and markdown table(s).
# [Feature name]
### [Optional section name]
| TC | Scenario | Test data | Expected result |
|----|----------|-----------|-----------------|
| 1 | [what to verify] | [data or condition] | [verifiable outcome] |
Cover: happy path, edge cases, boundary values, negative scenarios. Keep scenarios concise. Expected results must be specific and verifiable — avoid "works correctly". Group related cases under H3 section headers if the feature has multiple areas.
Feature to test:
[PASTE FEATURE DESCRIPTION OR JIRA STORY HERE]
Paste your test cases
Preview
Nothing to preview yet. Pick a file or switch to "From markdown" and paste.
Custom Export
Pick a vendor preset, edit columns, or build your own mapping. Live preview shows the first row.
Built-in preset. Click Clone & Edit to make a writable copy you can adjust.
Columns — drag to reorder, click + to insert tokens
Header
Template
Step Scope
Actions
Available Tokens — click to copy
Live Preview — first 2 rows from active tab
Help
Getting Started
Test Dossier is a universal test evidence builder. Document a test case with screenshots, then export to Zephyr Scale, Xray, TestRail, PractiTest, Qase, Azure DevOps Test Plans, TestLink, Jira (markup or markdown), or build your own column mapping for any other tool. Everything you enter saves automatically to your browser — no account, no upload.
Your first evidence doc in 60 seconds
Fill Essentials: Ticket Key, Tester, Overall Result, Summary.
In Test Steps, type the first action you performed.
Paste a screenshot with Ctrl+V anywhere on the page. It attaches to the active step (highlighted green).
Fill Expected and Actual. Pick the step's Result.
Click + Add Step and repeat, or use + From Template for pre-shaped steps (Login, API call, etc.).
When done, click ZIP Bundle (for Jira) or one of the test-management exports.
Tip: You don't have to fill every field. Essentials + a few steps with screenshots is enough for most evidence. Environment, Context, and Notes groups stay collapsed by default — open them only when relevant.
Starting from an existing spec
If you already have a list of test cases (a spec doc, Jira story, Confluence page, or something you'd like AI to generate), click Import Test Cases in the top bar. Paste a markdown table — the tool creates a pre-filled tab with all steps ready to execute. See the Workflows section for details.
Working on multiple tickets
Click + in the tab bar to open a new tab. Each tab holds its own ticket, steps, and screenshots. Switching between tabs is instant and everything autosaves.
Import
Import test cases from a spec or AI
Use when you already have a structured list of test cases — from a spec doc, Confluence page, Google Sheet, or an AI-generated table — and don't want to type each one.
Click Import Test Cases in the top bar.
Paste a markdown table. Columns are matched flexibly: the parser recognises Scenario / Action / What to test / Step description (becomes Action), Test Data / Input / Condition / Precondition (becomes Test Data), and Expected / Result / Outcome / Verify (becomes Expected). A TC or # column is auto-ignored — steps are numbered by the tool.
Optional: include an H1 or H2 heading above the table — it becomes the new tab's Summary.
Optional: include H3 section headers between multiple tables — each step gets prefixed with its section name (e.g. [Productkeuze] Scenario description).
The preview below the paste area shows how many steps will be created and which sections were detected. If something looks wrong, fix the markdown and re-paste.
Click Create Tab from Import. A new tab opens with all fields pre-filled — you can now skip straight to execution and evidence capture.
Tip for AI users: The modal has a Show AI prompt toggle with a ready-to-use prompt you can copy into ChatGPT or Claude, along with your feature description. The AI produces a markdown table; paste it back into the tool.
Imports of 15+ steps automatically switch to Compact output style (single scannable table) rather than the default Detailed per-step layout. You can toggle this per tab using the Style: buttons above the output preview.
Export
Jira (evidence as a comment)
Best when your Jira ticket is a story or bug and you're posting test evidence alongside it.
Click ZIP Bundle (current tab). A zip file downloads.
Unzip it — you get evidence.html, evidence.jira, evidence.md, evidence.txt, README.txt, and a screenshots/ folder.
In Jira, open the ticket. Drag all files from screenshots/ into Jira's attachment area.
For Jira Server / Data Center: open evidence.jira, copy, paste into a new comment.
For Jira Cloud: open evidence.md and paste — or paste evidence.jira using the comment menu's Paste as wiki markup option. Image references auto-resolve to the attachments you just uploaded.
Picking the right format: evidence.html is best for emailing or printing to PDF — it's a self-contained report with images embedded. evidence.txt is real plain text for tools that strip formatting. Result badges (✅ PASS, ❌ FAIL, etc.) use shape-encoded emoji + bold so they render cleanly everywhere without depending on legacy color macros — and they survive monochrome / colorblind contexts where green/red discs would be ambiguous.
TestRail (import as test cases)
Use when you want the test to become a reusable case in your TestRail library.
Click TestRail CSV + Screenshots. Choose current tab or all tabs.
In TestRail, open the target suite and click the Import icon. Choose Import from CSV and upload cases.csv.
On Row Layout, pick Test cases use multiple rows. Set Title as the column that starts a new case.
Open each imported case and attach files from the screenshots/ folder to the right steps.
Save the TestRail config file after your first import. Load it next time and the wizard pre-configures itself.
PractiTest (import tests with steps)
Click PractiTest CSV + Screenshots.
In PractiTest go to Settings → Import & Export → Import Tests and Steps.
Upload cases.csv. Ensure UTF-8 encoding is selected.
Columns match PractiTest's documented field names — most auto-map. Verify Step Position maps correctly (it's how new tests are detected).
After import, attach screenshots to individual steps.
Case vs. Run: TestRail and PractiTest separate "test case definition" from "test run". The CSV import creates cases. The Actual result and step PASS/FAIL are preserved as a trailing note in each step's Expected field (separated by ---). To log an actual run, add the imported case to a Test Run/Set in the target tool.
JUnit XML (TestRail / PractiTest / Xray / CI tools)
Every evidence ZIP includes evidence.junit.xml — the standard JUnit XML format. Import it natively into TestRail, PractiTest, Xray, Allure, BrowserStack, CircleCI, Jenkins, GitHub Actions, GitLab — anything that ingests JUnit. One <testcase> per step; failures map to <failure> blocks. API check evidence + captured schemas travel as both <properties> (machine-readable for tools that key on attribute paths) and <system-out> (human-readable for tools that surface output text). Drop-in replacement for hand-rolled vendor-specific exporters when your platform speaks JUnit.
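As a rough illustration, a two-step evidence file follows the generic JUnit shape — the names and property keys below are illustrative, not the tool's exact output:

```xml
<!-- Illustrative JUnit shape only; Test Dossier's exact attribute
     names and property keys may differ. -->
<testsuite name="PROJ-123: Login flow" tests="2" failures="1">
  <testcase name="Step 1: Open login page">
    <system-out>Expected: form renders. Actual: form rendered.</system-out>
  </testcase>
  <testcase name="Step 2: Submit invalid password">
    <failure message="Error banner missing">Expected error, got silent reload.</failure>
  </testcase>
</testsuite>
```

Tools that ingest JUnit key on exactly these elements: one pass/fail per `<testcase>`, with failure detail inside `<failure>`.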
Other tools (Zephyr Scale, Xray, Qase, Azure DevOps, TestLink)
The Export to Tool ▾ menu in the output panel ships built-in mappings for the most common test management plugins beyond TestRail/PractiTest. Each one writes a cases.csv with columns matching the vendor's documented import format, plus a screenshots/ folder and a README.txt with vendor-specific import steps.
Xray Cloud — Atlassian Marketplace plugin. Manual test type, one row per step.
Qase — modern, SMB-friendly. One row per case, steps in a single cell.
Azure DevOps Test Plans — Microsoft work-item grid format.
TestLink — open-source test management.
If your vendor renames a column: there is no universal CSV standard for test cases. Each tool has its own column names, and they change over time. If a built-in import fails, click Export to Tool → Custom mapping, clone the failing preset, rename the column, and save. Your fix is one click away — no need to wait for a tool update.
Custom export mapping (any tool)
If your test management tool isn't in the menu — or if your team uses a custom CSV schema — open Export to Tool → Custom mapping. You can:
Pick any built-in preset, click Clone & Edit, and tweak it.
Build a mapping from scratch: add columns, set headers, and drop in {{token}} placeholders for any field. The token picker shows everything available (case-level: {{title}}, {{summary}}, {{preconditions}}, etc.; step-level: {{step.action}}, {{step.expected}}, {{step.result}}, {{step.screenshots}}, etc.).
Toggle row mode between "one row per case" (case columns blank on continuation rows — TestRail/Zephyr style) and "one row per step" (case columns repeat — Xray style).
Choose CSV or Excel output.
Save Preset stores it in your browser. Export Preset JSON shares it with teammates — they can Import Preset JSON to use the same mapping.
The Live Preview pane shows the first 2 rows so you can see the result before downloading.
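Under the hood, a template column boils down to token substitution. A minimal sketch, assuming the simple {{token}} syntax shown in the picker (the real renderer may handle escaping and more token kinds):

```python
import re

# Minimal {{token}} substitution for one custom export column.
# Token names follow the picker above; unknown tokens render empty.
def render(template: str, values: dict[str, str]) -> str:
    return re.sub(r"\{\{(.*?)\}\}", lambda m: values.get(m.group(1), ""), template)

row = {"title": "Login works", "step.action": "Click Login"}
print(render("{{title}} | {{step.action}}", row))  # Login works | Click Login
```

Rendering that template once per case (or once per step, depending on row mode) produces the CSV cells you see in Live Preview.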
Team tip: A QA lead can build one canonical preset for the team's test management tool, export it as JSON, and share it via Slack or a wiki page. Everyone else clicks Import Preset JSON once and gets identical exports forever.
Backup and restore
Use to move a tab between machines, recover from a browser wipe, or share a draft with a teammate.
⬇ Download Backup downloads the current tab as a single .json file (including screenshots as embedded base64).
⬆ Restore Backup reads a previously-downloaded backup and opens it as a new tab alongside your current ones — nothing is overwritten.
Back up before major browser updates, before clearing browser data, or if you want to continue work on a different device. The backup is the only way a tab leaves this tool — everything else stays in your browser's IndexedDB.
Moving multiple tabs? Use All Tabs ZIP from the output panel. The backup JSON is per-tab; for a full workspace, back up each tab you care about.
File format: The backup is a JSON object with the tab's fields (ticket, tester, summary, steps, etc.) plus a schema marker. If you try to restore something that isn't a backup from this tool, you'll see a specific error explaining what's missing — no silent failures.
File size: The backup can be large if the tab has many screenshots. Images are stored inline as base64 (roughly 33% bigger than the original). A tab with 50 screenshots might produce a 15–50 MB file.
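For orientation, a restored backup has roughly this shape — every key below is hypothetical, shown only to illustrate the fields-plus-schema-marker structure described above:

```json
{
  "schema": "test-dossier-backup@1",
  "ticket": "PROJ-123",
  "tester": "A. Tester",
  "summary": "Login flow evidence",
  "steps": [
    {
      "action": "Open login page",
      "expected": "Form renders",
      "result": "PASS",
      "screenshots": ["data:image/png;base64,iVBORw0KGgo..."]
    }
  ]
}
```

The schema marker is what lets the restore path reject non-backup JSON with a specific error instead of failing silently.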
Features
Tabs
Work on multiple tickets in parallel. Click + to add a tab; × closes one. Use Close all tabs in the toolbar's More-actions menu (or the command palette) to bulk-close — same archive behavior. Each tab has its own state. Closing a tab with content keeps it findable under All tests so you can restore it — nothing is lost. Empty tabs close silently with no prompt.
Test Case ID (TD-xxxxxxxx)
Every test case gets an auto-generated, stable ID that appears at the top of the form once you start filling it in. It's read-only — the app generates it once and it never changes. Use it to refer to a specific test case in Slack, Jira comments, or anywhere else. Click next to the ID to copy.
Why it matters: two test cases for the same ticket (e.g. happy path vs. error path) can be told apart by their Test Case IDs even though they share the ticket key. The ID is also embedded in every export (ZIP filenames, CSV/XLSX columns, README) so re-imports preserve the link to the original.
Sharing with teammates: when you export a backup and a teammate restores it, they see the same TD-xxxxxxxx you do. Both of you can refer to the case by its ID and mean the same thing. Only in the rare event of a true ID collision in their workspace will the import be renamed (with a notice).
Limit: the ID identifies a logical test case, not a version of it. If both you and a teammate edit the same case independently and re-share, you'll have two diverged copies with the same ID — there's no built-in merge resolution. For real collaborative editing, use the Pro tier's cloud sync where the server is the single source of truth.
Autosave
Every change saves to your browser's local database (IndexedDB) after about half a second. The indicator in the top bar shows "✓ Saved" when current. Closing the page and reopening restores everything.
Screenshots
Paste with Ctrl+V, drag files, or click the dropzone. Images compress automatically to keep drafts small. Click any image to view full size. Click the icon to annotate with rectangles, arrows, or blur/redaction.
Step templates
+ From Template offers pre-shaped steps (Login, Form submission, API call, Navigation, Permission check, Error handling). Some pre-fill the Test Data field with useful placeholders.
Test data at two levels
Shared test data (environment config, test users) goes in the Context group. Step-specific data goes in each step's Test Data field. Both support key: value lines — keys get auto-bolded in the output.
Output formats
Three preview tabs: Jira Markup (wiki syntax for Jira Server/Data Center and older Cloud instances), Markdown (Confluence, GitHub, general use), and Preview (rendered HTML — closest to what Jira will show).
Detailed vs Compact output
A Style: toggle sits next to the format tabs and switches how your test run gets rendered:
Detailed (default for most tabs) — a summary table on top, then one detail block per step with Action/Test Data/Evidence and side-by-side Expected / Actual. Best for 1–15-step test runs where each step deserves its own visual block.
Compact (auto-selected for imports with 15+ steps) — one wide scannable table with every step's details in a single row. Failed and blocked steps with defect notes get a Defect details section below the table. Best for long verification matrices (20-50+ steps) where a wall of per-step tables would be unreadable.
The choice is per-tab and persists — a 3-step bug-repro tab can stay detailed while a 45-step feature verification tab is compact. The toggle affects all three output formats (Jira, Markdown, Preview).
Defect details for failing steps
When you set a step's result to FAIL or BLOCKED, a Defect Details field appears below. Use it to describe what went wrong — error messages, reproduction conditions, timing, console logs, suspected cause. Click Copy as Bug Ticket to copy a pre-shaped bug ticket body (Summary, Steps to Reproduce, Expected, Actual, Environment) that you can paste directly into a new issue.
By default, Steps to Reproduce includes only the failed step itself — most bugs are self-contained ("clicking submit returns 500") and prior steps would be noise. When the failure depends on context (e.g., "after navigating through 3 pages, the form clears"), tick Prior steps as repro path in the dropdown — the body then includes every PASS/FAIL step before the failure as a numbered chain, with the failed step bolded and annotated "← failed here". Skipped steps are omitted (intentionally bypassed); blocked prior steps are omitted (didn't run). Like the format choice, this toggle persists for the session.
The format dropdown (▾) next to the button picks the markup: Markdown (GitHub, GitLab, Linear, Notion, Jira Cloud), Jira wiki (Jira Server / Data Center), or Plain text. The default follows your output-format selector at the top of the page; the first time you pick something different from the dropdown, that choice sticks for the rest of the session — every subsequent step's button inherits it, so you don't have to re-pick per step. Closing the tab resets back to the global default.
If the step has screenshots, a Download screenshots button bundles just this step's images into a small zip — drag the files into the bug tracker after pasting the body.
Lightweight formatting
In any textarea you can use four Markdown-style shortcuts:
**bold** → emphasis in the output
`code` → inline code (good for error messages, IDs, API paths)
Lines starting with - or * → bullet list
Lines starting with 1. , 2. , etc. → numbered list (renumbered automatically)
These render properly in Jira markup, Markdown, and the rendered preview. In TestRail/PractiTest CSV exports they stay as literal characters (readable but not styled, since those tools import as plain text).
Note: Lists only render as proper lists in block fields (Summary, Preconditions, Defect Details, etc.) and not inside the per-step Expected/Actual table cells — Jira's table cells don't support list markup. In those cells they display as bulleted/numbered lines, which is still readable.
Collapsible field groups
Essentials is always visible. Environment, Context, and Notes collapse by default. They auto-expand when they contain content. Explicitly collapsing a group with content makes that choice stick for the tab.
Recovering a closed tab
Closing a tab with content keeps it under All tests so it's never lost. Open that modal to see every test case in the project — currently-open tabs are marked, closed tabs have a Restore button.
Projects (free + Pro)
Projects group your test cases by client, app, or sprint. Click the project switcher pill in the top bar to switch between them or create new ones.
Free / anonymous: 1 local project. It lives in this browser only — a "Local" pill marks it in the switcher. Upgrade to Pro for unlimited cloud-synced projects across all your devices.
Pro: unlimited cloud-synced projects. They appear on every device you sign in to.
Workspace: projects scoped to the workspace are visible to every member.
Archive a project from the project menu (× hover button). Archived projects move to the collapsed Archived section; you can restore them or permanently delete with a typed-name confirm.
Upgrading from free to Pro: when you upgrade, a one-shot prompt offers to move your local projects to the cloud. Each migration creates a server project, re-tags + uploads your tabs, and clears the local entry. You can skip and migrate later from the project switcher.
Test Data picker (free)
Open from the sidebar Tools section. The picker has two tabs:
Generate: ~50 client-side generators including emails, names, phones, addresses, credit cards (Stripe test cards + Luhn-valid Visa), dates, UUIDs, strong passwords, and a Validated ID group with country-correct algorithms — US SSN, NL BSN (11-proof), EU IBAN (NL/DE/GB/FR/ES, mod-97), UK NI Number, BR CPF + CNPJ, ES DNI, DE Steuer-ID, FR NIR, CA SIN, AU TFN, JP MyNumber, IN PAN, ISBN-13. Each passes real form validation. Click Roll new to refresh samples.
Saved fixtures (Pro): named bundles of fields like "QA admin user" with email + password + role. Workspace-shared. Multi-select from the Generate tab to create a fixture from chosen generators in one shot.
Region selector (top of picker): filter to a country — NL/US/DE/GB/FR/ES/BR/CA/AU/JP/IN. Dutch / German / British versions also include locale-correct first names, last names, streets, cities, postcodes, and phone formats.
Smart paste: if you opened the picker from inside a text field, clicking a value pastes it at the cursor. Otherwise the value goes to your clipboard.
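As an aside, the Luhn rule those generated card numbers satisfy is simple to check yourself:

```python
def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right,
    subtract 9 from doubles over 9, and require the sum to end in 0."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4242424242424242"))  # True — Stripe's classic test Visa
```

Forms that validate card fields run exactly this check, which is why Luhn-valid generated numbers pass where random digits fail.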
Step Library (Pro)
Save a step once, reuse it across many test cases. Common use: "Sign in as admin user", "Reset the seed data", recurring assertions.
Save: click the 📎 icon in any step's action bar — the step's action / expected / test data become a library entry.
Insert: click 📎 From Library next to + Add Step and pick. The inserted step is linked back to the library entry (visible as a "Library: name" pill on the step header).
Propagation: when you edit the canonical library entry, every linked step updates on next render. Editing a linked step's fields locally detaches it from the library.
Coverage (Pro)
Reviews a set of tests against acceptance criteria and flags missing scenarios. Open from a plan's Coverage button (in the plan-detail header) or the sidebar.
Inputs: AC text + tests (manual paste, current plan, or whole project). Outputs: missing scenarios with suggested AC, edge cases a thorough QA would also test, and redundant pairs. Each missing/edge row has a Generate test from this → button that hands the suggested AC to the test generator and (when launched from a plan) auto-adds the new test to that plan.
Costs 1 AI quota slot per run.
Screen recording → AI test steps (Pro)
Record yourself doing the test once; let the AI write the step list. Open from sidebar Tools → Record screen. Test Dossier captures your screen + clicks (no audio by default), then sends key frames + interaction events to the model, which produces a draft of Action / Expected pairs you can paste into a test case. Edit before saving — the AI is a starting point, not a finisher.
Useful when documenting a flow you've never written down: regression tests for a feature you built last quarter, or a customer-reported bug repro. Recording stops on the next click outside the app or when you press the stop button. Costs 1 AI quota slot per recording.
AI Peer Review (Pro)
Reviews one test case (the active tab) for quality. Open from the sidebar or command palette.
Returns a 0–10 score plus issues with severity (high / medium / low), scope (per-step or whole-test), and a paste-ready suggestion. Buttons:
Apply — auto-write the suggestion into the right field. Available for vague_expected (replaces the step's expected), weak_step (replaces the step's action), missing_precondition (appends to preconditions), missing_edge_case (adds a new step). Applied cards are marked ✓ APPLIED.
Costs 1 AI quota slot per run. Difference vs Coverage: Coverage = breadth (many tests vs AC); Peer review = depth (one test, fine-grained).
File Template Filler (Pro)
Upload an empty data template — CSV, TSV, JSON, NDJSON (JSON Lines), XML, SQL INSERT statements, YAML, or Excel — and download it filled with valid test data. Open from sidebar Tools → Fill template.
Drop or pick the template. The format is auto-detected from the extension; unknown files are sniffed by content.
Each column maps to a generator (auto-detected from header names: bsn → NL BSN, email → random email, iban → IBAN-NL, etc.). Override any column.
Pick row count: keep existing or generate up to 10,000.
Generate & download writes the same format you uploaded.
Auto-fill mode: tick the checkbox before uploading and the file goes through the whole pipeline silently — auto-detected mappings, downloaded immediately. No mapping review.
Presets: save a column-to-generator mapping as a named preset; next time you upload a file with the same columns, the preset auto-applies. Pro presets are workspace-shared (server-stored); free presets stay in your browser.
Test plans (free + Pro)
Group test cases into a sprint regression or release plan. Pass/fail roll-up per build/environment, shareable as one frozen link. Open from the sidebar → Plans.
Free includes one personal test plan — enough to run a real sprint and see what plans do for you. Pro unlocks unlimited plans plus templates and recurring cadence. Workspace plans count against the workspace's tier, so a free personal account inside a Pro workspace gets unlimited workspace plans.
Plan templates (Pro). Save a plan as a reusable template — its items and per-suite "common setup" descriptions stay; runs do not. Click Save as template on a plan's detail view to flag it; it'll move to the Templates tab in the plan list. From there, Use creates a fresh plan with the same skeleton (date-stamped name by default) ready to execute.
Recurring cadence (Pro). On a template plan, the Recurring dropdown picks Off / Daily / Weekly / Biweekly / Monthly. The hourly sweep auto-instantiates each due template — your "Weekly smoke" plan appears in the list every Monday with no manual action. The next-fire date is shown on the template's detail meta.
Common pattern: a "Sprint regression" template with 30 items, recurring weekly. Each Monday a new Sprint regression — 2026-04-29 plan appears, ready for the team to run.
Plan results vs dashboard results — they can disagree on the same day. A plan row uses latest-run-wins ("did we get this case green for the release?"); the dashboards (Insights, Hotspots) use any-fail-in-day ("is anything broken on this case today?"). If a chain failed at 03:00 and a tester verified PASS at 14:00, the plan shows PASS (sign-off achieved) and the dashboard shows FAIL (something flaked earlier). Both true; intentional split, not a bug.
Test Inbox (Pro / Workspace)
A disposable email address that delivers right inside the app. Drop it into a signup form, password reset, or magic-link flow — messages arrive within seconds. Open from sidebar Tools → Test Inbox.
Personal inbox (Pro): pick a prefix like chibueze and you get chibueze.<random-token>@inbox.testdossier.com. Random suffix prevents enumeration; the prefix is shown in the UI.
Workspace inbox (workspace owner): a single shared address visible to all workspace members. Useful for testing flows the whole team needs to verify.
Rotate: change the prefix or suffix any time — old mail to the previous address bounces, past messages stay readable.
TTL: messages auto-delete after 7 days.
API check (Pro)
Fire HTTP requests, validate the response, chain dependent calls, optionally schedule the whole thing nightly. Each saved request becomes evidence on a test step. Open from sidebar Tools → API check.
The chain rail. Every step holds a chain of API calls in order — one for a single check, many for a login → action → cleanup sequence. The chain rail at the top of the modal is the canonical editor: each link is a card showing method + URL + result. Click + Add link to grow the chain, Edit on a card to load it into the form below, the × on a card to remove it. The header tells you what test step the chain is bound to ("3 links on Step 2") so the binding is never invisible.
Request: method, URL, headers (one Name: value per line), body. Or expand Import from cURL and paste a curl command — DevTools / Postman / manual one-liners all parse.
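The cURL import can be pictured as flag-by-flag tokenising — a simplified sketch (the real importer handles many more curl flags; the URL below is hypothetical):

```python
import shlex

def parse_curl(cmd: str) -> dict:
    """Very simplified curl parser: method, URL, headers, body only."""
    tokens = shlex.split(cmd)
    req = {"method": "GET", "url": None, "headers": {}, "body": None}
    i = 1  # skip the leading "curl"
    while i < len(tokens):
        t = tokens[i]
        if t in ("-X", "--request"):
            i += 1; req["method"] = tokens[i]
        elif t in ("-H", "--header"):
            i += 1
            name, _, value = tokens[i].partition(":")
            req["headers"][name.strip()] = value.strip()
        elif t in ("-d", "--data", "--data-raw"):
            i += 1; req["body"] = tokens[i]
            if req["method"] == "GET":
                req["method"] = "POST"  # curl implies POST when a body is set
        elif not t.startswith("-"):
            req["url"] = t
        i += 1
    return req

r = parse_curl("curl -X POST https://api.example.com/login "
               "-H 'Content-Type: application/json' -d '{\"user\":\"qa\"}'")
print(r["method"], r["url"])  # POST https://api.example.com/login
```

Because DevTools, Postman, and hand-typed one-liners all emit these same core flags, one small parser covers all three sources.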
Expected status: any 2xx, a specific code, or custom. Setting 500 means a 500 counts as a pass — useful for error-path tests.
Drift detection: tick Capture schema. The first response's shape becomes the baseline; re-runs flag changes field-by-field (email: expected string, got number). Allow extra fields is on by default — turn it off for strict mode.
Edit schema: the response panel's Schema tab is editable JSON. Mark fields optional, change types, or paste your own. Apply schema re-validates the current response.
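Capture-and-compare can be sketched like this — illustrative only, since the real validator also handles nesting depth, optional fields, and the extra-fields policy:

```python
def shape(value):
    """Reduce a JSON value to its type shape (recursing into objects)."""
    if isinstance(value, dict):
        return {k: shape(v) for k, v in value.items()}
    if isinstance(value, list):
        return [shape(value[0])] if value else []
    return type(value).__name__

def drift(baseline, current):
    """Flag fields whose shape changed, e.g. 'email: expected str, got int'."""
    issues = []
    for key, expected in baseline.items():
        got = current.get(key)
        if got != expected:
            issues.append(f"{key}: expected {expected}, got {got}")
    return issues

base = shape({"email": "a@b.c", "age": 30})
cur  = shape({"email": 12345,  "age": 31})
print(drift(base, cur))  # ['email: expected str, got int']
```

Note that values changing (30 → 31) never flags — only type-shape changes do, which is the point of schema drift rather than value comparison.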
Captures (chaining): in the chain card, define rules like body.token → authToken. The captured value goes into the chain's shared scope. Subsequent links use it via {{var:authToken}} in their URL / headers / body. Chain variables surface as click-to-insert chips above the URL input.
Send chain: fires every link in order with one shared {{var:...}} scope so captures flow forward. Halts on a hard status mismatch — if a login link fails, later links that depend on its token can't continue meaningfully. Schema drift in one link doesn't halt — the chain runs through and you see all results.
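The capture → scope → substitute flow amounts to the following — rule syntax as described above, with illustrative names:

```python
import re

def capture(response: dict, path: str):
    """Resolve a dotted capture path like 'body.token' against a response."""
    value = response
    for part in path.split("."):
        value = value[part]
    return value

def substitute(text: str, scope: dict) -> str:
    """Replace {{var:NAME}} placeholders with values from the chain scope."""
    return re.sub(r"\{\{var:(\w+)\}\}", lambda m: str(scope[m.group(1)]), text)

scope = {}
login_response = {"body": {"token": "abc123"}}
scope["authToken"] = capture(login_response, "body.token")  # rule: body.token → authToken
print(substitute("Authorization: Bearer {{var:authToken}}", scope))
```

Each link's captures are written into the shared scope before the next link's URL, headers, and body are substituted — which is also why a per-link Test run with a fresh scope fails to resolve the placeholders.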
Test (per-link): a button on each rail card fires that single link in isolation with a fresh scope. {{var:NAME}} placeholders intentionally fail to resolve since no upstream link runs first — surfaces the dependency. Useful for debugging which link broke a chain.
Live progress: when a chain fires, each card flashes queued → running… → ✓/✗ as the runner walks through, instead of going silent until the whole chain finishes.
Automate (nightly schedule). Tick Automate in the chain rail to put the whole chain on a cron schedule. The picker offers Daily 02:00 UTC, Daily 06:00 UTC, Weekdays 09:00 UTC, Hourly, Every 6 hours. The cron worker fires every link in order on the cadence with one shared scope — same semantics as Send chain. Failures land in Project Insights and Hotspots. The toggle is disabled until at least one link is attached to the step (automation needs a test-case binding to fire). Single-link checks work the same way; Automate is one toggle for one or many.
Save credentials (Pro vault). Tick Save credentials to store Authorization / Cookie / X-API-Key values for use in nightly runs. Stored in two places: in your browser's localStorage (fast cache for in-modal re-runs) AND server-side in the api_check_credentials table, encrypted at rest with AES-GCM under a per-deployment master key. Per-row IV, AAD bound to the (tab, evidence) pair so a row swap by a DB-write attacker won't decrypt elsewhere. The cron sweep decrypts in-memory at fire time and merges real values over the *** redacted headers in the saved evidence — this is what makes nightly authenticated chains actually work. Workspace-shared: anyone who can edit the tab can read or write the credential row, mirroring the rest of the tab data. Click Forget on the evidence card to clear both stores.
Captured schema as evidence. Each chain link's captured schema appears under its evidence card on the test step (collapsible ▸ Captured schema disclosure) and in every export — HTML / PDF, Markdown, Jira markup, JUnit XML, and the public share page. A multi-link chain shows N independent schemas, one per link, since each endpoint has its own response shape. The schema is the contract the link verifies; surfacing it tells reviewers what was checked, not just whether it passed.
History tab. Past runs of the active link, newest first. Click a row to expand the request + response detail. Multi-select for bulk Re-fire (replays through the proxy in sequence) or Delete (purges from synced evidence + browser cache, no undo). When the step has 2+ links, a This link's runs / Chain runs toggle appears: Chain runs view groups runs by the chain fire that produced them, shows one row per chain run with the step's overall result + which link failed (if any), and a Re-fire chain button that reopens the chain in the Run tab so you can rerun or edit any link before re-sending.
Step result auto-propagation. Any chain run (Send chain, Test, Automation Run all now, single Send re-run) updates the step's PASS/FAIL chip and Actual field immediately — no reload needed. Aggregation rule: any link FAIL flips the step to FAIL; otherwise PASS. The cron sweep also writes back to the in-tab evidence on next page load so nightly results show up without a manual refresh.
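The aggregation rule is small enough to state as code (a sketch of the rule as documented; real result values may be richer):

```python
def step_result(link_results):
    """Any link FAIL flips the step to FAIL; otherwise PASS."""
    return "FAIL" if "FAIL" in link_results else "PASS"
```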
testType auto-classification. Adding a chain to a step auto-sets the test case's Test Type to API Manual. Toggling Automate on promotes it to API Automation. A user-set value (Smoke / UAT / Regression / etc.) is never overwritten — the auto-derive only writes when Test Type is empty or already one of the API values. Powers test-case filtering ("show me only my automated tests") without manual tagging.
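The never-overwrite guard can be sketched like this (function and value names are illustrative, not the app's API):

```python
API_TYPES = {"API Manual", "API Automation"}

def derive_test_type(current, automate_on):
    """Auto-derive only when Test Type is empty or already an API value;
    a user-set value (Smoke, UAT, Regression, ...) always wins."""
    if current and current not in API_TYPES:
        return current  # never overwrite a user-set value
    return "API Automation" if automate_on else "API Manual"
```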
Automation tab. Cross-step view of every chain on the active tab with Automate on. Each row is one chain (1-link or N-link). Run all now fires every chain ad-hoc with isolated scope per chain (no leak between rows). Pause / Resume per row toggles the schedule without losing config. Multi-select for bulk Run.
Hide secrets toggle next to the Headers field masks Authorization / Cookie / API-key values to bullets. Auto-locks when saved creds load. Send / Attach still use real values.
Security: server-side proxy blocks SSRF (private IPs, loopback, cloud metadata, obfuscated forms, non-HTTP ports), follows redirects manually with each hop re-validated, caps response body at 2 MB, times out at 15 s, strips hop-by-hop headers. Pro-gated, rate-limited to 30 / min + 500 / day per user. Sensitive header values are redacted to *** in the synced tab data; the real strings live encrypted in the credential vault (server-side, Pro only) plus the browser's localStorage cache.
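A simplified version of the IP-literal part of that guard (the real proxy also resolves hostnames, re-validates every redirect hop, and blocks non-HTTP ports — none of which this sketch does):

```python
import ipaddress
from urllib.parse import urlsplit

def is_blocked(url):
    """SSRF-guard sketch: reject non-HTTP(S) schemes and literal private /
    loopback / link-local / reserved addresses (cloud metadata at
    169.254.169.254 falls in the link-local range)."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return True
    host = parts.hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # hostname: real code would resolve it and re-check the IP
    return ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved
```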
VPN / internal-only APIs: requests fire from a Cloudflare Worker, not your laptop, so VPN-only infrastructure is unreachable — your VPN tunnels traffic from your machine, not from our edge. Test against a public staging URL, or use a desktop API client (Postman / Insomnia / Bruno / curl) that runs on your local network and inherits your VPN.
CI integration (Pro / Workspace)
Hook your CI pipeline (GitHub Actions, GitLab CI, Jenkins, anything that can curl) into Test Dossier's history. Each test run from CI lands in History exactly like a manual run, and plans roll up the results automatically.
Open the project menu in the top bar → CI tokens….
Pick a label (e.g. "GitHub Actions") and create a token — copy the plaintext immediately, we only show it once.
POST your test report to /api/ci/ingest with Authorization: Bearer <token>. Three formats accepted:
JUnit XML: send Content-Type: application/xml with the standard JUnit report. Cypress, Playwright, pytest, Mocha, Jest all emit this when configured.
Cypress / Playwright JSON: native reporter output with the header X-CI-Format: cypress or playwright. Carries retry attempts, screenshots, and traces — richer than JUnit.
Each test is matched to a test case by ticket (exact) or summary (substring). Unmatched runs still log to History — visible but tagged "untracked".
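A sketch of the matching rule — the field names and the substring direction (CI test name containing the case summary) are assumptions:

```python
def match_test_case(ci_name, cases):
    """Ticket key exact match first, then summary substring.
    Returns None for an 'untracked' run."""
    for case in cases:
        if case.get("ticket") and case["ticket"] == ci_name:
            return case
    for case in cases:
        if case.get("summary") and case["summary"] in ci_name:
            return case
    return None
```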
Build / branch metadata: pass X-Build and X-Branch headers, or include them as body fields in the generic JSON path. They show up on the History row and the share page.
Idempotency: pass X-CI-Run-Id (e.g. ${{ github.run_id }}_${{ github.run_attempt }}) so retried CI workflows don't produce duplicate test_runs. Same run_id on a re-upload returns the original ingest's summary instead of inserting again.
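Putting the headers from the steps above together — only the header names come from this guide; the values are placeholders:

```python
def ingest_headers(token, fmt=None, build=None, branch=None, run_id=None,
                   content_type="application/xml"):
    """Assemble the request headers for POST /api/ci/ingest."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": content_type}
    if fmt:
        headers["X-CI-Format"] = fmt      # 'cypress' or 'playwright'
    if build:
        headers["X-Build"] = build
    if branch:
        headers["X-Branch"] = branch
    if run_id:
        headers["X-CI-Run-Id"] = run_id   # idempotency key for CI retries
    return headers
```

In a GitHub Actions job you'd pass the run id and attempt as the idempotency key, then POST the report body with these headers.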
End-to-end walkthrough: for a step-by-step recipe — Cypress test → GitHub Actions nightly → results in your History — see the CI-INTEGRATION.md guide in the repo. Covers token setup, Cypress/Playwright/JUnit options, idempotency headers, and a troubleshooting table for common errors.
Project insights
Per-project health dashboard, opened from the project menu → Project insights. Shows the project's pass rate, run count, last-run timestamp, and number of test cases at the top, then three stacked bar charts — Manual runs / day, CI runs / day, Automation runs / day. Charts split because a manual fail (a human noticed something), a CI fail (could be flake / timeout / infra), and an API automation fail (cron found a regression) read as different stories; KPIs stay unified because the project's overall health is one number.
Per-test-case-per-day rollup. The chart counts test cases as the unit, not raw fires. A test case with 5 chains scheduled at different cadences fires 5 test_runs rows on the same day; the chart folds them into one logical row per (test case, day, tester) with a fail-priority rule (any FAIL → the test case is FAIL for that day). Without this, a test case with many chains would inflate the chart 5x relative to one with one chain. Hotspots and the share-page snapshot use the same rollup. The modal History tab keeps per-link granularity for debugging.
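The fail-priority fold can be sketched as follows (field names are assumptions; only the rollup key and the fail-priority rule come from the description above):

```python
def daily_rollup(runs):
    """One logical row per (test case, day, tester); any FAIL that day wins,
    otherwise the first result recorded for the key is kept."""
    rolled = {}
    for r in runs:
        key = (r["case"], r["day"], r["tester"])
        if r["result"] == "FAIL" or key not in rolled:
            rolled[key] = r["result"]
    return rolled
```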
Chart ⇄ List toggle: a pill in the modal header flips the body between the charts and a day-by-day list naming every failed test case. Click a chart bar → flips to List, scrolls to that day. From a failed row, the click destination depends on source: manual fails open the test case (the documentation is where you investigate); CI fails open a CI Artifacts panel showing stack trace, retry attempts, and attachment paths; automation fails open the chain in the API check modal so you can re-fire or edit. Unmatched CI runs (not linked to a test case) carry an "UNTRACKED" chip.
Range selector in the header (7 / 14 / 21 / 28 / 30 / 60 / 90 days) — defaults to 30 days, well-suited to nightly runs. Range and view choice persist across sessions.
Workspace insights
Cross-project team-health dashboard, opened from the sidebar (workspace mode only). Shows aggregate KPIs across every project the workspace owns — total test cases, runs in the sprint window, pass rate, active members — plus a runs-per-day chart, a result-distribution donut, top contributors, and a "needs attention" list (most-failing tests + stale tests not run this sprint).
Range selector defaults to 14 days (one sprint). Visible to any workspace member, not owner-only — the data is already member-visible via History and the activity feed; this surface just aggregates it.
vs. Project insights: Workspace insights answers "how is my team doing?" across every project. Project insights answers "how is this one project doing?" with manual / CI split. Use Workspace for management standups; use Project for narrowing down where to spend test effort.
Real-time presence (Workspace)
Inside a workspace, a small green dot on the tab bar shows when a teammate is viewing the same test case. Hover the dot for the email list. Heartbeat polls every 10s while the page is visible, pauses on hidden tabs.
Comments + @-mentions (Workspace)
Every test case in a workspace has a discussion thread under the output panel. Drop a comment to ask a teammate to review your evidence, flag a flaky step, or capture a decision. Type @ to open the member picker — the mentioned teammate gets a notification and the comment shows up in their Mentions modal (sidebar bell). Click any mention to jump straight to the test case it's on.
Comments survive workspace member churn — when a member leaves, their comments stay visible (attributed by email) so the conversation history isn't lost.
Activity feed (Workspace)
Sidebar Activity shows a chronological stream of what's happening in the workspace — runs logged, test cases edited / cloned / restored, comments posted, plans created. Each row links back to the source so you can jump from "Alice logged a FAIL on PROJ-123" straight to that run in History.
Use it as a low-effort standup feed: open the workspace in the morning and skim the feed instead of asking "what changed yesterday?"
Share as report (Workspace)
From the History modal, multi-select runs and click Share as report — Test Dossier mints a read-only link bundling the selected runs into a single page. Optionally protect with a password (the link won't open without it) and turn on per-share view analytics (you see who opened it and when, anonymized to first-time / repeat).
Useful when posting nightly results to a channel where stakeholders don't have Test Dossier accounts — or sending a customer a self-contained "what we tested for your release" page.
Branding (Pro)
Upload a logo, pick a brand color, and set a display name — every export the app produces (HTML evidence, branded PDF, public share pages) carries them. Open from sidebar Tools → Branding. Logo lives in R2 (5MB cap, PNG/SVG/JPEG). Color drives the accent in headers and result chips on exported pages. Display name shows on share pages instead of the bare email.
Workspace-scoped: a workspace owner sets the branding once and every member's exports inherit it, so customer-facing reports look consistent regardless of who ran the test. Personal Pro users brand only their own exports.
Run history + version diff
Every time you click Log to History, Test Dossier freezes a complete snapshot of the test case at that moment — every step, every screenshot, every result. Open History from the project menu or the sidebar to browse, filter, and compare snapshots.
Filter: by project, result (PASS / FAIL / etc.), text search, date range. The active filters persist across modal opens.
Restore: replace the current test case with a prior snapshot — useful when you've accidentally edited away something important. The button only appears when the snapshot is a prior version of the active test case.
Compare: side-by-side diff between any prior snapshot and the live test case. Highlights changes per field — added/removed steps, edited expected/actual, modified test data. Great for "what did I break?" investigations.
Re-run: opens the snapshot as a new tab so you can re-execute it without losing the original log.
Diff tool (Pro)
Compare two blobs side-by-side without leaving the app: text, code, JSON (with key-order normalization), CSV (row-aligned), log files (timestamp-stripped), PDFs (text extraction), or images (pixel-level overlay). Open from sidebar Tools → Diff tool.
Inputs: paste into either pane, drop a file (text, image, PDF), or use ⌘V on a screenshot. Both panes auto-detect kind — JSON / CSV / log / text / PDF / image — independently. Mixed kinds (e.g. text vs image) refuse to diff with a clear message.
View modes: Side-by-side for line-by-line scanning, Unified for patch-style review. Word-level highlights inside changed lines so you can see exactly which token shifted.
JSON smart-mode: on by default — {a:1, b:2} and {b:2, a:1} render as identical. Turn off when key order matters (e.g. ordered API responses).
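Key-order normalization amounts to parsing and re-serializing with sorted keys — a sketch of the idea, not the app's actual engine:

```python
import json

def normalized(text):
    """Parse and re-serialize with sorted keys so key order stops mattering.
    Arrays keep their order, which is why you'd turn smart-mode off for
    ordered API responses."""
    return json.dumps(json.loads(text), sort_keys=True, indent=2)

def json_equal(a, b):
    return normalized(a) == normalized(b)
```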
Image diff: overlays both images, highlights changed pixels with a configurable tolerance. Shows total pixel-changed % and a "size mismatch" warning when the dimensions differ.
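A toy version of the tolerance check, treating images as equal-size grids of (r, g, b) tuples — the real engine works on decoded image data and reports the size mismatch as a warning rather than erroring:

```python
def pixel_diff(img_a, img_b, tolerance=0):
    """Count a pixel as changed when any channel differs by more than
    `tolerance`; return the changed-pixel percentage."""
    if len(img_a) != len(img_b) or len(img_a[0]) != len(img_b[0]):
        raise ValueError("size mismatch")
    changed = total = 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            if any(abs(ca - cb) > tolerance for ca, cb in zip(pa, pb)):
                changed += 1
    return 100.0 * changed / total
```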
Save & Share: creates a permalink that anyone can open. Inputs ≤5 MB each (counts toward your Pro storage). Optional password + expiry. Saved diffs replay on the share page using the same engine.
Tab restore: closing the panel keeps your inputs in place. Reopening picks up where you left off — no lost work mid-investigation.
The same engine powers Compare in History (snapshot vs current test case) and the read-only viewer behind shared diff links.
Workspaces (Pro / Team)
A workspace is a shared container — projects, test cases, history, plans, fixtures, branding — visible to every member you invite. Personal mode is private to you; switching into a workspace puts you in shared context where teammates see your edits live and you see theirs.
Create: Account menu → Workspaces → Create team workspace. First seat is included with Pro; additional seats are $15 / month each, billed via the same Lemon Squeezy subscription.
Invite: from workspace settings, paste an email and pick a role (owner / member). Owners can manage members + billing + branding; members can edit shared content but can't add/remove people.
Switch context: the workspace pill in the top bar is your current scope. Click ← Switch to Personal to leave the shared space; clicking a workspace name in the account menu switches into it. The pill is color-coded so you always know which scope you're editing.
What's shared: projects, tabs, test runs, history, plans, fixtures, branding, comments, mentions, presence — anything created while in workspace context. Local items in your personal scope stay personal.
Membership lifecycle: leave any time from Account → Workspaces → Leave. Owners can step down (if at least one other owner exists) or remove members; removed members lose access immediately but their authored content stays. Workspaces can be archived (data preserved, no further edits) or fully deleted with a typed-name confirmation.
The features tagged (Workspace) — real-time presence, comments + @-mentions, activity feed, share-as-report, project insights — only activate inside a workspace context. In personal mode they're invisible.
Anonymous + Free + Pro at a glance
Feature | Anonymous | Free (signed in) | Pro
Test cases | Unlimited | Unlimited | Unlimited
Projects | 1 local | 1 local | Unlimited cloud
Test plans | — | 1 plan | Unlimited + templates + recurring cadence
Generators | All ~50 | All ~50 | All ~50
Pro tools | — | — | ✓ and more (hover)
Survives clearing browser data | — | Account survives, projects don't | Cloud synced
Keyboard Shortcuts
Ctrl+V: Paste screenshot anywhere on the page → drops into the active step
Ctrl+Enter: Add a new step
Ctrl+D: Duplicate the active step (action/expected/data carry over; result resets to PASS)
1 / 2 / 3 / 4 / 5: Mark the active step ✅ PASS / ❌ FAIL / ⚠️ BLOCKED / ⚪ SKIPPED / – NOT TESTED
Alt+↑ / Alt+↓: Reorder the active step
Ctrl+/: Focus the step filter (search by text or result)
Mouse actions
Click a step to make it active (the target for Ctrl+V).
Click the ▾ on a step header to collapse it.
Drag a step header to reorder steps.
Tick the checkbox on the left of a step header to select it for bulk actions (mark many as PASS/SKIPPED at once).
Click ⎘ in a step header to duplicate that step.
Click any screenshot to open it at full size.
Click on a screenshot to annotate.
Click a group header (Environment, Context, Notes) to expand or collapse it.
Tips & Gotchas
Where your data lives
By default, tabs and drafts live in your browser's IndexedDB — one device, one browser profile. This storage is per-origin, so different URLs (or incognito windows) keep separate buckets and don't cross over. If you sign in, tabs and run history also sync to your account so you can pick them up on another device. In a team workspace, that account-synced data is visible to everyone on the team.
If you're signed in to Pro, your tabs, runs, and history live on your account, not this device. Sign out ends the session and clears the local cache so the next person on this browser starts fresh. Sign back in on any device to see your synced data again. Use Sign out and forget this device on a shared machine when you also want to clear device preferences (theme, focus mode, dismissed banners).
Incognito/private windows use throwaway storage — anything you do there disappears on close, even if you're signed in.
Backing up your work
Even with cloud sync, keep periodic local backups. Use Download Backup on each tab (or All Tabs ZIP from the output panel) to get a portable file you can restore later — handy for archival or moving work across accounts/workspaces.
Bug ticket: format + "prior steps" toggle reset on tab close
The Copy as Bug Ticket dropdown's choices (Markdown / Jira wiki / Plain text, plus the "Prior steps as repro path" checkbox) persist for this browser session only — closing the tab clears them and the next session falls back to your global Output format setting (with prior-steps off). This is intentional: it lets you flip into "I'm filing GitHub bugs today" mode without permanently changing your team's default. If you want a permanent change, set the Output format selector at the top of the page instead — that's the long-term preference the bug-ticket button inherits from.
Redaction / sensitive data
The annotator's Blur tool pixelates rectangles. It does NOT detect faces or PII automatically. Review your screenshots before exporting, especially if they contain credentials, customer data, or other sensitive info. Once a screenshot is in an exported ZIP, there's no undo.
Also: don't paste real passwords into the Test Data field. Use references like "password in 1Password vault X" instead.
Duplicate filenames
Two screenshots with the same name collide in Jira (only one renders). The tool warns you when duplicates exist and the filename input turns red. Rename them to unique values.
Large drafts
With compression, typical drafts are well under browser storage limits. If you're accumulating hundreds of full-resolution screenshots in one tab, IndexedDB may refuse to save. The save indicator turns red when that happens. Export and archive in that case.
Jira output looks dense in the Markup tab
The raw Jira wiki syntax (pipes, dashes, {color} tags) reads messy. Switch to the Preview tab to see what it'll actually look like when rendered in Jira — that's the tab to judge output by.
FAQ
Where did my tab go?
If you closed it accidentally, open All tests — every test case in the project is listed there. Any closed tab that still has content shows a Restore button; click it to bring the tab back.
Can I share this with teammates?
Two ways. One-off: use the Share button to publish a tab or run as a read-only link — no account needed for the viewer. The link works in any browser; you can revoke at any time from the Share modal. Ongoing team work: create a team workspace from the Account modal, invite members by email, and projects/tabs/runs created in the workspace are visible to everyone on the team. See "How does a team workspace work?" below.
Pro share options: shared links can be password-protected (the viewer must enter the password before the page loads — no preview, no SEO crawl) and carry per-share view analytics (you see how many times the link was opened, when, and a coarse first-time vs repeat split — emails are not collected). Both toggle from the Share modal. Useful for sending sensitive evidence to customers or auditors where you want both confidentiality and proof of access.
How does a team workspace work?
A workspace is a shared container for projects, tabs, and run history. Any member can create, edit, or log against workspace projects; the History list shows a "by you" / "by alice@…" chip so you know which teammate logged each run. Owners can invite/remove members, change roles, and manage billing ($9 per active seat per month, free while you're the only member). Switching context (workspace ↔ Personal) happens via the banner at the top — your personal tabs stay private to you regardless.
Can multiple people edit the same tab simultaneously?
Not in real time. Workspace tabs use last-write-wins with a stale-check: if two people edit the same tab and one saves stale data, the second save returns a conflict and you can refresh. For a team, agree on who's editing what, or use separate tabs.
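Last-write-wins with a stale-check is essentially a version-guarded save — a sketch with an in-memory dict standing in for the backend:

```python
class Conflict(Exception):
    """Raised when the tab changed since the client last loaded it."""

def save_tab(store, tab_id, expected_version, data):
    """The client sends the version it last loaded; if someone else saved in
    between, the versions no longer match and the save is rejected so the
    client can refresh instead of silently clobbering the newer copy."""
    current = store[tab_id]
    if current["version"] != expected_version:
        raise Conflict("tab changed since you loaded it — refresh and retry")
    store[tab_id] = {"version": expected_version + 1, "data": data}
```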
Does this work offline?
Yes for personal use — once the page is loaded, evidence capture and exports run entirely in your browser. Cloud sync, workspace access, and shares need network.
Will my drafts survive a browser update?
Yes, IndexedDB persists across updates. It only clears if you explicitly clear site data, use "forget this website," or switch browsers/profiles. Keep a periodic JSON export as backup anyway — storage loss from browser bugs is rare but not zero.
Can I connect it directly to TestRail/PractiTest without the CSV step?
Not currently. Direct API integration would need your TestRail/PractiTest credentials stored somewhere — the CSV import route keeps them in the target tool. The data flows through a file artifact you can inspect before importing, which is also useful for review/audit.
Do I need an account?
No — you can use the entire capture/export workflow without signing in. Your data stays in your browser. Sign in (free, magic-link) if you want cross-device sync, cloud history, public share links, or to join a team workspace. Pro adds higher limits; team workspaces are billed per seat.
Who runs Test Dossier / where does my data go?
Test Dossier runs on Cloudflare (Pages, D1, R2). If you're not signed in, no data leaves your browser. Once signed in, tabs/runs/screenshots you choose to sync are stored in your account; team-workspace data is visible to that workspace's members. You can export and delete your account from the Account modal at any time.
API check
Fire an HTTP request, capture the response, and (optionally) save the request shape as evidence on a step. On re-run we validate the new response against the captured status + schema and show any drift — useful for catching backend changes that break your contract.
Have a curl command? Paste it here. Auto-fills method, URL, headers, and body.
Re-run mode
🔗 Chain
Vars:
Request
Add header — pick to avoid typos
Insert dynamic value — fresh per Send
Preview
Validation
🔗 Chain to the next call: Capture a response value here, reference it as {{var:NAME}} in the next API check.
Example: on this POST /login, capture body.token as authToken. The next check can then send Authorization: Bearer {{var:authToken}} automatically every time the chain fires — manually or on a schedule.
Edits become the baseline when you Attach to step.
Past runs of this saved evidence. Click a row to expand the request, or re-run.
🔒 History locked. Deletion is disabled for this evidence. Untoggle Lock history on the Run tab to allow it.
API checks on this tab with Automate on. The cron worker fires each chain on its chosen cadence (hourly, daily, weekdays — picker is in the chain rail) with one shared scope per chain; failures land in Project Insights and Hotspots. Adjust schedules from each row, or fire all of them ad-hoc to verify before bed.
0 scheduled
Get Pro · Sign in
New here? Enter your email — we'll send a magic link. After signing in, you can upgrade to Pro for cloud sync, shareable evidence links, branded PDFs, and AI test generation. Already paid? Same flow brings you back to your account.
Pick a filter and preview the impact before deleting. Removed images are detached from any test, run, or share that referenced them — the rows themselves stay. This can't be undone.
Share this test case
Generates a read-only link anyone can open. You choose what's included before sharing.
Include in this share
Required if you want to restrict who can view this link.
Your link is ready:
Your share links
Active links you've created. Revoke any link to make it instantly inaccessible.
0 selected
Loading…
Branding
Used on branded reports, share-page headers, and share-link previews. Pro feature.
Branding is a Pro feature
Upgrade to add your logo, brand color, and company name to every report and share link.
PNG, JPEG, WebP, or GIF. Max 500 KB. Recommended 240×80.
Used as the accent throughout your branded report and share pages.
Replaces "Test Dossier" on the report cover and share-page footer. Leave blank to keep Test Dossier branding.
Test Dossier
Test evidence
Checkout — happy path
8 steps · 7 passed · 1 failed
Workspace
Loading…
Logo, brand color, and display name shown on every report and share-link generated by this workspace. Owner-only.
No logo
PNG, JPEG, WebP, or GIF. Max 500 KB. Recommended 240×80.
Used as the accent throughout your team's branded reports and share pages.
Replaces "Test Dossier" on report covers and share-page footers. Leave blank to use the workspace name.
Test Dossier
Test evidence
Checkout — happy path
8 steps · 7 passed · 1 failed
Test Plans
Group test cases into a regression suite or release plan. Status rolls up from your test runs — re-run any test from History and it counts toward the plan automatically.
You don't have a project yet.
Plans live inside a project. Create one first — your plans, test cases, and history will all be organised under it.
Plan
Loading…
Add test cases to plan
History
Every test you log gets archived here, searchable forever. Click Versions of this test to filter to the active test case and roll back to a prior snapshot.
0 entries
Move your local projects to the cloud?
You're now on Pro. The projects you created while on free are still in this browser only — move them to the cloud so they sync across devices.
Welcome — sync your local tests?
You have 0 test cases stored locally. Move them into My Tests so they sync across devices?
"Skip" leaves them local-only. You can move them later by editing each tab.
Team health
Mentions
Comments where someone @-mentioned you. Click to jump to the test case.
Workspace activity
Who changed what, newest first. Click any item to open the related test case.
Fill template with valid data
Drop in a template — we support CSV, TSV, JSON, NDJSON (JSON Lines), XML, SQL (INSERT statements), YAML, and Excel. We map each column to a data type, fill the rows with valid values that pass real validation, and give you a downloadable copy. Nothing leaves your browser.
filename.csv
0 columns · 0 rows
Map each column to a generator. Cells you leave on "Keep as-is" won't be touched.
AI fixture builder
Describe what kind of fixture you want and the AI suggests fields bound to the right generators. Examples: "Dutch customer profile with bank info", "US e-commerce checkout test user", "German tenant with two contact emails".
AI will replace the current field list. Costs 1 daily AI quota slot. You can edit the result before saving.
Export fixture
Generator-typed fields re-roll for each row, so 100 rows = 100 fresh emails / IBANs / etc.
Test Data
0 selected · Roll new for fresh values, then save the bundle.
Saved fixtures live on the server and (in workspaces) are shared with your team. Each fixture is a named bag of fields — perfect for "QA admin user" with email + password + role.
No saved fixtures yet.
Fill placeholders
These values will be substituted into the step's action, expected, and test data fields.
Step Library
Save steps you reuse across tests — sign-in flows, common setup, recurring assertions. Insert with a click.
Use {{name}} placeholders (e.g. {{role}}, {{email}}) — when this step is inserted into a test, you'll be prompted to fill them once.
Each reference expands into its own step at insert time. If the referenced step has placeholders, fill default values here so the tester isn't prompted again.
AI peer review
A senior-QA-style review of the current test case. Flags vague expected results, missing edge cases, weak verifications. Always sanity-check — the AI doesn't know your product.
Reviewing your test case…
Review test coverage
Paste your acceptance criteria, optionally pull in existing tests to evaluate, and the AI will flag gaps the tests don't cover. Always review — the AI can miss product-specific context.
If you pick "Current plan" or "Whole project" above, this box is ignored.
Generate a new test case
AI returns a complete test case — summary, preconditions, and 3–12 steps — opened in a new tab. Always review before executing — AI suggestions can miss product-specific details.
Add steps from recording
Record your screen while you exercise the feature. AI extracts steps from the keyframes and appends them to the active tab. Up to 5 minutes; max 20 keyframes per recording. Recording stays on your device — only extracted keyframes are uploaded.
Switch to the app you're testing now. Recording starts in a moment.
Recording · 0:00 · 0 frames
Wrapping up — recording stops at 5:00.
You're recording a single browser tab — the address bar and browser chrome won't be captured. To record URL-bar interactions, pick Window or Entire Screen next time.
You're already on the app you're testing. When done, click the browser's "Stop sharing" bar (no need to come back here) — or use Stop below.
0 keyframes captured. Click any frame to view it full-size; click × to remove it. Need at least 2 frames.
⚠️ Your recording captured few keyframes — output may be sparse. If your screen wasn't changing much, the AI has limited context. Consider re-recording with more interactions, or generate anyway.
Extracting steps from recording…
Record interactions in Chrome → DevTools (F12) → Recorder, click Export → JSON, then drop the file or paste it below. Steps append to the active test case. Where do I find this?
Start new recording, perform the steps in your app, then End recording.
Click the export icon (↓) → Export as JSON.
Drag the downloaded file here, or open it and paste the contents.
or drop a .json here, or paste below
Preview
Click anywhere or press Esc to close
Test Inbox
A disposable email address that delivers right here. Drop it into signup forms, password resets, or magic-link flows — messages arrive within seconds. Auto-purges after 7 days.
CI Ingest Tokens
Generate a token, plug it into your CI pipeline, and POST results to /api/ci/ingest. Each token is scoped to this project. Plaintext is shown once on create — copy it to your secret manager before closing.
Project insights
CI run details
Change email
We'll email a confirmation link to your current address — clicking it moves your account to the new email. A stolen session alone can't change your email this way.
Check your current inbox.
Click the link there to confirm the move. We've also sent a heads-up to the new address so they know an account is moving in. The link expires in 30 minutes.
Delete account
This permanently deletes your account, all your test cases, projects, history, share links, and uploaded screenshots. This can't be undone.
We'll email a confirmation link to your address. The deletion only runs after you click the link, so a stolen session alone can't wipe your account.
If you have a Pro subscription, cancel it from Manage billing first to avoid renewal charges.
Check your inbox.
Click the confirmation link to finalize the deletion. The link expires in 30 minutes. If you don't click it, nothing happens — your account stays put.
New project
Projects help you organize tests by client, app, or initiative.
Upgrade to Pro
Pro — $15/month
Shareable evidence links (single, project, or collection)