Stably has two complementary layers that keep your test suite healthy as your app evolves:
| | Classic Auto-Heal | Autofix (Agent 2.0) |
|---|---|---|
| When | During test execution | After a test run completes |
| What it fixes | Broken locators, flaky screenshots | Any test failure — selectors, flows, timing, auth, config |
| How | Retries the failing action with AI-suggested alternatives | Multi-agent orchestrator triages, diagnoses, and edits your test code |
| Output | Test passes in the current run (healed inline) | Code changes committed to your repo (optionally as a PR) |
| Setup | Enable in project settings + `stably.config.ts` | `autofix: true` in `stably.yaml`, or the `stably fix` CLI |

Classic auto-heal (inline healing)

Classic auto-heal runs during test execution. When a single action fails, Stably’s AI attempts the smallest safe fix inline so the run can continue. It covers two categories:
  • Locator healing — requires describe() on your locators. When a described locator fails, AI uses the description + page context to find the updated element.
  • Screenshot healing — distinguishes benign render variance (font hinting, subpixel shifts) from real UI changes in toHaveScreenshot() assertions, suppressing false positives automatically.
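The locator-healing fallback can be sketched as a self-contained stand-in — the `resolve` helper and its signature are illustrative, not the Stably SDK API:

```typescript
// Illustrative stand-in for locator healing — not the Stably SDK API.
// A described locator carries both a selector and a natural-language
// description; when the selector no longer matches, the description is
// what the AI uses to find the updated element.
type DescribedLocator = { selector: string; description: string };

function resolve(
  loc: DescribedLocator,
  pageSelectors: Set<string>,               // selectors present on the page
  suggest: (description: string) => string  // stand-in for the AI suggester
): string {
  if (pageSelectors.has(loc.selector)) {
    return loc.selector;                    // primary selector still valid
  }
  const healed = suggest(loc.description);  // fall back to the description
  if (!pageSelectors.has(healed)) {
    throw new Error(`Could not heal locator: ${loc.description}`);
  }
  return healed;
}
```

This is why `describe()` is required for healing: without the description there is nothing for the fallback path to reason from.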
Classic auto-heal is configured in stably.config.ts via the autoHeal option. See the Stably SDK setup guide for details.
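A minimal sketch of what that configuration might look like — `autoHeal` is the option named above, but the surrounding config shape is an assumption, not the documented schema:

```typescript
// stably.config.ts — sketch only; verify the exact shape in the SDK setup guide.
export default {
  autoHeal: true, // enable inline locator + screenshot healing
};
```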

Autofix — Agent 2.0 (post-run code repair)

Autofix runs after a test run completes. Instead of patching actions inline, it edits your actual test code — updating selectors, rewriting flows, fixing timing issues — and optionally opens a PR. If the test caught a real bug in your application, Autofix can even fix your application code directly. Under the hood, Autofix uses an orchestrator pattern with specialized AI subagents:
1. Triage

A triage agent groups failing tests by likely root cause into issues and classifies each one:
  • The test caught a real bug in your product code. Autofix can fix the application code directly and open a PR — or flag it for your team to investigate.
  • The test has a flaky locator, timing issue, or small selector change. Autofix patches the test code with a targeted fix (e.g., updating a data-testid, adding a wait).
  • Your application had a significant update — a redesigned flow, new pages, or restructured UI. Autofix rewrites the affected test steps to match the new application behavior.
Tests that were already fixed 2+ times for the same root cause are automatically skipped to save cost — you’ll just be prompted to accept the already-opened PRs from previous autofix runs.
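The grouping and skip behavior of this step can be sketched as follows — the types and the `triage` function are illustrative, not Stably internals:

```typescript
// Illustrative sketch of the triage step — not Stably internals.
type Failure = { test: string; rootCause: string };
type Issue = { tests: string[]; skipped: boolean };

function triage(
  failures: Failure[],
  priorFixCounts: Map<string, number> // times each root cause was already fixed
): Map<string, Issue> {
  const issues = new Map<string, Issue>();
  for (const f of failures) {
    // Group failing tests that share a likely root cause into one issue.
    const issue = issues.get(f.rootCause) ?? { tests: [], skipped: false };
    issue.tests.push(f.test);
    // Root causes already fixed 2+ times are skipped to save cost.
    issue.skipped = (priorFixCounts.get(f.rootCause) ?? 0) >= 2;
    issues.set(f.rootCause, issue);
  }
  return issues;
}
```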
2. Test update workers

Each issue is assigned to a specialized worker:
  • Test editor — diagnoses failures from traces, screenshots, and DOM snapshots, then applies targeted fixes (can run in parallel).
  • Browser inspector — opens a live browser to inspect page state, debug locators, and verify fixes (runs sequentially).
Every fix is evidence-driven — workers diagnose the root cause before applying any changes.
3. Validate

Fixed tests are re-run to confirm the repair actually works. Each fix includes a trace proof you can inspect in the dashboard.
4. Report

A summary report is generated with fixed/unfixed counts, root causes, and code diffs. If your repo is connected to GitHub, fixes are submitted as a PR.
[Screenshot: Autofix diagnosis report showing root cause analysis and applied fixes]
[Screenshot: Autofix code diff showing test code changes]

Diagnosis categories

Every failure is classified so you can understand at a glance what happened:
| Category | What it means |
|---|---|
| Test Outdated | Selectors or flows changed in your app |
| Actual Bug | The test caught a real application bug |
| Unstable | Intermittent failure due to timing or race conditions |
| UI Change | Intentional UI change — test needs updating |
| Miscellaneous | Other issues |
Autofix also distinguishes between different kinds of timeout failures — so it won’t blindly increase timeouts when the real issue is a broken locator or wrong page state.

Smart skip logic

Autofix tracks its own history. If a test has been “fixed” 2+ times for the same root cause but the fix keeps being reverted or rejected, Autofix skips it on the next run. This prevents wasting AI budget on persistently broken tests that need manual attention.

Enabling Autofix

CLI — chain after any test execution:

```shell
# `||` runs `stably fix` only when `stably test` exits non-zero (i.e., tests failed)
stably test || stably fix
```
Scheduled runs — toggle in the dashboard or in stably.yaml:
```yaml
# stably.yaml
schedules:
  nightly-regression:
    cron: "0 2 * * *"
    autofix: true
```
API — pass autofix: true when triggering a run:
```shell
curl -X POST https://api.stably.ai/v1/projects/{projectId}/runs \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"autofix": true}'
```
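The same request from TypeScript might look like this — a sketch built from the curl example above; `buildRunRequest` is a hypothetical helper, not part of an official client:

```typescript
// Hypothetical helper mirroring the curl example — not an official client.
function buildRunRequest(projectId: string, apiKey: string) {
  return {
    url: `https://api.stably.ai/v1/projects/${projectId}/runs`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ autofix: true }), // same payload as the curl -d flag
    },
  };
}

// Usage (Node 18+ has a global fetch):
// const { url, options } = buildRunRequest(projectId, apiKey);
// const res = await fetch(url, options);
```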
Project-level default — enable for all runs:
```yaml
# stably.yaml
autofix: true
```

Configuration

Fine-tune the fix agent in stably.yaml:
```yaml
# stably.yaml
agent:
  fix:
    maxBudgetUsd: 100
    maxTurnsPerIssue: 30
    maxParallelWorkers: 3
    skipAfterConsecutiveUnfixed: 3
    rules: |
      Prefer data-testid selectors over CSS selectors.
      Always add comments explaining selector changes.
```
| Option | Type | Description |
|---|---|---|
| `maxBudgetUsd` | number | Max spend per fix session (default: 50) |
| `maxTurnsPerIssue` | number | Max AI turns per issue (default: 50) |
| `maxParallelWorkers` | number | Parallel code workers (default: 2) |
| `skipAfterConsecutiveUnfixed` | number | Skip tests left unfixed for this many consecutive runs |
| `rules` | string | Custom instructions for the fix agent (selector preferences, coding conventions) |
Set maxBudgetUsd in stably.yaml rather than inline on each command — it’s easier to manage in one place.

Next Steps