Overview

Autofix automatically diagnoses test failures and generates code fixes after a scheduled run completes. When enabled, Stably’s AI analyzes why your tests are failing, categorizes each issue, and applies targeted repairs to your test code — all without manual intervention. This is ideal for teams that want their scheduled test suites to stay healthy overnight without waking anyone up.

How It Works

1. Tests run on schedule: your scheduled tests execute normally on Stably Cloud Runner.
2. Failures detected: if any tests fail, autofix kicks in automatically.
3. AI diagnoses each failure: Stably analyzes the failure context (screenshots, traces, DOM snapshots, and logs) to determine why each test failed.
4. Fixes are applied: the AI generates targeted code changes and applies them to your test files.
5. Review the results: view the diagnosis report and fixes in your dashboard. If your repo is connected to GitHub, fixes can be submitted as a pull request.

Enabling Autofix

From the Dashboard

When creating or editing a scheduled test run, toggle “Auto-fix failing tests” to enable autofix for that schedule.

In stably.yaml

Add autofix: true to any schedule definition:
stably.yaml
schedules:
  nightly-regression:
    cron: "0 2 * * *"
    stablyTestArgs: "--project regression"
    autofix: true
You can enable autofix on some schedules and leave it off on others — it’s configured per schedule.
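For example, a stably.yaml sketch with two schedules might enable autofix only for the nightly run (the schedule names and test args here are illustrative, not prescribed):

```yaml
schedules:
  nightly-regression:
    cron: "0 2 * * *"                      # 2 AM daily
    stablyTestArgs: "--project regression"
    autofix: true                          # failures from this schedule are auto-repaired
  hourly-smoke:
    cron: "0 * * * *"                      # every hour
    stablyTestArgs: "--project smoke"
    # no autofix key: failures are reported but left for manual review
```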

Diagnosis Categories

When autofix runs, it categorizes each failure into one of the following:
Category        What it means
Test Outdated   The test references selectors or flows that have changed in your app
Actual Bug      The test caught a real bug in your application
Unstable        The test fails intermittently due to timing or race conditions
UI Change       The UI changed intentionally and the test needs to reflect the new design
Miscellaneous   Other issues that don't fit the categories above
This categorization helps you understand at a glance whether failures need attention or have already been addressed.

Viewing Results

After autofix completes, results appear in your test runs table under the “Diagnosis & Fix” column:
  • Fixing — autofix is still running
  • Diagnosed — analysis is complete, with issue counts by category
  • Review fix — click to see the full report and code changes
  • No fix available — the issue was identified but couldn’t be automatically repaired
Click “Review fix” to open the detailed report, which includes:
  • Each failing test with its diagnosis
  • The code changes that were applied
  • A link to the generated pull request (if your repo is connected to GitHub)

Configuration

You can fine-tune how the fix agent behaves using the agent.fix section in stably.yaml:
stably.yaml
agent:
  fix:
    maxTurnsPerIssue: 30
    maxParallelWorkers: 3
    skipAfterConsecutiveUnfixed: 3
    rules: |
      Prefer data-testid selectors over CSS selectors.
      Always add comments explaining selector changes.
Option                       Description
maxTurnsPerIssue             Maximum AI turns per issue (default: 50)
maxParallelWorkers           Number of issues to fix in parallel (default: 2)
skipAfterConsecutiveUnfixed  Skip tests that have failed to fix this many times in a row; saves AI costs on persistently broken tests
rules                        Custom instructions for the fix agent (e.g., selector preferences, coding conventions)
See Agent Configuration for the full reference on stably.yaml agent settings.
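Putting the pieces together, a schedule with autofix enabled and a tuned fix agent can live in the same stably.yaml. This is a sketch combining the two snippets above; the specific values are illustrative:

```yaml
schedules:
  nightly-regression:
    cron: "0 2 * * *"
    stablyTestArgs: "--project regression"
    autofix: true

agent:
  fix:
    maxTurnsPerIssue: 30            # lower than the default of 50 to cap cost
    maxParallelWorkers: 3
    skipAfterConsecutiveUnfixed: 3  # stop retrying persistently broken tests
    rules: |
      Prefer data-testid selectors over CSS selectors.
      Always add comments explaining selector changes.
```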

Running Autofix Manually

You don’t have to wait for a scheduled run. You can run autofix anytime from the CLI:
# Auto-detects the last test run
stably fix

# With a specific run ID
stably fix <runId>

# Full pipeline: run tests, then fix failures
stably test || stably fix
See the stably fix CLI reference for details on run ID detection, CI integration patterns, and more.
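The same `stably test || stably fix` pattern can run in CI. Below is a hypothetical GitHub Actions sketch, not an official integration: the workflow layout, `npm ci` install step, and `STABLY_API_KEY` secret name are assumptions about your setup.

```yaml
# Hypothetical CI workflow; adapt the install step and secret name to your project.
name: nightly-tests
on:
  schedule:
    - cron: "0 2 * * *"
jobs:
  test-and-fix:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - name: Run tests, then autofix any failures
        # "||" runs "stably fix" only when "stably test" exits non-zero
        run: stably test || stably fix
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}  # assumed credential variable
```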

Next Steps