
Overview

The Stably CLI is a command-line tool for developers who prefer working in the terminal. It provides essential commands for authentication, test generation (stably create), test execution (stably test), and automated maintenance with stably fix.
If you’re using an AI coding agent (e.g., Claude Code, Cursor), install the Stably CLI skill for the best experience:
npx skills add https://github.com/stablyai/agent-skills --skill stably-cli

Interactive Agent

Launch the interactive agent with stably:
stably
This opens a conversational interface where you can work with the AI agent to:
  • Create tests — Describe what you want to test and the agent generates Playwright tests
  • Fix failing tests — Paste error output or describe issues and get fixes applied
  • Explore your test suite — Ask questions about coverage, flaky tests, or test structure
  • Get guidance — Learn best practices or troubleshoot problems interactively
$ stably

🤖 Stably Agent
   Type your request or question. Press Ctrl+C to exit.

> Create a test for the checkout flow on our e-commerce site

Analyzing your application...
I'll create a test that covers:
  • Adding items to cart
  • Proceeding to checkout
  • Completing payment

✓ Created tests/checkout.spec.ts

> The login test is failing with a timeout error

Looking at the failure context...
The selector '.login-btn' no longer exists. I found a matching
element with '[data-testid="sign-in"]'.

Apply fix? (y/n): y
✓ Updated tests/auth.spec.ts

> What's our test coverage for the dashboard?

You have 12 tests covering the dashboard:
 4 tests for user settings
 3 tests for analytics widgets
 5 tests for navigation

Missing coverage: notification preferences, export functionality
The interactive agent is ideal when you want a flexible, back-and-forth workflow rather than running individual commands.

Create Tests on Autopilot

stably create is a headless, one-shot command designed for automation pipelines, background agents, and batch processing. It generates tests and exits — making it ideal for CI/CD workflows, shell scripts, and integration with AI coding agents.
stably create "login with valid and invalid credentials"
The prompt is optional. If no prompt is provided, Stably automatically analyzes:
  1. Current PR — If running in a CI environment with PR context
  2. Git diffs — Changes against origin/HEAD when running locally
This makes it easy to auto-generate tests for your recent changes without describing them manually.
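To picture what the local analysis works from, here is a runnable sketch using a throwaway repo; in a real checkout the equivalent inspection is git diff --name-only origin/HEAD...HEAD:

```shell
# Sketch: the kind of changed-file list analyzed when no prompt is given.
# A throwaway repo stands in for your checkout here.
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "base"
echo 'export const login = () => {};' > auth.ts
git add auth.ts
git -c user.name=demo -c user.email=demo@example.com commit -q -m "feat: add login helper"

# The diff Stably would summarize into test scenarios
git diff --name-only HEAD~1 HEAD   # → auth.ts
```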
For interactive, back-and-forth test creation, use the Interactive Agent instead. stably create is optimized for unattended execution.

Use Cases

  • CI/CD pipelines — Auto-generate tests for new features in PR workflows
  • Background agents — Let AI coding assistants create tests autonomously
  • Batch processing — Script bulk test generation across multiple features
  • Scheduled jobs — Generate tests for new API endpoints on a cron schedule

Output Location

# Auto-generate tests from PR/git diffs (no prompt)
stably create
# → Analyzes changes and creates relevant tests

# With explicit prompt
stably create "login with valid credentials"
# → Creates tests/login.spec.ts

# Specify output directory
stably create "checkout flow" --output ./e2e/
# → Creates e2e/checkout-flow.spec.ts
If --output is not specified, Stably automatically detects the output directory:
  1. playwright.config.ts — Uses testDir if defined
  2. Auto-detect — First existing of tests/, e2e/, __tests__/, or test/
  3. Fallback — Current working directory
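The fallback order can be pictured with a small shell sketch (illustrative only, not Stably's actual implementation):

```shell
# Pick the first existing candidate directory, else fall back to the cwd.
cd "$(mktemp -d)"
mkdir e2e                      # simulate a repo that only has e2e/

out="."
for dir in tests e2e __tests__ test; do
  if [ -d "$dir" ]; then out="$dir"; break; fi
done
echo "$out"                    # → e2e
```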
The command prints created file paths, making it easy to parse in CI:
# Capture output paths
stably create "login" | grep "^- " | cut -c3-
$ stably create "checkout flow for guest users"

Analyzing application...
Generating tests for: checkout flow for guest users

Created files:
- /absolute/path/to/tests/checkout-guest-add-to-cart.spec.ts
- /absolute/path/to/tests/checkout-guest-payment.spec.ts
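The grep/cut filter shown earlier can be exercised against that transcript; the output is inlined below so the snippet runs without the CLI:

```shell
# Simulated `stably create` output, matching the transcript above
output='Created files:
- /absolute/path/to/tests/checkout-guest-add-to-cart.spec.ts
- /absolute/path/to/tests/checkout-guest-payment.spec.ts'

# Keep only lines starting with "- ", then strip that prefix
paths=$(printf '%s\n' "$output" | grep '^- ' | cut -c3-)
printf '%s\n' "$paths"
```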

Integration Patterns

# .github/workflows/auto-tests.yml
name: Auto-generate Tests

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  generate-tests:
    # Skip PRs created by stably-bot to prevent infinite loops
    if: github.event.pull_request.user.login != 'stably-bot'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.head_ref }}

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Install browsers
        run: npx stably install

      - name: Generate tests
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          # App credentials for login during test generation
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: npx stably create  # Automatically analyzes PR changes

      - name: Check for new tests
        id: check
        run: |
          # Checks for changes in common test directories
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="auto-tests/${{ github.head_ref }}-${{ github.run_number }}"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          git add tests/ e2e/ __tests__/
          git commit -m "test: auto-generate tests for PR #${{ github.event.pull_request.number }}"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests for #${{ github.event.pull_request.number }}" \
            --body "Auto-generated tests for the changes in #${{ github.event.pull_request.number }}" \
            --base "${{ github.head_ref }}"
# .github/workflows/staging-tests.yml
name: Generate Tests from Staging

on:
  deployment_status:
    # Triggers when any deployment status changes.
    # Example: if you have a "staging" deployment environment in GitHub,
    # this fires automatically when that deployment succeeds.
    # See: https://docs.github.com/en/actions/deployment/about-deployments
  workflow_dispatch:
    # Allow manual triggers for ad-hoc test generation

jobs:
  generate-staging-tests:
    runs-on: ubuntu-latest
    # Only run on successful staging deployments (skip production, preview, etc.)
    # Skip PRs created by stably-bot to prevent infinite loops
    if: >
      (github.event_name == 'workflow_dispatch' ||
       (github.event.deployment_status.state == 'success' &&
        github.event.deployment.environment == 'staging')) &&
      github.actor != 'stably-bot'
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          # Check out the exact commit that was deployed
          ref: ${{ github.event.deployment.sha || github.sha }}

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Install browsers
        run: npx stably install

      - name: Generate tests for staging
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          # App credentials for login during test generation
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: |
          npx stably create "Go to ${{ vars.STAGING_URL }} and create tests for any new features between this and the last staging deployment. Plan it out first."

      - name: Check for new tests
        id: check
        run: |
          # Checks for changes in common test directories
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="staging-tests/$(date +%Y%m%d-%H%M%S)"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          git add tests/ e2e/ __tests__/
          git commit -m "test: auto-generate tests from staging deployment"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests from staging deployment" \
            --body "Auto-generated tests based on new features detected on staging." \
            --base main
# Called by AI coding agents (Cursor, Copilot, etc.)
# The agent can invoke this command to generate tests autonomously

stably create "PaymentService class with edge cases"

# Chain with test execution
stably create "user login and logout flow" && stably test
Avoid infinite PR loops. If a PR created by npx stably create triggers the same workflow, it can create an endless cycle of auto-generated PRs. Always add a precondition to skip the workflow when the PR author is stably-bot:
jobs:
  generate-tests:
    if: github.event.pull_request.user.login != 'stably-bot'

Running Tests

There are two ways to run your Stably-powered Playwright tests:

Fix Tests on Autopilot

stably fix is a headless command that automatically diagnoses test failures and applies AI-generated fixes. Designed for unattended execution, it’s ideal for self-healing CI pipelines, background maintenance agents, and automated test repair workflows.
# Auto-detects the last test run (local or CI)
stably fix

# With explicit run ID
stably fix <runId>
For interactive debugging, use the Interactive Agent instead. stably fix is optimized for automated, hands-off repair.

Use Cases

  • Self-healing CI — Auto-fix flaky tests before they block deployments
  • Background agents — Let AI assistants maintain tests autonomously
  • Nightly maintenance — Scheduled jobs that repair broken tests overnight
  • PR workflows — Fix test failures and commit patches automatically

Run ID Detection

stably fix automatically detects the run ID using this fallback chain:
  1. Explicit argument — stably fix <runId>
  2. CI environment — detected from CI variables (e.g. GITHUB_RUN_ID)
  3. Last local run — read from .stably/last-run.json (written by stably test)
In most cases, just run stably test followed by stably fix — no run ID needed.

How It Works

The command analyzes failure context (screenshots, logs, DOM, traces) and applies fixes in a single execution:
  1. Run tests — Execute your test suite with stably test (the reporter captures run data)
  2. AI analysis — The AI analyzes the failure context automatically
  3. Apply fixes — Fixes are generated and applied to your test code
  4. Exit — The command completes; chain with verification or commit steps
$ stably fix run_abc123

Analyzing 3 failures...

checkout.spec.ts > complete purchase
  Issue: Selector '.checkout-button' not found
  Fix: Updated to '[data-testid="checkout-btn"]'
  ✓ Fixed

login.spec.ts > invalid credentials
  Issue: Error message assertion failed
  Fix: Updated expected text to match new design
  ✓ Fixed

Summary: 2 auto-fixed, 1 requires manual review

Integration Patterns

# .github/workflows/self-healing-tests.yml
name: Self-Healing Tests

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test-and-fix:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci && npx stably install

      - name: Run tests
        id: test
        continue-on-error: true
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
        run: npx stably test

      - name: Auto-fix failures
        if: steps.test.outcome == 'failure'
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
        run: npx stably fix

      - name: Commit fixes
        if: steps.test.outcome == 'failure'
        run: |
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git add tests/
          git commit -m "fix: auto-repair failing tests" || exit 0
          git push
# Called by AI coding agents for autonomous test maintenance
# Chain test execution with automatic repair

stably test || stably fix

# Full pipeline: test → fix → verify
stably test || (stably fix && stably test)
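For unattended loops you may want a bound on repair attempts. Below is a hedged sketch; run_tests and run_fix are local stubs standing in for stably test and stably fix so the sketch runs anywhere:

```shell
# Bounded test → fix → verify loop (stubs stand in for the real CLI calls).
run_tests() { [ -f .tests-pass ]; }   # stand-in for `stably test`
run_fix()   { touch .tests-pass; }    # stand-in for `stably fix`

cd "$(mktemp -d)"
attempts=0
until run_tests; do
  attempts=$((attempts + 1))
  if [ "$attempts" -gt 2 ]; then
    echo "still failing after $attempts attempts"
    break
  fi
  run_fix                             # replace with `stably fix` in practice
done
echo "fix attempts: $attempts"        # → fix attempts: 1
```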

Agent Configuration

Configure CLI agent behavior using stably.yaml in your repository root. This allows you to customize how agents like stably fix operate.
The stably.yaml file also supports scheduled test runs.
stably.yaml
agent:
  fix:
    maxTurnsPerIssue: 30               # Max turns per issue (default: 50)
    maxParallelWorkers: 3              # Max parallel workers for fixing issues (default: 2)
    skipAfterConsecutiveUnfixed: 3     # Skip tests unfixed 3+ consecutive runs
    rules: |                           # Custom instructions for the fix agent
      Prefer data-testid selectors over CSS selectors.
      Always add comments explaining selector changes.

Configuration Options

  • agent.fix.maxTurnsPerIssue (number, optional) — Maximum agent turns allowed per issue (default: 50)
  • agent.fix.maxParallelWorkers (number, optional) — Maximum number of parallel workers when fixing multiple issues simultaneously (default: 2)
  • agent.fix.skipAfterConsecutiveUnfixed (number, optional) — Skip tests that have gone unfixed this many consecutive runs. Saves AI costs on persistently broken tests. If omitted, no tests are skipped.
  • agent.fix.rules (string, optional) — Custom instructions appended to the agent's system prompt. Use YAML | for multi-line rules.

Custom Rules for Test Generation (STABLY-CREATE.md)

You can customize how stably create generates tests by placing a STABLY-CREATE.md file in your project root. The file content is loaded and appended to the system prompt, giving you fine-grained control over test generation style, conventions, and patterns. This mirrors the agent.fix.rules pattern in stably.yaml — but uses a standalone Markdown file so you can write longer, more detailed instructions with full formatting.
STABLY-CREATE.md
# Test Generation Rules

- Always use `data-testid` attributes for element selectors
- Follow the Page Object Model pattern — put locators in separate page classes
- Include both positive and negative test cases for form validations
- Use `test.describe` blocks to group related scenarios
- Add `@smoke` or `@regression` tags via test annotations
Commit STABLY-CREATE.md to source control so your entire team shares the same test generation conventions. This is especially useful when stably create runs in CI pipelines or is invoked by background agents.
  • stably fix custom rules — configured via agent.fix.rules in stably.yaml; scopes fix agent behavior
  • stably create custom rules — configured via STABLY-CREATE.md in the project root; scopes test generation behavior

Command Reference

A complete reference of all available Stably CLI commands.

Commands

Setup

  • stably init — Initialize Playwright and the Stably SDK in your project
  • stably install — Install browser dependencies
  • stably login — Authenticate via browser-based OAuth

Core Workflow

  • stably — Start an interactive agent session
  • stably create [prompt] — Generate tests from a prompt, PR context, or git diffs
  • stably test [options] — Run Playwright tests with the Stably reporter
  • stably fix [runId] — Auto-fix failing tests (auto-detects the run ID from the last test run or CI)

Maintenance & Utility

  • stably upgrade — Upgrade the Stably CLI to the latest version
  • stably logout — Clear stored credentials
  • stably whoami — Display current authentication status
  • stably help [command] — Show help for a specific command

Global Options

These options are available for all commands:
  • --help, -h — Display help information
  • --version — Display the CLI version number
  • --verbose, -v — Enable verbose output with debug information
  • --no-telemetry — Disable anonymous telemetry for this session
  • --env <name> — Load variables from a named environment stored in Stably
  • --env-file <path> — Load variables from a local .env file (can be specified multiple times)
  • -C, --cwd <path> — Change the working directory before running the command

Environment Variables

Configure Stably CLI behavior using environment variables:
  • STABLY_API_KEY (required) — API key for authentication (from Settings → API Keys)
  • STABLY_PROJECT_ID (required) — Project identifier (from app.stably.ai)
  • STABLY_BASE_URL (optional) — Custom API endpoint for enterprise deployments
  • STABLY_LOG_LEVEL (optional) — Console log level: error, warn, info, or debug (default: warn)
  • STABLY_DISABLE_TELEMETRY (optional) — Set to 1 to disable anonymous telemetry
  • DO_NOT_TRACK (optional) — Standard opt-out for telemetry (set to 1)
To disable telemetry, set any one of: STABLY_DISABLE_TELEMETRY=1, DO_NOT_TRACK=1, or use the --no-telemetry flag.
Setting environment variables:
export STABLY_API_KEY=stably_xxxxxxxxxxxx
export STABLY_PROJECT_ID=proj_xxxxxxxxxxxx
Add to ~/.bashrc, ~/.zshrc, or ~/.profile for persistence.

Test Environment Variables

Beyond Stably configuration, you can pass your own variables to tests using --env and --env-file:
# Load from a named environment on Stably
stably test --env Staging

# Load from a local .env file
stably test --env-file .env.staging

# Combine both (remote overrides local)
stably test --env-file .env --env Production
Variable precedence (highest priority wins):
  1. Stably internals (STABLY_API_KEY, STABLY_PROJECT_ID)
  2. --env — remote environment from Stably
  3. --env-file — local .env file(s)
  4. process.env — system/shell environment
See Environments for managing named environments on the Stably dashboard.
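The layering can be illustrated in plain shell; remote.env below is a hypothetical stand-in for a named Stably environment:

```shell
# Later (higher-priority) sources override earlier ones.
cd "$(mktemp -d)"
printf 'BASE_URL=http://localhost:3000\nDEBUG=1\n' > .env
printf 'BASE_URL=https://staging.example.com\n' > remote.env  # stand-in for --env

set -a            # export everything sourced below
. ./.env          # lower-priority layer (--env-file)
. ./remote.env    # higher-priority layer (--env) wins on conflicts
set +a

echo "$BASE_URL"  # → https://staging.example.com
echo "$DEBUG"     # → 1
```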

Exit Codes

Stably CLI uses standard exit codes for scripting and CI/CD integration:
  • 0 — Success: the command completed successfully
  • 1 — Failure: the command failed (test failures, errors, etc.)
  • 2 — Invalid usage: incorrect arguments or missing required options
Example usage in scripts:
# Run tests and handle exit codes
stably test
status=$?
if [ "$status" -eq 0 ]; then
  echo "All tests passed"
elif [ "$status" -eq 1 ]; then
  echo "Tests failed, attempting auto-fix..."
  stably fix
fi

# One-liner: run tests, fix on failure, re-run
stably test || (stably fix && stably test)

Debug Logging

The Stably CLI automatically writes detailed debug logs to help troubleshoot issues. Logs are organized by date with descriptive session names for easy discovery.

Log Location

Logs are stored in your system’s temp directory:
/tmp/stably-logs/
  2024-01-15/
    10-30-45-login.log
    10-31-02-init.log
    10-32-15-write-a-test-for-the-login-page.log
    10-45-12-fix-github-myorg_myrepo-123-1.log
Naming convention: HH-MM-SS-{session-name}.log
  • Named commands use the command name (e.g., login, init, test)
  • stably create uses the prompt text (sanitized, max 100 chars)
  • stably fix uses fix-{runId}
  • Interactive chat uses the first message
Logs in /tmp are automatically cleaned up by your operating system on reboot or via system cleanup policies.
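Because the HH-MM-SS prefix sorts lexically in time order, the newest log is simply the last name in sorted order. A runnable sketch (simulating the layout in a scratch directory; in practice, point it at /tmp/stably-logs):

```shell
# Simulate one day's log directory
logdir="$(mktemp -d)/2024-01-15"
mkdir -p "$logdir"
touch "$logdir/10-30-45-login.log" "$logdir/10-45-12-fix-run_abc123.log"

# Lexical sort of HH-MM-SS names equals chronological sort
latest=$(ls "$logdir" | sort | tail -n 1)
echo "$latest"   # → 10-45-12-fix-run_abc123.log
```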

Verbose Mode

Use --verbose (or -v) to see debug output in your terminal and display the log file path:
stably --verbose create "login test"
Output:
Debug log: /tmp/stably-logs/2024-01-15/10-30-45-login-test.log

debug Checking authentication
debug Authentication resolved { authType: 'oauth' }
debug Fetching system prompt { mode: 'single' }
...

Log Levels

You can also set the log level via environment variable:
STABLY_LOG_LEVEL=debug stably create "login test"
  • error — Unexpected errors and crashes
  • warn — Configuration issues, auth failures (default console output)
  • info — Normal operations (session start/end)
  • debug — Detailed debugging (API calls, state changes)

Sharing Logs with Support

When errors occur, the CLI automatically displays the log file path (no --verbose required):
Debug log written to:
  /tmp/stably-logs/2024-01-15/10-30-45-login-test.log
Share this file with support for assistance.
Attach this file when contacting support for faster resolution.
To see the log file path for successful runs, use --verbose. The path will be shown at startup and when you press Ctrl+C.

Troubleshooting

If you encounter authentication errors:
# Clear and re-authenticate
stably logout
stably login

# Or use API key directly
export STABLY_API_KEY=stably_...

# Check current status
stably whoami
If you encounter browser-related errors:
# Install browser dependencies
stably install

# Or use Playwright directly
npx playwright install

Next Steps