The Stably CLI is a command-line tool for developers who prefer working in the terminal. It provides essential commands for authentication, test generation (stably create), test execution (stably test), and automated maintenance (stably fix).
If you’re using an AI coding agent (e.g., Claude Code, Cursor), install the Stably CLI skill for the best experience:
Running stably with no arguments opens a conversational interface where you can work with the AI agent to:
Create tests — Describe what you want to test and the agent generates Playwright tests
Fix failing tests — Paste error output or describe issues and get fixes applied
Explore your test suite — Ask questions about coverage, flaky tests, or test structure
Get guidance — Learn best practices or troubleshoot problems interactively
Example Session
```
$ stably

🤖 Stably Agent
Type your request or question. Press Ctrl+C to exit.

> Create a test for the checkout flow on our e-commerce site

Analyzing your application...

I'll create a test that covers:
 • Adding items to cart
 • Proceeding to checkout
 • Completing payment

✓ Created tests/checkout.spec.ts

> The login test is failing with a timeout error

Looking at the failure context...

The selector '.login-btn' no longer exists. I found a matching
element with '[data-testid="sign-in"]'.

Apply fix? (y/n): y

✓ Updated tests/auth.spec.ts

> What's our test coverage for the dashboard?

You have 12 tests covering the dashboard:
 • 4 tests for user settings
 • 3 tests for analytics widgets
 • 5 tests for navigation

Missing coverage: notification preferences, export functionality
```
The interactive agent is ideal when you want a flexible, back-and-forth workflow rather than running individual commands.
stably create is a headless, one-shot command designed for automation pipelines, background agents, and batch processing. It generates tests and exits — making it ideal for CI/CD workflows, shell scripts, and integration with AI coding agents.
```shell
stably create "login with valid and invalid credentials"
```
The prompt is optional. If no prompt is provided, Stably automatically analyzes:
Current PR — If running in a CI environment with PR context
Git diffs — Changes against origin/HEAD when running locally
This makes it easy to auto-generate tests for your recent changes without describing them manually.
For interactive, back-and-forth test creation, use the Interactive Agent instead. stably create is optimized for unattended execution.
```yaml
# .github/workflows/auto-tests.yml
name: Auto-generate Tests

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  generate-tests:
    # Skip PRs created by stably-bot to prevent infinite loops
    if: github.event.pull_request.user.login != 'stably-bot'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.head_ref }}

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Install browsers
        run: npx stably install

      - name: Generate tests
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          # App credentials for login during test generation
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: npx stably create # Automatically analyzes PR changes

      - name: Check for new tests
        id: check
        run: |
          # Checks for changes in common test directories
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="auto-tests/${{ github.head_ref }}-${{ github.run_number }}"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          git add tests/ e2e/ __tests__/
          git commit -m "test: auto-generate tests for PR #${{ github.event.pull_request.number }}"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests for #${{ github.event.pull_request.number }}" \
            --body "Auto-generated tests for the changes in #${{ github.event.pull_request.number }}" \
            --base "${{ github.head_ref }}"
```
GitHub Actions: Generate Tests from Staging Deployment
```yaml
# .github/workflows/staging-tests.yml
name: Generate Tests from Staging

on:
  deployment_status:
    # Triggers when any deployment status changes.
    # Example: if you have a "staging" deployment environment in GitHub,
    # this fires automatically when that deployment succeeds.
    # See: https://docs.github.com/en/actions/deployment/about-deployments
  workflow_dispatch:
    # Allow manual triggers for ad-hoc test generation

jobs:
  generate-staging-tests:
    runs-on: ubuntu-latest
    # Only run on successful staging deployments (skip production, preview, etc.)
    # Skip PRs created by stably-bot to prevent infinite loops
    if: >
      (github.event_name == 'workflow_dispatch' ||
       (github.event.deployment_status.state == 'success' &&
        github.event.deployment.environment == 'staging')) &&
      github.actor != 'stably-bot'
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          # Check out the exact commit that was deployed
          ref: ${{ github.event.deployment.sha || github.sha }}

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Install browsers
        run: npx stably install

      - name: Generate tests for staging
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          # App credentials for login during test generation
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: |
          npx stably create "Go to ${{ vars.STAGING_URL }} and create tests for any new features between this and the last staging deployment. Plan it out first."

      - name: Check for new tests
        id: check
        run: |
          # Checks for changes in common test directories
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="staging-tests/$(date +%Y%m%d-%H%M%S)"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          git add tests/ e2e/ __tests__/
          git commit -m "test: auto-generate tests from staging deployment"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests from staging deployment" \
            --body "Auto-generated tests based on new features detected on staging." \
            --base main
```
Background Agent Integration
```shell
# Called by AI coding agents (Cursor, Copilot, etc.)
# The agent can invoke this command to generate tests autonomously
stably create "PaymentService class with edge cases"

# Chain with test execution
stably create "user login and logout flow" && stably test
```
Avoid infinite PR loops. If a PR created by npx stably create triggers the same workflow, it can create an endless cycle of auto-generated PRs. Always add a condition that skips the workflow when the PR author is stably-bot.
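The guard is the job-level `if:` condition already shown in the auto-tests workflow above; in isolation it looks like this:

```yaml
jobs:
  generate-tests:
    # Skip runs for PRs opened by the bot itself to break the loop
    if: github.event.pull_request.user.login != 'stably-bot'
```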
```shell
# Run all tests
stably test

# Run with Playwright options
stably test --headed --project=chromium

# Run specific test files
stably test tests/login.spec.ts

# More Playwright options
stably test --workers=4 --retries=2 --grep="login"
```
stably fix is a headless command that automatically diagnoses test failures and applies AI-generated fixes. Designed for unattended execution, it’s ideal for self-healing CI pipelines, background maintenance agents, and automated test repair workflows.
```shell
# Auto-detects the last test run (local or CI)
stably fix

# With explicit run ID
stably fix <runId>
```
For interactive debugging, use the Interactive Agent instead. stably fix is optimized for automated, hands-off repair.
```shell
# Called by AI coding agents for autonomous test maintenance
# Chain test execution with automatic repair
stably test || stably fix

# Full pipeline: test → fix → verify
stably test || (stably fix && stably test)
```
| Option | Type | Required | Description |
| --- | --- | --- | --- |
| | | | Maximum agent turns allowed per issue (default: 50) |
| agent.fix.maxParallelWorkers | number | No | Maximum number of parallel workers when fixing multiple issues simultaneously (default: 2) |
| agent.fix.skipAfterConsecutiveUnfixed | number | No | Skip tests that have been unsuccessfully fixed this many consecutive times; saves AI costs on persistently broken tests. If omitted, no tests are skipped. |
| agent.fix.rules | string | No | Custom instructions appended to the agent's system prompt. Use YAML `\|` for multi-line rules. |
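As a sketch, the agent.fix options above map onto stably.yaml like this (assuming the dotted option names correspond to nested YAML keys; the numeric values and rule text are illustrative, not recommendations):

```yaml
# stably.yaml (illustrative values only)
agent:
  fix:
    maxParallelWorkers: 2          # fix up to 2 issues in parallel (the default)
    skipAfterConsecutiveUnfixed: 3 # stop retrying a test after 3 failed fix attempts
    rules: |
      Prefer data-testid selectors when repairing locators.
      Never weaken assertions; only adjust selectors and waits.
```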
Custom Rules for Test Generation (STABLY-CREATE.md)
You can customize how stably create generates tests by placing a STABLY-CREATE.md file in your project root. The file content is loaded and appended to the system prompt, giving you fine-grained control over test generation style, conventions, and patterns.

This mirrors the agent.fix.rules pattern in stably.yaml — but uses a standalone Markdown file so you can write longer, more detailed instructions with full formatting.
STABLY-CREATE.md

```markdown
# Test Generation Rules

- Always use `data-testid` attributes for element selectors
- Follow the Page Object Model pattern — put locators in separate page classes
- Include both positive and negative test cases for form validations
- Use `test.describe` blocks to group related scenarios
- Add `@smoke` or `@regression` tags via test annotations
```
Commit STABLY-CREATE.md to source control so your entire team shares the same test generation conventions. This is especially useful when stably create runs in CI pipelines or is invoked by background agents.
Beyond Stably configuration, you can pass your own variables to tests using --env and --env-file:
```shell
# Load from a named environment on Stably
stably test --env Staging

# Load from a local .env file
stably test --env-file .env.staging

# Combine both (remote overrides local)
stably test --env-file .env --env Production
```
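For local runs, --env-file reads a standard dotenv file of KEY=value pairs. A minimal sketch of .env.staging with hypothetical values; TEST_USERNAME and TEST_PASSWORD appear in the CI examples above, while BASE_URL is an assumed app-specific variable, not a Stably requirement:

```shell
# .env.staging (hypothetical values; variable names are app-specific)
BASE_URL=https://staging.example.com
TEST_USERNAME=qa@example.com
TEST_PASSWORD=example-password
```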
The Stably CLI automatically writes detailed debug logs to help troubleshoot issues. Logs are organized by date with descriptive session names for easy discovery.