Stably gives you multiple ways to auto-create tests, from a fully interactive conversation to a headless one-shot command for CI. Every approach uses the same AI agent under the hood — you just choose how much control you want.
| Mode | Command | Best for |
|------|---------|----------|
| Interactive | `stably` | Exploring, iterating, learning — hands-on test creation |
| One-shot prompt | `stably create "..."` | Quick generation from a description |
| Self-planning | `stably create` (no prompt) | CI/CD — auto-generates tests from PR diffs or git changes |
| From test plan | `stably create "follow the test plan in tests/plan.md"` | Structured, repeatable test generation |

Interactive Mode

The fastest way to start creating tests. Launch the interactive agent and have a conversation:
stably
This opens a conversational interface where you work with the AI agent in real-time:
  • Describe what to test and the agent generates Playwright tests
  • Iterate on the output — ask for changes, add edge cases, refine selectors
  • Fix failing tests — paste error output and get fixes applied
  • Explore your suite — ask about coverage gaps, flaky tests, or test structure
  • Get guidance — learn best practices or troubleshoot problems
$ stably

🤖 Stably Agent
   Type your request or question. Press Ctrl+C to exit.

> Create a test for the checkout flow on our e-commerce site

Analyzing your application...
I'll create a test that covers:
  • Adding items to cart
  • Proceeding to checkout
  • Completing payment

✓ Created tests/checkout.spec.ts

> Add edge cases for expired credit cards and empty cart

Adding edge case tests...
✓ Updated tests/checkout.spec.ts

> The login test is failing with a timeout error

Looking at the failure context...
The selector '.login-btn' no longer exists. I found a matching
element with '[data-testid="sign-in"]'.

Apply fix? (y/n): y
✓ Updated tests/auth.spec.ts

> What's our test coverage for the dashboard?

You have 12 tests covering the dashboard:
 4 tests for user settings
 3 tests for analytics widgets
 5 tests for navigation

Missing coverage: notification preferences, export functionality
Interactive mode is ideal when you want a flexible, back-and-forth workflow — especially when you’re exploring a new feature, aren’t sure exactly what tests you need, or want to iterate quickly on the output.

stably create — Headless Test Generation

stably create is the headless counterpart to the interactive agent. It generates tests and exits — making it ideal for CI/CD workflows, shell scripts, and integration with AI coding agents.
stably create "login with valid and invalid credentials"

Self-Planning (No Prompt)

When you run stably create without a prompt, Stably automatically analyzes your changes and decides what tests to write:
  1. In CI with PR context — analyzes the pull request diff
  2. Locally — analyzes git changes against origin/HEAD
# In CI — auto-detects PR changes and generates relevant tests
stably create

# Locally — analyzes your uncommitted/unpushed changes
stably create
This is the simplest way to add test generation to your CI pipeline — no prompt engineering required. The agent reads your code changes, understands what’s new, and creates targeted tests.

From a Prompt

Pass a description of what you want to test:
# Simple feature description
stably create "login with valid and invalid credentials"

# Multi-scenario request
stably create "checkout flow: guest checkout, saved payment method, and coupon codes"

# Specify test patterns
stably create "CRUD operations for the /api/users endpoint with error handling"

From a Test Plan

Point the agent at a structured test plan for repeatable, comprehensive test generation:
# Follow a markdown test plan
stably create "Read the test plan in tests/plan.md and create all the tests"

# Follow prompt files in a directory
stably create "Read the .md files in /tests/prompts and create tests for each one"

# Target specific scenarios from a plan
stably create "From tests/plan.md, create only the P0 critical-path tests"
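If you don't have a plan file yet, a minimal one is enough to get started. The structure below is illustrative, not a required schema; the agent reads plain markdown, so descriptive headings and bullets are generally enough:

```shell
# Scaffold a minimal plan file (structure is illustrative, not a required schema)
mkdir -p tests
cat > tests/plan.md <<'EOF'
# Checkout Test Plan

## P0 (critical path)
- Guest checkout with a valid credit card
- Checkout is blocked with an expired card

## P1
- Apply and remove a coupon code
EOF

# Then point the agent at it:
# stably create "From tests/plan.md, create only the P0 critical-path tests"
```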

Example Prompts

Here are prompts that work well with stably create:
# Feature-focused
stably create "user registration with email verification"
stably create "shopping cart: add, remove, update quantity, apply discount"

# Comprehensive with guidance
stably create "test the settings page — cover profile editing, password change, notification preferences, and account deletion with confirmation"

# Exploratory — let the agent discover what to test
stably create "Go to https://staging.myapp.com and create tests for the main user flows"

# From deployment
stably create "Go to $STAGING_URL and create tests for any new features between this and the last deployment. Plan it out first."

# With specific patterns
stably create "Create Page Object Model tests for the dashboard with data-testid selectors"

Output Location

# Auto-detect output directory
stably create "login test"
# → Creates tests/login.spec.ts

# Specify output directory
stably create "checkout flow" --output ./e2e/
# → Creates e2e/checkout-flow.spec.ts
If --output is not specified, Stably automatically detects the output directory:
  1. playwright.config.ts — Uses testDir if defined
  2. Auto-detect — First existing of tests/, e2e/, __tests__/, or test/
  3. Fallback — Current working directory
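The fallback order in steps 2 and 3 can be sketched as a shell loop. This mirrors the documented behavior, not the CLI's actual implementation (simulated here in a temp workspace containing only an e2e/ folder):

```shell
# Pick the first existing conventional test directory, else fall back to cwd.
workdir=$(mktemp -d)
mkdir -p "$workdir/e2e"   # simulate a project with only an e2e/ folder

out=""
for d in tests e2e __tests__ test; do
  if [ -d "$workdir/$d" ]; then
    out="$d"
    break
  fi
done
out="${out:-.}"
echo "$out"   # → e2e
```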
The command prints created file paths, making it easy to parse in CI:
# Capture output paths
stably create "login" | grep "^- " | cut -c3-
$ stably create "checkout flow for guest users"

Analyzing application...
Generating tests for: checkout flow for guest users

Created files:
- /absolute/path/to/tests/checkout-guest-add-to-cart.spec.ts
- /absolute/path/to/tests/checkout-guest-payment.spec.ts
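Because the created paths are printed one per line with a `- ` prefix, a CI step can capture them and hand them straight to Playwright. A sketch, assuming the output format shown above (simulated here so the parsing is visible):

```shell
# Simulated `stably create` output (format taken from the example above)
output='Created files:
- /tmp/demo/tests/checkout-guest-add-to-cart.spec.ts
- /tmp/demo/tests/checkout-guest-payment.spec.ts'

# Extract just the file paths
files=$(printf '%s\n' "$output" | grep '^- ' | cut -c3-)
echo "$files"

# In a real pipeline you would pipe the live output instead:
# stably create "checkout flow" | grep '^- ' | cut -c3- | xargs npx playwright test
```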

Use Cases

| Scenario | Example |
|----------|---------|
| CI/CD pipelines | Auto-generate tests for new features in PR workflows |
| Background agents | Let AI coding assistants create tests autonomously |
| Batch processing | Script bulk test generation across multiple features |
| Scheduled jobs | Generate tests for new API endpoints on a cron schedule |
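For the scheduled-job case, a plain cron entry is enough. This is a hypothetical crontab line, with the project path and log location as placeholders:

```
# Nightly at 02:00: generate tests for new API endpoints
0 2 * * * cd /srv/myapp && npx stably create "tests for any new API endpoints" >> /var/log/stably-create.log 2>&1
```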

CI/CD Integration

stably create is designed for unattended execution. Here are common integration patterns.
# .github/workflows/auto-tests.yml
name: Auto-generate Tests

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  generate-tests:
    # Skip PRs created by stably-bot to prevent infinite loops
    if: github.event.pull_request.user.login != 'stably-bot'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.head_ref }}

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Install browsers
        run: npx stably install

      - name: Generate tests
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: npx stably create  # Automatically analyzes PR changes

      - name: Check for new tests
        id: check
        run: |
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__/ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="auto-tests/${{ github.head_ref }}-${{ github.run_number }}"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          # git add fails on a pathspec that matches nothing, so only add dirs that exist
          for d in tests e2e __tests__; do
            if [ -d "$d" ]; then git add "$d"; fi
          done
          git commit -m "test: auto-generate tests for PR #${{ github.event.pull_request.number }}"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests for #${{ github.event.pull_request.number }}" \
            --body "Auto-generated tests for the changes in #${{ github.event.pull_request.number }}" \
            --base "${{ github.head_ref }}"
# .github/workflows/staging-tests.yml
name: Generate Tests from Staging

on:
  deployment_status:
  workflow_dispatch:

jobs:
  generate-staging-tests:
    runs-on: ubuntu-latest
    if: >
      (github.event_name == 'workflow_dispatch' ||
       (github.event.deployment_status.state == 'success' &&
        github.event.deployment.environment == 'staging')) &&
      github.actor != 'stably-bot'
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.deployment.sha || github.sha }}

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Install browsers
        run: npx stably install

      - name: Generate tests for staging
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: |
          npx stably create "Go to ${{ vars.STAGING_URL }} and create tests for any new features between this and the last staging deployment. Plan it out first."

      - name: Check for new tests
        id: check
        run: |
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__/ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="staging-tests/$(date +%Y%m%d-%H%M%S)"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          # git add fails on a pathspec that matches nothing, so only add dirs that exist
          for d in tests e2e __tests__; do
            if [ -d "$d" ]; then git add "$d"; fi
          done
          git commit -m "test: auto-generate tests from staging deployment"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests from staging deployment" \
            --body "Auto-generated tests based on new features detected on staging." \
            --base main
# Called by AI coding agents (Cursor, Copilot, Claude Code, etc.)
stably create "PaymentService class with edge cases"

# Chain with test execution
stably create "user login and logout flow" && stably test
Avoid infinite PR loops. If a PR created by npx stably create triggers the same workflow, it can create an endless cycle. Always add a precondition to skip the workflow when the PR author is stably-bot:
jobs:
  generate-tests:
    if: github.event.pull_request.user.login != 'stably-bot'

Monitoring Create Sessions

When stably create runs, it automatically creates an automation — a real-time view of the agent’s progress visible on the Stably web dashboard. This is especially useful in CI pipelines, Docker containers, and other non-interactive environments where you can’t see the terminal. From the dashboard you can:
  • Watch progress live — see the current phase, activity log, and files being created
  • Send messages to the agent — provide guidance or additional context while the agent works, even in CI
Create automations track these phases: initializing → generating → testing → complete
Automation creation is best-effort and non-blocking. If the connection fails, the CLI continues normally — your commands are never interrupted.

Customizing Test Generation

Control how the AI agent generates tests using STABLY-CREATE.md — a markdown file in your project root with test-generation-specific rules.
STABLY-CREATE.md
# Test Generation Rules

- Always use `data-testid` attributes for element selectors
- Follow the Page Object Model pattern
- Include both positive and negative test cases
- Use `test.describe` blocks to group related scenarios
- Add `@smoke` annotation to critical-path tests
See the full STABLY-CREATE.md reference for examples and best practices.

Next Steps