# Generate Playwright tests with AI — interactively, from prompts, PR diffs, or on full autopilot in CI
Stably gives you multiple ways to auto-create tests, from a fully interactive conversation to a headless one-shot command for CI. Every approach uses the same AI agent under the hood — you just choose how much control you want.
| Mode | Command | Best for |
|------|---------|----------|
| Interactive | `stably` | Exploring, iterating, learning — hands-on test creation |
| One-shot prompt | `stably create "..."` | Quick generation from a description |
| Self-planning | `stably create` (no prompt) | CI/CD — auto-generates tests from PR diffs or git changes |
| From test plan | `stably create "follow the test plan in tests/plan.md"` | Repeatable, comprehensive generation from a structured plan |
## Interactive Mode

The fastest way to start creating tests. Launch the interactive agent and have a conversation:

```shell
stably
```
This opens a conversational interface where you work with the AI agent in real time:

- **Describe what to test** — the agent generates Playwright tests
- **Iterate on the output** — ask for changes, add edge cases, refine selectors
- **Fix failing tests** — paste error output and get fixes applied
- **Explore your suite** — ask about coverage gaps, flaky tests, or test structure
- **Get guidance** — learn best practices or troubleshoot problems
### Example Session

```
$ stably

🤖 Stably Agent
Type your request or question. Press Ctrl+C to exit.

> Create a test for the checkout flow on our e-commerce site

Analyzing your application...

I'll create a test that covers:
  • Adding items to cart
  • Proceeding to checkout
  • Completing payment

✓ Created tests/checkout.spec.ts

> Add edge cases for expired credit cards and empty cart

Adding edge case tests...
✓ Updated tests/checkout.spec.ts

> The login test is failing with a timeout error

Looking at the failure context...

The selector '.login-btn' no longer exists. I found a matching
element with '[data-testid="sign-in"]'.

Apply fix? (y/n): y
✓ Updated tests/auth.spec.ts

> What's our test coverage for the dashboard?

You have 12 tests covering the dashboard:
  • 4 tests for user settings
  • 3 tests for analytics widgets
  • 5 tests for navigation

Missing coverage: notification preferences, export functionality
```
Interactive mode is ideal when you want a flexible, back-and-forth workflow — especially when you’re exploring a new feature, aren’t sure exactly what tests you need, or want to iterate quickly on the output.
## One-Shot Prompt: `stably create`

`stably create` is the headless counterpart to the interactive agent. It generates tests and exits — making it ideal for CI/CD workflows, shell scripts, and integration with AI coding agents.

```shell
stably create "login with valid and invalid credentials"
```
## Self-Planning Mode

When you run `stably create` without a prompt, Stably automatically analyzes your changes and decides what tests to write:

- **In CI with PR context** — analyzes the pull request diff
- **Locally** — analyzes git changes against `origin/HEAD`

```shell
# In CI — auto-detects PR changes and generates relevant tests
stably create

# Locally — analyzes your uncommitted/unpushed changes
stably create
```
This is the simplest way to add test generation to your CI pipeline — no prompt engineering required. The agent reads your code changes, understands what’s new, and creates targeted tests.
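As a minimal sketch, a pull-request workflow that runs self-planning generation could look like the following. The workflow name, trigger, and Node setup steps are illustrative assumptions; the `STABLY_API_KEY` and `STABLY_PROJECT_ID` secrets mirror the staging workflow shown later on this page. Treat this as a starting point, not an official template:

```yaml
# .github/workflows/pr-tests.yml — illustrative sketch
name: Generate Tests from PR Diff

on:
  pull_request:

jobs:
  generate-pr-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so the agent can diff against the base branch
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npx stably install
      # No prompt: self-planning mode reads the PR diff and decides what to test
      - run: npx stably create
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
```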
## From a Test Plan

Point the agent at a structured test plan for repeatable, comprehensive test generation:

```shell
# Follow a markdown test plan
stably create "Read the test plan in tests/plan.md and create all the tests"

# Follow prompt files in a directory
stably create "Read the .md files in /tests/prompts and create tests for each one"

# Target specific scenarios from a plan
stably create "From tests/plan.md, create only the P0 critical-path tests"
```
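The prompts above assume a plan file already exists. The format is up to you — the agent reads ordinary markdown — but a hypothetical `tests/plan.md` might look like this (the section names and priority labels are illustrative assumptions, not a required schema):

```markdown
# Test Plan: Checkout

## P0 — critical path
- Guest user can add an item to the cart and complete checkout
- Payment failure shows a clear error and preserves the cart contents

## P1 — secondary flows
- Discount code applies the correct total at checkout
- Empty cart shows a call-to-action instead of the checkout button
```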
## Example Prompts

Here are prompts that work well with `stably create`:

```shell
# Feature-focused
stably create "user registration with email verification"
stably create "shopping cart: add, remove, update quantity, apply discount"

# Comprehensive with guidance
stably create "test the settings page — cover profile editing, password change, notification preferences, and account deletion with confirmation"

# Exploratory — let the agent discover what to test
stably create "Go to https://staging.myapp.com and create tests for the main user flows"

# From deployment
stably create "Go to $STAGING_URL and create tests for any new features between this and the last deployment. Plan it out first."

# With specific patterns
stably create "Create Page Object Model tests for the dashboard with data-testid selectors"
```
## GitHub Actions: Generate Tests from Staging Deployment

```yaml
# .github/workflows/staging-tests.yml
name: Generate Tests from Staging

on:
  deployment_status:
  workflow_dispatch:

jobs:
  generate-staging-tests:
    runs-on: ubuntu-latest
    if: >
      (github.event_name == 'workflow_dispatch' ||
       (github.event.deployment_status.state == 'success' &&
        github.event.deployment.environment == 'staging')) &&
      github.actor != 'stably-bot'
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.deployment.sha || github.sha }}

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Install browsers
        run: npx stably install

      - name: Generate tests for staging
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: |
          npx stably create "Go to ${{ vars.STAGING_URL }} and create tests for any new features between this and the last staging deployment. Plan it out first."

      - name: Check for new tests
        id: check
        run: |
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="staging-tests/$(date +%Y%m%d-%H%M%S)"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          git add tests/ e2e/ __tests__/
          git commit -m "test: auto-generate tests from staging deployment"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests from staging deployment" \
            --body "Auto-generated tests based on new features detected on staging." \
            --base main
```
## Background Agent Integration

```shell
# Called by AI coding agents (Cursor, Copilot, Claude Code, etc.)
stably create "PaymentService class with edge cases"

# Chain with test execution
stably create "user login and logout flow" && stably test
```
**Avoid infinite PR loops.** If the pull request opened by a generation workflow triggers that same workflow again, you can end up in an endless generate-and-PR cycle. Always add a precondition that skips the run when the PR author is `stably-bot`:
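The staging workflow above already includes this guard as part of its `if:` expression. As a standalone fragment, the minimal form looks like this (assuming your workflow commits and opens PRs as `stably-bot` — the name must match whatever `git config user.name` your workflow uses):

```yaml
jobs:
  generate-staging-tests:
    runs-on: ubuntu-latest
    # Skip runs triggered by the bot's own PRs to prevent infinite loops
    if: github.actor != 'stably-bot'
```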
## Live Automation View

When `stably create` runs, it automatically creates an automation — a real-time view of the agent’s progress visible on the Stably web dashboard. This is especially useful in CI pipelines, Docker containers, and other non-interactive environments where you can’t see the terminal.

From the dashboard you can:

- **Watch progress live** — see the current phase, activity log, and files being created
- **Send messages to the agent** — provide guidance or additional context while the agent works, even in CI
## Custom Rules: STABLY-CREATE.md

Control how the AI agent generates tests using `STABLY-CREATE.md` — a markdown file in your project root with test-generation-specific rules.

**STABLY-CREATE.md**

```markdown
# Test Generation Rules

- Always use `data-testid` attributes for element selectors
- Follow the Page Object Model pattern
- Include both positive and negative test cases
- Use `test.describe` blocks to group related scenarios
- Add `@smoke` annotation to critical-path tests
```