
Overview

Release testing in Stably runs your existing regression test suites against staging environments before production deployments. These suites include all of Stably’s AI-powered features, such as AI Auto-Heal, AI Steps, and intelligent test maintenance.

Key features:
  • Execute comprehensive regression test suites on staging environments
  • Leverage AI Auto-Heal to automatically adapt tests to UI changes
  • Use AI Steps for intelligent test interactions
  • Run tests manually on-demand or automatically in CI/CD pipelines
  • Get detailed test reports with failure analysis and screenshots
  • Block production deployments when critical tests fail
Perfect for: Validating staging environments with the same test suites you use for production monitoring, triggered as part of release validation.

Setup Process

Step 1: Create Release Test Suite

Set up a comprehensive regression test suite for release validation. Include tests that cover:
  • Critical User Journeys: Login, core workflows, checkout processes
  • Core Functionality: Features that could break with new releases
  • Integration Points: API endpoints, third-party services, external dependencies
  • Edge Cases: Error handling, boundary conditions, negative scenarios
Your release test suites automatically include Stably’s AI capabilities:
  • AI Auto-Heal: Tests automatically adapt to minor UI changes during releases
  • AI Steps: Intelligent interactions that handle dynamic content
  • AI Assertions: Smart validation that adapts to content changes
  • Automatic Maintenance: Tests stay current with application changes
Organize the suite for reliable runs:
  • Group related tests into logical test suites
  • Configure setup/teardown for test data preparation
  • Set appropriate timeouts for staging environment performance
  • Configure retry logic for handling flaky tests
Step 2: Configure Environment Variables

Set up environment-specific variables to target your staging environment:
  1. Navigate to Core Configuration > Test Data
  2. Create environment-specific variables:
    • BASE_URL: Your staging environment URL (e.g., https://staging.yourapp.com)
    • API_ENDPOINT: Staging API endpoints
    • TEST_CREDENTIALS: Staging-specific login credentials
    • DATABASE_URL: Staging database connection (if needed)
  3. Create separate environment scopes for staging vs production
  4. Mark confidential values (API keys, passwords) as sensitive
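As a pre-flight sanity check, a small shell helper can confirm the run is pointed at staging rather than production. This is a generic sketch, not a Stably feature; it only assumes the `BASE_URL` variable described above:

```shell
# Generic pre-run sanity check (not a Stably feature): confirm the
# configured BASE_URL actually points at a staging host.
assert_staging_url() {
  case "$1" in
    *staging*)
      echo "targeting staging: $1"
      return 0
      ;;
    *)
      echo "refusing to run: $1 does not look like staging" >&2
      return 1
      ;;
  esac
}

# Example: abort a pipeline step if BASE_URL points at production
# assert_staging_url "$BASE_URL" || exit 1
```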
Step 3: Configure Test Suite Settings

Optimize your test suite configuration for release testing:
  • Environment: Select staging environment scope
  • Parallelism: Set to “Auto” for faster execution on large regression suites
  • Individual Test Retries: Configure 1-2 retries for handling flaky tests
  • Test Timeout: Adjust for staging performance (default: 8 minutes)
  • Action Delay: Increase if staging is slower than production (default: 750ms)
Configure AI Auto-Heal behavior for release testing:
  • Healing Sensitivity: Set appropriate level for UI changes
  • Exploration Cycles: Limit cycles for faster execution
  • Auto-Update: Enable to keep tests current with releases
Set up alerts for test results:
  • Email Notifications: Alert team members of test results
  • Slack Integration: Post results to team channels
  • Suite-Specific Settings: Override project notifications for releases
  • Failure Alerts: Immediate notification for critical failures

Running Release Tests

Manual Execution

For on-demand release validation:
Step 1: Navigate to Test Suite

Go to your release test suite in the Stably dashboard
Step 2: Verify Configuration

Before running, confirm:
  • Environment is set to staging
  • All critical tests are included (not skipped)
  • Test data and credentials are current
  • AI Auto-Heal is properly configured
Step 3: Execute Test Run

Click Run Tests to start execution:
  • Monitor progress in the real-time dashboard
  • Watch AI Auto-Heal adapt to any UI changes
  • Review test results as they complete
  • Check failure details and screenshots
Step 4: Review Results

Once testing completes:
  • Review comprehensive test reports
  • Investigate any failures with AI-generated insights
  • Document findings for release decision
  • Share results with stakeholders

Automated CI/CD Execution

For pipeline integration and automated release validation:
Step 1: Set Up CI Integration

Configure your CI/CD pipeline to run release tests automatically:
  1. Get your API key from Settings > API Keys
  2. Add STABLY_API_KEY as a repository secret in GitHub
  3. From your test suite page, click Add to CI to get the workflow code
  4. Create your release workflow (e.g., .github/workflows/release.yaml)
Example release workflow:
name: Release Testing

on:
  push:
    branches:
      - release/*
      - main
  workflow_dispatch:

jobs:
  release-tests:
    name: Run Release Tests
    runs-on: ubuntu-latest

    steps:
      - name: Run Release Test Suite
        uses: stablyai/stably-runner-action@v4
        with:
          api-key: ${{ secrets.STABLY_API_KEY }}
          test-suite-id: your-release-suite-id
          url-replacement: |-
            https://your-production-url.com
            https://your-staging-url.com
For other CI/CD platforms, use the Stably API directly:
export STABLY_API_KEY=your-api-key
export TEST_SUITE_ID=your-release-suite-id

curl -X POST \
  "https://api.stably.ai/v1/testSuite/$TEST_SUITE_ID/run" \
  -H "accept: application/json" \
  -H "Authorization: Bearer $STABLY_API_KEY"
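For scripting around that call, the curl invocation can be wrapped so the pipeline step fails on a non-2xx response. This is a sketch: it uses only the endpoint and headers shown above, and the status-handling logic is generic shell, not part of the Stably API:

```shell
# Sketch: wrap the run-trigger call above so CI can gate on the HTTP status.
# STABLY_API_KEY and TEST_SUITE_ID are the same placeholders as above.
trigger_run() {
  curl -s -o /dev/null -w '%{http_code}' -X POST \
    "https://api.stably.ai/v1/testSuite/${TEST_SUITE_ID}/run" \
    -H "accept: application/json" \
    -H "Authorization: Bearer ${STABLY_API_KEY}"
}

is_success() {
  # Any 2xx response means the run was accepted
  [ "$1" -ge 200 ] && [ "$1" -lt 300 ]
}

# Usage in a pipeline step:
#   is_success "$(trigger_run)" || exit 1
```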
Step 2: Configure URL Replacement

Use URL replacement to test staging with production test suites:
  • Original URL: The production URL configured in your test suite
  • Replacement URL: Your staging environment URL
  • This allows you to reuse production test suites for staging validation
  • AI features work the same way on staging as production
  • Ensure staging environment mirrors production configuration
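Concretely, in the GitHub Action example above the mapping is a two-line `url-replacement` input, original URL first and replacement second:

```yaml
# Same input as in the workflow above. The first line is the original
# (production) URL configured in the suite; the second is the staging
# URL substituted at run time. Comments must stay outside the |- block,
# since lines inside it are taken literally.
with:
  url-replacement: |-
    https://your-production-url.com
    https://your-staging-url.com
```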
Step 3: Set Up Release Gates

Configure your pipeline to block releases on test failures:
  • Make the test step required for deployment
  • Configure branch protection rules
  • Set up approval processes for overriding failures
  • Document the release process for your team
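In GitHub Actions, one way to enforce the gate is to make the deploy job depend on the test job via `needs`, so deployment is skipped when the suite fails. A sketch extending the release workflow above; the deploy command itself is a placeholder:

```yaml
jobs:
  release-tests:
    name: Run Release Tests
    runs-on: ubuntu-latest
    steps:
      - name: Run Release Test Suite
        uses: stablyai/stably-runner-action@v4
        with:
          api-key: ${{ secrets.STABLY_API_KEY }}
          test-suite-id: your-release-suite-id

  deploy:
    name: Deploy to Production
    needs: release-tests  # runs only if release-tests succeeds
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        run: echo "your deployment command here"  # placeholder
```

Combined with a branch protection rule that requires the `Run Release Tests` check, this blocks merges and deployments on test failure.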

AI-Powered Release Testing

AI Auto-Heal During Releases

AI Auto-Heal automatically adapts your tests during release validation:
  • UI Changes: Tests automatically adapt to button text changes, layout updates
  • Dynamic Content: AI handles content that changes between staging and production
  • Element Updates: Automatically finds elements when selectors change
  • Workflow Changes: Adapts to modified user flows and navigation
Tune Auto-Heal behavior for release runs:
  • Healing Sensitivity: Set conservative healing to avoid false positives
  • Exploration Limits: Balance thorough testing with execution time
  • Manual Review: Configure when healing changes need approval
  • Rollback Options: Revert healing changes if they cause issues

AI Steps for Complex Scenarios

AI Steps provide intelligent interactions during release testing:
  • Dynamic Forms: Handle forms that change structure between releases
  • Complex Workflows: Navigate multi-step processes intelligently
  • Conditional Logic: Adapt to different paths based on application state
  • Error Handling: Intelligently handle unexpected states or errors

Release Testing Workflow

Trigger Conditions

Release tests can be triggered:

On-Demand Execution
  • Before major releases
  • After significant staging deployments
  • When investigating issues
  • For compliance or audit requirements