When Playwright tests fail in your suite, Stably’s Test Maintenance Agent automatically performs deep root-cause analysis to identify whether the failure is a real product bug, a test that needs updating, or infrastructure/flake issues. The agent scans execution traces, console logs, network activity, and your source repository to generate actionable repair prompts you can paste directly into your coding agent (Cursor, Claude Code, GitHub Copilot, etc.).
Suite Triage Report showing automated test failure analysis

How Test Maintenance Works

When test failures occur, the maintenance agent provides comprehensive analysis and repair guidance:

1. Trigger Triage from Dashboard

Navigate to your test run results in the Stably Dashboard and locate any failed test execution. Click the “Triage” button to initiate the analysis. The agent will automatically:
  • 📊 Analyze Playwright traces and execution timeline
  • 🔍 Review console logs and network requests
  • 🧬 Cross-reference your source repository
  • 🤖 Identify root cause category (bug vs. test vs. infrastructure)
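The quality of triage depends on rich run artifacts. A minimal sketch of a Playwright configuration that records the traces, screenshots, and videos the agent analyzes — assuming a standard Playwright Test setup; the option values here are illustrative, not requirements:

```typescript
// playwright.config.ts — illustrative values, adjust to your suite
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    trace: 'on-first-retry',        // capture a full trace when a test retries
    screenshot: 'only-on-failure',  // screenshot the page for failed tests
    video: 'retain-on-failure',     // keep video only when a test fails
  },
});
```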

2. Review Automated Analysis

The triage report provides:

Summary — High-level overview of all failures, grouped by root cause

Root Cause Classification:
  • 🐞 Product Bug — Actual application defect requiring code fix
  • 🧪 Test Needs Update — Test logic or locators require maintenance
  • ⚙️ Infrastructure/Flake — Environment, timing, or network issues
Recommended Fix — Specific technical guidance for each failure

Impacted Tests — List of all tests affected by the same root cause

3. Copy Prompt for Your Coding Agent

Each diagnosis includes a generated AI repair prompt optimized for coding agents. Simply click “Copy Prompt” and paste it into:
  • Cursor — for inline code editing and test repair
  • Claude Code — for conversational test maintenance
  • GitHub Copilot — for IDE-integrated suggestions
  • Any LLM-powered coding assistant
The prompt contains, for example:

Root cause: selector 'button[data-id="y7pacvlgm"]' was removed
Likely caused by a UI refactor in src/components/Button.tsx
Suggested fix: replace with getByRole('button', { name: 'Start' })

Prompt for your coding agent:
"Update login.spec.ts to use accessible role locators instead of data-id."

Your coding agent will understand the context and generate the appropriate test code changes.

Types of Failures Detected

Test-Level Issues

The agent identifies common test maintenance scenarios:

Fixed Timeout Misuse — Tests using page.waitForTimeout(5000) instead of proper wait conditions, causing race conditions and flakiness
// ❌ Before (detected by agent)
await page.waitForTimeout(5000);

// ✅ Suggested fix
await page.waitForSelector('button[data-testid="submit"]');
Brittle Locators — Selectors using dynamic IDs or implementation details that break during refactors
// ❌ Before
await page.click('button[data-id="y7pacvlgm"]');

// ✅ Suggested fix
await page.getByRole('button', { name: 'Start' }).click();
Missing Wait Conditions — Actions performed before elements are ready in the DOM

Incorrect Assertions — Tests checking for specific values that legitimately changed
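The last two scenarios follow the same before/after pattern. A hedged sketch of a missing-wait fix, assuming a Playwright test context — the dialog and button names are hypothetical:

```typescript
// ❌ Before — acts before the confirmation dialog is ready in the DOM
await page.click('text=Confirm');

// ✅ Suggested fix — a web-first assertion waits for the element first
await expect(page.getByRole('dialog')).toBeVisible();
await page.getByRole('button', { name: 'Confirm' }).click();
```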

Product Bugs

Real application defects identified by the agent:
  • API Failures — Backend errors, timeouts, or unexpected responses
  • JavaScript Errors — Console exceptions from application code
  • Navigation Issues — Broken links, incorrect redirects, or routing problems
  • Visual Regressions — Layout shifts, missing elements, or styling problems
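JavaScript errors in particular can be surfaced directly inside a test, so the resulting failure is classified as a product bug rather than a test issue. A minimal sketch, assuming a Playwright test context — the route and heading names are hypothetical:

```typescript
import { test, expect } from '@playwright/test';

test('checkout page loads without uncaught exceptions', async ({ page }) => {
  // Collect uncaught exceptions thrown by application code
  const pageErrors: Error[] = [];
  page.on('pageerror', (err) => pageErrors.push(err));

  await page.goto('/checkout'); // hypothetical route
  await expect(page.getByRole('heading', { name: 'Checkout' })).toBeVisible();

  // Any collected error points at an application defect, not the test
  expect(pageErrors).toHaveLength(0);
});
```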

Infrastructure Issues

Environment and timing problems:
  • Network Timeouts — External API calls or CDN failures
  • Resource Loading — Assets failing to load due to CDN/cache issues
  • Flaky Selectors — Elements appearing inconsistently due to race conditions
  • Environment Configuration — Test data, auth tokens, or setup problems
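Some transient infrastructure failures can be mitigated at the test level by retrying a network-dependent step with exponential backoff. A minimal sketch in TypeScript — the withRetry helper is hypothetical, not a Stably or Playwright API:

```typescript
// Hypothetical helper: retry a flaky async step with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off 100ms, 200ms, 400ms, ... before the next attempt
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt),
      );
    }
  }
  throw lastError;
}
```

Note that retries paper over flakiness rather than fix it; the triage report's Environment Configuration findings usually point at the underlying cause.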

Best Practices

Leverage AI Repair Prompts

Instead of manually debugging failures:
  1. Trigger triage after each test run with failures
  2. Review the automated analysis to understand failure categories
  3. Copy the AI-generated prompt for test-related failures
  4. Paste into your coding agent and review suggested changes
  5. Apply fixes and re-run tests to verify

Integrate with Your Workflow

In CI/CD:
  • Set up alerts for test failures (see Alerting)
  • Review triage reports before investigating manually
  • Use prompts to fix tests in feature branches before merging
In Development:
  • Run triage on local test failures
  • Generate repair prompts for immediate fixes
  • Build muscle memory for common failure patterns

Continuous Improvement

Use triage insights to improve test quality over time:
  • Track common failure patterns across your suite
  • Refactor tests with recurring issues
  • Update test authoring guidelines based on agent findings
  • Share AI repair prompts with your team

The Test Maintenance Agent works seamlessly with other Stably capabilities.

Next Steps