Why Use AI Assistants for Test Creation?
- Faster Development: Generate complete test suites in minutes instead of hours
- Best Practices Built-in: AI automatically applies Stably SDK patterns like .describe() and AI assertions
- Context-Aware: Playwright MCP gives AI direct access to your browser state and page structure
- Reduced Errors: AI writes tests that leverage auto-heal from day one
Prerequisites
1. Install Stably SDK
Follow the Playwright AI Tests guide to install and configure the Stably SDK in your project.
2. Choose Your AI Assistant
Install one of these AI coding assistants:
- Cursor — AI-first code editor with deep IDE integration
- Claude Code — Standalone AI assistant with MCP support
3. Configure Playwright MCP
Playwright MCP allows the AI to interact with browsers directly, inspect page state, and generate accurate selectors.
For Cursor, add the server to your Cursor settings (~/.cursor/mcp.json or via Settings → MCP):
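A typical configuration, based on the publicly documented @playwright/mcp package (a sketch; check the Playwright MCP README for current options):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

For Claude Code, the same server can be registered from the CLI (for example, claude mcp add playwright npx @playwright/mcp@latest).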
Set Up AI Rules for Stably SDK
AI coding assistants work best when they understand the specific patterns and capabilities of your testing framework. Configure your assistant with Stably SDK rules:

1. Copy the Stably SDK AI Rules
2. Add Rules to Your AI Assistant
For Cursor, add the AI rules through Cursor’s Rules feature:
- Open Cursor Settings → Rules (or press Cmd/Ctrl + Shift + J)
- Create a new stably-sdk-rules.mdc rule file
- Paste the AI rules content
- Configure when to apply the rule: set appropriate file globs (e.g., **/*.spec.ts, **/tests/**/*.ts) and choose whether to always apply or apply based on file patterns
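A minimal rule-file header might look like this (a sketch; the frontmatter fields follow Cursor’s .mdc rule format):

```
---
description: Stably SDK test-generation patterns
globs: **/*.spec.ts,**/tests/**/*.ts
alwaysApply: false
---
(paste the Stably SDK AI rules content here)
```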
Cursor rules support project-specific and global configurations. You can create multiple rule files and control their scope using globs and the alwaysApply setting. For Claude Code, add the rules to a claude.md file in your project root instead.

After adding rules to your AI assistant, restart it or start a new conversation to ensure the rules are loaded.
3. Verify Configuration
Test that your AI assistant understands Stably SDK by asking:
“Generate a test that uses Stably SDK’s AI assertion to verify a dashboard page”
The AI should generate code using toMatchScreenshotPrompt() instead of basic Playwright assertions. For example:
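A quick sanity check of the expected shape (the import path is hypothetical; the matcher API is assumed from the patterns in this guide):

```typescript
import { test, expect } from '@stablyhq/sdk'; // hypothetical import path

test('dashboard sanity check', async ({ page }) => {
  await page.goto('https://app.example.com/dashboard');
  // Expected: an AI assertion for the dynamic dashboard content...
  await expect(page).toMatchScreenshotPrompt('Dashboard shows a sidebar and a metrics overview');
  // ...rather than only basic assertions such as:
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```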
Creating Tests with AI
Basic Workflow
1. Navigate to the Page
Use Playwright MCP to open the page you want to test. Ask the AI:
“Open a browser and navigate to https://app.example.com/dashboard”
The AI will use Playwright MCP to launch a browser and navigate to the page.
2. Explore the Page
Ask the AI to inspect elements and understand the page structure:
“What elements are visible on this page? Show me the main navigation and action buttons”
The AI uses MCP to capture page snapshots and identify interactive elements.
3. Generate Test Code
Describe the user flow you want to test:
“Generate a Stably SDK test that:
- Logs into the app with test credentials
- Clicks the ‘Create Project’ button
- Fills in the project form
- Uses an AI assertion to verify the success message appears”
The AI generates production-ready code using Stably SDK patterns.
4. Refine and Iterate
Review the generated test and refine as needed:
“Add error handling for network timeouts and use .describe() on the submit button locator”
The AI updates the test with improvements.
Example: Generated Test
Here’s an example of a test generated by an AI assistant with Stably SDK rules:
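The following is a representative sketch, not verbatim SDK output: the import path is hypothetical, and the .describe() and toMatchScreenshotPrompt() signatures are assumed from the patterns described in this guide.

```typescript
import { test, expect } from '@stablyhq/sdk'; // hypothetical import path

test('user can create a project', async ({ page }) => {
  // Log in with test credentials (kept out of source via env vars)
  await page.goto('https://app.example.com/login');
  await page.getByLabel('Email').fill(process.env.TEST_EMAIL!);
  await page.getByLabel('Password').fill(process.env.TEST_PASSWORD!);
  await page.getByRole('button', { name: 'Sign in' }).click();
  await page.waitForURL('**/dashboard');

  // .describe() gives auto-heal the context to relocate the element if the DOM changes
  await page
    .getByRole('button', { name: 'Create Project' })
    .describe('Create Project button in the dashboard header')
    .click();

  // Fill in the project form
  await page.getByLabel('Project name').fill('Quarterly Report');
  await page.getByRole('button', { name: 'Submit' }).click();

  // AI assertion for the dynamic success message
  await expect(page).toMatchScreenshotPrompt(
    'A success message confirms the new project was created'
  );
});
```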
The AI automatically:
- Uses .describe() on critical locators for auto-heal
- Applies toMatchScreenshotPrompt() for dynamic UI validation
- Includes proper waits and navigation checks
- Follows Playwright best practices
Advanced Use Cases
Multi-Step User Flows
Generate complex, multi-page flows by describing the complete journey:
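An illustrative prompt (the product and flow here are hypothetical):
“Generate a Stably SDK test for the complete onboarding journey: sign up with a new account, complete the profile wizard, invite a teammate, and use an AI assertion to verify the dashboard welcome state”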
Data-Driven Tests
Generate tests that use extracted data for validation:
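A minimal sketch, assuming page.extract() accepts a natural-language description and returns the extracted value (import path and URL hypothetical; exact signature per the Stably SDK docs):

```typescript
import { test, expect } from '@stablyhq/sdk'; // hypothetical import path

test('order total carries through to confirmation', async ({ page }) => {
  await page.goto('https://shop.example.com/checkout'); // hypothetical URL

  // AI extraction: pull a computed value off the page for later validation
  const orderTotal = await page.extract('the order total shown in the checkout summary');

  await page.getByRole('button', { name: 'Place order' }).click();

  // The confirmation page should display the same total
  await expect(page.getByTestId('confirmation-total')).toHaveText(String(orderTotal));
});
```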
Visual Regression Testing
Generate comprehensive visual tests with AI assertions:
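A sketch of a locator-scoped AI assertion (import path and URL hypothetical; APIs assumed from the patterns in this guide):

```typescript
import { test, expect } from '@stablyhq/sdk'; // hypothetical import path

test('navigation menu renders correctly', async ({ page }) => {
  await page.goto('https://app.example.com'); // hypothetical URL

  // Scope the AI assertion with a locator so each check covers one component
  await expect(
    page.getByRole('navigation').describe('primary site navigation')
  ).toMatchScreenshotPrompt(
    'A horizontal navigation bar with the logo on the left and menu items on the right'
  );
});
```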
Best Practices

Be Specific in Your Prompts
The more detailed your description, the better the generated test. Include:
- Exact labels and button text
- Expected outcomes and error states
- Data formats and validation rules
- Whether to use AI assertions vs. standard assertions
Leverage MCP for Selector Discovery
Instead of manually finding selectors, ask the AI:
- “What’s the best selector for the submit button on this form?”
- “Show me all clickable elements in the header”
- “Find the selector for the error message container”
Use AI Assertions Strategically
Guide the AI on when to use different assertion types:
- Standard assertions (toBeVisible(), toHaveText()) for stable, predictable elements
- AI assertions (toMatchScreenshotPrompt()) for dynamic content, personalized UIs, or complex layouts
- AI extraction (page.extract()) when you need to validate computed values or extract data for later use
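A sketch showing the three side by side (import path and URL hypothetical; matcher and extraction APIs assumed from this guide):

```typescript
import { test, expect } from '@stablyhq/sdk'; // hypothetical import path

test('billing page checks', async ({ page }) => {
  await page.goto('https://app.example.com/billing'); // hypothetical URL

  // Standard assertion for a stable, predictable element
  await expect(page.getByRole('heading', { name: 'Billing' })).toBeVisible();

  // AI assertion for dynamic, personalized content
  await expect(page.getByTestId('recommendations')).toMatchScreenshotPrompt(
    'A list of product recommendations personalized for the signed-in user'
  );

  // AI extraction when a computed value is needed for a later check
  const invoiceTotal = await page.extract('the invoice total in the billing summary');
  expect(invoiceTotal).toBeTruthy();
});
```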
Iterate on Generated Tests
AI-generated tests are a starting point. Refine them by asking:
- “Add error handling for network failures”
- “Make the login reusable as a fixture”
- “Add .describe() to locators that might break”
- “Include comments explaining the test logic”
Review and Validate
Always review generated tests:
- Run the test to verify it works
- Check that locators are resilient (using getByRole, getByLabel, etc.)
- Ensure proper wait conditions
- Verify timeout values are reasonable
- Check that API keys and secrets are not hardcoded
Troubleshooting
AI is generating basic Playwright code without Stably features
Solution: Ensure AI rules are properly configured. Try:
- Verify that the stably-sdk-rules.mdc or claude.md file exists in the project root
- Restart your AI assistant to reload configuration
- Explicitly mention in your prompt: “Use Stably SDK patterns with .describe() and toMatchScreenshotPrompt()”
Playwright MCP is not working
Solution:
- Verify MCP configuration is correct in settings JSON
- Restart your AI assistant
- Check that @playwright/mcp@latest can be installed (for example, by running npx @playwright/mcp@latest)
- Look for MCP connection errors in your assistant’s console/logs
Generated selectors are too fragile
Solution: Ask the AI to use more semantic selectors:
- “Add .describe() to all action locators”
- “Use getByRole instead of CSS selectors”
- “Prefer getByLabel for form inputs”
- “Add test-id attributes to critical elements and use getByTestId”
Tests are too slow
Solution: Optimize generated tests:
- “Reduce timeout values where possible”
- “Remove unnecessary waits”
- “Use viewport screenshots instead of fullPage: true”
- “Combine multiple toMatchScreenshotPrompt() calls into a single one where logical”
AI is overusing toMatchScreenshotPrompt()
Solution: Guide the AI on assertion selection:
- “Use toMatchScreenshotPrompt() only for dynamic content that can’t be validated with standard assertions”
- “Prefer toHaveText() and toBeVisible() for stable, predictable elements”
- “Reserve AI assertions for visually complex validations”
Example Prompts
For Complete Test Suites
“Generate a complete test suite for our e-commerce site covering: product search, add to cart, checkout, and order confirmation. Use Stably SDK with AI assertions for the product grid and checkout summary. Include data extraction for order total validation.”
For Form Testing
“Create a Stably SDK test for the user registration form. Validate all fields, test error messages, and use an AI assertion to verify the success modal appears after submission. Add .describe() to all form input locators.”
For Visual Testing
“Generate visual regression tests for our component library. Test button variants, card layouts, and navigation menus. Use toMatchScreenshotPrompt() for each component and scope with locators.”
For API + UI Testing
“Write a test that creates a project via API, then verifies it appears correctly in the UI. Extract the project ID from the API response and use it to navigate to the project detail page. Use AI assertion to verify the project details render correctly.”
Next Steps
- AI Auto-Heal Configuration: Configure auto-heal to automatically fix failing tests when locators break
- AI Assertions Guide: Deep dive into AI-powered visual assertions for dynamic content
- AI Extraction: Learn to extract structured data from pages using AI
- Run Tests in CI: Set up continuous testing with GitHub Actions