While you can use your own IDE, we encourage you to also check out our CLI or Web editor.
Write Stably SDK tests faster and more reliably by leveraging AI coding assistants like Cursor or Claude Code with Playwright MCP (Model Context Protocol). These tools can generate complete, production-ready test suites that take full advantage of Stably’s AI capabilities.

Why Use AI Assistants for Test Creation?

Faster Development

Generate complete test suites in minutes instead of hours

Best Practices Built-in

AI automatically applies Stably SDK patterns like .describe() and AI assertions

Context-Aware

Playwright MCP gives AI direct access to your browser state and page structure

Reduced Errors

AI writes tests that leverage auto-heal from day one

Prerequisites

1. Install Stably SDK

Follow the SDK Setup Guide to install and configure the Stably SDK in your project.
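If the SDK isn't installed yet, the install step typically looks something like this (the package name is taken from the import used throughout this guide; check the setup guide for the exact command):
npm install -D @stablyai/playwright-test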
2. Choose Your AI Assistant

Install one of these AI coding assistants:
  • Cursor — AI-first code editor with deep IDE integration
  • Claude Code — Standalone AI assistant with MCP support
3. Configure Playwright MCP

Playwright MCP allows the AI to interact with browsers directly, inspect page state, and generate accurate selectors.
Add to your Cursor settings (~/.cursor/mcp.json or via Settings → MCP):
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
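For Claude Code, a similar configuration can live in a project-level .mcp.json file; this is a sketch assuming a recent Claude Code version that reads project MCP config:
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}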

Set Up AI Rules for Stably SDK

AI coding assistants work best when they understand the specific patterns and capabilities of your testing framework. Configure your assistant with Stably SDK rules, for example a stably-sdk-rules.mdc file for Cursor or a claude.md file for Claude Code in your project root.
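The exact contents depend on your project, but a hypothetical excerpt, distilled from the patterns this guide uses, might look like:
# Stably SDK rules (hypothetical excerpt)
- Import test and expect from "@stablyai/playwright-test", not "@playwright/test".
- Chain .describe("...") onto locators for critical actions so auto-heal has context.
- Use expect(page).toMatchScreenshotPrompt("...") for dynamic or visually complex UI.
- Use page.extract("...", { schema }) with a zod schema when a test needs page data.
- Prefer getByRole and getByLabel over CSS selectors.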

Creating Tests with AI

Basic Workflow

1. Navigate to the Page

Use Playwright MCP to open the page you want to test. Ask the AI:
“Open a browser and navigate to https://app.example.com/dashboard”
The AI will use Playwright MCP to launch a browser and navigate to the page.
2. Explore the Page

Ask the AI to inspect elements and understand the page structure:
“What elements are visible on this page? Show me the main navigation and action buttons”
The AI uses MCP to capture page snapshots and identify interactive elements.
3. Generate Test Code

Describe the user flow you want to test:
“Generate a Stably SDK test that:
  1. Logs into the app with test credentials
  2. Clicks the ‘Create Project’ button
  3. Fills in the project form
  4. Uses an AI assertion to verify the success message appears”
The AI generates production-ready code using Stably SDK patterns.
4. Refine and Iterate

Review the generated test and refine as needed:
“Add error handling for network timeouts and use .describe() on the submit button locator”
The AI updates the test with improvements.
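For example, that refinement request might come back as something like this (a sketch; the timeout value is an assumption):
await page.getByRole("button", { name: "Submit" })
  .describe("Form submit button")
  .click({ timeout: 15_000 }); // fail with a clear error instead of hanging on slow networks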

Example: Generated Test

Here’s an example of a test generated by an AI assistant with Stably SDK rules:
import { test, expect } from "@stablyai/playwright-test";

test("create project flow with AI validation", async ({ page }) => {
  // Navigate to the app
  await page.goto("https://app.example.com");
  
  // Login with test credentials
  await page.getByLabel("Email").fill("[email protected]");
  await page.getByLabel("Password").fill("TestPassword123");
  await page.getByRole("button", { name: "Sign In" })
    .describe("Login submit button")
    .click();
  
  // Wait for dashboard to load
  await expect(page).toHaveURL(/.*dashboard/);
  
  // Click create project button
  await page.getByRole("button", { name: "Create Project" })
    .describe("Main CTA to create new project")
    .click();
  
  // Fill project form
  await page.getByLabel("Project Name").fill("E2E Test Project");
  await page.getByLabel("Description").fill("Automated test project");
  await page.getByRole("combobox", { name: "Project Type" })
    .describe("Project type dropdown")
    .selectOption("Web Application");
  
  // Submit form
  await page.getByRole("button", { name: "Create" })
    .describe("Project creation submit button")
    .click();
  
  // Use AI assertion to verify success
  await expect(page).toMatchScreenshotPrompt(
    "Success message showing 'Project created successfully' with green checkmark icon",
    { timeout: 30_000 }
  );
  
  // Verify project appears in list
  await page.getByRole("link", { name: "Projects" })
    .describe("Navigation link to projects list")
    .click();
  
  await expect(page.getByRole("heading", { name: "E2E Test Project" }))
    .toBeVisible();
});
The AI automatically:
  • Uses .describe() on critical locators for auto-heal
  • Applies toMatchScreenshotPrompt() for dynamic UI validation
  • Includes proper waits and navigation checks
  • Follows Playwright best practices

Advanced Use Cases

Multi-Step User Flows

Generate complex, multi-page flows by describing the complete journey:
Generate a Stably SDK test for the complete checkout flow:
1. Browse product catalog and add 3 items to cart
2. Proceed to checkout
3. Fill shipping information form
4. Select payment method
5. Use AI assertion to verify order summary shows correct total
6. Complete purchase
7. Use AI extraction to get the order number
8. Verify confirmation page with order number
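Steps 7 and 8 are where AI extraction earns its place. A sketch of what the AI might generate for them (the confirmation-page wording is an assumption):
import { test, expect } from "@stablyai/playwright-test";
import { z } from "zod";

test("checkout confirmation shows the order number", async ({ page }) => {
  // ...steps 1-6: browse, add to cart, check out...

  // Step 7: extract the order number as a string so leading zeros survive
  const order = await page.extract(
    "Return the order number shown on the confirmation page",
    { schema: z.object({ orderNumber: z.string() }) }
  );

  // Step 8: verify the confirmation page references that order number
  await expect(page.getByText(order.orderNumber)).toBeVisible();
});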

Data-Driven Tests

Generate tests that use extracted data for validation:
Create a test that:
1. Navigates to the analytics dashboard
2. Uses AI extraction to get revenue, active users, and churn rate from the page
3. Validates that revenue is greater than $10,000
4. Validates that churn rate is below 5%
5. Screenshots the trends chart if validation passes
Example generated code:
import { test, expect } from "@stablyai/playwright-test";
import { z } from "zod";

const MetricsSchema = z.object({
  revenue: z.number(),
  activeUsers: z.number(),
  churnRate: z.number()
});

test("validate analytics dashboard metrics", async ({ page }) => {
  await page.goto("/analytics/dashboard");
  
  // Extract metrics using AI
  const metrics = await page.extract(
    "Return revenue (as number), active users (as number), and churn rate (as percentage number)",
    { schema: MetricsSchema }
  );
  
  // Validate metrics
  expect(metrics.revenue).toBeGreaterThan(10000);
  expect(metrics.churnRate).toBeLessThan(5);
  
  console.log(`Validation passed:`, {
    revenue: `$${metrics.revenue.toLocaleString()}`,
    activeUsers: metrics.activeUsers.toLocaleString(),
    churnRate: `${metrics.churnRate}%`
  });
  
  // Verify the trends chart renders correctly
  await expect(page.locator(".trends-chart"))
    .toMatchScreenshotPrompt("Revenue trend chart showing last 6 months of data");
});

Visual Regression Testing

Generate comprehensive visual tests with AI assertions:
Create a visual regression test suite for the marketing landing page that:
1. Checks hero section with CTA button and value proposition
2. Validates features section shows all 6 feature cards
3. Verifies pricing table with 3 tiers
4. Checks footer has social links and newsletter signup
Use AI assertions for all checks to handle dynamic content
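A sketch of what that prompt might produce for the first two checks (the section selectors are assumptions about the page's markup):
import { test, expect } from "@stablyai/playwright-test";

test("landing page visual checks", async ({ page }) => {
  await page.goto("/");

  // Hero: scope the AI assertion to the section rather than the full page
  await expect(page.locator("section.hero")).toMatchScreenshotPrompt(
    "Hero section with a visible CTA button and a value proposition headline"
  );

  // Features: dynamic card content is exactly what AI assertions handle well
  await expect(page.locator("section.features")).toMatchScreenshotPrompt(
    "Features section showing all 6 feature cards"
  );
});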

Best Practices

The more detailed your description, the better the generated test. Include:
  • Exact labels and button text
  • Expected outcomes and error states
  • Data formats and validation rules
  • Whether to use AI assertions vs. standard assertions
Instead of manually finding selectors, ask the AI:
  • “What’s the best selector for the submit button on this form?”
  • “Show me all clickable elements in the header”
  • “Find the selector for the error message container”
MCP allows the AI to inspect the live page and suggest robust selectors.
Guide the AI on when to use different assertion types:
  • Standard assertions (toBeVisible(), toHaveText()) for stable, predictable elements
  • AI assertions (toMatchScreenshotPrompt()) for dynamic content, personalized UIs, or complex layouts
  • AI extraction (page.extract()) when you need to validate computed values or extract data for later use
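Side by side, and assuming the same imports as the examples above, the three options look like this:
// Standard assertion: stable, predictable element
await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();

// AI assertion: personalized or dynamic content
await expect(page).toMatchScreenshotPrompt(
  "Greeting banner addressing the user by name"
);

// AI extraction: validate a computed value
const { total } = await page.extract("Return the cart total as a number", {
  schema: z.object({ total: z.number() }),
});
expect(total).toBeGreaterThan(0);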
AI-generated tests are a starting point. Refine them by asking:
  • “Add error handling for network failures”
  • “Make the login reusable as a fixture”
  • “Add .describe() to locators that might break”
  • “Include comments explaining the test logic”
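As an example, “Make the login reusable as a fixture” might yield something like this (a sketch; the env var names and Page type import are assumptions):
import { test as base, expect } from "@stablyai/playwright-test";
import type { Page } from "@playwright/test";

// Fixture that hands each test an already-logged-in page
export const test = base.extend<{ loggedInPage: Page }>({
  loggedInPage: async ({ page }, use) => {
    await page.goto("https://app.example.com");
    await page.getByLabel("Email").fill(process.env.TEST_EMAIL!);
    await page.getByLabel("Password").fill(process.env.TEST_PASSWORD!);
    await page.getByRole("button", { name: "Sign In" })
      .describe("Login submit button")
      .click();
    await expect(page).toHaveURL(/.*dashboard/);
    await use(page);
  },
});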
Always review generated tests:
  • Run the test to verify it works
  • Check that locators are resilient (using getByRole, getByLabel, etc.)
  • Ensure proper wait conditions
  • Verify timeout values are reasonable
  • Check that API keys and secrets are not hardcoded

Troubleshooting

Problem: The AI generates plain Playwright code without Stably SDK patterns.
Solution: Ensure AI rules are properly configured. Try:
  1. Verify stably-sdk-rules.mdc or claude.md file exists in project root
  2. Restart your AI assistant to reload configuration
  3. Explicitly mention in your prompt: “Use Stably SDK patterns with .describe() and toMatchScreenshotPrompt()”
Problem: Playwright MCP isn't connecting, so the AI can't inspect or control the browser.
Solution:
  1. Verify MCP configuration is correct in settings JSON
  2. Restart your AI assistant
  3. Check that @playwright/mcp@latest can be installed:
    npx -y @playwright/mcp@latest --help
    
  4. Look for MCP connection errors in your assistant’s console/logs
Problem: Generated selectors are brittle and break when the UI changes.
Solution: Ask the AI to use more semantic selectors:
  • “Add .describe() to all action locators”
  • “Use getByRole instead of CSS selectors”
  • “Prefer getByLabel for form inputs”
  • “Add test-id attributes to critical elements and use getByTestId”
Problem: Generated tests run slower than expected.
Solution: Optimize generated tests:
  • “Reduce timeout values where possible”
  • “Remove unnecessary waits”
  • “Use viewport screenshots instead of fullPage: true”
  • “Combine multiple toMatchScreenshotPrompt() checks into a single prompt where logical”
Problem: The AI overuses AI assertions where standard assertions would do.
Solution: Guide the AI on assertion selection:
  • “Use toMatchScreenshotPrompt() only for dynamic content that can’t be validated with standard assertions”
  • “Prefer toHaveText() and toBeVisible() for stable, predictable elements”
  • “Reserve AI assertions for visually complex validations”

Example Prompts

For Complete Test Suites

“Generate a complete test suite for our e-commerce site covering: product search, add to cart, checkout, and order confirmation. Use Stably SDK with AI assertions for the product grid and checkout summary. Include data extraction for order total validation.”

For Form Testing

“Create a Stably SDK test for the user registration form. Validate all fields, test error messages, and use an AI assertion to verify the success modal appears after submission. Add .describe() to all form input locators.”

For Visual Testing

“Generate visual regression tests for our component library. Test button variants, card layouts, and navigation menus. Use toMatchScreenshotPrompt() for each component and scope with locators.”

For API + UI Testing

“Write a test that creates a project via API, then verifies it appears correctly in the UI. Extract the project ID from the API response and use it to navigate to the project detail page. Use AI assertion to verify the project details render correctly.”
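A sketch of the shape such a test might take (the endpoint, payload, and URL structure are all assumptions):
import { test, expect } from "@stablyai/playwright-test";

test("project created via API renders in the UI", async ({ page, request }) => {
  // Create the project through the API (hypothetical endpoint and payload)
  const response = await request.post("/api/projects", {
    data: { name: "API Test Project", type: "Web Application" },
  });
  expect(response.ok()).toBeTruthy();
  const { id } = await response.json();

  // Navigate straight to the detail page using the returned ID
  await page.goto(`/projects/${id}`);

  // AI assertion to verify the details render correctly
  await expect(page).toMatchScreenshotPrompt(
    "Project detail page titled 'API Test Project' with type 'Web Application'"
  );
});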
