This is a Cursor Project Rule file that supports automated test generation with CoreStory + Cursor. MDC (`.mdc`) is the format Cursor uses for Project Rules, which configure how the AI behaves in a repository.

Cursor-Specific Customizations:

- ✅ Uses actual Cursor features and keyboard shortcuts
- ✅ Proper MDC format with appropriate globs
- ✅ Emphasizes IDE-integrated workflows
- ✅ Includes terminal execution and file creation guidance

Quick Usage:

```bash
# Add to your project as a Project Rule:
mkdir -p .cursor/rules
cp CoreStory_Cursor_Test_Generation_Skill_File.mdc .cursor/rules/

# Commit to version control:
git add .cursor/rules/CoreStory_Cursor_Test_Generation_Skill_File.mdc
git commit -m "Add CoreStory test generation skill"
```

Cursor Test Generation Project Rule (.mdc) File Content:

---
description: Expert guidance for generating comprehensive test suites with CoreStory MCP in Cursor. Applies when working on test files or when explicitly requested.
globs: ["**/*.test.*", "**/*.spec.*", "**/tests/**/*", "**/test/**/*", "**/__tests__/**/*"]
alwaysApply: false
---

# CoreStory + Cursor Test Generation Skill

> Purpose: This file provides Cursor with expert-level guidance for generating comprehensive, high-quality test suites using CoreStory's MCP server. When generating tests in Cursor, follow these principles and workflows for optimal results.

---

## Core Principles

### 1. Always Use CoreStory Context First

**Before generating any tests:**
- Use CoreStory MCP tools to understand the actual codebase structure
- Analyze existing code patterns and conventions
- Identify dependencies and integration points
- Review business logic and data flows
- Don't assume or hallucinate code structure

**Workflow:**

  1. CoreStory:list_projects → confirm project_id with user
  2. CoreStory:get_project → verify project details
  3. CoreStory:create_conversation → establish context for this test generation session
  4. CoreStory:send_message → ask CoreStory about the specific module/feature to test
  5. Generate tests based on ACTUAL specifications or source code, not assumptions
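For readers who want to see the protocol shape behind this workflow, here is a minimal sketch using the MCP TypeScript SDK. Cursor normally issues these tool calls itself; the server command, tool names, and argument shapes below are assumptions, so check CoreStory's MCP documentation before relying on them.

```typescript
// Sketch only: mirrors the discovery workflow above via the MCP TypeScript SDK.
// The server command, tool names, and argument shapes are assumptions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function discoverContext() {
  // Assumed CoreStory MCP server launch command
  const transport = new StdioClientTransport({ command: "corestory-mcp", args: [] });
  const client = new Client({ name: "test-generation-sketch", version: "1.0.0" });
  await client.connect(transport);

  // 1. list_projects -> confirm project_id with the user
  const projects = await client.callTool({ name: "list_projects", arguments: {} });

  // 3. create_conversation -> establish context for this test generation session
  const conversation = await client.callTool({
    name: "create_conversation",
    arguments: { project_id: "<confirmed-project-id>", title: "Test Generation for auth" },
  });

  // 4. send_message -> ask about the specific module/feature to test
  const answer = await client.callTool({
    name: "send_message",
    arguments: { conversation_id: "<id-from-create_conversation>", message: "How does the auth module work?" },
  });

  console.log(projects, conversation, answer);
}

discoverContext().catch(console.error);
```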

### 2. The T.E.S.T. Framework

Every test generation request should extract:
- **T**arget: What specific code needs testing?
- **E**xpectations: What test types and coverage goals?
- **S**pecifications: What are the business rules and acceptance criteria?
- **T**echnical: What frameworks, patterns, and constraints?

**If any component is unclear, ASK before generating.**

### 3. Progressive Refinement Over Bulk Generation

**Don't:** Generate 100 tests in one shot
**Do:** Generate in rounds with validation:

Round 1: Happy path tests (10-15 tests)
↓ User reviews and validates in Cursor
Round 2: Edge cases based on feedback (15-20 tests)
↓ User reviews and runs tests
Round 3: Error scenarios and integration tests (10-15 tests)
↓ User reviews test execution
Round 4: Performance/security tests if needed (5-10 tests)


This approach:
- Allows course correction
- Builds user confidence
- Prevents wasted effort
- Enables immediate test execution in Cursor's integrated terminal
- Ensures alignment with expectations
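As a rough illustration of what the rounds can look like inside one Jest file (the structure and names are suggestions, not prescribed output), later rounds can begin life as `it.todo` placeholders that are filled in after each review:

```typescript
// Illustrative structure for incremental generation; names are invented.
describe('authService.login', () => {
  // Round 1: happy path, generated and reviewed first
  it('should return an access token when credentials are valid', () => {
    // arrange / act / assert for the main success flow
  });

  // Rounds 2-3: placeholders that later rounds replace with real tests
  it.todo('should lock the account after five consecutive failed logins');
  it.todo('should surface a network error when the identity provider is unreachable');
});
```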

---

## CoreStory Integration Workflow

### Step 1: Project Discovery

  1. If user mentions project_id → use it
  2. If not → Call CoreStory:list_projects
  3. Present options to user: "I see you have these CoreStory projects: [list]. Which one should we work on?"
  4. Call CoreStory:get_project to confirm and show project details
  5. Call CoreStory:get_project_stats to check if codebase is fully indexed

### Step 2: Code Understanding

  1. Create conversation: CoreStory:create_conversation(project_id, title="Test Generation for {feature}")
  2. Query CoreStory about the target:
  3. Use CoreStory:send_message to get contextual information
  4. Build mental model of the code before generating tests

**Note:** Cursor can also read the actual source files directly. Cross-reference CoreStory's architectural knowledge with the actual implementation to catch any spec/code drift.

### Step 3: Requirement Analysis

If user provides acceptance criteria:
→ Map each AC to test scenarios

If user provides business rules:
→ Identify testable conditions

If no specs provided:
→ Ask CoreStory: "What business rules are implemented in {module}?"
→ Ask user: "What are the critical behaviors to test?"
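As a sketch of the "map each AC to test scenarios" step, here is how one hypothetical acceptance criterion might expand into scenarios (the criterion and names are invented for illustration):

```typescript
// AC (hypothetical): "A user who enters the wrong password five times is locked out for 15 minutes."
describe('AC: account lockout after five failed logins', () => {
  it('should accept further attempts while fewer than five failures have occurred', () => {
    // arrange four failed attempts, assert the fifth attempt is still evaluated normally
  });

  it('should lock the account when the fifth consecutive attempt fails', () => {
    // assert the account state transitions to "locked"
  });

  it('should reject logins with a lockout error during the 15-minute window', () => {
    // assert a typed lockout error rather than a generic authentication failure
  });

  it('should allow login again once the lockout window has elapsed', () => {
    // advance the clock past 15 minutes and assert a valid login succeeds
  });
});
```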


### Step 4: Test Generation

Generate tests with:

  1. Clear, descriptive test names following pattern: "should {expected behavior} when {condition}"
  2. Arrange-Act-Assert (AAA) structure
  3. Realistic test data matching production patterns
  4. Proper mocking of dependencies
  5. Edge cases and boundary conditions
  6. Error scenarios
  7. Comments explaining complex test logic
  8. Explicit file paths for Cursor to create files in the correct location

**Cursor-Specific:** Always specify the complete output path when creating test files:

"I'll create tests in ./tests/auth/login.test.ts" "Writing integration tests to ./tests/integration/payment-flow.spec.ts"


### Step 5: Validation & Iteration

After generating tests, provide:

  1. Coverage summary: "These tests cover X% of acceptance criteria"
  2. Edge cases identified: "I've included tests for: [list]"
  3. Assumptions made: "I assumed {assumptions} - please confirm"
  4. Gaps: "Consider adding tests for: [missing scenarios]"
  5. Execution instructions: "Run tests with: npm test auth/login.test.ts"

Ask: "Would you like me to run these tests now to verify they work?"


**Cursor Advantage:** Tests can be executed immediately in the integrated terminal for instant feedback.

---

## Test Generation Best Practices

### Test Structure

**Good Test Structure:**

```jsx
describe('User Authentication', () => {
  describe('login', () => {
    it('should return access token when credentials are valid', async () => {
      // Arrange: Set up test data and mocks
      const validUser = { email: '[email protected]', password: 'SecurePass123!' };
      mockDatabase.findUser.mockResolvedValue({ id: 1, ...validUser });
      
      // Act: Execute the function being tested
      const result = await authService.login(validUser.email, validUser.password);
      
      // Assert: Verify the outcome
      expect(result).toHaveProperty('accessToken');
      expect(result.accessToken).toMatch(/^[A-Za-z0-9-_=]+\.[A-Za-z0-9-_=]+\.?[A-Za-z0-9-_.+/=]*$/);
    });

    it('should throw AuthenticationError when password is invalid', async () => {
      // Arrange
      const invalidCredentials = { email: '[email protected]', password: 'WrongPass' };
      mockDatabase.findUser.mockResolvedValue({ id: 1, email: invalidCredentials.email });
      
      // Act & Assert
      await expect(authService.login(invalidCredentials.email, invalidCredentials.password))
        .rejects
        .toThrow(AuthenticationError);
    });
  });
});
```

**Key Elements:**

### Test Naming Conventions

**Pattern:** `should {expected behavior} when {condition}`

**Examples:**

**Avoid:**
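For illustration, a few invented Jest test names that follow the pattern, alongside names to avoid:

```typescript
// Names are invented for illustration.
it('should return an empty list when no orders match the filter', () => { /* ... */ });
it('should throw ValidationError when the email format is invalid', () => { /* ... */ });

// Avoid: names that state neither behavior nor condition
it('works', () => { /* ... */ });
it('login test 2', () => { /* ... */ });
```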


### Edge Case Discovery

Use CoreStory to discover edge cases:

Query: "What edge cases should I test for {feature}?"
Query: "What error conditions can occur in {module}?"
Query: "What boundary conditions exist for {input parameter}?"

Common edge case categories to always consider:

  1. Input Validation:

  2. State-Dependent Behavior:

  3. Concurrent Access:

  4. External Dependencies:

  5. Business Logic:
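A brief sketch of how a few of these categories can translate into tests (the module and scenarios are invented for illustration):

```typescript
// Illustrative only; createOrder and its behaviors are hypothetical.
describe('createOrder edge cases', () => {
  // Input validation: empty input, boundary sizes, malformed fields
  it('should throw ValidationError when the item list is empty', () => {
    // arrange an order with items: [], assert the typed validation error
  });

  // Concurrent access: two requests racing on the same resource
  it('should accept only one request when the same idempotency key is used concurrently', () => {
    // fire two createOrder calls with one key, assert exactly one succeeds
  });

  // External dependencies: downstream timeout surfaces as a typed error
  it('should surface PaymentUnavailableError when the payment gateway times out', () => {
    // mock the gateway client to reject with a timeout, assert the mapped error
  });
});
```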


### Test Data Generation

**Principles:**

  1. Realistic: Match production data patterns
  2. Minimal: Use smallest dataset that proves the point
  3. Varied: Cover different scenarios
  4. Safe: No PII or sensitive data in tests

**Good Test Data:**

```javascript
// Realistic user data
const validUser = {
  email: '[email protected]',
  password: 'SecurePass123!',
  firstName: 'John',
  lastName: 'Doe',
  dateOfBirth: '1990-01-15'
};

// Edge case: boundary condition
const userTurning18Today = {
  email: '[email protected]',
  dateOfBirth: new Date(new Date().setFullYear(new Date().getFullYear() - 18))
    .toISOString()
    .split('T')[0]
};

// Edge case: invalid data
const userWithInvalidEmail = {
  email: 'not-an-email',
  password: 'ValidPass123!'
};
```

**Avoid:**


## Framework-Specific Guidelines

### Jest / React Testing Library

**Queries Priority (use in this order):**

  1. getByRole (most accessible)
  2. getByLabelText (forms)
  3. getByPlaceholderText (inputs)
  4. getByText (non-interactive)
  5. getByTestId (last resort)

**Example:**

```jsx
import { render, screen, waitFor } from '@testing-library/react';
import userEvent from '@testing-library/user-event';

it('should submit form when all fields are valid', async () => {
  // Arrange
  render(<LoginForm onSubmit={mockSubmit} />);
  const user = userEvent.setup();
  
  // Act
  await user.type(screen.getByRole('textbox', { name: /email/i }), '[email protected]');
  await user.type(screen.getByLabelText(/password/i), 'SecurePass123!');
  await user.click(screen.getByRole('button', { name: /submit/i }));
  
  // Assert
  await waitFor(() => {
    expect(mockSubmit).toHaveBeenCalledWith({
      email: '[email protected]',
      password: 'SecurePass123!'
    });
  });
});
```

**Best Practices:**

### PyTest

**Fixture-Based Setup:**

```python
import pytest
from datetime import datetime

# NOTE: illustrative import path; adjust to wherever auth_service lives in your project
from app.auth import auth_service

@pytest.fixture
def valid_user():
    """Fixture providing a valid user for testing"""
    return {
        'id': 1,
        'email': '[email protected]',
        'password': 'SecurePass123!',
        'created_at': datetime.now()
    }

@pytest.fixture
def mock_database(mocker):
    """Fixture providing a mocked database"""
    return mocker.patch('app.database.Database')

def test_login_success(valid_user, mock_database):
    # Arrange
    mock_database.find_user.return_value = valid_user
    
    # Act
    result = auth_service.login(valid_user['email'], valid_user['password'])
    
    # Assert
    assert 'access_token' in result
    assert result['user_id'] == valid_user['id']
```

**Parameterized Tests:**

```python
@pytest.mark.parametrize("email,password,expected_error", [
    ("", "ValidPass123!", "Email required"),
    ("[email protected]", "", "Password required"),
    ("invalid-email", "ValidPass123!", "Invalid email format"),
    ("[email protected]", "short", "Password too short"),
])
def test_login_validation_errors(email, password, expected_error):
    with pytest.raises(ValidationError, match=expected_error):
        auth_service.login(email, password)
```

**Best Practices:**

### Playwright

**Page Object Pattern:**

```typescript
// pages/login.page.ts
// NOTE: type import added for completeness
import type { Page } from '@playwright/test';
export class LoginPage {
  constructor(private page: Page) {}
  
  async goto() {
    await this.page.goto('/login');
  }
  
  async login(email: string, password: string) {
    await this.page.getByLabel('Email').fill(email);
    await this.page.getByLabel('Password').fill(password);
    await this.page.getByRole('button', { name: 'Login' }).click();
  }
  
  async getErrorMessage() {
    return this.page.getByRole('alert').textContent();
  }
}

// tests/login.spec.ts
// NOTE: imports added for completeness; the relative path is illustrative
import { test, expect } from '@playwright/test';
import { LoginPage } from '../pages/login.page';
test('should login successfully with valid credentials', async ({ page }) => {
  // Arrange
  const loginPage = new LoginPage(page);
  await loginPage.goto();
  
  // Act
  await loginPage.login('[email protected]', 'ValidPass123!');
  
  // Assert
  await expect(page).toHaveURL('/dashboard');
  await expect(page.getByRole('heading', { name: 'Welcome' })).toBeVisible();
});
```

**Best Practices:**

### JUnit / Mockito

**Test Structure:**

```java
@ExtendWith(MockitoExtension.class)
class AuthServiceTest {
    @Mock
    private UserRepository userRepository;
    
    @Mock
    private PasswordEncoder passwordEncoder;
    
    @InjectMocks
    private AuthService authService;
    
    @Test
    @DisplayName("Should return access token when credentials are valid")
    void shouldReturnAccessTokenWhenCredentialsAreValid() {
        // Arrange
        User user = new User("[email protected]", "hashedPassword");
        when(userRepository.findByEmail("[email protected]")).thenReturn(Optional.of(user));
        when(passwordEncoder.matches("ValidPass123!", "hashedPassword")).thenReturn(true);
        
        // Act
        AuthResult result = authService.login("[email protected]", "ValidPass123!");
        
        // Assert
        assertThat(result)
            .isNotNull()
            .extracting(AuthResult::getAccessToken)
            .isNotNull();
        
        verify(userRepository).findByEmail("[email protected]");
        verify(passwordEncoder).matches("ValidPass123!", "hashedPassword");
    }
    
    @Test
    @DisplayName("Should throw AuthenticationException when password is invalid")
    void shouldThrowAuthenticationExceptionWhenPasswordIsInvalid() {
        // Arrange
        User user = new User("[email protected]", "hashedPassword");
        when(userRepository.findByEmail("[email protected]")).thenReturn(Optional.of(user));
        when(passwordEncoder.matches("WrongPass", "hashedPassword")).thenReturn(false);
        
        // Act & Assert
        assertThatThrownBy(() -> authService.login("[email protected]", "WrongPass"))
            .isInstanceOf(AuthenticationException.class)
            .hasMessage("Invalid credentials");
    }
}
```

**Parameterized Tests:**

```java
@ParameterizedTest
@CsvSource({
    "'', ValidPass123!, Email is required",
    "[email protected], '', Password is required",
    "invalid-email, ValidPass123!, Invalid email format",
    "[email protected], short, Password too short"
})
@DisplayName("Should throw ValidationException for invalid inputs")
void shouldThrowValidationExceptionForInvalidInputs(
    String email, 
    String password, 
    String expectedMessage
) {
    assertThatThrownBy(() -> authService.login(email, password))
        .isInstanceOf(ValidationException.class)
        .hasMessage(expectedMessage);
}
```

**Best Practices:**

## Behavior-Driven Development (BDD) Integration

### When to Use BDD

BDD is most valuable when:

### Gherkin Feature Files

**Structure:**

```gherkin
Feature: User Authentication
  As a registered user
  I want to log in to the system
  So that I can access my account

  Background:
    Given the system is running
    And the database is populated with test users

  Scenario: Successful login with valid credentials
    Given I am on the login page
    When I enter email "[email protected]"
    And I enter password "ValidPass123!"
    And I click the "Login" button
    Then I should be redirected to the dashboard
    And I should see a welcome message

  Scenario: Failed login with invalid password
    Given I am on the login page
    When I enter email "[email protected]"
    And I enter password "WrongPassword"
    And I click the "Login" button
    Then I should see an error message "Invalid credentials"
    And I should remain on the login page

  Scenario Outline: Account lockout after failed attempts
    Given I have attempted to login <attempts> times with wrong password
    When I attempt to login again with wrong password
    Then my account should be <status>
    And I should see message "<message>"

    Examples:
      | attempts | status | message                           |
      | 4        | active | Invalid credentials               |
      | 5        | locked | Account locked due to too many... |
```

**Best Practices:**

### Generating Step Definitions

After creating Gherkin scenarios, generate corresponding step definitions:

**Cucumber (JavaScript/TypeScript):**

```typescript
import { Given, When, Then } from '@cucumber/cucumber';
import { expect } from '@playwright/test';

Given('I am on the login page', async function() {
  await this.page.goto('/login');
});

When('I enter email {string}', async function(email: string) {
  await this.page.getByLabel('Email').fill(email);
});

When('I enter password {string}', async function(password: string) {
  await this.page.getByLabel('Password').fill(password);
});

When('I click the {string} button', async function(buttonName: string) {
  await this.page.getByRole('button', { name: buttonName }).click();
});

Then('I should be redirected to the dashboard', async function() {
  await expect(this.page).toHaveURL('/dashboard');
});

Then('I should see a welcome message', async function() {
  await expect(this.page.getByText(/welcome/i)).toBeVisible();
});
```

**SpecFlow (C#/.NET):**

```csharp
[Binding]
public class AuthenticationSteps
{
    private readonly IWebDriver driver;
    private readonly ScenarioContext scenarioContext;

    public AuthenticationSteps(ScenarioContext scenarioContext)
    {
        this.scenarioContext = scenarioContext;
        this.driver = scenarioContext.Get<IWebDriver>("driver");
    }

    [Given(@"I am on the login page")]
    public void GivenIAmOnTheLoginPage()
    {
        driver.Navigate().GoToUrl("http://localhost:3000/login");
    }

    [When(@"I enter email ""(.*)""")]
    public void WhenIEnterEmail(string email)
    {
        driver.FindElement(By.Id("email")).SendKeys(email);
    }

    [When(@"I enter password ""(.*)""")]
    public void WhenIEnterPassword(string password)
    {
        driver.FindElement(By.Id("password")).SendKeys(password);
    }

    [When(@"I click the ""(.*)"" button")]
    public void WhenIClickTheButton(string buttonText)
    {
        driver.FindElement(By.XPath($"//button[text()='{buttonText}']")).Click();
    }

    [Then(@"I should be redirected to the dashboard")]
    public void ThenIShouldBeRedirectedToTheDashboard()
    {
        Assert.That(driver.Url, Does.Contain("/dashboard"));
    }
}
```

### CoreStory Integration for BDD

When generating BDD tests:

  1. Query CoreStory for business context:

    "What are the user stories for the authentication feature?"
    "What are the acceptance criteria for user login?"
    
  2. Map requirements to scenarios:

  3. Generate both Gherkin AND step definitions:

  4. Keep scenarios business-focused:

### BDD Framework-Specific Notes

**Cucumber:** Supports JavaScript, TypeScript, Ruby, and Java. Use `@cucumber/cucumber` for JavaScript/TypeScript projects.

**SpecFlow:** Uses SpecFlow+ LivingDoc to generate living documentation for .NET projects.

### Living Documentation

BDD tests serve as living documentation. Encourage users to:

  1. Generate HTML reports from Cucumber/SpecFlow
  2. Publish reports to team wiki or documentation site
  3. Keep scenarios in sync with code
  4. Review scenarios with business stakeholders
  5. Use scenarios as specification source of truth

### Quality Checklist for BDD Tests

Before finalizing BDD test generation:

### Final Reminders

When generating BDD tests:

  1. Always query CoreStory for business context first
  2. Write for business stakeholders, not just developers
  3. Use concrete examples, not abstract descriptions
  4. One scenario = one behavior
  5. Generate both Gherkin AND step definitions
  6. Consider living documentation from the start

BDD is collaborative. Encourage users to review scenarios with business stakeholders before finalizing.


## Coverage Analysis

**After Generating Tests, Always Provide:**

  1. Coverage Summary:

    Coverage Analysis:
    ✅ Happy path: 100% (5/5 acceptance criteria)
    ✅ Edge cases: 80% (12/15 identified scenarios)
    ✅ Error scenarios: 90% (9/10 error paths)
    ⚠️  Performance: 0% (no performance tests yet)
    
    Overall: ~85% comprehensive coverage
    
  2. Gap Identification:

    Potential gaps to consider:
    - Concurrent user access scenarios
    - Rate limiting behavior
    - Long-running operations (> 30s)
    - Network retry logic
    
    Would you like me to generate tests for these?
    
  3. Test Execution Estimate:

    Estimated test execution time:
    - Unit tests (45 tests): ~2 seconds
    - Integration tests (12 tests): ~8 seconds
    - E2E tests (5 tests): ~45 seconds
    Total: ~55 seconds
    
    Run with: npm test
    
  4. Cursor-Specific: Immediate Execution Option:

    Ready to validate? I can run these tests now in the integrated terminal.
    Would you like me to:
    1. Run all tests
    2. Run only unit tests first
    3. Run tests in watch mode
    

## Quality Checklist

Before finalizing generated tests, verify:

## Anti-Patterns to Avoid

### ❌ Brittle Tests

```javascript
// BAD: Relies on specific implementation
expect(component.state.counter).toBe(5);

// GOOD: Tests behavior
expect(screen.getByText('Count: 5')).toBeInTheDocument();
```

### ❌ Over-Mocking

```javascript
// BAD: Mocking internal implementation
mockPrivateMethod.mockReturnValue(true);

// GOOD: Mock only external dependencies
mockApiClient.get.mockResolvedValue(data);
```

### ❌ Vague Assertions

```javascript
// BAD: Not specific enough
expect(result).toBeTruthy();

// GOOD: Specific expectations
expect(result).toEqual({
  id: expect.any(Number),
  email: '[email protected]',
  createdAt: expect.any(String)
});
```

### ❌ Test Interdependence

```javascript
// BAD: Tests depend on execution order
let userId;
test('creates user', () => { userId = createUser(); });
test('updates user', () => { updateUser(userId); });

// GOOD: Each test is independent
test('creates user', () => {
  const userId = createUser();
  expect(userId).toBeDefined();
});

test('updates user', () => {
  const userId = createUser();
  updateUser(userId);
  expect(getUser(userId).updated).toBe(true);
});
```

## Cursor-Specific Workflows

### Using Agent Mode (Cmd+.)

When users activate Agent mode for complex test generation:

Example Agent prompt:
"I'm working on CoreStory project {project-id}.
Create a complete test suite for the authentication module:

1. Query CoreStory for auth specifications
2. Generate unit tests for each auth method in ./tests/unit/auth/
3. Generate integration tests for auth flows in ./tests/integration/auth/
4. Generate E2E tests for login/logout in ./tests/e2e/auth/
5. Set up test fixtures and mocks in ./tests/fixtures/
6. Run tests and fix any failures
7. Generate coverage report"

Agent will autonomously:

### Using Composer (Cmd+I)

For iterative test generation:

User opens Composer and types:
"Generate unit tests for ./src/auth/login.ts
Output to ./tests/unit/auth/login.test.ts"

Cursor generates tests, creates the file, user reviews.

User continues in same Composer session:
"Add edge cases for rate limiting and account lockout"

Cursor adds tests to existing file, maintaining context.

### Using Inline Edit (Cmd+K)

For quick test fixes:

User sees failing test in editor
Selects the test function
Presses Cmd+K
Types: "Fix this test - the assertion is checking wrong property"

Cursor edits the test inline with correct assertion.

### Cross-Referencing CoreStory + Source

Take advantage of Cursor's ability to read BOTH CoreStory knowledge AND actual source files:

Example workflow:
1. Query CoreStory: "What should the authentication module do according to specs?"
2. Read source: View ./src/auth/login.ts
3. Compare: Identify differences between spec and implementation
4. Generate tests covering both specified behavior AND actual implementation
5. Flag any spec/implementation mismatches for user

This catches:
- Undocumented features that need tests
- Implementation drift from original specs
- Edge cases only visible in code
- Business rules that evolved without doc updates
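When a mismatch does turn up, one lightweight way to flag it without failing the suite (a suggestion, not a CoreStory feature) is a `test.todo` entry next to the affected tests:

```typescript
// Hypothetical example of recording spec/implementation drift found during generation.
describe('password reset token expiry', () => {
  // Spec (per CoreStory): tokens expire after 30 minutes.
  // Implementation (hypothetical ./src/auth/reset.ts): expiry constant is 60 minutes.
  test.todo('confirm intended expiry with the team: spec says 30 minutes, code uses 60');
});
```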

### Watch Mode Integration

For TDD workflows:

After generating initial tests:
"I've created tests in ./tests/auth/login.test.ts.
Should I run them in watch mode so you can see immediate feedback as you implement?"

If user agrees:
Run: npm test -- --watch auth/login.test.ts

Tests auto-run on every code change, perfect for TDD cycle.

## Conversation Guidelines

**When User Asks for Tests:**

  1. Clarify scope if needed:

  2. Set expectations:

  3. Use CoreStory proactively:

  4. Provide context with output:

  5. Invite iteration:


## Example Complete Workflow

User (in Cursor Composer): "Generate tests for the payment processing module"

Cursor:

I'll help you generate comprehensive tests for payment processing. Let me start by understanding the codebase structure with CoreStory.

[Uses CoreStory:list_projects, CoreStory:get_project, CoreStory:create_conversation]
[Uses CoreStory:send_message to query about payment module]

I see the payment processing module handles:
- Credit card payments via Stripe
- Refund processing
- Payment retry logic
- Fraud detection

I'll generate tests in rounds and create them in your test directory:

Round 1: Happy path tests → ./tests/unit/payment/
- Successful payment processing
- Successful refunds
- Payment confirmation

Round 2: Edge cases → ./tests/unit/payment/edge-cases/
- Insufficient funds
- Invalid card details
- Network timeout during payment
- Concurrent payment attempts

Round 3: Integration tests → ./tests/integration/payment/
- Stripe API integration
- Database transaction handling
- Webhook processing

Does this approach work for you? Any specific scenarios you want me to prioritize?

User: "Yes, also include tests for the fraud detection"

Cursor:

Perfect! I'll add fraud detection tests covering:
- Suspicious transaction patterns
- Velocity checks (too many transactions)
- Geographic anomalies
- Amount thresholds

Starting with Round 1 - Happy Path Tests...

[Creates ./tests/unit/payment/payment-processing.test.ts with 12 tests]

✅ Created ./tests/unit/payment/payment-processing.test.ts

Coverage: 100% of main payment flows, 0% of edge cases yet.

Run tests with: npm test payment/payment-processing.test.ts

Would you like me to:
1. Run these tests now to verify they work?
2. Continue to Round 2 (edge cases)?
3. Review the tests first?

User: "Run them"

Cursor:

[Executes in integrated terminal: npm test payment/payment-processing.test.ts]

✅ All 12 tests passed! (2.3s)

Great! The happy path tests are working. Ready for Round 2 (edge cases)?

## Success Metrics

A successful test generation session includes:

If user seems unsatisfied:


## Final Reminder

Always remember:

  1. Use CoreStory MCP FIRST to understand the code
  2. Cross-reference CoreStory specs with actual source files
  3. Generate tests INCREMENTALLY with user validation
  4. Create files in EXPLICIT locations for Cursor
  5. Focus on BEHAVIOR over implementation
  6. Make tests READABLE and MAINTAINABLE
  7. Cover EDGE CASES not just happy paths
  8. Provide CONTEXT with every test suite
  9. Offer to RUN tests immediately in integrated terminal
  10. Invite ITERATION and refinement

Cursor's advantages for test generation:

The goal is not just generating tests, but generating confidence in the codebase through comprehensive, maintainable test coverage that executes reliably in Cursor's development environment.