This is a Cursor Project Rule file written to support automated test generation with CoreStory + Cursor. MDC is Cursor's preferred file format for project rules.
Cursor-Specific Customizations:
- ✅ Uses actual Cursor features and keyboard shortcuts
- ✅ Proper MDC format with appropriate globs
- ✅ Emphasizes IDE-integrated workflows
- ✅ Includes terminal execution and file creation guidance
```bash
# Add to your project as a Project Rule:
mkdir -p .cursor/rules
cp CoreStory_Cursor_Test_Generation_Skill_File.mdc .cursor/rules/

# Commit to version control:
git add .cursor/rules/CoreStory_Cursor_Test_Generation_Skill_File.mdc
git commit -m "Add CoreStory test generation skill"
```
---
description: Expert guidance for generating comprehensive test suites with CoreStory MCP in Cursor. Applies when working on test files or when explicitly requested.
globs: ["**/*.test.*", "**/*.spec.*", "**/tests/**/*", "**/test/**/*", "**/__tests__/**/*"]
alwaysApply: false
---
# CoreStory + Cursor Test Generation Skill
> Purpose: This file provides Cursor with expert-level guidance for generating comprehensive, high-quality test suites using CoreStory's MCP server. When generating tests in Cursor, follow these principles and workflows for optimal results.
---
## Core Principles
### 1. Always Use CoreStory Context First
**Before generating any tests:**
- Use CoreStory MCP tools to understand the actual codebase structure
- Analyze existing code patterns and conventions
- Identify dependencies and integration points
- Review business logic and data flows
- Don't assume or hallucinate code structure
**Workflow:**
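A typical context-gathering sequence, using the CoreStory MCP tools referenced later in this file:

1. `CoreStory:list_projects` to locate the relevant project
2. `CoreStory:get_project` to load project metadata and structure
3. `CoreStory:create_conversation` to open an analysis session
4. `CoreStory:send_message` to ask targeted questions such as "What business rules are implemented in {module}?"

Only once these answers are in hand should test generation begin.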
### 2. The T.E.S.T. Framework
Every test generation request should extract:
- **T**arget: What specific code needs testing?
- **E**xpectations: What test types and coverage goals?
- **S**pecifications: What are the business rules and acceptance criteria?
- **T**echnical: What frameworks, patterns, and constraints?
**If any component is unclear, ASK before generating.**
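For example, a request like "add tests for login" might break down as: Target: `./src/auth/login.ts`; Expectations: unit tests plus a few integration tests; Specifications: account locks after 5 failed attempts; Technical: Jest with a mocked database. (These values are illustrative, not assumptions about your project.)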
### 3. Progressive Refinement Over Bulk Generation
**Don't:** Generate 100 tests in one shot
**Do:** Generate in rounds with validation:
```
Round 1: Happy path tests (10-15 tests)
  ↓ User reviews and validates in Cursor
Round 2: Edge cases based on feedback (15-20 tests)
  ↓ User reviews and runs tests
Round 3: Error scenarios and integration tests (10-15 tests)
  ↓ User reviews test execution
Round 4: Performance/security tests if needed (5-10 tests)
```
This approach:
- Allows course correction
- Builds user confidence
- Prevents wasted effort
- Enables immediate test execution in Cursor's integrated terminal
- Ensures alignment with expectations
---
## CoreStory Integration Workflow
### Step 1: Project Discovery
Use `CoreStory:list_projects` and `CoreStory:get_project` to identify the active project and load its structure before doing anything else.
### Step 2: Code Understanding
**Note:** Cursor can also read the actual source files directly. Cross-reference CoreStory's architectural knowledge with the actual implementation to catch any spec/code drift.
### Step 3: Requirement Analysis
- If the user provides acceptance criteria → map each AC to test scenarios
- If the user provides business rules → identify testable conditions
- If no specs are provided → ask CoreStory: "What business rules are implemented in {module}?" and ask the user: "What are the critical behaviors to test?"
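For example, the acceptance criterion "account locks after 5 failed login attempts" maps to at least three scenarios: a 4th failed attempt leaves the account active, a 5th failed attempt locks it, and a locked account rejects even valid credentials.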
### Step 4: Test Generation
Generate tests with clear structure, descriptive names, and realistic data, following the best practices below.
**Cursor-Specific:** Always specify the complete output path when creating test files:
"I'll create tests in ./tests/auth/login.test.ts" "Writing integration tests to ./tests/integration/payment-flow.spec.ts"
### Step 5: Validation & Iteration
After generating tests, provide a short summary: files created, number of tests, and what is covered so far. Then ask: "Would you like me to run these tests now to verify they work?"
**Cursor Advantage:** Tests can be executed immediately in the integrated terminal for instant feedback.
---
## Test Generation Best Practices
### Test Structure
**Good Test Structure:**

```ts
describe('User Authentication', () => {
  describe('login', () => {
    it('should return access token when credentials are valid', async () => {
      // Arrange: Set up test data and mocks
      const validUser = { email: '[email protected]', password: 'SecurePass123!' };
      mockDatabase.findUser.mockResolvedValue({ id: 1, ...validUser });

      // Act: Execute the function being tested
      const result = await authService.login(validUser.email, validUser.password);

      // Assert: Verify the outcome
      expect(result).toHaveProperty('accessToken');
      // JWT shape: header.payload.signature
      expect(result.accessToken).toMatch(/^[A-Za-z0-9-_=]+\.[A-Za-z0-9-_=]+\.?[A-Za-z0-9-_.+/=]*$/);
    });

    it('should throw AuthenticationError when password is invalid', async () => {
      // Arrange
      const invalidCredentials = { email: '[email protected]', password: 'WrongPass' };
      mockDatabase.findUser.mockResolvedValue({ id: 1, email: invalidCredentials.email });

      // Act & Assert
      await expect(authService.login(invalidCredentials.email, invalidCredentials.password))
        .rejects
        .toThrow(AuthenticationError);
    });
  });
});
```
**Key Elements:**
- Nested `describe` blocks group tests by feature and method
- Arrange / Act / Assert comments make each phase explicit
- Names state the expected behavior and the condition that triggers it

### Test Naming Conventions

**Pattern:** `should {expected behavior} when {condition}`

**Examples:**
- should return 200 status when request is valid
- should throw ValidationError when email is missing
- should calculate discount correctly when user is premium member
- should lock account after 5 failed login attempts

**Avoid:**
- `test1`, `test2`, `testLogin` (not descriptive)
- `it works` (what works?)
- `handles edge cases` (which edge cases?)

### Edge Case Discovery

Use CoreStory to discover edge cases:
- Query: "What edge cases should I test for {feature}?"
- Query: "What error conditions can occur in {module}?"
- Query: "What boundary conditions exist for {input parameter}?"
Common edge case categories to always consider:
- Input Validation
- State-Dependent Behavior
- Concurrent Access
- External Dependencies
- Business Logic
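As a brief illustration of the input-validation category, a table-driven Jest sketch (here `validateEmail` is a hypothetical function standing in for your real validator):

```ts
// Hypothetical validator, used purely for illustration
import { validateEmail } from '../src/validation';

describe('validateEmail edge cases', () => {
  it.each([
    ['', false],                                  // empty input
    ['a@b.co', true],                             // minimal valid address
    ['no-at-sign.example.com', false],            // missing @
    ['user@' + 'x'.repeat(300) + '.com', false],  // hostname beyond length limits
  ])('validateEmail(%p) returns %p', (input, expected) => {
    expect(validateEmail(input)).toBe(expected);
  });
});
```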
### Test Data

**Principles:**
- Use realistic values that mirror production data shapes
- Make boundary conditions explicit in the data itself
- Keep invalid-data fixtures alongside valid ones so intent is obvious

**Good Test Data:**

```ts
// Realistic user data
const validUser = {
  email: '[email protected]',
  password: 'SecurePass123!',
  firstName: 'John',
  lastName: 'Doe',
  dateOfBirth: '1990-01-15'
};

// Edge case: boundary condition (18th birthday is today)
const userTurning18Today = {
  email: '[email protected]',
  dateOfBirth: new Date(new Date().setFullYear(new Date().getFullYear() - 18))
    .toISOString()
    .split('T')[0]
};

// Edge case: invalid data
const userWithInvalidEmail = {
  email: 'not-an-email',
  password: 'ValidPass123!'
};
```

**Avoid:**
- Meaningless placeholders (`foo`, `asdf`, `test123`) that obscure intent
- Sharing mutable fixtures across tests
- Copying real production data into fixtures
### React Testing Library (Frontend)

**Query priority (use in this order):**
1. `getByRole` (most accessible)
2. `getByLabelText` (forms)
3. `getByPlaceholderText` (inputs)
4. `getByText` (non-interactive elements)
5. `getByTestId` (last resort)

**Example:**

```tsx
import { render, screen, waitFor } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { LoginForm } from './LoginForm'; // component under test (path illustrative)

const mockSubmit = jest.fn();

it('should submit form when all fields are valid', async () => {
  // Arrange
  render(<LoginForm onSubmit={mockSubmit} />);
  const user = userEvent.setup();

  // Act
  await user.type(screen.getByRole('textbox', { name: /email/i }), '[email protected]');
  await user.type(screen.getByLabelText(/password/i), 'SecurePass123!');
  await user.click(screen.getByRole('button', { name: /submit/i }));

  // Assert
  await waitFor(() => {
    expect(mockSubmit).toHaveBeenCalledWith({
      email: '[email protected]',
      password: 'SecurePass123!'
    });
  });
});
```
**Best Practices:**
- Prefer `userEvent` over `fireEvent` for realistic user interactions
- Use `waitFor` for async operations

### Pytest (Python)

**Fixture-Based Setup:**

```python
import pytest
from datetime import datetime

from app import auth_service  # assumed import path for the service under test


@pytest.fixture
def valid_user():
    """Fixture providing a valid user for testing"""
    return {
        'id': 1,
        'email': '[email protected]',
        'password': 'SecurePass123!',
        'created_at': datetime.now()
    }


@pytest.fixture
def mock_database(mocker):
    """Fixture providing a mocked database (requires pytest-mock)"""
    return mocker.patch('app.database.Database')


def test_login_success(valid_user, mock_database):
    # Arrange
    mock_database.find_user.return_value = valid_user

    # Act
    result = auth_service.login(valid_user['email'], valid_user['password'])

    # Assert
    assert 'access_token' in result
    assert result['user_id'] == valid_user['id']
```
**Parameterized Tests:**

```python
@pytest.mark.parametrize("email,password,expected_error", [
    ("", "ValidPass123!", "Email required"),
    ("[email protected]", "", "Password required"),
    ("invalid-email", "ValidPass123!", "Invalid email format"),
    ("[email protected]", "short", "Password too short"),
])
def test_login_validation_errors(email, password, expected_error):
    with pytest.raises(ValidationError, match=expected_error):
        auth_service.login(email, password)
```
**Best Practices:**
- Use `@pytest.mark.parametrize` for multiple test cases
- Use `pytest.raises()` for exception testing

### Playwright (E2E)

**Page Object Pattern:**

```ts
// pages/login.page.ts
import { Page } from '@playwright/test';

export class LoginPage {
  constructor(private page: Page) {}

  async goto() {
    await this.page.goto('/login');
  }

  async login(email: string, password: string) {
    await this.page.getByLabel('Email').fill(email);
    await this.page.getByLabel('Password').fill(password);
    await this.page.getByRole('button', { name: 'Login' }).click();
  }

  async getErrorMessage() {
    return this.page.getByRole('alert').textContent();
  }
}

// tests/login.spec.ts
import { test, expect } from '@playwright/test';
import { LoginPage } from '../pages/login.page';

test('should login successfully with valid credentials', async ({ page }) => {
  // Arrange
  const loginPage = new LoginPage(page);
  await loginPage.goto();

  // Act
  await loginPage.login('[email protected]', 'ValidPass123!');

  // Assert
  await expect(page).toHaveURL('/dashboard');
  await expect(page.getByRole('heading', { name: 'Welcome' })).toBeVisible();
});
```
**Best Practices:**
- Encapsulate page interactions in page objects
- Prefer role- and label-based locators over CSS selectors
- Use web-first assertions (`toHaveURL`, `toBeVisible`) that auto-wait

### JUnit 5 (Java)

**Test Structure:**

```java
@ExtendWith(MockitoExtension.class)
class AuthServiceTest {

    @Mock
    private UserRepository userRepository;

    @Mock
    private PasswordEncoder passwordEncoder;

    @InjectMocks
    private AuthService authService;

    @Test
    @DisplayName("Should return access token when credentials are valid")
    void shouldReturnAccessTokenWhenCredentialsAreValid() {
        // Arrange
        User user = new User("[email protected]", "hashedPassword");
        when(userRepository.findByEmail("[email protected]")).thenReturn(Optional.of(user));
        when(passwordEncoder.matches("ValidPass123!", "hashedPassword")).thenReturn(true);

        // Act
        AuthResult result = authService.login("[email protected]", "ValidPass123!");

        // Assert
        assertThat(result)
            .isNotNull()
            .extracting(AuthResult::getAccessToken)
            .isNotNull();
        verify(userRepository).findByEmail("[email protected]");
        verify(passwordEncoder).matches("ValidPass123!", "hashedPassword");
    }

    @Test
    @DisplayName("Should throw AuthenticationException when password is invalid")
    void shouldThrowAuthenticationExceptionWhenPasswordIsInvalid() {
        // Arrange
        User user = new User("[email protected]", "hashedPassword");
        when(userRepository.findByEmail("[email protected]")).thenReturn(Optional.of(user));
        when(passwordEncoder.matches("WrongPass", "hashedPassword")).thenReturn(false);

        // Act & Assert
        assertThatThrownBy(() -> authService.login("[email protected]", "WrongPass"))
            .isInstanceOf(AuthenticationException.class)
            .hasMessage("Invalid credentials");
    }
}
```
**Parameterized Tests:**

```java
@ParameterizedTest
@CsvSource({
    "'', ValidPass123!, Email is required",
    "[email protected], '', Password is required",
    "invalid-email, ValidPass123!, Invalid email format",
    "[email protected], short, Password too short"
})
@DisplayName("Should throw ValidationException for invalid inputs")
void shouldThrowValidationExceptionForInvalidInputs(
        String email,
        String password,
        String expectedMessage
) {
    assertThatThrownBy(() -> authService.login(email, password))
        .isInstanceOf(ValidationException.class)
        .hasMessage(expectedMessage);
}
```
**Best Practices:**
- Use `@Mock` and `@InjectMocks` for dependency injection
- Use `@DisplayName` for descriptive test names
- Use `@ParameterizedTest` for multiple test cases

### BDD / Gherkin

**BDD is most valuable when:**
- Requirements come from non-technical stakeholders
- Acceptance criteria already read naturally as Given/When/Then
- Scenarios will double as living documentation for the business
**Structure:**

```gherkin
Feature: User Authentication
  As a registered user
  I want to log in to the system
  So that I can access my account

  Background:
    Given the system is running
    And the database is populated with test users

  Scenario: Successful login with valid credentials
    Given I am on the login page
    When I enter email "[email protected]"
    And I enter password "ValidPass123!"
    And I click the "Login" button
    Then I should be redirected to the dashboard
    And I should see a welcome message

  Scenario: Failed login with invalid password
    Given I am on the login page
    When I enter email "[email protected]"
    And I enter password "WrongPassword"
    And I click the "Login" button
    Then I should see an error message "Invalid credentials"
    And I should remain on the login page

  Scenario Outline: Account lockout after failed attempts
    Given I have attempted to login <attempts> times with wrong password
    When I attempt to login again with wrong password
    Then my account should be <status>
    And I should see message "<message>"

    Examples:
      | attempts | status | message                           |
      | 4        | active | Invalid credentials               |
      | 5        | locked | Account locked due to too many... |
```
**Best Practices:**
- Keep each scenario focused on a single behavior
- Write steps in business language, not UI mechanics
- Use `Background` for shared setup and `Scenario Outline` for data variations
After creating Gherkin scenarios, generate corresponding step definitions:
**Cucumber (JavaScript/TypeScript):**

```ts
import { Given, When, Then } from '@cucumber/cucumber';
import { expect } from '@playwright/test';

Given('I am on the login page', async function () {
  await this.page.goto('/login');
});

When('I enter email {string}', async function (email: string) {
  await this.page.getByLabel('Email').fill(email);
});

When('I enter password {string}', async function (password: string) {
  await this.page.getByLabel('Password').fill(password);
});

When('I click the {string} button', async function (buttonName: string) {
  await this.page.getByRole('button', { name: buttonName }).click();
});

Then('I should be redirected to the dashboard', async function () {
  await expect(this.page).toHaveURL('/dashboard');
});

Then('I should see a welcome message', async function () {
  await expect(this.page.getByText(/welcome/i)).toBeVisible();
});
```
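These steps assume a custom Cucumber World that exposes `this.page`. A minimal sketch of that wiring, assuming `@cucumber/cucumber` with Playwright (file path and class name are illustrative):

```ts
// support/world.ts - minimal custom World exposing a Playwright page (sketch)
import { setWorldConstructor, Before, After, World } from '@cucumber/cucumber';
import { chromium, Browser, Page } from '@playwright/test';

class PlaywrightWorld extends World {
  browser!: Browser;
  page!: Page;
}
setWorldConstructor(PlaywrightWorld);

// Launch a fresh browser page before each scenario
Before(async function (this: PlaywrightWorld) {
  this.browser = await chromium.launch();
  this.page = await this.browser.newPage();
});

// Clean up after each scenario
After(async function (this: PlaywrightWorld) {
  await this.browser.close();
});
```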
**SpecFlow (C#/.NET):**

```csharp
[Binding]
public class AuthenticationSteps
{
    private readonly IWebDriver driver;
    private readonly ScenarioContext scenarioContext;

    public AuthenticationSteps(ScenarioContext scenarioContext)
    {
        this.scenarioContext = scenarioContext;
        this.driver = scenarioContext.Get<IWebDriver>("driver");
    }

    [Given(@"I am on the login page")]
    public void GivenIAmOnTheLoginPage()
    {
        driver.Navigate().GoToUrl("http://localhost:3000/login");
    }

    [When(@"I enter email ""(.*)""")]
    public void WhenIEnterEmail(string email)
    {
        driver.FindElement(By.Id("email")).SendKeys(email);
    }

    [When(@"I enter password ""(.*)""")]
    public void WhenIEnterPassword(string password)
    {
        driver.FindElement(By.Id("password")).SendKeys(password);
    }

    [When(@"I click the ""(.*)"" button")]
    public void WhenIClickTheButton(string buttonText)
    {
        driver.FindElement(By.XPath($"//button[text()='{buttonText}']")).Click();
    }

    [Then(@"I should be redirected to the dashboard")]
    public void ThenIShouldBeRedirectedToTheDashboard()
    {
        Assert.That(driver.Url, Does.Contain("/dashboard"));
    }
}
```
**When generating BDD tests:**
1. Query CoreStory for business context:
   - "What are the user stories for the authentication feature?"
   - "What are the acceptance criteria for user login?"
2. Map requirements to scenarios
3. Generate both Gherkin AND step definitions
4. Keep scenarios business-focused
**Cucumber:**
- Supports JavaScript, TypeScript, Ruby, and Java
- Uses `@cucumber/cucumber` for JavaScript/TypeScript

**SpecFlow:**
- Uses LivingDoc or SpecFlow+ for living documentation
BDD tests serve as living documentation. Encourage users to keep scenarios in sync with evolving requirements and to share them beyond the engineering team.

BDD is collaborative: before finalizing generated scenarios, encourage users to review them with business stakeholders.
---

### Reporting Results

**Coverage Summary:**

```
Coverage Analysis:
✅ Happy path: 100% (5/5 acceptance criteria)
✅ Edge cases: 80% (12/15 identified scenarios)
✅ Error scenarios: 90% (9/10 error paths)
⚠️ Performance: 0% (no performance tests yet)
Overall: ~85% comprehensive coverage
```
**Gap Identification:**

```
Potential gaps to consider:
- Concurrent user access scenarios
- Rate limiting behavior
- Long-running operations (> 30s)
- Network retry logic
Would you like me to generate tests for these?
```
**Test Execution Estimate:**

```
Estimated test execution time:
- Unit tests (45 tests): ~2 seconds
- Integration tests (12 tests): ~8 seconds
- E2E tests (5 tests): ~45 seconds
Total: ~55 seconds
Run with: npm test
```
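If the project uses Jest, coverage goals like those above can also be enforced in configuration. A minimal sketch (the threshold numbers are illustrative, not recommendations):

```ts
// jest.config.ts - illustrative coverage thresholds; adjust to the project
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    // Fails the test run if coverage drops below these percentages
    global: { branches: 80, functions: 85, lines: 85, statements: 85 },
  },
};

export default config;
```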
**Cursor-Specific: Immediate Execution Option:**

```
Ready to validate? I can run these tests now in the integrated terminal.
Would you like me to:
1. Run all tests
2. Run only unit tests first
3. Run tests in watch mode
```
### Quality Checklist

Before finalizing generated tests, verify:
- Tests are independent of execution order
- Tests assert behavior rather than implementation details
- Mocks cover only external dependencies
- Assertions are specific (not just `expect(result).toBeTruthy()`)

```ts
// BAD: Relies on specific implementation
expect(component.state.counter).toBe(5);

// GOOD: Tests behavior
expect(screen.getByText('Count: 5')).toBeInTheDocument();

// BAD: Mocking internal implementation
mockPrivateMethod.mockReturnValue(true);

// GOOD: Mock only external dependencies
mockApiClient.get.mockResolvedValue(data);

// BAD: Not specific enough
expect(result).toBeTruthy();

// GOOD: Specific expectations
expect(result).toEqual({
  id: expect.any(Number),
  email: '[email protected]',
  createdAt: expect.any(String)
});

// BAD: Tests depend on execution order
let userId;
test('creates user', () => { userId = createUser(); });
test('updates user', () => { updateUser(userId); });

// GOOD: Each test is independent
test('creates user', () => {
  const userId = createUser();
  expect(userId).toBeDefined();
});

test('updates user', () => {
  const userId = createUser();
  updateUser(userId);
  expect(getUser(userId).updated).toBe(true);
});
```
---

## Cursor-Specific Workflows

### Agent Mode

When users activate Agent mode for complex test generation, let the Agent drive the full loop: query CoreStory, create files, run tests, and iterate on failures.
**Example Agent prompt:**

```
"I'm working on CoreStory project {project-id}.
Create a complete test suite for the authentication module:
1. Query CoreStory for auth specifications
2. Generate unit tests for each auth method in ./tests/unit/auth/
3. Generate integration tests for auth flows in ./tests/integration/auth/
4. Generate E2E tests for login/logout in ./tests/e2e/auth/
5. Set up test fixtures and mocks in ./tests/fixtures/
6. Run tests and fix any failures
7. Generate coverage report"
```

The Agent will autonomously:
- Query CoreStory for the auth specifications
- Create test files, fixtures, and mocks in the specified directories
- Run the tests and fix any failures
- Generate the coverage report
### Composer

For iterative test generation:
User opens Composer and types:
"Generate unit tests for ./src/auth/login.ts
Output to ./tests/unit/auth/login.test.ts"
Cursor generates tests, creates the file, user reviews.
User continues in same Composer session:
"Add edge cases for rate limiting and account lockout"
Cursor adds tests to existing file, maintaining context.
### Inline Edits (Cmd+K)

For quick test fixes:
1. The user sees a failing test in the editor
2. Selects the test function
3. Presses Cmd+K
4. Types: "Fix this test - the assertion is checking the wrong property"

Cursor edits the test inline with the correct assertion.
### Dual Context: CoreStory + Source Files

Take advantage of Cursor's ability to read BOTH CoreStory knowledge AND actual source files:
Example workflow:
1. Query CoreStory: "What should the authentication module do according to specs?"
2. Read source: View ./src/auth/login.ts
3. Compare: Identify differences between spec and implementation
4. Generate tests covering both specified behavior AND actual implementation
5. Flag any spec/implementation mismatches for user
This catches:
- Undocumented features that need tests
- Implementation drift from original specs
- Edge cases only visible in code
- Business rules that evolved without doc updates
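When a mismatch surfaces, flag it directly in the generated test rather than silently picking one side. An illustrative sketch (assuming a hypothetical `authService`; the attempt counts are invented for the example):

```ts
// Illustrative only: surface spec/implementation drift instead of hiding it
it('should lock the account after repeated failed login attempts', async () => {
  // NOTE(spec-drift): CoreStory's spec and ./src/auth/login.ts disagree on the
  // attempt threshold. This test encodes the spec'd value (5) and flags the
  // discrepancy so the user can decide which behavior is correct.
  for (let i = 0; i < 5; i++) {
    await expect(authService.login('[email protected]', 'WrongPass')).rejects.toThrow();
  }
  await expect(authService.login('[email protected]', 'SecurePass123!'))
    .rejects.toThrow(/account locked/i);
});
```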
### TDD Watch Mode

For TDD workflows, after generating initial tests:
"I've created tests in ./tests/auth/login.test.ts.
Should I run them in watch mode so you can see immediate feedback as you implement?"
If the user agrees, run: `npm test -- --watch auth/login.test.ts`

Tests auto-run on every code change, which is perfect for the TDD cycle.
### Communication Guidelines

- **Clarify scope if needed:** confirm the target files, test types, and coverage goals before generating (see the T.E.S.T. framework above).
- **Set expectations:** describe the round-based plan up front so the user knows what's coming.
- **Use CoreStory proactively:** query the MCP server before generating rather than guessing at structure.
- **Provide context with output:** state file paths, test counts, and coverage alongside every deliverable.
- **Invite iteration:** end each round with concrete next-step options (run, extend, review).
---

## Example Session

**User (in Cursor Composer):** "Generate tests for the payment processing module"

**Cursor:**

I'll help you generate comprehensive tests for payment processing. Let me start by understanding the codebase structure with CoreStory.
[Uses CoreStory:list_projects, CoreStory:get_project, CoreStory:create_conversation]
[Uses CoreStory:send_message to query about payment module]
I see the payment processing module handles:
- Credit card payments via Stripe
- Refund processing
- Payment retry logic
- Fraud detection
I'll generate tests in rounds and create them in your test directory:
Round 1: Happy path tests → ./tests/unit/payment/
- Successful payment processing
- Successful refunds
- Payment confirmation
Round 2: Edge cases → ./tests/unit/payment/edge-cases/
- Insufficient funds
- Invalid card details
- Network timeout during payment
- Concurrent payment attempts
Round 3: Integration tests → ./tests/integration/payment/
- Stripe API integration
- Database transaction handling
- Webhook processing
Does this approach work for you? Any specific scenarios you want me to prioritize?
User: "Yes, also include tests for the fraud detection"
Cursor:
Perfect! I'll add fraud detection tests covering:
- Suspicious transaction patterns
- Velocity checks (too many transactions)
- Geographic anomalies
- Amount thresholds
Starting with Round 1 - Happy Path Tests...
[Creates ./tests/unit/payment/payment-processing.test.ts with 12 tests]
✅ Created ./tests/unit/payment/payment-processing.test.ts
Coverage: 100% of main payment flows, 0% of edge cases yet.
Run tests with: npm test payment/payment-processing.test.ts
Would you like me to:
1. Run these tests now to verify they work?
2. Continue to Round 2 (edge cases)?
3. Review the tests first?
User: "Run them"
Cursor:
[Executes in integrated terminal: npm test payment/payment-processing.test.ts]
✅ All 12 tests passed! (2.3s)
Great! The happy path tests are working. Ready for Round 2 (edge cases)?
A successful test generation session includes tests that run green in the integrated terminal, clear file locations, an honest coverage summary, and a user who knows what to ask for next.

If the user seems unsatisfied, ask what's missing (wrong scenarios, wrong style, or wrong scope) and regenerate the affected round rather than starting over.

Always remember: query CoreStory before generating, work in rounds, and never hallucinate code structure.

Cursor's advantages for test generation: direct file creation at explicit paths, immediate execution in the integrated terminal, inline fixes with Cmd+K, and the ability to cross-reference CoreStory knowledge with actual source files.
The goal is not just generating tests, but generating confidence in the codebase through comprehensive, maintainable test coverage that executes reliably in Cursor's development environment.