For: Fast lookup of common patterns, prompts, and best practices
Use: Keep this open for quick reference while generating tests

I'm working on CoreStory project {project-id}.
Generate {test-type} tests for {module/feature} using {framework}.
Cover {requirements}.
Output to ./tests/{module}/

I'm working on CoreStory project {project-id}.
Generate comprehensive tests for {feature} including:
- Unit tests for business logic
- Integration tests for dependencies
- Edge cases and boundary conditions
- Error scenarios
Use {framework}. Organize by {criterion}.
Output to ./tests/{feature}/

I'm working on CoreStory project {project-id}.
I'm about to implement {feature}. Generate failing tests first (TDD approach).
Requirements:
- {requirement 1}
- {requirement 2}
- {requirement 3}
Use {framework}. Tests should be RED (failing) until I implement the feature.
Output to ./tests/{feature}/

I'm working on CoreStory project {project-id}.
Analyze {module} and identify all edge cases, then generate tests for them.
Focus on:
- Boundary conditions
- Invalid inputs
- Error scenarios
- Race conditions
Output to ./tests/{module}/edge-cases/

I'm working on CoreStory project {project-id}.
Generate integration tests for API endpoints in {path}.
Test:
- Request/response validation
- Authentication scenarios
- Error handling (400, 401, 403, 404, 500)
- Edge cases
Use {framework}.
Output to ./tests/integration/api/

I'm working on CoreStory project {project-id}.
Analyze test coverage for {module}:
1. Identify gaps against requirements
2. List top 10 missing tests
3. Generate the highest-priority missing tests
Requirements:
{paste requirements or ACs}
Output to ./tests/{module}/

describe('FeatureName', () => {
  describe('methodName', () => {
    it('should {behavior} when {condition}', () => {
      // Arrange
      const input = { /* test data */ };
      const expected = { /* expected result */ };

      // Act
      const result = functionUnderTest(input);

      // Assert
      expect(result).toEqual(expected);
    });
  });
});