Purpose: This file provides Claude with expert-level guidance for generating comprehensive, high-quality test suites using CoreStory’s MCP server. When a user asks Claude to generate tests, Claude should reference these principles and workflows.
Before generating any tests, follow this workflow (a code sketch follows the list):
1. list_projects → confirm project_id with user
2. get_project → verify project details
3. create_conversation → establish context for this test generation session
4. send_message → ask CoreStory about the specific module/feature to test
5. Generate tests based on ACTUAL specifications or source code, not assumptions
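Sketched as code, the workflow might look like the following. This is a minimal sketch, assuming `call_tool` is your MCP client's generic tool-invocation function; the tool names come from this guide, but the parameter names and return shapes are assumptions, not CoreStory's documented schema.

```python
from typing import Any, Callable

def prepare_test_session(call_tool: Callable[..., Any], feature: str) -> Any:
    """Run the pre-generation workflow and return CoreStory's answer."""
    projects = call_tool("list_projects")                     # step 1: enumerate projects
    print("Available projects:", projects)
    project_id = input("Which project_id should we use? ")    # step 1: user confirms
    call_tool("get_project", project_id=project_id)           # step 2: verify details
    conv = call_tool("create_conversation",                   # step 3: session context
                     project_id=project_id,
                     title=f"Test Generation for {feature}")
    return call_tool("send_message",                          # step 4: query the target
                     conversation_id=conv["id"],              # assumed return field
                     message=f"What does {feature} do, and which edge cases matter?")
```

Step 5 then happens on Claude's side: generate tests from what CoreStory actually returned, never from guesses.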
Every test generation request should extract: the target module or feature, its dependencies, the business logic under test, and the edge cases to cover. If any component is unclear, ASK before generating.
Don’t: Generate 100 tests in one shot.
Do: Generate in rounds with validation:
Round 1: Happy path tests (10-15 tests)
↓ User reviews
Round 2: Edge cases based on feedback (15-20 tests)
↓ User reviews
Round 3: Error scenarios and integration tests (10-15 tests)
↓ User reviews
Round 4: Performance/security tests if needed (5-10 tests)
This approach keeps each batch small enough to review carefully, lets user feedback shape the next round, and catches misunderstandings before they propagate into dozens of tests.
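As a concrete illustration of the first two rounds, here is a minimal pytest sketch. The `parse_amount` function and its `billing` module are hypothetical stand-ins for whatever CoreStory described; only the batching pattern is the point.

```python
import pytest

from billing import parse_amount  # hypothetical module under test

# Round 1: happy-path tests, small enough to review in one sitting.
def test_parse_amount_plain_integer():
    assert parse_amount("42") == 42

def test_parse_amount_decimal():
    assert parse_amount("19.99") == pytest.approx(19.99)

# Round 2, only after the user reviews Round 1: edge cases from feedback.
@pytest.mark.parametrize("raw", ["", "   ", "abc", None])
def test_parse_amount_rejects_invalid_input(raw):
    with pytest.raises((ValueError, TypeError)):
        parse_amount(raw)
```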
Project selection (a code sketch follows this list):
1. If the user mentions a project_id → use it
2. If not → call list_projects
3. Present the options to the user: "I see you have these projects: [list]. Which one should we work on?"
4. Call get_project to confirm and show project details
5. Call get_project_stats to check whether the codebase is fully indexed
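A sketch of this selection step, under the same assumptions as before: the tool names match the steps above, while the argument names and return fields (`name`, `fully_indexed`) are guesses about CoreStory's schema.

```python
from typing import Any, Callable

def select_project(call_tool: Callable[..., Any],
                   mentioned_id: str | None = None) -> str:
    if mentioned_id:                                         # 1. user named a project
        project_id = mentioned_id
    else:                                                    # 2-3. list and ask
        projects = call_tool("list_projects")
        names = ", ".join(p["name"] for p in projects)       # assumed field
        project_id = input(f"I see you have these projects: {names}. "
                           "Which one should we work on? ")
    call_tool("get_project", project_id=project_id)          # 4. confirm details
    stats = call_tool("get_project_stats", project_id=project_id)  # 5. indexing
    if not stats.get("fully_indexed", False):                # assumed field
        print("Warning: the codebase may not be fully indexed yet.")
    return project_id
```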
Context gathering (a code sketch follows this list):
1. Create a conversation: create_conversation(project_id, title="Test Generation for {feature}")
2. Query CoreStory about the target:
- "What does the {module} do?"
- "What are the dependencies of {component}?"
- "Show me the business logic in {file}"
- "What edge cases should I consider for {feature}?"
3. Use send_message to send each query and collect contextual information
4. Build a mental model of the code before generating tests
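Put together, the context-gathering loop might look like the sketch below. The question templates are taken from the list above; the `conversation_id` and `message` parameter names are assumptions about the send_message tool.

```python
from typing import Any, Callable

QUESTIONS = [
    "What does the {module} do?",
    "What are the dependencies of {component}?",
    "Show me the business logic in {file}",
    "What edge cases should I consider for {feature}?",
]

def gather_context(call_tool: Callable[..., Any],
                   conversation_id: str, target: str) -> list[Any]:
    # For this sketch every placeholder is filled with the same target;
    # in practice each would name the specific module, component, or file.
    # The answers become the mental model the tests are generated from.
    return [call_tool("send_message",
                      conversation_id=conversation_id,
                      message=q.format(module=target, component=target,
                                       file=target, feature=target))
            for q in QUESTIONS]
```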