Add the markdown file below to your claudeskills folder per these instructions:
CoreStory + Claude Code Agentic Bug Resolution Playbook
# CoreStory + Claude Bug Resolver Skill File
**Description:** Automatically resolves bugs using CoreStory’s code intelligence and TDD methodology
**When to activate:** User requests bug fix/investigation OR provides a ticket ID (e.g., “Fix bug #6992”, “Investigate ticket JIRA-123”)
**Prerequisites:**
- CoreStory MCP server configured
- At least one CoreStory project with completed ingestion
- (Optional) Ticketing system MCP (GitHub Issues, Jira, ADO, Linear)
---
## Skill Execution
When this skill activates, systematically execute all six phases of the CoreStory bug resolution workflow.
### PHASE 1: Bug Intake & Context Gathering
**Objective:** Import bug details and set up CoreStory investigation environment
**Actions:**
1. **Extract Bug Information**
If user provided a ticket ID (e.g., #6992, JIRA-123):
- Determine ticketing system from format or ask user
- Use appropriate MCP to fetch ticket details
- Parse: description, symptoms, reproduction steps, expected/actual behavior
If user described bug directly:
- Extract symptom description from user message
- Ask for reproduction steps if not provided
- Clarify expected vs actual behavior
2. **Select CoreStory Project**
```
Use CoreStory MCP: list_projects
```
- If multiple projects: Ask user which one
- If one project: Auto-select
- Verify project status is “completed”
- If no projects: Error - ask user to create CoreStory project first
3. **Create Investigation Conversation**
```
Use CoreStory MCP: create_conversation
Title: "Bug Investigation: #[ID] - [brief description]"
Project: [selected-project-id]
```
Store conversation_id for all subsequent queries.
**Output to user:**
🔍 Starting bug investigation for [ticket-id]
Bug: [brief description]
Symptoms: [what's broken]
Expected: [correct behavior]

Created CoreStory investigation conversation: [conversation-id]
Proceeding to understand system architecture...
---
### PHASE 2: Understanding System Behavior (Oracle Phase)
**Objective:** Establish ground truth about intended system behavior
**Actions:**
Send these three CoreStory queries in sequence. After each, summarize key insights to user.
**Query 1: Architecture Discovery**
Use CoreStory MCP: send_message
Conversation: [conversation-id]
Message: "What files are responsible for [affected feature based on bug description]? I need to understand:
Parse response for:
- File names (e.g., dataset.py, user_auth.py)
- Test file names
- Related modules
- Recent PRs/changes mentioned
**Query 2: Invariants & Data Structures**
Use CoreStory MCP: send_message
Message: "What are the key data structures involved in [feature]? What invariants should be maintained? Specifically:
Parse response for:
- Critical variables (e.g., _coord_names, _variables)
- **Invariants** (e.g., “coord_names ⊆ variables.keys()”) - CRITICAL
- Business rules
- Expected behavior descriptions
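Once an invariant like "coord_names ⊆ variables.keys()" is parsed out, it helps to encode it as an executable check that the later test phases can assert against. A minimal sketch, assuming set/dict inputs; `check_coord_invariant` is an invented helper, not a CoreStory or library API:

```python
# Hypothetical sketch: turn the parsed invariant
# "coord_names ⊆ variables.keys()" into a reusable check.
def check_coord_invariant(coord_names: set, variables: dict) -> list:
    """Return coordinate names that violate the invariant (empty = holds)."""
    return sorted(coord_names - variables.keys())

# "time" is a coordinate name with no backing variable, so it is reported.
violations = check_coord_invariant({"x", "time"}, {"x": [0, 1], "temp": [20, 21]})
```

Checks like this become the invariant assertions in the Phase 4 failing test.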
**Query 3: Historical Context**
Use CoreStory MCP: send_message
Message: "Have there been recent changes to [feature]? What was the design intent? Are there related user stories or issues?"
Parse response for:
- Related PR numbers
- Design rationale
- Similar past bugs
- User stories
**Output to user:**
📚 System Behavior Analysis Complete
Key Files:
Critical Invariants:
Data Structures:
Design Context:
Proceeding to hypothesis generation...
---
### PHASE 3: Hypothesis Generation (Navigator Phase)
**Objective:** Map symptoms to specific code locations and probable causes
**Actions:**
Send these three CoreStory queries. Build ranked list of hypotheses.
**Query 1: Map Symptoms to Code Paths**
Use CoreStory MCP: send_message
Message: "If there's a bug where [symptom from bug report], what are the specific code paths I should investigate? Walk me through the logic flow step by step."
Parse response for:
- Entry points
- Logic flow sequence
- State transition points
- Likely failure points
**Query 2: Root Cause Candidates**
Use CoreStory MCP: send_message
Message: "Based on the symptom that [detailed symptom], what are the most likely root causes? Which variables or state management issues could cause this? Rank them by probability."
Parse response for:
- Ranked hypotheses (high/medium/low probability)
- Specific variables to inspect
- State management concerns
**Query 3: Precise Navigation**
Use CoreStory MCP: send_message
Message: "What specific methods in [file from Phase 2] handle [operation]? Where should I look for the [state update/validation/cleanup] logic?"
Parse response for:
- Exact file names
- Method names
- Line number hints if provided
- Related test methods
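The ranked hypothesis list this phase builds can be tracked in a small structure so the highest-probability candidate is investigated first. A sketch with invented field names (not a CoreStory schema):

```python
from dataclasses import dataclass, field

# Illustrative ordering for the high/medium/low rankings parsed above.
PROBABILITY_ORDER = {"high": 0, "medium": 1, "low": 2}

@dataclass
class Hypothesis:
    description: str
    location: str                      # "file.py:method" from Query 3
    probability: str                   # "high" | "medium" | "low"
    variables: list = field(default_factory=list)  # variables to inspect

def rank(hypotheses: list) -> list:
    """Order hypotheses so the most probable is investigated first."""
    return sorted(hypotheses, key=lambda h: PROBABILITY_ORDER[h.probability])
```

Phase 4 then starts with `rank(hypotheses)[0]` and falls back to the alternatives only if the first is disconfirmed.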
**Output to user:**
🎯 Investigation Targets Identified
Most Likely Root Cause (High Probability): [description]
Location: [file]:[method]
Alternative Hypotheses:
Code Path to Investigate: [step-by-step flow]
Proceeding to test-first investigation...
---
### PHASE 4: Test-First Investigation
**Objective:** Write failing test, validate it, then investigate code
**CRITICAL:** Tests come BEFORE code reading. This is non-negotiable.
**Actions:**
**Step 1: Write Failing Test**
Based on:
- Expected behavior from Phase 2
- Symptom description from Phase 1
- Invariants from Phase 2
Create test file or add to existing test file:
```python
def test_[bug_id]_[descriptive_name]():
    """Test [bug description].

    Bug: [ticket-id] - [one-line description]
    Expected: [correct behavior]
    Invariant: [invariant that should hold]
    """
    # Setup: [create scenario from reproduction steps]
    [setup code]

    # Action: [perform buggy operation]
    [operation that triggers bug]

    # Assert: [test expected behavior]
    # These will FAIL until we fix the bug
    assert [primary assertion]
    assert [invariant check]
    [additional assertions]
```
**Step 2: Verify Test Fails**
Run test suite for this specific test
Expect: FAILED
If test passes → Bug doesn’t exist or test is wrong. Ask CoreStory for clarification.
**Output to user:**
✅ Test written and verified to fail (confirms bug exists)
**Test:** test_[name]
**Failure:** [assertion that failed]
This confirms the bug is reproducible.
**Step 3: Validate Test with CoreStory**
Use CoreStory MCP: send_message
Message: "I've written this test to reproduce the bug:
[paste full test code]
Does this correctly test the expected behavior according to the system design?
Are there edge cases I'm missing?"
If CoreStory suggests improvements, update test.
**Step 4: Read Code**
NOW (and only now) read the files identified in Phase 3:
Use Read tool on:
- [primary implementation file]
- [test file for reference]
Focus on:
**Step 5: Identify Bug**
Compare actual code against expected behavior from Phase 2.
Look for:
**Step 6: Validate Finding with CoreStory**
Use CoreStory MCP: send_message
Message: "Looking at line [X] in [file]:
```[language]
[paste relevant code snippet]
```

I think this is the bug because [explanation of why this violates expected behavior/invariant].
Does this align with the intended design? Should the code be:

```[language]
[paste proposed fix]
```

So that [explanation of how fix restores invariant]?"
Wait for CoreStory confirmation before proceeding.
**Output to user:**
🐛 Bug Located!
File: [file]:[line]
Issue: [what’s wrong]
Invariant Violated: [which invariant]
Proposed Fix: [brief description]
CoreStory has validated this is the correct root cause. Proceeding to implement fix…
---
### PHASE 5: Solution Development
**Objective:** Implement minimal fix, verify with tests, add edge cases
**Actions:**
**Step 1: Implement Fix**
Use Edit tool to make the minimal code change
Guidelines:
- Smallest change that restores invariant
- Follow architectural patterns from CoreStory
- Add comments referencing invariant if complex
**Step 2: Verify Test Passes**
```bash
Run the test from Phase 4
Expect: PASSED
```

If still fails → Fix is incomplete or wrong. Return to investigation.

**Output to user:**
✅ Fix implemented - test now passes!
**Change:** [brief description of change]
**Invariant Restored:** [which invariant]
**Step 3: Validate Fix with CoreStory**
Use CoreStory MCP: send_message
Message: "I've implemented this fix:
```[language]
[paste diff or description]
```

Does this align with the system architecture? Does it maintain all invariants? Could it have unintended side effects?"
If CoreStory raises concerns, address them.
**Step 4: Identify Edge Cases**
Use CoreStory MCP: send_message Message: “My basic test passes now. What edge cases should I test for [feature]? What scenarios might break the invariant [invariant] in different ways?”
Based on response, add edge case tests:
- Boundary conditions
- Empty/null inputs
- Different parameter combinations
- Integration scenarios
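Edge cases like these are convenient to encode as a table-driven test, one scenario per row. A minimal sketch; `safe_head` is an invented stand-in for the fixed code, and the cases mirror the boundary/empty/parameter categories above:

```python
# Hypothetical table-driven edge-case test; safe_head stands in for the fix.
def safe_head(items: list, n: int) -> list:
    """Return up to the first n items, tolerating empty input and n <= 0."""
    if n <= 0:
        return []
    return items[:n]

EDGE_CASES = [
    ([], 3, []),           # empty input
    ([1, 2, 3], 0, []),    # zero boundary
    ([1, 2, 3], -1, []),   # negative parameter
    ([1, 2], 5, [1, 2]),   # n larger than input
]

for items, n, expected in EDGE_CASES:
    assert safe_head(items, n) == expected, (items, n)
```

Each row that CoreStory's edge-case answer suggests becomes one more tuple in the table, keeping the suite easy to extend.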
**Step 5: Run Full Test Suite**
```bash
Run complete test suite for the module/component
```

Ensure:

If regressions found → Fix introduced side effects. Revise approach.

**Output to user:**
🧪 Comprehensive Testing Complete
**Tests Added:** [count]
**Edge Cases Covered:**
- [list edge cases]
**Full Test Suite:** ✅ All passing (no regressions)
---
### PHASE 6: Documentation & Closure
**Objective:** Close loop, preserve knowledge, document thoroughly
**Actions:**
**Step 1: Update Ticket (if integrated)**
If ticketing MCP available:
Use [GitHub/Jira/ADO/Linear] MCP: update_issue
Issue: [ticket-id]
Comment: "**Investigation Summary:**
Root cause: [description]
Fix: [description]
Tests added: [count]
**Commit:** [will be added after commit]"
Status: In Review (or Resolved, based on team workflow)
**Step 2: Commit with Rich Context**

```bash
git add [files]
git commit -m "$(cat <<'EOF'
Fix: [Brief one-line description of bug]

**Problem:**
[Symptom and user impact]

**Root Cause:**
[What was wrong and why - reference invariant if applicable]

**Solution:**
[What was changed and why this fixes the root cause]

**Invariants Restored:**
[List invariants now maintained]

**Testing:**
- Added test_[primary_test_name]
- Added [count] edge case tests
- All existing tests pass (no regressions)

**References:**
- Issue: [ticket-id]
- CoreStory Investigation: [conversation-id]

🤖 Generated with Claude Code

Co-Authored-By: Claude <[email protected]>
EOF
)"
```
**Step 3: Rename CoreStory Conversation**
Use CoreStory MCP: rename_conversation
Conversation: [conversation-id]
New Title: "Bug Investigation: #[ID] - RESOLVED - [brief description]"
This marks the conversation as resolved and preserves it for future reference.
**Step 4: Final Report to User**
**Output to user:**
✅ Bug #[ID] RESOLVED
**Summary:**
- Root Cause: [one-line description]
- Fix: [one-line description]
- Time to Root Cause: [calculate if possible]
- Tests Added: [count]
- Invariants Restored: [list]
**Commit:** [commit hash if available]
**CoreStory Investigation:** [conversation-id] (preserved for future reference)
**Quality Metrics:**
- First hypothesis: ✅ Correct (guided by CoreStory)
- Regressions: None
- Test coverage: Comprehensive
- Documentation: Complete
The fix is ready for review and merge.
---
## Special Scenarios
**Security-sensitive bugs**
Automatically check:
Use CoreStory MCP: send_message
Message: "What security considerations apply to [feature]? Are there security
requirements I should verify? Could this bug have security implications?"
Include security validation in tests.
**Integration-heavy bugs**
Check integration impact:
Use CoreStory MCP: send_message
Message: "What other systems or components integrate with [feature]? What
downstream impacts should I consider if I change [behavior]?"
Add integration tests for dependent components.
**Multiple related bugs**
Create separate conversations but cross-reference:
Use CoreStory MCP: send_message
Message: "I'm investigating [bug A], [bug B], and [bug C] which seem related.
Are there common patterns or root causes? Could they stem from the same
underlying issue?"
Consider unified fix if appropriate.
**Performance bugs**
Ask about performance expectations:
Use CoreStory MCP: send_message
Message: "What are the performance characteristics of [feature]? What's the
expected complexity? Are there known performance bottlenecks?"
Add performance regression tests.
---
## Scope
This skill successfully resolves a bug when:
This skill should NOT be used for:
In these cases, defer to standard development workflow.