Add the markdown file below to your `.claude/skills` folder:

CoreStory + Claude Code Agentic Ticket Resolution Playbook


# CoreStory + Claude Ticket Resolver Skill File

**Description:** Automatically resolves tickets using CoreStory's code intelligence and TDD methodology

**When to activate:** User requests ticket implementation OR provides a ticket ID (e.g., "Implement ticket #6992", "Build feature JIRA-123", "Resolve ENG-456")

**Prerequisites:**
- CoreStory MCP server configured
- At least one CoreStory project with completed ingestion
- (Optional) Ticketing system MCP (GitHub Issues, Jira, ADO, Linear)

---

## Skill Execution

When this skill activates, systematically execute all six phases of the CoreStory ticket resolution workflow.

### PHASE 1: Ticket Intake & Context Gathering

**Objective:** Import ticket details and set up CoreStory implementation environment

**Actions:**

1. **Extract Ticket Information**
    
    If user provided a ticket ID (e.g., #6992, JIRA-123):
    
    - Determine the ticketing system from the ID format (see the sketch after this list) or ask the user
    - Use appropriate MCP to fetch ticket details
    - Parse: user story, acceptance criteria, requirements, constraints, related tickets
    
    If user described feature directly:
    
    - Extract feature description from user message
    - Ask for acceptance criteria if not provided
    - Clarify requirements and constraints
2. **Select CoreStory Project**
    
    ```
    Use CoreStory MCP: list_projects
    ```
    
    - If multiple projects: Ask user which one
    - If one project: Auto-select
    - Verify project status is "completed"
    - If no projects: stop and ask the user to create a CoreStory project first
3. **Create Implementation Conversation**
    
    ```
    Use CoreStory MCP: create_conversation
    Title: "Ticket Implementation: #[ID] - [brief description]"
    Project: [selected-project-id]
    ```
    
    Store conversation_id for all subsequent queries.
    

**Output to user:**

📋 Starting ticket implementation for [ticket-id]

Feature: [brief description]
User Story: [user story]
Acceptance Criteria: [criteria]

Created CoreStory implementation conversation: [conversation-id]
Proceeding to understand system architecture...


---

### PHASE 2: Understanding System Architecture (Oracle Phase)

**Objective:** Establish ground truth about how the system works and where new code should integrate

**Actions:**

Send these three CoreStory queries in sequence. After each, summarize the key insights for the user.

**Query 1: Architecture Discovery**

```
Use CoreStory MCP: send_message
Conversation: [conversation-id]
Message: "What files are responsible for [feature area based on ticket]? I need to understand:

  1. Primary implementation files and their responsibilities
  2. Existing test patterns and coverage
  3. Helper/utility modules I can reuse
  4. Integration points with other components"
```

Parse response for:
- Core implementation files (e.g., user_service.py, export_controller.py)
- Existing similar features (e.g., pdf_export.py)
- Utility modules (e.g., serializers.py, formatters.py)
- Integration patterns (e.g., REST endpoints, background jobs)

**Query 2: Design Patterns & Conventions**

```
Use CoreStory MCP: send_message
Message: "What architectural patterns are used for [feature type]? Specifically:

  1. How are similar features structured?
  2. What design patterns should I follow?
  3. What naming conventions apply?
  4. What invariants must I maintain?"
```

Parse response for:
- Class/module structure patterns
- Naming conventions (e.g., {Format}ExportService)
- Async/sync patterns
- Error handling patterns
- Testing patterns
- **Invariants** (e.g., "all exports require authentication") - CRITICAL

**Query 3: Historical Context**

```
Use CoreStory MCP: send_message
Message: "Have there been similar features implemented recently? What was the design intent? Are there related user stories or tickets I should reference?"
```


Parse response for:
- Related PR numbers and discussions
- Design rationale documentation
- Similar past implementations
- Known gotchas or lessons learned

**Output to user:**

📚 System Architecture Analysis Complete

Relevant Files:

Architectural Patterns:

Reusable Components:

Design Context:

Proceeding to implementation planning...


---

### PHASE 3: Implementation Planning (Navigator Phase)

**Objective:** Map feature requirements to specific code locations and implementation strategy

**Actions:**

Send these three CoreStory queries and build the implementation plan (a sketch for capturing the findings follows Query 3).

**Query 1: Identify Extension Points**

Use CoreStory MCP: send_message Message: "Where should I implement [feature]? What are the extension points? Walk me through the implementation locations step by step."


Parse response for:
- Files to create (with paths)
- Files to modify (with specific sections)
- Base classes to extend
- Test file locations

**Query 2: Data Structure Analysis**

Use CoreStory MCP: send_message Message: "What data structures should I use for [feature]? What models/schemas are involved? What are the relationships and dependencies?"


Parse response for:
- Primary models/entities
- Required fields
- Relationships to other entities
- Serialization requirements

**Query 3: Reference Implementations**

Use CoreStory MCP: send_message Message: "What existing features are most similar to [new feature]? Can I reuse code? What patterns should I copy?"


Parse response for:
- Reference implementations
- Reusable code patterns
- Copy-paste opportunities
- Anti-patterns to avoid
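
Before writing the plan output below, it can help to collect the answers from all three queries in one structure. A minimal sketch using a dataclass; the field names are illustrative assumptions, not CoreStory output fields:

```python
from dataclasses import dataclass, field

@dataclass
class ImplementationPlan:
    """Phase 3 findings, gathered before any code is written."""
    files_to_create: list[str] = field(default_factory=list)
    files_to_modify: list[str] = field(default_factory=list)
    models_involved: list[str] = field(default_factory=list)
    reference_feature: str = ""  # most similar existing feature
    patterns_to_copy: list[str] = field(default_factory=list)
    anti_patterns: list[str] = field(default_factory=list)
```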

**Output to user:**

🎯 Implementation Plan Complete

Files to Create:

Files to Modify:

Data Structures:

Reference Pattern: [existing similar feature]

Proceeding to test-first implementation...


---

### PHASE 4: Test-First Implementation

**Objective:** Write failing tests that define the feature, then implement to make them pass

**CRITICAL:** Tests come BEFORE code. This is non-negotiable.

**Actions:**

**Step 1: Write Acceptance Tests**

Based on:
- Acceptance criteria from Phase 1
- Architecture patterns from Phase 2
- Data structures from Phase 3

Create test file or add to existing test file:

```python
def test_[ticket_id]_[descriptive_name]():
    """Test [feature description].
    
    Ticket: [ticket-id] - [one-line description]
    
    Acceptance Criteria:
    - [criterion 1]
    - [criterion 2]
    """
    # Setup: [create test scenario from user story]
    [setup code]
    
    # Action: [perform feature operation]
    [operation code]
    
    # Assert: [test acceptance criteria]
    # These will FAIL until we implement the feature
    assert [primary assertion]
    assert [secondary assertion]
    [additional assertions]
```

**Step 2: Write Unit Tests**

Create unit tests for individual components:

```python
def test_[component]_[specific_behavior]():
    """Test specific component behavior."""
    component = Component()
    result = component.method(input_data)
    assert result == expected_output
```

**Step 3: Verify Tests Fail**

```
Run the test suite for these specific tests
Expect: FAILED
```

If the tests pass → the feature already exists or the tests are wrong. Ask CoreStory for clarification.
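
If the project uses pytest, this verification can be scripted rather than checked by hand. A minimal sketch, assuming pytest and a hypothetical test file path:

```python
import pytest

# Run only the new tests; pytest exit code 1 means tests ran and failed,
# which is the expected outcome at this stage.
exit_code = pytest.main(["tests/test_new_feature.py", "-k", "ticket_6992"])
assert exit_code == 1, "New tests passed unexpectedly - feature may already exist"
```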

**Output to user:**

✅ Tests written and verified to fail

**Test Suite:**
- test_[primary_test]: FAILED (expected)
- test_[edge_case_1]: FAILED (expected)
- test_[unit_test_1]: FAILED (expected)

This confirms the feature is not yet implemented.

**Step 4: Validate Tests with CoreStory**

```
Use CoreStory MCP: send_message
Message: "I've written these tests for the [feature]:

[paste relevant test code]

Do these correctly test the acceptance criteria according to the system design?
Are there edge cases I'm missing? Do they follow the testing patterns from
[reference feature tests]?"
```

If CoreStory suggests improvements, update tests.

**Step 5: Implement Feature**

NOW (and only now) write the implementation code.

Follow patterns from Phase 2:

Create new files:

```python
# [new_service_file.py]

class [FeatureName]Service:
    """Service for [feature description].
    
    Follows the [pattern] established in [reference implementation].
    Ticket: [ticket-id]
    """
    
    def [main_method](self, params):
        """[Method description].
        
        Args:
            params: [description]
            
        Returns:
            [description]
            
        Invariants:
        - [invariant 1]
        - [invariant 2]
        """
        # Implementation following established pattern
        pass
```

Modify existing files:

```python
# Add to [existing_controller.py]

def [new_endpoint](self):
    """[Endpoint description].
    
    Ticket: [ticket-id]
    """
    # Implementation
    pass
```

**Step 6: Verify Tests Pass**

```
Run the tests from Steps 1-2
Expect: PASSED
```

If they still fail → the implementation is incomplete or wrong. Debug and fix.

**Output to user:**

✅ Feature implemented - all tests now pass!

**Implementation:**
- Created: [list new files]
- Modified: [list modified files]
- Pattern followed: [reference pattern]

**Tests Passing:**
- test_[primary_test]: PASSED ✅
- test_[edge_case_1]: PASSED ✅
- test_[unit_test_1]: PASSED ✅

**Step 7: Validate Implementation with CoreStory**

```
Use CoreStory MCP: send_message
Message: "I've implemented [feature] with this structure:

[paste code structure overview or key snippets]

Does this align with the system architecture? Does it follow the [reference pattern]?
Could it have unintended side effects on [related systems]?"
```

If CoreStory raises concerns, address them.


---

### PHASE 5: Feature Completion

**Objective:** Add edge case coverage, ensure quality, prevent regressions

**Actions:**

**Step 1: Identify Edge Cases**

```
Use CoreStory MCP: send_message
Message: "My basic [feature] tests pass. What edge cases should I test?
What scenarios might break in production? Are there [performance/security/i18n]
concerns I should validate?"
```

Based on the response, add edge case tests:
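
For example, an empty-input edge case might look like the following. The `export_csv` stand-in exists only to make the sketch self-contained; all names are illustrative, not from the codebase:

```python
import csv
import io

def export_csv(rows):
    """Stand-in for the real feature, used only to illustrate the test shape."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def test_csv_export_handles_empty_dataset():
    """Exporting zero rows should still produce a valid, header-only CSV."""
    result = export_csv(rows=[])
    assert result.strip() == "id,name"  # header row present, no data rows
```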

**Step 2: Run Full Test Suite**

```
Run the complete test suite for the module/component
```

Ensure:

- All new tests pass
- No existing tests break

If regressions are found → the implementation introduced side effects. Revise the approach.

**Output to user:**

🧪 Comprehensive Testing Complete

**Tests Added:** [count]
**Edge Cases Covered:**
- [list edge cases tested]

**Full Test Suite:** ✅ All passing (no regressions)
**Code Coverage:** [percentage]%

**Step 3: Performance Validation (if applicable)**

If feature has performance requirements:

```python
import time

def test_[feature]_performance():
    """Feature meets performance requirements."""
    start = time.time()
    result = feature.execute(test_data)
    duration = time.time() - start
    
    assert duration < performance_threshold
```

**Step 4: Security Validation (if applicable)**

If feature handles sensitive data or auth:

```python
def test_[feature]_security():
    """Feature enforces security requirements."""
    unauthorized_user = create_user(role='guest')
    
    response = client.get('/feature', headers=auth(unauthorized_user))
    assert response.status_code == 403
```

---

### PHASE 6: Completion & Knowledge Capture

**Objective:** Close the loop, preserve knowledge, document thoroughly

**Actions:**

**Step 1: Update Ticket (if integrated)**

If ticketing MCP available:

```
Use [GitHub/Jira/ADO/Linear] MCP: update_issue
Issue: [ticket-id]
Comment: "**Implementation Summary:**

Feature: [description]

**Implementation:**
- Created: [list new files]
- Modified: [list modified files]
- Pattern: [architectural pattern followed]

**Testing:**
- [count] comprehensive tests covering acceptance criteria and edge cases
- All tests passing
- No regressions in existing test suite
- Code coverage: [percentage]%

**Technical Details:**
- Follows [reference pattern] for consistency
- [Any notable implementation decisions]

**Files Changed:**
[list all files created/modified]

Ready for review and merge.

**Commit:** [will be added after commit]"

Status: Ready for Review
```

**Step 2: Commit with Rich Context**

```
git add [files]
git commit -m "$(cat <<'EOF'
Feat: [Brief one-line description of feature] (#[ticket-id])

**Feature:**
[User story or feature description]

**Implementation:**
- [Key implementation detail 1]
- [Key implementation detail 2]
- [Key implementation detail 3]

**Architecture Alignment:**
- Follows [reference pattern]
- Extends [base class] for consistency
- Uses [design pattern] as established in [reference]

**Testing:**
- [count] tests covering acceptance criteria:
  * [test category 1]
  * [test category 2]
  * [test category 3]
- [count] edge case tests:
  * [edge case 1]
  * [edge case 2]
- All tests passing, no regressions
- Code coverage: [percentage]%

**Edge Cases Handled:**
- [edge case 1 description]
- [edge case 2 description]

**References:**
- Ticket: [ticket-id]
- CoreStory Investigation: [conversation-id]
- Design Pattern: [reference file or pattern]

🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```

**Step 3: Rename CoreStory Conversation**

```
Use CoreStory MCP: rename_conversation
Conversation: [conversation-id]
New Title: "Ticket Implementation: #[ID] - COMPLETED - [brief description]"
```

This marks the conversation as completed and preserves it for future reference.

**Step 4: Final Report to User**

**Output to user:**

✅ Ticket #[ID] COMPLETED

**Summary:**
- Feature: [one-line description]
- Implementation: [architectural approach]
- Time to Implementation: [duration if trackable]
- Tests Added: [count]
- Code Coverage: [percentage]%

**Commit:** [commit hash if available]
**CoreStory Investigation:** [conversation-id] (preserved for future reference)

**Quality Metrics:**
- Architecture Alignment: ✅ Follows [pattern]
- Test Coverage: ✅ Comprehensive (acceptance criteria + edge cases)
- Regressions: None
- Performance: [✅ Meets requirements / N/A]
- Security: [✅ Validates auth/authz / N/A]
- Documentation: ✅ Complete commit message with context

**Files Created:**
[list]

**Files Modified:**
[list]

The implementation is ready for code review and merge.

---

## Advanced Patterns

### If Feature Has Security Implications

Automatically check:

```
Use CoreStory MCP: send_message
Message: "What security considerations apply to [feature]? Are there security
requirements I should verify? What authentication/authorization patterns should
I use?"
```

Include security validation in tests (see the security test example in Phase 5, Step 4).

### If Feature Affects a Shared Component

Check integration impact:

```
Use CoreStory MCP: send_message
Message: "What other systems or components integrate with [feature]? What
downstream impacts should I consider if I add [behavior]?"
```

Add integration tests for dependent components.

### If Feature Requires Data Migration

Plan migration carefully:

```
Use CoreStory MCP: send_message
Message: "What's the current data structure for [entity]? How much data exists?
What constraints must be maintained during migration?"
```

Implement migration with validation and rollback.
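
One hedged shape for such a migration, assuming a SQLite database; the `users` table and the new column are illustrative:

```python
import sqlite3

def migrate_add_export_format(db_path: str) -> None:
    """Add a column inside a transaction, validating row count before commit."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commits on success, rolls back automatically on exception
            before = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
            conn.execute(
                "ALTER TABLE users ADD COLUMN export_format TEXT DEFAULT 'pdf'"
            )
            after = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
            if before != after:
                raise RuntimeError("row count changed during migration")
    finally:
        conn.close()
```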

### If Performance Requirements Exist

Ask about performance expectations:

```
Use CoreStory MCP: send_message
Message: "What are the performance requirements for [feature type]? What's the
expected latency/throughput? Are there performance optimizations I should use?"
```

Add performance regression tests (see the performance test example in Phase 5, Step 3).

### If Feature Needs Gradual Rollout

Understand feature flag patterns:

```
Use CoreStory MCP: send_message
Message: "How are feature flags implemented? What's the pattern for gradual
rollouts? How do I ensure backward compatibility?"
```

Implement with feature flag protection.
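
A minimal sketch of flag-gating, assuming a simple in-process flag store; replace it with whatever flag system CoreStory reports the project actually uses:

```python
class FeatureDisabledError(Exception):
    """Raised when a flagged-off feature is invoked."""

FLAGS = {"csv_export": False}  # hypothetical flag store

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def export_csv(data):
    if not is_enabled("csv_export"):
        raise FeatureDisabledError("csv_export is not enabled")
    # ...feature implementation behind the flag...
```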

### If Multiple Related Tickets

Cross-reference tickets:

```
Use CoreStory MCP: send_message
Message: "I'm implementing [ticket A], [ticket B], and [ticket C] which seem related.
Are there common patterns? Could they share implementation?"
```

Consider a unified implementation if appropriate.


---

## Error Handling

### If CoreStory Project Not Found

Stop and ask the user to create and ingest a CoreStory project first (see Phase 1).

### If Tests Won't Fail

The feature may already exist, or the tests are wrong. Ask CoreStory for clarification (see Phase 4, Step 3).

### If Implementation Causes Regressions

The implementation introduced side effects. Revise the approach (see Phase 5, Step 2).

### If CoreStory Response Unclear

Rephrase the question with more specific file or feature names and ask again in the same conversation.


---

## Success Criteria

This skill successfully resolves a ticket when:

  1. ✅ Feature fully implements acceptance criteria
  2. ✅ Implementation aligns with system architecture (validated by CoreStory)
  3. ✅ All architectural patterns and invariants followed
  4. ✅ Comprehensive test coverage (acceptance + edge cases)
  5. ✅ All tests pass
  6. ✅ No regressions in existing test suite
  7. ✅ Commit message documents full context
  8. ✅ CoreStory conversation preserved
  9. ✅ Ticket updated (if applicable)

---

## Skill Deactivation

This skill should NOT be used for:

In these cases, defer to standard development workflow.