# Completeness Evaluation Implementation
## Overview
The Completeness Evaluation component assesses how thoroughly a Jira issue is documented and whether it contains all the necessary information for implementation. It produces a score on a 1-10 scale and identifies key missing information that could impact successful delivery.
## Implementation Approach
### Function Signature
The function should follow the pattern established by other analysis components:
```typescript
function getCompletenessEvaluation(issue: JiraIssue, commentsResponse: IssueCommentResponse): CompletionEvaluation {
  // Implementation details
}
```
Where `CompletionEvaluation` matches the interface defined in the analysis framework:
```typescript
interface CompletionEvaluation {
  score: number; // 1-10 scale
  missingInformation: string[];
  suggestions: string[];
}
```
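For illustration, a result for a partially documented issue might look like this (the values are hypothetical):
```typescript
// Hypothetical output for an issue lacking testing and design details.
const example: CompletionEvaluation = {
  score: 6.5,
  missingInformation: ['Testing requirements', 'Design specifications'],
  suggestions: [
    'Document key test scenarios for QA',
    'Attach wireframes or design mockups',
  ],
};
```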
### Core Evaluation Logic
#### Information Categories to Check
The function should check for the presence and quality of these critical information categories:
1. **Acceptance Criteria**
- Check if the issue contains explicit acceptance criteria
- Evaluate if criteria are measurable and specific
2. **Technical Constraints**
- Identify mentions of technical limitations
- Check for environment requirements
- Look for performance expectations
3. **Dependencies**
- Check for explicit linked issues
- Scan description and comments for mention of dependencies
- Verify if dependency relationships are clearly defined
4. **Testing Requirements**
- Check for specific testing scenarios
- Look for edge cases that should be tested
- Verify presence of validation criteria
5. **Design Specifications**
- Check for attached mockups, wireframes, or design links
- Verify UI/UX requirements are specified if relevant
- Look for references to design systems or patterns
6. **User Impact Considerations**
- Check if the issue describes affected user groups
- Verify if accessibility requirements are considered
- Look for descriptions of expected user workflows
#### Implementation Details
1. **Text Analysis**
- Parse issue description, summary, and custom fields (like acceptance criteria fields)
- Use regular expressions and keyword detection to identify information categories
- Implement detection for common formats of acceptance criteria (Given/When/Then, bullet lists, etc.); a checker sketch follows this list
2. **Attachment Scanning**
- Check for relevant attachments (designs, diagrams, documents)
- Verify if attachments are referenced in the description
3. **Comment Analysis**
- Scan comment threads for additional information that complements the description
- Detect clarification questions that might indicate missing information
- Check if questions were answered with additional information
4. **Issue Links**
- Analyze linked issues to understand dependencies
- Check if epic-level information might fill gaps in the current issue
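As a concrete illustration of a category checker, here is a minimal sketch of acceptance-criteria detection. The `CategoryCheckResult` shape, the regular expressions, and the marker string are assumptions for illustration rather than part of the framework:
```typescript
// Assumed result shape shared by all category checkers; the real
// framework may define its own.
interface CategoryCheckResult {
  present: boolean;   // the category was found at all
  vague: boolean;     // found, but not specific or measurable
  missing: string[];  // marker strings for the missingInformation list
}

// Hypothetical checker: detects Given/When/Then phrasing, an
// "Acceptance Criteria" heading, and structured bullet/numbered lists.
function checkAcceptanceCriteria(allText: string, issue: JiraIssue): CategoryCheckResult {
  const gherkin = /\bgiven\b[\s\S]{0,300}?\bwhen\b[\s\S]{0,300}?\bthen\b/i;
  const heading = /acceptance criteria/i;
  const bulletList = /^\s*(?:[-*]|\d+\.)\s+\S/m;

  const hasGherkin = gherkin.test(allText);
  const present = hasGherkin || heading.test(allText);
  // A bare heading with no structured list beneath it counts as vague.
  const vague = present && !hasGherkin && !bulletList.test(allText);

  // `issue` is kept in the signature so checkers can also inspect
  // custom fields (e.g., a dedicated acceptance-criteria field).
  return {
    present,
    vague,
    missing: present ? [] : ['Acceptance criteria'],
  };
}
```
The other checkers (`checkTechnicalConstraints`, `checkDependencies`, etc.) would follow the same shape with category-specific keyword sets.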
### Scoring Algorithm
The scoring algorithm should:
1. Start with a base score of 10
2. Subtract points for each missing critical information category
3. Apply weighted deductions based on issue type (e.g., missing design specs might be more critical for UI tasks)
4. Consider partially present information (e.g., acceptance criteria that exist but are not comprehensive)
5. Factor in the quality and specificity of the information provided
Suggested scoring weights, with the final score clamped to the 1-10 scale since the deductions can otherwise sum to 10 (a scoring sketch follows the list):
- Acceptance Criteria: -2 points if missing, -1 if present but vague
- Technical Constraints: -1.5 points if missing
- Dependencies: -1.5 points if missing or unclear
- Testing Requirements: -1.5 points if missing
- Design Specifications: -2 points if missing (for UI/UX tasks), -1 for backend tasks
- User Impact: -1.5 points if missing
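A minimal sketch of this algorithm under the weights above. The half-weight deduction for vague non-criteria categories, the UI-task heuristic, and the clamp to the 1-10 scale are assumptions to be tuned per team:
```typescript
function calculateCompletenessScore(
  acceptanceCriteria: CategoryCheckResult,
  technicalConstraints: CategoryCheckResult,
  dependencies: CategoryCheckResult,
  testingRequirements: CategoryCheckResult,
  designSpecifications: CategoryCheckResult,
  userImpact: CategoryCheckResult,
  issueTypeName: string
): number {
  let score = 10;

  // Deduct the full weight when a category is absent, half (by default)
  // when it is present but vague.
  const deduct = (result: CategoryCheckResult, ifMissing: number, ifVague = ifMissing / 2): void => {
    if (!result.present) score -= ifMissing;
    else if (result.vague) score -= ifVague;
  };

  deduct(acceptanceCriteria, 2, 1); // -2 missing, -1 vague
  deduct(technicalConstraints, 1.5);
  deduct(dependencies, 1.5);
  deduct(testingRequirements, 1.5);
  deduct(userImpact, 1.5);

  // Design specifications weigh more for UI-facing work (crude heuristic).
  const isUiTask = /story|ui|design|front[- ]?end/i.test(issueTypeName);
  deduct(designSpecifications, isUiTask ? 2 : 1);

  // With every category missing the raw score reaches 0, so clamp
  // to the documented 1-10 scale.
  return Math.max(1, score);
}
```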
### Generating Suggestions
For each identified gap, the function should generate specific, actionable suggestions (a lookup-based sketch follows these examples):
1. For missing acceptance criteria:
- "Add specific acceptance criteria using Given/When/Then format"
- "Define measurable outcomes for each feature aspect"
2. For missing technical constraints:
- "Specify performance requirements (e.g., expected response times)"
- "Document browser/device compatibility requirements"
3. For unclear dependencies:
- "Formally link blocking issues in Jira"
- "Specify external systems this implementation interacts with"
4. For missing testing requirements:
- "Document key test scenarios for QA"
- "Identify edge cases that require specific testing"
5. For missing design specifications:
- "Attach wireframes or design mockups"
- "Link to relevant design system components"
6. For missing user impact information:
- "Describe user workflows affected by this change"
- "Specify accessibility requirements"
## Processing Flow
1. Extract all relevant text content from the issue (description, comments, custom fields); see the extraction sketch after this list
2. Process each information category with specialized checkers
3. Accumulate missing information markers
4. Calculate the completeness score
5. Generate appropriate suggestions for each missing element
6. Format results according to the `CompletionEvaluation` interface
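A sketch of the extraction step, assuming descriptions, comment bodies, and custom-field values are plain strings (Jira Cloud may return Atlassian Document Format objects that need flattening first):
```typescript
// Hypothetical extractor: concatenates every free-text source on the
// issue into one searchable string for the category checkers.
function extractAllRelevantText(issue: JiraIssue, commentsResponse: IssueCommentResponse): string {
  const parts: string[] = [issue.fields.summary ?? '', issue.fields.description ?? ''];

  // String-valued custom fields (e.g., a dedicated acceptance-criteria
  // field); field ids vary per Jira instance.
  for (const [key, value] of Object.entries(issue.fields as Record<string, unknown>)) {
    if (key.startsWith('customfield_') && typeof value === 'string') {
      parts.push(value);
    }
  }

  for (const comment of commentsResponse.comments ?? []) {
    if (typeof comment.body === 'string') parts.push(comment.body);
  }

  return parts.filter(Boolean).join('\n');
}
```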
## Testability Considerations
The implementation should be designed for testability:
1. **Modular Checkers**: Implement each information category checker as a separate function to allow isolated testing
2. **Deterministic Scoring**: Ensure the scoring algorithm produces consistent results for the same input
3. **Mock Test Cases**: Create sample issues with various levels of completeness for testing (see the example test after this list)
4. **Edge Cases**: Handle edge cases like empty descriptions or non-standard issue types
5. **Regression Tests**: Include tests for issues that follow different documentation patterns
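For example, a unit test over a minimal fixture (Jest-style assertions assumed; the fixture covers only the fields the evaluator reads, hence the cast):
```typescript
describe('getCompletenessEvaluation', () => {
  it('flags an essentially empty issue as incomplete', () => {
    const emptyIssue = {
      key: 'PROJ-1',
      fields: { summary: 'Do the thing', description: '', issuetype: { name: 'Task' } },
    } as unknown as JiraIssue;
    const noComments = { comments: [] } as unknown as IssueCommentResponse;

    const result = getCompletenessEvaluation(emptyIssue, noComments);

    expect(result.score).toBeLessThanOrEqual(3);
    // Marker string assumed to match the acceptance-criteria checker.
    expect(result.missingInformation).toContain('Acceptance criteria');
  });

  it('produces identical results for identical input', () => {
    const issue = {
      key: 'PROJ-2',
      fields: { summary: 'x', description: 'x', issuetype: { name: 'Task' } },
    } as unknown as JiraIssue;
    const comments = { comments: [] } as unknown as IssueCommentResponse;

    expect(getCompletenessEvaluation(issue, comments)).toEqual(getCompletenessEvaluation(issue, comments));
  });
});
```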
## Extensibility
The implementation should allow for future extensions:
1. **Configurable Weights**: Allow score weights to be adjusted based on team/project preferences (see the config sketch after this list)
2. **Custom Checkers**: Support adding project-specific information category checkers
3. **Integration Points**: Allow for integration with custom fields or external documentation systems
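One way to expose these extension points without changing the core flow is a configuration object, sketched here (all names are assumptions):
```typescript
// Hypothetical configuration surface: teams override deduction weights
// and register extra project-specific checkers.
interface CompletenessConfig {
  weights: Record<string, { missing: number; vague?: number }>;
  customCheckers: Array<(allText: string, issue: JiraIssue) => CategoryCheckResult>;
}

const defaultConfig: CompletenessConfig = {
  weights: {
    acceptanceCriteria: { missing: 2, vague: 1 },
    designSpecifications: { missing: 2 }, // applied to UI-facing work
  },
  customCheckers: [],
};

// Example override: a team that treats testing gaps as blocking.
const strictQaConfig: CompletenessConfig = {
  ...defaultConfig,
  weights: { ...defaultConfig.weights, testingRequirements: { missing: 3 } },
};
```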
## Example Implementation Structure
```typescript
function getCompletenessEvaluation(issue: JiraIssue, commentsResponse: IssueCommentResponse): CompletionEvaluation {
  const allText = extractAllRelevantText(issue, commentsResponse);

  // Check for each information category
  const acceptanceCriteriaResult = checkAcceptanceCriteria(allText, issue);
  const technicalConstraintsResult = checkTechnicalConstraints(allText, issue);
  const dependenciesResult = checkDependencies(allText, issue);
  const testingRequirementsResult = checkTestingRequirements(allText, issue);
  const designSpecificationsResult = checkDesignSpecifications(allText, issue);
  const userImpactResult = checkUserImpact(allText, issue);

  // Combine missing information
  const missingInformation = [
    ...acceptanceCriteriaResult.missing,
    ...technicalConstraintsResult.missing,
    ...dependenciesResult.missing,
    ...testingRequirementsResult.missing,
    ...designSpecificationsResult.missing,
    ...userImpactResult.missing,
  ];

  // Generate suggestions
  const suggestions = generateSuggestions(missingInformation, issue.fields.issuetype.name);

  // Calculate score
  const score = calculateCompletenessScore(
    acceptanceCriteriaResult,
    technicalConstraintsResult,
    dependenciesResult,
    testingRequirementsResult,
    designSpecificationsResult,
    userImpactResult,
    issue.fields.issuetype.name
  );

  return {
    score,
    missingInformation,
    suggestions,
  };
}
```
## Integration
The completeness evaluation should be integrated into the main `analyzeIssue` function:
```typescript
export function analyzeIssue(issue: JiraIssue, commentsResponse: IssueCommentResponse): Partial<IssueAnalysisResult> {
  const metadataAssessment = getMetadataAssessment(issue, commentsResponse);
  const complexityAnalysis = getComplexityAnalysis(issue, commentsResponse);
  const completenessEvaluation = getCompletenessEvaluation(issue, commentsResponse);

  return {
    issueKey: issue.key,
    summary: issue.fields.summary,
    issueType: issue.fields.issuetype.name,
    metadata: metadataAssessment,
    complexity: complexityAnalysis,
    completeness: completenessEvaluation,
    // Other analysis components would be added here
  };
}
```
## Conclusion
The Completeness Evaluation component provides critical insights into issue documentation quality. By systematically checking for key information categories and generating an objective score with actionable suggestions, it helps teams improve their issue refinement process and reduce implementation ambiguities.
This approach balances comprehensiveness with simplicity, focusing on the core elements that contribute to issue completeness while maintaining a modular, testable design.