CodeLogic Official Server
Quality Checklist
Latest release: v1.0.0
- Disambiguation 5/5
The two tools have clearly distinct purposes: one analyzes impacts between code and database entities, while the other analyzes impacts of modifying specific methods within classes or types. There is no overlap or ambiguity in their functions, making it easy for an agent to select the appropriate tool.
- Naming Consistency 5/5
Both tools follow a consistent naming pattern: 'codelogic-' prefix followed by a descriptive hyphenated phrase (database-impact, method-impact). This uniformity makes the tool set predictable and easy to understand, with no deviations in style.
- Tool Count 2/5
With only 2 tools, the server feels thin for a domain like code impact analysis. While the tools cover specific aspects (database and method impacts), the scope suggests potential gaps (e.g., class-level impacts, dependency analysis) that might require more tools for comprehensive coverage.
- Completeness 2/5
The tool set is severely incomplete for code impact analysis. It lacks tools for broader analyses (e.g., class impacts, dependency graphs) and CRUD operations (e.g., creating or updating impact data), leaving significant gaps that could cause agent failures in complex scenarios.
Average 3.8/5 across 2 of 2 tools scored.
See the Tool Scores section below for per-tool breakdowns.
- No issues in the last 6 months
- No commit activity data available
- Last stable release on
- No critical vulnerability alerts
- No high-severity vulnerability alerts
- No code scanning findings
- CI is passing
Add a LICENSE file by following GitHub's guide. Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear after some time, you can manually trigger a new scan using the MCP server admin interface.
MCP servers without a LICENSE cannot be installed.
This repository includes a README.md file.
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
This server has been verified by its author.
Add related servers to improve discoverability.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
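To make the weighting concrete, here is a minimal sketch of the calculation described above, written in Python. The weights come from the text; the per-tool and coherence scores at the bottom are hypothetical placeholders, and the real scoring pipeline may differ in rounding and implementation details.

# Sketch of the scoring scheme described above. Weights are from the text;
# the example scores below are hypothetical, not taken from a real report.
TDQ_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores):
    # Weighted 1-5 Tool Definition Quality Score for a single tool.
    return sum(TDQ_WEIGHTS[dim] * scores[dim] for dim in TDQ_WEIGHTS)

def overall_score(tool_scores, coherence_scores):
    tdqs = [tool_tdqs(t) for t in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    coherence = sum(coherence_scores) / len(coherence_scores)  # four equal-weight dimensions
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"

# Hypothetical example: two tools plus the four coherence dimensions.
tools = [
    {"purpose": 4, "usage": 4, "behavior": 3, "parameters": 3, "conciseness": 5, "completeness": 3},
    {"purpose": 4, "usage": 5, "behavior": 3, "parameters": 3, "conciseness": 5, "completeness": 3},
]
score = overall_score(tools, coherence_scores=[5, 5, 2, 2])
print(f"overall={score:.2f}, tier={tier(score)}")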
Tool Scores
codelogic-database-impact
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's purpose and workflow but doesn't cover critical behavioral aspects like whether it's read-only or mutative, authentication requirements, rate limits, or error handling. The description adds some context but leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with a clear purpose statement followed by a bulleted workflow and specific use cases. Every sentence adds value without redundancy, making it easy to scan and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (impact analysis with 3 parameters) and lack of annotations or output schema, the description is moderately complete. It explains the purpose and usage but doesn't cover behavioral traits, return values, or error conditions, which are important for a tool of this nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema, such as examples or constraints. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as analyzing impacts between code and database entities, which is specific and actionable. However, it doesn't explicitly differentiate from its sibling tool 'codelogic-method-impact', leaving some ambiguity about their distinct roles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage guidelines with a recommended workflow and specific contexts (before implementing changes, when AI-suggested modifications are considered, or when modifying SQL code). It lacks explicit exclusions or comparisons to the sibling tool, which would elevate it to a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
codelogic-method-impact
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the tool's function as an impact analysis for code modifications, which implies it's a read-only analysis tool (not a mutation tool). However, it doesn't specify behavioral traits like whether it requires specific permissions, how it performs the analysis (e.g., static vs. dynamic), what the output format is, or any rate limits. The description adds some context but lacks detailed behavioral information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It starts with a clear purpose statement, followed by a bullet-point workflow and a concluding note. Every sentence adds value: the first defines the tool, the workflow provides actionable guidance, and the last emphasizes importance in AI contexts. There's no redundant or wasted text, making it efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (impact analysis for code changes), no annotations, and no output schema, the description is moderately complete. It covers purpose and usage well but lacks details on behavioral aspects (e.g., how analysis is performed, output format) and doesn't leverage structured fields. For a tool with no annotations or output schema, it should provide more context about what the analysis entails and what results to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with clear descriptions for both parameters ('class' and 'method'). The description doesn't add any semantic details beyond what the schema provides—it doesn't explain parameter formats, constraints, or examples. Given the high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't need to given the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Analyze impacts of modifying a specific method within a given class or type.' It specifies the verb ('analyze impacts') and resource ('specific method within a given class or type'), making the purpose unambiguous. However, it doesn't explicitly differentiate from its sibling tool 'codelogic-database-impact' beyond the domain difference implied by the names.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines in a recommended workflow format: 'Use this tool before implementing code changes,' 'Run the tool against methods or functions that are being modified,' and 'Particularly crucial when AI-suggested modifications are being considered.' This clearly indicates when to use the tool and provides context for its application, though it doesn't explicitly mention when not to use it or compare to the sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/CodeLogicIncEngineering/codelogic-mcp-server'
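The same endpoint can be queried from any HTTP client. The minimal sketch below, using only Python's standard library, fetches this server's record and prints the raw JSON; it assumes the endpoint returns JSON but makes no assumptions about specific response fields.

# Minimal sketch: fetch this server's record from the Glama MCP directory API.
# Assumes a JSON response; no particular response fields are assumed.
import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/CodeLogicIncEngineering/codelogic-mcp-server"

with urllib.request.urlopen(URL) as response:
    data = json.load(response)

print(json.dumps(data, indent=2))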
If you have feedback or need assistance with the MCP directory API, please join our Discord server.