FreightUtils MCP Server
Server Quality Checklist
- Disambiguation 4/5 — Tools are well-differentiated by transport mode and function (road vs sea vs air, lookup vs calculation). Minor potential confusion exists between unit_converter and the specialized calculators (cbm_calculator, chargeable_weight_calculator) since the converter handles freight-specific conversions, but the descriptions clarify that the calculators provide full determination logic while the converter handles simple unit math.
- Naming Consistency 4/5 — All tools use consistent snake_case formatting with functional suffixes (_calculator, _lookup, _converter) indicating the operation type. The domain prefixes are descriptive and follow a clear [domain]_[action] pattern, though the mix of abbreviations (cbm, ldm, adr, hs) and full words is slightly inconsistent but still readable.
- Tool Count 5/5 — Eleven tools is an appropriate count for a freight utilities server, covering the full multimodal lifecycle (road LDM, sea containers/CBM, air chargeable weight, pallets, hazmat, customs, trade terms) without bloat. Each tool earns its place with distinct functionality.
- Completeness 4/5 — Strong coverage for core freight calculations and references across ADR dangerous goods, Incoterms, HS codes, and modal loading. Minor gaps include the lack of port/airport code lookups and customs duty calculators, but the surface supports the primary utility workflows effectively.
Average 4.3/5 across 11 of 11 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 11 tools.
No known security issues or vulnerabilities reported.
Are you the author? Add related servers to improve discoverability.
Tool Scores

Each tool is scored 1–5 across six dimensions:
- Behavior — does the description disclose side effects, auth requirements, rate limits, or destructive behavior? Agents need to know what a tool does to the world before calling it; descriptions should go beyond structured annotations to explain consequences.
- Conciseness — is the description appropriately sized, front-loaded, and free of redundancy? Shorter descriptions cost fewer tokens and are easier for agents to parse; every sentence should earn its place.
- Completeness — given the tool's complexity, does the description cover enough for an agent to succeed on the first attempt? Complex tools with many parameters or behaviors need more documentation; simple tools need less.
- Parameters — does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides? Input schemas describe structure but not intent; descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose — does the description clearly state what the tool does and how it differs from similar tools? Agents choose between tools based on descriptions; a clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines — does the description explain when to use this tool, when not to, or what alternatives exist? Explicit guidance like "use X instead of Y when Z" prevents misuse.

Incoterms lookup
- Behavior 3/5 — With no annotations, the description provides good domain context (what Incoterms define) but lacks tool-specific behavior details like output format, error handling for invalid codes, or rate limits.
- Conciseness 4/5 — Well-structured with a front-loaded purpose, logical flow, and bulleted use cases; listing all 11 codes is justified as input-validation guidance.
- Completeness 3/5 — Adequate given the simple schema, but lacks any description of return values/output structure (no output schema provided) or authentication requirements.
- Parameters 4/5 — Adds valuable context beyond 100% schema coverage by listing the specific valid codes (EXW, FCA, etc.) and mapping them to category values (any_mode vs sea_only).
- Purpose 5/5 — Clearly states that the tool looks up 'Incoterms 2020 trade rules' and distinguishes itself from its logistics-calculator siblings by defining a unique domain (risk/cost transfer rules).
- Usage Guidelines 4/5 — Provides explicit 'Use this tool when' scenarios (explain meaning, compare responsibilities, check transport modes), though it lacks explicit 'when not to use' guidance or sibling comparisons.
Loading metre (LDM) calculator
- Behavior 3/5 — Discloses the calculation formula and European trailer standards, but fails to explain how stack_factor is derived from the stack_height parameter or what the tool returns (LDM value vs utilisation percentage).
- Conciseness 5/5 — Perfectly structured with the definition first, then formula, usage scenarios, and capability summary; no redundant sentences.
- Completeness 3/5 — Adequate for basic invocation, but lacks an output specification (critical given no output_schema) and clarification of how stackable/stack_height affect the LDM calculation via stack_factor.
- Parameters 4/5 — Adds value beyond 100% schema coverage by grouping parameters (presets vs custom) and clarifying vehicle dimensions (artic = 13.6 m), though the schema already covers most enum descriptions.
- Purpose 5/5 — Specific verb (Calculate) + resource (loading metres/LDM) with a clear mathematical definition and explicit differentiation from volume/weight-based siblings like cbm_calculator and chargeable_weight_calculator.
- Usage Guidelines 4/5 — Provides explicit 'when to use' scenarios (European road freight, trailer utilisation) but lacks explicit 'when not to use' guidance or a direct comparison to alternatives like pallet_fitting_calculator.
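The loading-metre arithmetic this tool reportedly implements can be sketched as follows. The 2.4 m trailer-width divisor and the simple stack_factor division are assumptions based on common European road-freight practice, not the server's confirmed formula:

```python
def loading_metres(length_m: float, width_m: float, quantity: int = 1,
                   stack_factor: int = 1, trailer_width_m: float = 2.4) -> float:
    """Approximate LDM: unit floor area divided by trailer width,
    reduced by how many units can be stacked on top of each other."""
    footprint_ldm = (length_m * width_m) / trailer_width_m
    return footprint_ldm * quantity / stack_factor

# An EUR-pallet (1.2 m x 0.8 m), non-stackable: 1.2 * 0.8 / 2.4 = 0.4 LDM
```

A standard 13.6 m artic trailer therefore holds roughly 34 non-stackable EUR-pallets (13.6 / 0.4) under this simplified model.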
adr_lookup
- Behavior 3/5 — Adds data-scope context (2,939 entries, 9 hazard classes) beyond the name, but is missing error-handling, rate-limit, or destructiveness info since no annotations are provided.
- Conciseness 5/5 — Front-loaded with purpose, followed by domain context, scope specifics, bulleted use cases, and parameter guidance; no redundant text.
- Completeness 4/5 — Comprehensive coverage of input intent and the data points returned (hazard class, tunnel codes, etc.), though the output format remains unspecified given the lack of an output schema.
- Parameters 4/5 — With 100% schema coverage, exceeds the baseline by clarifying the parameter relationship: 'UN number for exact lookup, or a search term for name-based search' (mutual-exclusivity guidance).
- Purpose 5/5 — Specific verb 'Look up' + resource 'ADR 2025 database'; clearly distinguishes itself from the sibling adr_exemption_calculator by positioning as a database lookup rather than a calculation tool.
- Usage Guidelines 4/5 — Explicit 'Use this tool when you need to' section with four specific scenarios, but lacks 'when not to use' guidance or an explicit contrast with adr_exemption_calculator.
Airline lookup
- Behavior 4/5 — Discloses data scope (6,352 airlines) and educates on a domain concept (AWB prefixes), though it lacks technical behaviors like rate limits or exact-vs-fuzzy matching details (no annotations contradict it).
- Conciseness 5/5 — Perfectly structured with a front-loaded purpose, bulleted use cases, and a domain explanation; every sentence adds value without bloat.
- Completeness 4/5 — Complete for a lookup tool of this complexity; explains return-value types (codes, cargo capabilities) despite lacking an output schema.
- Parameters 3/5 — The schema has 100% description coverage with examples and constraints, so the description correctly does not redundantly explain parameters, meeting the baseline.
- Purpose 5/5 — Specific verb ('Search') + resource ('airlines'); clearly distinguishes itself from its logistics siblings (containers, HS codes, calculators) by focusing on carrier-identification data.
- Usage Guidelines 4/5 — Explicit 'Use this tool when you need to:' section with three specific scenarios, though it lacks explicit 'when not to use' guidance or alternative suggestions.
container_lookup
- Behavior 3/5 — No annotations provided, so the description carries the full burden. It lists the covered container types and calculation capability, but omits behavioral details such as the validation logic when items exceed capacity, whether weight limits are enforced, and whether calculations use optimistic 3D bin packing or simple volume division.
- Conciseness 5/5 — Efficient structure with a purpose statement, coverage list, usage triggers, and parameter instructions; every sentence earns its place, and the core value proposition is well front-loaded.
- Completeness 4/5 — Appropriate for a 6-parameter tool with 100% schema coverage. Covers primary use cases and acknowledges the 'omit to list all' behavior via the schema. Minor gap: does not clarify the output format or calculation methodology (e.g., whether it accounts for pallet-stacking inefficiencies).
- Parameters 4/5 — The schema has 100% coverage, but the description adds valuable usage logic: 'Provide a container type for specs' vs 'Add item dimensions... to calculate loading.' This clarifies the optional parameter grouping (type only for specs; dimensions + weight + quantity for the loading calculation) better than the schema's zero-required-params structure.
- Purpose 5/5 — Opens with a specific verb + resource ('Get ISO shipping container specifications and calculate loading') and explicitly enumerates the 10 container types covered. Distinguishes itself from generic calculator siblings by specifying the ISO-container focus.
- Usage Guidelines 4/5 — Provides an explicit 'Use this tool when you need to:' section with three concrete scenarios (dimensions/capacity, pallet fitting, item loading). Lacks explicit 'when not to use' guidance or differentiation from siblings like pallet_fitting_calculator or cbm_calculator.
pallet_fitting_calculator
- Behavior 4/5 — No annotations provided, but the description discloses algorithmic behavior (90-degree rotation optimization, layer-by-layer stacking, weight-before-space checking) and output metrics (volume utilization percentage).
- Conciseness 5/5 — Excellent front-loading with a summary parenthetical, followed by mechanism bullets and usage bullets; every sentence earns its place with no redundancy.
- Completeness 4/5 — No output schema exists, but the description adequately covers the implied outputs (box count, layers, utilization percentage); slightly more detail on the return structure would achieve completeness given the parameter complexity.
- Parameters 3/5 — The schema has 100% description coverage; the description adds minimal semantic value beyond the schema but appropriately references key parameters (rotation, deck-height inclusion) to explain the logic.
- Purpose 5/5 — Specific verb ('Calculate') + resource ('boxes fit on a pallet') with distinguishing features (layers, rotation) that clearly differentiate it from volume-based siblings like cbm_calculator.
- Usage Guidelines 4/5 — Explicit 'Use this tool when' section with three concrete scenarios, but lacks explicit 'when not to use' guidance or a comparison to alternatives like cbm_calculator.
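The mechanism described for this tool (90-degree rotation, layer-by-layer stacking, weight checked before space) can be sketched with a simple grid model. The function names and the per-layer grid are illustrative assumptions, not the server's actual algorithm:

```python
def boxes_per_layer(pallet_l, pallet_w, box_l, box_w):
    """Try both orientations (90-degree rotation) and keep the better grid."""
    normal = (pallet_l // box_l) * (pallet_w // box_w)
    rotated = (pallet_l // box_w) * (pallet_w // box_l)
    return int(max(normal, rotated))

def pallet_fit(pallet_l, pallet_w, max_h, box_l, box_w, box_h,
               box_kg=None, max_kg=None):
    """Boxes per pallet: best layer grid times the number of layers,
    capped by the weight limit when one is given."""
    per_layer = boxes_per_layer(pallet_l, pallet_w, box_l, box_w)
    layers = int(max_h // box_h)
    total = per_layer * layers
    if box_kg and max_kg:  # weight limit checked before remaining space
        total = min(total, int(max_kg // box_kg))
    return total
```

For example, 40 x 30 cm boxes on a 120 x 80 cm pallet fit 8 per layer once rotation is considered, versus 6 in the unrotated grid.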
adr_exemption_calculator
- Behavior 4/5 — Explains the calculation logic (points = quantity x multiplier, 1,000 threshold) and the business meaning (reduced requirements) without contradicting annotations (none provided).
- Conciseness 5/5 — Well-structured with a clear hierarchy: purpose, rule explanation, usage list, parameter guidance; no redundant text.
- Completeness 3/5 — Missing an output description (what values are returned: a points total, a boolean exemption status?) given that no output schema exists to document the return structure.
- Parameters 4/5 — Adds crucial guidance on the mutually exclusive input patterns (single substance vs items array) beyond 100% schema coverage.
- Purpose 5/5 — Specific action (Calculate) + resource (ADR 1.1.3.6 exemption thresholds) with clear domain context that distinguishes it from the sibling adr_lookup.
- Usage Guidelines 4/5 — Explicitly lists three scenarios for use (check qualification, calculate points, determine compliance) and clarifies the single vs mixed-load input patterns.
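The 1.1.3.6 rule quoted above (points = quantity x multiplier, exemption at or below 1,000 points) can be sketched as follows. The multiplier table is a simplified assumption standing in for the full ADR transport-category mapping:

```python
# Assumed per-transport-category multipliers (simplified; the real ADR
# table assigns each UN number a transport category).
MULTIPLIERS = {1: 50, 2: 3, 3: 1, 4: 0}

def exemption_points(items):
    """items: list of (quantity, transport_category) pairs for a mixed load."""
    return sum(qty * MULTIPLIERS[cat] for qty, cat in items)

def qualifies_for_exemption(items, threshold=1000):
    """True when the load stays within the 1.1.3.6 small-load exemption."""
    return exemption_points(items) <= threshold
```

A mixed load of 10 units in category 1 and 200 units in category 3 would score 10 x 50 + 200 x 1 = 700 points and qualify under this model.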
cbm_calculator
- Behavior 4/5 — Adds valuable shipping-domain context (freight-tonne pricing rule, CBM definition) and input unit requirements beyond what annotations would provide.
- Conciseness 5/5 — Well-structured with a front-loaded definition, bulleted use cases, and clear input instructions; no extraneous content.
- Completeness 4/5 — Since no output schema exists, it implies the return values (CBM, cubic feet, litres) through the conversions it mentions; adequately complete for the tool's complexity.
- Parameters 4/5 — Reinforces centimetre units and clarifies the 'pieces' parameter's purpose of aggregating identical items, adding value despite 100% schema coverage.
- Purpose 5/5 — Explicitly defines CBM calculation for shipments and distinguishes itself from weight-based siblings (chargeable_weight_calculator) by emphasising the volume/ocean-freight context.
- Usage Guidelines 4/5 — Provides explicit 'Use this tool when' bulleted scenarios (sea freight quoting, freight tonnes) but lacks an explicit contrast describing when to use chargeable_weight_calculator instead.
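The CBM and freight-tonne arithmetic referenced above (centimetre inputs, a 'pieces' multiplier, the 1 CBM ~ 1,000 kg rule) can be sketched as follows; the function names are illustrative:

```python
def cbm(length_cm: float, width_cm: float, height_cm: float,
        pieces: int = 1) -> float:
    """Cubic metres for a number of identical pieces (inputs in centimetres)."""
    return (length_cm * width_cm * height_cm) / 1_000_000 * pieces

def freight_tonnes(cbm_value: float, weight_kg: float) -> float:
    """Ocean freight is often charged per freight tonne: the greater of
    volume (1 CBM) and weight (1,000 kg)."""
    return max(cbm_value, weight_kg / 1000)
```

Ten cartons of 120 x 80 x 100 cm come to 9.6 CBM; at 1,800 kg total, the shipment would be charged on its 9.6 freight tonnes of volume rather than its 1.8 tonnes of weight.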
HS code search
- Behavior 4/5 — Without annotations, the description effectively discloses the data scope (6,940 codes), hierarchy structure (section → subheading), and digit semantics.
- Conciseness 5/5 — Well-structured with a front-loaded purpose, a concise domain explanation, and scannable bullet points; no redundancy.
- Completeness 4/5 — Comprehensive for the input domain despite the missing output schema; explains the HS code system sufficiently for an agent to interpret results.
- Parameters 4/5 — Despite 100% schema coverage, adds the critical logical context that the parameters are mutually exclusive (search term OR exact code OR section).
- Purpose 5/5 — Specific verb + resource ('Search HS tariff codes') clearly distinguishes it from its logistics/calculator siblings (cbm_calculator, container_lookup, etc.).
- Usage Guidelines 4/5 — Explicit 'Use this tool when' section with three concrete scenarios and parameter usage patterns, though it lacks explicit exclusions.
unit_converter
- Behavior 4/5 — Discloses specific conversion factors (1 CBM = 1,000 kg, short vs long ton definitions) that explain the calculation behavior beyond what the schema provides.
- Conciseness 5/5 — Perfectly structured with a front-loaded purpose, bulleted use cases, and technical notes; every sentence provides actionable information without redundancy.
- Completeness 4/5 — Comprehensive for a 3-parameter tool; covers unit categories, conversion logic, and use cases, though the output schema is absent (not required for simple converters).
- Parameters 4/5 — Adds value beyond 100% schema coverage by listing valid unit-code examples (kg, lbs, cbm, cuft) that clarify the expected inputs for the string parameters.
- Purpose 5/5 — Opens with a specific action (Convert) and domain (freight/logistics units), clearly positioning itself as the general converter among specialised siblings like cbm_calculator.
- Usage Guidelines 4/5 — Provides explicit 'Use this tool when' scenarios with specific logistics contexts (air freight quoting, freight tonnes), though it lacks explicit 'when not to use' guidance or alternative tool references.
chargeable_weight_calculator
- Behavior 4/5 — Discloses the calculation logic (volumetric factor 6000 vs 5000), industry standards (IATA), and interprets the output's meaning (ratio > 1.0 = volumetric) despite no output schema or annotations.
- Conciseness 5/5 — Well-structured with a front-loaded summary and a logical flow from concept to usage to interpretation; no redundant sentences.
- Completeness 5/5 — Comprehensive given no output schema: explains return-value interpretation (ratio meanings) and covers all the business logic an agent needs to understand and present results.
- Parameters 4/5 — The schema has 100% coverage, but the description adds crucial business context for the 'factor' parameter (IATA standard vs DHL 5000) and explains the dimensional-weight concept beyond the raw parameter definitions.
- Purpose 5/5 — Clear, specific purpose ('Calculate air freight chargeable weight') with explicit domain context (airlines, IATA standards) that distinguishes it from the sibling cbm_calculator and container tools.
- Usage Guidelines 4/5 — Explicit 'Use this tool when...' bullet list covers quoting, volume/weight determination, and carrier comparison, though it lacks explicit 'when not to use' guidance or alternative suggestions.
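The chargeable-weight logic described above (volumetric weight from centimetre dimensions divided by a factor of 6000 per IATA, or 5000 for some couriers; a ratio above 1.0 means the shipment is volumetric) can be sketched as follows; the function name and tuple return are illustrative:

```python
def chargeable_weight(length_cm, width_cm, height_cm, actual_kg, factor=6000):
    """Air freight chargeable weight: the greater of actual and volumetric
    weight (cm^3 / factor; 6000 = IATA standard, 5000 used by some couriers).
    Returns (chargeable_kg, volumetric_to_actual_ratio)."""
    volumetric_kg = (length_cm * width_cm * height_cm) / factor
    ratio = volumetric_kg / actual_kg  # > 1.0 means the shipment is volumetric
    return max(actual_kg, volumetric_kg), ratio
```

A 1 m cube weighing 150 kg has a volumetric weight of about 166.7 kg at factor 6000, so it is charged on volume; the same box at factor 5000 would be charged at 200 kg.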
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge and Score Badge snippets are available to copy into your README.md from the server page.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
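The weighting scheme above can be sketched numerically. The dimension weights, the 60/40 mean/min blend, the 70/30 component split, and the tier thresholds are taken from the text; the function names and sample scores are illustrative:

```python
TDQS_WEIGHTS = {  # per-tool dimension weights from the rubric
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted Tool Definition Quality Score for one tool (1-5 per dimension)."""
    return sum(scores[d] * w for d, w in TDQS_WEIGHTS.items())

def overall_score(tool_scores: list, coherence: float) -> float:
    """70% definition quality (60% mean + 40% minimum TDQS) + 30% coherence."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    return ("A" if score >= 3.5 else "B" if score >= 3.0 else
            "C" if score >= 2.0 else "D" if score >= 1.0 else "F")
```

Because of the 40% minimum-TDQS term, a single tool scoring 2.0 drags the definition-quality component down even if every other tool scores 5.0.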
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/SoapyRED/freightutils-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.