TypeScript MCP Server Boilerplate
Server Quality Checklist
- Disambiguation 5/5
Each tool serves a completely distinct purpose (arithmetic, image generation, geocoding, weather, greeting, time) with no functional overlap. An agent can easily distinguish which tool to use based on the task requirements.
- Naming Consistency 3/5
Mixed naming conventions exist: some use kebab-case verb-noun (generate-image, get-weather), others use single words (calc, greet, time, geocode). The abbreviation 'calc' instead of 'calculate' and the inconsistent use of prefixes (get-weather vs just weather) reduce predictability.
- Tool Count 4/5
Six tools is appropriate for a boilerplate/demo server, providing enough variety to demonstrate different integration patterns (external APIs, calculations, utilities) without being overwhelming. Slightly arbitrary as a collection, but reasonable for demonstration purposes.
- Completeness 3/5
While geocode and get-weather form a cohesive pair, the other tools (calc, generate-image, greet, time) are isolated utilities with no workflow connections. As a boilerplate this demonstrates variety, but as a functional tool set it lacks domain cohesion and specific workflow completion.
Average 3.7/5 across 5 of 6 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 6 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
greet
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, placing the full disclosure burden on the description, yet it fails to mention behavioral traits such as whether the operation is read-only, idempotent, or stateless. It also does not clarify that the greeting is generated text rather than fetched from an external service, though the existence of an output schema mitigates the need for return value description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, highly efficient sentence that front-loads the core functionality without tautology or redundant phrasing. Every word contributes to understanding the input-output relationship, making it appropriately concise for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (two simple parameters, one required), complete schema coverage, and the existence of an output schema, the description provides sufficient context for correct invocation. However, it could be improved by mentioning safety characteristics or confirming the localized nature of the output given the Korean-language context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters (name and language) are fully documented in the input schema, including the default value for language. The description merely references them ('이름과 언어') without adding semantic context, syntax details, or usage examples beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb '반환합니다' (returns) with the resource '인사말' (greeting), clearly stating the tool generates a greeting message when given inputs. It effectively distinguishes itself from siblings like calc, generate-image, and geocode, which perform distinct functions (mathematics, image generation, geocoding).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but does not provide explicit guidance on when to prefer it over alternatives or specific use cases (e.g., when templated greetings are needed). Usage is implied by the tool name and parameter names, but no explicit when/when-not conditions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calc
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a result but does not mention error handling (e.g., division by zero), side effects, or whether the operation is idempotent. For a simple calculator, this is minimally acceptable but lacks richness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the essential information. Every element contributes to understanding the tool's function with no redundant or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity, complete parameter schema coverage, and existence of an output schema (mentioned in context signals), the description is appropriately complete. It successfully conveys the essential operation without needing to detail return values, though mentioning edge cases like division by zero would have improved completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all three parameters (a, b, operator) fully documented in the schema. The description summarizes these as 'two numbers and an operator' but does not add syntax details, constraints, or semantic relationships beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool receives two numbers and an operator, returning arithmetic results. It specifically mentions '사칙연산' (four arithmetic operations), which distinguishes it clearly from image generation, weather, and other sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While there are no explicit when-to-use instructions or named alternatives, the specific domain (mathematical calculation) makes the implied usage clear given the unrelated sibling tools (generate-image, geocode, etc.). However, it lacks explicit guidance on when to prefer this over manual calculation or error conditions like division by zero.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
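To make the guidance above concrete, here is one way the arithmetic tool's definition could be revised to disclose behavior and usage conditions. Every field below (the description text, parameter names, and schema shape) is a hypothetical illustration, not the server's actual definition:

```json
{
  "name": "calc",
  "description": "Performs one of the four basic arithmetic operations (+, -, *, /) on two numbers and returns the result. Read-only, idempotent, and side-effect free. Use this instead of computing values yourself when exact arithmetic is needed; division by zero returns an error rather than a numeric result.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "a": { "type": "number", "description": "Left operand" },
      "b": { "type": "number", "description": "Right operand" },
      "operator": {
        "type": "string",
        "enum": ["+", "-", "*", "/"],
        "description": "Arithmetic operator to apply"
      }
    },
    "required": ["a", "b", "operator"]
  }
}
```

Note how the description now covers behavioral traits (read-only, idempotent), an explicit when-to-use condition, and the division-by-zero edge case flagged in the scores above.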
generate-image
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It identifies the specific model (FLUX.1-schnell) and provider (Together), which is valuable context, but omits operational characteristics such as typical latency, rate limits, cost implications, or whether results are persisted vs. ephemeral.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with parenthetical model specification. Information is front-loaded with the core action, and every element (API name, model name, provider) earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (handling return value documentation) and complete parameter descriptions, the description provides sufficient essential context by identifying the AI model and backend service. However, it could benefit from noting this is an external API call with potential latency implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'text prompts' generally but does not elaborate on parameter semantics beyond the schema (e.g., explaining how num_inference_steps affects quality for FLUX specifically or prompt engineering best practices).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool generates images from text prompts using the HuggingFace Inference API, specifying both the verb (generate) and resource (images). It clearly distinguishes from siblings (calc, geocode, get-weather, etc.) which handle calculations and data retrieval rather than media generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use exclusions or alternatives are mentioned. However, the tool's purpose (AI image generation) is distinct enough from text-based/calculation siblings that implied usage is reasonably clear, though explicit guidance on when to prefer this over other image generation methods is absent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
geocode
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully identifies the external dependency (Nominatim OpenStreetMap), hinting at network latency and rate limits, but lacks explicit details about error handling, what happens when addresses are not found, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with a parenthetical data source attribution. It is appropriately front-loaded with no redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema with 100% coverage and the presence of an output schema, the description is sufficiently complete for a straightforward geocoding tool. It identifies the return value type (coordinates) and data source, though it could benefit from mentioning error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description conceptually maps to the 'query' parameter (도시명 또는 주소) but adds no additional semantic guidance beyond the schema descriptions, such as address formatting tips or the significance of the 'limit' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts city names or addresses into latitude/longitude coordinates using specific verbs (반환합니다) and identifies the resource (위도·경도 좌표). It distinguishes clearly from unrelated siblings like calc, generate-image, and greet.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the unique name makes the purpose obvious among siblings, there is no explicit guidance on when to use this versus alternatives or prerequisites. For example, it doesn't mention whether to use this before get-weather when only a city name is available.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-weather
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the Open-Meteo data source and specifies that both current weather and daily forecasts are returned. However, it omits behavioral details like rate limits, caching behavior, or error handling for invalid coordinates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with action front-loaded. The parenthetical data source '(Open-Meteo)' adds provenance without verbosity. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema (per context signals) and 100% input schema coverage, the description appropriately summarizes the return value type (current + daily forecast) without enumerating fields. Adequate for a standard weather lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds conceptual context by grouping parameters as 'coordinates and forecast period' and explaining they are used to fetch weather data, but does not add syntax details beyond the schema (e.g., WGS84 format is only in schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool returns current weather and daily forecasts using coordinates and forecast periods. It effectively distinguishes from siblings (calc, generate-image, geocode, greet, time) by specifying the weather domain and Open-Meteo data source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage context (weather lookup by coordinates) but provides no explicit when-to-use guidance versus alternatives. It does not mention coordinate prerequisites or suggest using the 'geocode' sibling tool first if the user only has an address string.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
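The implicit geocode-then-weather workflow noted above can be sketched as a small client-side helper. The `callTool` interface and the argument/field names (`query`, `limit`, `latitude`, `longitude`, `forecast_days`) are assumptions made for illustration, not the server's confirmed contract:

```typescript
// Minimal client abstraction; a real MCP client would sit behind this interface.
interface ToolClient {
  callTool(name: string, args: Record<string, unknown>): Promise<any>;
}

// Resolve a free-form place name to coordinates via `geocode`,
// then fetch weather for those coordinates via `get-weather`.
async function weatherForPlace(
  client: ToolClient,
  place: string,
  days = 3
): Promise<any> {
  const results = await client.callTool("geocode", { query: place, limit: 1 });
  if (!results || results.length === 0) {
    throw new Error(`No coordinates found for "${place}"`);
  }
  const { latitude, longitude } = results[0];
  return client.callTool("get-weather", { latitude, longitude, forecast_days: days });
}
```

If the tool descriptions spelled out this ordering ("call geocode first when only an address string is available"), an agent could discover the chain without it being hard-coded on the client side.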
time
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of behavioral disclosure, mentioning timezone support but omitting details about the return format (though mitigated by the presence of an output schema). It does not explicitly confirm this is a safe, idempotent read operation, though this is reasonably inferred from the description's wording.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences that front-load the core functionality (returning current time) followed by the key optional feature (timezone specification). There is no redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single optional parameter) and the presence of an output schema to define return values, the description provides sufficient context for an agent to understand and invoke the tool correctly. It appropriately delegates parameter details to the schema while conveying the essential purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter semantics are adequately handled by the schema itself, which documents the timezone string format and default value. The description adds minimal semantic context beyond stating that timezone specification is possible, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb '반환합니다' (returns) with the resource '현재 시각' (current time), clearly indicating it retrieves temporal data. This distinctly differentiates it from siblings like calc (calculation), generate-image (image creation), and get-weather (weather data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage by stating it returns the current time and accepts timezone parameters, it lacks explicit guidance on when to prefer this over manually calculating time or using other tools. No alternative approaches or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
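The weighting described above can be made concrete with a short sketch. The formulas are taken directly from the text; the function and type names are mine:

```typescript
// Per-tool TDQS: weighted mean of the six dimension scores (each 1–5).
type DimScores = {
  purpose: number; usage: number; behavior: number;
  parameters: number; conciseness: number; completeness: number;
};

function tdqs(s: DimScores): number {
  return 0.25 * s.purpose + 0.20 * s.usage + 0.20 * s.behavior
       + 0.15 * s.parameters + 0.10 * s.conciseness + 0.10 * s.completeness;
}

// Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
// so one poorly described tool drags the whole score down.
function serverDefinitionQuality(toolScores: number[]): number {
  const mean = toolScores.reduce((a, b) => a + b, 0) / toolScores.length;
  const min = Math.min(...toolScores);
  return 0.6 * mean + 0.4 * min;
}

// Overall: 70% definition quality + 30% coherence (mean of four equal dimensions).
function overallScore(definitionQuality: number, coherenceDims: number[]): number {
  const coherence = coherenceDims.reduce((a, b) => a + b, 0) / coherenceDims.length;
  return 0.7 * definitionQuality + 0.3 * coherence;
}

function tier(score: number): string {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}
```

For example, the arithmetic tool's scores reported above (Purpose 5, Usage 3, Behavior 3, Parameters 3, Conciseness 5, Completeness 4) give a TDQS of 3.8, and this server's coherence dimensions (5, 3, 4, 3) average 3.75.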
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/devbrother2024/my-mcp-server-260402'
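The same endpoint can be queried from TypeScript. Only the URL from the curl example above is taken from the source; `buildServerUrl` and `fetchServerInfo` are hypothetical helper names, and the response shape is not assumed:

```typescript
const API_BASE = "https://glama.ai/api/mcp/v1/servers";

// Build the endpoint URL for a given owner and server slug.
function buildServerUrl(owner: string, slug: string): string {
  return `${API_BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(slug)}`;
}

// Fetch server metadata as parsed JSON (Node 18+ provides a global fetch).
async function fetchServerInfo(owner: string, slug: string): Promise<unknown> {
  const res = await fetch(buildServerUrl(owner, slug));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```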
If you have feedback or need assistance with the MCP directory API, please join our Discord server.