TypeScript MCP Server Boilerplate
Server Quality Checklist
- Disambiguation 5/5
Each tool has a completely distinct purpose—calculation, image generation, geocoding, weather fetching, greeting, and time retrieval—with no functional overlap. An agent can easily distinguish between them based on use case.
- Naming Consistency 3/5
Naming conventions are inconsistent: 'generate-image' and 'get-weather' use hyphenated verb-noun patterns, while 'calc', 'geocode', 'greet', and 'time' are unhyphenated and vary between shortened verbs, nouns, and standalone words. The mix of hyphenation and verb styles lacks a predictable schema.
- Tool Count 4/5
With six tools, the count falls within the ideal range (3-15) for a demonstration boilerplate. Each tool showcases different integration patterns (calculation, AI API, geocoding, weather, string manipulation, datetime), justifying the scope without being excessive.
- Completeness 3/5
As a boilerplate server covering disparate utilities, domain completeness is ambiguous. While 'geocode' and 'get-weather' form a coherent pipeline, other tools are isolated capabilities. Notable gaps exist for a cohesive utility suite—no reverse geocoding, timezone conversion, or image manipulation beyond generation.
Average 3.7/5 across 5 of 6 tools scored.
See the tool scores section below for per-tool breakdowns.
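The naming inconsistency flagged in the checklist could be fixed by adopting a single kebab-case verb-noun scheme. A minimal TypeScript sketch; the suggested replacement names are illustrative, not the server's actual identifiers:

```typescript
// Map the server's current tool names to a consistent kebab-case
// verb-noun scheme. The suggested names are hypothetical.
const suggestedNames: Record<string, string> = {
  "calc": "calculate-expression",
  "generate-image": "generate-image", // already conforms
  "geocode": "geocode-address",
  "get-weather": "get-weather",       // already conforms
  "greet": "generate-greeting",
  "time": "get-time",
};

// A verb-noun kebab-case name: two or more lowercase words joined by hyphens.
const verbNounPattern = /^[a-z]+(-[a-z0-9]+)+$/;

function isConsistentName(name: string): boolean {
  return verbNounPattern.test(name);
}
```

A predictable scheme like this lets an agent infer a tool's role from its name alone, which directly addresses the "lacks a predictable schema" criticism.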
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 6 tools.
No known security issues or vulnerabilities reported.
Are you the author?
Add related servers to improve discoverability.
Tool Scores
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the data source (Open-Meteo API) and the specific data types returned (current weather, fine dust, daily forecast). However, it lacks details on rate limits, caching behavior, error conditions, or whether the API call is synchronous/asynchronous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence structured as input-to-output mapping, followed by a data source disclosure. Every clause earns its place; there is no redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description appropriately summarizes the return values (weather, dust, forecast) without replicating the full schema structure. It covers the essential contract for a read-only weather tool, though it could mention that `forecast_days` defaults to 3.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description lists the three parameters (latitude, longitude, forecast period) in the first clause, confirming their role as inputs, but adds no semantic details beyond what the schema already provides (e.g., no clarification on coordinate systems or the optional nature of forecast_days).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns current weather, fine dust (air quality), and daily forecasts based on coordinates and forecast period. It uses specific verbs and resources. However, it does not explicitly distinguish from the `geocode` sibling tool, which users might need to call first if they only have an address.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention that `geocode` should be used first if the user provides a location name instead of coordinates. No rate limits, prerequisites, or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
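To act on the parameter-semantics feedback above, the get-weather input schema could carry the missing detail (coordinate system, valid ranges, default) in JSON Schema `description` fields. A sketch, assuming WGS84 coordinates and a forecast-day cap of 16 (both assumptions, not confirmed by the server's code):

```typescript
// A sketch of a richer input schema for the get-weather tool.
// Field names mirror the review above; the wording is illustrative.
const getWeatherInputSchema = {
  type: "object",
  properties: {
    latitude: {
      type: "number",
      minimum: -90,
      maximum: 90,
      description:
        "Latitude in decimal degrees (WGS84). Obtain via the geocode tool " +
        "if you only have a place name.",
    },
    longitude: {
      type: "number",
      minimum: -180,
      maximum: 180,
      description: "Longitude in decimal degrees (WGS84).",
    },
    forecast_days: {
      type: "integer",
      minimum: 1,
      maximum: 16,
      default: 3,
      description: "Number of daily forecast entries. Optional; defaults to 3.",
    },
  },
  required: ["latitude", "longitude"],
};
```

Putting the default and ranges in the schema itself, rather than only in prose, lets agents validate arguments before the call is made.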
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns a greeting output, confirming the read-only/generative nature. However, it omits side-effect disclosure, idempotency, or state persistence details, though these are less critical for this simple utility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with zero waste. It front-loads the core action (returning a greeting) and necessary inputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a simple 2-parameter utility with 100% schema coverage and an output schema exists (per context signals), the description provides sufficient context for invocation. It does not need to explain return values since output schema is present.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the schema fully documenting both parameters (name as 'name of person to greet' and language with enum/default). The description mentions '이름과 언어' (name and language) confirming the schema, but does not add syntax or semantic meaning beyond the structured schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a greeting (인사말) when given name and language inputs, providing a specific verb+resource. However, it does not explicitly differentiate from the generate-image sibling or clarify that this is text generation vs. other output types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., when to use this instead of generate-image for text purposes), nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions returning a calculation result, but does not disclose error handling (e.g., division by zero), numeric precision limits, or other behavioral edge cases that would help an agent predict failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with zero waste. It is appropriately front-loaded with the core action (input) and result (return) clearly stated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity, 100% schema coverage, and existence of an output schema, the description is sufficient. It acknowledges the return value (계산 결과, "calculation result"), which is adequate when output schema details are available separately, though it could mention error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description ('두 숫자와 연산자를 입력하면', "when two numbers and an operator are entered") essentially restates what the schema already documents without adding syntax guidance, valid ranges, or semantic relationships between parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs calculations (계산 결과를 반환합니다, "returns the calculation result") on two numbers using an operator. It effectively distinguishes from siblings (generate-image, geocode, etc.) by specifying the mathematical nature of the operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, or when not to use it. While the siblings are in different domains, the description does not explicitly state decision criteria for invoking this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the external dependency ('Nominatim OpenStreetMap API'), informing the agent about the data source. However, it lacks details on error behavior (e.g., ambiguous queries, not-found locations), rate limits, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. The first front-loads the core action (input→output transformation), while the second adds valuable implementation context (external API dependency). Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simple single-parameter nature, 100% schema coverage, and existence of an output schema, the description is appropriately complete. It identifies the return values (lat/lon) without needing to detail the full output structure. Minor gap: no mention of error cases or ambiguous results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description reinforces the parameter's purpose ('도시 이름이나 주소' / city name or address) but does not add semantic constraints or format guidance beyond what the schema already provides (e.g., no guidance on preferred address formats or disambiguation hints).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the transformation function (input city name/address → output lat/lon coordinates) using specific verbs and resources. It clearly distinguishes from siblings (calc, generate-image, get-weather, etc.) by specifying geocoding functionality that none of the others provide.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, particularly sibling 'get-weather' which may also accept location inputs. It does not clarify whether locations should be geocoded first before passing to other tools or if those tools accept raw city names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
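The missing usage guidance for the geocode/get-weather pair could be encoded directly in the tool descriptions. A hedged sketch — the wording is invented for illustration, not taken from the server:

```typescript
// Illustrative descriptions that add explicit when-to-use guidance,
// cross-referencing the sibling tool in each direction.
const toolDescriptions: Record<string, string> = {
  "geocode":
    "Convert a city name or address into latitude/longitude via the " +
    "Nominatim OpenStreetMap API. Use this first when the user gives a " +
    "place name; pass the resulting coordinates to get-weather.",
  "get-weather":
    "Fetch current weather, fine dust, and a daily forecast from the " +
    "Open-Meteo API for given coordinates. Requires numeric " +
    "latitude/longitude; call geocode first if you only have an address.",
};
```

Cross-referencing each tool in the other's description is the "use X instead of Y when Z" pattern the rubric asks for, and it resolves the pipeline ambiguity noted above.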
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses the specific model (FLUX.1-schnell) which hints at speed/quality tradeoffs, but lacks details about output format (URL vs base64), file persistence, authentication requirements, or rate limiting that would be crucial for an agent to know before invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two compact sentences with zero redundancy. The first establishes the input-output contract (prompt → image), the second provides implementation context (model name). Every word earns its place; structure is front-loaded with the most critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with complete schema documentation, the description is minimally sufficient. It mentions the specific model which is helpful context. However, given no output schema exists, the description could have clarified the return format (e.g., image URL, base64 data) or whether the generation is synchronous/asynchronous.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description references '텍스트 프롬프트' (text prompt) which aligns with the required 'prompt' parameter, but adds no semantic detail beyond the schema for 'num_inference_steps' or regarding parameter interactions. The schema fully documents both parameters, so minimal additional description is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (generate images) and resource (images from text prompts), and explicitly names the underlying model (FLUX.1-schnell). It clearly distinguishes from siblings like calc, geocode, and get-weather which handle completely different domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. While the purpose is clear enough that an agent would know to use this for image generation tasks, there is no discussion of prerequisites, rate limits, or comparison to hypothetical alternative image generation approaches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the localization behavior (Korean format), which is crucial context. However, it omits timezone specifics, exact output structure (though output schema exists), and read-only safety characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with no redundancy. It front-loads the action (returns) and specifies the resource and format immediately. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple utility tool with zero parameters and an existing output schema, the description is complete. It covers the essential semantic detail (Korean format) that structured fields cannot convey, without needing to elaborate on return values covered by the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters. According to the rubric baseline, this defaults to 4. The schema coverage is 100% (trivially), and there are no parameter semantics to describe beyond what the empty schema indicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns (반환합니다) current date and time (현재 날짜와 시간) in Korean format (한국어 형식으로). It distinguishes from siblings like calc, get-weather, and geocode which handle calculations, weather, and geocoding respectively.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying 'Korean format,' suggesting use when Korean localization is needed. However, it lacks explicit when-to-use guidance, prerequisites, or comparisons to alternatives (e.g., noting that calc or other tools don't provide time).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
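The weighting scheme above can be sketched as a small calculator. This is a reconstruction from the stated percentages, not Glama's actual implementation:

```typescript
// Per-dimension scores for one tool, each on a 1-5 scale.
type DimScores = {
  purpose: number; usage: number; behavior: number;
  parameters: number; conciseness: number; completeness: number;
};

// Tool Definition Quality Score: weighted sum of the six dimensions.
function toolScore(d: DimScores): number {
  return 0.25 * d.purpose + 0.20 * d.usage + 0.20 * d.behavior +
         0.15 * d.parameters + 0.10 * d.conciseness + 0.10 * d.completeness;
}

// Server-level definition quality: 60% mean + 40% minimum, so one
// poorly described tool drags the whole server down.
function serverDefinitionQuality(tools: DimScores[]): number {
  const scores = tools.map(toolScore);
  const mean = scores.reduce((a, b) => a + b, 0) / scores.length;
  return 0.6 * mean + 0.4 * Math.min(...scores);
}

// Overall score: 70% definition quality, 30% coherence.
function overallScore(defQuality: number, coherence: number): number {
  return 0.7 * defQuality + 0.3 * coherence;
}

function tier(score: number): string {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}
```

For example, the get-weather scores above (Purpose 4, Usage 2, Behavior 3, Parameters 3, Conciseness 5, Completeness 4) work out to a TDQS of 3.35.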
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/JunWoo0406/my-mcp-server'
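The same endpoint can be called from TypeScript. A minimal sketch assuming Node 18+ global fetch and a JSON response (the response shape is not documented here, so it is typed loosely):

```typescript
// Build the Glama MCP API URL for a given author/server slug.
function glamaServerUrl(author: string, server: string): string {
  return `https://glama.ai/api/mcp/v1/servers/${author}/${server}`;
}

// Fetch server metadata from the directory API.
async function fetchServerInfo(author: string, server: string): Promise<unknown> {
  const res = await fetch(glamaServerUrl(author, server));
  if (!res.ok) throw new Error(`Glama API returned ${res.status}`);
  return res.json();
}
```

Usage mirrors the curl example: `fetchServerInfo("JunWoo0406", "my-mcp-server")`.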
If you have feedback or need assistance with the MCP directory API, please join our Discord server.