TypeScript MCP Server Boilerplate
Server Quality Checklist
- Disambiguation 5/5: Each tool serves a completely distinct purpose (arithmetic, image generation, geocoding, weather, greetings, time). There is no functional overlap between tools, making selection unambiguous.
- Naming Consistency 3/5: Mixed conventions: 'generate-image' and 'get-weather' use kebab-case verb-noun patterns, while 'calc', 'geocode', 'greet', and 'time' are single lowercase words (with 'time' being a noun rather than a verb). Readable but structurally inconsistent.
- Tool Count 4/5: Six tools is reasonable for a boilerplate/demo server intended to showcase different integration patterns (HuggingFace, OpenStreetMap, Open-Meteo, simple functions). Not overwhelming, though the selection feels somewhat arbitrary.
- Completeness 2/5: No coherent domain unifies the tools; they appear to be unrelated examples. While geocode + get-weather form a minimal workflow, the set lacks completeness for any specific purpose (weather, math, image generation, etc.) and reads as a disconnected grab bag of capabilities.
Average 3.8/5 across 6 of 6 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 6 tools.
No known security issues or vulnerabilities reported.
If you are the author of the server, add related servers to improve discoverability.
Tool Scores
Each tool is scored 1–5 on six dimensions. The guiding question and rationale for each dimension apply to every tool below:
- Behavior: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior? Agents need to know what a tool does to the world before calling it; descriptions should go beyond structured annotations to explain consequences.
- Conciseness: Is the description appropriately sized, front-loaded, and free of redundancy? Shorter descriptions cost fewer tokens and are easier for agents to parse; every sentence should earn its place.
- Completeness: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt? Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales expectations accordingly.
- Parameters: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides? Input schemas describe structure but not intent; descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: Does the description clearly state what the tool does and how it differs from similar tools? Agents choose between tools based on descriptions; a clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: Does the description explain when to use this tool, when not to, or what alternatives exist? Agents often have multiple tools that could apply; explicit usage guidance like "use X instead of Y when Z" prevents misuse.
greet
- Behavior 3/5: No annotations are provided, so the description carries the full burden. It correctly discloses that the tool returns a greeting (output behavior), but omits details on localization defaults, determinism, or whether external translation services are invoked.
- Conciseness 5/5: Single sentence with zero redundancy. Immediately conveys inputs and outputs without filler words. Appropriate length for a two-parameter utility function.
- Completeness 4/5: Adequate for the tool's simplicity. With a documented output schema (per context signals) and complete input parameter coverage, the description covers the essential contract without needing to elaborate on return value structure.
- Parameters 3/5: With 100% schema description coverage (both the name and language parameters have complete descriptions and enums), the schema already documents semantics. The description merely lists the parameters (이름과 언어/name and language) without adding syntax constraints, examples, or validation rules beyond the schema.
- Purpose 4/5: The Korean description '이름과 언어를 입력하면 인사말을 반환합니다' ("enter a name and a language, and a greeting is returned") clearly states the tool takes a name and language and returns a greeting. This functionally distinguishes it from siblings like calc, generate-image, and geocode, though it does not explicitly mention sibling alternatives.
- Usage Guidelines 2/5: Provides only a functional description ("input X, get Y") with no guidance on when this greeting tool should be preferred over simple string manipulation, or which specific greeting format is returned. (A sketch of a richer registration follows this list.)
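As an illustration of how the two lowest dimensions here (Behavior, Usage Guidelines) could be raised, here is a sketch of a richer greet registration using the MCP TypeScript SDK's server.tool overload. The description wording, the supported languages, and the zod schema are hypothetical illustrations, not this server's actual source:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "my-mcp-server", version: "1.0.0" });

// The description now covers behavior (deterministic, no external calls)
// and usage guidance (when to use it, when not to).
server.tool(
  "greet",
  "Returns a fixed-template greeting for a name in the requested language. " +
    "Deterministic and side-effect free; no external translation service is called. " +
    "Use for user-facing salutations only; do not use for general translation.",
  {
    name: z.string().describe("Name to greet, inserted into the template verbatim."),
    language: z.enum(["en", "ko"]).describe("Language of the greeting."),
  },
  async ({ name, language }) => ({
    content: [
      {
        type: "text",
        text: language === "ko" ? `${name}님, 안녕하세요!` : `Hello, ${name}!`,
      },
    ],
  }),
);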
get-weather
- Behavior 3/5: With no annotations provided, the description carries the full burden. It successfully discloses the external API dependency (Open-Meteo) and authentication behavior (no API key needed), but omits other behavioral traits like rate limits, timeout behavior, or data freshness/caching policies.
- Conciseness 5/5: Extremely efficient single-sentence structure with a high-density parenthetical addition. Every element earns its place: the main clause covers inputs/outputs, and the parenthetical covers critical behavioral context (API source and auth requirements) without clutter.
- Completeness 4/5: Appropriate completeness given the tool's moderate complexity, 100% schema coverage, and existing output schema. The description covers the essential contract (inputs/outputs) and the key operational note (no API key). Minor gap: it does not explicitly declare the tool's read-only/safe nature absent annotations.
- Parameters 3/5: Schema description coverage is 100%, providing detailed constraints (WGS84 ranges, defaults, enums) for all four parameters. The description mentions latitude, longitude, and forecast period conceptually but adds no semantic detail beyond what the schema already provides, earning the baseline score.
- Purpose 4/5: The description clearly states the specific action (returns current weather and daily forecast) and required inputs (latitude, longitude, forecast period). However, it does not explicitly distinguish itself from siblings like 'geocode' (which complements this tool by converting addresses to coordinates), though the domains are distinct.
- Usage Guidelines 3/5: The description provides useful context that the tool uses the Open-Meteo API and requires no API key, which helps agents decide when to use it (free, accessible). However, it lacks explicit guidance on when NOT to use it (e.g., "do not use for historical weather") or prerequisites (e.g., "requires coordinates, not city names").
time
- Behavior 2/5: No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that it returns "time" but fails to specify the format (ISO 8601, Unix timestamp, human-readable string) or timezone handling (UTC, local, server time), which are critical for an agent to use the result correctly.
- Conciseness 5/5: A single, efficient sentence that front-loads the core functionality. No wasted text or redundancy, which is appropriate to the tool's simplicity.
- Completeness 4/5: Given that the tool has zero input parameters and an output schema exists (per context signals), the description appropriately does not need to detail return values. However, with no annotations covering behavioral traits, the description could be more complete by mentioning timezone or format specifics.
- Parameters 4/5: The tool accepts zero parameters. Per the scoring rubric, zero parameters warrants a baseline score of 4. Schema coverage is 100% (of zero properties), so there are no semantic gaps to fill.
- Purpose 5/5: The description uses a specific verb (반환합니다/returns) and resource (시간/time), clearly stating that the tool fetches the current time. It distinguishes clearly from siblings like calc, generate-image, and geocode, which handle completely different domains.
- Usage Guidelines 2/5: The description provides no guidance on when to use this tool versus alternatives, no prerequisites, and no exclusions. While the purpose is simple, there is no explicit mention of use cases (e.g., when to prefer this over calc for timestamp calculations).
generate-image
- Behavior 3/5: No annotations provided, so the description carries the full burden. It discloses the external API dependency, specific model version, and authentication requirement. However, it omits behavioral traits like output format (base64, URL, or binary?), inference latency expectations, rate limits, or content policy restrictions that would help an agent handle the response appropriately.
- Conciseness 5/5: Single efficient sentence containing the action, method, and parenthetical technical details (model, auth). No redundant words or wasted space; information density is high with critical details front-loaded.
- Completeness 3/5: With no output schema provided, the description should ideally indicate what the tool returns (image data format, URL, etc.). While it covers the input parameters adequately and mentions authentication, the absence of return value documentation leaves a significant gap for an agent attempting to process the result.
- Parameters 3/5: Schema coverage is 100%, establishing baseline 3. The description mentions '텍스트 프롬프트' (text prompt), which aligns with the required parameter, but adds no additional semantic meaning, examples, or format guidance beyond what the schema already provides for 'prompt' and 'num_inference_steps'.
- Purpose 5/5: The description clearly states the tool generates images from text prompts via the HuggingFace Inference API. The specific model (FLUX.1-schnell) is named, and the action (생성합니다/generates) is precise. Distinct from siblings (calc, geocode, get-weather, etc.), which operate on entirely different domains.
- Usage Guidelines 4/5: Explicitly states the authentication requirement (HF_TOKEN environment variable), which is critical for usage. While it doesn't explicitly state "when not to use" alternatives, the sibling tools are functionally distinct (math, location, weather), making the appropriate use case self-evident.
calc
- Behavior 3/5: Without annotations, the description carries the full disclosure burden. It states that the tool returns calculation results, implying a pure function. However, it omits critical behavioral details like division-by-zero handling, numeric precision, or whether the operation is read-only (though this is implied by "returns").
- Conciseness 5/5: Single sentence, zero waste. Front-loaded with inputs (두 숫자와 연산자/two numbers and an operator), operation type (사칙연산/the four basic arithmetic operations), and output (반환합니다/returns). Every element earns its place with no redundancy.
- Completeness 4/5: Given the tool's simplicity (3 primitive parameters, 100% schema coverage) and the presence of an output schema, the description provides sufficient context. It adequately covers the happy path, though it could mention edge cases like division by zero for full completeness (a guarded handler sketch follows this list).
- Parameters 4/5: The schema has 100% description coverage (baseline 3). The description adds value by explicitly listing the operator symbols (+, -, *, /) in prose, reinforcing the enum constraints and providing a complete summary of required inputs (two numbers and an operator) even though the schema documents individual fields.
- Purpose 5/5: The description clearly states the tool performs the four basic arithmetic operations (사칙연산) on two numbers using an operator, specifying the exact supported operators (+, -, *, /). It clearly distinguishes itself from siblings like generate-image, geocode, and get-weather, which handle completely different domains.
- Usage Guidelines 3/5: The description implies usage through functional definition (use for arithmetic calculations), but lacks explicit when-to-use guidance or exclusions. Given the siblings are unrelated (weather, images, etc.), the context is clear, but no explicit alternatives or limitations are stated.
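Documenting and guarding the division-by-zero edge case noted above is cheap. A hypothetical handler sketch (the parameter names are assumed; the error is surfaced via MCP's isError result flag rather than returned as Infinity):

// Hypothetical calc handler: guards the division-by-zero edge case the
// review flags and reports it as a tool error instead of Infinity.
type Op = "+" | "-" | "*" | "/";

function calc(a: number, b: number, op: Op) {
  if (op === "/" && b === 0) {
    return {
      isError: true,
      content: [{ type: "text", text: "Division by zero is not allowed." }],
    };
  }
  const result =
    op === "+" ? a + b : op === "-" ? a - b : op === "*" ? a * b : a / b;
  return { content: [{ type: "text", text: String(result) }] };
}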
geocode
- Behavior 4/5: No annotations are provided, so the description carries the full disclosure burden. It successfully identifies the external API dependency (Nominatim/OpenStreetMap), signaling network usage, potential rate limits, and data source attribution requirements. However, it omits explicit mention of blocking behavior, error handling when locations are not found, or rate limit specifics.
- Conciseness 5/5: A single, efficient sentence with the API attribution parenthetically appended. Every element earns its place: the input types, the output format, and the service provider. No redundancy or verbosity.
- Completeness 4/5: Given the presence of an output schema (not shown but indicated in context signals), the description appropriately omits return value details. With 100% parameter coverage in the schema and clear API identification in the description, it is complete for the tool's complexity level, though it could briefly mention error scenarios for perfection.
- Parameters 3/5: With 100% schema description coverage, the schema already comprehensively documents all three parameters, including type constraints and examples (e.g., ISO 3166-1 alpha-2 format for countrycodes). The description mentions 도시 이름이나 주소 ("a city name or address"), aligning with the query parameter, but does not add semantic value beyond the schema definitions, warranting the baseline score.
- Purpose 5/5: The description clearly states the specific action (반환합니다/returns), input types (도시 이름이나 주소/city names or addresses), and output (위도·경도 좌표/latitude-longitude coordinates). It names the underlying API provider (Nominatim/OpenStreetMap), distinguishing it from unrelated siblings like calc, generate-image, and get-weather.
- Usage Guidelines 3/5: The description implies usage context through input/output specification but lacks explicit guidance on when to use this tool versus alternatives, or prerequisites like network availability. Given the siblings are completely unrelated (calc, greet, time), the absence of explicit differentiation is mitigated but still present. (A client-side workflow sketch follows this list.)
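The one natural workflow in the set, geocode feeding get-weather, looks like this from a client's perspective. A sketch using the MCP TypeScript SDK client; the argument names follow the schemas described above but are assumptions, not verified against the server's source:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to the server over stdio, then chain geocode -> get-weather.
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["dist/index.js"] }),
);

// 1. Resolve a city name to coordinates (the "query" parameter name is assumed).
const geo = await client.callTool({
  name: "geocode",
  arguments: { query: "Seoul" },
});
console.log(geo);

// 2. Feed coordinates into the weather lookup. Parsing of the geocode
// result is omitted here because its output schema is not shown above.
const weather = await client.callTool({
  name: "get-weather",
  arguments: { latitude: 37.57, longitude: 126.98 },
});
console.log(weather);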
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, weighted across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
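To make the weighting concrete, here is a sketch of the formula in TypeScript. The weights and splits are the ones listed above; the greet dimension scores and the coherence scores (5, 3, 4, 2) come from this page, while the remaining per-tool TDQS values in the final example are hypothetical:

// Per-tool TDQS: weighted mean of the six dimension scores (each 1-5).
const dimensionWeights = {
  purpose: 0.25,
  usageGuidelines: 0.2,
  behavior: 0.2,
  parameters: 0.15,
  conciseness: 0.1,
  completeness: 0.1,
};

type Dimension = keyof typeof dimensionWeights;

function tdqs(scores: Record<Dimension, number>): number {
  return (Object.keys(dimensionWeights) as Dimension[]).reduce(
    (sum, dim) => sum + dimensionWeights[dim] * scores[dim],
    0,
  );
}

// greet's dimension scores from the breakdown above.
const greetTdqs = tdqs({
  purpose: 4, usageGuidelines: 2, behavior: 3,
  parameters: 3, conciseness: 5, completeness: 4,
});

// Server-level definition quality: 60% mean + 40% minimum, so one
// poorly described tool drags the whole server down.
function definitionQuality(toolScores: number[]): number {
  const mean = toolScores.reduce((a, b) => a + b, 0) / toolScores.length;
  return 0.6 * mean + 0.4 * Math.min(...toolScores);
}

// Overall: 70% definition quality + 30% coherence (mean of 4 dimensions).
function overall(toolScores: number[], coherence: number[]): number {
  const coherenceMean = coherence.reduce((a, b) => a + b, 0) / coherence.length;
  return 0.7 * definitionQuality(toolScores) + 0.3 * coherenceMean;
}

console.log(greetTdqs.toFixed(2)); // "3.35"
// Hypothetical per-tool TDQS values plus this page's coherence scores.
console.log(overall([3.35, 3.7, 3.5, 3.6, 3.9, 3.8], [5, 3, 4, 2]).toFixed(2));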
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/bakcoder/my-mcp-server'
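The same endpoint can be called programmatically. A minimal TypeScript sketch; the response is left untyped since its schema is not documented here:

// Fetch this server's directory entry from the Glama MCP API.
const res = await fetch(
  "https://glama.ai/api/mcp/v1/servers/bakcoder/my-mcp-server",
);
if (!res.ok) throw new Error(`Glama API request failed: ${res.status}`);
const server: unknown = await res.json();
console.log(server);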
If you have feedback or need assistance with the MCP directory API, please join our Discord server.