
Server Quality Checklist

Profile completion: 92%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool has a clearly distinct purpose: register/delete/list for API lifecycle management, search_endpoints for discovery, get_endpoint_schema for detailed inspection, get_workflow for task planning with dependencies, set_api_auth for configuration, and call_api for execution. No overlapping functionality.

    Naming Consistency: 5/5

    Consistent snake_case throughout with verb_noun pattern (call_api, delete_api, list_apis, register_api, search_endpoints, get_workflow, get_endpoint_schema, set_api_auth). All use standard CRUD-style verbs (get, list, search, call, set, register, delete).

    Tool Count: 5/5

    8 tools is well-scoped for an API management server covering the full lifecycle: registration, listing, deletion, authentication, discovery (search), inspection (schema), planning (workflow), and execution. Each tool earns its place without redundancy.

    Completeness: 4/5

    Covers the core API lifecycle well (register, list, delete, auth, search, call) with helpful additions like workflow planning. Minor gaps: no get_api for specific API details (only list_apis), no update_api for refreshing specs without full deletion, and no way to remove auth without deleting the entire API.

  • Average 3.7/5 across 8 of 8 tools scored. Lowest: 3.1/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 8 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • This server has been verified by its author.

Tool Scores

  • list_apis

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It confirms a read operation via 'List' but fails to specify what 'basic information' includes, whether pagination is supported, or any rate limiting concerns.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence of 9 words is appropriately concise and front-loaded with the action verb. However, given the lack of annotations and output schema, the extreme brevity leaves significant gaps that additional context could have filled.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While adequate for a zero-parameter tool, the description lacks sufficient detail given the absence of an output schema and annotations. It fails to clarify what constitutes 'basic information' or how this differs from the more detailed data returned by 'get_endpoint_schema'.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains 0 parameters, establishing a baseline of 4. The description does not need to compensate for missing parameter documentation, though it confirms the parameter-less nature by implying an unfiltered 'list all' operation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('List') and clear resource ('registered APIs'), and specifies scope ('all' with 'basic information'). It implicitly distinguishes from 'search_endpoints' by suggesting unfiltered enumeration versus targeted search, though it doesn't explicitly clarify this distinction.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this versus siblings like 'search_endpoints' (for filtering) or 'get_endpoint_schema' (for detailed specification). No prerequisites or conditions mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • call_api

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the auth prerequisite but fails to disclose that this tool makes external network requests, may have side effects depending on the HTTP method (POST/PUT/DELETE), or describe the response format. Significant gaps remain.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: first states purpose, second states prerequisite. Efficiently front-loaded and appropriately sized for the information provided.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with 5 parameters, nested objects, no output schema, and zero annotations, the description is insufficient. It omits what the tool returns, error handling behavior, and whether operations are potentially destructive (mutations are possible when endpoint_id resolves to a POST or PUT endpoint).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage (all 5 parameters documented). The description adds no parameter-specific semantics, but baseline 3 is appropriate since the schema already comprehensively documents api_id, endpoint_id, path_params, query_params, and body.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'Execute' and resource 'API call', clearly distinguishing this runtime/execution tool from sibling management tools like register_api, delete_api, and list_apis. However, it lacks explicit scope clarification (e.g., 'HTTP request to configured endpoints') that would make it a 5.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides one critical prerequisite ('Make sure authentication is configured first'), implying it should be used after set_api_auth. However, it lacks explicit 'when to use vs when not to use' guidance or alternatives (e.g., 'use get_endpoint_schema to inspect before calling').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
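The review above names call_api's five documented parameters. As a sketch only — the parameter names come from the review, but the API, endpoint identifier, and values below are invented for illustration — an arguments payload might look like:

```python
# Hypothetical call_api arguments, using the five parameters the review
# lists (api_id, endpoint_id, path_params, query_params, body).
# The API name, endpoint identifier, and values are invented.
call_api_args = {
    "api_id": "petstore",                 # which registered API to call
    "endpoint_id": "GET /pets/{petId}",   # which endpoint within it
    "path_params": {"petId": "42"},       # substituted into the URL path
    "query_params": {"verbose": "true"},  # appended to the query string
    "body": None,                         # no request body for a GET
}
```

Note that, per the review's prerequisite, set_api_auth would need to be called for "petstore" before this payload could succeed.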

  • get_endpoint_schema

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a safe/idempotent read operation, what format the schema is returned in, or whether there are rate limits or caching considerations. It only repeats the functional purpose.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with zero waste. It is front-loaded with the action ('Get the full schema') followed immediately by usage context, making it easy for an agent to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (2 parameters, no nested objects) and 100% schema coverage, the description is minimally adequate. However, since no output schema exists, the description could have been more specific about the return structure beyond 'detailed parameter and response information.'

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage ('The API identifier' and 'The endpoint identifier'), establishing a baseline of 3. The description adds minimal parameter semantics beyond referencing 'a specific endpoint,' relying entirely on the schema to document the parameter purposes and format.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'the full schema for a specific endpoint' with specific verb (get) and resource (schema). It implies distinction from sibling 'search_endpoints' by emphasizing 'full schema' and 'detailed' information versus listing, though it doesn't explicitly name the alternative.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The second sentence provides implied usage guidance ('Use this to get detailed parameter and response information'), suggesting when to invoke it. However, it lacks explicit 'when not to use' guidance or comparison to siblings like 'call_api' or 'search_endpoints' that might be confused for this use case.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_endpoints

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden. It successfully explains the matching logic (semantic similarity) but omits operational details like auth requirements, rate limits, read-only status, error behaviors, or the structure/format of returned endpoint objects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. The first establishes the action and input method; the second establishes the return value. Information is front-loaded and every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simple 3-parameter structure with complete schema documentation and no output schema, the description is sufficiently complete. It conceptually explains the return value (semantically similar endpoints), though it could benefit from describing the output structure or error scenarios.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'natural language' which maps to the query parameter, but does not augment the schema with additional guidance like example queries, format constraints, or the relationship between api_id filtering and search scope.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action (search), resource (API endpoints), and method (natural language/semantic similarity). It implicitly distinguishes from list_apis via the 'natural language' and 'semantically similar' qualifiers, but does not explicitly reference sibling tools to clarify when to use each.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage context through 'natural language' (suggesting use when exact endpoint names are unknown), but provides no explicit when-to-use guidance, exclusions, or named alternatives like list_apis. The agent must infer when semantic search is preferred over listing.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • set_api_auth

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses the critical security behavior that env_var credentials are 'never stored on disk' and resolved at request time. However, it omits other important behavioral traits for a configuration tool: whether this overwrites existing auth, validation behavior, and idempotency semantics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of three tightly constructed sentences with zero waste: sentence 1 establishes purpose, sentence 2 enumerates capabilities, and sentence 3 provides critical security guidance. Information is front-loaded and every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the 100% schema coverage and lack of output schema, the description adequately covers the primary function. However, for a configuration/mutation tool with zero annotations indicating side effects or safety, the description should disclose overwrite behavior and validation semantics to be considered complete. As is, it leaves operational questions unanswered.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% coverage (baseline 3), the description adds meaningful semantic context beyond the schema. Specifically, it clarifies the security implication of the env_var parameter ('never stored on disk'), which is not explicitly stated in the schema's technical description of the parameter, and maps the auth types to their transport mechanisms (header vs query param).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Configure authentication') and resource ('for an API'), and enumerates supported auth types (API key header/query, bearer). It implicitly distinguishes from siblings like call_api or register_api by focusing specifically on auth configuration, though it could explicitly clarify this is a prerequisite for call_api.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides internal guidance on parameter selection ('Use env_var to reference a secret'), helping users choose between credential and env_var parameters. However, it lacks explicit workflow guidance regarding when to use this tool versus siblings (e.g., 'use this before call_api') or prerequisites for invocation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
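The review names a credential parameter and an env_var parameter, plus three auth styles (API key in a header, API key in a query param, bearer token). As a hypothetical sketch — only credential and env_var come from the review; the other field names and values are invented — the two argument shapes might look like:

```python
# Two hypothetical set_api_auth argument shapes. Only 'credential' and
# 'env_var' are parameter names documented in the review; 'api_id' and
# 'auth_type' (and all values) are invented for illustration.

# Preferred: reference a secret by environment variable name, so the
# credential itself is never stored on disk.
auth_via_env = {
    "api_id": "petstore",
    "auth_type": "bearer",
    "env_var": "PETSTORE_TOKEN",  # resolved at request time
}

# Alternative: pass the credential inline (less safe).
auth_inline = {
    "api_id": "petstore",
    "auth_type": "api_key_header",
    "credential": "sk-example-not-a-real-key",
}
```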

  • delete_api

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and successfully discloses the destructive cascade (endpoints, embeddings, credentials). However, it omits critical mutation context such as irreversibility, permission requirements, or confirmation behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, front-loaded sentence with zero waste. Every clause earns its place by specifying the action and enumerating the cascading deletion scope without redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter destructive operation with no output schema, the description adequately covers the deletion scope. It could be improved by mentioning return value indicators or confirmation requirements, but it satisfies the essential disclosure needs for this tool type.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage for the single 'api_id' parameter. The description does not add semantic detail beyond the schema's 'The API identifier to delete', meeting the baseline expectation when schema coverage is high.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('Delete') and resource ('registered API'), and explicitly distinguishes this from sibling tools by detailing the comprehensive scope of deletion ('all its data including endpoints, embeddings, dependency graph, and authentication credentials').

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies the tool is for permanent removal by enumerating what gets destroyed, but lacks explicit when-to-use guidance, prerequisites, or named alternatives (e.g., when to use this vs. simply unregistering or disabling).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • register_api

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and successfully discloses internal side effects: parsing the spec, building a dependency graph, and creating searchable embeddings. However, it lacks explicit safety information (idempotency, error handling on duplicate api_id, or execution time expectations).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences with zero waste: the first establishes the primary action and method, while the second explains valuable internal processing mechanics. Information is front-loaded and appropriately sized.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no output schema and no annotations, the description adequately covers the tool's purpose and internal mechanics (parsing, embeddings). It could be improved by clarifying idempotency behavior or return value structure, but the core functionality is well-documented given the 100% schema coverage.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'OpenAPI specification' which maps to spec_url and implies api_id through 'new API', but does not add syntax details, format constraints, or examples beyond the schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('Register') and resource ('API'), and clearly distinguishes this from sibling tools like delete_api, call_api, or list_apis by specifying this is for 'new' API ingestion via OpenAPI specification.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The word 'new' implies this is for initial registration rather than updating existing APIs, but there is no explicit 'when to use' guidance, workflow context, or named alternatives to guide the agent in selecting this over siblings like set_api_auth.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_workflow

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It effectively discloses key behavioral traits: returns 'search results expanded with their dependencies' and enables planning of 'API calls in the right order.' Missing minor details like rate limits or specific error conditions, but captures the essential read-only, planning-oriented nature.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste. Front-loaded with purpose ('Get relevant endpoints...'), followed by return value description, and closes with explicit workflow guidance. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite no output schema and no annotations, the description adequately explains what the tool returns ('search results expanded with their dependencies') and the next step in the workflow. Sufficient for a discovery/planning tool, though explicit mention of output structure would improve this to a 5.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage with clear examples (e.g., 'create a user and place an order'). The description mentions 'accomplishing a task' which conceptually maps to the 'query' parameter, but does not add syntax details or formatting rules beyond what the schema already provides. Baseline score appropriate for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action ('Get relevant endpoints') with key features ('dependency resolution and full schemas') and scope ('accomplishing a task'). Clearly distinguishes from sibling 'call_api' by emphasizing planning versus execution, and from 'search_endpoints' by highlighting dependency resolution.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly directs the workflow: 'After reviewing the results, use call_api to execute each step.' This creates clear separation between when to use this tool (planning/discovery) versus the sibling execution tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[jitapi MCP server card badge]

Copy the badge snippet to your README.md.

Score Badge

[jitapi MCP server score badge]

Copy the badge snippet to your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) of 1–5, a weighted average across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% of the mean TDQS plus 40% of the minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
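The formula above can be sketched in a few lines of Python. The dimension weights, the 60/40 mean-minimum blend, the 70/30 component split, and the tier cutoffs are taken from the description above; the sample scores in the usage note are illustrative, not this server's actual inputs.

```python
# Sketch of the documented quality-score formula. Weights and tier
# cutoffs come from the scoring description; inputs are illustrative.

DIM_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(dims: dict) -> float:
    """Weighted 1-5 definition-quality score for a single tool."""
    return sum(DIM_WEIGHTS[k] * v for k, v in dims.items())

def overall_score(tool_dim_scores: list, coherence: float) -> float:
    """Blend Tool Definition Quality (70%) with Server Coherence (30%)."""
    tdqs = [tool_tdqs(d) for d in tool_dim_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

As a worked check against the numbers reported above: a tool mean of 3.7 with a minimum of 3.1 yields a definition quality of 0.6 × 3.7 + 0.4 × 3.1 = 3.46 under the stated formula.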


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nk3750/jitapi'
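The same record can be fetched from Python using only the standard library. A minimal sketch — it assumes the endpoint returns JSON, which the curl example does not spell out:

```python
import json
from urllib.request import Request, urlopen

# Same endpoint as the curl example above.
url = "https://glama.ai/api/mcp/v1/servers/nk3750/jitapi"

def fetch_server_info(url: str) -> dict:
    """Fetch the server record and decode it as JSON (assumed format)."""
    req = Request(url, headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```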

If you have feedback or need assistance with the MCP directory API, please join our Discord server.