Glama

Server Details

Cloudflare Workers MCP server: api-mock-server

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: lazymac2x/api-mock-server-api
GitHub Stars: 0

Tool Descriptions: B

Average 3.3/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: create, delete, get details, get logs, list, and update. No overlap between tool functionalities.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case, such as create_mock, delete_mock, list_mocks, etc.

Tool Count: 5/5

With 6 tools covering full lifecycle management of mock servers (create, read, update, delete, list, logs), the count is well-scoped and each tool serves a necessary function.

Completeness: 5/5

The tool set covers the complete CRUD operations plus logging, leaving no obvious gaps for basic mock server management.

Available Tools

6 tools
create_mock: A

Creates a mock API server from an OpenAPI/Swagger spec (JSON). Returns an immediately usable endpoint URL.

Parameters (JSON Schema)
- name (required): Mock server name
- spec (required): OpenAPI 3.0 or Swagger 2.0 spec JSON object
- description (optional): Mock server description
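As a sketch of what a create_mock call carries on the wire, the JSON-RPC tools/call payload below pairs the three parameters above with a minimal OpenAPI 3.0 document. The server name, spec contents, and request id are illustrative; the exact client API depends on your MCP client.

```python
import json

# Minimal OpenAPI 3.0 spec with one GET /users endpoint (illustrative).
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Users API", "version": "1.0.0"},
    "paths": {
        "/users": {
            "get": {"responses": {"200": {"description": "OK"}}}
        }
    },
}

# Hypothetical MCP tools/call request invoking create_mock.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_mock",
        "arguments": {
            "name": "users-api",         # required: mock server name
            "spec": spec,                # required: OpenAPI/Swagger JSON object
            "description": "Users mock", # optional
        },
    },
}

print(json.dumps(request, indent=2))
```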
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears full responsibility for behavioral disclosure. It only states creation and return of URL, but omits important details such as whether existing mocks are overwritten, required permissions, rate limits, or side effects on other resources. For a mutation tool, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, directly stating the core function and output. No unnecessary words or repetition. Front-loaded with the primary action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (3 params, no output schema, no annotations), the description adequately covers the core purpose and return value. However, it could mention the optional description parameter or any constraints on the spec object to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% parameter description coverage, so the description adds no additional meaning beyond what is already in the schema. Baseline is 3, and the description does not elaborate on parameter formats or constraints beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (create) and the resource (mock API server from OpenAPI/Swagger spec), and mentions the output (immediately usable endpoint URL). It effectively distinguishes from sibling tools like delete_mock or list_mocks by specifying creation functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (when wanting to create a mock server from a spec), but does not explicitly mention when not to use it or provide alternatives. The context of sibling tools is clear enough, but no direct guidance on prerequisites or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_mock: C

Deletes a mock server.

Parameters (JSON Schema)
- mock_id (required): ID of the mock server to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It says 'deletes' which implies destruction, but it does not mention whether the operation is irreversible, requires permissions, or has cascading effects (e.g., on associated endpoints). This is insufficient for a destructive tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (one sentence) but minimal. It could be improved by adding brief usage context or behavior hints without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete tool with one parameter and no output schema, the description is adequate but lacks context on error handling, idempotency, or preconditions. It meets the minimum viable level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the parameter description is adequate. The tool description does not add meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (delete) and resource (mock server), which is specific and distinguishes from sibling tools like create_mock or list_mocks. However, it's a single sentence without additional context that could enhance clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives (e.g., when a mock is no longer needed) or prerequisites (e.g., mock must exist). The description does not help the agent decide to invoke this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_mock_endpoint: B

Returns detailed information and the endpoint list for a specific mock server.

Parameters (JSON Schema)
- mock_id (required): Mock server ID
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It indicates a read operation implicitly but does not state whether it is read-only, requires permissions, or any side effects. It also does not describe the structure of the returned data beyond 'detailed information and endpoint list'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise and to the point. No unnecessary words or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should provide more detail on the return format. It states 'detailed information and endpoint list' but lacks specifics (e.g., what fields are included, pagination). For a simple tool with one input, it is minimally adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter (mock_id), so the schema already explains its purpose. The description does not add additional meaning or constraints beyond 'mock server ID'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns detailed information and endpoint list for a specific mock server, which distinguishes it from siblings like list_mocks (lists all) and get_mock_logs (retrieves logs).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as list_mocks or get_mock_logs. The description lacks context on prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_mock_logs: C

Retrieves request logs received by a mock server.

Parameters (JSON Schema)
- mock_id (required): Mock server ID
- limit (optional): Maximum number of logs (default: 50, max: 100)
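The limit parameter's documented default (50) and cap (100) suggest client-side handling along the lines of the following sketch; the helper name is hypothetical and not part of the server's API.

```python
def build_logs_arguments(mock_id: str, limit=None) -> dict:
    """Build arguments for a get_mock_logs call, applying the
    documented default (50) and maximum (100) for `limit`."""
    if limit is None:
        limit = 50  # documented default
    limit = max(1, min(limit, 100))  # documented maximum is 100
    return {"mock_id": mock_id, "limit": limit}

print(build_logs_arguments("mock-123"))       # limit falls back to 50
print(build_logs_arguments("mock-123", 500))  # limit clamped to 100
```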
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must disclose behavioral traits. It only states 'retrieves logs' but does not mention read-only nature, pagination, or what happens when limit is exceeded. The agent cannot infer safety or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no filler or redundancy. Front-loads the core function. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given absence of output schema and annotations, the description is too minimal. It does not specify what the logs contain, ordering, or error handling. For a simple tool, it lacks completeness to guide an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions, so the baseline is 3. The description adds no extra parameter information beyond the schema's descriptions of mock_id and limit with default/max.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (retrieve) and resource (request logs for mock server), distinguishing it from sibling tools that create, delete, or update mocks. However, it does not explicitly differentiate from potential log-related siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. While siblings are about mock management, the description does not specify usage context or prerequisites (e.g., requires existing mock_id).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_mocks: A

Returns the list of all created mock servers.

Parameters (JSON Schema)
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; the description does not disclose any behavioral traits such as authentication needs, pagination, or read-only nature. For a list operation, it's minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no fluff, but lacks structure like a title. Appropriate for a zero-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity, the description is adequate but does not specify output format or ordering. Could be improved with minor details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so the baseline is 4. The description adds no parameter information because none is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb and resource: 'returns the list of all created mock servers'. It distinguishes itself from siblings like create_mock, delete_mock, etc., which perform other operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings; it's implied by the name and description, but no exclusions or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_mock_response: A

Modifies the response of a specific endpoint or adds a new endpoint.

Parameters (JSON Schema)
- mock_id (required): Mock server ID
- path (required): API path (e.g., /users/{id})
- method (required): HTTP method
- status (optional): HTTP status code (default: 200)
- headers (optional): Additional response headers
- response (optional): Response JSON object to return
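Putting the six parameters together, an update_mock_response invocation that overrides GET /users/{id} with a 404 might carry arguments like the sketch below; all values (mock_id, header, body) are illustrative, not taken from the server's documentation.

```python
import json

# Hypothetical arguments for update_mock_response: override the response
# for GET /users/{id} on an existing mock server with a 404 payload.
arguments = {
    "mock_id": "mock-123",                    # required: existing mock server ID
    "path": "/users/{id}",                    # required: API path
    "method": "GET",                          # required: HTTP method
    "status": 404,                            # optional, defaults to 200
    "headers": {"X-Mock": "true"},            # optional extra response headers
    "response": {"error": "user not found"},  # optional response body
}

print(json.dumps(arguments, indent=2))
```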
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It only says 'modify or add' but does not explain idempotency, overwrite behavior, or what happens when the endpoint already exists. The tool name suggests update, but description allows addition, which is ambiguous.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, short sentence that directly states the tool's purpose. It is front-loaded and efficient, though it could be slightly more structured with separate clauses for update vs. add.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks important context such as return values, prerequisite of an existing mock_id, and consequences of updating versus adding. With no output schema, the tool behavior is underspecified for an agent to confidently invoke.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds value by clarifying that the tool can also add new endpoints, which is not obvious from the schema. This helps agents understand the dual purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool modifies responses or adds new endpoints for a mock server. This distinguishes it from siblings like create_mock (creates a mock server) and delete_mock (deletes a mock server).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies it is for updating or adding endpoints, but does not mention prerequisites (e.g., mock_id must exist) or conditions for using create_mock instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

