
Server Details

Apideck Unified API MCP — 330 tools across 200+ SaaS connectors (accounting, CRM, HRIS, ATS).

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: apideck-libraries/mcp
GitHub Stars: 2
Server Listing: Apideck MCP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 4/5

The tools are mostly distinct: describe_tool_input and execute_tool work together, while list_scopes and list_tools serve separate purposes. list_scopes and list_tools overlap slightly, since scopes filter tools, but their descriptions clarify their roles.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (describe_tool_input, execute_tool, list_scopes, list_tools) with snake_case, making the naming predictable and clear.

Tool Count: 4/5

With 4 tools, the count is reasonable for a meta-management server. It could include a tool that returns server info, but the current scope is well-covered.

Completeness: 3/5

The server covers tool discovery and execution, but lacks tools for managing scopes or providing metadata beyond listing. The surface is functional but minimal, with no obvious dead ends for its purpose.

Available Tools

4 tools
describe_tool_input: A

Return the JSON-Schema input contract for a tool by name.

Parameters (JSON Schema): name (required)
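As a sketch (not taken from Apideck's documentation), a client reaches this tool through the standard MCP tools/call envelope. The argument key "name" follows the parameter table above; the target tool name is illustrative:

```python
import json

# Hypothetical MCP "tools/call" request targeting describe_tool_input.
# The envelope is the standard JSON-RPC 2.0 shape used by MCP clients;
# the argument key "name" follows the listing's parameter table.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "describe_tool_input",
        "arguments": {"name": "execute_tool"},
    },
}

print(json.dumps(request, indent=2))
```

The response would carry the JSON-Schema input contract for execute_tool, which an agent can inspect before invoking it.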
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, making the safe read behavior clear. The description adds value by stating it returns input schemas, but does not elaborate on format or pagination beyond what the schema implies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences and efficient. The first sentence states the function, the second provides usage guidance. No wasted words, though the first sentence could say 'input schema' instead of 'input' for precision.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one parameter, full schema coverage, and strong annotations, the description is complete. It covers purpose, usage guidance, and integrates with the sibling tool context. No output schema is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameter 'tool_names' is well-documented in the schema. The description adds no additional meaning beyond 'get input schema for one or more tools' which mirrors the schema's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool gets the input schema for one or more tools. It distinguishes its purpose from sibling tools like execute_tool, list_tools, and list_scopes by focusing on input schema retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly recommends calling this tool first to understand how to use execute_tool, providing clear context for when to use it and the benefit for the primary sibling tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

execute_tool: A

Invoke a tool by name. input is forwarded raw to the tool handler.

Parameters (JSON Schema): name (required), input (required)
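Since "input" is forwarded raw to the named tool's handler, a call might look like the following sketch. The inner tool name and its fields are placeholders, not actual Apideck tool names:

```python
import json

# Hypothetical MCP "tools/call" request for execute_tool. Per the
# description above, "input" is forwarded raw to the named tool's
# handler. "some_connector_tool" and its fields are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "execute_tool",
        "arguments": {
            "name": "some_connector_tool",
            "input": {"limit": 10},  # passed through unmodified
        },
    },
}

print(json.dumps(request, indent=2))
```

Calling describe_tool_input first, as the review recommends, tells an agent what shape "input" must take for the target tool.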
Behavior: 4/5

Annotations already indicate destructiveHint=true, readOnlyHint=false, openWorldHint=true (may accept unknown parameters), and idempotentHint=false. The description does not contradict the annotations and adds context that the tool executes other tools, but it does not elaborate on the consequences of destructive behavior or authorization needs. This is a minor gap given that the annotations already cover the safety profile.

Conciseness: 5/5

Two sentences, both front-loaded: first states core purpose, second adds important usage hint. No wasted words.

Completeness: 4/5

The tool has no output schema and high complexity due to dynamic tool execution and a nested input object. The description is brief but covers the essential usage hint. It lacks details about error handling or output format, but the openWorldHint and dynamic nature make it hard to specify further.

Parameters: 3/5

Schema description coverage is 100% with descriptions for both parameters (tool_name as string, input as JSON object). Description adds no additional meaning beyond the schema, so baseline 3 is appropriate.

Purpose: 5/5

The description clearly states it executes a tool by name with input parameters, and distinguishes from siblings by recommending describe_tool_input for first-time use. The verb 'execute' plus 'tool by name' specifies the resource and action precisely.

Usage Guidelines: 5/5

Explicitly recommends calling describe_tool_input before first use to understand the input schema, providing clear when-to-use guidance. It also distinguishes from sibling describe_tool_input by advising to use it before this tool for unfamiliar tools.

list_scopes: A

Return the list of allowed MCP scopes.

Parameters (JSON Schema): none
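With no parameters, the call reduces to an empty arguments object. This is a generic sketch of the MCP tools/call envelope, not server-specific documentation:

```python
import json

# Hypothetical no-argument MCP "tools/call" request for list_scopes.
# The tool takes no parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "list_scopes", "arguments": {}},
}

print(json.dumps(request))
```

The returned scope names can then be fed to list_tools as search terms, per the description above.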

Behavior: 3/5

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so the description only needs to add behavioral context. It explains that scopes are categories and their purpose, but does not describe output format or any side effects. With rich annotations, a 3 is appropriate.

Conciseness: 5/5

Description is extremely concise with three short sentences, each adding new information. No wasted words.

Completeness: 4/5

The tool is simple (no parameters, no output schema), and the description covers what scopes are and how they can be used. It could mention that the output is a list of scope names, but the description is largely complete given the tool's simplicity.

Parameters: 4/5

The tool has no parameters, so the description does not need to explain parameter semantics. The description adds value by explaining the purpose of the output (scopes as categories and search terms), which is not in the input schema.

Purpose: 5/5

Description clearly states the tool lists scopes, defines what scopes are (categories grouping related tools), and explains their use (as search terms for filtering tools). This is a specific verb+resource with context distinguishing it from siblings like list_tools.

Usage Guidelines: 4/5

The description implies use when you need to discover available categories for filtering, but does not explicitly state when not to use or compare to sibling tools. However, the context (scopes as search terms) gives clear guidance.

list_tools: A

Discover Apideck tools. Call with no args for domain index; filter with domain/search_terms/scope.

Parameters (JSON Schema): scope (optional), domain (optional), search_terms (optional)
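The no-argument and filtered forms described above can be sketched as two tools/call payloads. The filter values are illustrative guesses, not documented domain or scope names, and the list type for search_terms is an assumption based on the OR-matching behavior the review mentions:

```python
import json

# Hypothetical list_tools calls. With no arguments the server returns
# a domain index (per the description); with filters it narrows the
# tool list. "accounting" and "invoice" are illustrative values only.
index_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {"name": "list_tools", "arguments": {}},
}

filtered_request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "list_tools",
        "arguments": {
            "domain": "accounting",       # illustrative domain name
            "search_terms": ["invoice"],  # assumed list; matched case-insensitively, OR logic
        },
    },
}

print(json.dumps(filtered_request, indent=2))
```

An agent would typically call the index form first, then filter once it knows which domains exist.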
Behavior: 4/5

Annotations already declare readOnlyHint, idempotentHint, and non-destructive behavior, so the description's burden is lower. The description adds context about filtering behavior (case-insensitive substring matching, OR logic) which is valuable beyond annotations.

Conciseness: 5/5

Two sentences, no waste. The purpose and filtering details are front-loaded. Every word earns its place.

Completeness: 5/5

Given the simple optional parameters, no output schema, and rich annotations, the description is complete. It covers what the tool does and how filtering works, leaving no ambiguity.

Parameters: 4/5

Schema coverage is 100%, providing a solid baseline. The description adds no parameter-level details beyond the schema, but the schema itself is thorough. Slight deduction because the description could reiterate the OR logic, though it is not strictly necessary.

Purpose: 5/5

The description clearly states the tool lists available tools and optionally filters by search terms. The verb 'list' and resource 'tools' are specific, and the filtering capability distinguishes it from siblings like describe_tool_input or execute_tool.

Usage Guidelines: 4/5

The description explains when to use the optional filter parameters and implies it's for discovery. However, it does not explicitly say when not to use it or contrast with alternatives like describe_tool_input for specific tool details.
