
Server Details

Hire specialists by the hour — search, schedule, and pay via MCP protocol.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Ceki-me/mcp-server
GitHub Stars: 0
Server Listing: Ceki

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.7/5 across 19 of 19 tools scored. Lowest: 2.9/5.

Server Coherence: A

Disambiguation: 5/5

Each tool targets a distinct resource or action (e.g., schedule CRUD, wallet info, profile, search, etc.) with no overlapping purposes. Descriptions clearly differentiate singular vs. plural endpoints.

Naming Consistency: 5/5

All tool names use a consistent lowercase hyphen-separated verb-noun pattern (e.g., create-schedule, get-wallet). No mixed conventions or inconsistent verb styles.

Tool Count: 5/5

With 19 tools covering agent registration, schedule management, wallet operations, profile updates, search, and authentication, the count is well-scoped for the server's purpose. Each tool earns its place.

Completeness: 4/5

Core CRUD and lifecycle operations for schedules are present, along with wallet, profile, search, and email verification. Minor gaps include no tool to update wallet details beyond currency selection, but overall coverage is solid.

Available Tools

19 tools
create-schedule (Create Schedule): B

[Auth Required] Create a new agent schedule (paid action: schedule_create). Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema):
- end (optional): End date (YYYY-MM-DD or ISO 8601)
- start (required): Start date (YYYY-MM-DD or ISO 8601)
- kal_id (required): Calendar ID to attach the schedule to
- private (optional): Whether the schedule is private
- settings (required): Schedule settings (events, days, hours, contacts, skills, links, etc.)
- timezone (optional): Timezone (e.g. Europe/Moscow, UTC)
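As a sketch of how an agent might invoke this tool: MCP's Streamable HTTP transport carries JSON-RPC 2.0, so a tools/call request for create-schedule could be assembled as below. Every value here (the key, the calendar ID, and especially the contents of the undocumented settings object) is an illustrative assumption, not the server's documented contract.

```python
import json

# Placeholder key; the real one comes from register-agent.
API_KEY = "agent_key_example"

# The description accepts either auth style:
headers = {
    "X-Agent-Key": API_KEY,                    # option 1
    # "Authorization": f"Bearer {API_KEY}",    # option 2
    "Content-Type": "application/json",
}

# Hypothetical JSON-RPC 2.0 tools/call body for create-schedule.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create-schedule",
        "arguments": {
            "kal_id": "cal_123",       # required: calendar ID
            "start": "2025-06-01",     # required: YYYY-MM-DD or ISO 8601
            "end": "2025-06-30",       # optional
            "timezone": "UTC",         # optional
            "private": False,          # optional
            # required, but its internal shape is undocumented; guessed here:
            "settings": {"hours": {"mon": ["09:00-17:00"]}},
        },
    },
}

body = json.dumps(payload)
```

POSTing this body with either header to the server's MCP URL would be the next step; the response shape is not published.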
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose side effects, idempotency, or failure modes. It only mentions auth and cost, omitting behavior like whether duplicate requests create multiple schedules or what happens on validation errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief, front-loading the essential context (auth requirement, cost, and how to pass the API key) without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description does not explain the return value (no output schema), nor does it elaborate on the complex 'settings' object parameter. Given the absence of annotations and the nested parameter, the description is insufficient for full agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 6 parameters have descriptions in the input schema (100% coverage). The description adds no extra parameter information, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Create a new agent schedule', which is a specific verb+resource. The sibling tools include delete, update, get, making the purpose distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions '[Auth Required]' and 'paid action', indicating prerequisites, but does not explicitly state when to use this tool versus alternatives like update-schedule or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete-schedule (Delete Schedule): B

[Auth Required] Delete an agent schedule. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema):
- id (required): Schedule ID

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description must disclose behavior. Mentions auth but does not state that deletion is permanent or irreversible. Lacks clarity on consequences.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence, front-loaded with essential auth info. Concise but could be structured slightly better.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one required param and no output schema, the description is mostly complete for basic use. Missing behavioral details like permanence and confirmation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage for the single parameter with a basic description ('Schedule ID'). Description adds no extra meaning. Baseline score applied.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and the resource ('agent schedule'), matching the title. It distinguishes from siblings like create-schedule and update-schedule.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides authentication requirements ('Auth Required' and header info) which aids usage. Does not explicitly state when to use versus alternatives, but the delete purpose is implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-pricing (Get Pricing): A

[Public] Get current pricing for all agent actions, subscriptions, and supported currencies. No authentication required.

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses public access and no auth needed. No annotations provided, so description carries full burden. Lacks details on rate limits or response format, but covers key behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: two sentences conveying purpose and access requirements. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description sufficiently explains what is retrieved (agent actions, subscriptions, currencies) and access level. Could mention response format, but adequate for a simple public endpoint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist; schema coverage is 100%. Description adds no additional param info, but none is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves current pricing for agent actions, subscriptions, and currencies. Includes '[Public]' and 'No authentication required', distinguishing it from other tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly notes 'No authentication required', guiding usage. Does not explicitly mention when not to use or alternatives, but the context of public pricing makes it clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-profile (Get Agent Profile): A

[Auth Required] Get the authenticated agent's profile information. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that authentication is required and how to pass the API key, but does not cover rate limits, error behavior, or response structure. Adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states the purpose, second the authentication requirement. No superfluous words; front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with no parameters and no output schema, the description covers the core action and authentication. Missing details like response fields or error scenarios, but sufficient for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist in the input schema, so baseline 4 applies. The description adds value by specifying authentication headers that are not in the schema, providing implicit parameter guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the authenticated agent's profile information, which is a specific verb+resource. It distinguishes itself from sibling tools like create-schedule or update-profile by being a read-only retrieval of agent identity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like get-user or update-profile. The description does not provide usage context or conditions beyond the authentication requirement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-schedule (Get Schedule): A

[Auth Required] Get a specific agent schedule by ID. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema):
- id (required): Schedule ID

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must carry the behavioral burden. It correctly notes authentication, but fails to disclose error handling (e.g., if ID not found), read-only nature, or any side effects. For a simple read operation, this is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with necessary authentication info, no wasted words. It is front-loaded with the core purpose and immediately useful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple with one parameter and no output schema. The description covers authentication and identification, but it would benefit from mentioning the return type or error behavior. Still, for a get-by-ID tool, it is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with parameter 'id' having a description and constraints. The description adds no extra semantics beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get a specific agent schedule by ID', using a specific verb and resource. This distinguishes it from sibling 'get-schedules' (which lists all) by specifying the unique identifier.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes authentication requirements ('Auth Required') and the method to pass the API key, which is crucial context. However, it does not explicitly contrast with alternatives like 'get-schedules' or 'create-schedule', though the 'by ID' phrasing implies when to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-schedules (Get Schedules): A

[Auth Required] Get the authenticated agent's schedules (paginated). Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema):
- page (optional): Page number
- perPage (optional): Results per page
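Because the listing is paginated, a client typically loops until an empty page comes back. The sketch below assumes a generic call_tool(name, arguments) helper and a response field named "schedules"; neither is documented by the server, so both are illustrative.

```python
def iter_schedules(call_tool, per_page=50):
    """Yield schedules across pages until an empty page is returned.

    call_tool(name, arguments) stands in for an MCP client's tools/call;
    the "schedules" response field is an assumption, since the server
    publishes no output schema.
    """
    page = 1
    while True:
        result = call_tool("get-schedules", {"page": page, "perPage": per_page})
        items = result.get("schedules", [])
        if not items:
            return
        yield from items
        page += 1

# Demo with a fake client that serves two pages, then an empty one:
_fake_pages = {1: [{"id": "s1"}, {"id": "s2"}], 2: [{"id": "s3"}]}

def fake_call(name, args):
    return {"schedules": _fake_pages.get(args["page"], [])}

all_ids = [s["id"] for s in iter_schedules(fake_call, per_page=2)]
```

Stopping on an empty page rather than a total count keeps the loop correct even if the server's pagination metadata (if any) is unknown.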
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full responsibility for behavioral disclosure. It correctly indicates this is a read operation ('Get') and specifies pagination and auth requirements. It does not mention error handling or rate limits, but for a simple list tool, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the essential information: auth requirement, action, resource, and pagination. No unnecessary words. It is front-loaded and earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description covers auth and pagination, it lacks details about the return format (e.g., structure of a schedule object, pagination metadata). Since there is no output schema, the agent may need more context to use the response correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already documents both parameters (page, perPage) with descriptions, so schema coverage is 100%. The description adds no additional meaning or constraints beyond what the schema provides, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'Get' and the resource 'the authenticated agent's schedules (paginated)'. It distinguishes from sibling 'get-schedule' (singular) by indicating plural and pagination, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions authentication requirements and how to pass the API key. It implies usage for listing all schedules of the authenticated agent. However, it does not explicitly differentiate from 'get-schedule' or state when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-user (Get User Profile): A

[Auth Required + Active] Get user profile by ID (paid action: api_user_view after free limit). Requires active account (deposit verified). Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema):
- id (required): User ID

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With empty annotations, the description fully discloses auth methods, active account requirement, and billing behavior, providing comprehensive behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence containing all essential information without redundancy, achieving maximum conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While auth and billing details are covered, the description omits what the response contains (e.g., profile fields), which would be helpful given no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of the single parameter with a description. The tool description adds no further meaning beyond 'by ID,' so baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get user profile by ID,' specifying a verb and resource. However, it does not differentiate from sibling tool 'get-profile,' which may cause confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context such as auth requirements and paid action limits, but lacks explicit guidance on when to use this tool versus alternatives (e.g., get-profile).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-wallet (Get Wallet): A

[Auth Required] Get the authenticated agent's wallet information. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that authentication is required and implies a read operation (get). It does not mention destructive or side effects, but for a simple read, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the most critical info (auth requirement). No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description is reasonably complete: it explains the action and authentication. It lacks details on response format or potential errors, but for a simple get operation, it is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so the description does not need to add param semantics. Schema coverage is 100%, meeting the baseline criteria.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'wallet information' for the authenticated agent. It distinguishes from sibling tools like get-wallet-transactions and get-wallet-usage by focusing on the base wallet info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Auth required' and specifies how to pass the API key via header or bearer token. However, it does not differentiate when to use this tool versus sibling wallet tools, though the purpose is distinct.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-wallet-transactions (Get Wallet Transactions): A

[Auth Required] Get the authenticated agent's wallet transactions. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema):
- limit (optional): Pagination limit
- offset (optional): Pagination offset

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must disclose behavior. It mentions auth requirements and agent scope, but does not describe pagination behavior, sorting, or return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently conveys the tool's purpose and auth requirements without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity, full schema coverage, and no output schema, the description is mostly complete. It identifies the resource but could briefly note the default response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds no extra meaning beyond what the schema already provides for 'limit' and 'offset'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), the resource ('wallet transactions'), and the scope ('authenticated agent's'), making it easy to distinguish from sibling tools like 'get-wallet'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies authentication requirements and provides header format, which is helpful for usage. However, it does not compare this tool to alternatives or explain when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-wallet-usage (Get Wallet Usage): A

[Auth Required] Get the authenticated agent's API usage statistics. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses authentication requirement and header methods, but no further behavioral traits (e.g., rate limits, data freshness, response details). Unannotated, so description carries burden, yet incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no extraneous content. Auth requirement front-loaded. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, and 'API usage statistics' is vague: the tool clearly returns data, but the description gives no detail on its format or content, leaving the agent uncertain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters (0 params), baseline 4. Description does not add parameter info, which is acceptable given empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get the authenticated agent's API usage statistics', specifying verb and resource. Distinguishable from sibling tools like get-wallet and get-wallet-transactions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Includes authentication prerequisites but lacks explicit when-to-use, when-not-to-use, or alternative tools. The context is implied but not detailed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

regenerate-key (Regenerate API Key): A

[Auth Required] Regenerate the API key. Old key is immediately invalidated. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters: none
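Because the old key is invalidated the moment this tool runs, a caller should persist the new key before issuing any further authenticated request. A minimal sketch, assuming a generic call_tool helper and an "api_key" response field (both hypothetical, since the response shape is undocumented):

```python
def rotate_key(call_tool, store):
    """Regenerate the agent's API key and persist it before any further call.

    The old key dies immediately, so the new key (assumed here to arrive in
    an "api_key" field; the real field name is undocumented) must be saved
    before the next authenticated request goes out.
    """
    result = call_tool("regenerate-key", {})
    store["api_key"] = result["api_key"]   # persist first, then resume calls
    return store["api_key"]

# Demo with a fake client:
store = {"api_key": "old_key"}
new_key = rotate_key(lambda name, args: {"api_key": "new_key"}, store)
```

Writing to durable storage (rather than an in-memory dict, as in this demo) matters here: if the process crashes between regeneration and persistence, the agent is locked out.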

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses that old key is immediately invalidated, a critical behavioral trait. Does not detail return value or further side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with auth requirement, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers auth method and invalidation. Lacks mention of what is returned (e.g., new key), but for a simple regeneration tool it is mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters, so the description need not add parameter info. Baseline 4 applies.

Purpose: 5/5

Clearly states it regenerates the API key. Distinct from all sibling tools, which deal with schedules, pricing, etc., so purpose is unambiguous.

Usage Guidelines: 4/5

Specifies auth requirement and how to pass the key (via header or Bearer). Doesn't explicitly contrast with alternatives, but no other tool does this, so context is sufficient.

register-agent: Register Agent (Grade A)

[Public] Register a new agent. Returns API key, wallet with deposit address and required amount immediately. Email verification is optional but required to access user profiles.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sla | No | SLA | |
| name | Yes | Agent name | |
| tags | No | Tags | |
| email | Yes | Agent email | |
| links | No | List of links | |
| avatar | No | Avatar URL | |
| skills | No | List of skills | |
| license | No | License | |
| version | No | Agent version | |
| currency | No | Currency for wallet | USDT |
| languages | No | Languages | |
| description | No | Agent description | |
| limitations | No | Limitations | |
| owner_email | No | Owner email | |
| privacy_url | No | Privacy URL | |
| website_url | No | Website URL | |
| callback_url | No | Callback URL | |
| input_formats | No | Input formats | |
| last_updated_at | No | Last updated date | |
| env_requirements | No | Environment requirements | |
| example_requests | No | Example requests | |
| example_responses | No | Example responses | |
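A minimal sketch of the arguments for this call: only name and email are required, and currency falls back to its documented default of USDT when omitted. The values are illustrative, not tested against the live server.

```python
import json

# Minimal registration arguments: only the two required fields.
# currency is omitted so the documented default ("USDT") applies.
arguments = {
    "name": "example-agent",       # required
    "email": "agent@example.com",  # required
}
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "register-agent", "arguments": arguments},
}
print(json.dumps(call))
```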
Behavior: 3/5

The description discloses that the tool returns immediately and that email verification is optional but required for profiles, but does not cover side effects, rate limits, or whether the operation is reversible. Since annotations are absent, the description carries full burden but provides only partial transparency.

Conciseness: 5/5

Three short sentences with key information front-loaded and no unnecessary text. Every sentence adds value.

Completeness: 3/5

Given 22 parameters and no output schema, the description is brief. It mentions the immediate return but does not detail the return format or guide on parameter selection. The email verification note is useful but insufficient for a complex registration tool.

Parameters: 3/5

Input schema has 100% description coverage for all 22 parameters, so baseline is 3. The description adds no additional meaning beyond what the schema already provides.

Purpose: 5/5

The description clearly states the tool registers a new agent and lists immediate returns (API key, wallet). It is distinct from sibling tools like create-schedule or delete-schedule, which operate on different resources.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus alternatives, nor prerequisites or conditions. The note about email verification being optional but needed for profiles provides some context, but does not address comparison with other tools.

request-withdrawal: Request Withdrawal (Grade A)

[Auth Required + Active] Request a cryptocurrency withdrawal from your agent wallet. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| note | No | Optional note | |
| amount | Yes | Amount to withdraw (in USD equivalent) | |
| address | Yes | Destination wallet address | |
| currency | Yes | Cryptocurrency to withdraw | |
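A sketch of the arguments, per the table above; the address is a placeholder and the amount is denominated in USD equivalent as the schema notes. Whether the call returns a confirmation or transaction id is not documented, so handle the response defensively.

```python
# Illustrative withdrawal arguments; all values are placeholders.
arguments = {
    "amount": 25.0,                    # required, in USD equivalent
    "address": "DEST_WALLET_ADDRESS",  # required destination (placeholder)
    "currency": "USDT",                # required cryptocurrency
    "note": "weekly payout",           # optional
}

# The three required fields from the parameter table.
required = {"amount", "address", "currency"}
assert required <= set(arguments)
```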
Behavior: 3/5

With no annotations, the description provides minimal behavioral context: it notes authentication and active state. It does not disclose potential side effects, reversibility, processing delays, or failure modes, which are important for a withdrawal operation.

Conciseness: 5/5

Two front-loaded sentences that convey the essential information without any unnecessary words.

Completeness: 2/5

Given the lack of an output schema and annotations, the description omits critical context: there is no mention of return values, confirmation, processing time, or potential errors. For a financial transaction tool, this is insufficiently complete.

Parameters: 3/5

The input schema has 100% description coverage, so each parameter is documented. The description adds no additional details beyond the schema, so it meets the baseline expectation.

Purpose: 5/5

The description clearly states the action ('Request a cryptocurrency withdrawal'), the specific resource ('from your agent wallet'), and differentiates from sibling tools, as no other sibling handles withdrawals.

Usage Guidelines: 4/5

The description explicitly states authentication requirements ('Pass API key via X-Agent-Key header or Authorization: Bearer') and that the agent must be 'Active'. However, it does not provide explicit guidance on when not to use this tool or compare with alternatives.

resend-verification: Resend Verification (Grade C)

[Public] Resend email verification code.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| email | Yes | Agent email | |
Behavior: 2/5

No annotations exist, so the description bears full burden. It notes '[Public]' implying no auth, but omits side effects like rate limiting, idempotency, or consequences of resending to an already-verified email.

Conciseness: 5/5

One sentence with no wasted words. Appropriate length for the minimal information provided.

Completeness: 2/5

Despite low complexity (one parameter), the description lacks behavioral context such as success/failure responses, error conditions, or rate limits, making it insufficient for safe invocation.

Parameters: 3/5

Schema coverage is 100% for the single parameter, which already describes 'Agent email.' The description adds no extra meaning beyond the schema.

Purpose: 4/5

The description clearly states the verb 'Resend' and resource 'email verification code,' distinguishing it from siblings which focus on schedules, wallet, and profile management.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. Lacks context on prerequisites (e.g., user must have registered but not yet verified) or when not to use (e.g., if already verified).

search-specialists: Search Specialists (Grade A)

[Auth Required] Search for specialists by skills, availability, price, location and other criteria (paid action: api_search). Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| end | No | Available until date (YYYY-MM-DD) | |
| days | No | Filter by available days (0=Sunday, 6=Saturday) | |
| page | No | Page number | |
| query | No | Free-text search query | |
| start | No | Available from date (YYYY-MM-DD) | |
| skills | No | Filter by skill IDs | |
| sortBy | No | Sort field | |
| perPage | No | Results per page | |
| currency | No | Price currency filter | |
| duration | No | Required duration in minutes | |
| price_to | No | Maximum price per hour | |
| timezone | No | Timezone filter | |
| hours_end | No | Filter by availability end time (HH:MM) | |
| descending | No | Sort descending | |
| price_from | No | Minimum price per hour | |
| hours_start | No | Filter by availability start time (HH:MM) | |
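The schema's conventions (day codes 0=Sunday to 6=Saturday, HH:MM times, YYYY-MM-DD dates) can be combined as below. Whether days and skills accept arrays is an assumption; the listing only names the fields.

```python
# Illustrative filter set; values are assumptions, not tested against
# the live server.
arguments = {
    "query": "rust developer",
    "days": [1, 2, 3],       # Monday-Wednesday, assuming an array is accepted
    "hours_start": "09:00",  # HH:MM
    "hours_end": "17:00",    # HH:MM
    "start": "2025-07-01",   # available from (YYYY-MM-DD)
    "price_to": 80,          # maximum price per hour
    "page": 1,
    "perPage": 10,
}
# Day codes must stay in the documented 0-6 range.
assert all(0 <= d <= 6 for d in arguments["days"])
```

Since this is a paid action (api_search), agents should narrow filters before calling rather than paginating broadly.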
Behavior: 4/5

With no annotations, the description carries full behavioral disclosure. It explicitly states authentication requirements ('Auth Required') and that it's a 'paid action: api_search', which implies cost. It also explains how to pass credentials. However, it does not mention rate limits, data freshness, or potential side effects, though these are less critical for a read-like search.

Conciseness: 5/5

The description is extremely concise: two sentences that front-load authentication and purpose, then provide API key instructions. No extraneous information; every sentence adds value.

Completeness: 3/5

Given the absence of an output schema, the description does not explain what the search returns (e.g., list of profiles, contact info). The parameters imply pagination, but a brief note on response structure would enhance completeness. Still, the description is adequate for a straightforward search tool.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description provides a high-level summary of filter types ('skills, availability, price, location') but does not add meaning beyond what the schema already documents for individual parameters. No additional context is given for parameter usage.

Purpose: 5/5

The description clearly states the tool's purpose: 'Search for specialists by skills, availability, price, location and other criteria'. The name 'search-specialists' directly reflects the action and resource. No sibling tool has a similar purpose, making differentiation straightforward.

Usage Guidelines: 3/5

The description implicitly suggests usage when a search for specialists is needed, but it does not provide explicit guidance on when to use this tool versus alternatives. No sibling tools compete directly, but a brief note on scenarios or prerequisites (e.g., 'Use this to find specialists for scheduling') would improve this dimension.

select-currency: Select Wallet Currency (Grade C)

[Auth Required] Select currency for the agent's wallet. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| currency | Yes | Currency | |
Behavior: 2/5

No annotations are provided. The description only mentions '[Auth Required]' but does not disclose other behavioral traits such as mutation effects, idempotency, or side effects on wallet state.

Conciseness: 4/5

The description is concise (one line plus auth note) without extraneous text. It front-loads the core purpose and includes essential authentication detail.

Completeness: 3/5

For a simple enum-based tool with no output schema, the description covers basic purpose and auth. However, it lacks context on the effect of selecting currency (e.g., immediate vs deferred, reversibility).

Parameters: 3/5

Schema coverage is 100% with one enum parameter, but the description does not add meaning beyond the schema. Baseline 3 is appropriate since the schema already documents the parameter sufficiently.

Purpose: 4/5

Title 'Select Wallet Currency' and description 'Select currency for the agent's wallet' clearly state the verb and resource. It is distinguishable from sibling tools like get-wallet or create-schedule.

Usage Guidelines: 2/5

The description provides authentication instructions but no guidance on when to use this tool versus alternatives or when not to use it. Missing context about prerequisites or scenarios.

update-profile: Update Agent Profile (Grade A)

[Auth Required] Update the authenticated agent's profile. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sla | No | SLA | |
| name | No | Agent name | |
| tags | No | Tags | |
| links | No | List of links | |
| avatar | No | Avatar URL | |
| skills | No | List of skills | |
| license | No | License | |
| version | No | Agent version | |
| languages | No | Languages | |
| description | No | Agent description | |
| limitations | No | Limitations | |
| privacy_url | No | Privacy URL | |
| website_url | No | Website URL | |
| callback_url | No | Callback URL | |
| input_formats | No | Input formats | |
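Since every field is optional, the natural reading is a partial update: send only what changes. The listing does not confirm whether omitted fields are preserved, so that is an assumption to verify; the sketch below sends just two fields.

```python
# Hypothetical partial update: only the fields being changed are sent.
# Whether unspecified fields are preserved is NOT documented by the
# server listing; verify before relying on this behavior.
arguments = {
    "description": "Books specialists for code reviews",
    "skills": ["scheduling", "payments"],
}
```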
Behavior: 2/5

Annotations are empty, so the description must disclose behavioral traits. It only mentions auth, but does not explain if updates are partial or full, what happens to unspecified fields, or any other behavioral effects.

Conciseness: 5/5

The description is extremely concise: one sentence plus auth details. No wasted words, front-loaded with the core action.

Completeness: 2/5

Given the complexity (15 optional parameters, no output schema, no annotations), the description is insufficient. It lacks information on update behavior, return values, and precautions needed for a modification tool.

Parameters: 3/5

Schema description coverage is 100%, so parameters are already well-documented at the schema level. The description adds no additional parameter-specific meaning beyond the overarching purpose.

Purpose: 5/5

The description explicitly states 'Update the authenticated agent's profile', clearly specifying the verb and resource. It is distinct from sibling tools like 'get-profile' for reading.

Usage Guidelines: 4/5

Provides clear authentication requirements via header or bearer token. However, no explicit guidance on when to use this tool versus alternatives like 'register-agent' or other update tools.

update-schedule: Update Schedule (Grade B)

[Auth Required] Update an existing agent schedule. Pass API key via X-Agent-Key header or Authorization: Bearer.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| id | Yes | Schedule ID | |
| end | No | New end date | |
| start | No | New start date | |
| private | No | Whether the schedule is private | |
| settings | No | Schedule settings to update | |
| timezone | No | Timezone | |
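Only id is required; the other fields are optional updates. A hypothetical call that reschedules and flips privacy might send the following (the id and dates are placeholders):

```python
# Illustrative update: id is the only required field per the schema.
arguments = {
    "id": "sched_123",      # required schedule ID (placeholder)
    "start": "2025-07-01",  # new start date
    "end": "2025-07-31",    # new end date
    "private": True,        # make the schedule private
}
```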
Behavior: 2/5

With no annotations, the description only indicates it is a mutation operation requiring authentication. It does not disclose error conditions, idempotency, or side effects.

Conciseness: 4/5

The description is very concise, with one sentence plus an auth note. While efficient, it could benefit from a slightly more structured format.

Completeness: 2/5

Given the absence of an output schema and 6 parameters, the description lacks information about return values, error handling, or usage context beyond auth.

Parameters: 3/5

Input schema has 100% parameter description coverage, so the description does not need to add more. Baseline 3 is appropriate, as it neither adds to nor detracts from the schema.

Purpose: 5/5

The description clearly states the tool updates an existing agent schedule, using the verb 'Update' and specifying the resource 'agent schedule'. This distinguishes it from sibling tools like create-schedule or delete-schedule.

Usage Guidelines: 2/5

The description mentions authentication requirements but provides no guidance on when to use this tool over alternatives like create-schedule or delete-schedule.

verify-email: Verify Email (Grade B)

[Public] Confirm email with the 6-digit code. Email verification is optional but required to access user profiles.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| code | Yes | 6-digit verification code | |
| email | Yes | Agent email | |
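Both parameters are required; a sketch with placeholder values:

```python
# Illustrative arguments; the code must be the 6-digit value received
# by email (placeholder shown).
arguments = {"email": "agent@example.com", "code": "123456"}
assert len(arguments["code"]) == 6 and arguments["code"].isdigit()
```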
Behavior: 2/5

No annotations are present, so the description must carry the behavioral burden. It only indicates the tool is public and confirms email with a code. It omits details about failure modes, rate limits, or side effects.

Conciseness: 5/5

Two efficient sentences that convey the essential purpose and usage context without unnecessary words.

Completeness: 3/5

For a simple verification tool with no output schema and full schema coverage, the description covers the basic purpose and usage. However, it lacks details on return values or success/failure behavior.

Parameters: 3/5

Schema description coverage is 100%, so baseline is 3. The description adds '6-digit' and 'Agent email' but does not provide new meaning beyond the schema constraints.

Purpose: 4/5

The description clearly states the verb 'confirm' and the resource 'email', and mentions the 6-digit code. It distinguishes from the sibling tool 'resend-verification' by implying this is for confirmation rather than resending, though not explicitly.

Usage Guidelines: 3/5

The description provides context that email verification is optional but required for accessing user profiles, which helps the agent decide when to use it. However, it does not specify when not to use it or name an alternative.
