Server Details

Agent-native marketing platform: create campaigns, submit proofs, review submissions.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: benzatkulak-collab/socialperks
GitHub Stars: 0

Tool Descriptions (Grade: B)

Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.7/5.

Server Coherence (Grade: A)

Disambiguation: 5/5

Each tool targets a distinct action or resource: campaigns, benchmarks, stats, pricing, actions list, submissions, proof, review, and influencer search. No overlapping functionalities.

Naming Consistency: 5/5

All tool names follow a consistent camelCase verbNoun pattern (e.g., createCampaign, getBenchmarks, listActions), with no mixing of styles or unclear verbs.

Tool Count: 5/5

With 10 tools covering campaign management, submissions, analytics, and influencer search, the count is well-scoped for the domain—neither too few nor excessive.

Completeness: 4/5

The tool set covers core workflows (create, list, stats, review) but lacks an update/delete campaign tool and a detailed getCampaign tool beyond stats, which are minor gaps.

Available Tools

10 tools
createCampaign (Grade: A)

Create and launch a new campaign for the calling business. Returns the campaign id, name, and dashboard URL. The campaign goes live immediately — no separate publish step.

Parameters (JSON Schema)
- name (required): Customer-facing campaign name
- actions (required): Action IDs the campaign accepts (e.g. ['ig_st'] for an Instagram Story Tag).
- businessId (required): The business owning this campaign. Must match the API key's business.
- description (optional): Optional internal description
- discountType (required): Discount denomination: 'pct' for percentage, 'dol' for dollars off.
- discountValue (required): Discount amount. Capped at 100 for pct, 10000 for dol.
- expiresInDays (optional): Days until the campaign auto-expires. Default 60.
- maxCompletions (optional): Optional cap on total completions. Null = no cap.
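To make the documented constraints concrete, here is a hypothetical client-side helper that assembles a createCampaign arguments dict and enforces the caps from the parameter descriptions. The field names mirror the schema; the helper itself (build_create_campaign_args) and all example values are assumptions, not part of the server's API.

```python
# Discount caps taken from the discountValue description:
# 100 for 'pct' (percentage), 10000 for 'dol' (dollars off).
DISCOUNT_CAPS = {"pct": 100, "dol": 10000}

def build_create_campaign_args(name, actions, business_id, discount_type,
                               discount_value, description=None,
                               expires_in_days=60, max_completions=None):
    """Build a createCampaign arguments dict, rejecting out-of-range discounts."""
    if discount_type not in DISCOUNT_CAPS:
        raise ValueError("discountType must be 'pct' or 'dol'")
    if not 0 < discount_value <= DISCOUNT_CAPS[discount_type]:
        raise ValueError(f"discountValue exceeds cap for {discount_type}")
    args = {
        "name": name,
        "actions": actions,                # e.g. ["ig_st"] for an Instagram Story Tag
        "businessId": business_id,         # must match the API key's business
        "discountType": discount_type,
        "discountValue": discount_value,
        "expiresInDays": expires_in_days,  # default 60 per the schema
    }
    if description is not None:
        args["description"] = description
    if max_completions is not None:
        args["maxCompletions"] = max_completions  # omitted = no cap
    return args

# Illustrative call; the campaign goes live immediately on the server side.
args = build_create_campaign_args("Spring Story Tags", ["ig_st"],
                                  "biz_123", "pct", 20)
```

Because the campaign launches immediately with no publish step, validating locally before the call is cheaper than discovering an error after a live campaign exists.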
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description discloses key behaviors: immediate activation (no publish step), return shape, and implicit mutation. Lacks details on idempotency, error states, or rate limits, but covers major behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no unnecessary words. Could be slightly improved by front-loading the most critical info (immediate activation) but is already efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 params and no output schema, the description explains return shape (id, name, dashboard URL) and immediate launch. It omits potential error conditions or the businessId constraint explicitly, but is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% so baseline is 3. The description adds no extra meaning beyond the schema's parameter descriptions, only mentioning return values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Create and launch a new campaign' with return values (id, name, dashboard URL) and immediate activation. It clearly distinguishes from sibling list/stat tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit usage guidance or when-not-to-use instructions. Context implies it's for campaign creation but doesn't mention alternatives or constraints beyond the schema's businessId matching requirement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getBenchmarks (Grade: C)

Get industry benchmarks (engagement rate, conversion rate, etc.).

Parameters (JSON Schema)
- industry (optional)
Behavior: 2/5

No annotations are provided, so the description carries the full burden for behavioral traits. It fails to disclose any details about data source, update frequency, rate limits, or whether the tool reads or writes. Only the basic function is mentioned.

Conciseness: 4/5

The description is a single sentence that efficiently conveys the tool's purpose with examples. It is appropriately sized and front-loaded with the action and resource.

Completeness: 2/5

Despite the tool's relative simplicity, the description is incomplete. It does not explain the output format or structure, and the optional nature of the industry parameter is not addressed. With no output schema, more detail on return values is expected.

Parameters: 2/5

The single parameter 'industry' has no description in the schema (0% coverage), and the description does not clarify acceptable values, format, or whether it is optional. The mention of benchmark examples gives minimal context but does not compensate for the lack of parameter details.

Purpose: 4/5

The description clearly states 'Get industry benchmarks' and provides examples (engagement rate, conversion rate), making the purpose straightforward. However, it does not explicitly distinguish from sibling tools like getCampaignStats, though the name alone differentiates.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives such as getCampaignStats or searchInfluencers. The description lacks context on prerequisites or typical use cases.

getCampaignStats (Grade: A)

Get summary stats for a single campaign — total submissions, approved count, conversion rate, perks issued, and time-to-first-submission. Useful for an agent reporting back to its user.

Parameters (JSON Schema)
- campaignId (required): Campaign to fetch stats for.
Behavior: 3/5

No annotations are provided, so the description must carry the burden. It describes the read operation but doesn't mention side effects, auth, or limitations. Adequate but not comprehensive.

Conciseness: 5/5

Two sentences, direct and efficient, with no irrelevant information.

Completeness: 4/5

Lists returned stats, compensating for the lack of an output schema. Missing error handling or edge cases, but sufficient for a simple query tool.

Parameters: 3/5

Schema coverage is 100%, and the description does not add meaning beyond the schema's description of campaignId. The baseline score is appropriate.

Purpose: 5/5

The description clearly states it gets summary stats for a single campaign and lists the specific metrics, distinguishing it from sibling tools like listCampaigns or listSubmissions.

Usage Guidelines: 4/5

Explicitly notes it's useful for agent reporting, implying when to use. Does not state when not to use or alternatives, but context is clear enough.

getPricing (Grade: A)

Get the market-rate pricing for a marketing action. Returns USD value and recommended perk type/value.

Parameters (JSON Schema)
- actionId (optional): Action ID (e.g. ig_post, google_review)
- platformId (optional): Platform ID (e.g. instagram, google)
- businessType (optional): Business type modifier (default: general)
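A minimal sketch of assembling getPricing arguments from the schema above. The businessType default of 'general' comes from the parameter description; the helper name and example IDs are hypothetical.

```python
def build_pricing_args(action_id=None, platform_id=None,
                       business_type="general"):
    """Build a getPricing arguments dict; all filters are optional and
    businessType defaults to 'general' per the schema."""
    args = {"businessType": business_type}
    if action_id is not None:
        args["actionId"] = action_id      # e.g. "ig_post", "google_review"
    if platform_id is not None:
        args["platformId"] = platform_id  # e.g. "instagram", "google"
    return args

# Look up pricing for a single action, leaving the platform unset.
pricing_args = build_pricing_args(action_id="ig_post")
```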
Behavior: 3/5

Without annotations, the description must disclose behavioral traits. It states the tool returns USD value and perk recommendation, implying read-only behavior. However, it does not specify authentication needs, side effects (e.g., mutability), or response structure beyond the basic output.

Conciseness: 5/5

The description is a single concise sentence that efficiently communicates the tool's purpose and output. Every word contributes value without redundancy.

Completeness: 3/5

While the description covers basic purpose and output, it lacks details on default behavior for optional parameters, error handling, and output format specifics. Given the absence of an output schema, more contextual completeness would be beneficial.

Parameters: 3/5

Schema coverage is 100% with descriptions for all three parameters (actionId, platformId, businessType). The tool description adds no additional meaning beyond what the schema already provides, resulting in a baseline score of 3.

Purpose: 5/5

The description clearly states the tool retrieves market-rate pricing for a marketing action. It specifies the output includes USD value and recommended perk type/value, making the purpose distinct from sibling tools like getCampaignStats or getBenchmarks.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, such as when pricing is needed before creating a campaign. It does not mention any prerequisites or exclusions.

listActions (Grade: A)

List the 107 marketing actions available on Social Perks. Filterable by platform, type, and effort.

Parameters (JSON Schema)
- page (optional)
- type (optional)
- perPage (optional)
- maxEffort (optional)
- platformId (optional)
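Since the description does not document pagination behavior, a defensive client can page until an empty result. The loop below is a sketch under that assumption; call_tool is a stand-in for an MCP client's tool invocation, and the fake two-page client exists only to demonstrate the loop.

```python
def fetch_all_actions(call_tool, platform_id=None, max_effort=None,
                      per_page=50):
    """Page through listActions, stopping on the first empty page.
    The stop condition is an assumption; the tool does not document it."""
    actions, page = [], 1
    while True:
        args = {"page": page, "perPage": per_page}
        if platform_id is not None:
            args["platformId"] = platform_id
        if max_effort is not None:
            args["maxEffort"] = max_effort
        batch = call_tool("listActions", args)
        if not batch:
            return actions
        actions.extend(batch)
        page += 1

# Fake client returning two pages then an empty page, to exercise the loop.
_pages = {1: [{"id": "ig_post"}, {"id": "ig_story"}],
          2: [{"id": "google_review"}]}

def fake_call_tool(name, arguments):
    return _pages.get(arguments["page"], [])

all_actions = fetch_all_actions(fake_call_tool, per_page=2)
```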
Behavior: 3/5

With no annotations, the description carries the full burden. It implies a read-only operation (list) and mentions filtering, but does not disclose pagination behavior, rate limits, or what happens with empty results.

Conciseness: 5/5

The description is a single, 13-word sentence that front-loads the action and includes key filters. Every word is necessary and there is no redundancy.

Completeness: 3/5

The description covers the basic purpose and filters but lacks details on pagination parameters (page, perPage) and output format. Given no output schema and no annotations, more completeness would be beneficial.

Parameters: 3/5

Schema coverage is 0%, so the description must compensate. It adds meaning for three parameters (platformId, type, maxEffort) but omits page and perPage. The enum values for 'type' are not explained.

Purpose: 5/5

The description clearly states the tool lists 'marketing actions' on Social Perks, with a specific count (107) and filtering capabilities. It distinguishes from siblings like listCampaigns and listSubmissions by its resource type.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives (e.g., listCampaigns). The description only states what it does, not when it should be preferred or avoided.

listCampaigns (Grade: B)

List campaigns. Requires auth — returns the caller's campaigns.

Parameters (JSON Schema)
- status (optional)
Behavior: 3/5

With no annotations, the description discloses the auth requirement and scoped output. However, it lacks details like pagination, rate limits, or handling of empty results.

Conciseness: 5/5

Two sentences with no wasted words. The key information is front-loaded and easy to parse.

Completeness: 3/5

Given the simplicity (one optional param, no output schema), the description covers purpose and auth but omits response format and parameter details, leaving agents underinformed.

Parameters: 1/5

The single parameter (status) is entirely absent from the description. With 0% schema coverage, the description must compensate but fails to explain the parameter's purpose or usage.

Purpose: 5/5

The description clearly states 'List campaigns' and specifies the scope as 'caller's campaigns', providing a specific verb and resource. No sibling tool duplicates this function.

Usage Guidelines: 2/5

The description mentions the authentication requirement but offers no guidance on when to use this tool over siblings or when not to use it. No alternatives or exclusions are provided.

listSubmissions (Grade: A)

List submissions for a business or campaign, filterable by state (pending/approved/rejected). Returns paginated results.

Parameters (JSON Schema)
- page (optional)
- state (optional)
- perPage (optional)
- businessId (optional)
- campaignId (optional)
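As a sketch of how an agent might invoke this tool, here is a JSON-RPC tools/call envelope for listSubmissions. The envelope shape (jsonrpc, method, params.name, params.arguments) follows the MCP protocol; the request id and all filter values are illustrative.

```python
import json

# Hypothetical MCP tools/call request: pending submissions for one
# campaign, first page of 25 results.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "listSubmissions",
        "arguments": {
            "campaignId": "camp_abc",  # filter to a single campaign
            "state": "pending",        # pending/approved/rejected
            "page": 1,
            "perPage": 25,
        },
    },
}
payload = json.dumps(request)  # wire format sent to the server
```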
Behavior: 3/5

With no annotations, the description carries the full burden. It mentions paginated results, indicating a read operation, but does not disclose traits like rate limits, ordering, or behavior without filters.

Conciseness: 5/5

The description is a single 14-word sentence that is front-loaded with the verb and resource, containing no waste.

Completeness: 4/5

Given no output schema and no annotations, the description covers the core functionality (list, filter, paginate) but lacks details on sorting, response structure, or error handling.

Parameters: 4/5

Schema description coverage is 0%, so the description adds meaning to all 5 parameters: state enum values, business/campaign filtering, and pagination via page/perPage. However, it does not explicitly describe each parameter's role beyond the schema.

Purpose: 5/5

The description clearly states it lists submissions for a business or campaign, with filtering by state and paginated results. This distinguishes it from sibling tools like createCampaign or reviewSubmission.

Usage Guidelines: 3/5

The description implies it is used to retrieve submissions but does not provide explicit when-to-use or when-not-to-use guidance, nor does it mention alternative tools like reviewSubmission for other actions.

reviewSubmission (Grade: A)

Approve or reject a submission. Approving releases the perk; rejecting requires a reason. Use this when the business chooses manual review over auto-verification.

Parameters (JSON Schema)
- reason (optional): Required when decision='reject'. 1-500 chars.
- decision (required): Approve releases the perk. Reject explains why the proof was insufficient.
- submissionId (required): Submission id from a prior submitProof call.
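The reject-requires-reason coupling can be checked client-side before calling the tool. Below is a hypothetical validator; the decision literals 'approve' and 'reject' are assumptions, since the schema excerpt above does not show the enum values.

```python
def build_review_args(submission_id, decision, reason=None):
    """Build reviewSubmission arguments, enforcing the documented rule
    that a 1-500 char reason accompanies every rejection."""
    if decision not in ("approve", "reject"):  # assumed enum values
        raise ValueError("decision must be 'approve' or 'reject'")
    args = {"submissionId": submission_id, "decision": decision}
    if decision == "reject":
        if reason is None or not 1 <= len(reason) <= 500:
            raise ValueError(
                "reason (1-500 chars) is required when decision='reject'")
        args["reason"] = reason
    return args

# Approval releases the perk, so no reason is needed.
approve_args = build_review_args("sub_42", "approve")
```

Catching a missing reason locally avoids a round trip on what is otherwise an irreversible perk-releasing action.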
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It discloses that approving releases the perk and rejecting requires a reason. This is adequate for a simple decision tool, though it omits potential side effects or reversibility.

Conciseness: 5/5

Two sentences, zero waste. The purpose and core information are front-loaded in the first sentence.

Completeness: 4/5

Given no output schema and a simple parameter structure, the description adequately covers what the tool does and when to use it. It could mention return values or error handling, but for a review action this is sufficient.

Parameters: 3/5

Schema coverage is 100% and already includes detailed descriptions for all three parameters. The description adds some contextual reinforcement but no new semantic details beyond what the schema provides. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool approves or rejects a submission, with specific consequences for each action (releases perk vs. requires reason). This verb+resource combination uniquely distinguishes it from siblings like submitProof or listCampaigns.

Usage Guidelines: 4/5

Explicitly says when to use: 'when the business chooses manual review over auto-verification.' It does not directly name alternatives but implies the context. An explicit mention of alternatives would raise this to 5.

searchInfluencers (Grade: C)

Search influencers by platform and follower count.

Parameters (JSON Schema)
- platform (optional)
- minFollowers (optional)
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It describes a search operation but does not disclose whether it is read-only, whether authentication is needed, or any behavioral traits like pagination or rate limits.

Conciseness: 4/5

The description is a single sentence that directly states the tool's purpose. It is concise and front-loaded, though it could benefit from a bit more detail without becoming verbose.

Completeness: 2/5

Given no output schema and no annotations, the description should provide more context about return values, authentication, or error handling. The current description leaves the agent guessing about what happens after the search.

Parameters: 2/5

Schema description coverage is 0%. The description mentions the two parameters by name but adds no extra meaning, such as allowed values for platform or whether minFollowers is inclusive. The agent gets no additional semantics beyond the parameter names.

Purpose: 4/5

The description clearly states the action (search), the resource (influencers), and the filtering criteria (platform and follower count). It differentiates from sibling tools, which focus on campaigns and stats, but does not specify what the search returns (e.g., list or count).

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives, no prerequisites, and no exclusion criteria. The description only states the basic function without context.

submitProof (Grade: A)

Submit proof of completion for a campaign action — a public URL to a post, a screenshot, a video, or platform-verified data. The submission enters a review queue (or auto-approves depending on the campaign's verification mode).

Parameters (JSON Schema)
- actionId (required): Specific action being completed (must be allowed by the campaign).
- metadata (optional): Optional bag of context (poster handle, post timestamp, etc.).
- proofUrl (required): Public URL of the proof. For url-type submissions, the platform verifier will fetch this.
- proofType (required): How the proof was captured. 'url' triggers automated verification.
- campaignId (required): Campaign the submission applies to.
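A hypothetical helper for assembling submitProof arguments. Only 'url' is documented as a proofType value; the other labels in the allowed set below, the helper name, and the example values are all assumptions.

```python
# Assumed proofType labels; only "url" appears in the tool's documentation,
# where it triggers automated verification.
ASSUMED_PROOF_TYPES = {"url", "screenshot", "video", "platform_data"}

def build_submit_proof_args(campaign_id, action_id, proof_url, proof_type,
                            metadata=None):
    """Build a submitProof arguments dict for a campaign action."""
    if proof_type not in ASSUMED_PROOF_TYPES:
        raise ValueError(f"unknown proofType: {proof_type!r}")
    args = {
        "campaignId": campaign_id,
        "actionId": action_id,    # must be allowed by the campaign
        "proofUrl": proof_url,    # public URL; fetched by the verifier for url-type
        "proofType": proof_type,
    }
    if metadata:
        args["metadata"] = metadata  # optional bag of context
    return args

# A url-type submission, which would enter automated verification.
proof_args = build_submit_proof_args(
    "camp_abc", "ig_st", "https://instagram.com/p/xyz", "url",
    metadata={"handle": "@coffee_fan"})
```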
Behavior: 3/5

With no annotations, the description carries the full burden. It discloses the review queue/auto-approval behavior and verification for URL submissions. It does not mention immutability, side effects, or permissions, which are relevant for a mutation tool.

Conciseness: 5/5

The description is two sentences, front-loaded with the main action, and free of unnecessary words. Every sentence provides value.

Completeness: 3/5

The description explains the submission process but omits the return value (e.g., submission ID, status). Given 5 parameters and no output schema, the description should clarify what the agent can expect as a response.

Parameters: 3/5

Schema coverage is 100%, so the schema already describes each parameter. The description adds minor context (e.g., 'url' triggers verification) but does not significantly extend understanding beyond the schema.

Purpose: 5/5

The description states 'Submit proof of completion for a campaign action' with a specific verb and resource, and lists the types of proof accepted. It clearly distinguishes from sibling tools like reviewSubmission and listSubmissions.

Usage Guidelines: 4/5

The description explains the post-submission behavior (review queue or auto-approval) and mentions automated verification for URL types. However, it does not explicitly state when not to use the tool or provide alternatives.
