Glama

AgentPMT - Marketplace For Autonomous Agents

Ownership verified

Server Details

AgentPMT is the AI agent marketplace that turns any MCP-compatible AI assistant into an autonomous employee. Connect once and your agents gain access to a growing ecosystem of tools, workflows, and skills spanning communication, data analytics, development, file management, search, and more. AgentPMT dynamically discovers and orchestrates tools from across the MCP ecosystem, so your agents can independently find the right tool for any task without manual configuration.
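Connecting is standard MCP: over the Streamable HTTP transport, a client POSTs JSON-RPC 2.0 messages to the server URL. A minimal sketch of the discovery request an assistant would send (the endpoint URL below is a placeholder, not the server's real address):

```python
import json

# Placeholder endpoint -- substitute the server URL shown on this page.
AGENTPMT_URL = "https://agentpmt.example/mcp"

# Under MCP's Streamable HTTP transport, tool discovery is a JSON-RPC 2.0
# "tools/list" request POSTed to the endpoint.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

body = json.dumps(list_tools_request)
print(body)
```

A real client would follow with `tools/call` requests naming one of the four tools listed below.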

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
AgentPMT-Report-Tool-Issue (Grade C)

Report an issue with a tool to the AgentPMT team.

Parameters

- tool_name (required)
- error_message (required)
- recommended_improvements (optional)

(No parameter carries a schema description.)
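Since the schema itself carries no parameter descriptions, the call surface is just the three fields above. A sketch of the arguments an agent might send (all values illustrative):

```python
# Illustrative arguments for AgentPMT-Report-Tool-Issue.
# tool_name and error_message are required; recommended_improvements is optional.
report_args = {
    "tool_name": "AgentPMT-Workflow-Skills",
    "error_message": "end_workflow rejected the call when rating was omitted",
    "recommended_improvements": "Document that rating is required for end_workflow",
}

# Validate required fields before calling, since the schema won't explain a failure.
missing = {"tool_name", "error_message"} - report_args.keys()
assert not missing, f"missing required fields: {missing}"
```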
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only specifies the recipient team, omitting what happens after reporting (confirmation, async processing, response format).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste, front-loaded with action verb, appropriately sized for the information conveyed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Insufficient for a 3-parameter tool with zero schema descriptions and no annotations. Lacks parameter details, return value specification, or behavioral outcomes required for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. Description mentions 'tool' and 'issue' loosely mapping to tool_name and error_message, but provides no syntax guidance, examples, or any mention of the optional recommended_improvements parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Report') and resource ('an issue with a tool'), specifying the target recipient ('to the AgentPMT team').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus siblings (e.g., AgentPMT-Send-Human-Request) or when reporting is appropriate versus retrying.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

AgentPMT-Send-Human-Request (Grade B)

AgentPMT Send Human Request - Send a request to your human to enable a tool, workflow, or add funds to your budget.

Parameters

- id (optional): Product/workflow ObjectId for access requests, or optional target identifier for check_response lookup.
- action (optional; default: send): Operation to perform: send or check_response. Only use check_response when send returned approval_required=true.
- request (optional): Freeform request body to send to the user (required when action=send).
- request_id (optional): Mobile approval request ObjectId to check when action=check_response.
- request_type (optional): Request type: add_funds, enable_tool, enable_workflow, notification_only, other (required when action=send). Use notification_only to send a message that does not require approval; do not wait after sending.
- include_request (optional): When action=check_response, include the full request record in the response.
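The parameters imply a two-step protocol: send a request, then poll with check_response once send reports approval_required=true. A sketch of the two argument payloads, with an illustrative (not real) request_id standing in for the ObjectId the send step would return:

```python
def build_send_args(request_text: str, request_type: str) -> dict:
    """Arguments for action="send"; request and request_type are required."""
    allowed = {"add_funds", "enable_tool", "enable_workflow", "notification_only", "other"}
    if request_type not in allowed:
        raise ValueError(f"unknown request_type: {request_type!r}")
    return {"action": "send", "request": request_text, "request_type": request_type}

def build_check_args(request_id: str, include_request: bool = False) -> dict:
    """Arguments for action="check_response"; use only after approval_required=true."""
    return {"action": "check_response", "request_id": request_id, "include_request": include_request}

send_args = build_send_args("Please enable the CSV-export tool for me", "enable_tool")
check_args = build_check_args("66f1a2b3c4d5e6f7a8b9c0d1")  # illustrative ObjectId
```

With request_type="notification_only" there is no approval to poll for, so the check_response step is skipped.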
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. While it correctly identifies the human recipient, it fails to disclose critical behavioral traits: that this creates a pending approval state requiring subsequent check_response calls, whether the operation blocks or is async, and what happens upon denial.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized but front-loaded with redundancy: 'AgentPMT Send Human Request' restates the tool name verbatim before the dash. The subsequent sentence 'Send a request to your human...' is efficient and well-structured.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich input schema (100% coverage, enums with clear values) and lack of output schema, the description adequately covers the primary use cases. However, it omits end-to-end workflow context that would help an agent understand the full request-response lifecycle with the human.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions the three request types (enable tool, workflow, add funds) which align with the request_type enum, but adds no semantic information beyond what the schema already provides for the 6 parameters.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Send a request) and scope (to your human for enabling tools, workflows, or adding funds), distinguishing it from siblings like Tool-Search-and-Execution or Report-Tool-Issue. The prefix 'AgentPMT Send Human Request' is redundant with the tool name, preventing a score of 5.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit usage guidance by enumerating the three scenarios where the tool applies (enable tool, enable workflow, add funds). However, it lacks explicit 'when to use vs. when not to use' guidance or mention of prerequisite steps, such as whether to search for existing tools first.

AgentPMT-Tool-Search-and-Execution (Grade C)

AgentPMT Tool Search and Execution - Unified interface for discovering, searching, and using AgentPMT tools without MCP refresh. Access tools required for workflows and skills here.

Parameters

- page (optional): Page number for pagination (1-based). Used when listing tools.
- query (optional): Search query for text and semantic matching. If provided, performs hybrid search.
- action (required): Operation to perform: get_tools (discover tools: list, search, or by workflow), get_schema (full tool schema by ID), request_credentials (email the user to add credentials), call_tool (execute a tool), get_instructions (help).
- message (optional): Custom message to include in the credential-request notification (for the request_credentials action).
- tool_id (optional): Tool/product ID (required for the get_schema and call_tool actions).
- page_size (optional): Number of results per page (1-100).
- tool_name (optional): Alternative to tool_id: a tool name to search for (for the call_tool action; searches, then executes the first match).
- parameters (optional): Parameters to pass to the tool for the call_tool action.
- workflow_id (optional): Workflow/skill chain ID. If provided, returns all tools from that workflow.
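Discovery and execution are two calls to the same tool: a get_tools search, then call_tool with an ID taken from the results. A sketch with placeholder IDs and inner parameters (none of the values are real):

```python
# Step 1: hybrid search for candidate tools.
search_args = {
    "action": "get_tools",
    "query": "send email",  # triggers hybrid text + semantic search
    "page": 1,
    "page_size": 10,        # schema allows 1-100
}

# Step 2: execute a result by ID. tool_id and the inner parameters are
# placeholders; real values come from the get_tools/get_schema responses.
call_args = {
    "action": "call_tool",
    "tool_id": "64a1b2c3d4e5f6a7b8c9d0e1",
    "parameters": {"to": "user@example.com", "subject": "Status update"},
}

assert 1 <= search_args["page_size"] <= 100
```

Note that call_tool executes the underlying tool, side effects included, so it deserves the same caution as a direct invocation.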
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to mention that the 'call_tool' action executes tools with potential side effects/destructive capabilities, or that 'request_credentials' sends emails to users. There is no mention of rate limits, authentication requirements for specific actions, or whether operations are reversible.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two reasonably concise sentences that are front-loaded with the primary function. However, the first sentence begins with the tool name (tautological), and the second sentence ('Access tools required for workflows and skills here') could be more specific. It is appropriately sized but not exceptionally tight.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, 5 distinct action modes including execution and credential requests, nested objects) and the absence of annotations and output schema, the description is insufficient. It lacks behavioral warnings about the 'call_tool' and 'request_credentials' actions, does not explain the multi-modal nature of the tool, and fails to document expected outcomes or error conditions.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 9 parameters. The description adds contextual meaning by mentioning 'workflows and skills' (relating to the workflow_id parameter) and 'without MCP refresh' (operational context), but does not elaborate on parameter syntax, formats, or relationships between fields (e.g., that tool_id is required for specific actions) beyond the schema.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs (discovering, searching, using) and identifies the resource (AgentPMT tools). It distinguishes the tool by noting it works 'without MCP refresh,' which signals a key benefit. However, it does not clearly differentiate from the sibling 'AgentPMT-Workflow-Skills,' which also relates to workflows and could cause confusion about when to use which tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance. While 'without MCP refresh' implies a technical context and 'Access tools required for workflows and skills here' hints at use cases, there is no explicit guidance on when to select this over sibling tools (particularly AgentPMT-Workflow-Skills), nor warnings about when NOT to use it (e.g., when direct tool invocation is inappropriate).

AgentPMT-Workflow-Skills (Grade C)

AgentPMT Workflow Skills - Fetch, Search, and Use Agent Workflows and Skills. Use to retrieve, initiate, and complete workflows.

Parameters

- skip (optional): Number of results to skip for pagination.
- limit (optional): Maximum results to return (1-200).
- query (optional): Substring search over name/description (case-insensitive).
- action (required): Operation to perform: search, browse_industry, get_workflow_skill, start_workflow, end_workflow, get_active_workflow, get_instructions.
- rating (optional): Workflow rating from 1-5 stars (required for end_workflow).
- comment (optional): Comment about the workflow experience (required for end_workflow).
- include (optional): Comma-separated entity types to include for browse_industry: workflows, content, categories. Defaults to all.
- industry (optional): Industry name or slug (required for browse_industry). Returns linked workflows, content, and categories.
- skill_id (optional): Skill chain ObjectId or slug (required for get_workflow_skill, start_workflow, end_workflow).
- publisher (optional): Filter by publisher username (case-insensitive substring match).
- categories (optional): Comma-separated category names to filter by.
- industry_tags (optional): Comma-separated industry tag names to filter by.
- suggested_improvements (optional): Suggested improvements or changes to the workflow.
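The action enum encodes a lifecycle: search for a workflow, start it by skill_id, then end it with the rating and comment that end_workflow requires. A sketch using an illustrative slug for skill_id (not a real workflow):

```python
# 1. Find a workflow (query is a case-insensitive substring match).
search_args = {"action": "search", "query": "invoice", "limit": 20}

# 2. Start it. "invoice-processing" is an illustrative slug, not a real skill_id.
start_args = {"action": "start_workflow", "skill_id": "invoice-processing"}

# 3. End it; rating (1-5) and comment are required for end_workflow.
end_args = {
    "action": "end_workflow",
    "skill_id": "invoice-processing",
    "rating": 4,
    "comment": "Ran cleanly; step ordering could be clearer",
    "suggested_improvements": "Clarify which steps are optional",  # optional
}

assert 1 <= end_args["rating"] <= 5
```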
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers minimal behavioral context. It doesn't explain what happens during workflow initiation, whether end_workflow is mandatory, persistence of workflow state, or error conditions. The lifecycle implication (retrieve→initiate→complete) is present but underdeveloped.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences are appropriately brief, but the first sentence redundantly repeats 'Workflow Skills' and 'AgentPMT' without adding new information beyond the tool name. The front-loading is acceptable but not information-dense.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 13 parameters handling a full workflow lifecycle (search→start→end) and no output schema or annotations, the description is insufficient. It lacks explanation of the workflow state machine, what 'skills' represent versus workflows, and expected interaction patterns.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing a baseline of 3. The description adds minimal value beyond the schema, only loosely grouping the 7 actions into 'fetch/search/use' categories without clarifying the specific requirements for parameters like rating/comment being required only for end_workflow.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description lists specific verbs (fetch, search, retrieve, initiate, complete) and identifies the resource (workflows and skills), but fails to distinguish from the sibling 'AgentPMT-Tool-Search-and-Execution' tool. The 'AgentPMT' prefix is opaque jargon that doesn't clarify scope.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus siblings, nor prerequisites for the workflow lifecycle (e.g., requiring get_active_workflow before end_workflow). The phrase 'Use to retrieve, initiate, and complete workflows' states capabilities but not selection criteria.
