
Server Details

ProxyLink MCP server for finding and booking home service professionals

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 4 of 4 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: lookup_company retrieves company information, create_support_ticket submits tickets, search_knowledge_base performs searches, and rate_knowledge_base_answer provides feedback. The descriptions explicitly differentiate their roles, making misselection unlikely.

Naming Consistency: 4/5

The tools follow a consistent verb_noun pattern (e.g., lookup_company, create_support_ticket) with clear, descriptive names. There is a minor deviation with rate_knowledge_base_answer using 'rate' instead of a more standard verb like 'evaluate', but overall the naming is predictable and readable.

Tool Count: 5/5

With 4 tools, the server is well-scoped for its apparent purpose of handling support and knowledge base interactions. Each tool earns its place by covering distinct aspects of the workflow, from company lookup to ticket creation and feedback, without being overly sparse or bloated.

Completeness: 4/5

The tool set covers core workflows for support and knowledge base management, including lookup, search, creation, and feedback. A minor gap exists in lacking update or delete operations for tickets or knowledge base entries, but agents can work around this for basic use cases.
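
The dependency chain the coherence review describes (lookup first, then search, then feedback) can be sketched as follows. This is an illustrative stand-in, not the server's actual client API: the `call_tool` helper and the return shapes (`tenant_id`, `query_id`, `status` values) are assumptions for demonstration.

```python
# Hypothetical sketch of the documented tool workflow. call_tool and the
# canned return values are illustrative assumptions, not server behavior.

def call_tool(name, arguments):
    """Stand-in for an MCP client's tool call; returns canned results."""
    canned = {
        "lookup_company": {"tenant_id": "t-123", "ticket_types": ["change-address"]},
        "search_knowledge_base": {"query_id": "q-456", "answer": "..."},
        "rate_knowledge_base_answer": {"status": "recorded"},
    }
    return canned[name]

# Step 1: resolve the company name to a tenant_id.
company = call_tool("lookup_company", {"company_name": "ABC Plumbing"})

# Step 2: search that company's knowledge base using the tenant_id.
result = call_tool("search_knowledge_base", {
    "tenant_id": company["tenant_id"],
    "query": "cancellation policy",
    "original_question": "How do I cancel my service?",
})

# Step 3: rate the answer using the query_id returned by the search.
feedback = call_tool("rate_knowledge_base_answer", {
    "query_id": result["query_id"],
    "rating": 5,
    "helpful": True,
})
```

The key point is the ordering: both `search_knowledge_base` and `create_support_ticket` depend on the `tenant_id` from `lookup_company`, and `rate_knowledge_base_answer` depends on the `query_id` from a prior search.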

Available Tools

4 tools
create_support_ticket: A

Submit a support ticket request (e.g., address change, service cancellation). Use lookup_company first to get the tenant_id and see available ticket types with their required fields.

Parameters (JSON Schema)

- fields (required): Key-value map of field IDs to their values, based on the ticket type's form fields
- tenant_id (required): Company tenant ID from lookup_company
- conversation (required): Conversation history leading to this ticket. Only include messages that occurred after the ticket request was initiated, until the ticket was created.
- ticket_type_id (required): The ticket type ID (e.g., 'change-address', 'cancel-service')
- ticket_type_name (required): The ticket type display name (e.g., 'Change Address', 'Cancel Service')
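
A minimal sketch of assembling these arguments, with a local check that the ticket type's required form fields are all present before calling the tool. The ticket-type definition and field names below are hypothetical examples; the real set comes from the `lookup_company` response.

```python
# Illustrative create_support_ticket payload. The ticket_type definition
# here is a hypothetical example of what lookup_company might return.

ticket_type = {
    "id": "change-address",
    "name": "Change Address",
    "required_fields": ["new_address", "effective_date"],
}

payload = {
    "tenant_id": "t-123",  # from lookup_company
    "ticket_type_id": ticket_type["id"],
    "ticket_type_name": ticket_type["name"],
    "fields": {
        "new_address": "42 Elm St",
        "effective_date": "2025-07-01",
    },
    # Only messages after the ticket request was initiated:
    "conversation": [
        {"role": "user", "content": "I need to change my address."},
        {"role": "assistant", "content": "Sure, what is the new address?"},
    ],
}

# Defensive local check before submitting a ticket (a write operation).
missing = [f for f in ticket_type["required_fields"]
           if f not in payload["fields"]]
```

Checking `missing` client-side is worthwhile because the description does not document the tool's failure behavior for incomplete forms.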
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It adds workflow dependency context but lacks mutation disclosure (irreversibility, permissions, side effects, or success/failure behavior). Adequate but gaps remain for a destructive operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence establishes purpose with examples; second provides critical prerequisite instruction. Efficiently structured and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and nested conversation object, the description successfully covers the prerequisite workflow needed to populate parameters. Lacks output guidance but no output schema exists to require such description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds semantic value by explaining the tenant_id/ticket_type relationship to lookup_company and referencing 'required fields' which contextualizes the fields parameter beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Submit' and resource 'support ticket request' with specific examples (address change, service cancellation). Implicitly distinguishes from siblings (lookup vs. create) though could explicitly contrast with lookup_company.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite workflow: 'Use lookup_company first to get the tenant_id and see available ticket types with their required fields.' Provides clear when-to-use guidance and dependency chain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_company: A

Find a company by name to access their knowledge base. Use this first before searching a company's knowledge base to get the tenant_id.

Parameters (JSON Schema)

- company_name (required): The name of the company to look up (e.g., 'ABC Plumbing', 'ProTech HVAC')
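
Since the description does not say whether matching is fuzzy or case-sensitive (noted in the Parameters score below), an agent may want to normalize the name it sends. The helper below is an assumption about a reasonable client-side precaution, not documented server behavior.

```python
# Hypothetical normalization before calling lookup_company. Whether the
# server matches fuzzily or case-sensitively is undocumented.

def normalize_company_name(raw: str) -> str:
    """Trim and collapse internal whitespace; preserve the user's casing."""
    return " ".join(raw.split())

args = {"company_name": normalize_company_name("  ABC   Plumbing ")}
```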
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds valuable behavioral context by disclosing the return value (tenant_id) since no output schema exists. However, lacks disclosure of safety profile (read-only status), error conditions (company not found), or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences. First establishes purpose, second provides critical workflow sequencing. No waste, every word earns its place. Front-loaded with the action verb.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately complete for a simple single-parameter lookup tool. Compensates for missing output schema by mentioning tenant_id return value. Could improve with error handling guidance (e.g., what happens if company not found) but covers primary happy-path workflow sufficiently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good examples ('ABC Plumbing', 'ProTech HVAC'). Description aligns with schema ('Find a company by name') but adds no additional semantic detail about validation rules, fuzzy matching, or case sensitivity beyond what the schema provides. Baseline 3 appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Find' with clear resource 'company'. Distinguishes from sibling search_knowledge_base by stating the goal is to 'access their knowledge base' and specifically mentioning the return of 'tenant_id' for subsequent use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states workflow ordering: 'Use this first before searching a company's knowledge base'. Clearly identifies the prerequisite relationship with sibling tool search_knowledge_base and explains why (to get the tenant_id).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rate_knowledge_base_answer: A

Provide feedback on a knowledge base answer to help improve the system. Use this after receiving an answer from search_knowledge_base to rate its quality and helpfulness.

Parameters (JSON Schema)

- rating (required): Rating from 1 (poor) to 5 (excellent)
- comment (optional): Feedback comment to provide context
- helpful (required): Whether the answer was helpful (true or false)
- query_id (required): Query ID from the search_knowledge_base response
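
Since the tool is a write operation with no documented failure behavior, a client may want to validate the arguments against the documented constraints (rating 1 to 5, boolean helpful, query_id from a prior search) before calling it. The validator below is a hypothetical helper, not part of the server.

```python
# Illustrative client-side validation for rate_knowledge_base_answer
# arguments, matching the documented parameter constraints.

def validate_rating_args(args: dict) -> list:
    """Return a list of problems; an empty list means the args look valid."""
    errors = []
    rating = args.get("rating")
    if not isinstance(rating, int) or not 1 <= rating <= 5:
        errors.append("rating must be an integer from 1 (poor) to 5 (excellent)")
    if not isinstance(args.get("helpful"), bool):
        errors.append("helpful must be true or false")
    if not args.get("query_id"):
        errors.append("query_id from the search_knowledge_base response is required")
    return errors

ok = validate_rating_args({"rating": 4, "helpful": True, "query_id": "q-456"})
bad = validate_rating_args({"rating": 9, "helpful": "yes"})
```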
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions feedback helps 'improve the system' but lacks critical behavioral details: what happens to ratings (stored?), side effects on future searches, idempotency, or confirmation of write operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with logical flow: purpose first, usage context second. Minor waste in 'to help improve the system' which adds no concrete information. Otherwise tight and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a feedback tool: links clearly to search_knowledge_base sibling for query_id sourcing, all 4 parameters are self-documenting in schema. Lacks output description but no output schema exists. Suitable completeness given tool simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (rating, helpful, query_id, comment all documented). Description mentions 'rate its quality and helpfulness' which loosely maps to parameters, but doesn't add syntax, formats, or constraints beyond the schema. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Provide feedback' and resource 'knowledge base answer'. Explicitly links to sibling 'search_knowledge_base' establishing workflow context, though 'to help improve the system' is slightly vague corporate filler.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance: 'Use this after receiving an answer from search_knowledge_base' clearly establishes when to use vs the sibling retrieval tool. Also explains purpose (rate quality and helpfulness).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_knowledge_base: A

Search a company's knowledge base for policies, procedures, and documentation. First use lookup_company to get the tenant_id, then use this tool to search their knowledge base.

Parameters (JSON Schema)

- query (required): The search query for embedding matching. Can be keywords extracted from the user's question if you believe it improves search results.
- tenant_id (required): Company tenant ID from lookup_company
- original_question (optional): The user's original question, verbatim. Always pass this for logging purposes. Do NOT extract keywords - pass the complete question as the user asked it. Example: 'How do I create an invite link for my fantasy football league?'
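
The split between `query` (may be extracted keywords) and `original_question` (must stay verbatim) can be sketched as below. The stopword-stripping keyword extractor is a toy illustration of what an agent might do, not what the server does internally.

```python
# Sketch of search_knowledge_base arguments per the parameter docs:
# 'query' may be keywords for embedding matching, while
# 'original_question' is passed verbatim for logging.

STOPWORDS = {"how", "do", "i", "my", "a", "an", "the", "for"}

def extract_keywords(question: str) -> str:
    """Toy keyword extraction: lowercase, strip punctuation, drop stopwords."""
    words = [w.strip("?.,!").lower() for w in question.split()]
    return " ".join(w for w in words if w and w not in STOPWORDS)

question = "How do I cancel my service?"
args = {
    "tenant_id": "t-123",                 # from lookup_company
    "query": extract_keywords(question),  # keywords for embedding matching
    "original_question": question,        # verbatim, never keyword-stripped
}
```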
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions searchable content types (policies, procedures, documentation) but omits return format, result cardinality, error behaviors, or rate limits. Adequate but incomplete behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Purpose stated first, prerequisites second. No redundancy with schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema and no annotations, the 100% schema coverage ensures parameters are documented. Description covers the critical invocation dependency chain (lookup_company prerequisite). Lacks description of search results format, which would be helpful given no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). Description adds critical workflow context explaining tenant_id dependency on lookup_company, which aids correct parameter sourcing. Query embedding logic is documented in schema property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Search' + resource 'knowledge base' + content scope 'policies, procedures, and documentation' clearly defines the operation. Distinct from sibling lookup_company (which resolves tenant_id) and create_support_ticket.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit prerequisite workflow: 'First use lookup_company to get the tenant_id, then use this tool.' Provides clear sequencing and implicitly defines when-not-to-use (without prior lookup).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
