pearl-api-mcp-server

Server Details

Hybrid human + AI expertise for faster, trusted answers and decisions via MCP Server.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 3/5

The tools have overlapping purposes that could cause confusion, particularly between askExpert and askPearlExpert which both involve human experts and phone callbacks. However, the descriptions provide clear guidance on when to use each tool, helping to mitigate misselection. The distinctions are based on user intent and escalation paths rather than purely functional differences.

Naming Consistency: 4/5

The tool names follow a consistent camelCase verb-noun pattern (askExpert, askPearlAi, askPearlExpert, verifyAnswer), with 'ask' as the common verb for three tools. The naming is mostly predictable, though verifyAnswer deviates slightly from the 'ask' pattern, a minor inconsistency. Overall, the naming convention is clear and readable.

Tool Count: 5/5

With 4 tools, the server is well-scoped for its purpose of managing expert and AI interactions, covering key workflows from AI assistance to human escalation and validation. Each tool has a distinct role in the process, and the count is appropriate without being too sparse or overwhelming for the domain.

Completeness: 4/5

The tool set covers the core lifecycle of query handling: AI responses (askPearlAi), human expert intake (askExpert and askPearlExpert), and answer validation (verifyAnswer). A minor gap exists in not having a tool for managing or tracking ongoing expert interactions, but agents can work around this with the provided tools. The surface is largely complete for the stated purpose.
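The escalation paths this review describes can be sketched as a small routing function. This is purely illustrative and not part of the server; the decision rules below paraphrase the four tools' stated usage guidelines.

```python
# Illustrative routing sketch based on the four tools' documented
# usage guidelines. Not part of the server itself.

SAFETY_CRITICAL = {"medical", "legal", "financial", "safety"}

def route(question_domain: str, wants_human: bool, is_complex: bool,
          verifying_answer: bool) -> str:
    """Pick a tool name following the server's documented guidance."""
    if verifying_answer:
        return "verifyAnswer"      # validate an existing AI answer
    if wants_human:
        return "askExpert"         # user explicitly asked for a human
    if is_complex or question_domain in SAFETY_CRITICAL:
        return "askPearlExpert"    # AI intake, then human escalation
    return "askPearlAi"            # rapid AI answer for low-risk topics

# Example: a low-risk exploratory question goes to the AI tool.
print(route("travel", wants_human=False, is_complex=False,
            verifying_answer=False))  # -> askPearlAi
```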

Available Tools

4 tools
askExpert (Grade: A)

Use this when the user explicitly asks to speak with a real human expert, needs personalized advice in a complex or sensitive domain, or says something like 'Can I talk to a real expert?'. Supports phone callback — pass phoneNumber and contactPreference='phone' if the user wants a call.

Parameters (JSON Schema):
- question (required): The user's question
- sessionId (optional): Session ID for continuing a conversation
- chatHistory (optional): Conversation history; ensures experts see the complete context
- phoneNumber (optional): Customer's phone number for expert callback in E.164 format (e.g., +15551234567). Only pass when the user explicitly provides it.
- contactPreference (optional): Customer's preferred contact method. Set to 'phone' when the user wants a phone callback.
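As a concrete illustration, an MCP tools/call request for askExpert with a phone callback might look like the following. The JSON-RPC envelope is the standard MCP shape; the question, ID, and phone number are made-up values.

```python
import json

# Hypothetical MCP tools/call request for askExpert with a phone
# callback. The argument values are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "askExpert",
        "arguments": {
            "question": "Is this contract clause enforceable?",
            "phoneNumber": "+15551234567",   # E.164 format
            "contactPreference": "phone",    # user asked for a call
        },
    },
}
print(json.dumps(request, indent=2))
```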
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it facilitates human expert connection (not AI), supports phone callbacks with specific parameter usage, and handles sensitive/complex domains. Could improve by mentioning response time, expert availability, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly focused sentences with zero waste. First sentence covers purpose and usage guidelines, second provides specific parameter implementation guidance. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does well covering purpose, usage, and key behavioral aspects. Could be more complete by describing what happens after invocation (e.g., expert response format, timing expectations) since there's no output schema to provide this information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds some value by explaining when to use phoneNumber and contactPreference='phone' for callback functionality, but doesn't provide additional semantic context beyond what's already well-documented in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to connect users with human experts for personalized advice in complex/sensitive domains. It specifies the verb 'speak with a real human expert' and distinguishes from AI-based alternatives like 'askPearlAi' by emphasizing human expertise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: when users ask for human experts, need personalized advice in complex/sensitive domains, or use phrases like 'Can I talk to a real expert?'. Also provides specific guidance on when to pass phone parameters for callback functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

askPearlAi (Grade: A)

Use this when the user wants a rapid AI-generated answer, draft, or alternative perspective on a low-risk or exploratory topic that does not require human validation. Do not use for medical, legal, financial, or safety-critical questions — use askExpert or askPearlExpert instead.

Parameters (JSON Schema):
- question (required): The user's question
- sessionId (optional): Session ID for continuing a conversation
- chatHistory (optional): Conversation history; ensures experts see the complete context
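The schemas say chatHistory is an ordered list (oldest first, per the verifyAnswer schema) but do not pin down the message shape. The role/content structure below is an assumption for illustration only, and the sessionId value is hypothetical.

```python
# Hypothetical chatHistory payload. The schemas only say it is an
# ordered list (oldest first); the role/content shape is an assumption.
chat_history = [
    {"role": "user", "content": "What's a good beginner telescope?"},
    {"role": "assistant", "content": "A 6-inch Dobsonian is a common pick."},
]

arguments = {
    "question": "Would that work for astrophotography too?",
    "sessionId": "sess-123",  # hypothetical session ID
    "chatHistory": chat_history,
}
print(len(arguments["chatHistory"]))  # -> 2
```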
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's for 'rapid' responses and 'does not require human validation,' which helps the agent understand speed and reliability expectations. However, it doesn't mention potential limitations like response length, format, or error handling.

Conciseness: 5/5

The description is highly concise and front-loaded: the first sentence defines the purpose and usage context, and the second provides critical exclusions and alternatives. Every sentence earns its place with no wasted words.

Completeness: 4/5

Given the tool's moderate complexity (AI response generation) and lack of annotations/output schema, the description is fairly complete. It covers purpose, usage boundaries, and alternatives, though it could benefit from more detail on behavioral aspects like response format or limitations to fully compensate for missing structured data.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score of 3 for adequate coverage without extra value.

Purpose: 4/5

The description clearly states the tool's purpose: providing a 'rapid AI-generated answer, draft, or alternative perspective' on 'low-risk or exploratory' topics. It specifies the resource (AI-generated content) and verb (ask/answer), though it doesn't explicitly distinguish the tool from all siblings beyond the exclusions mentioned.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use ('low-risk or exploratory topics') and when not to use ('medical, legal, financial, or safety-critical questions'), with clear alternatives named ('askExpert or askPearlExpert'). This directly helps the agent choose between sibling tools.

askPearlExpert (Grade: A)

Use this when the problem is complex, ambiguous, high-stakes, or multidisciplinary and would benefit from AI intake followed by escalation to a human expert. Do not use for simple fact queries (use askPearlAi) or when the user explicitly requests a human directly (use askExpert). Supports phone callback — pass phoneNumber and contactPreference='phone' if the user wants a call.

Parameters (JSON Schema):
- question (required): The user's question
- sessionId (optional): Session ID for continuing a conversation
- chatHistory (optional): Conversation history; ensures experts see the complete context
- phoneNumber (optional): Customer's phone number for expert callback in E.164 format (e.g., +15551234567). Only pass when the user explicitly provides it.
- contactPreference (optional): Customer's preferred contact method. Set to 'phone' when the user wants a phone callback.
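Both callback-capable tools expect phoneNumber in E.164 format (e.g., +15551234567). A loose client-side sanity check, assuming the standard '+' prefix followed by 8 to 15 digits with no leading zero, might look like this; the server's actual validation rules are not documented here.

```python
import re

# Loose E.164 sanity check: leading '+', then 8-15 digits, first digit
# nonzero. The server's actual validation may differ.
E164 = re.compile(r"^\+[1-9]\d{7,14}$")

def looks_like_e164(number: str) -> bool:
    return bool(E164.fullmatch(number))

print(looks_like_e164("+15551234567"))  # True
print(looks_like_e164("555-1234"))      # False
```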
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: escalation to a human expert, phone callback capability, and the intake process. However, it lacks details about response time, expert availability, cost implications, or what happens after escalation.

Conciseness: 4/5

The description is appropriately sized at three sentences: the first establishes when to use the tool, the second distinguishes it from siblings, and the third explains phone callback support. Each earns its place, though the escalation process could be more explicit.

Completeness: 3/5

For a tool with 5 parameters, no annotations, and no output schema, the description provides good usage guidance but lacks details about the escalation workflow, the expected response format, or what users can expect after invocation. It covers when to use the tool but not what happens during or after use.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, mentioning phoneNumber and contactPreference only in the context of phone callbacks. The baseline of 3 is appropriate when the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool's purpose: to handle complex, ambiguous, high-stakes, or multidisciplinary problems by escalating to a human expert after AI intake. It specifies the verb 'escalation' and the resource 'human expert', but doesn't fully distinguish the tool from its sibling askExpert beyond user preference.

Usage Guidelines: 5/5

The description provides explicit usage guidelines: use it for complex, ambiguous, high-stakes, or multidisciplinary problems; do not use it for simple fact queries (use askPearlAi) or when the user explicitly requests a human directly (use askExpert). It also mentions phone callback support with specific parameter conditions.

verifyAnswer (Grade: B)

Use this when a professional needs to validate the correctness, safety, or trustworthiness of a specific AI-generated answer, or when the user asks to have an answer double-checked by a real expert.

Parameters (JSON Schema):
- answer (required): The AI-generated answer that requires human verification
- sessionId (optional): Existing session ID to continue; generated if omitted
- chatHistory (optional): Prior messages for context (ordered, oldest first)
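One natural pairing this tool enables: draft with askPearlAi, then submit the draft for human validation. The sketch below assumes a generic call_tool helper standing in for a real MCP client, and all values are invented; it is illustrative only.

```python
# Hypothetical workflow: draft with askPearlAi, then verify the draft.
# call_tool is a stand-in for whatever MCP client you use; here it just
# echoes its inputs so the flow can be followed end to end.

def call_tool(name: str, arguments: dict) -> dict:
    """Stand-in for an MCP client's tools/call; echoes a fake result."""
    return {"tool": name, "arguments": arguments}

draft = call_tool("askPearlAi",
                  {"question": "How long do solar panels last?"})
# Pass the AI-generated text on for expert validation, reusing a session.
check = call_tool("verifyAnswer", {
    "answer": "Most solar panels last 25-30 years.",
    "sessionId": "sess-123",  # hypothetical; generated if omitted
})
print(draft["tool"], "->", check["tool"])  # -> askPearlAi -> verifyAnswer
```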
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions validation by 'real experts,' hinting at human involvement, but lacks details on permissions, response time, rate limits, or what constitutes 'correctness, safety, or trustworthiness.' This is inadequate for a tool that likely involves external verification processes.

Conciseness: 5/5

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and usage context. It is front-loaded with key information and has no wasted words, making it easy for an agent to parse quickly.

Completeness: 2/5

Given the complexity of verification involving experts, and with no output schema or annotations, the description is incomplete. It does not explain what the tool returns (e.g., validation result, expert feedback), potential errors, or behavioral constraints like authentication needs. This leaves significant gaps for an agent to understand the tool's full context.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all parameters (answer, sessionId, chatHistory). The description adds minimal semantic value beyond the schema: it only implies that 'answer' is the AI-generated content to verify, without explaining how sessionId or chatHistory affect the verification process. The baseline of 3 is appropriate given high schema coverage.

Purpose: 4/5

The description clearly states the tool's purpose: to validate AI-generated answers for correctness, safety, or trustworthiness, or to double-check answers with experts. It specifies the verb 'validate' and the resource 'AI-generated answer,' but does not explicitly differentiate the tool from siblings like askExpert, askPearlAi, or askPearlExpert, which may have overlapping verification functions.

Usage Guidelines: 4/5

The description provides clear context for when to use the tool: when a professional needs validation or a user requests double-checking. It implies usage for AI-generated answers specifically, but does not explicitly state when not to use it or name alternatives among sibling tools, leaving some ambiguity about tool selection.
