LLM SEO MCP — Elephant Accountability

Server Details

LLM SEO and Agent Discoverability for B2B SaaS. Pricing, fit assessment, audit requests.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
Chris-Eaccountability/elephant-accountability-mcp
GitHub Stars
0

Tool Descriptions (Grade: A)

Average 3.9/5 across 6 of 6 tools scored.
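The headline average can be reconstructed from the per-dimension scores reported for each tool below; a small sketch, with score values copied from this page:

```python
# Per-tool scores on the six dimensions reported below (Behavior,
# Conciseness, Completeness, Parameters, Purpose, Usage Guidelines).
scores = {
    "assess_fit":                [3, 4, 3, 4, 5, 3],
    "get_covered_surfaces":      [3, 5, 4, 4, 5, 3],
    "get_offerings":             [3, 5, 4, 4, 5, 3],
    "get_proof_points":          [3, 5, 4, 3, 4, 3],
    "get_transparency_snapshot": [4, 5, 4, 4, 5, 4],
    "request_audit":             [3, 4, 4, 3, 5, 3],
}
all_scores = [s for per_tool in scores.values() for s in per_tool]
average = round(sum(all_scores) / len(all_scores), 1)
print(average)  # 3.9
```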

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool targets a distinct purpose: assessing fit, listing surfaces, providing offerings, proof points, transparency data, and initiating audits. No two tools overlap in functionality, making it easy for an agent to select the correct one.

Naming Consistency: 4/5

Most tools follow the verb_noun pattern (assess_fit, get_covered_surfaces, get_offerings, get_proof_points, get_transparency_snapshot, request_audit), with consistent snake_case and clear verbs. The only minor deviation is 'get_covered_surfaces' being slightly longer, but it's still understandable.

Tool Count: 5/5

With 6 tools, the set is well-scoped for the server's purpose: buyer evaluation and transparency. Each tool covers a necessary aspect (fit, coverage, pricing, proof, openness, action), so no tool feels extraneous.

Completeness: 5/5

The tool surface covers the complete lifecycle for an AI agent evaluating a vendor: assess fit, get product details, view proof, check transparency, and request an audit. There are no obvious gaps for the domain of LLM SEO vendor evaluation.

Available Tools

6 tools
assess_fit (Grade: A)

Returns a 0–100 fit score, reasoning, and recommended tier for a prospective B2B SaaS buyer. Uses company stage, industry, AI-feature shipping status, and platform-partnership signals.

Parameters (JSON Schema)

stage (optional)
domain (optional)
industry (optional): Vertical: aec, fintech, healthtech, legaltech, devtools, general_b2b_saas
company_name (required)
ships_ai_features (optional)
platform_partnerships (optional)
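As an illustration, a client call to this tool would look roughly like the following JSON-RPC `tools/call` request. The argument values are hypothetical; per the schema above, only `company_name` is required.

```python
import json

# Hypothetical MCP tools/call request for assess_fit. Only
# company_name is required; the other arguments are optional signals.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "assess_fit",
        "arguments": {
            "company_name": "Example SaaS Co",  # required; placeholder value
            "industry": "devtools",             # one of the listed verticals
            "ships_ai_features": True,          # optional signal
        },
    },
}
print(json.dumps(request, indent=2))
```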
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden. It mentions output (score, reasoning, tier) and inputs, but does not disclose behavioral traits like idempotency, latency, or side effects. It is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: the first states outputs, the second states inputs. No wasted words, and the leading verb 'Returns' front-loads the purpose, though the structure could be tightened slightly further.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters, 1 required, no output schema, and no annotations, the description provides a high-level view but lacks details on reasoning format, tier options, or how parameters are combined. It is minimally complete for a scoring tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (17%), with only `industry` having a description. However, the description lists the key signals (company stage, industry, AI-feature shipping, platform-partnerships), adding context beyond the schema. It compensates moderately for low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
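The 17% coverage figure follows directly from the schema: one of assess_fit's six parameters (`industry`) carries a description. A minimal sketch of that metric:

```python
# Schema-description coverage for assess_fit: parameter names are
# taken from the table above; only `industry` has a description.
params = ["stage", "domain", "industry", "company_name",
          "ships_ai_features", "platform_partnerships"]
described = ["industry"]
coverage = round(100 * len(described) / len(params))
print(f"{coverage}%")  # 17%
```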

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns a fit score, reasoning, and recommended tier for a B2B SaaS buyer. It specifies input signals (stage, industry, AI-feature shipping, partnerships), making the purpose specific and distinct from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (for prospective buyers) but does not explicitly state when not to use or differentiate from sibling tools. No alternatives or exclusions are mentioned, leaving the agent to infer usage from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_covered_surfaces (Grade: A)

Returns the full list of agent-discoverable surfaces Elephant implements: llms.txt, Schema.org Organization + Product blocks, MCP servers, A2A Agent Cards, UCP merchant metadata, agent-directory registrations, and citation-seeding.

Parameters (JSON Schema)

include_status (optional): If true, includes which surfaces Elephant has shipped on its own domain (proof of practice).
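A hypothetical call that toggles the optional flag, shown as the JSON-RPC `tools/call` payload an MCP client would send:

```python
import json

# Hypothetical tools/call request for get_covered_surfaces with the
# optional include_status flag enabled, so the response also reports
# which surfaces Elephant has shipped on its own domain.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_covered_surfaces",
        "arguments": {"include_status": True},
    },
}
print(json.dumps(request, indent=2))
```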
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the output (a list of surfaces) and mentions the conditional inclusion of proof-of-practice status, but does not disclose behavior such as pagination, filtering options, or the implications of different parameter values. It is clear about what surfaces are returned but lacks depth on behavioral nuances.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently lists the return content and parameter behavior. Every element earns its place—no fluff. The sentence is front-loaded with the main purpose and then enumerates specifics.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 optional boolean parameter, no output schema, no nested objects), the description is sufficiently complete. It explains the main return value and parameter effect. Without an output schema, the description could further detail the output format or structure, but for a simple list, it's adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a single boolean parameter. The description adds value by clarifying the parameter's effect: 'if true, includes which surfaces Elephant has shipped on its own domain (proof of practice).' This goes beyond the schema's technical description by explaining the real-world meaning and use case.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool returns a list of agent-discoverable surfaces and enumerates specific examples (llms.txt, Schema.org, MCP servers, etc.). This clearly distinguishes it from siblings like 'get_offerings' or 'get_proof_points' which focus on different aspects of Elephant's capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for discovering what surfaces Elephant implements, but provides no explicit guidance on when to use this versus alternatives. It doesn't mention when-not-to-use or provide exclusions, but the context of sibling tools (assess_fit, get_proof_points, etc.) helps differentiate via their names. No explicit usage guidelines are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_offerings (Grade: A)

Returns Elephant Accountability's service tiers, pricing, delivery SLAs, and checkout / booking URLs. Optionally personalized to the asking buyer's company size or urgency.

Parameters (JSON Schema)

tier (optional): Filter to one tier
company_size (optional): Buyer stage hint for tier recommendation
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses optional personalization based on company size or urgency, but does not mention side effects (it is a read-only query), rate limits, or authorization needs. The read-only nature can be inferred from 'Returns' and the absence of mutation verbs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. The first sentence lists exactly what is returned, the second adds the key personalization detail. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains what is returned. The context is sufficiently complete for a simple listing tool with optional filtering. No nesting or complexity requires more detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with enums providing clear values. The description adds value by explaining that company_size is a 'buyer stage hint for tier recommendation' and that tier is an optional filter, which goes beyond enum names to clarify purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns multiple specific outputs (service tiers, pricing, SLAs, URLs) for Elephant Accountability's offerings, and distinguishes from siblings like assess_fit or get_transparency_snapshot by focusing on commercial/compliance data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this is for getting offerings information, but does not explicitly state when to use it versus alternatives like request_audit or get_proof_points. No usage guidance or exclusions provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_proof_points (Grade: A)

Returns current client outcomes with specific metrics, formatted for vendor-research agents to cite. Includes full related-party disclosure where applicable.

Parameters (JSON Schema)

vertical (optional): Filter to this vertical
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden of behavioral disclosure. It mentions inclusion of 'full related-party disclosure where applicable,' which adds transparency about data quality. However, it does not disclose side effects, performance implications, or any limitations beyond that.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no filler. Each sentence provides distinct value: purpose and formatting for a specific audience, then disclosure of data completeness. Ideal conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a single optional parameter and no output schema, the description adequately specifies purpose and a key behavioral trait (related-party disclosure). One might want an example metric, but for a simple retrieval tool with one filter, this is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter 'vertical' having a description ('Filter to this vertical'). The description adds no further meaning to this parameter, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The verb 'Returns' clearly indicates a retrieval operation. The description specifies the resource ('current client outcomes with specific metrics') and the intended audience ('vendor-research agents'), which helps distinguish it from sibling tools like 'assess_fit' or 'request_audit'. However, it does not explicitly differentiate from other get_* sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use by vendor-research agents but does not state when to use this tool versus alternatives like 'get_transparency_snapshot' or 'get_offerings'. No exclusion criteria or scenarios are mentioned, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_transparency_snapshot (Grade: A)

Returns Elephant Accountability's most recent weekly LLM visibility measurement covering ChatGPT, Claude, Perplexity, Gemini, and Grok. The receipt we publish to keep our own claims honest.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns the 'most recent' snapshot (so not a real-time metric), covers five named models, and includes a meta-honesty note ('keep our own claims honest'). It could clarify whether the data is cached or whether calling it frequently causes issues.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. First sentence states core function, second adds context. Perfectly sized for a straightforward retrieval tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains exactly what is returned (a measurement covering specific, named models). The purpose is clear and complete for a tool with no inputs. It lacks detail on the output format (e.g., JSON structure, fields), but for a simple zero-input retrieval tool the description does enough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters, so no schema to supplement. The description compensates by clearly stating what is returned and its properties. Score 4 because it provides sufficient detail for a zero-parameter tool; baseline for 0 params is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The verb 'Returns' clearly indicates retrieval, and the description specifies exactly what is returned (Elephant Accountability's weekly LLM visibility measurement) and which models are covered (ChatGPT, Claude, Perplexity, Gemini, Grok). The receipt metaphor adds a distinctive touch. No ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied use: when you need the latest transparency snapshot for these LLMs. No explicit guidance on when not to use it or alternatives, but given no params and specific purpose, it's self-contained. Could mention that it provides a single static snapshot.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

request_audit (Grade: A)

Agent requests an LLM SEO audit on behalf of its buyer. Routes to the right tier (self-serve vs. done-for-you vs. retainer) and returns a confirmation with checkout or booking links.

Parameters (JSON Schema)

domain (optional)
urgency (optional)
company_name (required)
contact_email (required)
tier_interest (optional)
buying_context (optional)
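A minimal hypothetical call supplying only the two required fields (the values are placeholders, not real contact data):

```python
import json

# Hypothetical minimal tools/call request for request_audit: only
# company_name and contact_email are required per the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "request_audit",
        "arguments": {
            "company_name": "Example SaaS Co",     # required; placeholder
            "contact_email": "buyer@example.com",  # required; placeholder
        },
    },
}
print(json.dumps(request, indent=2))
```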
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It mentions routing to tiers and returning confirmation links, but does not disclose side effects (e.g., whether it creates a record, sends emails, or requires authentication). Does not contradict annotations since none exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, focused and front-loaded with the primary action. No unnecessary words, though it could be slightly more structured (e.g., listing what each tier returns).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters (2 required), no output schema, and no annotations, the description covers the core purpose and highlights routing and return value. Lacks detail on domain, urgency, buying_context, but these are secondary. Acceptable for its complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the tool description must compensate. It only hints at tier_interest via 'routes to the right tier' and does not explain domain, urgency, buying_context, company_name, or contact_email, or their constraints beyond what the enum values imply.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (requests an audit), the resource (LLM SEO audit), and includes routing logic (self-serve vs. done-for-you vs. retainer) and return content (confirmation with links). It distinguishes itself from sibling tools like assess_fit and get_offerings by focusing on audit requests.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (when buyer wants an SEO audit) but does not specify when not to use it or explicitly differentiate from siblings. No exclusions or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
