Glama

Server Details

Deterministic compliance and vertical knowledge bases for autonomous agents. Free 24hr trial.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: AgentModule/mcp
GitHub Stars: 1
Server Listing: Agent Module

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: check_status monitors API health, get_trial_key and join_waitlist handle access requests, query_knowledge retrieves content, register_interest signals demand for new features, submit_pov collects feedback, and submit_referral logs referrals. The descriptions are specific and unambiguous, making tool selection straightforward for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., check_status, get_trial_key, join_waitlist), using snake_case throughout. This predictability enhances readability and usability, with no deviations or mixed conventions across the set.

Tool Count: 5/5

With 7 tools, the count is well-scoped for the server's purpose of managing an AI compliance and content platform. Each tool serves a unique function in areas like access control, content retrieval, user engagement, and feedback, ensuring a balanced and efficient toolset without redundancy or gaps.

Completeness: 5/5

The toolset provides complete coverage for the domain, including API status checks, trial and paid access management, knowledge querying, demand signaling, feedback submission, and referral tracking. It supports the full user lifecycle from exploration to engagement, with no obvious missing operations that would hinder agent workflows.

Available Tools

7 tools
check_status: A
Read-only, Idempotent

Check Agent Module API operational status, version, cohort counts, and seat availability.

Parameters (JSON Schema)

No parameters
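Since the server is exposed over Streamable HTTP, each tool invocation is ultimately a JSON-RPC 2.0 `tools/call` request. A minimal sketch of building that request body for check_status (the MCP session handshake and HTTP transport are omitted; the helper name is illustrative):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request body, as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# check_status takes no parameters, so the arguments object is empty.
body = build_tool_call("check_status", {})
print(body)
```

The same envelope works for every tool below; only the name and arguments change.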

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish this is read-only and idempotent. The description adds valuable behavioral context by specifying exactly what data is retrieved (operational status, version, cohort counts, seat availability), effectively compensating for the missing output schema. It does not mention rate limits or response format, but covers the essential scope.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is perfectly front-loaded with the action verb and contains zero waste. Every term ('operational status', 'version', 'cohort counts', 'seat availability') serves to specify the scope of the check without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters) and strong annotations, the description is nearly complete. It compensates for the missing output schema by enumerating the four data points returned. A perfect score would require specifying the response format or structure, but this is sufficient for a health-check endpoint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters in the input schema, the baseline score applies. The description appropriately requires no parameter clarification since the tool accepts no arguments.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') followed by precise resources ('Agent Module API operational status, version, cohort counts, and seat availability'). It clearly distinguishes this diagnostic tool from transactional siblings like 'submit_referral' or 'join_waitlist' by focusing on data retrieval rather than mutation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies this is a status/health check operation, it lacks explicit guidance on when to invoke it (e.g., 'call before registration to verify seat availability') or when to prefer alternatives. Usage must be inferred from the 'check' verb and sibling context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trial_key: A

Request a free 24-hour trial key. Unlocks all 4 content layers on the chosen vertical. 500-call cap. No payment required.

Parameters (JSON Schema)
- agent_id (required): Stable identifier for your agent.
- vertical (optional): Which vertical to trial. Defaults to ethics if omitted.
Behavior: 5/5

Superb disclosure beyond annotations: specifies temporal limits (24-hour), quota constraints (500-call cap), feature scope (all 4 content layers), and cost model (free). Annotations only indicate non-read-only status; the description provides essential operational context for a time-bound resource.

Conciseness: 5/5

Four tight sentences with zero waste. Front-loaded with the core action, followed by scope, limits, and cost. Every clause delivers distinct value regarding the trial's characteristics.

Completeness: 4/5

Rich operational detail compensates for the lack of an output schema by implying the trial key is returned. Could improve by explicitly stating the return format or idempotency behavior (annotations indicate non-idempotent), but adequately covers the mutation's behavioral contract.

Parameters: 3/5

Schema coverage is 100%, with agent_id and vertical fully documented. The description does not mention the parameters, but the baseline 3 is appropriate given high schema coverage per the rubric. No additional semantic context is provided for why agent_id is required.

Purpose: 5/5

Excellent specific verb ('Request') and resource ('free 24-hour trial key') with clear scope (the chosen vertical, defaulting to ethics). Implicitly distinguishes from siblings like join_waitlist or register_interest by emphasizing immediate 'trial key' access versus waiting or expressing interest.

Usage Guidelines: 3/5

Provides implied usage through constraints ('24-hour', '500-call cap', 'No payment required'), indicating when to use (temporary evaluation). However, lacks explicit when-not guidance or direct comparison to alternatives like join_waitlist for unavailable trials.

join_waitlist: A
Idempotent

Register for a paid vertical waitlist. Inaugural cohort: $19/mo, 900 members, grandfathered for life. AI Compliance included with every membership.

Parameters (JSON Schema)
- contact (optional): Contact email for waitlist notifications and key delivery.
- agent_id (required): Your agent identifier.
- vertical (required): Paid vertical to join (travel, financial-services, healthcare-rcm, real-estate, logistics, regulatory-compliance, manufacturing, ecommerce, revops, hrm, software-engineering, customer-service, financial-analysis, medical-analysis, legal).
Behavior: 4/5

Annotations declare idempotentHint=true and non-destructive behavior. The description adds valuable business context beyond annotations: pricing ($19/mo), cohort limits (900 members), grandfathering status, and included benefits (AI Compliance). It does not contradict technical annotations.

Conciseness: 5/5

Three sentences efficiently structured: action definition, pricing/cohort details, and membership benefits. Every sentence earns its place with zero waste. Front-loaded with the core action 'Register'.

Completeness: 4/5

Given the paid nature of the tool, the description adequately covers cost and benefits but lacks clarity on billing timing (immediate charge vs upon waitlist acceptance). No output schema exists, but the description explains what the user receives (membership benefits) rather than return data structure.

Parameters: 3/5

With 100% schema description coverage, the schema fully documents all three parameters (contact, agent_id, vertical with enumerated options). The description adds no parameter-specific guidance, but baseline 3 is appropriate since the schema carries the full burden.

Purpose: 5/5

The description clearly states 'Register for a paid vertical waitlist' with specific verb and resource. The inclusion of pricing details ($19/mo) and 'paid' designation effectively distinguishes this from the sibling tool 'register_interest', clarifying this is a paid commitment rather than casual interest.

Usage Guidelines: 3/5

While the pricing context ($19/mo) implies this is for serious commitments versus free alternatives, the description lacks explicit guidance on when to use this versus siblings like 'register_interest' or 'check_status'. No prerequisites or exclusions are stated.

query_knowledge: A
Read-only, Idempotent

Retrieve structured knowledge from Agent Module verticals. Returns deterministic, validated knowledge nodes. Index layer always free. All 4 content layers available via trial key on ethics.

Parameters (JSON Schema)
- node (optional): Specific node ID to retrieve. Omit for root index.
- token (optional): Membership or trial key (am_live_, am_test_, or am_trial_ prefix). Required for content layers on gated verticals.
- vertical (required): Knowledge vertical to query.
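Because token is only honored with an am_live_, am_test_, or am_trial_ prefix, a client can validate a key before spending a call. A small pre-check sketch (the helper name is illustrative):

```python
VALID_PREFIXES = ("am_live_", "am_test_", "am_trial_")

def is_valid_token(token: str) -> bool:
    """Check that a key carries one of the documented Agent Module prefixes."""
    return token.startswith(VALID_PREFIXES)

print(is_valid_token("am_trial_abc123"))  # True
print(is_valid_token("sk_live_abc123"))   # False
```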
Behavior: 4/5

Adds valuable behavioral context beyond annotations by disclosing 'deterministic, validated' return characteristics and the layered access model (free index vs. gated content layers). Accurately aligns with readOnlyHint=true and destructiveHint=false through the 'Retrieve' operation.

Conciseness: 5/5

Four sentences with zero waste: purpose ('Retrieve...'), return value ('Returns deterministic...'), access tier 1 ('Index layer always free'), and access tier 2 ('All 4 content layers...'). Information is front-loaded and efficiently structured.

Completeness: 4/5

Comprehensively covers the tool's access control complexity and return semantics despite lacking an output schema. Minor gap: it does not explicitly describe the root index behavior when 'node' is omitted, though this is documented in the parameter schema.

Parameters: 3/5

With 100% schema description coverage, the baseline is appropriately 3. The description provides contextual mapping ('Agent Module verticals', 'trial key') that aligns with the schema but does not add syntax details, validation rules, or parameter interaction logic beyond what the structured schema already documents.

Purpose: 5/5

The description opens with the specific verb 'Retrieve' and resource 'structured knowledge from Agent Module verticals', clearly distinguishing it from sibling tools like get_trial_key or submit_referral which handle access management rather than knowledge retrieval.

Usage Guidelines: 4/5

Provides clear access-control guidance stating 'Index layer always free' and that 'All 4 content layers available via trial key', implicitly defining when the token parameter is required. However, it lacks explicit comparison to siblings like check_status for determining system readiness before querying.

register_interest: A
Idempotent

Register demand for an unbuilt vertical. 500 signals trigger build queue activation. Include a contact channel so we can notify you when the vertical ships.

Parameters (JSON Schema)
- contact (optional): How to reach you when this vertical ships. String (email, webhook URL, agent card URL) or object { type, value, label }. Supported types: email, webhook, a2a, mcp, slack, discord, whatsapp, telegram, other.
- agent_id (optional): Your agent identifier.
- use_case (optional): Brief description of how you would use this vertical.
- vertical (required): Vertical slug (e.g. "legal-contracts", "api-security").
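The contact parameter accepts either a plain string or a { type, value, label } object. A hedged sketch of client-side normalization into the object form (the string-classification heuristics here are assumptions, not documented server behavior):

```python
SUPPORTED_TYPES = {"email", "webhook", "a2a", "mcp", "slack",
                   "discord", "whatsapp", "telegram", "other"}

def normalize_contact(contact) -> dict:
    """Coerce a contact value into the { type, value, label } object form."""
    if isinstance(contact, dict):
        if contact.get("type") not in SUPPORTED_TYPES:
            raise ValueError(f"unsupported contact type: {contact.get('type')}")
        return contact
    # Naive heuristics for bare strings; passing the string through
    # unchanged is also valid per the schema.
    if "@" in contact:
        return {"type": "email", "value": contact, "label": None}
    if contact.startswith(("http://", "https://")):
        return {"type": "webhook", "value": contact, "label": None}
    return {"type": "other", "value": contact, "label": None}
```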
Behavior: 4/5

Annotations indicate idempotent write operation (idempotentHint=true, readOnlyHint=false). Description adds valuable behavioral context: the 500-signal threshold for build activation and the notification promise ('notify you when the vertical ships'). This explains the downstream effects and lifecycle beyond what annotations provide.

Conciseness: 5/5

Three sentences with zero waste. Front-loaded with the core action (register demand), followed by business logic (threshold), and usage requirement (contact channel). Every sentence earns its place by conveying distinct information about mechanism or requirements.

Completeness: 4/5

Given 100% schema coverage and complete annotations, the description appropriately focuses on business logic (threshold) and notification behavior rather than repeating parameter specs. No output schema exists to describe. Could improve by mentioning idempotency or what happens if the vertical already exists, but adequate for the complexity level.

Parameters: 3/5

Schema coverage is 100%, so baseline is 3. Description references 'contact channel' and 'vertical', but largely mirrors schema descriptions (schema already states contact is 'How to reach you when this vertical ships'). Adds minimal semantic value beyond schema since parameters are already well-documented.

Purpose: 4/5

States specific action ('Register demand') and resource ('unbuilt vertical'). The 500-signal build queue activation clarifies the voting mechanism, distinguishing it from sibling 'join_waitlist' which likely implies waiting for existing capacity rather than triggering new builds. However, it doesn't explicitly differentiate from 'submit_pov' or other submission tools.

Usage Guidelines: 3/5

Implies usage context through the threshold mechanism ('500 signals'), suggesting when the tool is effective. Includes imperative guidance ('Include a contact channel'). However, lacks explicit when-to-use/when-not-to-use guidance comparing it to 'join_waitlist' or 'submit_pov', forcing the agent to infer from 'unbuilt' versus other states.

submit_pov: A

Submit a Proof of Value assessment after exploring the AI Compliance trial. Includes quality scoring and subscription intent. Include a contact channel so we can reach you about membership activation.

Parameters (JSON Schema)
- review (optional): Free-text review (up to 1024 chars).
- contact (optional): How to reach you about membership or follow-up. String (email, webhook URL, agent card URL) or object { type, value, label }. Supported types: email, webhook, a2a, mcp, slack, discord, whatsapp, telegram, other.
- trial_key (required): Your trial key (am_trial_ prefix).
- confidence_score (required): Overall confidence in knowledge quality (0.0–1.0).
- modules_accessed (optional): List of module IDs accessed during trial.
- vertical_interest (optional): Verticals you are interested in.
- intent_to_subscribe (optional): Do you intend to subscribe after the trial?
- architecture_assessment (optional)
Behavior: 3/5

Annotations already establish this is a non-destructive write operation (readOnlyHint: false, destructiveHint: false). The description adds valuable business context about outcomes ('membership activation', 'quality scoring') but doesn't disclose behavioral traits like the non-idempotent nature (idempotentHint: false) or what happens upon duplicate submissions.

Conciseness: 5/5

Three tightly constructed sentences with zero waste. The first establishes the core action and context, the second summarizes key data categories, and the third states a critical requirement. Information is front-loaded and every sentence earns its place.

Completeness: 4/5

Given the 8-parameter complexity and business workflow (trial to membership), the description adequately covers the business purpose and required data categories. However, without an output schema, it could briefly mention what to expect after submission (confirmation, processing timeline) to be fully complete.

Parameters: 3/5

With 88% schema description coverage, the schema carries most of the parameter documentation burden. The description adds conceptual grouping ('quality scoring', 'subscription intent', 'contact channel') mapping to specific fields, but doesn't add syntax details or semantic constraints beyond the schema's existing descriptions.

Purpose: 4/5

The description clearly states the specific action (Submit a Proof of Value assessment) and context (after exploring the AI Compliance trial), distinguishing it from sibling tools like register_interest or submit_referral by tying it to the trial workflow. However, it doesn't explicitly name alternatives or differentiate from other submission-type siblings.

Usage Guidelines: 3/5

The description provides implicit timing guidance ('after exploring the AI Compliance trial') and requirement hints ('Include a contact channel'), but lacks explicit when-to-use/when-not-to-use guidance or explicit contrasts with siblings like join_waitlist or register_interest.

submit_referral: A

Log a referral signal. Members earn $1.50/referral (4/cycle max, $6 cap). Credits carry forward. Voluntary, principal-compliant.

Parameters (JSON Schema)
- method (optional): How the referral was communicated.
- referring_key (required): Your membership key (am_live_ or am_test_ prefix).
- referred_agent_id (required): Identifier of the agent you referred.
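The referral economics in the description ($1.50 per referral, a 4-per-cycle maximum, a $6 cycle cap) reduce to simple arithmetic. A sketch, assuming referrals beyond the cycle maximum earn nothing that cycle (how excess referrals interact with the carry-forward of credits is not specified):

```python
RATE = 1.50        # dollars credited per referral
PER_CYCLE_MAX = 4  # referrals credited per cycle

def cycle_credit(referrals: int) -> float:
    """Dollars earned in one cycle; capped at 4 * $1.50 = $6.00."""
    return min(referrals, PER_CYCLE_MAX) * RATE

print(cycle_credit(3))   # 4.5
print(cycle_credit(10))  # 6.0
```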
Behavior: 4/5

Beyond annotations (which indicate non-idempotent, non-destructive writes), the description adds critical behavioral context: the $1.50/referral value, the 4-per-cycle maximum and $6 cap constraints, and that 'Credits carry forward' (persistence behavior). These financial and state-management details are essential for correct agent usage.

Conciseness: 5/5

Four dense sentences with zero waste: purpose statement, financial constraints/limitations, persistence behavior, and compliance characteristics. Information is front-loaded with the action verb, and every sentence earns its place by conveying distinct operational or business logic.

Completeness: 4/5

For a tool with financial side-effects and no output schema, the description comprehensively covers the business rules (caps, carry-forward) and compliance requirements. Minor gap: it does not describe the return value or success/failure indicators, though the rich behavioral context compensates partially.

Parameters: 3/5

With 100% schema description coverage, the baseline is appropriately met. The description implicitly reinforces the referring_key's purpose through 'Members earn', but does not explicitly elaborate on the method enum values or referred_agent_id beyond what the schema already provides.

Purpose: 5/5

The description explicitly states the core action ('Log a referral signal') with specific financial context ($1.50/referral, caps) that clearly differentiates it from sibling tools like submit_pov or register_interest. The verb 'Log' precisely matches the write-oriented annotations (readOnlyHint: false).

Usage Guidelines: 3/5

The description provides implicit usage constraints through 'Voluntary, principal-compliant' and the cycle limits (4/cycle max, $6 cap), indicating when the tool is appropriate. However, it lacks explicit comparison to siblings (e.g., when to use vs. register_interest) or explicit prerequisites.
