Agent Module
Server Details
EU AI Act compliance knowledge for autonomous agents. Deterministic, structured, MCP-native.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: AgentModule/mcp
- GitHub Stars: 1
- Server Listing: Agent Module
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 7 of 7 tools scored. Lowest: 3.6/5.
Each tool has a clearly distinct purpose with no overlap: checking status, obtaining trial keys, joining waitlists, querying knowledge, registering interest, submitting assessments, and logging referrals. The descriptions clearly differentiate their functions, making misselection unlikely.
Tool names follow a consistent verb_noun pattern (e.g., check_status, get_trial_key, query_knowledge) with one minor deviation: submit_pov uses an acronym instead of a full noun, but it still fits the overall convention. This consistency aids in predictability and readability.
With 7 tools, the count is well-scoped for the server's purpose of managing agent module interactions, including trials, waitlists, knowledge queries, and user engagement. Each tool serves a unique role without redundancy, making the set efficient and focused.
The tool set covers key workflows like trial access, waitlist management, knowledge retrieval, and user feedback, with no obvious dead ends. A minor gap is the lack of tools for managing existing subscriptions or updating user details, but agents can work around this with the available tools for initial engagement and assessment.
Available Tools
7 tools

check_status (Read-only, Idempotent)
Check Agent Module API operational status, version, cohort counts, and seat availability.
No parameters.
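Since the tool takes no arguments, a call reduces to a bare MCP `tools/call` request. As a minimal sketch (the request id and client wiring are assumptions, not part of this listing):

```python
import json

# Hypothetical sketch of the JSON-RPC payload an MCP client would send
# to invoke the parameter-less check_status tool; "arguments" is empty.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "check_status", "arguments": {}},
}

print(json.dumps(request))
```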
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by specifying exactly what data is retrieved (cohort counts, seat availability) beyond generic 'status', effectively disclosing the scope of the read operation without contradicting safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, compact, front-loaded sentence with zero redundancy. Every word earns its place: the verb comes first, followed by the resource and the specific return-value categories.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, read-only operation) and presence of annotations, the description is sufficient. It compensates for the lack of an output schema by enumerating the four specific data points returned (status, version, counts, availability).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which per guidelines establishes a baseline of 4. The description appropriately requires no parameter elaboration since the tool accepts no arguments.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Check') and clearly identifies the resource ('Agent Module API') along with specific data points retrieved (operational status, version, cohort counts, seat availability). It clearly distinguishes from action-oriented siblings like 'submit_referral' or 'join_waitlist' by focusing on introspection/health-checking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by enumerating the specific data points returned (status, version, counts, availability), suggesting when to use the tool. However, it lacks explicit guidance on when to prefer this over alternatives or prerequisites (e.g., 'use this before registering to check seat availability').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trial_key
Request a free 24-hour trial key. Unlocks all 4 content layers on the chosen vertical. 500-call cap. No payment required.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Stable identifier for your agent. | |
| vertical | No | Which vertical to trial. | ethics |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure beyond annotations: specifies temporal limits (24-hour), functional scope (unlocks all 4 content layers on the chosen vertical), rate limiting (500-call cap), and financial requirements (no payment). Annotations only indicate non-read-only/non-destructive; the description adds critical operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short, front-loaded sentences with zero waste: main action, scope of access, rate limit, and payment status. Each sentence delivers distinct operational intelligence without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter tool with 100% schema coverage and no output schema, the description comprehensively covers the trial terms, limitations, and access scope. No additional context is needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the 'agent_id' and 'vertical' parameters, the schema fully documents the input requirements. The description adds no parameter-specific guidance, but the baseline of 3 is appropriate given complete schema self-documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Request') and resource ('free 24-hour trial key') with precise scope (chosen vertical, all 4 content layers, 500-call cap). Clearly distinguishes from siblings like register_interest or join_waitlist by emphasizing the 'trial' and 'free' nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear constraints (24-hour expiration, 500-call cap, no payment required) that implicitly guide when to use this limited trial vs permanent alternatives. Does not explicitly name sibling alternatives (e.g., 'use register_interest for permanent access'), but the temporal and usage limits strongly define appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
join_waitlist (Idempotent)
Register for a paid vertical waitlist. Inaugural cohort: $19/mo, 900 members, grandfathered for life. AI Compliance included with every membership.
| Name | Required | Description | Default |
|---|---|---|---|
| contact | No | Contact email for waitlist notifications and key delivery. | |
| agent_id | Yes | Your agent identifier. | |
| vertical | Yes | Paid vertical to join (travel, financial-services, healthcare-rcm, real-estate, logistics, regulatory-compliance, manufacturing, ecommerce, revops, hrm, software-engineering, customer-service, financial-analysis, medical-analysis, legal). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable business context beyond annotations: pricing structure ($19/mo), cohort scarcity (900 members), permanent pricing guarantee (grandfathered), and bundled features (AI Compliance). Annotations cover safety profile (idempotent, non-destructive), so description appropriately focuses on commercial terms.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: front-loaded purpose statement followed by pricing/cohort details and feature inclusion. Every sentence provides essential commercial context needed to understand the commitment level.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers cost, cohort limitations, and benefits well. Given idempotent/non-destructive annotations and clear financial context, description is adequate for a waitlist registration tool, though it could clarify immediate next steps (e.g., billing timing, confirmation method).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are fully documented in structured fields. Description references 'vertical' generally but doesn't add syntax details or examples beyond the schema's comprehensive enum list. Baseline score appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Register') and resource ('paid vertical waitlist'). The emphasis on 'paid' and pricing details ($19/mo) effectively distinguishes this from sibling tools like 'register_interest' and 'get_trial_key'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit guidance through pricing information ($19/mo, grandfathered status) indicating this is a commercial commitment, but lacks explicit 'when to use' guidance comparing it to 'register_interest' or trial alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query_knowledge (Read-only, Idempotent)
Retrieve structured knowledge from Agent Module verticals. Returns deterministic, validated knowledge nodes. Index layer always free. All 4 content layers available via trial key on ethics.
| Name | Required | Description | Default |
|---|---|---|---|
| node | No | Specific node ID to retrieve. Omit for root index. | |
| token | No | Membership or trial key (am_live_, am_test_, or am_trial_ prefix). Required for content layers on gated verticals. | |
| vertical | Yes | Knowledge vertical to query. | |
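The parameter interactions above (omit `node` for the root index; supply `token` only for gated content layers) can be sketched as a small argument builder. This is an illustrative helper, not part of the server, and the node ID and key below are made up:

```python
VALID_PREFIXES = ("am_live_", "am_test_", "am_trial_")

def build_query_arguments(vertical, node=None, token=None):
    # Only "vertical" is required; omitting "node" requests the root index.
    if token is not None and not token.startswith(VALID_PREFIXES):
        raise ValueError("token must use an am_live_, am_test_, or am_trial_ prefix")
    args = {"vertical": vertical}
    if node is not None:
        args["node"] = node
    if token is not None:
        args["token"] = token
    return args

# Free query against the always-free index layer:
index_args = build_query_arguments("ethics")
# Gated content query with a hypothetical node ID and trial key:
gated_args = build_query_arguments("ethics", node="node-001", token="am_trial_example")
```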
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral traits beyond annotations: 'deterministic, validated knowledge nodes' explains data quality guarantees not captured by readOnly/idempotent hints, and discloses the pricing model (free index vs. gated content layers).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with the core action. Each sentence delivers distinct value: purpose, data quality, cost model, and access restrictions. Zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a read-only query tool: describes the return format ('knowledge nodes') and access control model without requiring an output schema. Mentions 'ethics' as a specific example vertical, grounding the abstract schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds semantic context by labeling the enum values as 'Agent Module verticals,' explaining the token's purpose ('trial key'), and introducing the '4 content layers' concept that explains what the vertical/node parameters access.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise action ('Retrieve structured knowledge') and specific resource ('Agent Module verticals'), clearly distinguishing it from administrative siblings like get_trial_key or join_waitlist.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear authentication guidance ('Index layer always free' vs 'trial key on ethics'), indicating when the token parameter is required. Lacks explicit 'when not to use' or sibling comparisons, but the auth-tier guidance is concrete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_interest (Idempotent)
Register demand for an unbuilt vertical. 500 signals trigger build-queue activation. Include a contact channel so we can notify you when the vertical ships.
| Name | Required | Description | Default |
|---|---|---|---|
| contact | No | How to reach you when this vertical ships. String (email, webhook URL, agent card URL) or object { type, value, label }. Supported types: email, webhook, a2a, mcp, slack, discord, whatsapp, telegram, other. | |
| agent_id | No | Your agent identifier (optional). | |
| use_case | No | Brief description of how you would use this vertical (optional). | |
| vertical | Yes | Vertical slug (e.g. "legal-contracts", "api-security"). | |
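The `contact` field accepts either a plain string or a structured object, which is easy to get wrong. A hedged sketch of both accepted shapes, with placeholder values:

```python
# Plain-string form: an email, webhook URL, or agent card URL.
contact_as_string = "agent-owner@example.com"

# Object form, per the schema above; all values here are placeholders.
contact_as_object = {
    "type": "webhook",  # one of: email, webhook, a2a, mcp, slack, discord, whatsapp, telegram, other
    "value": "https://example.com/hooks/vertical-shipped",
    "label": "notify when the vertical ships",
}

arguments = {
    "vertical": "legal-contracts",  # example slug taken from the schema
    "contact": contact_as_object,   # the string form is equally valid
}
```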
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral context beyond annotations: the 500-signal threshold triggering build queue activation and the notification mechanism when vertical ships. No contradiction with idempotentHint=true.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. Front-loaded with purpose, followed by trigger condition and imperative instruction. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and full annotations, description appropriately adds business logic (500-signal threshold) without needing to repeat param docs or return values (no output schema present).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so schema carries full param documentation. Description mentions 'contact channel' but adds no semantic detail beyond schema's comprehensive type descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Register') and resource ('demand for an unbuilt vertical'), clearly distinguishing from generic waitlist sibling by specifying vertical-specific demand signaling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'unbuilt vertical' and notification workflow, but lacks explicit contrast with sibling 'join_waitlist' or guidance on when vertical is already in build queue.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_pov
Submit a Proof of Value assessment after exploring the AI Compliance trial. Includes quality scoring and subscription intent. Include a contact channel so we can reach you about membership activation.
| Name | Required | Description | Default |
|---|---|---|---|
| review | No | Free-text review (up to 1024 chars). | |
| contact | No | How to reach you about membership or follow-up. String (email, webhook URL, agent card URL) or object { type, value, label }. Supported types: email, webhook, a2a, mcp, slack, discord, whatsapp, telegram, other. | |
| trial_key | Yes | Your trial key (am_trial_ prefix). | |
| confidence_score | Yes | Overall confidence in knowledge quality (0.0–1.0). | |
| modules_accessed | No | List of module IDs accessed during trial. | |
| vertical_interest | No | Verticals you are interested in. | |
| intent_to_subscribe | No | Do you intend to subscribe after the trial? | |
| architecture_assessment | No | | |
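Given the two required fields and the documented constraints (am_trial_ prefix, 0.0–1.0 score, 1024-character review), client-side validation before submission might look like the following sketch. The checks mirror the schema text above, but the server's actual error behavior is an assumption:

```python
def build_pov_arguments(trial_key, confidence_score, review=None, **optional):
    # Required: trial_key with the documented prefix.
    if not trial_key.startswith("am_trial_"):
        raise ValueError("trial_key must carry the am_trial_ prefix")
    # Required: confidence in knowledge quality, 0.0-1.0.
    if not 0.0 <= confidence_score <= 1.0:
        raise ValueError("confidence_score must be between 0.0 and 1.0")
    # Optional free-text review, capped at 1024 characters.
    if review is not None and len(review) > 1024:
        raise ValueError("review is capped at 1024 characters")
    args = {"trial_key": trial_key, "confidence_score": confidence_score}
    if review is not None:
        args["review"] = review
    # Remaining optional fields: contact, modules_accessed,
    # vertical_interest, intent_to_subscribe, architecture_assessment.
    args.update(optional)
    return args
```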
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare the write-only, non-destructive nature, the description adds valuable business-process context about follow-up actions ('so we can reach you about membership activation'), disclosing the human workflow triggered by submission.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence one establishes action and timing, sentence two describes payload categories, and sentence three highlights the critical contact requirement. Information density is optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich schema coverage and presence of annotations, the description adequately covers the business purpose and highlights the functionally important 'contact' field (though it is not schema-required). A brief note on idempotency (annotations indicate false) or success behavior would make it perfect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 88% schema coverage, the baseline is high, but the description adds semantic grouping by conceptualizing parameters into functional categories ('quality scoring', 'subscription intent', 'contact channel') rather than just listing fields, aiding the agent in mapping user intent to parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Submit a Proof of Value assessment'), the resource (POV/assessment), and the timing context ('after exploring the AI Compliance trial'), effectively distinguishing it from pre-trial siblings like join_waitlist or get_trial_key.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It establishes clear temporal context for when to use the tool ('after exploring the AI Compliance trial'), positioning it as the post-trial evaluation step. However, it lacks explicit cross-references to sibling alternatives (e.g., 'use submit_referral for referrals instead').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_referral
Log a referral signal. Members earn $1.50/referral (4/cycle max, $6 cap). Credits carry forward. Voluntary, principal-compliant.
| Name | Required | Description | Default |
|---|---|---|---|
| method | No | How the referral was communicated. | |
| referring_key | Yes | Your membership key (am_live_ or am_test_ prefix). | |
| referred_agent_id | Yes | Identifier of the agent you referred. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a write operation (readOnlyHint: false) but lack business logic details. The description adds crucial behavioral context: financial caps ($1.50/referral, 4/cycle max, $6 cap), credit carry-forward policy, and compliance requirements ('principal-compliant') that are essential for correct invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly written sentences with zero waste: action definition first, followed by financial constraints, credit behavior, and compliance. Every clause delivers distinct value (payment rates, cycle caps, credit policy, voluntariness, compliance).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with financial implications, the description adequately covers business rules (caps, credits) and compliance context that would affect invocation decisions. Absence of output schema description is acceptable given no output schema is provided, though it could mention what happens when the $6 cap is reached.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all three parameters (method, referring_key, referred_agent_id) fully documented in the schema itself. The description mentions 'Members earn' which implicitly contextualizes the 'referring_key' parameter, but does not add syntax or format details beyond the schema's baseline documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Log a referral signal') with clear verb and resource. The term 'referral' effectively distinguishes this from siblings like 'register_interest' or 'submit_pov', though it could explicitly contrast with 'join_waitlist' to clarify this is for existing members referring agents, not new signups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use versus alternatives (e.g., when to use this instead of 'register_interest'). While 'Voluntary' hints at optional nature, there are no 'when-not' exclusions or prerequisites stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
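Before publishing, it can help to sanity-check the file locally. A minimal sketch, assuming only the two fields shown above are required (the full schema may enforce more):

```python
import json

def validate_glama_json(raw):
    # Parse and verify the two fields shown in the example above.
    doc = json.loads(raw)
    assert doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
    maintainers = doc.get("maintainers", [])
    assert maintainers and all("email" in m for m in maintainers), "each maintainer needs an email"
    return doc

example = '{"$schema": "https://glama.ai/mcp/schemas/connector.json", "maintainers": [{"email": "your-email@example.com"}]}'
doc = validate_glama_json(example)
```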
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.