Archangel Agency MCP
Server Details
Universal Language briefings, FusionGirl context JSONs, service catalog, agent info. x402-enabled.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Jthora/archangel-agency-mcp
- GitHub Stars: 0
- Server Listing: archangel-agency-mcp
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
4 tools

get_agent_info
Returns the A2A agent card with skills, capabilities, payment methods, and contact info for Jono Tho'ra / Archangel Agency.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
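Since the parameter table shows none, a call to this tool sends an empty arguments object. A minimal sketch of the JSON-RPC 2.0 `tools/call` message an MCP client would send; the framing follows the MCP specification, but the helper name is our own:

```python
import json

def tools_call(name, arguments=None, request_id=1):
    """Build the JSON-RPC 2.0 message an MCP client sends to invoke a tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments or {}},
    }

# get_agent_info takes no parameters, so arguments stays an empty object.
message = tools_call("get_agent_info")
print(json.dumps(message, indent=2))
```

The same shape serves every tool on this server; only `name` and `arguments` change.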
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It adequately describes return values (agent card contents) but omits other behavioral traits like caching, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence with zero waste. Front-loaded with action verb, immediately specifies data structure (A2A agent card), enumerates key fields, and identifies the subject entity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a simple zero-parameter metadata tool. Compensates for missing output schema by listing returned fields, though could specify data format (e.g., JSON) or freshness guarantees.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, establishing baseline 4. No parameter documentation needed; description correctly focuses on return value semantics rather than inventing parameter explanations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with verb 'Returns' + resource 'A2A agent card', and clearly distinguishes from siblings by identifying the specific entity (Jono Tho'ra / Archangel Agency) and content type (skills, capabilities, payment methods, contact info).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through clear description of returned data, but lacks explicit when-to-use guidance or comparisons to siblings like get_service_catalog or get_context_jsons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_context_jsons
Returns the directory of Context JSON briefing documents — structured ontological frameworks built on Universal Language primitives, designed for AI consumption. Use 'list' to see available contexts, or specify a context name to fetch it directly.
| Name | Required | Description | Default |
|---|---|---|---|
| context | No | Which context to retrieve. 'list' returns the full directory. Other values fetch specific Context JSONs directly. | |
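The 'list' vs direct-fetch contract described by this parameter can be sketched as a dual-mode handler. Everything below is illustrative: the context names, directory shape, and error behavior are assumptions, not taken from the server.

```python
# Placeholder data; the real server defines its own context names and payloads.
CONTEXTS = {"example_context": {"title": "Example Context JSON"}}

def get_context_jsons(context="list"):
    """Dual-mode contract: 'list' returns the directory of available
    contexts; any other value fetches that Context JSON directly."""
    if context == "list":
        return {"contexts": sorted(CONTEXTS)}
    if context not in CONTEXTS:
        # Error handling is undocumented; raising here is an assumption.
        raise KeyError(f"unknown context: {context!r}")
    return CONTEXTS[context]
```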
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable domain context (AI consumption, ontological frameworks) but omits operational details: error handling for invalid context names, caching behavior, or whether results are static/dynamic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences. Front-loaded with action (Returns). Second sentence provides actionable usage guidance. Domain jargon ('ontological frameworks,' 'Universal Language primitives') is necessary for this specific tool's context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter retrieval tool with 100% schema coverage. Explains the 'list' vs 'fetch' return patterns. However, lacks description of return structure (JSON format, schema) since no output_schema is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed enum description ('list' vs specific values). Description mirrors schema guidance without adding syntactic details, format examples, or semantics of individual context names (e.g., what 'archangel_agency' represents). Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Returns') and specific resource ('Context JSON briefing documents'). Distinguishes from siblings by defining these as 'structured ontological frameworks built on Universal Language primitives,' differentiating from get_universal_language_briefing (the primitives themselves) and other utilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly documents the dual-mode operation: 'Use 'list' to see available contexts, or specify a context name to fetch it directly.' Clearly maps parameter values to behaviors. Lacks explicit sibling comparison (when to use vs get_universal_language_briefing).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_service_catalog
Returns the full machine-readable service catalog including available services, pricing, and payment methods. Use this to understand what Archangel Agency offers to AI agents.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. Adds useful behavioral hints ('machine-readable', 'full' catalog) and return content details. However, lacks disclosure on safety profile (read-only vs destructive), caching, rate limits, or performance characteristics expected for an unannotated tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. First sentence defines the operation and return payload; second provides usage intent. Information is front-loaded and appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter read operation without output schema, the description adequately covers what is returned (services/pricing/payment methods) and why to use it. Could improve by noting if the result is cacheable or if there are usage costs, but sufficient for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters with 100% coverage. Per scoring rules, 0 parameters establishes a baseline of 4. Description correctly implies no filtering is needed by stating 'full' catalog, aligning with the empty parameter schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Returns' + resource 'service catalog' + detailed contents ('available services, pricing, and payment methods'). Clearly distinguishes from sibling tools (agent_info, context_jsons, language_briefing) by focusing on agency service offerings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context: 'Use this to understand what Archangel Agency offers to AI agents.' effectively signals when to use it. Lacks explicit 'when not to use' or named alternatives, but the sibling differentiation is implicit in the specific scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_universal_language_briefing
Returns the Universal Language research briefing — structured content about the formally derivable geometric pattern language. Useful for answering questions about UL, AI alignment through ontological context, or UQPL.
| Name | Required | Description | Default |
|---|---|---|---|
| depth | No | Level of detail. 'summary' returns llms.txt (~2KB), 'full' returns llms-full.txt (~15KB). | summary |
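The depth parameter's default and file mapping can be mirrored client-side. This is a hedged sketch: `resolve_depth` and the table are ours, and only the 'summary'/'full' values, filenames, and approximate sizes come from the schema description.

```python
# Maps the documented depth values to their source files and rough sizes.
DEPTH_SOURCES = {
    "summary": ("llms.txt", "~2KB"),
    "full": ("llms-full.txt", "~15KB"),
}

def resolve_depth(depth=None):
    """Apply the documented default ('summary') and return the source file."""
    depth = depth or "summary"
    if depth not in DEPTH_SOURCES:
        raise ValueError(f"depth must be 'summary' or 'full', got {depth!r}")
    return DEPTH_SOURCES[depth][0]
```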
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. 'Returns' implies read-only retrieval, and 'structured content' hints at format. However, lacks explicit disclosure of idempotency, caching behavior, or safety guarantees that annotations would typically provide. Does not contradict any annotations (none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two high-density sentences with zero waste. First sentence defines the resource and nature of content; second sentence defines applicability. No redundancy with schema or title.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for low complexity (1 optional param, no nesting). Description adequately covers what is returned (research briefing on geometric pattern language) and its use cases. Lacks output schema but implies text/content return; sufficient for a simple retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed file specifications ('llms.txt (~2KB)', 'llms-full.txt (~15KB)'). Description mentions no parameters, but baseline 3 is appropriate when schema fully documents the single 'depth' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Returns the Universal Language research briefing' provides exact verb and resource, while 'formally derivable geometric pattern language' precisely scopes the content. Distinct from siblings (get_agent_info, get_context_jsons) by targeting specific research content rather than agent metadata or generic context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive guidance: 'Useful for answering questions about UL, AI alignment through ontological context, or UQPL.' explicitly states applicable domains. Lacks explicit negative constraints ('do not use for...') or alternative routing, but sufficiently indicates when to invoke.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
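A quick structural check of a fetched glama.json against these requirements might look like the following sketch; the function is ours, and only the `maintainers`/`email` shape comes from the example above.

```python
import json

def has_matching_maintainer(raw: str, account_email: str) -> bool:
    """Return True if the glama.json document lists the given email
    among its maintainers, per the claiming requirements."""
    doc = json.loads(raw)
    return any(
        entry.get("email") == account_email
        for entry in doc.get("maintainers", [])
    )

sample = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
```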
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users

For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.