Seneschal
Server Details
Free, public, read-only REST + Model Context Protocol server exposing real-time and historical DeFi liquidation telemetry for Aave, Morpho, Spark, and Compound on Ethereum mainnet, plus block-builder market share data from the operator's own slot-by-slot shadow recorder.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 9 of 9 tools scored. Lowest: 3.2/5.
Tools are mostly distinct, though list_at_risk_borrowers and list_borrowers overlap in purpose; their descriptions clarify the differences, with the generic tool adding extra filters. Other tools, such as the builder leaderboard, flash-loan providers, and borrower history, are clearly separate.
All tool names use the 'seneschal_' prefix with underscores, but verbs vary: some use 'get_' or 'list_', while others are bare nouns (e.g., 'seneschal_health', 'seneschal_builder_leaderboard'). This is a minor inconsistency.
Nine tools cover the domain of DeFi monitoring well without being excessive. Each tool serves a specific purpose, and the count feels appropriate for the server's scope.
The tool set covers the key areas: borrower states, history, liquidations, flash-loan providers, builder market share, and an aggregate stats overview. Minor gaps remain, such as per-slot builder details or lookups for individual liquidations, but they are not critical for the intended use.
Available Tools
9 tools

seneschal_builder_leaderboard – Builder leaderboard (Quality: A)
Slot-by-slot ground-truth share of Ethereum mainnet block builders observed by Seneschal's shadow recorder, with total MEV captured per builder in the window. Cached for 60s.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Top-N builders to return. | 20 |
| window | No | Lookback window. | 24h |
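For concreteness, here is a hypothetical tools/call request using the standard MCP JSON-RPC envelope; the argument values are illustrative and assume window accepts duration strings such as "7d" (matching the default "24h"):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "seneschal_builder_leaderboard",
    "arguments": { "limit": 5, "window": "7d" }
  }
}
```

Omitting both arguments should fall back to the documented defaults (top 20 builders over 24h).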
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses caching for 60s but does not mention read-only nature, rate limits, or any side effects. Adequate but could be improved.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences effectively convey purpose, data content, and caching behavior without waste. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given two parameters and no output schema, the description covers the key aspects: data source (Seneschal shadow recorder), content (builder share and MEV), and caching. It could detail the return shape slightly more, but is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both parameters described (limit and window). The description adds minimal extra meaning beyond 'Top-N builders' and does not elaborate on parameter behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns slot-by-slot ground-truth share of Ethereum block builders and total MEV captured per builder, distinguishing it from sibling tools that deal with flashloans, borrowers, and liquidations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives, though the sibling tools cover different topics, so confusion is unlikely. Guidance on usage context is implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seneschal_flashloan_providers – Flash loan provider catalogue (Quality: A)
Curated catalogue of Ethereum mainnet flash-loan providers (Aave V3, Balancer V2, Morpho Blue, Uniswap V3, FlashBank) with current fee in basis points, contract addresses, qualitative liquidity notes, and per-provider caveats. Helpful for searcher agents picking the cheapest viable provider for a liquidation or arbitrage strategy. The catalogue can be filtered by chain, max fee, or multi-asset support.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain key. Currently only ethereum is catalogued. | "ethereum" |
| max_fee_bps | No | Drop providers whose flat fee exceeds this, in basis points (1 bp = 0.01%). | |
| multi_asset | No | If true, only return providers that support borrowing multiple assets in a single flash loan. | |
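As a sketch, a tools/call request that keeps only cheap multi-asset providers; the 5 bps threshold is an arbitrary illustrative value:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "seneschal_flashloan_providers",
    "arguments": { "chain": "ethereum", "max_fee_bps": 5, "multi_asset": true }
  }
}
```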
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies a read-only query operation ('curated catalogue'), but does not explicitly state whether modifications occur or if there are side effects. The qualitative liquidity notes suggest safe data retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences with no redundancy. Every part adds value: purpose, contents, use case, and filter options. Front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains the return values (fee, addresses, liquidity notes, caveats) despite no output schema. Parameter semantics are covered by schema. The tool is a simple query with no nested objects, so the provided context is sufficient for an agent to decide to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all three parameters (chain, max_fee_bps, multi_asset). The description reiterates filtering but adds no new meaning beyond what the schema provides, warranting a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as a curated catalogue of Ethereum mainnet flash-loan providers, listing specific providers and details. It distinguishes from sibling tools (e.g., borrower lists, liquidations) by focusing exclusively on provider information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states it is 'helpful for searcher agents picking the cheapest viable provider' and mentions filter options, establishing clear usage context. It could be improved by explicitly stating when not to use it, but the purpose is well understood from the sibling set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seneschal_get_borrower – Get borrower snapshot (Quality: A)
Returns the latest known state of an address across every protocol where we have data (Aave, Morpho, Spark). Pass the EOA / contract address as a 0x-prefixed 20-byte hex string.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | | |
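A minimal hypothetical tools/call request; the address is a placeholder (any 0x-prefixed 20-byte hex string), not a real borrower:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "seneschal_get_borrower",
    "arguments": { "address": "0x1111111111111111111111111111111111111111" }
  }
}
```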
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavior. It explains the data scope and input format but lacks details on data freshness, error handling, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, no fluff, and the most important information (purpose and parameter format) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not explain the return structure (e.g., fields in the snapshot). For a retrieval tool, this is a gap, though the core concept is clear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, the description adds meaning by specifying the address format (0x-prefixed 20-byte hex) and clarifying it returns state across protocols. This goes beyond the schema pattern.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns the latest known state of an address across multiple protocols (Aave, Morpho, Spark), distinguishing it from siblings like seneschal_get_borrower_history and seneschal_list_borrowers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage context is implied but not explicit. It does not specify when to use this tool versus alternatives (e.g., for history use the sibling tool) or any exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seneschal_get_borrower_history – Get borrower history (Quality: A)
Returns a time series of (timestamp, health_factor, collateral_usd, debt_usd) observations for the given address on the given protocol. Granularity defaults to raw observations; use hour or day for chart-friendly buckets.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows fetched from the history table before bucketing. | |
| address | Yes | | |
| protocol | No | Only aave and morpho have history tables. | |
| since_ms | No | Unix epoch ms. | now − 7d |
| until_ms | No | Unix epoch ms. | now |
| granularity | No | Bucket size. | raw |
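A sketch of an hourly-bucketed history request; the address is a placeholder, and the protocol key "aave" follows the lowercase naming used in the schema note above:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "seneschal_get_borrower_history",
    "arguments": {
      "address": "0x1111111111111111111111111111111111111111",
      "protocol": "aave",
      "granularity": "hour"
    }
  }
}
```

With since_ms and until_ms omitted, the documented defaults cover the trailing 7 days.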
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains the return shape and granularity but does not disclose behavioral traits like data ordering, pagination, or potential side effects. It implies read-only behavior but could be more transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences convey the core purpose and an important usage tip with no redundancy. The key action ('Returns a time series...') is front-loaded, making it easy for an agent to quickly grasp.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 6 parameters (2 required) and no output schema, the description successfully covers the returned fields and the most important parameter (granularity). Defaults and other parameters are adequately handled by the schema. Minor omissions like ordering or data limits are acceptable given the tool's straightforward nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 83%, meeting the high-coverage baseline of 3. The description adds value by explaining granularity options and their intent ('chart-friendly buckets') and clarifies limit as 'Max rows fetched before bucketing'. This goes beyond the schema's basic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a time series of specific fields (timestamp, health_factor, collateral_usd, debt_usd) for a given address and protocol. This distinguishes it from sibling tools like seneschal_get_borrower (likely current state) or seneschal_list_borrowers (list view).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides guidance on granularity options ('use hour or day for chart-friendly buckets') but does not explicitly compare to sibling tools or state when not to use it. Implicitly suggests it's for historical data but lacks explicit alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seneschal_health – Service health (Quality: A)
Returns table sizes and data-source freshness timestamps for the Seneschal Data backend.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description indicates a read operation returning data but does not mention side effects, caching, or authentication requirements. Adequate but minimal disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 13 words, front-loaded with verb and resource. No unnecessary information; earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description adequately informs what data is returned. Could mention output format but not critical for a simple health tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema (100% coverage). Description adds meaning by specifying what the tool returns (table sizes and freshness timestamps), which is sufficient for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Returns' and specific resources 'table sizes and data-source freshness timestamps' for the Seneschal Data backend, effectively distinguishing it from sibling tools that focus on borrowers, liquidations, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for monitoring backend health but provides no explicit when-to-use or alternatives. Lacks guidance on when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seneschal_list_at_risk_borrowers – List at-risk borrowers (Quality: B)
Current snapshot of borrowers across Aave, Morpho, and Spark whose health factor sits below max_hf, sorted ascending. Use min_debt_usd to ignore dust positions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows (max 500). | 50 |
| max_hf | No | Return only borrowers with health factor strictly less than this. | no cap |
| protocol | No | Restrict to one protocol; omit for all. | |
| min_debt_usd | No | Ignore positions with debt smaller than this many USD. | 0 |
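For example, a hypothetical request for the 20 riskiest positions near liquidation, ignoring dust below $1,000; both thresholds are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "seneschal_list_at_risk_borrowers",
    "arguments": { "max_hf": 1.05, "min_debt_usd": 1000, "limit": 20 }
  }
}
```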
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must cover behavioral traits. It states it's a 'current snapshot' (point-in-time) and 'sorted ascending', but does not disclose pagination, authentication requirements, or response structure. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-loading the main purpose. No extraneous words. Exemplary succinctness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite clear purpose, the description lacks details about the output format (e.g., what fields are returned). Without an output schema, the agent has to guess the response structure, which is a significant gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% so parameters are already documented. The description adds context for max_hf and min_debt_usd but not for limit or protocol. Baseline 3 is appropriate as the description adds minimal value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists at-risk borrowers with health factor below max_hf, sorted ascending, and mentions three protocols. However, it omits 'compound' from the enumeration even though the schema includes it, causing a minor inconsistency.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus siblings like 'seneschal_list_borrowers'. The description does not mention alternatives or when not to use it, leaving the agent to infer from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seneschal_list_borrowers – List borrowers (generic) (Quality: A)
Generic discovery surface over the borrower snapshot table. Like seneschal_list_at_risk_borrowers but with both lower and upper HF bounds, optional max-debt cap, configurable sort field/direction, and offset-based pagination. Use this to walk the catalogue without knowing borrower addresses in advance.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows per page (max 500). | 50 |
| max_hf | No | Exclusive upper bound on health factor. | |
| min_hf | No | Inclusive lower bound on health factor. | |
| offset | No | Pagination offset. | 0 |
| sort_by | No | Sort field. | health_factor |
| protocol | No | Restrict to one protocol; omit for all. | |
| sort_dir | No | Sort direction. | asc |
| max_debt_usd | No | Maximum debt in USD. | unbounded |
| min_debt_usd | No | Minimum debt in USD. | 0 |
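A sketch of the second page of a bounded, sorted walk over the snapshot table; all values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "seneschal_list_borrowers",
    "arguments": {
      "min_hf": 1.0,
      "max_hf": 1.2,
      "sort_by": "health_factor",
      "sort_dir": "asc",
      "limit": 100,
      "offset": 100
    }
  }
}
```

Incrementing offset by limit on each call walks the full catalogue.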
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It mentions the tool operates on a 'snapshot table' and supports pagination, but does not specify whether it is read-only, idempotent, or if it has side effects. The description partially fulfills the transparency burden by explaining the filtering and sorting behavior, but lacks explicit safety characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three dense sentences: the first establishes the purpose, the second adds critical details relative to the sibling tool, and the third gives usage guidance. Every word serves a purpose; there is no repetition or filler. It is front-loaded and highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, no output schema), the description adequately explains the tool's scope, filtering capabilities, and pagination. It does not describe the response format, but the tool is a list operation and the schema fully documents parameters. The description could be more complete by noting the default limit and max, but those are in the schema. Overall, it provides sufficient context for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, giving a baseline of 3. The description adds value beyond the schema by explaining the purpose of the extra parameters relative to the sibling (lower/upper HF bounds, max-debt cap, sort field/direction, offset pagination). This contextualizes the parameters and clarifies their role in the tool's functionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'generic discovery surface' over the borrower snapshot table, distinguishes it from the sibling `seneschal_list_at_risk_borrowers` by listing additional capabilities (lower/upper HF bounds, max-debt cap, configurable sort, offset pagination), and uses specific verbs like 'walk the catalogue'. The purpose is immediately clear and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use this to walk the catalogue without knowing borrower addresses in advance', providing a clear use case. It contrasts with the sibling tool by noting differences in HF bounds and pagination, but does not explicitly state when not to use it or when to prefer alternatives. However, the guidance is sufficient for typical selection decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seneschal_recent_liquidations – Recent liquidations (Quality: A)
Liquidations observed in the recent past, including both those won by other liquidators (outcome=won_by_other) and those we landed ourselves (outcome=we_landed). Sorted by timestamp descending.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows (max 500). | 50 |
| protocol | No | Restrict to one protocol. | |
| since_ms | No | Unix epoch milliseconds. | now − 24h |
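A hypothetical request for the ten most recent Aave liquidations, assuming the same lowercase protocol keys as the sibling tools:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "seneschal_recent_liquidations",
    "arguments": { "protocol": "aave", "limit": 10 }
  }
}
```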
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It explains data content, outcome categories, and sorting order, but lacks details on read-only nature, pagination, or performance. Adequate but not extensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. Front-loaded with purpose and key details. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three optional parameters, no output schema, and several siblings, the description covers data type, sorting, and outcome categories. Could mention default limit or protocol filtering but schema handles that. Adequately complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds value by explaining the outcome classification in the data, which is not in parameter descriptions. This enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists recent liquidations, distinguishes between outcomes, and specifies sorting. It is specific and distinguishes from sibling tools like list_borrowers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives, but the context of liquidations and outcomes implies usage for tracking liquidation outcomes. Sibling tools exist, but no comparative guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seneschal_stats_overview – Public stats overview (Quality: A)
Aggregate snapshot powering the public stats dashboard at stats.seneschal.space: total positions tracked, debt under watch, HF distribution histogram, top-10 at-risk borrowers, 30-day liquidations-per-day series, builder market share for 24h/7d/30d windows, and 10 most recent on-chain liquidations. One call returns everything needed to render the dashboard.
No parameters.
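Since the tool takes no parameters, a call is just the bare envelope; this sketch assumes the server accepts an empty arguments object, which the MCP spec allows to be empty or omitted:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "seneschal_stats_overview",
    "arguments": {}
  }
}
```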
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It lists return data but does not disclose behavioral traits like data freshness, caching, or whether it queries live data. Being a read-only snapshot is implied but not explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one long enumerating sentence plus a short closing line that together convey the tool's purpose and output. While lengthy due to listing many items, it remains clear and front-loaded with the key idea 'Aggregate snapshot.'
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description is remarkably complete. It enumerates all the data components needed to render a dashboard, leaving no ambiguity about what the tool returns. The sibling tools cover specific sub-queries, making this complete for its intended use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description doesn't need to explain parameters. It adds value by detailing the exact metrics returned (e.g., total positions, debt under watch, HF histogram), which compensates for the lack of output schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides an 'Aggregate snapshot' for the public stats dashboard, listing specific data points. It distinguishes itself from sibling tools like seneschal_list_borrowers or seneschal_get_borrower by being a one-call overview.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'One call returns everything needed to render the dashboard,' indicating when to use it. It implies that for specific details (e.g., individual borrower info) sibling tools are appropriate, though not stated explicitly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.