Glama

Server Details

AI-powered intelligence for your development workflow via Indicate.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
health_check (A)
Read-only · Idempotent

Check connectivity to the Indicate backend. Returns 'ok' if the server can reach the API, or an error message otherwise. Does not require authentication.
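As a concrete illustration, here is a minimal sketch of the JSON-RPC 2.0 `tools/call` payload an MCP client would send for this zero-parameter tool, plus a trivial success check. The `id` value and the `is_healthy` helper are illustrative, not part of the server's API.

```python
import json

# Hypothetical MCP tools/call request for the zero-parameter health_check tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "health_check", "arguments": {}},
}
payload = json.dumps(request)

def is_healthy(result: str) -> bool:
    # Per the description, the server returns the literal string 'ok' on success;
    # any other value is an error message.
    return result == "ok"
```

No authentication header is needed for this call, per the tool description.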

Parameters

No parameters

Output Schema

Name | Required
result | Yes
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish safety profile (readOnly, idempotent, non-destructive), while the description adds valuable behavioral specifics: the exact return value format ('ok' vs error message) and authentication requirements, providing actionable context beyond the structured hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently cover purpose, return values, and authentication requirements. Every clause earns its place—no redundancy, no waste. Perfectly front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter health check with an existing output schema, the description provides complete coverage: explains the connectivity check purpose, documents unauthenticated access, and describes return value semantics. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, this earns the baseline score of 4. The description appropriately acknowledges the nullary nature by not inventing parameter semantics where none exist, matching the empty schema perfectly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check connectivity') and resource ('Indicate backend'), and distinguishes this diagnostic utility from data-centric siblings like list_data_sources and query_metric. It explicitly defines the success/failure return values.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context by stating 'Does not require authentication,' which is critical for an unauthenticated health check endpoint. While it doesn't explicitly name alternatives (unnecessary for a unique diagnostic tool), it establishes prerequisites clearly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_data_sources (A)
Read-only · Idempotent

Step 2 — List data sources available within a tenant. (In the Indicate system a data source is called a 'data product'.) Examples: Google Analytics, Facebook Ads, vioma, Booking.com. Returns each data source's 'id', 'displayName', and 'semantic_context_id'. → Pass the chosen 'id' as 'data_source_id' and 'semantic_context_id' to list_metrics.
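The handoff to list_metrics can be sketched as follows. The response rows are hypothetical examples using only the three fields the description documents ('id', 'displayName', 'semantic_context_id'); `pick_source` is an illustrative helper, not part of the server.

```python
# Hypothetical list_data_sources response rows.
data_sources = [
    {"id": "ds-1", "displayName": "Google Analytics", "semantic_context_id": "ctx-ga"},
    {"id": "ds-2", "displayName": "Booking.com", "semantic_context_id": "ctx-bk"},
]

def pick_source(sources, display_name):
    """Return the (data_source_id, semantic_context_id) pair to pass to list_metrics."""
    for source in sources:
        if source["displayName"] == display_name:
            return source["id"], source["semantic_context_id"]
    raise KeyError(f"no data source named {display_name!r}")
```

Both returned values are required by list_metrics, so they are extracted together.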

Parameters

Name | Required | Description
tenant_id | Yes | Tenant ID obtained from list_tenants.

Output Schema

Name | Required
data_sources | No
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds substantial context beyond annotations: clarifies domain terminology (data source = 'data product'), provides concrete examples of data sources, and documents return fields (id, displayName, semantic_context_id). Annotations cover safety profile (readOnly, idempotent), freeing the description to focus on domain semantics and return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense yet readable. Front-loaded with workflow step and purpose. Parenthetical clarifies terminology without cluttering flow. Examples are illustrative but brief. Arrow notation efficiently signals next step. Every sentence serves the agent's decision-making process.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Thoroughly complete for a multi-step workflow tool. Explains domain mapping, return value structure (critical for chaining to list_metrics), and provides concrete examples. Presence of output schema means full return documentation isn't required in description, yet it includes key field names anyway. Workflow context is fully established.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage and only one parameter (tenant_id), the schema description 'Tenant ID obtained from list_tenants' carries the semantic load. The description mentions 'within a tenant' but doesn't add significant meaning beyond the schema. Baseline 3 appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent clarity: specifies verb ('List'), resource ('data sources'), scope ('within a tenant'), and domain terminology ('data product'). Examples (Google Analytics, Facebook Ads) concretize the resource. Explicitly distinguishes itself as 'Step 2' and clarifies output usage with sibling tool list_metrics, clearly positioning it within the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow positioning ('Step 2') and clear handoff instructions ('Pass the chosen id... to list_metrics'). Implicitly indicates prerequisite (tenant_id from list_tenants). The arrow notation (→) creates unambiguous guidance on when to use this tool and what to do with outputs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_metrics (A)
Read-only · Idempotent

Step 3 — List metrics and their dimensions for a data source. (In the Indicate system a metric is called a 'data cube'.) Each metric includes its dimensions; every dimension has a 'scope' that is either 'perspective' or 'group'. → Pass the metric 'id' as 'metric_id' to query_metric. → Pass dimension IDs with scope='perspective' as 'perspective_dimension_id' and scope='group' as 'group_dimension_id' to query_metric.
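The scope-based routing of dimension IDs into query_metric's two parameters can be sketched like this. The `metric` object is a hypothetical example; only 'id', 'dimensions', and 'scope' are documented, so no other fields are assumed.

```python
def split_dimensions(metric):
    """Partition a metric's dimension IDs by 'scope', matching query_metric's
    perspective_dimension_id and group_dimension_id parameters."""
    perspective = [d["id"] for d in metric["dimensions"] if d["scope"] == "perspective"]
    group = [d["id"] for d in metric["dimensions"] if d["scope"] == "group"]
    return perspective, group

# Hypothetical metric object from a list_metrics response.
metric = {
    "id": "m-1",
    "dimensions": [
        {"id": "dim-date", "scope": "perspective"},
        {"id": "dim-channel", "scope": "group"},
    ],
}
```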

Parameters

Name | Required | Description
tenant_id | Yes | Tenant ID from list_tenants.
data_source_id | Yes | Data source ID from list_data_sources.
semantic_context_id | Yes | Bounded context ID from the 'semantic_context_id' field in the list_data_sources response.

Output Schema

Name | Required
metrics | No
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable domain model context: metrics contain dimensions, dimensions have scopes ('perspective' vs 'group'), and data flows to query_metric. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sentence hierarchy: Step identifier, domain terminology note, dimension scope explanation, and workflow arrows (→) showing data flow to query_metric. No redundancy; every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a multi-step workflow tool with output schema present. Description correctly omits return value details (covered by output schema) while explaining the domain-specific output structure (dimensions with scopes) and downstream usage pattern.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage already documenting the source of each parameter (list_tenants, list_data_sources). Description provides workflow context ('Step 3') but does not add significant semantic detail about parameter formats or constraints beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'List' with resource 'metrics and their dimensions'. Explicitly identifies this as 'Step 3' in a workflow, distinguishing it from siblings like list_tenants/list_data_sources (prerequisites) and query_metric (consumer of this tool's output). Domain clarification 'data cube' adds precision.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent workflow guidance: explicitly labels as 'Step 3', cites prerequisite calls (list_tenants, list_data_sources) for obtaining parameter values, and provides explicit handoff instructions to query_metric with exact parameter name mappings ('metric_id', 'perspective_dimension_id', 'group_dimension_id').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tenants (A)
Read-only · Idempotent

Step 1 — List all tenants the authenticated user can access. (In the Indicate system a tenant is called a 'space'.) Returns each tenant's 'id' and 'displayName'. → Pass the chosen tenant 'id' as 'tenant_id' to every subsequent tool call.
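Resolving a tenant for the rest of the workflow can be sketched as below. The response rows are hypothetical; the description documents only 'id' and 'displayName', and `tenant_id_by_name` is an illustrative helper.

```python
# Hypothetical list_tenants response rows.
tenants = [
    {"id": "t-1", "displayName": "Acme Hotels"},
    {"id": "t-2", "displayName": "Demo Space"},
]

def tenant_id_by_name(tenants, name):
    """Resolve a display name to the tenant_id every subsequent tool call needs."""
    matches = [t["id"] for t in tenants if t["displayName"] == name]
    if len(matches) != 1:
        raise LookupError(f"expected exactly one tenant named {name!r}, found {len(matches)}")
    return matches[0]
```

Requiring exactly one match guards against silently picking the wrong tenant when display names collide.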

Parameters

No parameters

Output Schema

Name | Required
tenants | No
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable context: return value structure ('id' and 'displayName') and domain terminology mapping. No contradictions with annotations. Could mention pagination or empty result behavior for a perfect 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure: front-loaded with 'Step 1', followed by scope, parenthetical terminology note, return value documentation, and workflow instruction. Every sentence earns its place; no waste. Arrow (→) effectively signals action item.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter discovery tool with existing output schema and annotations, description is complete. Covers purpose, return fields, sibling dependencies, and domain terminology. Adequate for an AI agent to select and invoke correctly as entry point.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters, which establishes baseline 4. Description compensates by documenting the implicit output parameter usage (passing 'id' as 'tenant_id'), effectively explaining the tool's interface contract despite no input params.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('List') + resource ('tenants'). Distinguishes from siblings by positioning as 'Step 1' and noting it provides the prerequisite 'tenant_id' for subsequent calls. Helpfully clarifies domain terminology ('tenant' is called 'space' in Indicate system).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly establishes workflow order ('Step 1') and instructs how to use output ('Pass the chosen tenant 'id' as 'tenant_id' to every subsequent tool call'), effectively communicating when to use this tool (first) versus siblings (after).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_metric (A)
Read-only · Idempotent

Step 4 — Fetch time-series data for a specific metric. All IDs are obtained from the previous steps in the workflow. Optionally filter by date range (YYYY-MM-DD). Returns daily-granularity data points.
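Assembling the final call from the IDs gathered in steps 1-3 can be sketched as follows, assuming the parameter names documented below. `build_query_args` is an illustrative helper that also validates the YYYY-MM-DD date format before sending.

```python
from datetime import date

def build_query_args(tenant_id, data_source_id, metric_id,
                     perspective_dimension_id, group_dimension_id,
                     start_date=None, end_date=None):
    """Assemble the arguments dict for query_metric; dates must be YYYY-MM-DD."""
    args = {
        "tenant_id": tenant_id,
        "data_source_id": data_source_id,
        "metric_id": metric_id,
        "perspective_dimension_id": perspective_dimension_id,
        "group_dimension_id": group_dimension_id,
    }
    for key, value in (("start_date", start_date), ("end_date", end_date)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError for non-ISO dates
            args[key] = value
    return args
```

Omitting both dates requests all available data, per the parameter descriptions.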

Parameters

Name | Required | Description
end_date | No | End date in YYYY-MM-DD format. Optional; omit for all available data.
metric_id | Yes | Metric ID from list_metrics (the 'id' field of a metric object).
tenant_id | Yes | Tenant ID from list_tenants.
start_date | No | Start date in YYYY-MM-DD format. Optional; omit for all available data.
data_source_id | Yes | Data source ID from list_data_sources.
group_dimension_id | Yes | Dimension ID where scope='group', from the dimensions array in the list_metrics response.
semantic_context_id | No | Bounded context ID (for reference/logging).
perspective_dimension_id | Yes | Dimension ID where scope='perspective', from the dimensions array in the list_metrics response.

Output Schema

Name | Required
result | Yes
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description valuably adds output granularity ('daily-granularity data points') and workflow dependency context beyond what annotations provide. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly constructed sentences with zero redundancy. Front-loaded with workflow step and action, followed by dependency note, optional filters, and return format. Every clause delivers actionable context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for an 8-parameter data query tool with existing output schema and full annotations. Covers workflow position, filtering capabilities, and return granularity. Could benefit from noting pagination behavior if applicable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with detailed field descriptions. Description adds workflow-level semantics by framing parameters as parts of a sequence ('previous steps') and reinforcing date format context, justifying a score above the baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states exact action ('Fetch time-series data'), resource ('metric'), and workflow position ('Step 4'). Clearly distinguishes from sibling list operations (list_tenants, list_metrics, etc.) by targeting specific data retrieval versus enumeration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear workflow context ('Step 4', 'All IDs are obtained from the previous steps'), establishing prerequisite dependencies. However, lacks explicit 'when not to use' guidance or direct comparison to sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Discussions

No comments yet.
