
Nova Scotia Data Explorer

Server Details

Query and explore Nova Scotia open datasets via the Socrata SODA API.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
get_dataset_metadata: Get Nova Scotia Dataset Metadata

Retrieve full schema and metadata for a Nova Scotia Open Data dataset by its 8-character identifier (e.g. '3nka-59nz'). Returns all column field names, data types, and descriptions — essential before calling query_dataset so you know the exact field names to use in $select and $where clauses.
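The schema lookup this tool describes matches the public Socrata "views" endpoint. As an illustration only (this is an assumption about the backing API, not the server's actual code, and `metadata_url` is a hypothetical helper name), the request URL could be built like this:

```python
# Hypothetical sketch: the public Socrata views endpoint returns a dataset's
# columns, data types, and descriptions, which is what this tool's
# description says it surfaces. Not taken from this server's source.

def metadata_url(dataset_id: str, domain: str = "data.novascotia.ca") -> str:
    """Build the Socrata views URL for a dataset's schema/metadata."""
    # The '8-character identifier' is 8 alphanumerics in xxxx-xxxx form.
    core = dataset_id.replace("-", "")
    if len(core) != 8 or dataset_id.count("-") != 1:
        raise ValueError("expected a Socrata xxxx-xxxx identifier")
    return f"https://{domain}/api/views/{dataset_id}.json"

print(metadata_url("3nka-59nz"))
# https://data.novascotia.ca/api/views/3nka-59nz.json
```

An agent (or a curious user) can hit that URL directly to see the same field names the tool would report.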

Parameters (JSON Schema)
dataset_id (required): 8-character Socrata dataset identifier (e.g. '3nka-59nz')

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses what the tool returns ('all column field names, data types, and descriptions'), compensating for the missing output schema. However, it omits mention of potential error conditions (e.g., invalid dataset ID) or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero waste. First sentence establishes purpose and input; second sentence discloses output and critical usage context. Information is front-loaded and every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input (100% schema coverage) and lack of output schema, the description appropriately compensates by detailing the return structure and explaining the operational relationship to query_dataset. No significant gaps remain for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description repeats the parameter example ('3nka-59nz') and identifier format already documented in the schema property description, adding no additional syntactic or semantic details beyond the structured schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb+resource ('Retrieve full schema and metadata') and clearly scopes it to Nova Scotia Open Data datasets by their 8-character identifier. It effectively distinguishes from siblings by contrasting with query_dataset (schema vs. data retrieval) and implying specificity vs. the broad listing/searching of the other siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool ('essential before calling query_dataset') and why ('so you know the exact field names to use in $select and $where clauses'), naming the specific sibling tool that depends on this one's output. This provides clear workflow guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories: List Nova Scotia Open Data Categories

Returns all dataset categories and popular tags available on the Nova Scotia Open Data portal. Use this first to discover valid category names before calling search_datasets with a category filter.

Parameters (JSON Schema)
No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Clearly discloses return values ('categories and popular tags') and implies read-only nature via 'Returns'. Minor gap: doesn't mention caching, rate limits, or auth requirements, though less critical for list operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences. First defines action/resource, second provides usage workflow. Zero redundancy, front-loaded with core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description appropriately explains what gets returned ('categories and popular tags'). Sufficient for a simple parameterless discovery tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters with 100% coverage, so baseline is 4. Description appropriately makes no parameter claims since none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Returns' with clear resource 'dataset categories and popular tags' and scope 'Nova Scotia Open Data portal'. Explicitly distinguishes from sibling 'search_datasets' by stating this provides valid category names for that filter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Use this first to discover valid category names before calling search_datasets with a category filter.' Clear sequencing when used with alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_dataset: Query a Nova Scotia Open Dataset

Run a SoQL query against a Nova Scotia Open Data dataset. SoQL is SQL-like.

Key clauses (combine with &):
$select=col1,col2 — choose columns
$where=field='value' — filter rows (use single quotes for strings)
$where=field like '%val%' — partial match
$order=field DESC — sort
$limit=50 — row count (default 25, max 50000)
$offset=50 — pagination
$group=field — group by (use with aggregate functions)
$q=search term — full-text search

Aggregates: count(*), sum(col), avg(col), min(col), max(col)

Examples:
$where=year='2024'&$order=total DESC&$limit=10
$select=department,count(*)&$group=department&$order=count(*) DESC
$where=area like '%Halifax%'&$limit=5

Always call get_dataset_metadata first to find exact field names.
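The "Do NOT URL-encode" rule in the soql parameter implies the server encodes clause values itself before hitting the SODA resource endpoint. As a hedged sketch (the endpoint pattern is the public SODA convention; `soda_url` is an illustrative helper, not this server's code), that translation might look like:

```python
from urllib.parse import quote

# Illustrative only: map a raw, un-encoded SoQL string (as this tool
# expects it) onto a Socrata SODA resource request, percent-encoding
# each clause value at the last moment.

def soda_url(dataset_id: str, soql: str,
             domain: str = "data.novascotia.ca") -> str:
    """Encode each '&'-joined clause value and append to the endpoint."""
    encoded = []
    for clause in soql.split("&"):
        key, _, value = clause.partition("=")  # e.g. "$where", "year='2024'"
        encoded.append(f"{key}={quote(value)}")
    return f"https://{domain}/resource/{dataset_id}.json?" + "&".join(encoded)

print(soda_url("3nka-59nz", "$where=area like '%Halifax%'&$limit=5"))
```

This naive split on "&" would mis-handle a literal "&" inside a $where string, which is one reason a real implementation would need more care.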

Parameters (JSON Schema)
soql (optional): SoQL query string. Multiple clauses joined with '&'. String values in $where must use single quotes: field='value'. Do NOT URL-encode — pass the raw string.
limit (optional): Max rows to return (1–1000, default 25). Ignored if $limit is in soql.
dataset_id (required): 8-character Socrata dataset identifier (e.g. '3nka-59nz')

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It thoroughly documents SoQL capabilities (clauses, aggregates, pagination limits), syntax constraints (single quotes for strings, & separators), and parameter precedence. Minor gap: no mention of rate limits, error responses, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear visual hierarchy: purpose statement, clause reference, aggregates list, examples block, and prerequisite warning. Every section serves the agent's operational needs. Examples are concrete and copy-pasteable. No redundant verbiage despite the length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter query tool with 100% schema coverage and no output schema, the description is comprehensive. It covers the query language specification, pagination behavior, filtering capabilities, and cross-tool workflow (metadata prerequisite). No gaps requiring documentation elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% providing a baseline of 3. The description adds substantial value by documenting the SoQL mini-language syntax, providing three concrete query examples, explaining the raw string vs URL-encoding requirement, and clarifying the interplay between the standalone limit parameter and $limit within the soql string.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise action statement: 'Run a SoQL query against a Nova Scotia Open Data dataset.' It identifies the specific dialect (SoQL), the target domain (Nova Scotia Open Data), and implicitly distinguishes from sibling get_dataset_metadata by focusing on data retrieval rather than schema discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Exceptional guidance including explicit prerequisite ('Always call get_dataset_metadata first to find exact field names'), parameter interaction rules ('Ignored if $limit is in soql'), and multiple concrete SoQL examples showing valid clause combinations and quoting requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_datasets: Search Nova Scotia Open Datasets

Search the Nova Scotia Open Data catalog (data.novascotia.ca) for datasets by keyword, category, or tag. Returns dataset names, IDs, descriptions, column names, and direct portal links. Use list_categories first to see valid category and tag names. Use the returned dataset ID with query_dataset or get_dataset_metadata for further exploration.
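One plausible backend for a search like this is Socrata's public Discovery API. Purely as an assumption-laden sketch (the endpoint and parameter mapping are taken from the public Discovery API, not from this server; `catalog_search_url` is a hypothetical name), the keyword/category search could be assembled as:

```python
from urllib.parse import urlencode

# Illustrative only: build a Socrata Discovery API request scoped to
# data.novascotia.ca. This server's actual implementation is not shown
# on this page and may differ.

def catalog_search_url(query: str = "", category: str = "",
                       limit: int = 10, offset: int = 0) -> str:
    params = {"domains": "data.novascotia.ca",
              "limit": limit, "offset": offset}
    if query:
        params["q"] = query              # free-text keyword search
    if category:
        params["categories"] = category  # exact name from list_categories
    return "https://api.us.socrata.com/api/catalog/v1?" + urlencode(params)

print(catalog_search_url(query="fisheries"))
```

The defaults mirror the schema below (limit 10, offset 0), which is why the workflow of pulling exact category names from list_categories first matters: the filter is an exact match, not fuzzy.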

Parameters (JSON Schema)
tag (optional): Filter by an exact tag name from list_categories (e.g. 'population', 'fisheries')
limit (optional): Maximum number of results to return (1–50, default 10)
query (optional): Free-text search query (e.g. 'population', 'fisheries', 'road network')
offset (optional): Offset for pagination (default 0)
category (optional): Filter by exact category name from list_categories (e.g. 'Health and Wellness', 'Lands, Forests and Wildlife', 'Crime and Justice', 'Population and Demographics')

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden and successfully discloses return structure ('Returns dataset names, IDs, descriptions, column names, and direct portal links'). However, it lacks explicit safety/read-only confirmation or operational limits like rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: function definition, return value disclosure, and workflow guidance. Zero redundancy; every sentence earns its place with specific actionable information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a search tool with 100% schema coverage and no output schema: it compensates by describing return values in text, explains the discovery workflow, and clarifies relationships to all three sibling tools. No gaps require external knowledge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds valuable context that category/tag values must come from list_categories first (workflow dependency), and maps 'keyword' concept to the 'query' parameter, providing semantic glue beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description provides specific verb ('Search'), resource ('Nova Scotia Open Data catalog'), and scope ('by keyword, category, or tag'). It clearly distinguishes this as the discovery entry point versus sibling metadata retrieval tools (get_dataset_metadata, query_dataset).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisite workflow ('Use list_categories first to see valid category and tag names') and clear forward navigation ('Use the returned dataset ID with query_dataset or get_dataset_metadata'). This creates a complete usage chain with siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
