Nova Scotia Data Explorer
Server Details
Search, explore, and analyze hundreds of datasets from the Nova Scotia government's Open Data portal
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
4 tools

get_dataset_metadata — Get Nova Scotia Dataset Metadata
Retrieve full schema and metadata for a Nova Scotia Open Data dataset by its 8-character identifier (e.g. '3nka-59nz'). Returns all column field names, data types, and descriptions — essential before calling query_dataset so you know the exact field names to use in $select and $where clauses.
| Name | Required | Description | Default |
|---|---|---|---|
| dataset_id | Yes | 8-character Socrata dataset identifier (e.g. '3nka-59nz') | |
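Socrata portals conventionally expose view metadata at `/api/views/{id}.json`; the tool abstracts this, but as an illustrative sketch (assuming that endpoint and a public dataset, no auth), the column summary it returns could be derived like so:

```python
import json
from urllib.request import urlopen

PORTAL = "https://data.novascotia.ca"  # portal assumed by this connector

def summarize_columns(metadata: dict) -> list[tuple[str, str]]:
    """Extract (field_name, data_type) pairs from a Socrata view-metadata dict."""
    return [(c["fieldName"], c["dataTypeName"]) for c in metadata.get("columns", [])]

def fetch_metadata(dataset_id: str) -> dict:
    # Socrata view metadata lives at /api/views/{id}.json for public datasets
    with urlopen(f"{PORTAL}/api/views/{dataset_id}.json") as resp:
        return json.load(resp)

# Example (requires network access):
# for name, dtype in summarize_columns(fetch_metadata("3nka-59nz")):
#     print(name, dtype)
```

The field names surfaced here are exactly what `$select` and `$where` clauses in query_dataset expect.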
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It usefully details return contents (column names, types, descriptions) but omits operational behaviors like error handling (e.g., invalid dataset_id), authentication requirements, or caching characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two highly efficient sentences with zero redundancy. First sentence covers purpose and input; second covers output and usage context. Perfectly front-loaded and appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 1 parameter, no nested objects, and no output schema, the description adequately compensates by detailing the return structure (field names, types, descriptions). Would benefit from mentioning error cases (e.g., invalid ID), but otherwise complete for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description largely repeats parameter details already in the schema (8-character format, example ID). With schema doing the heavy lifting, baseline 3 is appropriate; description doesn't add significant semantic depth beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with 'Retrieve full schema and metadata' clearly identifying the resource (Nova Scotia Open Data dataset) and action. The mention of '8-character identifier' and return values distinguishes this from sibling search/list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Strong workflow guidance establishing this as a prerequisite to query_dataset ('essential before calling query_dataset'). Explains why it's needed ('so you know the exact field names'). Lacks explicit 'when not to use' guidance, but clearly positions it in the data exploration workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_categories — List Nova Scotia Open Data Categories
Returns all dataset categories and popular tags available on the Nova Scotia Open Data portal. Use this first to discover valid category names before calling search_datasets with a category filter.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It discloses what data is returned ('categories and popular tags'), but lacks operational details like rate limits, authentication requirements, or response format details that would be necessary for a complete behavioral picture.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences: first defines the function, second provides usage context. Zero redundancy and properly front-loaded with the essential purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameter-less discovery tool without output schema, the description adequately covers what is returned (categories/tags) and the discovery workflow. A 5 would require more detail on the return structure or pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, triggering the baseline score of 4. The description does not need to compensate for parameter documentation since there are none to describe.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Returns' with clear resources 'dataset categories and popular tags' and specifies the system 'Nova Scotia Open Data portal'. It implicitly distinguishes from siblings by positioning this as a prerequisite to search_datasets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this first to discover valid category names before calling search_datasets', providing clear temporal guidance (when to use first) and naming the specific sibling tool relationship.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query_dataset — Query a Nova Scotia Open Dataset
Run a SoQL query against a Nova Scotia Open Data dataset. SoQL is SQL-like.
Key clauses (combine with &):
- $select=col1,col2 — choose columns
- $where=field='value' — filter rows (use single quotes for strings)
- $where=field like '%val%' — partial match
- $order=field DESC — sort
- $limit=50 — row count (default 25, max 50000)
- $offset=50 — pagination
- $group=field — group by (use with aggregate functions)
- $q=search term — full-text search
Aggregates: count(*), sum(col), avg(col), min(col), max(col)
Examples:
- $where=year='2024'&$order=total DESC&$limit=10
- $select=department,count(*)&$group=department&$order=count(*) DESC
- $where=area like '%Halifax%'&$limit=5
Always call get_dataset_metadata first to find exact field names.
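The clause syntax above can be captured in a small helper that joins clauses with '&'. This is a hypothetical convenience, not part of the server; values are passed through raw, matching the tool's "do NOT URL-encode" rule:

```python
def build_soql(**clauses: str) -> str:
    """Join SoQL clauses into one '&'-separated string.

    Keyword names map to $-prefixed clauses (select, where, order, group,
    limit, offset, q); values are passed through raw, NOT URL-encoded.
    """
    order = ["select", "where", "order", "group", "limit", "offset", "q"]
    parts = [f"${k}={clauses[k]}" for k in order if k in clauses]
    return "&".join(parts)

soql = build_soql(where="year='2024'", order="total DESC", limit="10")
# -> "$where=year='2024'&$order=total DESC&$limit=10"
```

Note that string values inside `where` keep their single quotes, exactly as the soql parameter requires.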
| Name | Required | Description | Default |
|---|---|---|---|
| soql | No | SoQL query string. Multiple clauses joined with '&'. String values in $where must use single quotes: field='value'. Do NOT URL-encode — pass the raw string. | |
| limit | No | Max rows to return (1–1000, default 25). Ignored if $limit is in soql. | |
| dataset_id | Yes | 8-character Socrata dataset identifier (e.g. '3nka-59nz') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It explains SoQL syntax constraints (single quotes for strings, 'Do NOT URL-encode'), pagination behavior ($offset), limit interactions ('Ignored if $limit is in soql'), and aggregate functions. Minor gap: doesn't mention read-only nature or error response formats.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Lengthy but appropriately so for teaching a query language. Well-structured with visual hierarchy: preamble, clause reference list, aggregates section, examples block, and prerequisite note. Each section earns its place; no redundant filler. Minor deduction for density, but necessary for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a query tool lacking output schema: covers the domain-specific language (SoQL), prerequisite workflow, and parameter interactions. Missing only output format specification (JSON/CSV), but the row/column model is implied by the $select clause documentation. Appropriate for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3), but description adds substantial value beyond the schema: three concrete examples showing valid query strings, detailed syntax rules for each clause type ($select, $where, etc.), and aggregate function documentation. The examples demonstrate how to construct complex multi-clause queries that the raw schema cannot convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific action 'Run a SoQL query' and resource 'Nova Scotia Open Data dataset', clearly distinguishing it from sibling tools get_dataset_metadata (structure) and search_datasets (discovery). It explicitly names the query language (SoQL), setting clear expectations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit procedural prerequisite: 'Always call get_dataset_metadata first to find exact field names,' directly naming the sibling tool and establishing mandatory workflow order. Also implies when to use different clauses (filtering, sorting, aggregating) through the SoQL syntax reference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_datasets — Search Nova Scotia Open Datasets
Search the Nova Scotia Open Data catalog (data.novascotia.ca) for datasets by keyword, category, or tag. Returns dataset names, IDs, descriptions, column names, and direct portal links. Use list_categories first to see valid category and tag names. Use the returned dataset ID with query_dataset or get_dataset_metadata for further exploration.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | No | Filter by an exact tag name from list_categories (e.g. 'population', 'fisheries') | |
| limit | No | Maximum number of results to return (1–50, default 10) | |
| query | No | Free-text search query (e.g. 'population', 'fisheries', 'road network') | |
| offset | No | Offset for pagination (default 0) | |
| category | No | Filter by exact category name from list_categories (e.g. 'Health and Wellness', 'Lands, Forests and Wildlife', 'Crime and Justice', 'Population and Demographics') | |
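Since results are capped at 50 per call, covering a larger result set means stepping offset by limit across successive calls. A sketch of the argument dicts for each page (hypothetical helper; the actual calls go through your MCP client):

```python
def page_args(query: str, total: int, limit: int = 50) -> list[dict]:
    """Build search_datasets argument dicts covering `total` results, `limit` at a time."""
    return [
        {"query": query, "limit": limit, "offset": offset}
        for offset in range(0, total, limit)
    ]

pages = page_args("fisheries", total=120)
# Three pages, at offsets 0, 50, and 100
```

Each dict is passed as the tool's arguments; stop early if a page comes back with fewer than `limit` results.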
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the data source (data.novascotia.ca), return value structure ('dataset names, IDs, descriptions, column names, and direct portal links'), and workflow constraints. Lacks rate limit or detailed pagination behavior disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 defines purpose and returns, sentence 2 states prerequisites, sentence 3 defines next steps. Every clause earns its place; information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with full schema coverage and no output schema, the description adequately compensates by detailing return values and workflow. Completeness is strong but falls just short of exhaustive (could explicitly characterize pagination behavior beyond parameter definitions).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds crucial semantic constraints: category and tag parameters require pre-validation via list_categories ('Use list_categories first'), adding value beyond the schema's basic type descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Search'), resource ('Nova Scotia Open Data catalog'), and search modalities ('by keyword, category, or tag'). It also distinguishes from siblings by specifying what it returns versus what other tools do (e.g., 'Use the returned dataset ID with query_dataset...').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisites ('Use list_categories first to see valid category and tag names') and clear next-step workflow ('Use the returned dataset ID with query_dataset or get_dataset_metadata'). This provides explicit when-to-use guidance and alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!