Zoning Signal
Server Details
US municipal zoning intelligence — corridor analysis, place dossiers, named-pattern detection.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 8 of 8 tools scored.
Each tool has a clearly distinct purpose: pattern tracking, corridor/place description, product overview, listings, meeting records, and semantic search. No two tools overlap in functionality.
Most tools follow a verb_noun pattern (describe_*, list_*), but current_named_patterns, meeting_index, and semantic_search deviate slightly in structure. Still, the naming is clear and predictable overall.
Eight tools cover the core operations of a regional planning observatory without excessive specialization or missing essentials. The scope is well-balanced for an agent to navigate the domain.
The tool set provides a complete lifecycle: discovery (list_*, semantic_search), deep dive (describe_*), temporal analysis (meeting_index), pattern awareness (current_named_patterns), and context (describe_zoning_signal). No obvious gaps.
Available Tools
8 tools

current_named_patterns (Current Named Patterns), Grade: A
List every named pattern currently being tracked across the regional field. A named pattern is a coined recurring structure observed across multiple jurisdictions or multiple meetings (e.g., "The Quiet Revolution"). Returns pattern name, anchoring brief, brief URL, and spatial coverage.
No parameters.
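Because the tool takes no arguments, a call reduces to the bare MCP `tools/call` envelope. The sketch below is a minimal illustration of that request shape, assuming the generic JSON-RPC envelope from the MCP specification; the Streamable HTTP session handshake is omitted.

```python
import json

# Illustrative only: a "tools/call" JSON-RPC request for a parameterless tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "current_named_patterns",
        "arguments": {},  # the tool takes no parameters
    },
}

print(json.dumps(request, indent=2))
```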
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It explains the output (pattern name, brief, URL, coverage) and that patterns are currently tracked. It does not discuss limitations or freshness, but the tool is straightforward and read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. The parenthetical definition of a named pattern is efficient and helpful. The description is front-loaded with the key purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but the description fully explains the return fields (pattern name, anchoring brief, brief URL, spatial coverage). For a simple list tool with no nested objects, this is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so the baseline is 4. The description does not need to explain parameters, and it adds no extraneous parameter info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists named patterns, defines what a named pattern is, and specifies the return fields (pattern name, anchoring brief, URL, spatial coverage). This distinguishes it from sibling tools like describe_corridor or semantic_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the use case: retrieving all tracked patterns. It does not explicitly state when not to use it or mention alternatives, but the context of sibling tools (e.g., semantic_search for search, list_cities for cities) makes the purpose clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
describe_corridor (Describe Corridor), Grade: A
Return the dossier projection for a corridor, in the requested cognitive lens. Same lens enum and default as describe_place. Corridor projections surface cross-municipal dialectics and shared-infrastructure dynamics that no single place dossier captures.
| Name | Required | Description | Default |
|---|---|---|---|
| lens | No | The cognitive position to project. Defaults to "synthesis". Single-lens values surface a focused projection. | synthesis |
| slug | Yes | The corridor slug (e.g., "us-27-south-lake"). Use list_corridors to discover available slugs. | |
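A minimal sketch of assembling arguments for this tool, assuming the lens key can simply be omitted to fall back to the documented "synthesis" default. The helper name is hypothetical, and "synthesis" is the only lens value named on this page; other single-lens values are not enumerated here.

```python
import json

def corridor_arguments(slug: str, lens: str | None = None) -> dict:
    """Build the arguments for a describe_corridor call.

    Per the description, omitting lens yields the default "synthesis"
    projection, so the key is only included when a value is supplied.
    """
    args = {"slug": slug}
    if lens is not None:
        args["lens"] = lens
    return args

# Slug taken from the example in the parameter table above.
print(json.dumps(corridor_arguments("us-27-south-lake"), indent=2))
print(json.dumps(corridor_arguments("us-27-south-lake", lens="synthesis"), indent=2))
```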
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full responsibility for behavioral disclosure. It describes the tool as returning a dossier projection, which implies a read operation, but does not mention side effects, authentication requirements, or error handling. The behavioral transparency is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the main purpose, and every sentence adds valuable information. No redundancy or extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only two parameters and no output schema, the description explains what the tool returns and its unique value. However, it does not describe the output format or error behavior, leaving some gaps for an AI agent. It is minimally complete but lacks full detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters documented. The description adds value by referencing the sibling tool's lens enum and default, and by explaining that single-lens values surface focused projections. This provides meaningful context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a dossier projection for a corridor with a cognitive lens. It distinguishes from the sibling describe_place by explaining that corridor projections surface cross-municipal dialectics and shared-infrastructure dynamics that place dossiers do not capture.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context by noting the same lens enum and default as describe_place, implying that this tool is for corridors instead of places. However, it does not explicitly state when to use or not use this tool, nor does it list alternatives beyond the sibling reference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
describe_place (Describe Place), Grade: A
Return the dossier projection for a city, in the requested cognitive lens. Defaults to the synthesis projection (the multidimensional view that holds all lenses in superposition and names the dialectics). Pass a single-lens value to get the focused cognitive position — useful when the agent is acting on behalf of a user with a specific stake (developer underwriting, investor thesis, attorney precedent search, resident orientation).
| Name | Required | Description | Default |
|---|---|---|---|
| lens | No | The cognitive position to project. Defaults to "synthesis". Single-lens values surface a focused projection from a specific stakeholder position. | synthesis |
| slug | Yes | The place slug (e.g., "clermont-florida"). Use list_cities to discover available slugs. | |
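The same `tools/call` envelope as above, this time with arguments. The lens is passed explicitly as "synthesis", the documented default; the other single-lens values are not enumerated on this page, so none are shown. The request shape assumes the generic MCP JSON-RPC envelope.

```python
import json

# Sketch of a complete tools/call request for describe_place.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "describe_place",
        "arguments": {
            "slug": "clermont-florida",  # example slug from the parameter table
            "lens": "synthesis",         # explicit, but also the default when omitted
        },
    },
}

print(json.dumps(request, indent=2))
```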
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must cover behavioral traits. It explains the default behavior and the effect of lens values, but does not mention read-only nature, error responses, or permissions. Adequate but could be more thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences conveying the core function and parameter guidance without redundancy. Information is front-loaded, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description could benefit from briefly stating what the projection contains, but the concept of a 'dossier projection' is reasonably clear given the domain. The tool's purpose and parameters are well covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but the description adds value by elaborating on the 'lens' parameter with stakeholder examples and cross-referencing 'list_cities' for the slug, providing practical guidance beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a 'dossier projection' for a city, with a specific verb and resource. Differentiates from sibling tools like describe_corridor by focusing on cities and introduces the novel concept of cognitive lenses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly explains when to use the default synthesis projection versus a single-lens value, including concrete examples of stakeholder use cases. Lacks explicit 'when not to use' or alternative tool suggestions, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
describe_zoning_signal (Describe Zoning Signal), Grade: A
Return the canonical product description for Zoning Signal — what the observatory is, the four artifact types it publishes, the regional scope of current coverage, and the methodology. Call once per session to ground subsequent tool calls in canonical context.
No parameters.
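A sketch of the "call once per session" guidance: cache the canonical context so repeated grounding requests within one process do not re-call the tool. The `call_tool` helper is hypothetical and not part of this server or any specific SDK; it stands in for whatever MCP client the agent uses.

```python
from functools import lru_cache

def call_tool(name: str, arguments: dict) -> str:
    """Hypothetical client helper; wire this to your MCP client of choice."""
    raise NotImplementedError

@lru_cache(maxsize=1)
def zoning_signal_context() -> str:
    # Cached so the canonical product description is fetched only once per
    # process, mirroring the "call once per session" guidance above.
    return call_tool("describe_zoning_signal", {})
```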
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the intended behavior as a read-only retrieval of information, but does not explicitly state safety, idempotency, or lack of side effects. For a simple, parameterless tool, the description is adequate but lacks explicit behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the action and then providing a usage hint. Every sentence adds value, with no wasted words. It is appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless tool without an output schema, the description is complete enough: it specifies the content and when to call. However, it does not mention the output format (e.g., text structure), which could be beneficial but is not critical given sibling tools likely behave similarly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema is empty with zero parameters, and schema description coverage is 100% trivially. The description adds no parameter information, but none is required. Baseline for zero parameters is 4, and the description does not need to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a canonical product description for Zoning Signal, listing specific content: observatory, four artifact types, regional scope, and methodology. It distinguishes itself from sibling tools like 'describe_corridor' and 'describe_place' by focusing on a unique entity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Call once per session to ground subsequent tool calls in canonical context.' This instructs when to use the tool, but does not explicitly exclude scenarios or mention alternatives. The sibling tools imply alternative describe actions for other entities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_cities (List Cities), Grade: A
List every city with a published place dossier. Optionally filter by state. Returns city, state, slug, signal strength, signal direction, and the dossier URL. Use to discover the available place-level coverage before calling describe_place.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | Optional US state name (e.g., "Florida") to filter the result set. Omit for all cities across all states. | |
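Because the only parameter is optional, the main decision is whether to include it at all. A minimal sketch, assuming the filter key is simply omitted when no state is wanted:

```python
def list_cities_arguments(state: str | None = None) -> dict:
    # Omit "state" entirely to get every city across all states; pass a full
    # US state name (e.g., "Florida") to filter, per the parameter table.
    return {} if state is None else {"state": state}

assert list_cities_arguments() == {}
assert list_cities_arguments("Florida") == {"state": "Florida"}
```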
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided; the description discloses the return fields and the optional filter but does not mention pagination or rate limits, which is acceptable for a simple list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences covering purpose, filter, return fields, and usage hint. No wasted words, front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one optional parameter and no output schema, the description explains the return fields and usage adequately. It could mention ordering or limits, but that is not necessary given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; the description adds value by noting that state is optional, that it takes a US state name, and by providing an example, going beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'every city with a published place dossier', and distinguishes the tool from siblings like describe_place and list_corridors.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use to discover the available place-level coverage before calling describe_place', providing a clear usage context and a hint about an alternative tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_corridors (List Corridors), Grade: A
List every published corridor page. A corridor is the cross-municipal economic-topology view — the cross-jurisdiction read on a shared infrastructure spine, aquifer, or commercial gravity field. Returns name, slug, constituent cities, primary axis, and URL.
No parameters.
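The tool descriptions suggest a discovery pattern: list corridors first, then drill into one with describe_corridor using its slug. The sketch below illustrates that chaining; the entry shape and values are assumptions for illustration, since the description lists the fields (name, slug, constituent cities, primary axis, URL) but not the exact JSON structure or key names.

```python
# Assumed response shape, populated with placeholder values.
sample_corridors = [
    {
        "name": "US-27 South Lake",
        "slug": "us-27-south-lake",
        "cities": ["Clermont"],
        "primary_axis": "US-27",
        "url": "https://example.invalid/corridors/us-27-south-lake",
    },
]

# Chain each discovered slug into a follow-up describe_corridor call.
follow_ups = [
    {"name": "describe_corridor", "arguments": {"slug": c["slug"]}}
    for c in sample_corridors
]
print(follow_ups)
```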
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It states the return fields but does not disclose any side effects, rate limits, or confirm read-only behavior. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the action, followed by concept and return fields. No extraneous information, every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless list tool with no output schema, the description covers the purpose, concept, and return fields. Missing details on pagination or ordering, but acceptable for this simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters with 100% coverage. The description adds value by explaining the corridor concept, which aids understanding beyond the schema. Baseline for 0 params is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists all published corridor pages and explains the concept of a corridor. It distinguishes from siblings like describe_corridor and list_cities by specifying it's a cross-municipal economic-topology view.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for getting a comprehensive list of corridors but does not provide explicit guidance on when to use alternatives or when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meeting_index (Meeting Index), Grade: A
Return meeting readings for a specific city across an optional date range. A meeting reading is a plain-English read of one harvested planning-board, council, or commission meeting, with signal extraction and entity mapping. Use to drill from a city or corridor into the temporal record.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name (e.g., "Clermont"). Case-insensitive. | |
| to_date | No | Inclusive upper bound (ISO 8601 date). Omit for the latest reading. | |
| from_date | No | Inclusive lower bound (ISO 8601 date). Omit to span back to the earliest reading. | |
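Since both bounds are ISO 8601 dates, a caller can validate them before issuing the call. A minimal sketch, assuming omitted bounds are simply left out of the arguments; the dates used are arbitrary examples, not values from this listing.

```python
from datetime import date

def meeting_index_arguments(city: str, from_date: str | None = None,
                            to_date: str | None = None) -> dict:
    """Validate optional ISO 8601 bounds before calling meeting_index.

    Both bounds are inclusive; omitting to_date asks for the latest reading
    and omitting from_date spans back to the earliest, per the table above.
    """
    args: dict = {"city": city}  # city matching is case-insensitive
    for key, value in (("from_date", from_date), ("to_date", to_date)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError on malformed input
            args[key] = value
    return args

print(meeting_index_arguments("Clermont", from_date="2024-01-01", to_date="2024-06-30"))
```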
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses that the tool returns plain-English reads with signal extraction and entity mapping, indicating a read-only operation. However, it lacks details on behavior such as error handling, performance limits, or what happens for missing data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences. The first sentence states the core action, and the second provides definition and use case. No redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, no output schema, and no annotations, the description adequately covers purpose, parameters, and use context. It explains what a meeting reading is and how to use the date range. Minor gaps: no mention of output format or empty result behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds context (optional date range, specific city) but does not significantly enhance parameter meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Return' and resource 'meeting readings' with precise context (city, date range, and definition of meeting reading). It clearly distinguishes from sibling tools like list_cities or describe_place, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'Use to drill from a city or corridor into the temporal record,' providing a clear use case. However, it does not mention when not to use this tool or compare with alternatives like semantic_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
semantic_search (Semantic Search), Grade: A
Semantic search across the full corpus — every place dossier, corridor signal, meeting reading, and named-pattern brief. Returns results ranked by cosine similarity in a 1024-dimensional embedding space (Voyage AI 4 + Supabase pgvector). Use when the agent does not know the canonical entity slug or named-pattern title in advance — the search returns the readings whose semantic structure best matches the natural-language query, with type, title, similarity, and resolved URL per hit. Threshold 0.55, top 12.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | The natural-language query. A phrase, an entity name, or a thematic concept all work. Asymmetric query-time embedding handles short queries cleanly. Maximum 500 characters. | |
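A sketch of client-side handling for the documented constraints: the 500-character query limit and the 0.55 similarity threshold. The hit field names are assumptions inferred from the description (type, title, similarity, resolved URL), and the query string is an arbitrary example; the server already applies the threshold and top-12 cut, so the filter shown is only a defensive re-check.

```python
MAX_QUERY_CHARS = 500   # documented maximum query length
THRESHOLD = 0.55        # documented similarity cutoff (top 12 returned)

def semantic_search_arguments(q: str) -> dict:
    if len(q) > MAX_QUERY_CHARS:
        raise ValueError("query exceeds the documented 500-character maximum")
    return {"q": q}

def confident_hits(hits: list[dict]) -> list[dict]:
    # Field name "similarity" is an assumption, not a documented key.
    return [h for h in hits if h.get("similarity", 0.0) >= THRESHOLD]

print(semantic_search_arguments("warehouse rezonings along a shared highway spine"))
```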
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses embedding model (Voyage AI 4, 1024-dim), database (pgvector), ranking method (cosine similarity), and constraints (threshold, limit). Lacks mention of read-only nature, but that is implied for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences efficiently convey purpose, technique, usage, and output. Front-loaded with core function, followed by technical details and usage guidance. No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (searching multiple sources with embeddings), the description is thorough: it lists all searchable content, explains ranking, output fields, threshold, and limit. No output schema, but hits format is described. Adequate for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Single parameter 'q' has full schema description. Tool description adds value by explaining acceptable query types (phrases, entities, concepts), character limit (500), and asymmetric embedding behavior, enriching understanding beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs semantic search across multiple corpora, lists the sources, and explains that results are ranked by cosine similarity. It distinguishes itself from siblings by specifying usage when exact identifiers are unknown.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when the agent does not know the canonical entity slug or named-pattern title in advance,' providing a clear when-to-use condition. Additionally mentions threshold (0.55) and top-12 results, guiding expectations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.