
nyc-property-intel

by ccedacero

get_nypd_crime

Retrieve NYPD crime complaints near a property to assess neighborhood safety. Filter by felony, misdemeanor, or offense type to evaluate risk for real estate decisions.

Instructions

Get NYPD crime complaints within a radius of a property.

Queries the local NYPD complaint database (NYC Open Data) using a
geospatial bounding-box search centered on the property's lat/lon. Returns
all complaint types — felonies, misdemeanors, and violations — filed within
the specified radius. Falls back to the Socrata API if the local table is unavailable.
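The fallback path itself is not shown here. As a sketch, a Socrata (SODA) query for NYPD complaint data near a point might be built like this — the dataset id and the `lat_lon` column name are assumptions for illustration, not details taken from this tool's source:

```python
# Hypothetical Socrata fallback query builder. The dataset id below is
# believed to be the NYPD Complaint Data Historic dataset, but treat it
# and the `lat_lon` column name as assumptions.
DATASET_URL = "https://data.cityofnewyork.us/resource/qgea-i56i.json"

def socrata_params(lat: float, lon: float, radius_m: int, limit: int = 50) -> dict:
    # within_circle(column, lat, lon, radius_in_meters) is a standard
    # SoQL geospatial function; $limit caps the number of rows returned.
    return {
        "$where": f"within_circle(lat_lon, {lat}, {lon}, {radius_m})",
        "$limit": limit,
    }

# Default 300 m radius around a Midtown Manhattan point
params = socrata_params(40.7484, -73.9857, 300)
```

A client would pass `params` to an HTTP GET against `DATASET_URL`; an app token header avoids Socrata's stricter anonymous throttling.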

Uses the property's PLUTO coordinates (lot centroid) for accuracy.
Default radius of 300 m covers roughly 3 city blocks in any direction.
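The tool's own geometry is not published here, but a bounding-box search like the one described typically converts the radius in meters into degree offsets around the lot centroid. A minimal sketch (function name and constants are illustrative, not the server's code):

```python
import math

def bounding_box(lat: float, lon: float, radius_m: float):
    """Approximate a box extending radius_m in each direction from a point.

    One degree of latitude is roughly 111,320 m everywhere; one degree of
    longitude shrinks by cos(latitude) away from the equator.
    """
    dlat = radius_m / 111_320
    dlon = radius_m / (111_320 * math.cos(math.radians(lat)))
    return (lat - dlat, lon - dlon, lat + dlat, lon + dlon)

# Lot centroid near 350 5th Ave with the default 300 m radius
south, west, north, east = bounding_box(40.7484, -73.9857, 300)
```

Rows whose coordinates fall inside the box can then be filtered by exact distance if needed; at a 300 m radius the box-vs-circle error is negligible for counting complaints.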

Use this to assess neighborhood safety for buyers, lenders, or underwriters.
Compare felony vs misdemeanor breakdown, trend over years, and dominant
offense types (assault, burglary, grand larceny, etc.).
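The comparisons above reduce to simple counting over the returned complaints. The record fields below are hypothetical stand-ins for whatever shape the tool actually returns:

```python
from collections import Counter

# Hypothetical complaint records; field names are assumptions for illustration.
complaints = [
    {"law_category": "FELONY", "offense": "GRAND LARCENY", "year": 2023},
    {"law_category": "MISDEMEANOR", "offense": "PETIT LARCENY", "year": 2023},
    {"law_category": "FELONY", "offense": "ASSAULT", "year": 2024},
]

by_category = Counter(c["law_category"] for c in complaints)  # felony vs misdemeanor
by_offense = Counter(c["offense"] for c in complaints)        # dominant offense types
by_year = Counter(c["year"] for c in complaints)              # trend over years
```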

Provide either `address` OR `bbl` (not both).

Args:
    address: Street address, e.g. "350 5th Ave, Manhattan".
    bbl: 10-digit NYC BBL. Coordinates resolved from PLUTO.
    radius_meters: Search radius in meters (50–800, default 300 ≈ 3 blocks).
    law_category: Filter by "FELONY", "MISDEMEANOR", or "VIOLATION".
                  Case-insensitive.
    offense: Filter by offense keyword, e.g. "ASSAULT", "BURGLARY",
             "GRAND LARCENY", "ROBBERY". Case-insensitive.
    since_year: Return only complaints from this year onward (2006–present).
    limit: Max complaints to return (1–200, default 50).
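Taken together, the documented constraints can be checked before invoking the tool. The validator below is a hypothetical client-side helper that mirrors the stated rules; it is not part of the server:

```python
def validate_args(address=None, bbl=None, radius_meters=300,
                  law_category=None, since_year=None, limit=50):
    """Pre-check get_nypd_crime arguments against the documented constraints."""
    # Exactly one of address / bbl must be provided
    if (address is None) == (bbl is None):
        raise ValueError("Provide either address OR bbl (not both)")
    if bbl is not None and len(str(bbl)) != 10:
        raise ValueError("bbl must be a 10-digit NYC BBL")
    if not 50 <= radius_meters <= 800:
        raise ValueError("radius_meters must be between 50 and 800")
    if law_category is not None and \
            law_category.upper() not in {"FELONY", "MISDEMEANOR", "VIOLATION"}:
        raise ValueError("law_category must be FELONY, MISDEMEANOR, or VIOLATION")
    if since_year is not None and since_year < 2006:
        raise ValueError("since_year must be 2006 or later")
    if not 1 <= limit <= 200:
        raise ValueError("limit must be between 1 and 200")
    return True
```

Failing fast on these rules saves a round trip and produces clearer errors than whatever the server would return.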

Input Schema

Name            Required    Description    Default
--------------  ----------  -------------  --------
address         No
bbl             No
radius_meters   No
law_category    No
offense         No
since_year      No
limit           No

Output Schema

No arguments

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the geospatial bounding-box search, use of PLUTO coordinates, default radius, and fallback behavior. It mentions filtering capabilities but does not address auth needs, rate limits, or error cases. Still, it provides meaningful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a front-loaded purpose paragraph followed by detailed parameter list. Every sentence adds value, though some phrases could be trimmed. Overall, it is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's seven parameters and its output schema, the description covers the essential aspects: purpose, data source, filtering parameters, and use cases. It lacks info on output format and error handling, but these are partly addressed by the output schema and parameter defaults.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description compensates fully. It explains each parameter in detail: address/bbl mutual exclusivity, radius range and default, law_category case-insensitive values, offense examples, since_year range, and limit range. This adds significant meaning beyond the input schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get NYPD crime complaints within a radius of a property.' It specifies the data source, geospatial approach, and types of complaints returned. It distinguishes itself from sibling tools (e.g., get_311_complaints) by focusing on crime data, making selection unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases: 'Use this to assess neighborhood safety for buyers, lenders, or underwriters.' It also explains the fallback behavior to Socrata API. However, it does not directly compare with alternatives or specify when not to use, though the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ccedacero/nyc-property-intel'
