# get_results
Check the status of a data collection job and retrieve completed environmental site results from 80+ US federal sources, formatted as a scannable engineering document with critical permitting flags.
## Instructions
Check the status of a data collection job and retrieve results. Poll every 10 seconds until status is "completed".
When complete, present results as a SCANNABLE ENGINEERING DOCUMENT — not a data dump. An engineer needs to answer: (1) What kills the project? (2) What complicates permitting? (3) What's the baseline context?
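The polling contract above can be sketched as a small client-side helper. This is a minimal sketch, not part of the source: it assumes a generic `callTool(name, args)` function supplied by whatever MCP client is in use, and `pollForResults` with its options object is an illustrative name.

```js
// Minimal polling sketch. `callTool` is an assumed client function that
// invokes an MCP tool by name and returns its parsed JSON response.
async function pollForResults(callTool, jobId, { intervalMs = 10_000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await callTool('get_results', { jobId });
    if (result.status === 'completed') return result; // done; caller presents results
    if (result.status === 'failed') throw new Error(`Job ${jobId} failed`);
    // wait before the next poll (the tool asks for a 10-second interval)
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} still not completed after ${maxAttempts} polls`);
}
```

The `maxAttempts` cap is a defensive choice so a stuck job cannot poll forever; adjust it to the expected collection time.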
CRITICAL FLAGS — scan results FIRST and lead with these if present:
FEMA Zone AE/AO/VE or SFHA=true → "Site intersects SFHA — Zone [X]" (CRITICAL)
Floodway present → "Regulatory floodway — no-rise certification required" (CRITICAL)
Superfund count > 0 → "NPL Superfund site within search radius" (CRITICAL)
Wetland count > 10 → "High wetland density — Section 404 permitting likely" (HIGH)
303(d) impaired water → "TMDL required, stricter discharge limits" (HIGH)
Brownfields > 3 → "Phase I ESA recommended" (MODERATE)
Soils with HSG D → "Poorly draining soils — stormwater design impact" (MODERATE)
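The flag rules above are mechanical enough to express in code. Below is a hedged sketch of how a client might derive a subset of these flags; every field name on the `results` object (`femaZones`, `superfundCount`, `wetlandCount`, `brownfieldCount`, `soils`) is an assumption about the response shape, not confirmed by the source.

```js
// Sketch only: field names are assumed, and only a subset of the
// documented flag rules is shown.
function deriveCriticalFlags(results) {
  const flags = [];
  const sfhaZones = ['AE', 'AO', 'VE'];
  for (const zone of results.femaZones ?? []) {
    if (zone.sfha || sfhaZones.includes(zone.zone)) {
      flags.push({ level: 'CRITICAL', note: `Site intersects SFHA — Zone ${zone.zone}` });
    }
    if (zone.floodway) {
      flags.push({ level: 'CRITICAL', note: 'Regulatory floodway — no-rise certification required' });
    }
  }
  if ((results.superfundCount ?? 0) > 0) {
    flags.push({ level: 'CRITICAL', note: 'NPL Superfund site within search radius' });
  }
  if ((results.wetlandCount ?? 0) > 10) {
    flags.push({ level: 'HIGH', note: 'High wetland density — Section 404 permitting likely' });
  }
  if ((results.brownfieldCount ?? 0) > 3) {
    flags.push({ level: 'MODERATE', note: 'Phase I ESA recommended' });
  }
  if ((results.soils ?? []).some((s) => s.hsg === 'D')) {
    flags.push({ level: 'MODERATE', note: 'Poorly draining soils — stormwater design impact' });
  }
  return flags;
}
```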
SECTION ORDER (skip sections with no data):

1. Site overview — address, coordinates, county, elevation range, land cover, area
2. FEMA flood zones — table: zone | subtype | SFHA | risk level
3. Soils — per unit: HSG, drainage class, slope, flood frequency, building/septic limitations
4. Atlas 14 rainfall — IDF table (rows: 15min, 1hr, 6hr, 12hr, **24hr**, 3day; cols: 2yr–100yr). **Bold the 24hr row.** Include Atlas 14 volume.
5. Natural hazard risk — NRI ratings by hazard type
6. Wetlands — count, type breakdown, Section 404 note
7. Water resources — streams with distances, impaired waters
8. Contamination — Superfund, brownfields (with distances/status), USTs, NPDES
9. Seismic & dams — ASCE 7-22 params (SDS, SD1, SDC), nearby dams with hazard rating
10. Infrastructure — hospitals, fire stations, schools, EMS counts
11. Demographics — population, median income, vacancy rate
FORMATTING:

- Use markdown tables for flood zones, rainfall IDF, soils, brownfields
- Cite source agencies (FEMA, NRCS, NOAA Atlas 14, EPA), not just "GeoTap"
- Include distances and bearings for nearby features (e.g., "0.8 mi NW")
- `_noData: true` means queried but nothing found — mention where relevant ("no Superfund sites" is positive)
- End with: "Data sourced from US federal agencies via GeoTap. Verify critical findings before engineering or regulatory decisions."
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| jobId | Yes | Job ID returned from collect_site_data | (none) |
## Implementation Reference
- `src/index.js:176-272` (handler): The `get_results` tool handler. Registered with `McpServer` via `server.tool('get_results', ...)`. Accepts a `jobId` parameter, polls `GET /site-analysis/data-collect/{jobId}`, and returns results with presentation guidance when status is `'completed'`.

```js
// ── Tool: get_results ────────────────────────────────────────────────
server.tool(
  'get_results',
  `Check the status of a data collection job and retrieve results. Poll every 10 seconds until status is "completed".

When complete, present results as a SCANNABLE ENGINEERING DOCUMENT — not a data dump. An engineer needs to answer: (1) What kills the project? (2) What complicates permitting? (3) What's the baseline context?

CRITICAL FLAGS — scan results FIRST and lead with these if present:
- FEMA Zone AE/AO/VE or SFHA=true → "Site intersects SFHA — Zone [X]" (CRITICAL)
- Floodway present → "Regulatory floodway — no-rise certification required" (CRITICAL)
- Superfund count > 0 → "NPL Superfund site within search radius" (CRITICAL)
- Wetland count > 10 → "High wetland density — Section 404 permitting likely" (HIGH)
- 303(d) impaired water → "TMDL required, stricter discharge limits" (HIGH)
- Brownfields > 3 → "Phase I ESA recommended" (MODERATE)
- Soils with HSG D → "Poorly draining soils — stormwater design impact" (MODERATE)

SECTION ORDER (skip sections with no data):
1. Site overview — address, coordinates, county, elevation range, land cover, area
2. FEMA flood zones — table: zone | subtype | SFHA | risk level
3. Soils — per unit: HSG, drainage class, slope, flood frequency, building/septic limitations
4. Atlas 14 rainfall — IDF table (rows: 15min, 1hr, 6hr, 12hr, **24hr**, 3day; cols: 2yr–100yr). **Bold the 24hr row.** Include Atlas 14 volume.
5. Natural hazard risk — NRI ratings by hazard type
6. Wetlands — count, type breakdown, Section 404 note
7. Water resources — streams with distances, impaired waters
8. Contamination — Superfund, brownfields (with distances/status), USTs, NPDES
9. Seismic & dams — ASCE 7-22 params (SDS, SD1, SDC), nearby dams with hazard rating
10. Infrastructure — hospitals, fire stations, schools, EMS counts
11. Demographics — population, median income, vacancy rate

FORMATTING:
- Use markdown tables for flood zones, rainfall IDF, soils, brownfields
- Cite source agencies (FEMA, NRCS, NOAA Atlas 14, EPA) not just "GeoTap"
- Include distances and bearings for nearby features (e.g., "0.8 mi NW")
- "_noData: true" means queried but nothing found — mention where relevant ("no Superfund sites" is positive)
- End with: "Data sourced from US federal agencies via GeoTap. Verify critical findings before engineering or regulatory decisions."`,
  {
    jobId: z.string().describe('Job ID returned from collect_site_data'),
  },
  async (params) => {
    try {
      const result = await callApi(`/site-analysis/data-collect/${encodeURIComponent(params.jobId)}`, 'GET', {});
      const response = { ...result };

      // Add presentation guidance when results are complete
      if (result.status === 'completed') {
        response._meta = {
          sources: '80+ US federal agencies (FEMA, USGS, NOAA, EPA, NRCS, USFWS, USACE, DOE, DOT, CDC, Census, and more)',
          retrievedAt: new Date().toISOString(),
          disclaimer: 'Data sourced from US federal agencies via GeoTap. Always verify critical data against authoritative sources before making engineering or regulatory decisions.',
        };
        response._presentationGuide = {
          instructions: 'Present as a scannable engineering document. Lead with critical flags, then structured sections. Follow the HOW TO PRESENT RESULTS instructions.',
          priorityOrder: [
            '1. Critical flags (floodway, SFHA, Superfund, high wetland density)',
            '2. FEMA flood zones (table: zone, subtype, SFHA, risk)',
            '3. Soils (HSG, drainage class, slope, limitations)',
            '4. Atlas 14 rainfall (IDF table — bold 24hr row)',
            '5. NRI hazard risk (badge grid by hazard type)',
            '6. Wetlands (count, types, Section 404 note)',
            '7. Water resources (streams, impaired waters with distances)',
            '8. Contamination (Superfund, brownfields, USTs with distances)',
            '9. Seismic & dams (ASCE 7-22 params, nearby dams)',
            '10. Infrastructure (hospitals, fire, schools, EMS counts)',
            '11. Solar/energy & demographics (collapsed/secondary)',
          ],
          tips: [
            'Lead with what kills or complicates the project — not context',
            'Use tables for flood zones, rainfall IDF, brownfields, soils',
            'Bold the 24-hr rainfall row — most referenced for stormwater design',
            'For soils, always show HSG and drainage class (drives CN calculation)',
            'Cite source agencies (FEMA, NRCS, NOAA Atlas 14, EPA)',
            '"No Superfund sites nearby" is positive — mention it',
          ],
        };
      } else {
        response._instructions = `Job status: ${result.status}. Poll again in 10 seconds until status is "completed".`;
      }

      return { content: [{ type: 'text', text: JSON.stringify(response, null, 2) }] };
    } catch (error) {
      if (error instanceof StructuredApiError) {
        return { content: [{ type: 'text', text: JSON.stringify(error.details, null, 2) }], isError: true };
      }
      return { content: [{ type: 'text', text: JSON.stringify({ error: true, message: error.message }, null, 2) }], isError: true };
    }
  }
);
```

- `src/index.js:176-272` (registration): Registration of the `get_results` tool via the `server.tool()` call, with the name `'get_results'`, schema definition, and handler function.
- `src/index.js:212-214` (schema): Input schema for `get_results`: a single required `jobId` parameter (string), described as 'Job ID returned from collect_site_data'.
```js
{
  jobId: z.string().describe('Job ID returned from collect_site_data'),
},
```

- `src/api.js:121-224` (helper): The `callApi` helper used by `get_results` to make HTTP requests to the GeoTap backend API.
```js
export async function callApi(endpoint, method, params) {
  const headers = { 'Content-Type': 'application/json', 'User-Agent': 'geotap-mcp-server/2.2.1' };
  headers['X-API-Key'] = API_KEY;

  // Substitute path parameters like {siteId} with values from params
  let resolvedEndpoint = endpoint;
  const remainingParams = { ...params };
  const pathParamRegex = /\{(\w+)\}/g;
  let match;
  while ((match = pathParamRegex.exec(endpoint)) !== null) {
    const paramName = match[1];
    if (remainingParams[paramName] !== undefined) {
      resolvedEndpoint = resolvedEndpoint.replace(`{${paramName}}`, encodeURIComponent(remainingParams[paramName]));
      delete remainingParams[paramName];
    }
  }

  let url = `${BASE_URL}${resolvedEndpoint}`;
  const fetchOptions = { method, headers };
  if (method === 'GET' && remainingParams && Object.keys(remainingParams).length > 0) {
    const searchParams = new URLSearchParams();
    for (const [key, value] of Object.entries(remainingParams)) {
      if (value !== undefined && value !== null) {
        searchParams.append(key, typeof value === 'object' ? JSON.stringify(value) : String(value));
      }
    }
    url += `?${searchParams.toString()}`;
  } else if (method === 'POST' && remainingParams) {
    fetchOptions.body = JSON.stringify(remainingParams);
  }

  const response = await fetch(url, fetchOptions);
  if (!response.ok) {
    const errorText = await response.text().catch(() => 'Unknown error');
    const structured = buildStructuredError(response.status, errorText, endpoint, method, params);
    throw new StructuredApiError(structured);
  }

  // Check content type to handle binary vs JSON responses
  const contentType = response.headers.get('content-type') || '';

  // JSON response — parse and return directly
  if (contentType.includes('application/json')) {
    return response.json();
  }

  // CSV/text response — return as text data inline
  if (contentType.includes('text/csv') || contentType.includes('text/plain')) {
    const text = await response.text();
    const disposition = response.headers.get('content-disposition') || '';
    const filenameMatch = disposition.match(/filename="?([^";\s]+)"?/);
    return {
      success: true,
      format: contentType.includes('csv') ? 'csv' : 'text',
      fileName: filenameMatch ? filenameMatch[1] : null,
      data: text,
      note: 'Data returned inline as text. Copy or save to a file.'
    };
  }

  // Binary response (GeoTIFF, Shapefile, KML, etc.) — cannot pass through MCP.
  // Return metadata + direct download URL so the user can fetch it themselves.
  if (
    contentType.includes('application/octet-stream') ||
    contentType.includes('image/tiff') ||
    contentType.includes('application/zip') ||
    contentType.includes('application/vnd') ||
    contentType.includes('application/geo')
  ) {
    const disposition = response.headers.get('content-disposition') || '';
    const filenameMatch = disposition.match(/filename="?([^";\s]+)"?/);
    const contentLength = response.headers.get('content-length');
    // Consume the body so the connection is released
    await response.arrayBuffer();
    return {
      success: true,
      format: 'binary',
      contentType,
      fileName: filenameMatch ? filenameMatch[1] : null,
      fileSize: contentLength ? `${(parseInt(contentLength, 10) / 1024).toFixed(1)} KB` : 'unknown',
      downloadUrl: url,
      downloadMethod: method,
      downloadBody: method === 'POST' ? remainingParams : undefined,
      note: 'Binary file (e.g., GeoTIFF, Shapefile). Use the downloadUrl to fetch the file directly, or use the job-based export endpoint for a download link.',
      instructions: 'To download: make the same API request from a browser or HTTP client. The file will download directly.'
    };
  }

  // Fallback: read the body once as text, then attempt a JSON parse.
  // (Calling response.json() inside try/catch would not work here: its
  // rejection is asynchronous unless awaited, and a failed json() consumes
  // the body, making a later text() fallback impossible.)
  const text = await response.text();
  try {
    return JSON.parse(text);
  } catch {
    return { success: true, data: text };
  }
}
```
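The `{jobId}`-style path substitution at the top of `callApi` can be isolated as a small pure function for testing. The sketch below follows the same logic; the function name `resolvePathParams` is illustrative, not from the source. Placeholders are replaced with URI-encoded values, and consumed keys are removed so they are not re-sent as query or body parameters.

```js
// Standalone sketch of callApi's path-parameter substitution.
function resolvePathParams(endpoint, params) {
  let resolved = endpoint;
  const remaining = { ...params };
  // Find every {name} placeholder in the original endpoint string
  for (const match of endpoint.matchAll(/\{(\w+)\}/g)) {
    const name = match[1];
    if (remaining[name] !== undefined) {
      // URI-encode the value so characters like '/' cannot break the path
      resolved = resolved.replace(`{${name}}`, encodeURIComponent(remaining[name]));
      delete remaining[name]; // consumed: must not also become a query param
    }
  }
  return { resolved, remaining };
}
```

Encoding matters here: a job ID containing `/` would otherwise split the URL path into extra segments.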