search_items
Search for geospatial datasets like satellite imagery and weather data using spatial, temporal, and attribute filters through STAC APIs.
Instructions
Search for STAC items.
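For orientation, here is a minimal sketch of arguments that combine a spatial, temporal, and attribute filter. The collection ID and the cloud-cover property are illustrative examples, and attribute filtering via `query` assumes the target catalog implements the STAC API Query extension:

```python
# Illustrative arguments only; the collection ID and property name are examples.
arguments = {
    "collections": ["sentinel-2-l2a"],
    "bbox": [5.0, 52.0, 6.0, 53.0],           # [west, south, east, north], WGS84
    "datetime": "2024-06-01/2024-06-30",      # closed interval
    "query": {"eo:cloud_cover": {"lt": 20}},  # Query extension operator syntax
    "limit": 5,
    "output_format": "json",
}
```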
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| collections | Yes | Collection ID(s) to search; a list of strings or a JSON-encoded string. | |
| bbox | No | Bounding box filter as `[west, south, east, north]`; a list of 4 floats or a JSON-encoded string. | |
| datetime | No | Datetime or datetime interval filter (string). | |
| limit | No | Maximum number of items to return. | 10 |
| query | No | Property filters; an object or a JSON-encoded string. | |
| output_format | No | Output format, `text` or `json`. | text |
| catalog_url | No | Alternative STAC API catalog URL to query. | |
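Only `collections` is required; the other parameters narrow or shape the search. The `datetime` value is passed through to the STAC API, which accepts either a single RFC 3339 timestamp or an interval with `..` marking an open end. A few illustrative values, assuming the catalog follows standard STAC API behaviour:

```python
# Illustrative datetime filters (STAC API interval syntax; ".." = open-ended).
single_instant = "2024-06-12T10:56:19Z"
closed_interval = "2024-06-01/2024-06-30"
open_start = "../2024-06-30"   # everything up to the end date
open_end = "2024-06-01/.."     # everything from the start date onward
```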
Implementation Reference
- `stac_mcp/tools/search_items.py:12-54` (handler): Core handler that executes the `search_items` tool logic: it runs the search via `STACClient.search_items` and formats the results as human-readable text or as a JSON structure.

  ```python
  def handle_search_items(
      client: STACClient,
      arguments: dict[str, Any],
  ) -> list[TextContent] | dict[str, Any]:
      collections = arguments.get("collections")
      bbox = arguments.get("bbox")
      dt = arguments.get("datetime")
      query = arguments.get("query")
      limit = arguments.get("limit", 10)
      items = client.search_items(
          collections=collections,
          bbox=bbox,
          datetime=dt,
          query=query,
          limit=limit,
      )
      if arguments.get("output_format") == "json":
          return {"type": "item_list", "count": len(items), "items": items}
      result_text = f"Found {len(items)} items:\n\n"
      asset_keys = set()
      for item in items:
          item_id = item.get("id", "unknown")
          collection_id = item.get("collection", "unknown")
          result_text += f"**{item_id}** (Collection: `{collection_id}`)\n"
          dt_value = item.get("datetime")
          if dt_value:
              result_text += f" Date: {dt_value}\n"
          bbox = item.get("bbox")
          if isinstance(bbox, list | tuple) and len(bbox) >= BBOX_MIN_COORDS:
              result_text += (
                  " BBox: "
                  f"[{bbox[0]:.2f}, {bbox[1]:.2f}, {bbox[2]:.2f}, {bbox[3]:.2f}]\n"
              )
          assets = item.get("assets") or {}
          asset_keys.update(assets.keys())
          asset_count = len(assets) if hasattr(assets, "__len__") else 0
          result_text += f" Assets: {asset_count}\n\n"
      result_text += "\n"
      if asset_keys:
          result_text += "Assets found across items:\n"
          for key in sorted(asset_keys):
              result_text += f" - {key}\n"
      return [TextContent(type="text", text=result_text)]
  ```
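  A hedged sketch of exercising both output modes directly, using a hypothetical stub in place of `STACClient` (the stub and its item are illustrative, not part of the package):

  ```python
  from typing import Any

  from stac_mcp.tools.search_items import handle_search_items


  class StubClient:
      """Hypothetical stand-in for STACClient, for illustration only."""

      def search_items(self, **_: Any) -> list[dict[str, Any]]:
          return [
              {
                  "id": "S2B_MSIL2A_20240612_example",
                  "collection": "sentinel-2-l2a",
                  "datetime": "2024-06-12T10:56:19Z",
                  "bbox": [5.0, 52.0, 6.0, 53.0],
                  "assets": {"B04": {}, "B08": {}},
              }
          ]


  # JSON mode returns a structured payload for programmatic use.
  payload = handle_search_items(StubClient(), {"output_format": "json"})
  assert payload["type"] == "item_list"
  assert payload["count"] == 1

  # The default text mode returns a single TextContent block.
  blocks = handle_search_items(StubClient(), {"limit": 1})
  print(blocks[0].text)
  ```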
- `stac_mcp/server.py:83-109` (registration): MCP tool registration via the `@app.tool` decorator; it defines the input parameters (schema) and dispatches to the execution engine.

  ```python
  @app.tool
  async def search_items(
      collections: list[str] | str,
      bbox: list[float] | str | None = None,
      datetime: str | None = None,
      limit: int | None = 10,
      query: dict[str, Any] | str | None = None,
      output_format: str | None = "text",
      catalog_url: str | None = None,
  ) -> list[dict[str, Any]]:
      """Search for STAC items."""
      arguments = preprocess_parameters(
          {
              "collections": collections,
              "bbox": bbox,
              "datetime": datetime,
              "limit": limit,
              "query": query,
              "output_format": output_format,
          }
      )
      return await execution.execute_tool(
          "search_items",
          arguments=arguments,
          catalog_url=catalog_url,
          headers=None,
      )
  ```
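  From the client side, this registration corresponds to a standard MCP `tools/call` request; a rough sketch of the JSON-RPC payload a client might send (the collection ID is an example):

  ```python
  # Approximate wire-level shape of the MCP tools/call request served here.
  tool_call = {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
          "name": "search_items",
          "arguments": {
              "collections": ["sentinel-2-l2a"],  # example collection ID
              "bbox": [5.0, 52.0, 6.0, 53.0],
              "datetime": "2024-06-01/2024-06-30",
              "limit": 5,
          },
      },
  }
  ```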
- `stac_mcp/tools/execution.py:56-67` (registration): Internal registry mapping the `search_items` tool name to its handler function for dispatch in `execute_tool`.

  ```python
  _TOOL_HANDLERS: dict[str, Handler] = {
      "search_collections": handle_search_collections,
      "get_collection": handle_get_collection,
      "search_items": handle_search_items,
      "get_item": handle_get_item,
      "estimate_data_size": handle_estimate_data_size,
      "get_root": handle_get_root,
      "get_conformance": handle_get_conformance,
      "get_queryables": handle_get_queryables,
      "get_aggregations": handle_get_aggregations,
      "sensor_registry_info": handle_sensor_registry_info,
  }
  ```
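  Dispatch is a plain dictionary lookup against this registry; a simplified sketch (the real `execute_tool` also builds the `STACClient` from `catalog_url`/`headers` and handles error wrapping, omitted here):

  ```python
  from typing import Any

  from stac_mcp.tools.execution import _TOOL_HANDLERS


  def dispatch(tool_name: str, client: Any, arguments: dict[str, Any]) -> Any:
      # Simplified illustration of handler resolution, not the real execute_tool.
      handler = _TOOL_HANDLERS.get(tool_name)
      if handler is None:
          msg = f"Unknown tool: {tool_name}"
          raise ValueError(msg)
      return handler(client, arguments)
  ```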
- `stac_mcp/tools/params.py:10-100` (helper): Helper that preprocesses input parameters (e.g., parses stringified JSON for `bbox`, `collections`, and `query`); used by the `search_items` registration.

  ```python
  def preprocess_parameters(arguments: dict[str, Any]) -> dict[str, Any]:
      """Preprocess tool parameters to handle various input formats.

      This function normalizes parameters that may come in as strings
      but should be other types (arrays, objects, etc.). This is
      particularly useful when MCP clients serialize parameters as strings.

      Args:
          arguments: Raw arguments dictionary from MCP client

      Returns:
          Preprocessed arguments with proper types
      """
      if not arguments:
          return arguments

      processed = arguments.copy()

      # Handle bbox parameter - should be a list of 4 floats
      if "bbox" in processed and processed["bbox"] is not None:
          bbox = processed["bbox"]
          if isinstance(bbox, str):
              try:
                  # Try to parse as JSON
                  parsed = json.loads(bbox)
                  if isinstance(parsed, list) and len(parsed) == 4:  # noqa: PLR2004
                      processed["bbox"] = [float(x) for x in parsed]
                      logger.debug(
                          "Converted bbox from string to list: %s", processed["bbox"]
                      )
              except (json.JSONDecodeError, ValueError, TypeError) as e:
                  logger.warning("Failed to parse bbox string: %s, error: %s", bbox, e)

      # Handle collections parameter - should be a list of strings
      if "collections" in processed and processed["collections"] is not None:
          collections = processed["collections"]
          if isinstance(collections, str):
              try:
                  parsed = json.loads(collections)
                  if isinstance(parsed, list):
                      processed["collections"] = parsed
                      logger.debug(
                          "Converted collections from string to list: %s",
                          processed["collections"],
                      )
              except (json.JSONDecodeError, ValueError, TypeError) as e:
                  logger.warning(
                      "Failed to parse collections string: %s, error: %s", collections, e
                  )

      # Handle aoi_geojson parameter - should be a dict/object
      if "aoi_geojson" in processed and processed["aoi_geojson"] is not None:
          aoi = processed["aoi_geojson"]
          if isinstance(aoi, str):
              try:
                  parsed = json.loads(aoi)
                  if isinstance(parsed, dict):
                      processed["aoi_geojson"] = parsed
                      logger.debug("Converted aoi_geojson from string to dict")
              except (json.JSONDecodeError, ValueError, TypeError) as e:
                  logger.warning(
                      "Failed to parse aoi_geojson string: %s, error: %s", aoi, e
                  )

      # Handle query parameter - should be a dict/object
      if "query" in processed and processed["query"] is not None:
          query = processed["query"]
          if isinstance(query, str):
              try:
                  parsed = json.loads(query)
                  if isinstance(parsed, dict):
                      processed["query"] = parsed
                      logger.debug("Converted query from string to dict")
              except (json.JSONDecodeError, ValueError, TypeError) as e:
                  logger.warning("Failed to parse query string: %s, error: %s", query, e)

      if "limit" in processed and processed["limit"] is not None:
          limit = processed["limit"]
          if isinstance(limit, str):
              try:
                  processed["limit"] = int(limit)
                  logger.debug(
                      "Converted limit from string to int: %d", processed["limit"]
                  )
              except ValueError as e:
                  logger.warning(
                      "Failed to convert limit string to int: %s, error: %s", limit, e
                  )

      return processed
  ```
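  For illustration, the normalization can be checked directly; these assertions mirror the parsing branches above (the values are examples):

  ```python
  from stac_mcp.tools.params import preprocess_parameters

  raw = {
      "collections": '["sentinel-2-l2a"]',        # stringified list
      "bbox": "[5.0, 52.0, 6.0, 53.0]",           # stringified list of 4 floats
      "query": '{"eo:cloud_cover": {"lt": 20}}',  # stringified dict
      "limit": "5",                               # stringified int
  }

  clean = preprocess_parameters(raw)
  assert clean["collections"] == ["sentinel-2-l2a"]
  assert clean["bbox"] == [5.0, 52.0, 6.0, 53.0]
  assert clean["query"] == {"eo:cloud_cover": {"lt": 20}}
  assert clean["limit"] == 5
  ```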