
get_exoplanet_data

Retrieve exoplanet data from NASA's Exoplanet Archive using custom queries. Specify table (e.g., exoplanets, KOI) and output format (JSON, CSV, XML, IPAC) for filtered results.

Instructions

Get data from NASA's Exoplanet Archive.

Args:

  • query: Specific query to filter results using Exoplanet Archive syntax. Example: "pl_orbper > 300 and pl_rade < 2"
  • table: Table to query. Common options: exoplanets (confirmed planets), cumulative (Kepler Objects of Interest), koi (subset of cumulative), tce (Threshold Crossing Events).
  • format: Output format. Options: json, csv, xml, ipac. Default: json.
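For orientation, these arguments map directly onto the Exoplanet Archive request made by the implementation further below: table and format are passed through as-is, and query becomes the where parameter. The following is a minimal sketch of the equivalent direct request, not part of the tool itself; the printed column names (pl_name, pl_orbper, pl_rade) are illustrative and depend on the table chosen.

import asyncio
import httpx

# Endpoint used by the implementation below (the legacy nstedAPI interface)
EXO_API = "https://exoplanetarchive.ipac.caltech.edu/cgi-bin/nstedAPI/nph-nstedAPI"

async def main() -> None:
    params = {
        "table": "exoplanets",                       # confirmed-planets table
        "format": "json",
        "where": "pl_orbper > 300 and pl_rade < 2",  # example filter from the docs above
    }
    async with httpx.AsyncClient() as client:
        response = await client.get(EXO_API, params=params, timeout=60.0)
        response.raise_for_status()
        # Print a few illustrative columns from the first rows
        for row in response.json()[:5]:
            print(row.get("pl_name"), row.get("pl_orbper"), row.get("pl_rade"))

asyncio.run(main())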

Input Schema

Name    Required  Description                                           Default
format  No        Output format: json, csv, xml, or ipac                json
query   No        Filter expression in Exoplanet Archive syntax
table   No        Table to query (exoplanets, cumulative, koi, tce)     exoplanets
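All three parameters are optional; a call that supplies only query relies on the defaults shown above. A hypothetical arguments payload (field names taken from the schema) might look like this:

# Hypothetical arguments for a call to get_exoplanet_data.
# "table" and "format" are omitted, so the defaults ("exoplanets", "json") apply.
arguments = {"query": "pl_orbper > 300 and pl_rade < 2"}

# Fully specified equivalent:
arguments_explicit = {
    "query": "pl_orbper > 300 and pl_rade < 2",
    "table": "exoplanets",
    "format": "json",
}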

Implementation Reference

  • The handler function for the 'get_exoplanet_data' tool. It queries the NASA Exoplanet Archive API with the provided parameters (query, table, format), handles the different response formats (JSON, CSV, XML, IPAC), processes the data, and returns a formatted string summary of the results (limited to the first 10 entries for large result sets). It makes a direct httpx request because the Exoplanet Archive is not served from the standard api.nasa.gov base, so no NASA API key is needed.
@mcp.tool()
async def get_exoplanet_data(query: str = None, table: str = "exoplanets", format: str = "json") -> str:
    """Get data from NASA's Exoplanet Archive.

    Args:
        query: Specific query to filter results using Exoplanet Archive syntax.
            Example: "pl_orbper > 300 and pl_rade < 2"
        table: Table to query. Common options: exoplanets (confirmed planets),
            cumulative (Kepler Objects of Interest), koi (subset of cumulative),
            tce (Threshold Crossing Events).
        format: Output format. Options: json, csv, xml, ipac. Default: json.
    """
    base_url = "https://exoplanetarchive.ipac.caltech.edu/cgi-bin/nstedAPI/nph-nstedAPI"
    params = {
        "table": table,
        "format": format
    }
    if query:
        # Basic validation/sanitization could be added here if needed
        params["where"] = query

    # The exoplanet API doesn't use api.nasa.gov, so no NASA API key needed
    # It also might return non-JSON formats directly
    async with httpx.AsyncClient() as client:
        try:
            logger.info(f"Requesting Exoplanet data: {base_url} with params: {params}")
            response = await client.get(base_url, params=params, timeout=60.0)  # Increased timeout for potentially large queries
            response.raise_for_status()

            content_type = response.headers.get("Content-Type", "").lower()

            # Handle different formats
            if format == "json" and "application/json" in content_type:
                try:
                    data = response.json()
                except json.JSONDecodeError as json_err:
                    logger.error(f"Exoplanet JSON decode error: {json_err}")
                    return f"Error: Failed to decode JSON response from Exoplanet Archive. Response text: {response.text[:500]}"
            elif format != "json" and ("text/" in content_type or "application/xml" in content_type or "application/csv" in content_type):
                # Return raw text for non-JSON formats, limited length
                text_response = response.text
                limit = 2000  # Limit output size
                if len(text_response) > limit:
                    return f"Received {format.upper()} data (truncated):\n{text_response[:limit]}\n... (response truncated)"
                else:
                    return f"Received {format.upper()} data:\n{text_response}"
            else:
                # Unexpected content type for the requested format
                logger.warning(f"Exoplanet API returned unexpected content type '{content_type}' for format '{format}'. URL: {response.url}")
                return f"Error: Exoplanet Archive returned unexpected content type '{content_type}'. Response text: {response.text[:500]}"

            # Process JSON data
            if not isinstance(data, list):
                logger.error(f"Unexpected non-list JSON response from Exoplanet Archive: {data}")
                return "Received unexpected JSON data format from Exoplanet Archive."

            if not data:
                return "No exoplanet data found for the specified query."

            result = []
            total_found = len(data)
            display_limit = 10

            if total_found > display_limit:
                result.append(f"Found {total_found} entries. Showing the first {display_limit}:")
                data_to_display = data[:display_limit]
            else:
                result.append(f"Found {total_found} entries:")
                data_to_display = data

            for entry in data_to_display:
                # Dynamically display available fields (up to a limit)
                entry_details = []
                max_fields = 8
                fields_shown = 0
                for key, value in entry.items():
                    if fields_shown >= max_fields:
                        entry_details.append(" ... (more fields exist)")
                        break
                    # Simple display, skip null/empty values if desired
                    if value is not None and value != "":
                        entry_details.append(f" {key}: {value}")
                        fields_shown += 1

                if entry_details:
                    result.append("\n" + "\n".join(entry_details))
                    result.append("-" * 40)
                else:
                    # Handle case where entry might be empty or only has nulls
                    result.append(f"\nEntry found, but no displayable data (ID might be {entry.get('id', 'N/A')}).")
                    result.append("-" * 40)

            return "\n".join(result)

        except httpx.HTTPStatusError as http_err:
            logger.error(f"Exoplanet API HTTP error: {http_err} - {http_err.response.status_code}")
            return f"Error: Exoplanet Archive returned HTTP status {http_err.response.status_code}. Response: {http_err.response.text[:500]}"
        except httpx.RequestError as req_err:
            logger.error(f"Exoplanet API request error: {req_err}")
            return f"Error: Failed to connect to Exoplanet Archive. {str(req_err)}"
        except Exception as e:
            logger.error(f"Error processing Exoplanet data: {str(e)}")
            return f"Error processing exoplanet data: {str(e)}"

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AnCode666/nasa-mcp'
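The same lookup can be made from Python; a minimal sketch, assuming the endpoint returns a JSON body:

import httpx

# Same request as the curl example above; assumes a JSON response.
url = "https://glama.ai/api/mcp/v1/servers/AnCode666/nasa-mcp"
response = httpx.get(url, timeout=30.0)
response.raise_for_status()
print(response.json())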

If you have feedback or need assistance with the MCP directory API, please join our Discord server.