
sparql_update

Modify knowledge graph data by executing SPARQL INSERT, DELETE, or UPDATE operations to add, change, or remove triples.

Instructions

Executes a SPARQL INSERT, DELETE, or UPDATE operation to modify graph data. Use this for adding, modifying, or removing triples from graphs.

Input Schema

Name      Required   Description                                            Default
sparql    Yes        The SPARQL update (INSERT/DELETE/UPDATE) to execute    (none)
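
For illustration, the sparql argument carries a complete SPARQL Update request as a single string. A minimal sketch of two valid payloads follows; the graph and resource IRIs are invented for the example and are not part of the server.

    # Illustrative SPARQL Update payloads for the `sparql` argument.
    # All IRIs below are made up for the example.
    insert_update = """
    INSERT DATA {
      GRAPH <http://example.org/graphs/people> {
        <http://example.org/people/ada> <http://xmlns.com/foaf/0.1/name> "Ada Lovelace" .
      }
    }
    """

    delete_update = """
    DELETE WHERE {
      GRAPH <http://example.org/graphs/people> {
        <http://example.org/people/ada> ?p ?o .
      }
    }
    """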

Implementation Reference

  • The main handler function for the 'sparql_update' tool. It authenticates the caller, checks that the SPARQL string is non-empty, submits an 'apply_update' job to the backend, waits for the job result via streaming or polling, and returns a JSON response with the job outcome.
    async def sparql_update_tool(
        sparql: str,
        context: Context | None = None,
    ) -> str:
        """Execute a SPARQL INSERT/DELETE/UPDATE operation."""
        auth = MCPAuthContext.from_context(context)
        auth.require_auth()
        if not sparql or not sparql.strip():
            raise ValueError("sparql update is required and cannot be empty")
        metadata = await submit_job(
            base_url=backend_config.base_url,
            auth=auth,
            task_type="apply_update",
            payload={"sparql": sparql.strip()},
        )
        if context:
            await context.report_progress(10, 100)
        result = await _wait_for_job_result(job_stream, metadata, context, auth)
        return _render_json({
            "success": True,
            "job_id": metadata.job_id,
            **result,
        })
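  • For illustration, a successful call renders a JSON document roughly shaped like the sketch below; the job_id and detail values are assumptions, and detail only appears when the backend supplies one.
    example_response = {
        "success": True,                # always present when the handler returns normally
        "job_id": "a1b2c3d4",           # metadata.job_id from submit_job (illustrative value)
        "status": "succeeded",          # terminal status from _wait_for_job_result
        "events": 4,                    # count of streamed events, present when streaming was used
        "detail": {"triples_added": 1}, # backend-specific detail payload, if provided
    }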
  • The @server.tool decorator registers the 'sparql_update' tool on the FastMCP server instance, supplying the tool name, title, and description; the input schema is inferred from the decorated handler's signature (sparql: str). This registration takes place inside the register_graph_ops_tools function.
    @server.tool(
        name="sparql_update",
        title="Run SPARQL Update",
        description=(
            "Executes a SPARQL INSERT, DELETE, or UPDATE operation to modify graph data. "
            "Use this for adding, modifying, or removing triples from graphs."
        ),
    )
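  • For context, the decorator and handler presumably nest inside register_graph_ops_tools roughly as in this sketch; the signature shown for register_graph_ops_tools is an assumption, while the decorator, handler name, and call in the last item come from the source.
    def register_graph_ops_tools(server: FastMCP) -> None:
        @server.tool(
            name="sparql_update",
            title="Run SPARQL Update",
            description="Executes a SPARQL INSERT, DELETE, or UPDATE operation to modify graph data.",
        )
        async def sparql_update_tool(sparql: str, context: Context | None = None) -> str:
            ...  # handler body as shown in the first item above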
  • Supporting helper function used by sparql_update_tool (and other graph tools) to wait for job completion, handling both WebSocket streaming and fallback polling, extracting results or errors.
    async def _wait_for_job_result(
        job_stream: Optional[RealtimeJobClient],
        metadata: JobSubmitMetadata,
        context: Optional[Context],
        auth: MCPAuthContext,
    ) -> JsonDict:
        """Wait for job completion via WebSocket or polling, return result info including detail."""
        events = None
        if job_stream and metadata.links.websocket:
            events = await stream_job(job_stream, metadata, timeout=STREAM_TIMEOUT_SECONDS)
        if events:
            if context:
                await context.report_progress(80, 100)
            # Check for completion status in events and extract result
            for event in reversed(events):
                event_type = event.get("type", "")
                if event_type in ("job_completed", "completed", "succeeded"):
                    if context:
                        await context.report_progress(100, 100)
                    # Extract result from event payload
                    result: JsonDict = {"status": "succeeded", "events": len(events)}
                    payload = event.get("payload", {})
                    if isinstance(payload, dict):
                        detail = payload.get("detail")
                        if detail:
                            result["detail"] = detail
                    return result
                if event_type in ("failed", "error"):
                    error = event.get("error", "Job failed")
                    return {"status": "failed", "error": error}
            return {"status": "unknown", "event_count": len(events)}
        # Fall back to polling
        status_payload = (
            await poll_job_until_terminal(metadata.links.status, auth)
            if metadata.links.status
            else None
        )
        if context:
            await context.report_progress(100, 100)
        if status_payload:
            status = status_payload.get("status", "unknown")
            detail = status_payload.get("detail")
            if status == "failed":
                error = status_payload.get("error") or (
                    detail.get("error") if isinstance(detail, dict) else None
                )
                return {"status": "failed", "error": error}
            # Include full detail in result for successful jobs
            result: JsonDict = {"status": status}
            if detail:
                result["detail"] = detail
            return result
        return {"status": "unknown"}
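  • Derived from the helper above, the terminal values it hands back to sparql_update_tool take one of three shapes; the error text and detail contents here are illustrative.
    # Terminal shapes returned by _wait_for_job_result (values are examples).
    succeeded = {"status": "succeeded", "events": 3, "detail": {"triples_removed": 2}}
    failed = {"status": "failed", "error": "Job failed"}  # error text is backend-dependent
    unknown = {"status": "unknown"}                       # neither streaming nor polling reached a terminal state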
  • Call to register_graph_ops_tools during standalone MCP server creation; this registers the sparql_update tool (among the other graph tools) on the server.
    register_graph_ops_tools(mcp_server)
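  • A minimal sketch of how a standalone server might wire this up; the FastMCP import path, the server name string, and the run() call are assumptions, only the register_graph_ops_tools call is taken from the source.
    # Sketch only: standalone server setup (import path and names are assumed).
    from fastmcp import FastMCP

    mcp_server = FastMCP("mnemosyne-mcp")
    register_graph_ops_tools(mcp_server)  # registers sparql_update and the other graph tools
    mcp_server.run()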

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/sophia-labs/mnemosyne-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.