Ontology MCP

by bigdata-coss

Integrations

  • Uses Docker to run GraphDB and provide SPARQL endpoint functionality

  • Provides access to Gemini models for text generation, chat completion, and model listing with support for various Gemini model variants

  • Enables running, managing, and interacting with Ollama models including execution, information retrieval, downloading, listing, deletion, and chat completion


Ontology MCP is a Model Context Protocol (MCP) server that connects GraphDB's SPARQL endpoints and Ollama models to Claude. This tool allows Claude to query and manipulate ontology data and leverage various AI models.

Key Features

SPARQL functions

  • Execute SPARQL queries ( mcp_sparql_execute_query )
  • Execute SPARQL updates ( mcp_sparql_update )
  • List repositories ( mcp_sparql_list_repositories )
  • List graphs ( mcp_sparql_list_graphs )
  • Get resource information ( mcp_sparql_get_resource_info )

Ollama model functions

  • Run a model ( mcp_ollama_run )
  • Show model information ( mcp_ollama_show )
  • Download a model ( mcp_ollama_pull )
  • List models ( mcp_ollama_list )
  • Delete a model ( mcp_ollama_rm )
  • Chat completion ( mcp_ollama_chat_completion )
  • Check container status ( mcp_ollama_status )

OpenAI functions

  • Chat completion ( mcp_openai_chat )
  • Generate images ( mcp_openai_image )
  • Text-to-speech ( mcp_openai_tts )
  • Speech-to-text ( mcp_openai_transcribe )
  • Generate embeddings ( mcp_openai_embedding )

Google Gemini functions

  • Generate text ( mcp_gemini_generate_text )
  • Chat completion ( mcp_gemini_chat_completion )
  • List models ( mcp_gemini_list_models )
  • ~~Generate images ( mcp_gemini_generate_images ) - using the Imagen model~~ (currently disabled)
  • ~~Generate videos ( mcp_gemini_generate_videos ) - using the Veo models~~ (currently disabled)
  • ~~Generate multimodal content ( mcp_gemini_generate_multimodal_content )~~ (currently disabled)

Note: Gemini's image generation, video generation, and multimodal content generation features are currently disabled due to API compatibility issues.
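To make the SPARQL tools concrete, the sketch below builds the kind of request mcp_sparql_execute_query sends to a GraphDB repository's SPARQL endpoint (GraphDB exposes each repository at `<endpoint>/repositories/<repository-id>`, following the RDF4J convention). The endpoint URL and repository name are assumptions taken from the setup steps later in this document; this is an illustration, not the server's actual implementation.

```python
from urllib.parse import urlencode

SPARQL_ENDPOINT = "http://localhost:7200"   # assumed default from docker-compose
REPOSITORY = "schemaorg-current-https"      # repository created in the import step

def build_sparql_request(query: str) -> tuple[str, bytes, dict]:
    """Return the URL, form-encoded body, and headers for a SPARQL SELECT."""
    url = f"{SPARQL_ENDPOINT}/repositories/{REPOSITORY}"
    body = urlencode({"query": query}).encode("utf-8")
    headers = {
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "application/sparql-results+json",
    }
    return url, body, headers

# A query Claude might issue through mcp_sparql_execute_query:
query = """
SELECT ?class ?label WHERE {
  ?class a <http://www.w3.org/2000/01/rdf-schema#Class> ;
         <http://www.w3.org/2000/01/rdf-schema#label> ?label .
} LIMIT 10
"""

url, body, headers = build_sparql_request(query)
print(url)  # http://localhost:7200/repositories/schemaorg-current-https
```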

Supported Gemini Models
| Model | Model ID | Input | Output | Optimization goal |
|---|---|---|---|---|
| Gemini 2.5 Flash Preview | gemini-2.5-flash-preview-04-17 | Audio, images, video, text | Text | Adaptive thinking, cost-effectiveness |
| Gemini 2.5 Pro Preview | gemini-2.5-pro-preview-03-25 | Audio, images, video, text | Text | Enhanced thinking and reasoning, multimodal understanding, advanced coding |
| Gemini 2.0 Flash | gemini-2.0-flash | Audio, images, video, text | Text, images (experimental), audio (coming soon) | Next-generation capabilities, speed, thinking, real-time streaming, multimodal generation |
| Gemini 2.0 Flash-Lite | gemini-2.0-flash-lite | Audio, images, video, text | Text | Cost-effectiveness and low latency |
| Gemini 1.5 Flash | gemini-1.5-flash | Audio, images, video, text | Text | Fast, versatile performance across a variety of tasks |
| Gemini 1.5 Flash-8B | gemini-1.5-flash-8b | Audio, images, video, text | Text | High-volume, lower-intelligence tasks |
| Gemini 1.5 Pro | gemini-1.5-pro | Audio, images, video, text | Text | Complex reasoning tasks requiring more intelligence |
| Gemini Embedding | gemini-embedding-exp | Text | Text embeddings | Measuring the relatedness of text strings |
| Imagen 3 | imagen-3.0-generate-002 | Text | Images | Google's most advanced image generation model |
| Veo 2 | veo-2.0-generate-001 | Text, images | Video | High-quality video generation |
| Gemini 2.0 Flash Live | gemini-2.0-flash-live-001 | Audio, video, text | Text, audio | Low-latency, bidirectional voice and video interaction |

HTTP request functions

  • Execute HTTP requests ( mcp_http_request ) - communicate with external APIs using HTTP methods such as GET, POST, PUT, and DELETE

Get started

1. Clone the repository

git clone https://github.com/bigdata-coss/agent_mcp.git
cd agent_mcp

2. Run the GraphDB Docker container

Start the GraphDB server by running the following command from the project root directory:

docker-compose up -d

The GraphDB web interface runs at http://localhost:7200 .
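Before moving on, you can sanity-check that the container is reachable. This sketch queries GraphDB's REST repository listing (the same information mcp_sparql_list_repositories exposes); it assumes the default port mapping above:

```python
from urllib.request import urlopen
from urllib.error import URLError

GRAPHDB_URL = "http://localhost:7200"
# GraphDB's REST API lists repositories at /rest/repositories
REPOS_ENDPOINT = f"{GRAPHDB_URL}/rest/repositories"

def graphdb_is_up(timeout: float = 2.0) -> bool:
    """Return True if GraphDB answers on its REST API, False otherwise."""
    try:
        with urlopen(REPOS_ENDPOINT, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

print("GraphDB reachable:", graphdb_is_up())
```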

3. Build and run the MCP server

# Install dependencies
npm install

# Build the project
npm run build

# Run the server (for testing only; not needed for Claude Desktop)
node build/index.js

4. Import RDF data

Go to the GraphDB web interface ( http://localhost:7200 ) and do the following:

  1. Create a repository:
    • "Setup" → "Repositories" → "Create new repository"
    • Repository ID: schemaorg-current-https (or any name you prefer)
    • Repository title: "Schema.org"
    • Click "Create"
  2. Import sample data:
    • Select the repository you created
    • "Import" → "RDF" → "Upload RDF files"
    • Upload an example file from the imports directory (e.g. imports/example.ttl )
    • Click "Import"

Note: The project includes example RDF files in the imports directory.
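If you would rather test the import with your own data, a minimal Turtle file is enough. The triples below are a hypothetical example written out by a short script; they are not the contents of the bundled imports/example.ttl:

```python
# Write a minimal Turtle file suitable for the "Upload RDF files" step.
# The triples are illustrative only; any valid RDF will do.
TURTLE = """\
@prefix schema: <https://schema.org/> .
@prefix ex: <http://example.org/> .

ex:alice a schema:Person ;
    schema:name "Alice" ;
    schema:knows ex:bob .

ex:bob a schema:Person ;
    schema:name "Bob" .
"""

with open("my-example.ttl", "w", encoding="utf-8") as f:
    f.write(TURTLE)

print("wrote my-example.ttl")
```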

5. Setting up Claude Desktop

To use Ontology MCP in Claude Desktop, you need to update the MCP settings file:

  1. Open the Claude Desktop settings file:
    • Windows: %AppData%\Claude\claude_desktop_config.json
    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Linux: ~/.config/Claude/claude_desktop_config.json
  2. Add the following settings:
{
  "mcpServers": {
    "a2a-ontology-mcp": {
      "command": "node",
      "args": ["E:\\codes\\a2a_mcp\\build"],
      "env": {
        "SPARQL_ENDPOINT": "http://localhost:7200",
        "OPENAI_API_KEY": "your-api-key",
        "GEMINI_API_KEY": "your-api-key"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

IMPORTANT: Change the path in `args` to the actual absolute path of your project's build directory.

  3. Restart Claude Desktop

License

This project is provided under the MIT License. See the LICENSE file for details.


