# Nexus-MCP

## Server Configuration

The environment variables used to configure the server. All are optional and fall back to the defaults shown below.
| Name | Required | Description | Default |
|---|---|---|---|
| EMBED_MODEL | No | Ollama model used for embeddings | nomic-embed-text |
| JUDGE_MODEL | No | Ollama model used as evaluator (first available if empty) | |
| OLLAMA_TIMEOUT | No | Request timeout in seconds | 120 |
| OLLAMA_BASE_URL | No | Ollama API endpoint | http://localhost:11434 |
| KNOWLEDGE_STORE_PATH | No | Path for the local RAG store | .foundry_knowledge.json |
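As a rough illustration, a launcher or server reading this configuration would resolve each variable with its documented default. This is a minimal sketch, not the actual Nexus-MCP startup code:

```python
import os

# Resolve configuration from the environment, falling back to the
# documented defaults (sketch; the real server's handling may differ).
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
OLLAMA_TIMEOUT = float(os.getenv("OLLAMA_TIMEOUT", "120"))  # seconds
EMBED_MODEL = os.getenv("EMBED_MODEL", "nomic-embed-text")
JUDGE_MODEL = os.getenv("JUDGE_MODEL", "")  # empty -> first available model
KNOWLEDGE_STORE_PATH = os.getenv("KNOWLEDGE_STORE_PATH", ".foundry_knowledge.json")
```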
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |
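A client can confirm these capabilities at connection time. The sketch below uses the official `mcp` Python SDK; the launch command and script name are assumptions, so adjust them to however you actually run Nexus-MCP:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Hypothetical launch command; substitute the real Nexus-MCP entry point.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            init = await session.initialize()
            # ServerCapabilities mirrors the table above.
            print(init.capabilities)

asyncio.run(main())
```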
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| health_check | Check whether the local Ollama service is reachable. |
| list_models | List all locally available Ollama models. |
| get_model_info | Get detailed information about a specific Ollama model. |
| pull_model | Download or update an Ollama model from the Ollama registry. Returns streaming status lines summarising the download progress. |
| delete_model | Delete a locally stored Ollama model to free disk space. |
| list_running_models | List models currently loaded in memory (running in Ollama). |
| compare_models | Run the same prompt against multiple models and return all responses side-by-side for comparison. |
| generate | Run text generation with an Ollama model. Returns the model's raw completion for a given prompt. |
| chat | Send a multi-turn conversation to an Ollama model. Messages should follow the format `[{'role': 'user'|'assistant'|'system', 'content': '...'}]` (see the sketch after this table). |
| evaluate_response | Use a local judge model to score an LLM response on relevance, coherence, correctness, and completeness (1-5 each). |
| evaluate_agent | Evaluate a multi-turn agent conversation on task completion, tool use, safety, and efficiency using a local judge model. |
| create_index | Create a named local vector index for RAG (Retrieval-Augmented Generation). Documents added to this index are embedded via Ollama. |
| list_indexes | List all local knowledge indexes. |
| add_document | Add a text document to a knowledge index. The text is embedded automatically using the index's embedding model. |
| query_knowledge | Semantic search over a knowledge index. Returns the top-k most relevant documents for a natural language query. |
| delete_index | Delete a knowledge index and all its documents. |
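Inside an initialized `ClientSession` (see the capabilities sketch above), tools are invoked with `session.call_tool`. The argument names below are assumptions inferred from the tool descriptions, not a confirmed schema; `session.list_tools()` reports the actual input schemas:

```python
# Runs inside the `async with ClientSession(...)` block shown earlier.

# Multi-turn chat, using the message format documented for the `chat` tool.
# The "model" and "messages" argument names are assumptions.
reply = await session.call_tool("chat", arguments={
    "model": "llama3",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain RAG in one sentence."},
    ],
})

# A minimal RAG round trip: create an index, add a document, query it.
# Again, argument names are illustrative guesses.
await session.call_tool("create_index", arguments={"name": "docs"})
await session.call_tool("add_document", arguments={
    "index": "docs",
    "text": "Nexus-MCP exposes local Ollama models and a RAG store over MCP.",
})
hits = await session.call_tool("query_knowledge", arguments={
    "index": "docs",
    "query": "What does Nexus-MCP expose?",
    "top_k": 3,
})
```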
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| summarize | Summarize a piece of text. |
| rag_answer | Answer a question using retrieved context. |
| code_review | Review and critique a code snippet. |
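Prompts are fetched with `session.get_prompt`. The `text` argument name below is a guess; `session.list_prompts()` reports each prompt's declared arguments:

```python
# Runs inside the `async with ClientSession(...)` block shown earlier.

# Discover the prompts and their declared arguments first.
available = await session.list_prompts()

# Fetch a filled-in prompt template; the "text" argument name is assumed.
summary_prompt = await session.get_prompt(
    "summarize", arguments={"text": "Long text to condense..."}
)
```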
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| models_resource | Returns JSON list of all locally available Ollama models. |
| running_resource | Returns JSON list of models currently loaded in Ollama memory. |
| indexes_resource | Returns JSON list of all local knowledge indexes. |
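Resources are addressed by URI, which the listing above does not show, so enumerate them rather than guessing:

```python
# Runs inside the `async with ClientSession(...)` block shown earlier.

# List the server's resources, then read each one by its reported URI.
listing = await session.list_resources()
for res in listing.resources:
    contents = await session.read_resource(res.uri)
    print(res.name, contents)
```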
## MCP directory API

We provide all the information about MCP servers via our MCP API:

```sh
curl -X GET 'https://glama.ai/api/mcp/v1/servers/deadSwank001/Nexus-MCP'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.
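For programmatic access, the same endpoint can be called from Python. This sketch assumes the endpoint returns JSON and uses the third-party `requests` package:

```python
import requests

# Fetch the directory's metadata for this server (assumed JSON response).
resp = requests.get("https://glama.ai/api/mcp/v1/servers/deadSwank001/Nexus-MCP")
resp.raise_for_status()
print(resp.json())
```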