
Server Configuration

Describes the environment variables required to run the server.

No environment variables or arguments are required.
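
Since the server takes no configuration, an MCP client only needs to know how to launch it. Below is a minimal sketch of connecting over stdio with the official Python MCP SDK; the launch command (`uvx ollama-mcp-server`) is an assumption, so adjust it to match your installation.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command: adjust to match your installation
# (pip console script, `uvx`, `python -m ...`, etc.).
server_params = StdioServerParameters(
    command="uvx",
    args=["ollama-mcp-server"],
    env=None,  # the server declares no required environment variables
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```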


Tools

Functions exposed to the LLM so it can take actions.

list_local_models: List all locally installed Ollama models with details.
local_llm_chat: Chat with a local Ollama model.
ollama_health_check: Check Ollama server health and provide diagnostics.
system_resource_check: Check system resources and compatibility.
suggest_models: Suggest the best locally installed model for a specific task based on user needs.
remove_model: Remove a model from local storage.
start_ollama_server: Attempt to start the Ollama server if it is not running.
select_chat_model: Present available models and help the user select one for chat.
test_model_responsiveness: Test the responsiveness of a specific model by sending a simple prompt.
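
As a usage sketch, the example below invokes local_llm_chat through the same stdio connection shown earlier. The argument names ("model" and "message") are hypothetical; confirm the real parameters against the inputSchema that list_tools returns.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch command is an assumption; see the configuration sketch above.
server_params = StdioServerParameters(command="uvx", args=["ollama-mcp-server"])

async def chat() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "model" and "message" are hypothetical argument names;
            # confirm them against the tool's inputSchema from list_tools().
            result = await session.call_tool(
                "local_llm_chat",
                arguments={"model": "llama3.2", "message": "Hello!"},
            )
            for block in result.content:
                if block.type == "text":
                    print(block.text)

asyncio.run(chat())
```

The same pattern applies to the other tools; only the tool name and the arguments dictionary change.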

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

No resources.


MCP directory API

We provide all the information about MCP servers via our MCP directory API. For example, to fetch this server's entry:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/paolodalprato/ollama-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.