
ollama_show_model

Read-only · Idempotent

Retrieve detailed metadata for installed Ollama models to inspect architecture, license, quantization, and parameters before use.

Instructions

Retrieve detailed metadata about a specific installed Ollama model. Use this tool to inspect a model's architecture, license, quantization level, prompt template, and default parameters before using it with ollama_chat or ollama_generate. Do not use this to list all models — use ollama_list_models instead. Do not use this to download new models — use ollama_pull_model instead. Prerequisites: The model must already be installed locally (verify with ollama_list_models). Behavior: Read-only, idempotent, safe to retry. No authentication required. No rate limits. Returns the same metadata for the same model every time. On model-not-found error, returns an error object without throwing.
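The call sequence and soft-failure contract described above can be sketched as follows. This is an illustrative sketch only: `call_tool` is a hypothetical stand-in for whatever MCP client invocation the agent runtime provides, and the shape of the ollama_list_models result (a "models" list of objects with a "name" field) is an assumption, not documented here.

```python
def inspect_model(call_tool, model):
    """Follow the documented workflow: verify the model is installed
    via ollama_list_models, then fetch its metadata. Treats the
    'error' field as a soft failure, since the tool returns an error
    object instead of throwing."""
    # Assumed list-tool output shape: {"models": [{"name": ...}, ...]}
    listed = call_tool("ollama_list_models", {})
    installed = {m["name"] for m in listed.get("models", [])}
    if model not in installed:
        # Not installed; ollama_pull_model would be the tool to use here.
        return None
    result = call_tool("ollama_show_model", {"model": model})
    if "error" in result:
        # Returned, not raised, per the tool's documented behavior.
        return None
    return result
```

Because the tool is read-only and idempotent, retrying `inspect_model` on transient failures is safe.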

Input Schema

- model (required): Exact Ollama model identifier to inspect (e.g., 'llama3.1:8b', 'mistral:latest'). Must match a 'name' from ollama_list_models output. If unsure which models are installed, call ollama_list_models first.

Output Schema

- modelfile (optional): The full Modelfile content defining this model's configuration.
- parameters (optional): Runtime parameter defaults (e.g., temperature, context length) as a formatted string.
- template (optional): Go template string used for prompt formatting.
- details (optional): Model architecture details.
- error (optional): Error message if the model was not found. Only present on failure.
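The output fields above line up with Ollama's native `POST /api/show` REST endpoint, which this tool presumably wraps (an assumption; the server's implementation is not shown here). A minimal sketch of querying that endpoint directly, assuming a local Ollama instance on the default port 11434; `show_model` and `summarize_show_response` are our own helper names:

```python
import json
import urllib.error
import urllib.request

OLLAMA_SHOW_URL = "http://localhost:11434/api/show"  # default local Ollama endpoint


def build_show_request(model):
    """Serialize the JSON body for POST /api/show."""
    return json.dumps({"model": model}).encode("utf-8")


def summarize_show_response(payload):
    """Pick out the fields this tool's output schema exposes."""
    details = payload.get("details", {})
    return {
        "template": payload.get("template"),
        "parameters": payload.get("parameters"),
        "family": details.get("family"),
        "quantization": details.get("quantization_level"),
    }


def show_model(model):
    """Fetch metadata for an installed model. Returns an error object
    instead of raising, mirroring the tool's documented behavior."""
    req = urllib.request.Request(
        OLLAMA_SHOW_URL,
        data=build_show_request(model),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return summarize_show_response(json.load(resp))
    except urllib.error.HTTPError as exc:
        # e.g. HTTP 404 when the model is not installed locally
        return {"error": f"model not found or request failed: {exc.code}"}
    except OSError as exc:
        # e.g. connection refused when Ollama is not running
        return {"error": str(exc)}
```

For example, `show_model("llama3.1:8b")` would return a summary dict on success, or `{"error": ...}` if the model is missing or the server is unreachable.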
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide: it specifies that no authentication is required, there are no rate limits, it returns consistent metadata for the same model, and describes the error handling behavior ('On model-not-found error, returns an error object without throwing'). While annotations cover read-only, non-destructive, and idempotent aspects, the description enriches this with practical operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with purpose first, followed by usage guidelines, prerequisites, and behavioral details. Every sentence serves a clear function: the first establishes purpose, the second provides usage context, the third gives prerequisites, and the fourth adds behavioral transparency. There is no wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, comprehensive annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint), 100% schema coverage, and the presence of an output schema, the description provides complete contextual information. It covers purpose, usage guidelines, prerequisites, and behavioral details that complement the structured data, making it fully sufficient for an agent to understand and use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single parameter. The description adds minimal additional context by mentioning that the model must match a 'name' from ollama_list_models output, but this is essentially restating what the schema description says. The baseline score of 3 reflects adequate but not exceptional value added beyond the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve detailed metadata') and resource ('specific installed Ollama model'), distinguishing it from siblings by explicitly naming ollama_list_models and ollama_pull_model as alternatives for different purposes. It provides concrete examples of what metadata is retrieved (architecture, license, quantization level, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('inspect a model... before using it with ollama_chat or ollama_generate'), when not to use it ('Do not use this to list all models...', 'Do not use this to download new models...'), and names specific alternatives (ollama_list_models, ollama_pull_model). It also includes prerequisites ('The model must already be installed locally').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/VrtxOmega/Ollama-Omega'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.