
Server Configuration

Environment variables used to configure the server. Only DEEPSEEK_API_KEY is required; all other variables have sensible defaults.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| HTTP_PORT | No | HTTP server port (when TRANSPORT=http) | 3000 |
| TRANSPORT | No | Transport mode: stdio or http | stdio |
| MAX_RETRIES | No | Maximum retry count for failed requests | 2 |
| MAX_SESSIONS | No | Maximum number of concurrent sessions | 100 |
| DEFAULT_MODEL | No | Default model for requests | deepseek-chat |
| SHOW_COST_INFO | No | Show cost info in responses | true |
| REQUEST_TIMEOUT | No | Request timeout in milliseconds | 60000 |
| DEEPSEEK_API_KEY | Yes | Your DeepSeek API key | |
| FALLBACK_ENABLED | No | Enable automatic model fallback on errors | true |
| DEEPSEEK_BASE_URL | No | Custom API endpoint | https://api.deepseek.com |
| ENABLE_MULTIMODAL | No | Enable multimodal (image) input support | false |
| MAX_MESSAGE_LENGTH | No | Maximum message content length (characters) | 100000 |
| SESSION_TTL_MINUTES | No | Session time-to-live in minutes | 30 |
| MAX_SESSION_MESSAGES | No | Max messages per session (sliding window) | 200 |
| SKIP_CONNECTION_TEST | No | Skip startup API connection test | false |
| CIRCUIT_BREAKER_THRESHOLD | No | Consecutive failures before circuit opens | 5 |
| CIRCUIT_BREAKER_RESET_TIMEOUT | No | Milliseconds before circuit half-opens | 30000 |
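The two circuit-breaker variables describe standard breaker behavior: after CIRCUIT_BREAKER_THRESHOLD consecutive failures the circuit opens and requests are rejected, then it half-opens once CIRCUIT_BREAKER_RESET_TIMEOUT milliseconds have passed. A minimal sketch of that behavior, assuming the defaults from the table above (the class and method names are illustrative, not this server's actual implementation):

```python
import os
import time

# Defaults mirror the table above; the server's own parsing may differ.
THRESHOLD = int(os.environ.get("CIRCUIT_BREAKER_THRESHOLD", "5"))
RESET_TIMEOUT_MS = int(os.environ.get("CIRCUIT_BREAKER_RESET_TIMEOUT", "30000"))

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; half-opens after a timeout."""

    def __init__(self, threshold=THRESHOLD, reset_timeout_ms=RESET_TIMEOUT_MS):
        self.threshold = threshold
        self.reset_timeout_ms = reset_timeout_ms
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit opened

    def allow_request(self):
        if self.opened_at is None:
            return True  # circuit closed: requests flow normally
        elapsed_ms = (time.monotonic() - self.opened_at) * 1000
        # Half-open once the reset timeout has elapsed.
        return elapsed_ms >= self.reset_timeout_ms

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()

    def record_success(self):
        self.failures = 0
        self.opened_at = None
```

With the default settings, five failed API calls in a row would open the circuit for 30 seconds before a retry is attempted.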

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{ "listChanged": true }` |
| prompts | `{ "listChanged": true }` |
| resources | `{ "listChanged": true }` |

Tools

Functions exposed to the LLM to take actions

| Name | Description |
| --- | --- |
| deepseek_chat | Chat with DeepSeek AI models. Supports deepseek-chat for general conversations and deepseek-reasoner for complex reasoning tasks with chain-of-thought explanations. Features: multi-turn sessions (session_id), function calling (tools parameter), thinking mode, JSON output mode, multimodal input (when enabled), automatic cost tracking, and model fallback with circuit-breaker resilience. |
| deepseek_sessions | Manage multi-turn conversation sessions. List active sessions, delete a specific session, or clear all sessions. Sessions store conversation history for use with the session_id parameter in deepseek_chat. |

Prompts

Interactive templates invoked by user choice

| Name | Description |
| --- | --- |
| debug_with_reasoning | Debug code issues using the DeepSeek R1 reasoning model with step-by-step analysis |
| code_review_deep | Comprehensive code review analyzing quality, security, performance, and best practices |
| research_synthesis | Research a topic and synthesize information into a structured report |
| strategic_planning | Analyze options and create strategic plans with reasoning for each decision |
| explain_like_im_five | Explain complex topics in simple terms using analogies and reasoning |
| mathematical_proof | Prove mathematical statements with rigorous step-by-step reasoning |
| argument_validation | Analyze arguments for logical fallacies and reasoning errors |
| creative_ideation | Generate creative ideas with reasoning for feasibility and value |
| cost_comparison | Compare costs of different LLMs for a task and show savings with DeepSeek |
| pair_programming | Interactive coding assistant that explains reasoning for code decisions |
| function_call_debug | Debug function calling issues with DeepSeek models |
| create_function_schema | Generate JSON Schema for function calling from a natural-language description |

Resources

Contextual data attached and managed by the client

| Name | Description |
| --- | --- |
| models | List of available DeepSeek models with capabilities, context limits, and pricing information |
| config | Current server configuration including base URL, timeouts, session settings, and fallback status. The API key is masked for security. |
| usage | Real-time usage statistics including total requests, token consumption, costs, active sessions, and cache hit ratio. Updated on every read. |

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/arikusi/deepseek-mcp-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.