Glama

MCP Platform

by jck411
runtime_config.yaml (3.88 kB)
_runtime_config:
  created_from_defaults: true
  default_config_path: config.yaml
  is_runtime_config: true
  last_modified: 1756610711.701065
  version: 17
chat:
  service:
    max_tool_hops: 8
    streaming:
      enabled: true
      persistence:
        interval_ms: 200
        min_chars: 1024
        persist_deltas: false
    system_prompt: 'You are a helpful assistant with a sense of humor. You have
      access to a list of tools like setting your own configuration. '
    tool_notifications:
      enabled: true
      format: '{icon} Executing tool: {tool_name}'
      icon: "\U0001F527"
      show_args: true
  storage:
    persistence:
      db_path: chat_history.db
    retention:
      cleanup_interval_minutes: 30
      clear_triggers_before_full_wipe: 2
      max_age_hours: 24
      max_messages: 1000
      max_sessions: 2
    saved_sessions:
      enabled: false
      max_saved: 50
      retention_days: null
  websocket:
    allow_credentials: true
    allow_origins:
      - '*'
    endpoint: /ws/chat
    host: localhost
    max_message_size: 16777216
    ping_interval: 20
    ping_timeout: 10
    port: 8000
connection_pool:
  keepalive_expiry_seconds: 30
  max_connections: 10
  max_keepalive_connections: 5
  request_timeout_seconds: 30
  total_timeout_seconds: 120
llm:
  active: openrouter
  providers:
    anthropic_thinking:
      base_url: https://openrouter.ai/api/v1
      max_tokens: 4096
      model: anthropic/claude-3-opus
      show_reasoning: true
      temperature: 0.7
      thinking_mode: step_by_step
    custom_provider:
      base_url: https://api.example.com/v1
      custom_param1: value1
      custom_param2: 42
      max_tokens: 2048
      model: custom-model-v2
      nested_config:
        sub_param: nested_value
      temperature: 0.8
    groq:
      base_url: https://api.groq.com/openai/v1
      max_tokens: 4096
      model: llama-3.3-70b-versatile
      response_format:
        type: text
      temperature: 0.7
      top_p: 1.0
    openai:
      base_url: https://api.openai.com/v1
      max_tokens: 4096
      model: gpt-4o-mini
      temperature: 0.7
      top_p: 1.0
    openai_reasoning:
      base_url: https://api.openai.com/v1
      max_completion_tokens: 8192
      model: o1-preview
    openrouter:
      base_url: https://openrouter.ai/api/v1
      max_tokens: 4096
      model: google/gemini-2.5-flash-image-preview
      temperature: 0.9
      top_p: 1.0
    openrouter_reasoning:
      base_url: https://openrouter.ai/api/v1
      include_thinking: true
      max_tokens: 8192
      model: openai/o3-mini
      reasoning_effort: high
      temperature: 0.7
logging:
  advanced:
    async_logging: true
    buffer_size: 1000
    log_file_path: logs/app.log
    log_rotation: daily
    log_to_file: false
    structured_logging: false
  format: '%(asctime)s - %(levelname)s - %(message)s'
  level: WARNING
  modules:
    chat:
      enable_features:
        llm_replies: true
        system_prompt: true
        tool_execution: true
        tool_results: true
      level: INFO
      truncate_lengths:
        llm_reply: 500
        result: 200
    connection_pool:
      enable_features:
        connection_events: true
        connection_reuse: false
        http_requests: false
        pool_stats: true
      level: WARNING
      max_log_entries: 1000
      pool_stats_interval_seconds: 60
    mcp:
      enable_features:
        connection_attempts: false
        health_checks: true
        tool_arguments: true
        tool_calls: true
        tool_results: true
      level: INFO
      truncate_lengths:
        tool_arguments_truncate: 500
        tool_results_truncate: 200
mcp:
  config_file: servers_config.json
  connection:
    connection_timeout: 30.0
    initial_reconnect_delay: 1.0
    max_reconnect_attempts: 5
    max_reconnect_delay: 30.0
    ping_timeout: 10.0
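The `llm.active` key names one entry in `llm.providers`; the backend presumably resolves the active provider block from that pair at startup. A minimal sketch of that lookup, assuming the config has already been parsed into a Python dict (the function name `resolve_active_provider` is hypothetical, not from the actual project):

```python
# Hypothetical helper: pick the provider block named by llm.active.
# The config structure mirrors runtime_config.yaml above, abbreviated here.

def resolve_active_provider(config: dict) -> dict:
    """Return the provider settings selected by llm.active."""
    llm = config["llm"]
    name = llm["active"]
    try:
        return llm["providers"][name]
    except KeyError:
        raise KeyError(f"llm.active points at unknown provider {name!r}")

config = {
    "llm": {
        "active": "openrouter",
        "providers": {
            "openrouter": {
                "base_url": "https://openrouter.ai/api/v1",
                "model": "google/gemini-2.5-flash-image-preview",
                "max_tokens": 4096,
            },
            "openai": {
                "base_url": "https://api.openai.com/v1",
                "model": "gpt-4o-mini",
            },
        },
    }
}

provider = resolve_active_provider(config)
print(provider["model"])  # the model of the active provider
```

Keeping per-provider settings (base URL, model, sampling parameters) under named keys and switching via a single `active` field lets you swap providers without touching the rest of the config.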

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jck411/MCP_BACKEND_OPENROUTER'
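The same lookup can be done from Python's standard library instead of curl. Only the URL comes from the example above; the shape of the JSON response is not assumed here, and the helper names are illustrative:

```python
# Illustrative sketch: build and fetch a Glama MCP directory API URL.
# Only the endpoint URL is taken from the curl example; the response
# schema is not assumed.
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"

def server_url(author: str, name: str) -> str:
    """Compose the directory API URL for one MCP server."""
    return f"{API_BASE}/servers/{author}/{name}"

def fetch_server(author: str, name: str) -> dict:
    """GET the server record and parse it as JSON."""
    with urllib.request.urlopen(server_url(author, name)) as resp:
        return json.load(resp)

print(server_url("jck411", "MCP_BACKEND_OPENROUTER"))
```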

If you have feedback or need assistance with the MCP directory API, please join our Discord server.