default.yaml (887 B)
```yaml
backend: mcp
thread_pool_max_workers: 128
mcp:
  transport: stdio
  host: "0.0.0.0"
  port: 8001
http:
  host: "0.0.0.0"
  port: 8002
flow:
  load_skill_metadata:
    flow_content: LoadSkillMetadataOp()
  load_skill:
    flow_content: LoadSkillOp()
  read_reference_file:
    flow_content: ReadReferenceFileOp()
  run_shell_command:
    flow_content: RunShellCommandOp()
llm:
  default:
    backend: openai_compatible
    model_name: qwen-flash
  qwen3_30b_instruct:
    backend: openai_compatible
    model_name: qwen3-30b-a3b-instruct-2507
  qwen3_30b_thinking:
    backend: openai_compatible
    model_name: qwen3-30b-a3b-thinking-2507
    params:
      temperature: 0.6
  qwen_flash:
    backend: openai_compatible
    model_name: qwen-flash
embedding_model:
  default:
    backend: openai_compatible
    model_name: text-embedding-v4
    params:
      dimensions: 1024
```
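The `llm` section above defines several named model profiles, where `params` is optional and `default` serves as the fallback. A minimal sketch of how an application might resolve one of these profiles — `resolve_llm` is a hypothetical helper for illustration, not part of the agentskills-mcp codebase:

```python
# Assumed in-memory form of the `llm` section after YAML parsing
# (only two profiles shown for brevity).
CONFIG = {
    "llm": {
        "default": {"backend": "openai_compatible", "model_name": "qwen-flash"},
        "qwen3_30b_thinking": {
            "backend": "openai_compatible",
            "model_name": "qwen3-30b-a3b-thinking-2507",
            "params": {"temperature": 0.6},
        },
    },
}


def resolve_llm(config: dict, profile: str = "default") -> dict:
    """Return the named LLM profile, falling back to `default`.

    Hypothetical helper: the real project may resolve profiles differently.
    """
    llms = config.get("llm", {})
    entry = llms.get(profile) or llms["default"]
    # `params` is optional in the config, so normalize it to an empty dict.
    return {**entry, "params": entry.get("params", {})}
```

An unknown profile name falls back to `default`, mirroring how a `default` entry usually behaves in such configs.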

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/zouyingcao/agentskills-mcp'
```
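The same request can be made from Python with the standard library. This is a sketch assuming the endpoint returns JSON; `build_server_request` and `fetch_server_info` are illustrative names, not part of any official client:

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"


def build_server_request(owner: str, name: str) -> urllib.request.Request:
    """Build a GET request for a server's directory entry.

    The path mirrors the curl example above.
    """
    url = f"{API_BASE}/servers/{owner}/{name}"
    return urllib.request.Request(url, method="GET")


def fetch_server_info(owner: str, name: str) -> dict:
    """Fetch and decode a server entry (assumes a JSON response body)."""
    request = build_server_request(owner, name)
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```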

If you have feedback or need assistance with the MCP directory API, please join our Discord server.