default.yaml (1.95 kB)
backend: mcp
thread_pool_max_workers: 128

mcp:
  transport: stdio
  host: "0.0.0.0"
  port: 8001

http:
  host: "0.0.0.0"
  port: 8002

flow:
  history_calculate:
    flow_content: HistoryCalculateOp()
    enable_cache: true
    cache_expire_hours: 1
  crawl_url:
    flow_content: Crawl4aiLongTextOp() >> ExtractLongTextOp()
    enable_cache: true
    cache_expire_hours: 1
    description: "Web content parsing tool: retrieves and formats web page content based on the provided URL."
    input_schema:
      url:
        type: string
        description: "the provided URL"
        required: true
  extract_entities_code:
    flow_content: ExtractEntitiesCodeOp() << DashscopeSearchOp()
    enable_cache: true
    cache_expire_hours: 1
  execute_code:
    flow_content: ExecuteCodeOp()
    enable_cache: true
    cache_expire_hours: 1
  execute_shell:
    flow_content: ExecuteShellOp()
    enable_cache: true
    cache_expire_hours: 1
  dashscope_search:
    flow_content: DashscopeSearchOp()
    enable_cache: true
    cache_expire_hours: 1
  tavily_search:
    flow_content: TavilySearchOp()
    enable_cache: true
    cache_expire_hours: 1
  mock_search:
    flow_content: MockSearchOp()
    enable_cache: true
    cache_expire_hours: 1
  react_agent:
    flow_content: |
      ops = [HistoryCalculateOp(), ExtractEntitiesCodeOp() << DashscopeSearchOp(), DashscopeSearchOp()]
      ReactAgentOp(add_think_tool=True) << ops

llm:
  default:
    backend: openai_compatible
    model_name: qwen-flash
  qwen3_30b_instruct:
    backend: openai_compatible
    model_name: qwen3-30b-a3b-instruct-2507
  qwen3_30b_thinking:
    backend: openai_compatible
    model_name: qwen3-30b-a3b-thinking-2507
    params:
      temperature: 0.6
  qwen_flash:
    backend: openai_compatible
    model_name: qwen-flash

embedding_model:
  default:
    backend: openai_compatible
    model_name: text-embedding-v4
    params:
      dimensions: 1024
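The `flow_content` expressions compose operators with `>>` (run sequentially) and `<<` (attach the right-hand op, or a list of ops, as tools of the left-hand op), as in the `react_agent` flow. The exact FlowLLM `Op` semantics are an assumption here; this minimal sketch only illustrates how such operator-overloading DSLs are typically built in Python:

```python
# Minimal sketch of a ">>"/"<<" flow-composition DSL, modeled on the
# flow_content expressions in the config above. NOTE: the real FlowLLM
# operator semantics are an assumption; here ">>" chains ops
# sequentially and "<<" attaches downstream ops as tools.
class Op:
    def __init__(self, name: str):
        self.name = name
        self.next = None   # sequential successor (set by >>)
        self.tools = []    # attached tool ops (set by <<)

    def __rshift__(self, other: "Op") -> "Op":
        # a >> b : run a, then b
        self.next = other
        return self

    def __lshift__(self, other) -> "Op":
        # a << b (or a << [b, c]) : attach b (and c) as tools of a
        if isinstance(other, list):
            self.tools.extend(other)
        else:
            self.tools.append(other)
        return self

    def describe(self) -> str:
        parts = [self.name]
        if self.tools:
            parts.append("tools=[%s]" % ", ".join(t.name for t in self.tools))
        if self.next:
            parts.append("-> " + self.next.describe())
        return " ".join(parts)

# Mirrors the react_agent flow_content above:
ops = [
    Op("HistoryCalculateOp"),
    Op("ExtractEntitiesCodeOp") << Op("DashscopeSearchOp"),
    Op("DashscopeSearchOp"),
]
agent = Op("ReactAgentOp") << ops
```

With this reading, `react_agent` is a ReAct agent whose tool set is the three listed flows, one of which (`ExtractEntitiesCodeOp`) carries its own search sub-tool.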

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/FlowLLM-AI/finance-mcp'
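The same endpoint can be queried from Python with the standard library; the base URL and path are taken from the curl command above, while the response's JSON field names are not assumed here:

```python
# Fetch this server's metadata from the Glama MCP directory API.
# The endpoint comes from the curl command above; response field
# names are not assumed.
import urllib.request

def server_info_url(owner: str, repo: str) -> str:
    # Builds the per-server endpoint shown above.
    return f"https://glama.ai/api/mcp/v1/servers/{owner}/{repo}"

url = server_info_url("FlowLLM-AI", "finance-mcp")
# resp = urllib.request.urlopen(url)  # network call, shown for reference
# data = resp.read()
```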

If you have feedback or need assistance with the MCP directory API, please join our Discord server.