example.agent.json (1.87 kB)
{ "agent": { "profile": { "name": "Your Agent name", "group": "Your Agent group", "description": "Your AI Agent Description", "contexts": [ "Some lore of your AI Agent 1", "Some lore of your AI Agent 2", "first objective that your AI Agent need to follow", "second objective that your AI Agent need to follow", "first knowledge of your AI Agent", "second knowledge of your AI Agent" ] }, "mcp_servers": { "npx_server_example": { "command": "npx", "args": ["-y", "@npm_package_example/npx_server_example"], "env": { "API_KEY": "YOUR_API_KEY" } }, "local_server_example": { "command": "node", "args": ["node /path/to/local_server/dist/index.js"] } }, "graph": { "max_steps": 200, "max_iterations": 15, "max_retries": 3, "execution_timeout_ms": 300000, "max_token_usage": 100000, "model": { "provider": "openai", "model_name": "gpt-4o", "temperature": 0.7, "max_tokens": 4096 } }, "memory": { "ltm_enabled": true, "size_limits": { "short_term_memory_size": 10, "max_insert_episodic_size": 20, "max_insert_semantic_size": 20, "max_retrieve_memory_size": 20, "limit_before_summarization": 10000 }, "thresholds": { "insert_semantic_threshold": 0.7, "insert_episodic_threshold": 0.6, "retrieve_memory_threshold": 0.5, "hitl_threshold": 0.7 }, "timeouts": { "retrieve_memory_timeout_ms": 20000, "insert_memory_timeout_ms": 10000 }, "strategy": "holistic" }, "rag": { "enabled": true, "top_k": 5, "embedding_model": "text-embedding-ada-002" } } }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/KasarLabs/snak'
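The same endpoint can be queried from any HTTP client. The sketch below is a small example using the global fetch API (Node.js 18+, Deno, or a browser); the response shape is not documented here, so it is logged as-is rather than assuming specific fields.

// query-mcp-directory.ts -- minimal sketch using the endpoint shown above.
async function fetchServerInfo(): Promise<unknown> {
  const response = await fetch("https://glama.ai/api/mcp/v1/servers/KasarLabs/snak");
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status} ${response.statusText}`);
  }
  return response.json();
}

fetchServerInfo()
  .then((info) => console.log(JSON.stringify(info, null, 2)))
  .catch((err) => console.error(err));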

If you have feedback or need assistance with the MCP directory API, please join our Discord server.