MemOS-MCP

by qinshu1109
mem_cube_config.yaml
user_id: "user11alice"
cube_id: "user11alice/mem_cube_tree"
text_mem:
  backend: "tree_text"
  config:
    extractor_llm:
      backend: "ollama"
      config:
        model_name_or_path: "qwen3:0.6b"
        temperature: 0.0
        remove_think_prefix: true
        max_tokens: 8192
    dispatcher_llm:
      backend: "ollama"
      config:
        model_name_or_path: "qwen3:0.6b"
        temperature: 0.0
        remove_think_prefix: true
        max_tokens: 8192
    graph_db:
      backend: "neo4j"
      config:
        uri: "bolt://123.57.48.226:7687"
        user: "neo4j"
        password: "12345678"
        db_name: "user11alice"
        auto_create: true
    embedder:
      backend: "ollama"
      config:
        model_name_or_path: "nomic-embed-text:latest"
act_mem:
  backend: "kv_cache"
  config:
    memory_filename: "activation_memory.pickle"
    extractor_llm:
      backend: "huggingface"
      config:
        model_name_or_path: "Qwen/Qwen3-1.7B"
        temperature: 0.8
        max_tokens: 1024
        top_p: 0.9
        top_k: 50
        add_generation_prompt: true
        remove_think_prefix: false
para_mem:
  backend: "lora"
  config:
    memory_filename: "parametric_memory.adapter"
    extractor_llm:
      backend: "huggingface"
      config:
        model_name_or_path: "Qwen/Qwen3-1.7B"
        temperature: 0.8
        max_tokens: 1024
        top_p: 0.9
        top_k: 50
        add_generation_prompt: true
        remove_think_prefix: false
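The config defines three memory tiers (text_mem, act_mem, para_mem), each with its own backend and extractor LLM. As a quick sanity check before starting the server, the snippet below is a minimal sketch that loads the file with PyYAML and lists each tier's backend; it is a generic loader for illustration, not the MemOS API itself.

# Minimal sketch: load mem_cube_config.yaml with PyYAML and report
# which backend each memory tier uses. Field names come from the
# YAML above; nothing here calls into MemOS.
import yaml

with open("mem_cube_config.yaml") as f:
    cfg = yaml.safe_load(f)

print(f"user_id={cfg['user_id']} cube_id={cfg['cube_id']}")
for tier in ("text_mem", "act_mem", "para_mem"):
    section = cfg.get(tier)
    if section is None:
        print(f"{tier}: missing")
    else:
        print(f"{tier}: backend={section['backend']}")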

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/qinshu1109/memos-MCP'
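The same request in Python, as a minimal sketch assuming the endpoint returns JSON (the response schema is not documented here, so the payload is printed as-is):

# Fetch this server's metadata from the Glama MCP directory API.
import requests

url = "https://glama.ai/api/mcp/v1/servers/qinshu1109/memos-MCP"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
print(resp.json())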

If you have feedback or need assistance with the MCP directory API, please join our Discord server.