Mcp by fmangot

Server Configuration

Describes the environment variables used to configure the server; all are optional.

PORT (optional, default: 3000): HTTP server port.
NODE_ENV (optional): Environment, either development or production.
ALLOWED_ORIGINS (optional): Comma-separated list of allowed CORS origins. Defaults to all origins in development and to none in production (see the configuration sketch after this list).
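
As a rough illustration of how these variables fit together, here is a minimal TypeScript sketch that reads them from process.env at startup. The server's actual configuration code is not shown in this listing, so treat the names and behaviour below as assumptions.

import { createServer } from "node:http";

// Assumed configuration handling; variable names match the list above.
const port = Number(process.env.PORT ?? 3000);
const nodeEnv = process.env.NODE_ENV ?? "development";

// ALLOWED_ORIGINS is comma-separated, e.g. "https://a.example,https://b.example".
const allowedOrigins = (process.env.ALLOWED_ORIGINS ?? "")
  .split(",")
  .map((origin) => origin.trim())
  .filter((origin) => origin.length > 0);

function corsOriginFor(requestOrigin: string | undefined): string | undefined {
  if (!requestOrigin) return undefined;
  if (allowedOrigins.includes(requestOrigin)) return requestOrigin;
  // Default from the list above: allow every origin in development, none in production.
  if (allowedOrigins.length === 0 && nodeEnv === "development") return "*";
  return undefined;
}

const server = createServer((req, res) => {
  const rawOrigin = req.headers.origin;
  const origin = corsOriginFor(Array.isArray(rawOrigin) ? rawOrigin[0] : rawOrigin);
  if (origin) res.setHeader("Access-Control-Allow-Origin", origin);
  res.end("ok");
});

server.listen(port, () => {
  console.log(`Listening on port ${port} (${nodeEnv})`);
});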

Tools

Functions exposed to the LLM to take actions

sequential_thinking: Facilitates a detailed, step-by-step thinking process for problem-solving and analysis. Break down complex problems into manageable steps, revise and refine thoughts as understanding deepens, and branch into alternative paths of reasoning (a sample request follows this list).
get_thought_sequence: Retrieves the complete sequence of thoughts for the current or a specified session.
get_thought_branch: Retrieves a specific branch of alternative reasoning paths.
reset_thinking_session: Starts a new thinking session, clearing the current thought sequence.
get_session_summary: Gets a summary of the current or a specified thinking session.
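
To show what invoking one of these tools looks like, here is a hedged TypeScript sketch that sends a standard MCP tools/call JSON-RPC request over HTTP. The endpoint path (/mcp) and the argument names (thought, thoughtNumber, totalThoughts, nextThoughtNeeded) are assumptions modelled on typical sequential-thinking servers and are not confirmed by this listing.

async function main(): Promise<void> {
  const response = await fetch("http://localhost:3000/mcp", { // assumed endpoint path
    method: "POST",
    headers: { "Content-Type": "application/json", Accept: "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call", // standard MCP method for invoking a named tool
      params: {
        name: "sequential_thinking",
        arguments: {
          // All argument names below are assumed, not taken from this listing.
          thought: "Outline the main constraints of the problem.",
          thoughtNumber: 1,
          totalThoughts: 5,
          nextThoughtNeeded: true,
        },
      },
    }),
  });
  console.log(await response.json());
}

main().catch(console.error);

The other tools (get_thought_sequence, get_thought_branch, reset_thinking_session, get_session_summary) would be called the same way, changing only the name and arguments fields.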

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

No resources.

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/fmangot/Mcp'
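
The same lookup from TypeScript, as a minimal sketch; the response shape is not documented here, so it is handled as untyped JSON.

// Fetch this server's entry from the Glama MCP directory API.
const res = await fetch("https://glama.ai/api/mcp/v1/servers/fmangot/Mcp");
if (!res.ok) {
  throw new Error(`Directory API request failed with status ${res.status}`);
}
const serverInfo: unknown = await res.json(); // shape not documented in this listing
console.log(serverInfo);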

If you have feedback or need assistance with the MCP directory API, please join our Discord server.