Server Configuration

Describes the environment variables used to configure the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| MCP_PORT | No | The port the server listens on. | 8787 |
| NODE_ENV | No | Environment setting, e.g. 'production'. | |
| OPENAI_BASE_URL | No | Custom base URL for OpenAI routing. | |
| OPENAI_PROMPT_ID | No | ID for hosted prompt controls. | |
| OPENAI_FLOW_MODEL | No | Model used for flow-related logic. | |
| MCP_ROUTER_ENABLED | No | Set to 'true' to enable router-backed tool hydration at boot. | |
| OPENAI_ASSISTANT_ID | No | Legacy fallback for prompt/version, or used for assistant-thread routing. | |
| OPENAI_DEFAULT_MODEL | No | The default OpenAI model to use. | |
| OPENAI_PROMPT_VERSION | No | Version for hosted prompt controls. | |
| OPENAI_SERVER_API_KEY | Yes | Required for /ai/responses-proxy; used for the streaming Responses proxy and server-owned key orchestration. | |
| OPENAI_ASSISTANT_MODEL | No | Model used specifically for assistant-related tasks. | |
| OPENAI_REASONING_MODEL | No | Model used for tool-assisted reasoning. | |
| COMMAND_VECTOR_STORE_ID | No | Optional ID for file search or vector retrieval augmentation. | |
| OPENAI_MAX_OUTPUT_TOKENS | No | Maximum number of output tokens for OpenAI responses. | |
| MCP_POSTCHECK_MAX_PROSE_CHARS | No | Response prose truncation guard. | 1200 |
| TEKAUTOMATE_STEPS_INSTRUCTIONS_FILE | No | Prompt file override for step instructions. | |
| TEKAUTOMATE_BLOCKLY_INSTRUCTIONS_FILE | No | Prompt file override for Blockly instructions. | |
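A minimal local setup might export only the required key plus a few common optional variables before starting the server. This is a sketch: the placeholder key value and the start command are assumptions, not documented by this listing; only the variable names and defaults come from the table above.

```shell
# Hypothetical local configuration; only OPENAI_SERVER_API_KEY is required.
export OPENAI_SERVER_API_KEY="sk-..."  # required for /ai/responses-proxy (placeholder value)
export MCP_PORT=8787                   # optional; 8787 is already the default
export NODE_ENV=production             # optional environment setting
export MCP_ROUTER_ENABLED=true         # optional; enables router-backed tool hydration at boot
```

Unset optional variables fall back to their defaults, so in practice the API key is often the only value you need to provide.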

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions exposed to the LLM to take actions.

No tools

Prompts

Interactive templates invoked by user choice.

No prompts

Resources

Contextual data attached and managed by the client.

No resources

MCP directory API

We provide all of the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/abnasim/TekAutomate-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.