
CanvasXpress MCP Server

by neuhausi

Server Configuration

Describes the environment variables used to configure the server (all are optional).

Name | Required | Description | Default
MCP_HOST | No | HTTP mode: host to bind to | 0.0.0.0
MCP_PORT | No | HTTP mode: port to listen on | 8000
LLM_MODEL | No | Azure OpenAI model to use (e.g., gpt-4o-global, gpt-4o-mini-global) | gpt-4o-global
GEMINI_MODEL | No | Google Gemini model to use (e.g., gemini-2.0-flash-exp, gemini-1.5-pro) | gemini-2.0-flash-exp
LLM_PROVIDER | No | LLM provider to use (openai or gemini) | openai
MCP_TRANSPORT | No | MCP transport mode (http for network access or stdio for local) | http
GOOGLE_API_KEY | No | Your Google API key from aistudio.google.com | (none)
LLM_ENVIRONMENT | No | Azure OpenAI environment (nonprod or prod) | nonprod
AZURE_OPENAI_KEY | No | Your Azure OpenAI API key from genai.web.bms.com | (none)
EMBEDDING_PROVIDER | No | Embedding provider to use (local for BGE-M3, onnx for lightweight local, openai for Azure OpenAI, or gemini for Google Gemini) | local
ONNX_EMBEDDING_MODEL | No | ONNX embedding model to use (e.g., all-MiniLM-L6-v2, all-mpnet-base-v2) | all-MiniLM-L6-v2
GEMINI_EMBEDDING_MODEL | No | Google Gemini embedding model to use | text-embedding-004
AZURE_OPENAI_API_VERSION | No | Azure OpenAI API version | 2024-02-01
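As a sketch, the variables above could be collected in a `.env` file. The values below are illustrative placeholders, not tested settings; only variables you want to override need to be set, since each has a default apart from the API keys.

```shell
# Hypothetical .env for the CanvasXpress MCP Server -- all values are examples.

# Transport: serve over HTTP on all interfaces, port 8000 (the defaults).
MCP_TRANSPORT=http
MCP_HOST=0.0.0.0
MCP_PORT=8000

# Switch both chat and embeddings to Google Gemini instead of the
# Azure OpenAI / local defaults.
LLM_PROVIDER=gemini
GEMINI_MODEL=gemini-2.0-flash-exp
EMBEDDING_PROVIDER=gemini
GEMINI_EMBEDDING_MODEL=text-embedding-004

# Placeholder key -- obtain a real one from aistudio.google.com.
GOOGLE_API_KEY=replace-me
```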

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions exposed to the LLM to take actions

No tools

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/neuhausi/canvasxpress-mcp-server-main'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.