Glama

FastApply MCP Server

by betmoar

Server Configuration

Describes the environment variables used to configure the server. All variables are optional and fall back to the defaults shown.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| HOST | No | Server host binding | localhost |
| PORT | No | Server port | 8000 |
| DEBUG | No | Debug mode | false |
| LOG_LEVEL | No | Logging verbosity | INFO |
| CACHE_SIZE | No | Cache entry limit | 1000 |
| QDRANT_URL | No | Qdrant vector database URL | http://localhost:6333 |
| MAX_FILE_SIZE | No | Maximum file size in bytes (10 MB) | 10485760 |
| FAST_APPLY_URL | No | Your FastApply server URL | http://localhost:1234/v1 |
| OPENAI_API_KEY | No | API key for OpenAI integration | |
| QDRANT_API_KEY | No | Qdrant API key | |
| WORKSPACE_ROOT | No | Root directory operations are confined to | |
| TIMEOUT_SECONDS | No | Operation timeout in seconds | 30 |
| FAST_APPLY_MODEL | No | Model identifier | fastapply-1.5b |
| ALLOWED_EXTENSIONS | No | Allowed file extensions | .py,.js,.ts,.jsx,.tsx,.md,.json,.yaml,.yml |
| FAST_APPLY_API_KEY | No | FastApply API key, if required | |
| FAST_APPLY_TIMEOUT | No | Request timeout in seconds | 30.0 |
| FAST_APPLY_MAX_TOKENS | No | Response token limit | 8000 |
| FAST_APPLY_TEMPERATURE | No | Sampling temperature (kept low for consistent edits) | 0.05 |
| FAST_APPLY_STRICT_PATHS | No | Enable strict path validation | 1 |
| MAX_CONCURRENT_OPERATIONS | No | Maximum concurrent operations | 4 |
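A minimal sketch of exporting these variables before launching the server. The variable names come from the table above; the values shown are illustrative, and the workspace path is an assumption for the example:

```shell
# Point the server at a local FastApply endpoint (illustrative values).
export FAST_APPLY_URL="http://localhost:1234/v1"
export FAST_APPLY_MODEL="fastapply-1.5b"

# Confine file operations to the current directory and enforce path checks.
export WORKSPACE_ROOT="$PWD"
export FAST_APPLY_STRICT_PATHS=1

export LOG_LEVEL=INFO
```

Since every variable has a default, only the values you want to override need to be set.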

Schema

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

No resources.

Tools

Functions exposed to the LLM to take actions

| Name | Description |
|------|-------------|
| list_tools | Return metadata for all exposed tools (unified mode). |
| call_tool | Handle tool calls with unified branching and robust safety checks. |
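Over MCP's JSON-RPC transport, an exposed tool is invoked with the standard `tools/call` method. A sketch of a request that invokes the server's `list_tools` tool (the envelope follows the MCP specification; the empty `arguments` object is an assumption, since this server's parameter schemas are not shown here):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_tools",
    "arguments": {}
  }
}
```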

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/betmoar/FastApply-MCP'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.