Server Configuration

Describes the environment variables required to run the server.

Name                Required   Description                       Default
RESPCODE_API_KEY    Yes        Your API key from respcode.com
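
The sketch below shows one way to supply the key when launching the server from Python. The launch command ("npx", "respcode-mcp") is an assumption, not the documented invocation; substitute the actual command from the server's own documentation.

import os
import subprocess

# RESPCODE_API_KEY is the only required variable. The command below is a
# hypothetical example of how a client might start the server process.
env = {**os.environ, "RESPCODE_API_KEY": "your-key-from-respcode.com"}
subprocess.run(["npx", "respcode-mcp"], env=env, check=True)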

Capabilities

Features and capabilities supported by this server

Capability      Details
tools           { "listChanged": false }
experimental    {}

Tools

Functions exposed to the LLM to take actions. An example invocation follows the tool list.

Name              Description
generate          Generate code with one AI model and execute it. Default: deepseek-coder on x86.
compete           Generate with all 4 AI models and execute each, then compare which produces the best code.
collaborate       The models work together: the first generates, the others refine, and the final result is executed.
consensus         All 4 models generate solutions, Claude picks or merges the best one, and the result is executed.
execute           Execute your own code (no AI generation); it is simply run in the sandbox. Costs 1 credit.
history           View your recent prompts and execution results.
history_search    Search your prompt history by keyword.
rerun             Re-run a previous prompt on a different architecture.
credits           Check your credit balance and see pricing.
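
For illustration, the sketch below shows the JSON-RPC tools/call request an MCP client would send to invoke the generate tool, built as a plain Python dictionary. The argument names (prompt, architecture) are assumptions rather than the server's documented schema.

# Hypothetical request payload for the "generate" tool. The method and
# params structure follow the MCP tools/call convention; the argument
# names inside "arguments" are assumed, not taken from the server schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate",
        "arguments": {
            "prompt": "Write a function that reverses a string",  # assumed parameter
            "architecture": "x86",  # assumed parameter
        },
    },
}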

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

No resources.

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RespCodeAI/respcode-mcp'
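
As a sketch, the same request can be made from Python with the requests library; the response structure is not shown here, so inspect the returned JSON for the fields you need.

import requests

# Fetch this server's entry from the Glama MCP directory API and print the JSON.
url = "https://glama.ai/api/mcp/v1/servers/RespCodeAI/respcode-mcp"
response = requests.get(url, timeout=30)
response.raise_for_status()
print(response.json())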

If you have feedback or need assistance with the MCP directory API, please join our Discord server.