Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| RESPCODE_API_KEY | Yes | Your API key from respcode.com | |
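As a sketch of how that requirement might be enforced, a launcher could validate the variable before starting the server. The `check_config` helper below is hypothetical and not part of this server; it only illustrates a fail-fast check on the one required setting.

```python
import os


def check_config(env: dict) -> str:
    """Return the API key from the given environment mapping, or fail fast."""
    key = env.get("RESPCODE_API_KEY")
    if not key:
        raise RuntimeError("RESPCODE_API_KEY is required (see respcode.com)")
    return key


# Example: a populated environment passes the check.
print(check_config({"RESPCODE_API_KEY": "example-key"}))

# In a real launcher you would pass the process environment instead:
# check_config(os.environ)
```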

Capabilities

Features and capabilities supported by this server

| Capability | Details |
|------------|---------|
| tools | `{ "listChanged": false }` |
| experimental | `{}` |

Tools

Functions exposed to the LLM to take actions

| Name | Description |
|------|-------------|
| generateC | Generate code with one AI model and execute it. Default: deepseek-coder on x86. |
| competeC | Generate with all 4 AI models and execute each, to compare which produces the best code. |
| collaborateC | Models work together: the first generates, the others refine, then the final result is executed. |
| consensusC | All 4 models generate solutions, Claude picks or merges the best one, then it is executed. |
| executeB | Execute your own code (no AI generation); just run it on the sandbox. Costs 1 credit. |
| historyB | View your recent prompts and execution results. |
| history_searchC | Search your prompt history by keyword. |
| rerunC | Re-run a previous prompt on a different architecture. |
| creditsB | Check your credit balance and see pricing. |
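MCP tools are invoked over JSON-RPC 2.0 with a `tools/call` request. Below is a minimal sketch of the message a client might send to invoke `generateC`; the argument names (`prompt`, `arch`) are assumptions for illustration only — the server's published tool schema defines the real parameters.

```python
import json

# Sketch of a JSON-RPC 2.0 request an MCP client sends to call a tool.
# The "arguments" keys here are hypothetical; check the tool's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generateC",
        "arguments": {"prompt": "reverse a string", "arch": "x86"},
    },
}

# Serialize for transport (stdio or HTTP, depending on the client).
print(json.dumps(request, indent=2))
```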

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

No resources.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RespCodeAI/respcode-mcp'
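The same request can be issued from Python's standard library. This sketch only constructs the request object; the response schema is not documented here, so the actual fetch is left commented out.

```python
import urllib.request

# Directory-API endpoint for this server (from the curl example above).
url = "https://glama.ai/api/mcp/v1/servers/RespCodeAI/respcode-mcp"
req = urllib.request.Request(url, method="GET")

# To actually fetch the entry over the network, uncomment:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())

print(req.full_url)
```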

If you have feedback or need assistance with the MCP directory API, please join our Discord server.