
Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| BIO_MCP_EVO2_MODEL_SIZE | No | Model configuration size. Options: `7b`, `40b` | `7b` |
| BIO_MCP_EVO2_CUDA_DEVICE | No | CUDA device index (e.g., `0`) | `0` |
| BIO_MCP_EVO2_NIM_API_KEY | No | NVIDIA NIM API key (required for `api` mode) | |
| BIO_MCP_EVO2_SBATCH_TIME | No | SLURM time limit (for `sbatch` mode) | `01:00:00` |
| BIO_MCP_EVO2_SBATCH_MEMORY | No | SLURM memory limit (for `sbatch` mode) | `64G` |
| BIO_MCP_EVO2_EXECUTION_MODE | No | Execution mode selection. Options: `local`, `sbatch`, `singularity`, `docker`, `api` | |
| BIO_MCP_EVO2_SBATCH_GPU_TYPE | No | SLURM GPU type (for `sbatch` mode) | `h100` |
| BIO_MCP_EVO2_SBATCH_PARTITION | No | SLURM partition (for `sbatch` mode) | `gpu` |
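As a sketch, the variables above could be set like this for a SLURM deployment. The variable names come from the table; the specific values (`40b` model, two-hour limit) are illustrative only, and any variable left unset falls back to the listed default.

```shell
# Illustrative configuration for running evo2 through SLURM (sbatch mode).
# Variable names are taken from the table above; values are examples.
export BIO_MCP_EVO2_EXECUTION_MODE=sbatch
export BIO_MCP_EVO2_MODEL_SIZE=40b          # default is 7b
export BIO_MCP_EVO2_SBATCH_PARTITION=gpu
export BIO_MCP_EVO2_SBATCH_GPU_TYPE=h100
export BIO_MCP_EVO2_SBATCH_TIME=02:00:00    # default is 01:00:00
export BIO_MCP_EVO2_SBATCH_MEMORY=64G
```

Note that `BIO_MCP_EVO2_NIM_API_KEY` is only needed when `BIO_MCP_EVO2_EXECUTION_MODE=api`.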

Tools

Functions exposed to the LLM to take actions

No tools

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/bio-mcp/bio-mcp-evo2'
```
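The same endpoint can be queried programmatically. A minimal Python sketch, assuming only that the endpoint returns JSON (the response schema is not documented here):

```python
import json
import urllib.request

# Directory API endpoint for this server, taken from the curl example above.
API_URL = "https://glama.ai/api/mcp/v1/servers/bio-mcp/bio-mcp-evo2"

def fetch_server_info(url: str = API_URL) -> dict:
    """Fetch the server's MCP directory entry and decode it as JSON.

    The response is parsed as generic JSON because the schema is not
    documented on this page; inspect the result to see available fields.
    """
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

For example, `fetch_server_info()` returns a dictionary describing the bio-mcp-evo2 entry.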

If you have feedback or need assistance with the MCP directory API, please join our Discord server.