Server Configuration

Describes the environment variables required to run the server.

No arguments; the server runs without any environment variables.

Schema

Prompts

Interactive templates invoked by user choice

| Name | Description |
| --- | --- |
| compare-models | Compare multiple Hugging Face models |
| summarize-paper | Summarize an AI research paper from arXiv |
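Prompts are retrieved over MCP with a `prompts/get` JSON-RPC request. A minimal sketch of that message follows; the `arxiv_id` argument name is an assumption, since this listing does not show each prompt's parameters:

```python
import json

def prompt_request(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `prompts/get` request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "prompts/get",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical invocation of `summarize-paper`; the `arxiv_id`
# argument name is a guess, not taken from this listing.
req = prompt_request("summarize-paper", {"arxiv_id": "2307.09288"})
print(json.dumps(req, indent=2))
```

In practice an MCP client SDK builds and sends this message for you; the sketch only shows the wire shape.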

Resources

Contextual data attached and managed by the client

| Name | Description |
| --- | --- |
| Llama 3 8B Instruct | Meta's Llama 3 8B Instruct model |
| Mistral 7B Instruct v0.2 | Mistral AI's 7B instruction-following model |
| OpenChat 3.5 | Open-source chatbot based on Mistral 7B |
| Stable Diffusion XL 1.0 | SDXL text-to-image model |
| Databricks Dolly 15k | 15k instruction-following examples |
| SQuAD | Stanford Question Answering Dataset |
| GLUE | General Language Understanding Evaluation benchmark |
| Summarize From Feedback | OpenAI summarization dataset |
| Diffusers Demo | Demo of Stable Diffusion models |
| Chatbot Demo | Demo of a Gradio chatbot interface |
| Midjourney v4 Diffusion | Replica of Midjourney v4 |
| StableVicuna | Fine-tuned Vicuna with RLHF |
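Clients read these resources with an MCP `resources/read` request. A minimal sketch, assuming a hypothetical `hf://` URI scheme; the server's actual resource URIs are not shown in this listing, so list them first via `resources/list`:

```python
import json

def read_resource_request(uri, request_id=1):
    """Build a JSON-RPC 2.0 `resources/read` request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "resources/read",
        "params": {"uri": uri},
    }

# The URI scheme below is an illustrative guess, not from this listing.
req = read_resource_request("hf://model/meta-llama/Meta-Llama-3-8B-Instruct")
print(json.dumps(req, indent=2))
```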

Tools

Functions exposed to the LLM so it can take actions

| Name | Description |
| --- | --- |
| search-models | Search for models on Hugging Face Hub |
| get-model-info | Get detailed information about a specific model |
| search-datasets | Search for datasets on Hugging Face Hub |
| get-dataset-info | Get detailed information about a specific dataset |
| search-spaces | Search for Spaces on Hugging Face Hub |
| get-space-info | Get detailed information about a specific Space |
| get-paper-info | Get information about a specific paper on Hugging Face |
| get-daily-papers | Get the list of daily papers curated by Hugging Face |
| search-collections | Search for collections on Hugging Face Hub |
| get-collection-info | Get detailed information about a specific collection |
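Tools are invoked with an MCP `tools/call` request. A minimal sketch of the message for `search-models`; the `query` and `limit` argument names are assumptions, so check each tool's input schema via `tools/list` before relying on them:

```python
import json

def tool_call_request(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# `query` and `limit` are hypothetical parameter names for illustration.
req = tool_call_request("search-models", {"query": "llama", "limit": 5})
print(json.dumps(req, indent=2))
```

An MCP client SDK normally handles this framing; the sketch only documents what goes over the wire.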

MCP directory API

We provide all the information about MCP servers via our MCP directory API. For example, fetch this server's entry with:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/shreyaskarnik/huggingface-mcp-server'
```
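The same endpoint can be queried from Python. A minimal sketch using only the standard library; the endpoint pattern is taken from the curl example above, but the response schema is not documented here, so inspect the JSON before depending on specific keys:

```python
import json
from urllib.request import urlopen

BASE = "https://glama.ai/api/mcp/v1/servers"

def api_url(slug: str) -> str:
    # Endpoint pattern from the curl example: BASE + "/" + owner/name slug.
    return f"{BASE}/{slug}"

def fetch_server_info(slug: str) -> dict:
    """GET a server's directory entry and parse it as JSON.

    The response fields are not documented in this listing, so treat
    the returned dict's keys as unverified.
    """
    with urlopen(api_url(slug)) as resp:
        return json.load(resp)

url = api_url("shreyaskarnik/huggingface-mcp-server")
```

Call `fetch_server_info("shreyaskarnik/huggingface-mcp-server")` to perform the actual network request.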

If you have feedback or need assistance with the MCP directory API, please join our Discord server.