
embeddings

Generate vector embeddings for text inputs to enable semantic search, similarity analysis, and machine learning applications using DeepInfra's AI models.

Instructions

Generate embeddings for a list of texts using DeepInfra OpenAI-compatible API.

Input Schema

Name     Required  Description                                 Default
inputs   Yes       List of texts to generate embeddings for   —
model    No        Embedding model identifier                  sentence-transformers/all-MiniLM-L6-v2
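
For illustration, an embeddings tool call might pass arguments like the following. The texts are placeholders, and note that the current handler resolves the model from DEFAULT_MODELS rather than from a per-call model argument:

    # Illustrative arguments for an "embeddings" tool call.
    # The texts are placeholders; "model" is optional per the schema above,
    # but the handler shown below ignores it and uses DEFAULT_MODELS instead.
    arguments = {
        "inputs": [
            "What is semantic search?",
            "How do vector embeddings work?",
        ],
    }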

Implementation Reference

  • Handler function for the embeddings tool. It is decorated with @app.tool() to register it with FastMCP, and it generates embeddings via DeepInfra's OpenAI-compatible client (a sketch of how that client might be constructed appears after this list).
    @app.tool()
    async def embeddings(inputs: list[str]) -> str:
        """Generate embeddings for a list of texts using DeepInfra OpenAI-compatible API."""
        model = DEFAULT_MODELS["embeddings"]
        try:
            response = await client.embeddings.create(
                model=model,
                input=inputs,
            )
            embeddings_list = [item.embedding for item in response.data]
            return str(embeddings_list)
        except Exception as e:
            return f"Error generating embeddings: {type(e).__name__}: {str(e)}"
  • Conditional check that registers the embeddings tool only when it is enabled in the ENABLED_TOOLS configuration (a sketch of how ENABLED_TOOLS might be populated appears after this list).
    if "all" in ENABLED_TOOLS or "embeddings" in ENABLED_TOOLS:
  • Configuration dictionary defining the default model for each tool, including the embeddings model; every entry can be overridden with an environment variable (see the override example after this list).
    DEFAULT_MODELS = {
        "generate_image": os.getenv("MODEL_GENERATE_IMAGE", "Bria/Bria-3.2"),
        "text_generation": os.getenv("MODEL_TEXT_GENERATION", "meta-llama/Llama-2-7b-chat-hf"),
        "embeddings": os.getenv("MODEL_EMBEDDINGS", "sentence-transformers/all-MiniLM-L6-v2"),
        "speech_recognition": os.getenv("MODEL_SPEECH_RECOGNITION", "openai/whisper-large-v3"),
        "zero_shot_image_classification": os.getenv("MODEL_ZERO_SHOT_IMAGE_CLASSIFICATION", "openai/gpt-4o-mini"),
        "object_detection": os.getenv("MODEL_OBJECT_DETECTION", "openai/gpt-4o-mini"),
        "image_classification": os.getenv("MODEL_IMAGE_CLASSIFICATION", "openai/gpt-4o-mini"),
        "text_classification": os.getenv("MODEL_TEXT_CLASSIFICATION", "microsoft/DialoGPT-medium"),
        "token_classification": os.getenv("MODEL_TOKEN_CLASSIFICATION", "microsoft/DialoGPT-medium"),
        "fill_mask": os.getenv("MODEL_FILL_MASK", "microsoft/DialoGPT-medium"),
    }
  • Function signature providing the input schema (list[str]) and output type (str) for the embeddings tool; see the note after this list on parsing the string result back into vectors.
    async def embeddings(inputs: list[str]) -> str:
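
The handler refers to a module-level client that is not included in these excerpts. A minimal sketch of how such a client might be constructed, assuming the official openai Python package and DeepInfra's OpenAI-compatible endpoint; the DEEPINFRA_API_KEY variable name is an assumption, not taken from the server's code:

    import os
    from openai import AsyncOpenAI

    # Sketch only: an async OpenAI-compatible client pointed at DeepInfra.
    # DEEPINFRA_API_KEY is an assumed variable name for illustration.
    client = AsyncOpenAI(
        api_key=os.getenv("DEEPINFRA_API_KEY"),
        base_url="https://api.deepinfra.com/v1/openai",
    )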
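
ENABLED_TOOLS is likewise referenced but not shown. One plausible sketch, assuming a comma-separated ENABLED_TOOLS environment variable that defaults to enabling everything:

    import os

    # Sketch only: parse an assumed ENABLED_TOOLS environment variable such as
    # "embeddings,text_generation" into a set; default to enabling all tools.
    ENABLED_TOOLS = {
        name.strip()
        for name in os.getenv("ENABLED_TOOLS", "all").split(",")
        if name.strip()
    }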
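
Because every entry in DEFAULT_MODELS falls back to an environment variable, the embeddings model can be changed without touching code. A small illustration of that resolution order; the model name here is only an example:

    import os

    # Illustration of the fallback used in DEFAULT_MODELS: the environment
    # variable wins when set, otherwise the hard-coded default is used.
    os.environ["MODEL_EMBEDDINGS"] = "BAAI/bge-base-en-v1.5"  # example model name
    model = os.getenv("MODEL_EMBEDDINGS", "sentence-transformers/all-MiniLM-L6-v2")
    print(model)  # BAAI/bge-base-en-v1.5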
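
Finally, the tool returns the embeddings as the string repr of a Python list rather than structured JSON. A caller that needs numeric vectors could parse that string, assuming it is well-formed; error messages returned by the tool would fail this parse:

    import ast

    # Sketch only: convert the tool's string result back into a list of vectors.
    # Assumes the result is a well-formed repr of a list of lists of floats.
    result_text = "[[0.01, -0.02, 0.03], [0.04, 0.05, -0.06]]"  # placeholder output
    vectors = ast.literal_eval(result_text)
    print(len(vectors), len(vectors[0]))  # 2 3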
