
text_classification

Classify the sentiment and topic category of text using AI models.

Instructions

Classify text sentiment/category using the DeepInfra OpenAI-compatible API.

Input Schema

Name    Required  Description  Default
-----   --------  -----------  -------
model   No
text    Yes
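Per the schema, only text is required; model is optional and falls back to the server's configured default. A hypothetical call payload (the example text is illustrative), sketched in Python:

```python
# Hypothetical MCP tools/call arguments for this tool; only "text" is required.
request = {
    "name": "text_classification",
    "arguments": {"text": "The checkout flow keeps crashing on my phone."},
}
```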

Implementation Reference

  • The core handler function for the 'text_classification' tool. It crafts a specific prompt for sentiment and category classification and invokes the DeepInfra OpenAI-compatible completions API using the configured model.
    @app.tool()
    async def text_classification(text: str) -> str:
        """Classify text using DeepInfra OpenAI-compatible API."""
        model = DEFAULT_MODELS["text_classification"]
        prompt = f"""Analyze the following text and classify it.
    Determine the sentiment (positive, negative, neutral) and main category/topic.
    Provide your analysis in JSON format with 'sentiment' and 'category' fields.

    Text: {text}

    Response format: {{"sentiment": "positive/negative/neutral", "category": "topic"}}"""
        try:
            response = await client.completions.create(
                model=model,
                prompt=prompt,
                max_tokens=200,
                temperature=0.1,
            )
            if response.choices:
                return response.choices[0].text
            else:
                return "Unable to classify text"
        except Exception as e:
            return f"Error classifying text: {type(e).__name__}: {str(e)}"
  • The conditional block that registers the text_classification tool with the FastMCP app if enabled via ENABLED_TOOLS environment variable.
    if "all" in ENABLED_TOOLS or "text_classification" in ENABLED_TOOLS:
        @app.tool()
  • Configuration dictionary defining the default model for the text_classification tool (and others), loaded from environment variables.
    DEFAULT_MODELS = {
        "generate_image": os.getenv("MODEL_GENERATE_IMAGE", "Bria/Bria-3.2"),
        "text_generation": os.getenv("MODEL_TEXT_GENERATION", "meta-llama/Llama-2-7b-chat-hf"),
        "embeddings": os.getenv("MODEL_EMBEDDINGS", "sentence-transformers/all-MiniLM-L6-v2"),
        "speech_recognition": os.getenv("MODEL_SPEECH_RECOGNITION", "openai/whisper-large-v3"),
        "zero_shot_image_classification": os.getenv("MODEL_ZERO_SHOT_IMAGE_CLASSIFICATION", "openai/gpt-4o-mini"),
        "object_detection": os.getenv("MODEL_OBJECT_DETECTION", "openai/gpt-4o-mini"),
        "image_classification": os.getenv("MODEL_IMAGE_CLASSIFICATION", "openai/gpt-4o-mini"),
        "text_classification": os.getenv("MODEL_TEXT_CLASSIFICATION", "microsoft/DialoGPT-medium"),
        "token_classification": os.getenv("MODEL_TOKEN_CLASSIFICATION", "microsoft/DialoGPT-medium"),
        "fill_mask": os.getenv("MODEL_FILL_MASK", "microsoft/DialoGPT-medium"),
    }
  • Function signature defining the input schema (text: str) and output type (str) for the tool, used by FastMCP for validation.
    async def text_classification(text: str) -> str:
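Because the handler returns the model's raw completion text (which is asked to contain JSON but may include surrounding prose), a caller may want to parse the result defensively. A minimal sketch, assuming the response format shown in the prompt above; parse_classification is a hypothetical helper, not part of the server:

```python
import json
import re

def parse_classification(raw: str) -> dict:
    """Extract a {"sentiment": ..., "category": ...} object from a model reply.

    The prompt requests JSON, but models sometimes wrap it in extra prose,
    so we grab the first brace-delimited span and fall back to neutral
    defaults when nothing parseable is found.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        return {"sentiment": "neutral", "category": "unknown"}
    try:
        data = json.loads(match.group(0))
    except json.JSONDecodeError:
        return {"sentiment": "neutral", "category": "unknown"}
    return {
        "sentiment": data.get("sentiment", "neutral"),
        "category": data.get("category", "unknown"),
    }
```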
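The registration guard above checks membership in ENABLED_TOOLS, which is presumably parsed from a comma-separated ENABLED_TOOLS environment variable. A plausible sketch of that parsing (the server's actual logic may differ; enabled_tools is a hypothetical helper):

```python
import os

def enabled_tools(raw=None):
    """Parse a comma-separated tool list; "all" enables every tool."""
    if raw is None:
        raw = os.getenv("ENABLED_TOOLS", "all")
    return {name.strip() for name in raw.split(",") if name.strip()}

ENABLED_TOOLS = enabled_tools()
```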
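Every entry in DEFAULT_MODELS follows the same pattern: an environment variable named MODEL_<TOOL_NAME_UPPERCASED> with a hard-coded fallback. That pattern can be factored into a helper; a sketch under that assumption (default_model is hypothetical, not in the server's code):

```python
import os

def default_model(tool: str, fallback: str) -> str:
    """Resolve the model for a tool from MODEL_<TOOL>, else use the fallback."""
    return os.getenv(f"MODEL_{tool.upper()}", fallback)

# Equivalent to the dictionary entry above:
text_classification_model = default_model(
    "text_classification", "microsoft/DialoGPT-medium"
)
```

Setting MODEL_TEXT_CLASSIFICATION in the server's environment therefore overrides the default without code changes.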

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/phuihock/mcp-deeinfra'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.