token_classification

Identify and classify named entities in text using natural language processing. Extract people, organizations, locations, and other entities from documents for data analysis.

Instructions

Perform token classification (NER) using DeepInfra OpenAI-compatible API.

Input Schema

Name    Required    Description                               Default
text    Yes         The text to analyze for named entities    (none)
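
A tool call supplies the single required `text` argument. As a sketch, this is the shape of the JSON-RPC payload an MCP client would send, following the standard MCP `tools/call` convention (the request id and the sample text are placeholders):

```python
import json

# MCP "tools/call" request carrying the required "text" argument.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "token_classification",
        "arguments": {"text": "Barack Obama visited Paris in 2015."},
    },
}
print(json.dumps(payload, indent=2))
```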

Implementation Reference

  • Main handler function for the 'token_classification' tool. It uses a prompted language model completion to perform named entity recognition (NER) on input text.
    @app.tool()
    async def token_classification(text: str) -> str:
        """Perform token classification (NER) using DeepInfra OpenAI-compatible API."""
        model = DEFAULT_MODELS["token_classification"]
        prompt = f"""Perform named entity recognition on the following text. Identify all named entities (persons, organizations, locations, dates, etc.) and classify them. Provide your analysis in JSON format with an array of entities, each having 'entity', 'type', and 'position' fields.

    Text: {text}

    Response format:
    {{"entities": [{{"entity": "entity_name", "type": "PERSON/ORG/LOC/DATE/etc", "position": [start, end]}}]}}"""
        try:
            response = await client.completions.create(
                model=model,
                prompt=prompt,
                max_tokens=500,
                temperature=0.1,
            )
            if response.choices:
                return response.choices[0].text
            else:
                return "Unable to perform token classification"
        except Exception as e:
            return f"Error performing token classification: {type(e).__name__}: {str(e)}"
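
Because the handler returns the model's raw completion text rather than structured data, the caller is expected to parse it against the documented response format. A minimal, defensive parsing sketch (the sample payload below is illustrative, not real model output, and `parse_entities` is a hypothetical helper, not part of the server):

```python
import json

# Illustrative response matching the documented format; a real language-model
# completion may be malformed, so parsing should fail gracefully.
raw = '{"entities": [{"entity": "Ada Lovelace", "type": "PERSON", "position": [0, 12]}]}'

def parse_entities(raw_text: str) -> list:
    """Parse the tool's JSON response, returning [] on malformed output."""
    try:
        data = json.loads(raw_text)
    except json.JSONDecodeError:
        return []
    return data.get("entities", [])

for ent in parse_entities(raw):
    print(ent["entity"], ent["type"], ent["position"])
```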
  • Conditional registration of the token_classification tool using the @app.tool() decorator on the FastMCP app.
    if "all" in ENABLED_TOOLS or "token_classification" in ENABLED_TOOLS:

        @app.tool()
        ...
  • Default models configuration dictionary, including the model used for token_classification (default: microsoft/DialoGPT-medium).
    DEFAULT_MODELS = {
        "generate_image": os.getenv("MODEL_GENERATE_IMAGE", "Bria/Bria-3.2"),
        "text_generation": os.getenv("MODEL_TEXT_GENERATION", "meta-llama/Llama-2-7b-chat-hf"),
        "embeddings": os.getenv("MODEL_EMBEDDINGS", "sentence-transformers/all-MiniLM-L6-v2"),
        "speech_recognition": os.getenv("MODEL_SPEECH_RECOGNITION", "openai/whisper-large-v3"),
        "zero_shot_image_classification": os.getenv("MODEL_ZERO_SHOT_IMAGE_CLASSIFICATION", "openai/gpt-4o-mini"),
        "object_detection": os.getenv("MODEL_OBJECT_DETECTION", "openai/gpt-4o-mini"),
        "image_classification": os.getenv("MODEL_IMAGE_CLASSIFICATION", "openai/gpt-4o-mini"),
        "text_classification": os.getenv("MODEL_TEXT_CLASSIFICATION", "microsoft/DialoGPT-medium"),
        "token_classification": os.getenv("MODEL_TOKEN_CLASSIFICATION", "microsoft/DialoGPT-medium"),
        "fill_mask": os.getenv("MODEL_FILL_MASK", "microsoft/DialoGPT-medium"),
    }
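
As the dictionary shows, each default can be replaced by setting the corresponding environment variable before the server starts. A sketch of the override pattern for the token_classification model (the model name "some-org/some-ner-model" is a placeholder, not a recommendation):

```python
import os

# Set before the server reads DEFAULT_MODELS; placeholder model name.
os.environ["MODEL_TOKEN_CLASSIFICATION"] = "some-org/some-ner-model"

# Same lookup the server performs: env var wins, otherwise the default.
model = os.getenv("MODEL_TOKEN_CLASSIFICATION", "microsoft/DialoGPT-medium")
print(model)
```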

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/phuihock/mcp-deeinfra'
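
The same endpoint can be queried from Python. A minimal sketch using only the standard library, with the server slug taken from the curl command above (the request is built but only sent if you uncomment the fetch line):

```python
from urllib.request import Request

# Build the GET request for the MCP directory API endpoint shown above.
url = "https://glama.ai/api/mcp/v1/servers/phuihock/mcp-deeinfra"
req = Request(url, method="GET")
print(req.full_url)
# To actually fetch: urllib.request.urlopen(req).read()
```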

If you have feedback or need assistance with the MCP directory API, please join our Discord server.