
MCP Hub

by CodeHalwell

ShallowCodeResearch_agent_llm_processor

Process text using an LLM for tasks like summarization, reasoning, and keyword extraction. Input text and specify the task to generate structured outputs with metadata on the MCP Hub.

Instructions

Wrapper for LLMProcessorAgent that processes text with an LLM. Returns the LLM processing result with output and metadata.

Input Schema

Name        Required  Description                                                          Default
context     No        Optional context for processing
task        No        The processing task ('summarize', 'reason', or 'extract_keywords')  summarize
text_input  No        The input text to process

Input Schema (JSON Schema)

{ "properties": { "context": { "description": "Optional context for processing", "type": "string" }, "task": { "default": "summarize", "description": "The processing task ('summarize', 'reason', or 'extract_keywords')", "enum": [ "summarize", "reason", "extract_keywords" ], "type": "string" }, "text_input": { "description": "The input text to process", "type": "string" } }, "type": "object" }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/CodeHalwell/gradio-mcp-agent-hack'
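
The same lookup can be done from Python with only the standard library; this is a minimal sketch equivalent to the curl command above, assuming the endpoint returns a JSON body:

import json
import urllib.request

# Fetch this server's entry from the MCP directory API (mirrors the curl command).
url = "https://glama.ai/api/mcp/v1/servers/CodeHalwell/gradio-mcp-agent-hack"

with urllib.request.urlopen(url) as response:
    server_info = json.load(response)

# The exact response fields are not documented here, so just print what we get.
print(json.dumps(server_info, indent=2))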

If you have feedback or need assistance with the MCP directory API, please join our Discord server.