
DataForSEO MCP Server

ai_optimization_llm_response

Retrieve structured AI responses from models like Claude, Gemini, or ChatGPT for SEO optimization tasks, with web search capability for current information.

Instructions

This endpoint retrieves a structured response from the specified AI model, based on the input parameters below.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `llm_type` | Yes | Type of LLM. Must be one of: `claude`, `gemini`, `chat_gpt`, `perplexity`. | |
| `user_prompt` | Yes | Prompt for the AI model: the question or task you want to send. You can specify up to 500 characters in the `user_prompt` field. | |
| `model_name` | Yes | Name of the AI model, consisting of the actual model name and version. If unsure which model to use, first call the `ai_optimization_llm_models` tool to get the list of available models for the specified `llm_type`. | |
| `temperature` | No | Randomness of the AI response. Optional; higher values make output more diverse, lower values make it more focused. | |
| `top_p` | No | Diversity of the AI response. Optional; controls diversity by limiting token selection. | |
| `web_search` | No | Enable web search for current information. When enabled, the AI model can access and cite current web sources. | |
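The schema's constraints can be sketched as a small argument builder. This is an illustrative helper, not an official client; the function name and the example `model_name` value are assumptions for demonstration only.

```python
# Minimal sketch of assembling arguments for ai_optimization_llm_response,
# enforcing the constraints documented in the schema above.

ALLOWED_LLM_TYPES = {"claude", "gemini", "chat_gpt", "perplexity"}
MAX_PROMPT_CHARS = 500  # documented limit for user_prompt

def build_llm_response_args(llm_type, user_prompt, model_name,
                            temperature=None, top_p=None, web_search=None):
    """Assemble the tool's argument dict, validating the required fields."""
    if llm_type not in ALLOWED_LLM_TYPES:
        raise ValueError(f"llm_type must be one of {sorted(ALLOWED_LLM_TYPES)}")
    if len(user_prompt) > MAX_PROMPT_CHARS:
        raise ValueError(f"user_prompt exceeds {MAX_PROMPT_CHARS} characters")
    args = {"llm_type": llm_type,
            "user_prompt": user_prompt,
            "model_name": model_name}
    # Optional fields are sent only when explicitly set.
    if temperature is not None:
        args["temperature"] = temperature
    if top_p is not None:
        args["top_p"] = top_p
    if web_search is not None:
        args["web_search"] = web_search
    return args

# Example call (the model name here is a placeholder, not a verified value):
args = build_llm_response_args("claude", "List three on-page SEO checks.",
                               "example-model-name", web_search=True)
```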
Behavior — 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states that this tool 'retrieves' responses, implying a read-only operation, but doesn't clarify authentication requirements, rate limits, response format, error conditions, or whether the operation is synchronous or asynchronous. The description is too minimal for a tool that interacts with external AI models.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that gets straight to the point. It's appropriately sized for a tool with good schema documentation, though it could be slightly more informative given the lack of annotations and output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what 'structured responses' means, doesn't mention response format or potential errors, and provides no context about the AI service being accessed. The schema handles parameter documentation well, but the description fails to compensate for missing behavioral and output information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds no parameter information beyond what's in the schema: it doesn't explain relationships between parameters, provide examples, or clarify dependencies. A baseline score of 3 is appropriate when the schema does all the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'retrieve structured responses from a specific AI model, based on the input parameters'. It specifies the verb ('retrieve'), resource ('structured responses'), and scope ('from a specific AI model'), making it clear this is a query/response tool. However, it doesn't explicitly differentiate from sibling tools like 'ai_optimization_llm_models' which lists models rather than getting responses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While the input schema's description for 'model_name' mentions calling 'ai_optimization_llm_models' first if unsure, this is not in the tool description itself. There's no mention of prerequisites, constraints, or comparison with other AI-related tools in the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
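The prerequisite buried in the `model_name` schema field implies a two-step workflow: list models first, then request a response. A minimal sketch of that flow, where `call_tool` is a hypothetical stand-in for whatever MCP client invocation is in use, and the assumption that the models tool returns a list of model-name strings is not confirmed by the source:

```python
# Hypothetical two-step flow suggested by the model_name field's description:
# resolve a model name via ai_optimization_llm_models before requesting a
# response from ai_optimization_llm_response.

def choose_model_then_ask(call_tool, llm_type, user_prompt):
    """Pick the first available model for llm_type, then query it."""
    # Step 1: list available models for this LLM type.
    models = call_tool("ai_optimization_llm_models", {"llm_type": llm_type})
    if not models:
        raise RuntimeError(f"no models available for llm_type={llm_type!r}")
    # Step 2: request a structured response from the chosen model.
    return call_tool("ai_optimization_llm_response", {
        "llm_type": llm_type,
        "user_prompt": user_prompt,
        "model_name": models[0],  # assumes a list of model-name strings
    })
```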


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ravinwebsurgeon/seo-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.