
AEAT MCP Server

by iMark21

search_tax_rules

Search Spanish IRPF tax rules for income, deductions, and casilla definitions using natural terms like 'alquiler' or 'dividendos' to find applicable regulations and limits.

Instructions

Searches across all IRPF tax rules data for a given keyword or concept. Searches in: work income, rental income, investment income, capital gains, deductions, and casilla definitions. Use natural terms like 'alquiler', 'dividendos', 'maternidad', 'vehiculo electrico', 'plan pensiones', 'vivienda habitual', 'despido', etc. Returns matching rules with casilla numbers, limits, and source articles. Source: AEAT Manual Practico Renta 2025.

Input Schema

Name     Required   Description                                                                               Default
query    Yes        Search term (e.g., 'alquiler', 'dividendos', 'maternidad', 'despido', 'criptomonedas')    (none)
domain   No         Filter by tax domain (default: search all)                                                all
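
For illustration, a minimal tools/call payload for this tool might look like the sketch below. The JSON-RPC envelope is the standard MCP shape; the argument values are simply the examples quoted in the description above, not real traffic from this server.

// Illustrative only: a tools/call request for search_tax_rules.
// The envelope is the standard MCP JSON-RPC shape; the argument values
// are examples taken from the tool description, not server output.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "search_tax_rules",
    arguments: {
      query: "alquiler", // required: natural-language search term
      domain: "all",     // optional: tax domain filter, defaults to "all"
    },
  },
};
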
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what gets searched (tax rules across specified domains) and what gets returned (matching rules with casilla numbers, limits, source articles). However, it doesn't disclose important behavioral traits like whether this is a read-only operation, potential rate limits, authentication requirements, or pagination behavior for large result sets.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
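
For context, the MCP spec defines optional tool annotations that can carry exactly this kind of behavioral disclosure. A sketch of what this tool could declare is shown below; whether search_tax_rules really is read-only and idempotent is an assumption here, since the server declares no annotations at all.

// Sketch only: MCP tool annotations that could carry the behavioral hints
// discussed above. The read-only/idempotent values are assumptions; the
// server currently declares no annotations.
const annotations = {
  title: "Search IRPF tax rules",
  readOnlyHint: true,     // assumed: searching should not mutate anything
  destructiveHint: false, // assumed: no destructive side effects
  idempotentHint: true,   // assumed: the same query returns the same rules
  openWorldHint: false,   // assumed: operates on a bundled, closed rule set
};
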

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the core functionality, usage guidance with examples, and return format with data source. Every sentence adds value without redundancy. It's appropriately sized for a search tool with two parameters and no annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 2 parameters, 100% schema coverage, and no output schema, the description provides good contextual completeness. It explains which domains are searched, gives natural-language examples, describes the return format, and cites the data source. The main gap is the missing output schema: the agent must infer the result structure from the description alone, though the description does state what information will be returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
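
Because no output schema is published, any structure an agent assumes is a guess. A purely hypothetical result item, limited to the fields the description actually promises (casilla numbers, limits, source articles), might look like this:

// Hypothetical result shape inferred ONLY from the description
// ("matching rules with casilla numbers, limits, and source articles").
// The server publishes no output schema, so every field name is an assumption.
interface TaxRuleMatch {
  casilla?: string;        // assumed: casilla (form box) number
  limit?: string;          // assumed: applicable limit, if any
  sourceArticle?: string;  // assumed: citation to the source article
}
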

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so the baseline is 3. The description adds meaningful context beyond the schema by providing natural language examples of search terms ('alquiler', 'dividendos', 'maternidad', etc.) and clarifying that searches occur across specific tax domains (work income, rental income, etc.). This helps the agent understand the semantic intent behind the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('searches across all IRPF tax rules data') and resource ('tax rules data'), with explicit scope ('for a given keyword or concept'). It distinguishes itself from sibling tools like 'search_casillas' by covering broader tax domains beyond just casillas, and from 'get_tax_form_info' by focusing on rule search rather than form retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: when searching tax rules by natural language terms across multiple tax domains. It gives concrete examples of search terms ('alquiler', 'dividendos', etc.) and mentions the data source. However, it doesn't explicitly state when NOT to use it or name specific alternatives among sibling tools, though the scope differentiation is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/iMark21/aeat-mcp'
