Glama
Ownership verified

Server Details

MCP server for Italian tax and fiscal calculations: tax code (Codice Fiscale), IRPEF income tax, INPS social contributions, flat-rate regime (Forfettario), crypto capital gains, and live fiscal deadlines from Agenzia delle Entrate. All data sourced from official Italian law (TUIR, INPS circulars). Free tools + x402 micropayments for live data.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 9 of 9 tools scored. Lowest: 3.2/5.

Server Coherence: A
Disambiguation: 5/5

Every tool has a clearly distinct purpose with no ambiguity. Each tool targets a specific Italian fiscal or administrative calculation (e.g., codice fiscale, crypto tax, IRPEF) or information retrieval task (e.g., ATECO groups, fiscal news, deadlines), and their descriptions precisely define their scope without overlap.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in Italian (e.g., calcola_codice_fiscale, lista_gruppi_ateco, novita_fiscali). The naming is uniform across all tools, using snake_case and descriptive terms that align with their functions, making them easily readable and predictable.

Tool Count: 5/5

With 9 tools, the count is well-scoped for the server's purpose of providing Italian fiscal and administrative tools. Each tool earns its place by covering key areas such as tax calculations, regulatory information, and deadlines, offering a comprehensive yet manageable set for agents to handle related tasks.

Completeness: 4/5

The tool set provides strong coverage for Italian fiscal calculations and information retrieval, including core tax types (e.g., IRPEF, INPS, forfettario), administrative codes, and regulatory updates. Minor gaps might exist, such as tools for specific deductions or advanced tax scenarios, but agents can likely work around these with the available tools for most common workflows.

Available Tools

9 tools
calcola_codice_fiscale: B

Computes the Italian tax code (codice fiscale) of a natural person from first name, last name, date of birth, sex, and municipality of birth. Source: DM 12/03/1974, DPR 605/1973, ISTAT cadastral code tables.

Parameters (JSON Schema)

nome (required): First name (e.g. Mario)
sesso (required): Sex: M (male) or F (female)
cognome (required): Last name (e.g. Rossi)
data_nascita (required): Date of birth in YYYY-MM-DD format (e.g. 1990-01-01)
comune_nascita (required): Municipality of birth (e.g. Roma, Milano, Napoli)
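The name and date portions of the codice fiscale follow fixed rules from DM 12/03/1974, so they can be sketched directly. The following is a minimal partial sketch with hypothetical helper names, not the server's implementation; it omits the cadastral municipality code and the final check character, both of which require the ISTAT tables.

```python
MONTH_CODES = "ABCDEHLMPRST"  # one letter per month, January..December

def _letters(s, keep_vowels):
    """Return the vowels (or consonants) of s, uppercased, in order."""
    vowels = "AEIOU"
    return [c for c in s.upper() if c.isalpha() and ((c in vowels) == keep_vowels)]

def surname_code(surname):
    """First three consonants, then vowels, padded with X to three characters."""
    cons = _letters(surname, False)
    vows = _letters(surname, True)
    return "".join((cons + vows + ["X", "X", "X"])[:3])

def name_code(name):
    """Like the surname, but with four or more consonants take the 1st, 3rd, 4th."""
    cons = _letters(name, False)
    if len(cons) >= 4:
        cons = [cons[0], cons[2], cons[3]]
    vows = _letters(name, True)
    return "".join((cons + vows + ["X", "X", "X"])[:3])

def date_sex_code(yyyy_mm_dd, sex):
    """Last two digits of the year, month letter, day (+40 for females)."""
    y, m, d = yyyy_mm_dd.split("-")
    day = int(d) + (40 if sex.upper() == "F" else 0)
    return f"{y[2:]}{MONTH_CODES[int(m) - 1]}{day:02d}"
```

With the example parameters above, surname_code("Rossi") gives "RSS", name_code("Mario") gives "MRA", and date_sex_code("1990-01-01", "M") gives "90A01".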
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the legal sources (DM 12/03/1974, DPR 605/1973, ISTAT tables) which adds some context about algorithm authority, but doesn't describe what the tool returns (e.g., format of the fiscal code), error handling, rate limits, or authentication requirements for a calculation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: one stating the purpose and inputs, another providing legal sources. It's front-loaded with the core functionality. The legal citation adds value but could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculation tool with 5 well-documented parameters but no output schema and no annotations, the description is moderately complete. It covers the purpose and inputs adequately but lacks information about the return value format, error conditions, or behavioral constraints that would be helpful for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly with examples and enum values. The description lists the parameters but doesn't add meaningful semantic context beyond what's in the schema (e.g., explaining how parameters combine in the algorithm). Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Calcola il codice fiscale italiano') and the resource ('di una persona fisica'), with explicit input parameters listed. It distinguishes itself from sibling tools by focusing on Italian fiscal code calculation rather than tax calculations, crypto taxes, or other fiscal operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying the required inputs for calculating an Italian fiscal code, but doesn't provide explicit guidance on when to use this tool versus alternatives (e.g., when you need a fiscal code vs. tax calculations). No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calcola_crypto_tax: A

Computes cryptocurrency taxes for 2026: the 33% substitute tax on capital gains (26% for MiCA-compliant EUR stablecoins) and the 0.2% IVCA on the portfolio value at 31/12. The €2,000 exemption is abolished from 2026. Source: Art. 67 TUIR, L. 207/2024 art. 1 commi 24-29.

Parameters (JSON Schema)

plusvalenza (required): Net capital gain realized during the year, in euros. May be 0 or negative (a loss). E.g. 10000
portafoglio (required): Total value of the crypto portfolio at 31 December, in euros (IVCA base). E.g. 50000
tipo_cripto (optional): Type of cryptocurrency: ordinaria (BTC, ETH, altcoins) → 33% rate; stablecoin_eur (e.g. EURC, MiCA-compliant) → 26% rate.
minusvalenze_pregresse (optional): Uncompensated losses from previous years, in euros (default: 0). Can be carried forward for 4 years (Art. 68 co.9-bis TUIR).
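The rules in the description reduce to a small amount of arithmetic. A sketch under those stated rules follows; the function name and result fields are illustrative, not the tool's actual output schema.

```python
def crypto_tax_2026(gain, portfolio, stablecoin_eur=False, carried_losses=0.0):
    """Sketch of the 2026 crypto tax rules described above; not the server's code."""
    # Offset carried-forward losses first (Art. 68 co.9-bis TUIR allows 4 years)
    taxable = max(0.0, gain - carried_losses)
    rate = 0.26 if stablecoin_eur else 0.33
    substitute_tax = taxable * rate   # no €2,000 exemption from 2026
    ivca = portfolio * 0.002          # 0.2% on the portfolio value at 31/12
    return {"substitute_tax": round(substitute_tax, 2), "ivca": round(ivca, 2)}
```

Using the schema's example values (gain 10000, portfolio 50000) this yields a €3,300 substitute tax and a €100 IVCA.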
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: tax rates (33%/26%/0.2%), elimination of €2,000 exemption from 2026, and legal sources. However, it doesn't describe output format, error handling, or computational details beyond the tax rules.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: it immediately states the core purpose with all essential tax details in a single dense sentence, followed by legal citations. Every element serves a purpose with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculation tool with no annotations and no output schema, the description provides substantial context: tax rates, year, exemption changes, and legal references. It doesn't explain return values or error cases, but gives enough information for basic understanding of the tool's function and scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing detailed parameter documentation. The description adds context about tax year (2026) and legal framework, but doesn't provide additional parameter meaning beyond what's already in the schema descriptions. Baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: calculating cryptocurrency taxes for 2026 with specific tax rates (33% on capital gains, 26% for EUR stablecoins, 0.2% IVCA on portfolio value). It distinguishes from siblings by focusing on crypto taxes rather than other fiscal calculations like IRPEF, INPS, or codice fiscale.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (cryptocurrency tax calculations for 2026 with specific Italian tax rules). It doesn't explicitly mention when not to use it or name alternatives among siblings, but the specificity makes usage context evident.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calcola_forfettario: A

Computes the substitute tax (5% startup or 15% standard), estimated INPS contributions, and net income for the 2026 flat-rate (forfettario) regime. Access limit: €85,000 in revenues. Source: L. 190/2014 art. 1 commi 54-89, as amended by L. 208/2015 and L. 145/2018.

Parameters (JSON Schema)

ricavi (required): Gross annual revenues/fees in euros (maximum 85,000)
ateco_id (required): ATECO group for the profitability coefficient. E.g. professioni_tecniche, commercio, servizi_persona, costruzioni. Use the "lista_gruppi_ateco" tool to see all available groups.
tipo_inps (optional): INPS fund type (default: gestione_separata)
regime_startup (optional): Reduced 5% rate for a new business (first 5 years, default: false). Requires not having carried out a similar activity in the previous 3 years.
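The forfettario computation the description implies can be sketched roughly. Two assumptions are made here and flagged in the code: a 78% profitability coefficient (the value commonly cited for professional activities in Allegato 4 L.190/2014; the real value comes from lista_gruppi_ateco) and deducting INPS contributions before applying the substitute tax. The server's exact method may differ.

```python
def forfettario_2026(revenue, coefficient, startup=False, inps_rate=0.2607):
    """Rough forfettario sketch under stated assumptions; not the server's method."""
    if revenue > 85_000:
        raise ValueError("above the €85,000 access limit (L. 190/2014)")
    taxable = revenue * coefficient      # ATECO profitability coefficient (assumed input)
    inps = taxable * inps_rate           # e.g. gestione separata, 26.07%
    rate = 0.05 if startup else 0.15     # 5% startup rate, otherwise 15%
    tax = (taxable - inps) * rate        # assumption: INPS is deducted first
    return {"taxable": round(taxable, 2), "inps": round(inps, 2),
            "tax": round(tax, 2), "net": round(revenue - inps - tax, 2)}
```

For €50,000 of revenue at a 78% coefficient, taxable income is €39,000, estimated INPS about €10,167, and the 15% substitute tax about €4,325.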
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral constraints: the revenue limit (€85,000), tax rates (5% startup or 15% standard), and legal basis. However, it doesn't mention error conditions, calculation assumptions, or what happens if inputs exceed limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted words: it immediately states the calculation purpose, specifies key constraints, and cites legal sources. Every sentence adds essential context without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculation tool with no annotations and no output schema, the description provides good context about what's being calculated, constraints, and legal basis. However, it doesn't describe the output format or calculation methodology, which would be helpful given the absence of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description doesn't add parameter-specific information beyond what's already in the schema descriptions, though it does reinforce the revenue limit mentioned in the ricavi parameter schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific calculation being performed ('Calcola imposta sostitutiva, INPS stimato e netto'), identifies the target system ('regime forfettario 2026'), and distinguishes from siblings by focusing on a specific tax regime rather than general tax calculations like calcola_irpef or calcola_inps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage context including the revenue limit ('Limite accesso: €85.000 ricavi'), legal basis ('Fonte: L. 190/2014...'), and year specificity ('2026'). It also distinguishes from sibling tools by being specifically for the forfettario regime rather than other tax calculations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calcola_inps: B

Computes 2026 INPS contributions for artisans, merchants, and the separate scheme (gestione separata). Includes the 35% reduction for forfettario taxpayers (Art. 1 c.77 L.208/2015). Source: Circ. INPS n.14/2026 (artisans/merchants), n.8/2026 (gestione separata).

Parameters (JSON Schema)

regime (optional): Tax regime (default: ordinario)
reddito (required): Annual income/compensation in euros
tipo_attivita (required): Type of activity / INPS fund
tipo_collaboratore (optional): Collaborator type, gestione separata only. Determines the rate: autonomo_piva 26.07%, collaboratore_con_dis_coll 35.03%, collaboratore_senza_dis_coll 33.72%, pensionato 24%.
richiesta_riduzione (optional): Request the 35% reduction for forfettario artisans/merchants (default: false)
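The gestione separata portion is a straight percentage of income, using the rates the tipo_collaboratore schema lists. A minimal sketch (names are illustrative; income ceilings, minimum contributions, and the artisan/merchant fixed quotas are deliberately omitted):

```python
# Gestione separata rates as listed in the tipo_collaboratore schema above
GESTIONE_SEPARATA_RATES = {
    "autonomo_piva": 0.2607,
    "collaboratore_con_dis_coll": 0.3503,
    "collaboratore_senza_dis_coll": 0.3372,
    "pensionato": 0.24,
}

def inps_gestione_separata(income: float, worker_type: str = "autonomo_piva") -> float:
    """Contribution before any cap; ceilings and minimums are out of scope here."""
    return round(income * GESTIONE_SEPARATA_RATES[worker_type], 2)
```

For example, €40,000 of self-employed (autonomo_piva) income gives €10,428 of contributions at the 26.07% rate.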
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the 35% reduction for forfettari and cites specific legal sources, which adds useful context. However, it doesn't describe what the tool returns (e.g., calculation results, error conditions), whether it's a read-only calculation or has side effects, or any rate limits or authentication requirements. For a calculation tool with no annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that efficiently cover purpose and key features (35% reduction, legal sources). It's front-loaded with the main function. However, the inclusion of specific legal citations (Circ. INPS n.14/2026, n.8/2026) might be overly detailed for an AI agent's needs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters with 100% schema coverage but no annotations and no output schema, the description is moderately complete. It covers the tool's purpose and key feature (35% reduction) but lacks information about return values, error handling, or behavioral traits. For a calculation tool with multiple parameters, this leaves the agent without full guidance on what to expect from the tool's execution.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly with descriptions and enum values. The description adds marginal value by mentioning the 35% reduction for forfettari, which relates to the 'richiesta_riduzione' parameter, but doesn't provide additional syntax or format details beyond what the schema provides. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates INPS contributions for specific categories (artigiani, commercianti, gestione separata) and mentions the 35% reduction for forfettari. It distinguishes from siblings by focusing on INPS contributions rather than other fiscal calculations like IRPEF or crypto tax. However, it doesn't explicitly contrast with calcola_forfettario which might overlap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying the target categories (artigiani, commercianti, gestione separata) and mentioning the 35% reduction for forfettari. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like calcola_forfettario or calcola_irpef, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calcola_irpef: A

Computes 2026 IRPEF (gross and net) with deductions and an optional regional surtax. Brackets: 23% up to €28,000, 33% up to €50,000, 43% above. Source: Art. 11 TUIR (DPR 917/1986), L. 207/2024.

Parameters (JSON Schema)

reddito (required): Gross annual income in euros (e.g. 35000)
regione (optional): Region for the regional surtax calculation. E.g. lombardia, lazio, campania, sicilia. If omitted, only the national range is returned.
tipo_reddito (required): Income type: dipendente (employment or pension) or autonomo (VAT-registered under the ordinary regime)
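The bracket schedule in the description is a standard progressive computation. A sketch of the gross tax only, under those stated brackets (deductions and regional surtaxes, which the tool also handles, are out of scope):

```python
# 2026 IRPEF brackets from the description: 23% to €28,000, 33% to €50,000, 43% above
BRACKETS_2026 = [(28_000, 0.23), (50_000, 0.33), (float("inf"), 0.43)]

def irpef_gross(income: float) -> float:
    """Gross progressive IRPEF; each slice of income is taxed at its bracket rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS_2026:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return round(tax, 2)
```

For the schema's example income of €35,000: 23% of the first €28,000 plus 33% of the remaining €7,000, i.e. €8,750 gross.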
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool calculates (gross/net IRPEF with deductions and optional regional surcharge) and provides tax bracket details and legal sources. However, it doesn't mention error handling, performance characteristics, or what happens when optional parameters are omitted. The behavioral information is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the tool's purpose and scope, the second provides tax bracket details and legal sources. Every element earns its place with no wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculation tool with no annotations and no output schema, the description provides good context about what it calculates, tax brackets, and legal sources. However, it doesn't describe the return format or what specific values will be returned (e.g., breakdown of calculations). Given the complexity of tax calculations, some additional detail about output structure would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description mentions the optional regional surcharge parameter ('addizionale regionale opzionale') which aligns with the 'regione' parameter in the schema, but doesn't add significant meaning beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb ('calcola' - calculate) and resource (IRPEF 2026 tax), including what it calculates (gross and net amounts with deductions and optional regional surcharge). It distinguishes from siblings by focusing specifically on IRPEF calculation rather than other tax types like forfettario, INPS, or crypto tax.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (calculating 2026 IRPEF with specific tax brackets and optional regional surcharge). It doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the context implies this is for IRPEF specifically versus other tax calculations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lista_gruppi_ateco: A

Returns all available ATECO groups with their profitability coefficients for the forfettario regime. Source: Allegato 4 L.190/2014.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the data source ('Allegato 4 L.190/2014'), which adds useful context, but does not mention behavioral traits like rate limits, error handling, or response format. The description is informative but lacks operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose ('Restituisce tutti i gruppi ATECO...') and adds necessary context (coefficients, regime, source) without waste. Every part earns its place, making it appropriately sized and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is complete enough for a data retrieval tool. It specifies what is returned and the source, but lacks details on output format or potential limitations, which is a minor gap given the low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description adds value by specifying what data is returned (ATECO groups with profitability coefficients) and the source, which compensates for the lack of parameters. Baseline is 4 for 0 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Restituisce' (returns) and the resource 'tutti i gruppi ATECO disponibili' (all available ATECO groups), specifying they include profitability coefficients for the forfettario regime. It distinguishes from siblings like 'calcola_forfettario' by focusing on data retrieval rather than calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing ATECO groups with profitability coefficients for the forfettario regime, but does not explicitly state when to use this tool versus alternatives like 'calcola_forfettario' or provide exclusions. The context is clear but lacks explicit guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

novita_fiscali: A

Returns the latest fiscal news and updates from FiscoOggi.it, refreshed weekly via an RSS scraper. NOTE: this is a paid service ($0.005/call via x402).

Parameters (JSON Schema)

limit (optional): Maximum number of articles to return (1-50, default: 10)
categoria (optional): Category filter (e.g. prassi, novità, comunicato); optional
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the tool scrapes RSS, updates weekly, and is a paid service ($0.005 per call via x402). This adds valuable context beyond basic functionality, though it doesn't cover aspects like error handling or response format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded: it states the core purpose in the first part, then adds important behavioral details (update frequency, scraping method, cost) in a clear note. Every sentence earns its place with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is somewhat complete but has gaps. It explains what the tool does and key behaviors (cost, update frequency), but lacks details on output format or error cases, which could be important for an AI agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (limit and categoria) fully. The description does not add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 without compensating further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Restituisce le ultime notizie e novità fiscali da FiscoOggi.it' (Returns the latest fiscal news and updates from FiscoOggi.it). It specifies the source and type of content, but does not explicitly differentiate from sibling tools like 'scadenzario_fiscale' or 'risoluzioni_circolari' which might also provide fiscal information, so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for getting updated fiscal news weekly, but does not explicitly state when to use this tool versus alternatives like 'scadenzario_fiscale' (which might handle deadlines) or 'risoluzioni_circolari' (which might handle resolutions/circulars). It mentions it's a paid service, which provides some context but not clear alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

risoluzioni_circolari: A

Returns resolutions and circulars from the Agenzia delle Entrate by year, with optional filtering by type and free-text search. Updated weekly via scraper. Source: agenziaentrate.gov.it/portale/risoluzioni. NOTE: this is a paid service ($0.01/call via x402).

Parameters (JSON Schema)

q (optional): Text to search for in the document subject (e.g. iva, bonus, cessione)
anno (optional): Reference year (default: current year)
tipo (optional): Document type (default: tutti)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: the data source (agenziaentrate.gov.it), update frequency (weekly via scraper), and critical cost information ($0.01 per call via x402). It doesn't mention rate limits, authentication needs, or error handling, but covers important operational aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: purpose statement, operational details, and critical cost information. Every sentence adds essential value with zero wasted words, and the most important information (what the tool does) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with 3 parameters and no output schema, the description provides good completeness: clear purpose, filtering capabilities, data source, update mechanism, and critical cost information. The main gap is lack of information about return format/structure, but given the tool's relative simplicity and the presence of good parameter documentation, this is a minor omission.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions filtering by year and type plus free text search, which aligns with but doesn't significantly expand upon the schema's parameter documentation. It provides context about the tool's purpose but minimal additional parameter semantics beyond what the schema already describes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Restituisce' - returns) and resources ('risoluzioni e circolari dell'Agenzia delle Entrate'), and distinguishes it from siblings by specifying it deals with tax agency resolutions/circulars rather than tax calculations, code generation, or other fiscal tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (to retrieve tax agency documents with filtering by year, type, and free text search), but doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools for different document types or sources.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scadenzario_fiscale (A)

Restituisce le scadenze fiscali del mese richiesto, aggiornate automaticamente dall'Agenzia delle Entrate ogni 6 ore tramite scraper. Fonte: agenziaentrate.gov.it/scadenzario. NOTA: Questo è un servizio a pagamento ($0.005/chiamata via x402). (Returns the fiscal deadlines for the requested month, updated automatically from the Agenzia delle Entrate every 6 hours via scraper. Source: agenziaentrate.gov.it/scadenzario. NOTE: this is a paid service, $0.005/call via x402.)

Parameters (JSON Schema)

- anno (optional): year (default: current year)
- mese (optional): month (1-12, default: current month)
- tipo (optional): filter by deadline type
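Since `anno` and `mese` default to the current year and month on the server, an agent can omit them entirely; when it does pass them, `mese` must fall in 1-12. A hedged sketch of building the `tools/call` payload with explicit client-side values (the argument values are illustrative):

```python
import json
from datetime import date

# Compute the defaults explicitly instead of relying on the server.
today = date.today()
arguments = {
    "anno": today.year,
    "mese": today.month,  # must be in range 1-12 per the schema
    # "tipo" omitted: no deadline-type filter applied
}
assert 1 <= arguments["mese"] <= 12

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "scadenzario_fiscale", "arguments": arguments},
}

print(json.dumps(request))
```

Passing the values explicitly makes the call reproducible in logs, which matters for a metered ($0.005/call) endpoint.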
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond the input schema by specifying the data source (agenziaentrate.gov.it), update frequency (every 6 hours via scraper), and cost information ($0.005 per call via x402). This covers important operational aspects like freshness and pricing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core functionality first. The two sentences efficiently convey purpose, data source, update mechanism, and cost information without unnecessary elaboration. Every sentence adds value, though the structure could be slightly more polished.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 optional parameters, no output schema, no annotations), the description provides good contextual coverage. It explains the tool's purpose, data source, update mechanism, and cost structure. The main gap is the lack of output format description, but overall it's reasonably complete for a read-only data retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions filtering by month but doesn't elaborate on parameter usage or interactions. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Restituisce le scadenze fiscali del mese richiesto' (Returns fiscal deadlines for the requested month). It specifies the resource (fiscal deadlines), the scope (monthly), and distinguishes it from siblings by focusing on deadline retrieval rather than calculations or lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning the data source (Agenzia delle Entrate) and update frequency (every 6 hours), but does not explicitly state when to use this tool versus alternatives like 'novita_fiscali' or 'risoluzioni_circolari'. It provides some operational context but lacks explicit guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
