
licinexus-mcp

Official
by Licinexus

aggregate_licitacoes_por_periodo

Aggregate Brazilian public procurement bid counts and values over time, revealing monthly trends without paginating large result sets.

Instructions

Aggregate Brazilian public procurement bid counts (and optional value sums) over a time series — answers "how did volumes evolve month by month" without paginating tens of thousands of records.

Each bucket is computed by issuing a single PNCP list call per (bucket × modality) and reading totalRegistros from the response. With default modalities (Pregão Eletrônico + Dispensa + Inexigibilidade) and granularidade=mes, a 12-month range = 36 calls.
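The bucket × modality fan-out described above can be sketched as follows. This is a minimal illustration of the call-count arithmetic; `month_buckets` is a hypothetical helper, not the server's actual code:

```python
from datetime import date, timedelta

def month_buckets(start: date, end: date) -> list[tuple[date, date]]:
    """Enumerate (bucket_start, bucket_end) pairs, one per calendar month."""
    buckets = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        ny, nm = (y + 1, 1) if m == 12 else (y, m + 1)
        first = date(y, m, 1)
        last = date(ny, nm, 1) - timedelta(days=1)  # last day of this month
        # Clamp the first and last buckets to the requested range.
        buckets.append((max(first, start), min(last, end)))
        y, m = ny, nm
    return buckets

# Count-only mode issues one PNCP list call per (bucket x modality) pair.
buckets = month_buckets(date(2024, 1, 1), date(2024, 12, 31))
modalities = [6, 8, 9]  # defaults: Pregão Eletrônico, Dispensa, Inexigibilidade
print(len(buckets) * len(modalities))  # 36 calls for a 12-month range
```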

When esfera filter or value metrics are requested, the tool paginates the bucket internally (up to 50 pages = 2500 records per bucket) and aggregates client-side. Be conservative with date range × granularity in that mode.
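A rough sketch of that paginated fallback, with a hypothetical `fetch_page` callable standing in for the real PNCP client and illustrative field names (`esfera`, `valorTotalEstimado` are assumptions, not the exact PNCP schema):

```python
MAX_PAGES = 50  # internal cap: 50 pages x 50 records = 2,500 records per bucket
PAGE_SIZE = 50

def aggregate_bucket(fetch_page, esfera=None):
    """Paginate one bucket and aggregate client-side.

    fetch_page(page) stands in for a single PNCP list call and returns a
    list of record dicts (an empty list once the bucket is exhausted).
    """
    count = 0
    valor_estimado = 0.0
    for page in range(1, MAX_PAGES + 1):
        records = fetch_page(page)
        if not records:
            break
        for rec in records:
            if esfera is not None and rec.get("esfera") != esfera:
                continue  # sphere is filtered client-side, not by the API
            count += 1
            valor_estimado += rec.get("valorTotalEstimado") or 0.0
    return {"count": count, "valorEstimadoTotal": valor_estimado}
```

With `metricas=['count']` and no `esfera`, the tool skips this loop entirely and reads `totalRegistros` from a single page.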

Maximum total date range: 1830 days (~5 years). Each bucket call respects the PNCP 365-day-per-call limit.
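Those two limits can be checked up front. A sketch under stated assumptions (the `YYYYMMDD` parsing and constant names are illustrative, not the server's code):

```python
from datetime import datetime

MAX_RANGE_DAYS = 1830  # total request window, ~5 years
MAX_CALL_DAYS = 365    # PNCP limit per individual list call

def validate_range(data_inicial: str, data_final: str) -> int:
    """Return the span in days, or raise if the range is unusable."""
    start = datetime.strptime(data_inicial, "%Y%m%d").date()
    end = datetime.strptime(data_final, "%Y%m%d").date()
    span = (end - start).days
    if span < 0:
        raise ValueError("dataFinal precedes dataInicial")
    if span > MAX_RANGE_DAYS:
        raise ValueError(f"{span}-day range exceeds the {MAX_RANGE_DAYS}-day maximum")
    return span
```

Bucketing by month keeps each individual call well within `MAX_CALL_DAYS`, so only the total window needs this check.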

Modality codes:
1 = Leilão - Eletrônico
2 = Diálogo Competitivo
3 = Concurso
4 = Concorrência - Eletrônica
5 = Concorrência - Presencial
6 = Pregão - Eletrônico
7 = Pregão - Presencial
8 = Dispensa de Licitação
9 = Inexigibilidade
10 = Manifestação de Interesse
11 = Pré-qualificação
12 = Credenciamento
13 = Leilão - Presencial

Default modalities: [6, 8, 9] (Pregão Eletrônico, Dispensa, Inexigibilidade).

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| dataInicial | Yes | Start date, `YYYYMMDD`. | |
| dataFinal | Yes | End date, `YYYYMMDD`. | |
| granularidade | No | Time bucket size for the series. | mes |
| modalidades | No | List of modality codes. | [6, 8, 9] |
| uf | No | Two-letter state code. | |
| codigoMunicipioIbge | No | IBGE municipality code. | |
| cnpjOrgao | No | Procuring agency CNPJ. | |
| esfera | No | Filter by sphere ('federal', 'estadual', 'municipal', 'distrital'). Forces paginated aggregation; be conservative with range × granularity. | |
| metricas | No | Metrics to include in each bucket. 'count' is free (single page hit); 'valorEstimadoTotal' and 'valorHomologadoTotal' force paginated aggregation. | |
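Putting the parameters together, a hypothetical count-only invocation (argument values are illustrative, not from the source):

```python
# Monthly bid counts for São Paulo state across 2024, default modalities.
# Counts only, so the tool reads totalRegistros and never paginates.
arguments = {
    "dataInicial": "20240101",
    "dataFinal": "20241231",
    "granularidade": "mes",
    "uf": "SP",
    "metricas": ["count"],
}
```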
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description fully discloses behavior: internal mechanics (bucket computation via PNCP list calls, pagination up to 50 pages per bucket), performance implications (36 calls for 12-month default, more when paginating), and constraints (1830-day max, 365-day per call limit). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose, then behavioral details. Every sentence adds value; no filler. While somewhat long, the density of useful information justifies the length. Efficiently structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, so the description should hint at the return structure. It mentions 'counts (and optional value sums) over a time series', which conveys the general shape but not explicit fields. Given 9 parameters and no output schema, this is still quite complete; the only gap is the exact output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage (baseline 3). Description adds meaning: lists modality codes with names, explains default modalities, warns that esfera and non-count metrics force pagination, and adds context about date range limits. This exceeds schema-only info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool aggregates time series counts and optional value sums, answering 'how did volumes evolve month by month' without paginating thousands. It distinguishes from sibling tools like search_licitacoes (which retrieves individual records) and compare_periodos (period comparison). The verb 'aggregate' and resource 'licitacoes' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance: use for time series without heavy pagination; be conservative with date range × granularity when esfera or value metrics force pagination. Implicitly suggests alternatives for granular record access (search_licitacoes). Includes maximum date range (1830 days) and internal call limits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
