Pasal.id — Indonesian Law

Ownership verified

Server Details

Search-first MCP for Indonesian laws: Pasal text, status, structure, provenance, and reports.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Aturio/pasal-id-mcp
GitHub Stars: 0

Tool Descriptions — Grade: B

Average 3.5/5 across 11 of 11 tools scored. Lowest: 2.8/5.

Server Coherence — Grade: A

Disambiguation: 5/5

Each tool targets a distinct operation: metadata retrieval, specific part access, validity checking, structural hierarchy, pasal fetching, listing, health check, multi-pasal reading, issue reporting, full-text search, and within-law search. No significant functional overlap.

Naming Consistency: 4/5

Most tools follow a consistent verb_noun pattern (e.g., get_law_overview, list_laws). Minor deviations: 'ping' is a standalone health check, and 'get_pasal' omits the 'law' prefix, but the pattern is still clear.

Tool Count: 5/5

With 11 tools, the server covers browsing, searching, reading, and status checking for Indonesian laws without being overwhelming. The count is well-scoped for the domain.

Completeness: 5/5

The tool set covers all essential operations for a legal reference database: listing, searching (global and within law), reading individual articles and sections, accessing structure and metadata, checking status, and reporting data issues. No obvious gaps for read-only access.

Available Tools

11 tools
get_law_overview — Ringkasan Peraturan (Regulation Summary) — Grade: B
Read-only · Idempotent

Tampilkan metadata kanonik, provenance, sumber, dan struktur satu peraturan Indonesia. (English: Display the canonical metadata, provenance, sources, and structure of a single Indonesian regulation.)

Parameters (JSON Schema):
- year (optional)
- law_id (optional)
- law_type (optional)
- law_number (optional)

Output Schema (JSON Schema): no output parameters.
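Since none of the four identifier parameters is documented, a caller has to guess how they combine. A minimal sketch of an MCP `tools/call` payload, assuming (hypothetically) that a `law_id` slug alone identifies a regulation — the ID format shown is invented for illustration:

```python
import json

# Hypothetical MCP "tools/call" payload for get_law_overview. The schema
# publishes no parameter descriptions, so the law_id slug format and the
# idea that law_id alone suffices are assumptions, not documented behavior.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_law_overview",
        "arguments": {"law_id": "uu-13-2003"},  # assumed slug format
    },
}
body = json.dumps(request)
```

The alternative identification route — a (law_type, law_number, year) triple — is equally plausible from the parameter names, but nothing on the page confirms either.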

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint=false. The description adds specific outputs (canonical metadata, provenance, sources, structure) without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence (10 words) that efficiently conveys the tool's purpose with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 optional parameters and numerous sibling tools, the description lacks guidance on how to uniquely identify a regulation (e.g., required parameter combinations). The output schema is present but parameter usage remains unclear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but provides no parameter explanations. The four parameters are entirely undocumented, leaving the agent to guess their roles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Tampilkan' and resource 'metadata kanonik, provenance, sumber, dan struktur satu peraturan Indonesia', clearly differentiating from sibling tools like get_law_structure (structure only) or list_laws (list).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives no guidance on when to use this tool versus alternatives such as get_law_structure or list_laws, nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_law_part — Ambil Bagian Peraturan (Get Regulation Part) — Grade: B
Read-only · Idempotent

Ambil bagian tertentu dari satu peraturan, seperti Bab II, Pembukaan, Menimbang, Mengingat, Memutuskan, Penutup, Lampiran, atau node_id dari struktur. (English: Retrieve a specific part of a single regulation, such as Chapter II, the preamble, the Menimbang/Mengingat/Memutuskan clauses, the closing, an annex, or a node_id from the structure.)

Parameters (JSON Schema):
- cursor (optional)
- law_id (required)
- number (optional)
- node_id (optional)
- max_chars (optional)
- node_type (optional)
- part_type (optional)
- include_children (optional)

Output Schema (JSON Schema): no output parameters.
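With only `law_id` required and the other seven parameters undocumented, a plausible but unverified argument set for fetching one chapter might look like this — every value format and semantic below is an assumption; only the parameter names come from the schema:

```python
# Hypothetical argument set for get_law_part: fetch "Bab II" of one
# regulation. The part_type/number semantics, the Roman-numeral string,
# and the law_id slug format are all guesses, not documented behavior.
arguments = {
    "law_id": "uu-13-2003",    # required; assumed slug format
    "part_type": "bab",        # assumed: which kind of part to fetch
    "number": "II",            # assumed: part number as a Roman-numeral string
    "include_children": True,  # assumed: include nested structural nodes
    "max_chars": 10000,        # assumed: truncate long output
}
```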

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, destructiveHint. Description adds no extra behavioral context (e.g., output format, pagination). It does not contradict annotations, so score is baseline 3.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded with the purpose. The use of Indonesian may reduce clarity alongside the English sibling tool names, but the description is not verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters and an output schema, the description is too brief. It lacks guidance on cursor pagination, max_chars truncation, include_children behavior, and how to specify node_id vs part_type. The output schema exists but description does not reference it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, yet the description only hints at part_type values (e.g., 'Bab II') and fails to explain the other parameters: cursor, number, node_id, max_chars, node_type, and include_children. It does not compensate for the missing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a specific part of a regulation, listing examples like Bab II, Pembukaan, etc. It distinguishes from siblings (e.g., get_law_structure for entire structure, get_pasal for articles) by focusing on parts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool versus alternatives. The examples hint at usage, but there are no when-not-to-use notes or comparisons to sibling tools. The description is in Indonesian while sibling names are in English, adding confusion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_law_status — Cek Status Peraturan (Check Regulation Status) — Grade: A
Read-only · Idempotent

Cek apakah peraturan Indonesia masih berlaku, diubah, atau dicabut. (English: Check whether an Indonesian regulation is still in force, amended, or revoked.)

Parameters (JSON Schema):
- year (optional)
- law_id (optional)
- law_type (optional)
- law_number (optional)

Output Schema (JSON Schema): no output parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint false, so safety is covered. The description adds the specific behavioral result (valid/amended/revoked). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One short sentence (6 words in Indonesian) that front-loads the purpose. No unnecessary words, perfectly scoped.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the basic purpose, but given 4 optional parameters with no guidance on combinations or prerequisites, the agent lacks context for correct invocation. Output schema likely helps, but parameter usage remains unclear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description provides no parameter details (e.g., what law_type values are valid, how law_id differs from law_number). The agent must infer parameter meanings from names alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool checks whether an Indonesian regulation is valid, amended, or revoked. It uses a specific verb ('cek') and resource ('status peraturan') and is distinct from siblings like get_law_overview, which provides general information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The usage is implied by the description (checking law status), but there is no explicit guidance on when to use it versus alternatives like get_law_overview or search_laws, and no exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_law_structure — Lihat Struktur Peraturan (View Regulation Structure) — Grade: A
Read-only · Idempotent

Lihat hierarki Bab, Bagian, Pasal, Pembukaan, Penutup, dan Lampiran satu peraturan tanpa memuat seluruh teks. (English: View a regulation's hierarchy of chapters, sections, articles, preamble, closing, and annexes without loading the full text.)

Parameters (JSON Schema):
- year (optional)
- depth (optional)
- law_id (optional)
- law_type (optional)
- law_number (optional)
- include_special_parts (optional)

Output Schema (JSON Schema): no output parameters.
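The parameter names suggest a second way to identify a regulation — a (law_type, law_number, year) triple — plus controls over how much hierarchy comes back. A sketch under those assumptions; the type code, depth semantics, and include_special_parts meaning are all hypothetical:

```python
# Hypothetical call for get_law_structure identifying the law by the
# (law_type, law_number, year) triple instead of law_id. The "UU" code,
# the depth-limiting behavior, and include_special_parts are assumptions.
arguments = {
    "law_type": "UU",               # assumed: Undang-Undang (statute) code
    "law_number": 13,
    "year": 2003,
    "depth": 2,                     # assumed: levels of hierarchy to return
    "include_special_parts": True,  # assumed: include Pembukaan/Penutup/Lampiran
}
```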

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, destructiveHint=false; description adds that it returns structure without full text, which is useful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence directly states purpose and scope; no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While purpose is clear and output schema exists, the omission of parameter descriptions makes the tool incomplete for correct invocation, especially with 6 optional parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% and description provides no explanation of any parameter (e.g., 'depth', 'law_id'). Agent cannot infer how to specify the target regulation or control output depth.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'lihat' and resource 'hierarki Bab, Bagian, Pasal...' clearly distinguishing from sibling tools like 'get_pasal' (specific article) or 'get_law_overview' (likely summary).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus alternatives like 'get_law_part' or 'get_pasal'; context from sibling names implies purpose but not direct differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pasal — Ambil Pasal (Get Article) — Grade: B
Read-only · Idempotent

Ambil teks resmi Pasal tertentu dari peraturan Indonesia. (English: Retrieve the official text of a specific article from an Indonesian regulation.)

Parameters (JSON Schema):
- year (optional)
- law_id (optional)
- law_type (optional)
- law_number (optional)
- pasal_number (optional)

Output Schema (JSON Schema): no output parameters.
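All five parameters are optional, which hints (but does not document) that `law_id` and the (law_type, law_number, year) triple are interchangeable ways to name the law. A hypothetical call using the triple; the value types, including pasal_number as a string, are guesses:

```python
# Hypothetical get_pasal call. The schema marks everything optional, so the
# assumed rule "triple OR law_id identifies the law" is an inference, and
# pasal_number's string type is also a guess.
arguments = {
    "law_type": "UU",
    "law_number": 13,
    "year": 2003,
    "pasal_number": "156",  # assumed: article number as a string
}
```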

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate a safe, read-only operation (readOnlyHint, idempotentHint, destructiveHint). The description adds value by specifying the tool returns 'official text' of a specific article, which is behavioral context beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the core action. It may be too brief given the tool's complexity (5 parameters), but it remains concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema (so return values don't need description), the text is insufficient for a tool with 5 undocumented parameters. It does not explain how to construct queries or handle optional parameters, leaving significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not explain any of the five parameters (year, law_id, law_type, law_number, pasal_number). With 0% schema description coverage, the description fails to compensate, leaving the agent without guidance on how to specify which law or article.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves official text of a specific article from Indonesian regulation. It uses a specific verb ('Ambil') and resource ('Pasal tertentu dari peraturan Indonesia'), distinguishing it from sibling tools like get_law_structure or get_law_overview.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, exclusions, or when not to use it. The single sentence lacks any usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_laws — Daftar Peraturan (List Regulations) — Grade: C
Read-only · Idempotent

Jelajahi daftar peraturan Indonesia dengan filter jenis, tahun, status, dan judul. (English: Browse Indonesian regulations with filters for type, year, status, and title.)

Parameters (JSON Schema):
- page (optional)
- year (optional)
- search (optional)
- status (optional)
- per_page (optional)
- issuing_body (optional)
- regulation_type (optional)

Output Schema (JSON Schema): no output parameters.
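A plausible paginated browse, reading parameter names as filters plus page/per_page pagination. As the review notes, none of this is documented, so the status value, the type code, and the pagination defaults below are assumptions:

```python
# Hypothetical list_laws browse: 2023 statutes whose title mentions
# employment, fetched one page at a time. "berlaku" ("in force") as a
# status value and "UU" as a type code are guesses, not documented enums.
arguments = {
    "regulation_type": "UU",      # assumed type filter
    "year": 2023,
    "status": "berlaku",          # assumed: "in force"
    "search": "ketenagakerjaan",  # assumed: title keyword filter ("employment")
    "page": 1,
    "per_page": 20,
}
```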

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations include readOnlyHint, idempotentHint, openWorldHint, destructiveHint=false, which already disclose safety. The description adds no extra behavioral context (e.g., pagination, data freshness). With annotations present, a score of 3 is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded. However, it could be slightly expanded to include more details without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 7 parameters and an output schema, the description is too minimal. It does not mention pagination, default values, or the nature of results, making it incomplete for an agent to understand the full behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description lists only four filters ('jenis, tahun, status, dan judul') but the schema has seven parameters including page, per_page, issuing_body. With 0% schema description coverage, the description fails to fully compensate, leaving many parameters unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists Indonesian regulations with filters, using the verb 'Jelajahi' (explore). However, it does not differentiate from the sibling tool 'search_laws' which likely provides search functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool over alternatives like 'search_laws'. Users are not told whether to use this for browsing vs. search_laws for text search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ping — Ping — Grade: A
Read-only · Idempotent

Cek kesehatan untuk client yang sudah terautentikasi. (English: Health check for authenticated clients.)

Parameters (JSON Schema): none.

Output Schema

- result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnly, idempotent, and non-destructive traits. The description adds authentication requirement but doesn't disclose additional behavioral details beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, concise sentence in Indonesian. No wasted words; front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, output schema present), the description is sufficient. It could mention the return value, but the output schema likely covers that.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. Baseline for zero parameters is 4, and description adds no unnecessary parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states this is a health check for authenticated clients, using a specific verb ('cek kesehatan') and resource. It distinguishes from siblings which are all law-related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. While siblings are different domains, the description doesn't advise using this for verifying connectivity before other calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

read_law_section — Baca Bagian Peraturan (Read Regulation Section) — Grade: A
Read-only · Idempotent

Baca beberapa Pasal sekaligus dari satu peraturan dengan cross-ref terselesaikan, batas chars, dan kursor. Cocok untuk skenario membaca rangkaian Pasal (pasal_numbers / pasal_range / bab / whole) dalam sekali panggilan. (English: Read several articles at once from a single regulation with cross-references resolved, a character limit, and a cursor. Suited to reading a series of articles (pasal_numbers / pasal_range / bab / whole) in a single call.)

Parameters (JSON Schema):
- scope (required)
- cursor (optional)
- format (optional, default: structured)
- law_id (required)
- include (optional)
- max_chars (optional)

Output Schema (JSON Schema): no output parameters.
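The description names scope variants (pasal_numbers / pasal_range / bab / whole) but not the shape of the `scope` object. A sketch of reading Pasal 1–10 in one call; the nested scope structure is a pure guess from those variant names, and the max_chars cap is illustrative:

```python
# Hypothetical read_law_section call reading Pasal 1-10 in one shot. The
# nested shape of "scope" is guessed from the variant names in the
# description (pasal_numbers / pasal_range / bab / whole); cursor is
# omitted on the assumption that it is only needed for later pages.
arguments = {
    "law_id": "uu-13-2003",                           # required; assumed slug
    "scope": {"pasal_range": {"from": 1, "to": 10}},  # assumed shape
    "format": "structured",                           # schema default
    "max_chars": 20000,                               # assumed truncation cap
}
```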

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds behavioral details such as cross-reference resolution, character limit ('batas chars'), and cursor-based pagination, going beyond annotations without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the core function and then providing use case context without any redundant or verbose phrasing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 params, nested objects, output schema), the description covers the main features (cross-ref, cursor, char limit, scope types) and is consistent with annotations. It does not address error cases or prerequisites, but output schema handles return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description partially compensates by mentioning cursor, max_chars, and scope types (pasal_numbers, bab, etc.). However, it does not explain law_id (required), format, or the include parameter, leaving gaps for a tool with 6 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool reads multiple articles ('Pasal') at once from a single regulation, with resolved cross-references and cursor support, distinguishing it from sibling tools like get_pasal (likely single article) and get_law_overview.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates it is suitable for reading a series of articles ('rangkaian Pasal'), providing context, but does not explicitly exclude alternatives or state when not to use it, though the sibling list implies differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_issue — Laporkan Masalah Data (Report Data Issue) — Grade: A

Laporkan masalah data Pasal.id dari alur MCP, seperti OCR salah, peraturan hilang, Pasal hilang, tautan rusak, atau konten usang. (English: Report a Pasal.id data issue from an MCP flow, such as incorrect OCR, a missing regulation, a missing article, a broken link, or outdated content.)

Parameters (JSON Schema):
- year (optional)
- title (required)
- law_id (optional)
- node_id (optional)
- law_type (optional)
- law_number (optional)
- description (optional)
- report_type (required)
- pasal_number (optional)
- contact_email (optional)
- reference_url (optional)
- current_content (optional)
- suggested_content (optional)

Output Schema (JSON Schema): no output parameters.
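Only `title` and `report_type` are required among the 13 parameters. A hypothetical payload flagging an OCR error; the `report_type` enum value and the role of each optional field are assumptions, since the schema documents none of them:

```python
# Hypothetical report_issue payload flagging an OCR error. "ocr_error" as a
# report_type value is a guess at an undocumented enum; the optional fields
# are filled in the way their names suggest, which is also unverified.
arguments = {
    "report_type": "ocr_error",            # assumed enum value
    "title": "Garbled text in Pasal 5",    # required
    "law_id": "uu-13-2003",                # assumed slug format
    "pasal_number": "5",
    "current_content": "...",              # assumed: what the site shows now
    "suggested_content": "...",            # assumed: proposed correction
    "contact_email": "user@example.com",
}
```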

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=false, destructiveHint=false, etc. The description adds that this is a reporting action, which aligns with readOnlyHint=false, but it does not disclose additional behavioral traits (e.g., whether reports are public, require authentication, or have rate limits). With annotations present, the bar is lower, and the description provides minimal extra value beyond stating the purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the purpose with concrete examples. It is front-loaded and contains no filler. However, it could slightly expand on parameter usage without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema which reduces need for return value documentation, the tool has 13 parameters with 0% schema description coverage and minimal narrative guidance. The description does not cover parameter usage, prerequisites, or error conditions, leaving the agent under-informed for a tool of this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for the 13 parameters. However, it only lists example report types ("OCR salah, peraturan hilang, Pasal hilang") without explaining individual fields like year, law_id, node_id, etc. The agent would lack guidance on how to populate optional parameters correctly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is for reporting data issues ("Laporkan masalah data") with specific examples like OCR errors, missing regulations, missing pasal, broken links, or outdated content. The verb 'Laporkan' is specific and the resource 'masalah data Pasal.id' is explicit. Sibling tools are all retrieval-oriented, so this distinguishes itself well.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use during MCP flow ("dari alur MCP") but does not explicitly state when to avoid or mention alternatives. The context is clear given sibling tools are for reading data, so users can infer this is for reporting problems. A score of 4 reflects clear context without exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_laws — Cari Peraturan (Search Regulations) — Grade: C
Read-only · Idempotent

Cari full-text di peraturan perundang-undangan Indonesia. (English: Full-text search across Indonesian laws and regulations.)

Parameters (JSON Schema):
- limit (optional)
- query (required)
- year_to (optional)
- language (optional, default: id)
- year_from (optional)
- regulation_type (optional)
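A hypothetical year-range query built from those parameter names. Only `query` is required; the type code and the meaning of `limit` (assumed: maximum result count) are guesses:

```python
# Hypothetical search_laws query with a year-range filter. Parameter names
# come from the schema above; the "UU" type code and the reading of "limit"
# as a result-count cap are assumptions.
arguments = {
    "query": "perlindungan data pribadi",  # "personal data protection"
    "year_from": 2016,
    "year_to": 2024,
    "regulation_type": "UU",               # assumed type code
    "language": "id",                      # schema default
    "limit": 10,                           # assumed: max results
}
```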

Output Schema

- result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the description's 'Cari full-text' is consistent but adds nothing beyond annotations about behavioral traits like pagination or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is too minimal—just one short phrase. It is underspecified rather than concise, lacking structured details about the search functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given six parameters and a defined output schema, the description does not explain how to use filters such as the year range or regulation type, making it incomplete for a correct first invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%: none of the parameters carries a description in the schema. The tool description likewise provides no explanation of 'limit', 'year_from', 'year_to', 'language', or 'regulation_type', leaving the agent without semantic guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
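
To illustrate what nonzero schema coverage could look like, here is a hypothetical inputSchema for search_laws, expressed as a Python dict. The parameter names and the "id" default come from the table above; every description string is an assumption for illustration, not the server's actual text.

```python
# Hypothetical JSON Schema for search_laws with per-parameter
# descriptions. Description strings are illustrative assumptions.
SEARCH_LAWS_INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "Full-text search terms, e.g. 'perlindungan data pribadi'.",
        },
        "limit": {
            "type": "integer",
            "description": "Maximum number of results to return.",
        },
        "year_from": {
            "type": "integer",
            "description": "Only match regulations enacted in or after this year.",
        },
        "year_to": {
            "type": "integer",
            "description": "Only match regulations enacted in or before this year.",
        },
        "language": {
            "type": "string",
            "default": "id",
            "description": "Result language; defaults to Indonesian ('id').",
        },
        "regulation_type": {
            "type": "string",
            "description": "Filter by regulation type, e.g. 'UU' or 'PP'.",
        },
    },
    "required": ["query"],
}

# Description coverage: fraction of parameters carrying a description.
props = SEARCH_LAWS_INPUT_SCHEMA["properties"]
coverage = sum("description" in p for p in props.values()) / len(props)
print(coverage)  # 1.0 here, versus the 0% measured on the published schema
```

With a schema like this, an agent can choose filters without consulting external documentation; the coverage calculation mirrors the 0% figure cited in the review.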

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it searches full text in Indonesian regulations ('Cari full-text di peraturan perundang-undangan Indonesia'), distinguishing it from siblings such as 'search_within_law', which searches within a specific law, and 'list_laws', which lists laws.

Usage Guidelines: 2/5

No guidance is given on when to use this tool versus alternatives such as 'search_within_law' or 'list_laws'. The description mentions no exclusions, prerequisites, or typical use cases.

search_within_law (Cari Dalam Peraturan): grade B
Annotations: Read-only, Idempotent

Description: "Cari istilah atau frasa di dalam satu peraturan yang sudah diketahui." (Search for a term or phrase within a single, already-known regulation.)

Parameters (JSON Schema):

  Name        Required  Default
  year        no        -
  limit       no        -
  query       yes       -
  law_id      no        -
  law_type    no        -
  law_number  no        -

(No parameter carries a description in the schema.)

Output schema: no output parameters.

Behavior: 3/5

Annotations already declare readOnlyHint, openWorldHint, and idempotentHint as true and destructiveHint as false, so the safety profile is clear. The description adds no extra behavioral context, such as pagination via the 'limit' parameter or how the target law must be identified. With annotations present, a 3 is appropriate.

Conciseness: 4/5

A single concise sentence with no wasted words. However, it could be more informative without sacrificing brevity, given that the tool takes six parameters.

Completeness: 2/5

The description is insufficient for a six-parameter tool. It does not explain how to specify the regulation (law_id versus year + law_number + law_type) or what 'limit' does. The output schema declares no parameters, and the description does not help the agent understand what results to expect.

Parameters: 2/5

Schema description coverage is 0%, so the tool description must compensate. It only hints at the 'query' parameter ('cari istilah atau frasa', i.e. search for a term or phrase) and omits year, limit, law_id, law_type, and law_number; users must infer how to identify the regulation.
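
The identification rule the review infers (law_id alone, or the law_type + law_number + year triple) can be sketched as a small argument builder. Both the rule and the function name are assumptions for illustration, not a documented contract of the server.

```python
# Sketch of the law-identification rule the description leaves implicit:
# a regulation is assumed to be addressable either by law_id alone or by
# the complete (law_type, law_number, year) triple.
def build_search_within_law_args(query, law_id=None,
                                 law_type=None, law_number=None,
                                 year=None, limit=None):
    """Assemble tool arguments, enforcing one complete identification method."""
    triple = (law_type, law_number, year)
    if law_id is None and any(v is None for v in triple):
        raise ValueError(
            "identify the law via law_id or via law_type + law_number + year"
        )
    args = {"query": query}
    # Include only the optional fields the caller actually supplied.
    for key, value in [("law_id", law_id), ("law_type", law_type),
                       ("law_number", law_number), ("year", year),
                       ("limit", limit)]:
        if value is not None:
            args[key] = value
    return args
```

For example, build_search_within_law_args("data pribadi", law_type="UU", law_number="27", year=2022) yields a complete argument set, while passing only law_type raises an error before the tool is ever called.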

Purpose: 5/5

The description clearly states that it searches for a term or phrase within a known regulation, distinguishing it from the sibling tool 'search_laws', which searches across all laws. The verb 'cari' ("search") and the specific resource 'satu peraturan yang sudah diketahui' ("a single known regulation") are precise.

Usage Guidelines: 2/5

No guidance is given on when to use, or not use, this tool versus alternatives. It does not state prerequisites, such as needing a law_id or a year/number/type combination to identify the regulation, nor does it mention when to prefer 'search_laws' instead.
