Pasal.id — Indonesian Law
Server Details
Search-first MCP for Indonesian laws: Pasal text, status, structure, provenance, and reports.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Aturio/pasal-id-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 11 of 11 tools scored. Lowest: 2.8/5.
Each tool targets a distinct operation: metadata retrieval, specific part access, validity checking, structural hierarchy, pasal fetching, listing, health check, multi-pasal reading, issue reporting, full-text search, and within-law search. No significant functional overlap.
Most tools follow a consistent verb_noun pattern (e.g., get_law_overview, list_laws). Minor deviations: 'ping' is a standalone health check, and 'get_pasal' omits the 'law' prefix, but the pattern is still clear.
With 11 tools, the server covers browsing, searching, reading, and status checking for Indonesian laws without being overwhelming. The count is well-scoped for the domain.
The tool set covers all essential operations for a legal reference database: listing, searching (global and within law), reading individual articles and sections, accessing structure and metadata, checking status, and reporting data issues. No obvious gaps for read-only access.
Available Tools
11 tools

get_law_overview — Ringkasan Peraturan (Regulation Summary) · Grade B · Read-only · Idempotent
Tampilkan metadata kanonik, provenance, sumber, dan struktur satu peraturan Indonesia. (English: Display the canonical metadata, provenance, sources, and structure of a single Indonesian regulation.)
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | ||
| law_id | No | ||
| law_type | No | ||
| law_number | No |
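The assessment below notes that none of the four parameters are documented, so any call shape is an assumption. A minimal sketch, in Python, of two plausible ways an agent might identify a regulation; the law_id format and the either-or rule are both invented for illustration:

```python
# Hypothetical get_law_overview arguments. Parameter names come from the
# table above; every value and the validity rule below are assumptions.
by_id = {"law_id": "uu-13-2003"}                                    # guessed id format
by_citation = {"law_type": "UU", "law_number": "13", "year": 2003}

def identifies_law(args: dict) -> bool:
    # Assumed rule: either a direct law_id, or the full type/number/year triple.
    return "law_id" in args or {"law_type", "law_number", "year"} <= args.keys()
```

If the server accepts only one of these shapes, a better tool description would say which.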
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint=false. The description adds specific outputs (canonical metadata, provenance, sources, structure) without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence (10 words) that efficiently conveys the tool's purpose with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 optional parameters and numerous sibling tools, the description lacks guidance on how to uniquely identify a regulation (e.g., required parameter combinations). The output schema is present but parameter usage remains unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but provides no parameter explanations. The four parameters are entirely undocumented, leaving the agent to guess their roles.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Tampilkan' and resource 'metadata kanonik, provenance, sumber, dan struktur satu peraturan Indonesia', clearly differentiating from sibling tools like get_law_structure (structure only) or list_laws (list).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives no guidance on when to use this tool versus alternatives such as get_law_structure or list_laws, nor does it mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_law_part — Ambil Bagian Peraturan (Retrieve Regulation Part) · Grade B · Read-only · Idempotent
Ambil bagian tertentu dari satu peraturan, seperti Bab II, Pembukaan, Menimbang, Mengingat, Memutuskan, Penutup, Lampiran, atau node_id dari struktur. (English: Retrieve a specific part of a single regulation, such as Bab II, Pembukaan, Menimbang, Mengingat, Memutuskan, Penutup, Lampiran, or a node_id from the structure.)
| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | ||
| law_id | Yes | ||
| number | No | ||
| node_id | No | ||
| max_chars | No | ||
| node_type | No | ||
| part_type | No | ||
| include_children | No |
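The assessment below flags that cursor pagination, max_chars truncation, and the part_type/node_id choice are all undocumented. A sketch of one plausible call shape, with every value and the pagination convention assumed:

```python
# Hypothetical get_law_part arguments; only law_id is required per the table.
# The part_type enum, the Roman-numeral number, and max_chars are guesses.
args = {
    "law_id": "uu-13-2003",
    "part_type": "bab",          # assumed to name Bab, Pembukaan, Lampiran, etc.
    "number": "II",              # e.g. Bab II
    "max_chars": 4000,           # assumed per-response truncation limit
    "include_children": True,
}

def next_page(args: dict, cursor: str) -> dict:
    # Assumed cursor convention: repeat the call with the cursor returned
    # by the previous response to fetch the next chunk.
    return {**args, "cursor": cursor}
```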
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds no extra behavioral context (e.g., output format, pagination). It does not contradict the annotations, so the score stays at the baseline of 3.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise and front-loaded with the purpose. The use of Indonesian may reduce clarity given the English sibling names, but the text is not verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters and an output schema, the description is too brief. It lacks guidance on cursor pagination, max_chars truncation, include_children behavior, and how to choose between node_id and part_type. An output schema exists, but the description does not reference it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description only hints at part_type values (e.g., Bab II). It does not explain the other parameters (cursor, number, node_id, max_chars, node_type, include_children) and so fails to compensate for the missing schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a specific part of a regulation, listing examples like Bab II, Pembukaan, etc. It distinguishes from siblings (e.g., get_law_structure for entire structure, get_pasal for articles) by focusing on parts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The examples hint at usage, but there is no when-not-to-use advice and no comparison to sibling tools. The description is in Indonesian while the sibling names are in English, adding confusion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_law_status — Cek Status Peraturan (Check Regulation Status) · Grade A · Read-only · Idempotent
Cek apakah peraturan Indonesia masih berlaku, diubah, atau dicabut. (English: Check whether an Indonesian regulation is still in force, amended, or revoked.)
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | ||
| law_id | No | ||
| law_type | No | ||
| law_number | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint false, so safety is covered. The description adds the specific behavioral result (valid/amended/revoked). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One short sentence (6 words in Indonesian) that front-loads the purpose. No unnecessary words, perfectly scoped.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the basic purpose, but given 4 optional parameters with no guidance on combinations or prerequisites, the agent lacks context for correct invocation. Output schema likely helps, but parameter usage remains unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description provides no parameter details (e.g., what law_type values are valid, how law_id differs from law_number). The agent must infer parameter meanings from names alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool checks whether an Indonesian regulation is valid, amended, or revoked. It uses a specific verb ('cek') and resource ('status peraturan') and is distinct from siblings like get_law_overview, which provides general information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The usage is implied from the description (checking law status), but no explicit guidance on when to use vs alternatives like get_law_overview or search_laws. No exclusion criteria or alternative mentions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_law_structure — Lihat Struktur Peraturan (View Regulation Structure) · Grade A · Read-only · Idempotent
Lihat hierarki Bab, Bagian, Pasal, Pembukaan, Penutup, dan Lampiran satu peraturan tanpa memuat seluruh teks. (English: View the hierarchy of Bab, Bagian, Pasal, Pembukaan, Penutup, and Lampiran in a single regulation without loading the full text.)
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | ||
| depth | No | ||
| law_id | No | ||
| law_type | No | ||
| law_number | No | ||
| include_special_parts | No |
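The 'depth' parameter is undocumented, but a plausible reading is that it cuts the Bab > Bagian > Pasal tree after N levels. A client-side sketch of what that limit might mean; the node layout ("children" lists) is an assumption, not the server's documented response shape:

```python
# Sketch of an assumed depth limit on a structure tree returned by
# get_law_structure; nodes are dicts with an optional "children" list.
def prune(node: dict, depth: int) -> dict:
    if depth <= 0:
        # Drop the subtree entirely once the depth budget is spent.
        return {k: v for k, v in node.items() if k != "children"}
    return {**node, "children": [prune(c, depth - 1) for c in node.get("children", [])]}
```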
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, destructiveHint=false; description adds that it returns structure without full text, which is useful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence directly states purpose and scope; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While purpose is clear and output schema exists, the omission of parameter descriptions makes the tool incomplete for correct invocation, especially with 6 optional parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and description provides no explanation of any parameter (e.g., 'depth', 'law_id'). Agent cannot infer how to specify the target regulation or control output depth.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'lihat' (view) and the resource 'hierarki Bab, Bagian, Pasal...', clearly distinguishing it from sibling tools like 'get_pasal' (a specific article) or 'get_law_overview' (likely a summary).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus alternatives like 'get_law_part' or 'get_pasal'; context from sibling names implies purpose but not direct differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pasal — Ambil Pasal (Retrieve Pasal) · Grade B · Read-only · Idempotent
Ambil teks resmi Pasal tertentu dari peraturan Indonesia. (English: Retrieve the official text of a specific Pasal from an Indonesian regulation.)
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | ||
| law_id | No | ||
| law_type | No | ||
| law_number | No | ||
| pasal_number | No |
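All five parameters are optional in the schema, so the combination that actually pins down one article is a guess. A sketch pairing a hypothetical argument set with the conventional Indonesian citation form it corresponds to; every value is invented:

```python
# Hypothetical get_pasal arguments; the assumption is that a
# type/number/year triple plus pasal_number identifies one article.
args = {"law_type": "UU", "law_number": "13", "year": 2003, "pasal_number": "156"}

def citation(a: dict) -> str:
    # Conventional citation form for the same identifiers,
    # e.g. "Pasal 156 UU No. 13 Tahun 2003".
    return f"Pasal {a['pasal_number']} {a['law_type']} No. {a['law_number']} Tahun {a['year']}"
```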
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate a safe, read-only operation (readOnlyHint, idempotentHint, destructiveHint). The description adds value by specifying the tool returns 'official text' of a specific article, which is behavioral context beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently conveys the core action. It may be too brief given the tool's complexity (5 parameters), but it remains concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema (so return values don't need description), the text is insufficient for a tool with 5 undocumented parameters. It does not explain how to construct queries or handle optional parameters, leaving significant gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not explain any of the five parameters (year, law_id, law_type, law_number, pasal_number). With 0% schema description coverage, the description fails to compensate, leaving the agent without guidance on how to specify which law or article.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves official text of a specific article from Indonesian regulation. It uses a specific verb ('Ambil') and resource ('Pasal tertentu dari peraturan Indonesia'), distinguishing it from sibling tools like get_law_structure or get_law_overview.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, exclusions, or when not to use it. The single sentence lacks any usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_laws — Daftar Peraturan (List Regulations) · Grade C · Read-only · Idempotent
Jelajahi daftar peraturan Indonesia dengan filter jenis, tahun, status, dan judul. (English: Browse the list of Indonesian regulations with filters for type, year, status, and title.)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| year | No | ||
| search | No | ||
| status | No | ||
| per_page | No | ||
| issuing_body | No | ||
| regulation_type | No |
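The assessment below notes that pagination behavior is undocumented; a sketch assuming page/per_page drive simple page-number pagination, with the filter and status values invented for illustration:

```python
# Hypothetical list_laws filters; the status value is a guess at an
# undocumented enum ("berlaku" = in force).
filters = {"regulation_type": "UU", "year": 2003, "status": "berlaku", "per_page": 20}

def page_args(filters: dict, page: int) -> dict:
    # Assumed convention: repeat the same filters with an incremented page.
    return {**filters, "page": page}
```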
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations include readOnlyHint, idempotentHint, openWorldHint, destructiveHint=false, which already disclose safety. The description adds no extra behavioral context (e.g., pagination, data freshness). With annotations present, a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise and front-loaded. However, it could be slightly expanded to include more details without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 7 parameters and an output schema, the description is too minimal. It does not mention pagination, default values, or the nature of results, making it incomplete for an agent to understand the full behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description lists only four filters ('jenis, tahun, status, dan judul') but the schema has seven parameters including page, per_page, issuing_body. With 0% schema description coverage, the description fails to fully compensate, leaving many parameters unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it lists Indonesian regulations with filters, using the verb 'Jelajahi' (explore). However, it does not differentiate itself from the sibling tool 'search_laws', which likely provides full-text search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool over alternatives like 'search_laws'. Users are not told whether to use this for browsing vs. search_laws for text search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ping — Ping · Grade A · Read-only · Idempotent
Cek kesehatan untuk client yang sudah terautentikasi. (English: Health check for authenticated clients.)
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnly, idempotent, and non-destructive traits. The description adds authentication requirement but doesn't disclose additional behavioral details beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, concise sentence in Indonesian. No wasted words; front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, and an output schema is present), the description is sufficient. It could mention the return value, but the output schema likely covers it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. Baseline for zero parameters is 4, and description adds no unnecessary parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states this is a health check for authenticated clients, using a specific verb ('cek kesehatan') and resource. It distinguishes from siblings which are all law-related.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. While siblings are different domains, the description doesn't advise using this for verifying connectivity before other calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_law_section — Baca Bagian Peraturan (Read Regulation Section) · Grade A · Read-only · Idempotent
Baca beberapa Pasal sekaligus dari satu peraturan dengan cross-ref terselesaikan, batas chars, dan kursor. Cocok untuk skenario membaca rangkaian Pasal (pasal_numbers / pasal_range / bab / whole) dalam sekali panggilan. (English: Read several Pasal at once from a single regulation, with resolved cross-references, a character limit, and cursor pagination. Suited to reading a series of Pasal (pasal_numbers / pasal_range / bab / whole) in a single call.)
| Name | Required | Description | Default |
|---|---|---|---|
| scope | Yes | ||
| cursor | No | ||
| format | No | structured | |
| law_id | Yes | ||
| include | No | ||
| max_chars | No |
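The description names four scope types but the schema leaves the 'scope' object undescribed. A sketch of one plausible payload per scope type; the field names inside each object are invented, only the type names come from the description:

```python
# Hypothetical 'scope' payloads for read_law_section, one per scope type
# named in the description (pasal_numbers / pasal_range / bab / whole).
scopes = [
    {"type": "pasal_numbers", "numbers": ["1", "2", "5"]},
    {"type": "pasal_range", "from": "10", "to": "15"},
    {"type": "bab", "number": "III"},
    {"type": "whole"},
]
# "structured" is the documented default for format; max_chars is a guess.
args = {"law_id": "uu-13-2003", "scope": scopes[1], "format": "structured", "max_chars": 8000}
```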
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds behavioral details such as cross-reference resolution, character limit ('batas chars'), and cursor-based pagination, going beyond annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the core function and then providing use case context without any redundant or verbose phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 params, nested objects, output schema), the description covers the main features (cross-ref, cursor, char limit, scope types) and is consistent with annotations. It does not address error cases or prerequisites, but output schema handles return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description partially compensates by mentioning cursor, max_chars, and scope types (pasal_numbers, bab, etc.). However, it does not explain law_id (required), format, or the include parameter, leaving gaps for a tool with 6 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool reads multiple articles ('Pasal') at once from a single regulation, with resolved cross-references and cursor support, distinguishing it from sibling tools like get_pasal (likely single article) and get_law_overview.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates it is suitable for reading a series of articles ('rangkaian Pasal'), providing context, but does not explicitly exclude alternatives or state when not to use it, though the sibling list implies differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
report_issue — Laporkan Masalah Data (Report Data Issue) · Grade A
Laporkan masalah data Pasal.id dari alur MCP, seperti OCR salah, peraturan hilang, Pasal hilang, tautan rusak, atau konten usang. (English: Report a Pasal.id data issue from an MCP flow, such as OCR errors, missing regulations, missing Pasal, broken links, or outdated content.)
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | ||
| title | Yes | ||
| law_id | No | ||
| node_id | No | ||
| law_type | No | ||
| law_number | No | ||
| description | No | ||
| report_type | Yes | ||
| pasal_number | No | ||
| contact_email | No | ||
| reference_url | No | ||
| current_content | No | ||
| suggested_content | No |
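With 13 parameters and no schema descriptions, the minimal valid payload is unclear. A sketch assuming only the two required fields matter for acceptance; the report_type value is a guess at an undocumented enum:

```python
# Hypothetical report_issue payload. Only title and report_type are required
# per the table; "ocr_error" is an invented enum value (the description
# mentions "OCR salah", i.e. OCR errors).
report = {
    "report_type": "ocr_error",
    "title": "Garbled text in Pasal 5",
    "law_id": "uu-13-2003",
    "pasal_number": "5",
    "description": "Several words are scrambled in ayat (2).",
}

def has_required(r: dict) -> bool:
    # Client-side check before submitting, per the Required column above.
    return {"title", "report_type"} <= r.keys()
```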
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false, destructiveHint=false, etc. The description adds that this is a reporting action, which aligns with readOnlyHint=false, but it does not disclose additional behavioral traits (e.g., whether reports are public, require authentication, or have rate limits). With annotations present, the bar is lower, and the description provides minimal extra value beyond stating the purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the purpose with concrete examples. It is front-loaded and contains no filler. However, it could slightly expand on parameter usage without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema which reduces need for return value documentation, the tool has 13 parameters with 0% schema description coverage and minimal narrative guidance. The description does not cover parameter usage, prerequisites, or error conditions, leaving the agent under-informed for a tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for the 13 parameters. However, it only lists example report types ("OCR salah, peraturan hilang, Pasal hilang") without explaining individual fields like year, law_id, node_id, etc. The agent would lack guidance on how to populate optional parameters correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for reporting data issues ("Laporkan masalah data") with specific examples like OCR errors, missing regulations, missing pasal, broken links, or outdated content. The verb 'Laporkan' is specific and the resource 'masalah data Pasal.id' is explicit. Sibling tools are all retrieval-oriented, so this distinguishes itself well.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use during MCP flow ("dari alur MCP") but does not explicitly state when to avoid or mention alternatives. The context is clear given sibling tools are for reading data, so users can infer this is for reporting problems. A score of 4 reflects clear context without exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_laws — Cari Peraturan (Search Regulations) · Grade C · Read-only · Idempotent
Cari full-text di peraturan perundang-undangan Indonesia. (English: Full-text search across Indonesian laws and regulations.)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| query | Yes | ||
| year_to | No | ||
| language | No | id | |
| year_from | No | ||
| regulation_type | No |
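The assessment below notes the year-range and type filters are unexplained. A sketch of a plausible filtered query; only query is required, 'id' is the documented language default, and the remaining values are invented:

```python
# Hypothetical search_laws arguments.
args = {
    "query": "pesangon",         # severance pay
    "year_from": 2000,           # assumed inclusive lower bound
    "year_to": 2023,             # assumed inclusive upper bound
    "regulation_type": "UU",
    "limit": 10,
    "language": "id",            # documented schema default
}
```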
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the description's 'Cari full-text' is consistent but adds nothing beyond annotations about behavioral traits like pagination or limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is too minimal, just one short phrase. It is underspecified rather than concise, lacking any detail about the search functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters and the presence of an output schema, the description fails to cover how to use filters like year range or regulation type, making it incomplete for effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the schema contains no parameter descriptions. The tool description provides no explanation of parameters like 'limit', 'year_from', 'year_to', 'language', or 'regulation_type', leaving the agent without semantic guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches full-text in Indonesian regulations ('Cari full-text di peraturan perundang-undangan Indonesia'), distinguishing it from siblings like 'search_within_law' which searches within a specific law, and 'list_laws' which lists laws.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives such as 'search_within_law' or 'list_laws'. The description does not mention exclusions, prerequisites, or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_within_law — Cari Dalam Peraturan (Search Within a Regulation) · Grade B · Read-only · Idempotent
Cari istilah atau frasa di dalam satu peraturan yang sudah diketahui. (Search for a term or phrase within a single, already-known regulation.)
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
| limit | No | | |
| query | Yes | | |
| law_id | No | | |
| law_type | No | | |
| law_number | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, and idempotentHint as true, and destructiveHint as false, so the safety profile is clear. The description adds no extra behavioral context, such as pagination via the limit parameter or how the target regulation must be identified. With annotations present, a 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence with no wasted words. However, it could be more informative without sacrificing brevity, given the tool has multiple parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is insufficient for a 6-parameter tool. It lacks explanation of how to specify the regulation (law_id vs. year+number+type) and the limit parameter. An output schema exists but is not shown; however, the description does not help the agent understand what results to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only hints at the 'query' parameter ('cari istilah atau frasa'), but omits explanation of year, limit, law_id, law_type, and law_number. Users need to infer how to identify the regulation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches for a term or phrase within a known regulation, distinguishing it from sibling tool 'search_laws' which searches across multiple laws. The verb 'cari' and specific resource 'satu peraturan yang sudah diketahui' are precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use or not use this tool versus alternatives. It does not specify prerequisites such as needing a law ID or combination of year/number/type to identify the regulation, nor does it mention when to prefer 'search_laws' instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
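The critique above notes that an agent cannot tell from the description whether to pass law_id or the law_type/law_number/year combination. A defensive client could encode that guess explicitly, as in this sketch; the either-or rule it enforces is an assumption about the API, not documented behavior.

```python
def build_search_within_law_args(query, law_id=None, law_type=None,
                                 law_number=None, year=None, limit=None):
    """Assemble arguments for search_within_law.

    Enforces a guessed identification rule: supply law_id, or else the
    full law_type + law_number + year triple. The published description
    never states which combinations are actually valid, so this rule is
    an assumption made explicit in client code.
    """
    args = {"query": query}
    if law_id is not None:
        args["law_id"] = law_id
    elif law_type is not None and law_number is not None and year is not None:
        args.update({"law_type": law_type, "law_number": law_number, "year": year})
    else:
        raise ValueError("identify the regulation: law_id, or law_type + law_number + year")
    if limit is not None:
        args["limit"] = limit
    return args

# Hypothetical example: UU No. 27/2022 (identifiers chosen for illustration).
args = build_search_within_law_args("data pribadi", law_type="UU",
                                    law_number="27", year=2022)
```

If the server's real rule differs (e.g. law_id is actually required), only the validation branch changes; the point is that the tool description should state the rule so clients do not have to guess it.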
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
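As a pre-publication sanity check, a small sketch like the one below can confirm a parsed manifest has the expected shape. This is not Glama's actual validator; the checks simply mirror the structure and email-match rule described above.

```python
def validate_glama_manifest(manifest, account_email):
    """Minimal shape check for a parsed /.well-known/glama.json document.

    Mirrors the documented requirements: the $schema URL must match, and
    at least one maintainer email must equal the Glama account email.
    Glama's real verification may check more than this.
    """
    if manifest.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        return False
    maintainers = manifest.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)

manifest = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
```

Running this locally before deploying the file catches the most common mistake: a maintainer email that does not match the Glama account.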
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is shown as unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.