Mevzuat MCP

by saidsurucu

search_within_mevzuat

Search a specific Turkish legislation's articles by ID using Boolean keywords. Returns matching articles sorted by relevance, ideal for navigating large laws.

Instructions

Search within a specific legislation's articles on bedesten.adalet.gov.tr.

Ideal for large legislation where get_mevzuat_content would return too much text. Fetches the full document, splits into individual articles (MADDE), and applies keyword search with Boolean operators. Returns only matching articles sorted by relevance score (match frequency).

Each result includes: article number (madde no), match count, and full article text.

Workflow: search_mevzuat → get mevzuatId → search_within_mevzuat(mevzuatId, keyword)

Example: To find investor compensation articles in Capital Markets Law:

  1. search_mevzuat(mevzuat_adi='sermaye piyasası', mevzuat_tur='KANUN') → mevzuatId

  2. search_within_mevzuat(mevzuat_id='...', keyword='yatırımcı AND tazmin')
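The processing the description outlines (fetch the full text, split on MADDE headings, score each article by match frequency) can be sketched roughly as follows. The regex, function names, and scoring here are illustrative assumptions, not the server's actual implementation.

```python
import re


def split_into_articles(full_text: str) -> dict[str, str]:
    """Split legislation text into articles keyed by madde number.

    Assumes headings like 'MADDE 5' at the start of a line; the real
    server may recognize additional heading variants.
    """
    parts = re.split(r"(?m)^(?=MADDE\s+\d+)", full_text)
    articles = {}
    for part in parts:
        m = re.match(r"MADDE\s+(\d+)", part)
        if m:
            articles[m.group(1)] = part.strip()
    return articles


def search_articles(articles: dict[str, str], keyword: str,
                    case_sensitive: bool = False, max_results: int = 25):
    """Return matching articles sorted by match frequency (relevance)."""
    needle = keyword if case_sensitive else keyword.lower()
    results = []
    for madde_no, text in articles.items():
        haystack = text if case_sensitive else text.lower()
        count = haystack.count(needle)
        if count:
            results.append({"madde_no": madde_no,
                            "match_count": count,
                            "text": text})
    results.sort(key=lambda r: r["match_count"], reverse=True)
    return results[:max_results]
```

Each result dict mirrors the fields named above: article number, match count, and full article text.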

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `mevzuat_id` | Yes | Legislation ID from `search_mevzuat` results (the `mevzuatId` field). This is a string ID (e.g., '345097'), NOT the law number. Call `search_mevzuat` first to obtain it. | |
| `keyword` | Yes | Search query with Boolean operators (operators MUST be uppercase). Simple keyword: `yatırımcı`. AND (both required): `yatırımcı AND tazmin`. OR (at least one): `yatırımcı OR müşteri`. NOT (exclude): `yatırımcı NOT kurum`. Exact phrase: `"mali sıkıntı"`. Combined: `"mali sıkıntı" AND yatırımcı NOT kurum`. | |
| `case_sensitive` | No | Case-sensitive matching. | `false` |
| `max_results` | No | Maximum number of matching articles to return (1-50). | `25` |
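The Boolean grammar described for `keyword` (uppercase AND/OR/NOT, quoted phrases) could be evaluated per article along these lines. This is an illustrative sketch with flat left-to-right evaluation and no parentheses; the server's actual parser and precedence rules may differ.

```python
import shlex


def matches_query(text: str, query: str, case_sensitive: bool = False) -> bool:
    """Evaluate a flat Boolean query (left to right, no parentheses).

    Supports uppercase AND / OR / NOT and "quoted phrases"; a bare
    sequence of terms is treated as an implicit AND.
    """
    if not case_sensitive:
        text = text.lower()
    result, op = None, "AND"
    for token in shlex.split(query):  # shlex keeps quoted phrases intact
        if token in ("AND", "OR", "NOT"):
            op = token
            continue
        term = token if case_sensitive else token.lower()
        present = term in text
        if op == "NOT":          # NOT excludes the following term
            present = not present
        if result is None:
            result = present
        elif op == "OR":
            result = result or present
        else:                    # AND and NOT both combine conjunctively
            result = result and present
        op = "AND"               # reset to implicit AND between terms
    return bool(result)
```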

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `result` | Yes | | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Although the tool declares no behavioral annotations, the description details the internal process: it fetches the full document, splits it into articles, applies a Boolean keyword search, sorts results by relevance, and specifies the result fields. It does not mention rate limits or potentially large data transfers, but it is sufficiently transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured and concise: opens with purpose, then usage context, behavioral details, result format, workflow, and example. No extraneous information; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (four parameters, an existing output schema), the description covers all necessary aspects: input requirements, processing logic, output structure, and usage workflow. It is complete enough for an agent to use the tool correctly on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds substantial value beyond the schema: explains how to obtain mevzuat_id, provides detailed Boolean operator syntax with examples for keyword, and clarifies defaults and constraints for case_sensitive and max_results. With 100% schema coverage, the description still enriches understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search within a specific legislation's articles'. It distinguishes the tool from siblings like get_mevzuat_content by noting it is ideal for large legislation, and from other search_within_* tools by specifying the source domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use: 'Ideal for large legislation where get_mevzuat_content would return too much text.' Provides a step-by-step workflow and a concrete example, guiding the agent on correct invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/saidsurucu/mevzuat-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.