
Yargı MCP

by saidsurucu

search_kvkk_decisions

Read-only · Idempotent

Search Turkish data protection decisions (KVKK/GDPR) for privacy, consent, and data breach cases using Turkish keywords with advanced operators.

Instructions

Use this tool when searching Turkish data protection (KVKK, the GDPR equivalent) decisions, such as privacy, consent, and data breach cases.

Input Schema

| Name     | Required | Description                                                                  | Default |
|----------|----------|------------------------------------------------------------------------------|---------|
| keywords | Yes      | Turkish keywords. Supports `+required`, `-excluded`, and `"exact phrase"` operators. | —       |
| page     | No       | Page number for results (1-50).                                              | 1       |
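As a sketch (the exact wire format depends on the MCP client, and the payload below is illustrative), a call to this tool combining the keyword operators might look like:

```python
# Hypothetical invocation payload for search_kvkk_decisions.
# Keyword operators described in the schema:
#   +term   -> term must appear
#   -term   -> term must not appear
#   "..."   -> exact phrase match
params = {
    "keywords": '+"açık rıza" -reklam "veri ihlali"',
    "page": 2,  # pages 1-50 are accepted
}

# The server translates the page number into a Brave API offset
# (see the handler under Implementation Reference):
#   offset = (page - 1) * pageSize
page_size = 10  # server-side default
offset = (params["page"] - 1) * page_size
print(offset)  # 10
```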

Output Schema

The output follows the `KvkkSearchResult` model shown under Implementation Reference below: a list of decision summaries plus pagination info and the query that was sent.

Implementation Reference

  • Main handler function that performs the KVKK decisions search using Brave Search API, extracts metadata, and returns structured results.
    async def search_decisions(self, params: KvkkSearchRequest) -> KvkkSearchResult:
        """Search for KVKK decisions using Brave API."""
        
        search_query = self._construct_search_query(params.keywords)
        logger.info(f"KvkkApiClient: Searching with query: {search_query}")
        
        try:
            # Calculate offset for pagination
            offset = (params.page - 1) * params.pageSize
            
            response = await self.http_client.get(
                self.BRAVE_API_URL,
                headers={
                    "Accept": "application/json",
                    "Accept-Encoding": "gzip",
                    "x-subscription-token": self.brave_api_token
                },
                params={
                    "q": search_query,
                    "country": "TR",
                    "search_lang": "tr",
                    "ui_lang": "tr-TR",
                    "offset": offset,
                    "count": params.pageSize
                }
            )
            
            response.raise_for_status()
            data = response.json()
            
            # Extract search results
            decisions = []
            web_results = data.get("web", {}).get("results", [])
            
            for result in web_results:
                title = result.get("title", "")
                url = result.get("url", "")
                description = result.get("description", "")
                
                # Extract metadata from title
                metadata = self._extract_decision_metadata_from_title(title)
                
                # Extract decision ID from URL
                decision_id = self._extract_decision_id_from_url(url)
                
                decision = KvkkDecisionSummary(
                    title=title,
                    url=HttpUrl(url) if url else None,
                    description=description,
                    decision_id=decision_id,
                    publication_date=metadata.get("decision_date"),
                    decision_number=metadata.get("decision_number")
                )
                decisions.append(decision)
            
            # Get total results if available
            total_results = None
            query_info = data.get("query", {})
            if "total_results" in query_info:
                total_results = query_info["total_results"]
            
            return KvkkSearchResult(
                decisions=decisions,
                total_results=total_results,
                page=params.page,
                pageSize=params.pageSize,
                query=search_query
            )
            
        except httpx.RequestError as e:
            logger.error(f"KvkkApiClient: HTTP request error during search: {e}")
            return KvkkSearchResult(
                decisions=[], 
                total_results=0, 
                page=params.page, 
                pageSize=params.pageSize,
                query=search_query
            )
        except Exception as e:
            logger.error(f"KvkkApiClient: Unexpected error during search: {e}")
            return KvkkSearchResult(
                decisions=[], 
                total_results=0, 
                page=params.page, 
                pageSize=params.pageSize,
                query=search_query
            )
  • Input schema defining the parameters for the search: keywords, page, and pageSize.
    class KvkkSearchRequest(BaseModel):
        """Model for KVKK (Personal Data Protection Authority) search request via Brave API."""
        keywords: str = Field(..., description="""
            Keywords to search for in KVKK decisions. 
            The search will automatically include 'site:kvkk.gov.tr "karar özeti"' to target KVKK decision summaries.
            Examples: "açık rıza", "veri güvenliği", "kişisel veri işleme"
        """)
        page: int = Field(1, ge=1, le=50, description="Page number for search results (1-50).")
        pageSize: int = Field(10, ge=1, le=10, description="Number of results per page (1-10).")
  • Output schema for search results including list of decisions, pagination info, and query.
    class KvkkSearchResult(BaseModel):
        """Model for the overall search result for KVKK decisions."""
        decisions: List[KvkkDecisionSummary] = Field(default_factory=list, description="List of KVKK decisions found.")
        total_results: Optional[int] = Field(None, description="Total number of results reported by the Brave API, if available.")
        page: int = Field(1, description="Current page number of results.")
        pageSize: int = Field(10, description="Number of results per page.")
        query: Optional[str] = Field(None, description="The actual search query sent to Brave API.")
  • Schema for individual decision summary in search results.
    class KvkkDecisionSummary(BaseModel):
        """Model for a single KVKK decision summary from Brave search results."""
        title: Optional[str] = Field(None, description="Decision title from search results.")
        url: Optional[HttpUrl] = Field(None, description="URL to the KVKK decision page.")
        description: Optional[str] = Field(None, description="Brief description or snippet from search results.")
        decision_id: Optional[str] = Field(None, description="Decision ID extracted from the KVKK URL path (e.g. '7288/2021-1303').")
        publication_date: Optional[str] = Field(None, description="Decision date parsed from the title (DD/MM/YYYY).")
        decision_number: Optional[str] = Field(None, description="Decision number parsed from the title (YYYY/NNNN).")
  • Helper methods for constructing search query, extracting decision ID from URL, and metadata from title used in the search handler.
    def _construct_search_query(self, keywords: str) -> str:
        """Construct the search query for Brave API."""
        base_query = 'site:kvkk.gov.tr "karar özeti"'
        if keywords.strip():
            return f"{base_query} {keywords.strip()}"
        return base_query
    
    def _extract_decision_id_from_url(self, url: str) -> Optional[str]:
        """Extract decision ID from KVKK decision URL."""
        try:
            # Example URL: https://www.kvkk.gov.tr/Icerik/7288/2021-1303
            parsed_url = urlparse(url)
            path_parts = parsed_url.path.strip('/').split('/')
            
            if len(path_parts) >= 3 and path_parts[0] == 'Icerik':
                # Extract the decision ID from the path
                decision_id = '/'.join(path_parts[1:])  # e.g., "7288/2021-1303"
                return decision_id
            
        except Exception as e:
            logger.debug(f"Could not extract decision ID from URL {url}: {e}")
        
        return None
    
    def _extract_decision_metadata_from_title(self, title: str) -> Dict[str, Optional[str]]:
        """Extract decision metadata from title string."""
        metadata = {
            "decision_date": None,
            "decision_number": None
        }
        
        if not title:
            return metadata
        
        # Extract decision date (DD/MM/YYYY format)
        date_match = re.search(r'(\d{1,2}/\d{1,2}/\d{4})', title)
        if date_match:
            metadata["decision_date"] = date_match.group(1)
        
        # Extract decision number (YYYY/XXXX format)
        number_match = re.search(r'(\d{4}/\d+)', title)
        if number_match:
            metadata["decision_number"] = number_match.group(1)
        
        return metadata
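For illustration, the helper logic above can be exercised standalone. The functions below are module-level copies of the client methods (lifted out of the class for a runnable sketch); the example URL comes from the comment in `_extract_decision_id_from_url`, while the title string is a hypothetical example of the format the regexes target.

```python
import re
from typing import Optional
from urllib.parse import urlparse

def construct_search_query(keywords: str) -> str:
    """Prefix keywords with the fixed site/phrase restriction."""
    base_query = 'site:kvkk.gov.tr "karar özeti"'
    return f"{base_query} {keywords.strip()}" if keywords.strip() else base_query

def extract_decision_id(url: str) -> Optional[str]:
    """Pull the 'NNNN/YYYY-NNNN' segment out of an Icerik URL path."""
    parts = urlparse(url).path.strip("/").split("/")
    if len(parts) >= 3 and parts[0] == "Icerik":
        return "/".join(parts[1:])
    return None

def extract_title_metadata(title: str) -> dict:
    """Find a DD/MM/YYYY date and a YYYY/NNNN decision number in a title."""
    date = re.search(r"(\d{1,2}/\d{1,2}/\d{4})", title)
    number = re.search(r"(\d{4}/\d+)", title)
    return {
        "decision_date": date.group(1) if date else None,
        "decision_number": number.group(1) if number else None,
    }

print(construct_search_query("açık rıza"))
# site:kvkk.gov.tr "karar özeti" açık rıza
print(extract_decision_id("https://www.kvkk.gov.tr/Icerik/7288/2021-1303"))
# 7288/2021-1303

# Hypothetical title in the format the regexes expect:
print(extract_title_metadata("09/12/2021 Tarihli ve 2021/1303 Sayılı Karar Özeti"))
# {'decision_date': '09/12/2021', 'decision_number': '2021/1303'}
```

Note that the decision-number regex `\d{4}/\d+` does not match inside the date, because the four-digit year is never followed by a slash.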
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, openWorldHint=true, and idempotentHint=true. The description adds context about the search domain (Turkish data protection decisions) and case types, which is useful but doesn't disclose additional behavioral traits like rate limits, authentication needs, or result format. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just two sentences that directly address purpose and usage context. Every word earns its place with no redundant information or fluff. It's appropriately sized for a search tool with good annotations and schema coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has comprehensive annotations (readOnly, openWorld, idempotent), 100% schema description coverage, and an output schema exists, the description provides adequate context. It specifies the domain and use cases, though it could be slightly more complete by mentioning result format or distinguishing more clearly from sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3. No compensation is needed given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for Turkish data protection (KVKK/GDPR equivalent) decisions, specifying the resource domain. It distinguishes from generic 'search' by focusing on KVKK decisions, though it doesn't explicitly differentiate from sibling tools like 'search_bddk_decisions' or 'search_rekabet_kurumu_decisions' which search different regulatory domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context with 'Use this when searching Turkish data protection (KVKK/GDPR equivalent) decisions' and mentions specific case types (privacy, consent, data breach). However, it doesn't explicitly state when NOT to use this tool or name alternative tools for similar searches in other domains, leaving some ambiguity about tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
