search_rekabet_kurumu_decisions
Search Competition Authority decisions in Turkey by title, decision number, decision date, publication date, or full text. Filter results by decision type for antitrust and competition law research.
Instructions
Search Competition Authority (Rekabet Kurumu) decisions for competition law and antitrust research.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| KararSayisi | No | Decision number (Karar Sayısı). | |
| KararTarihi | No | Decision date (Karar Tarihi), e.g., DD.MM.YYYY. | |
| KararTuru | No | Decision type (Karar Türü). One of: ALL, Birleşme ve Devralma, Diğer, Menfi Tespit ve Muafiyet, Özelleştirme, Rekabet İhlali. | ALL |
| PdfText | No | Search in decision text. Use "\"kesin cümle\"" (escaped quotes around an exact phrase) for exact-phrase matching. | |
| YayinlanmaTarihi | No | Publication date (Yayım Tarihi), e.g., DD.MM.YYYY. | |
| page | No | Page number to fetch for the results list. | |
| sayfaAdi | No | Search in decision title (Başlık). | |
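Example: the arguments below would fetch the first page of merger and acquisition decisions whose text contains an exact phrase. This is a hypothetical invocation sketch assuming a connected MCP `ClientSession`; the tool and parameter names come from the schema above, while the session setup and search values are illustrative.

```python
# Hypothetical call through an MCP ClientSession (assumed to be connected already).
arguments = {
    "KararTuru": "Birleşme ve Devralma",   # restrict to merger and acquisition decisions
    "PdfText": '"birleşme ve devralma"',   # escaped inner quotes request exact-phrase matching
    "YayinlanmaTarihi": "01.01.2024",      # publication date, DD.MM.YYYY
    "page": 1,                             # first page of the results list
}
result = await session.call_tool("search_rekabet_kurumu_decisions", arguments)
```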
Implementation Reference
- mcp_server_main.py:1011-1066 (handler): Main handler for the 'search_rekabet_kurumu_decisions' MCP tool. Maps user-friendly decision type names to GUID enums, constructs a RekabetKurumuSearchRequest, calls RekabetKurumuApiClient.search_decisions, and returns the results as a dict.

  ```python
  # (decorator opening elided in the source excerpt)
      description="Search Competition Authority (Rekabet Kurumu) decisions for competition law and antitrust",
      annotations={
          "readOnlyHint": True,
          "openWorldHint": True,
          "idempotentHint": True
      }
  )
  async def search_rekabet_kurumu_decisions(
      sayfaAdi: str = Field("", description="Search in decision title (Başlık)."),
      YayinlanmaTarihi: str = Field("", description="Publication date (Yayım Tarihi), e.g., DD.MM.YYYY."),
      PdfText: str = Field(
          "",
          description='Search in decision text. Use "\\"kesin cümle\\"" for precise matching.'
      ),
      KararTuru: Literal[
          "ALL", "Birleşme ve Devralma", "Diğer", "Menfi Tespit ve Muafiyet",
          "Özelleştirme", "Rekabet İhlali"
      ] = Field("ALL", description="Decision type (Karar Türü)."),
      KararSayisi: str = Field("", description="Decision number (Karar Sayısı)."),
      KararTarihi: str = Field("", description="Decision date (Karar Tarihi), e.g., DD.MM.YYYY."),
      page: int = Field(1, ge=1, description="Page number to fetch for the results list.")
  ) -> Dict[str, Any]:
      """Search Competition Authority decisions."""
      karar_turu_guid_enum = KARAR_TURU_ADI_TO_GUID_ENUM_MAP.get(KararTuru)
      try:
          if karar_turu_guid_enum is None:
              logger.warning(f"Invalid user-provided KararTuru: '{KararTuru}'. Defaulting to TUMU (all).")
              karar_turu_guid_enum = RekabetKararTuruGuidEnum.TUMU
      except Exception as e_map:
          logger.error(f"Error mapping KararTuru '{KararTuru}': {e_map}. Defaulting to TUMU.")
          karar_turu_guid_enum = RekabetKararTuruGuidEnum.TUMU

      search_query = RekabetKurumuSearchRequest(
          sayfaAdi=sayfaAdi,
          YayinlanmaTarihi=YayinlanmaTarihi,
          PdfText=PdfText,
          KararTuruID=karar_turu_guid_enum,
          KararSayisi=KararSayisi,
          KararTarihi=KararTarihi,
          page=page
      )
      logger.info(f"Tool 'search_rekabet_kurumu_decisions' called. Query: {search_query.model_dump_json(exclude_none=True, indent=2)}")
      try:
          result = await rekabet_client_instance.search_decisions(search_query)
          return result.model_dump()
      except Exception as e:
          logger.exception("Error in tool 'search_rekabet_kurumu_decisions'.")
          return RekabetSearchResult(decisions=[], retrieved_page_number=page, total_records_found=0, total_pages=0).model_dump()
  ```
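  The handler resolves `KararTuru` through `KARAR_TURU_ADI_TO_GUID_ENUM_MAP`, which is defined elsewhere in mcp_server_main.py and not shown here. A plausible sketch of that map, pairing the user-facing `Literal` names with the enum members from rekabet_mcp_module/models.py (listed further below):

  ```python
  # Sketch only: the actual map lives elsewhere in mcp_server_main.py.
  KARAR_TURU_ADI_TO_GUID_ENUM_MAP = {
      "ALL": RekabetKararTuruGuidEnum.TUMU,
      "Birleşme ve Devralma": RekabetKararTuruGuidEnum.BIRLESME_DEVRALMA,
      "Diğer": RekabetKararTuruGuidEnum.DIGER,
      "Menfi Tespit ve Muafiyet": RekabetKararTuruGuidEnum.MENFI_TESPIT_MUAFIYET,
      "Özelleştirme": RekabetKararTuruGuidEnum.OZELLESTIRME,
      "Rekabet İhlali": RekabetKararTuruGuidEnum.REKABET_IHLALI,
  }
  ```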
- rekabet_mcp_module/models.py:26-35 (schema): Pydantic model defining the input schema for Rekabet Kurumu search parameters, used by the tool handler.

  ```python
  class RekabetKurumuSearchRequest(BaseModel):
      """Model for Rekabet Kurumu (Turkish Competition Authority) search request."""
      sayfaAdi: str = Field("", description="Title")
      YayinlanmaTarihi: str = Field("", description="Date")
      PdfText: str = Field("", description="Text")
      KararTuruID: RekabetKararTuruGuidEnum = Field(RekabetKararTuruGuidEnum.TUMU, description="Type")
      KararSayisi: str = Field("", description="No")
      KararTarihi: str = Field("", description="Date")
      page: int = Field(1, ge=1, description="Page")
  ```
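  A minimal usage sketch (the field values are illustrative; note that the empty-string defaults mean unset filters are serialized as empty strings rather than omitted):

  ```python
  # Illustrative query: competition-infringement decisions mentioning an exact phrase.
  query = RekabetKurumuSearchRequest(
      PdfText='"hakim durumun kötüye kullanılması"',
      KararTuruID=RekabetKararTuruGuidEnum.REKABET_IHLALI,
      page=1,
  )
  # Matches the handler's logging call; empty strings are not None, so they survive exclude_none.
  print(query.model_dump_json(exclude_none=True, indent=2))
  ```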
- rekabet_mcp_module/client.py:73-209 (helper): Core search logic in RekabetKurumuApiClient: builds query params, fetches HTML from rekabet.gov.tr/Kararlar, parses it with BeautifulSoup, and extracts decision summaries and pagination info.

  ```python
  async def search_decisions(self, params: RekabetKurumuSearchRequest) -> RekabetSearchResult:
      request_path = self.SEARCH_PATH
      final_query_params = self._build_search_query_params(params)
      logger.info(f"RekabetKurumuApiClient: Performing search. Path: {request_path}, Parameters: {final_query_params}")

      try:
          response = await self.http_client.get(request_path, params=final_query_params)
          response.raise_for_status()
          html_content = response.text
      except httpx.RequestError as e:
          logger.error(f"RekabetKurumuApiClient: HTTP request error during search: {e}")
          raise

      soup = BeautifulSoup(html_content, 'html.parser')
      processed_decisions: List[RekabetDecisionSummary] = []
      total_records: Optional[int] = None
      total_pages: Optional[int] = None

      pagination_div = soup.find("div", class_="yazi01")
      if pagination_div:
          text_content = pagination_div.get_text(separator=" ", strip=True)
          total_match = re.search(r"Toplam\s*:\s*(\d+)", text_content)
          if total_match:
              try:
                  total_records = int(total_match.group(1))
                  logger.debug(f"Total records found from pagination: {total_records}")
              except ValueError:
                  logger.warning(f"Could not convert 'Toplam' value to int: {total_match.group(1)}")
          else:
              logger.warning("'Toplam :' string not found in pagination section.")

          results_per_page_assumed = 10
          if total_records is not None:
              calculated_total_pages = math.ceil(total_records / results_per_page_assumed)
              total_pages = calculated_total_pages if calculated_total_pages > 0 else (1 if total_records > 0 else 0)
              logger.debug(f"Calculated total pages: {total_pages}")

          if total_pages is None:  # Fallback if total_records couldn't be parsed
              last_page_link = pagination_div.select_one("li.PagedList-skipToLast a")
              if last_page_link and last_page_link.has_attr('href'):
                  qs = parse_qs(urlparse(last_page_link['href']).query)
                  if 'page' in qs and qs['page']:
                      try:
                          total_pages = int(qs['page'][0])
                          logger.debug(f"Total pages found from 'Last >>' link: {total_pages}")
                      except ValueError:
                          logger.warning(f"Could not convert page value from 'Last >>' link to int: {qs['page'][0]}")
              elif total_records == 0:
                  total_pages = 0  # If no records, 0 pages
              elif total_records is not None and total_records > 0:
                  total_pages = 1  # If records exist but no last page link (e.g. single page)
              else:
                  logger.warning("'Last >>' link not found in pagination section.")

      decision_tables_container = soup.find("div", id="kararList")
      if not decision_tables_container:
          logger.warning("`div#kararList` (decision list container) not found. HTML structure might have changed or no decisions on this page.")
      else:
          decision_tables = decision_tables_container.find_all("table", class_="equalDivide")
          logger.info(f"Found {len(decision_tables)} 'table' elements with class='equalDivide' for parsing.")

          if not decision_tables and total_records is not None and total_records > 0:
              logger.warning(f"Page indicates {total_records} records but no decision tables found with class='equalDivide'.")

          for idx, table in enumerate(decision_tables):
              logger.debug(f"Processing table {idx + 1}...")
              try:
                  rows = table.find_all("tr")
                  if len(rows) != 3:
                      logger.warning(f"Table {idx + 1} has an unexpected number of rows ({len(rows)} instead of 3). Skipping. HTML snippet:\n{table.prettify()[:500]}")
                      continue

                  # Row 1: Publication Date, Decision Number, Related Cases Link
                  td_elements_r1 = rows[0].find_all("td")
                  pub_date = td_elements_r1[0].get_text(strip=True) if len(td_elements_r1) > 0 else None
                  dec_num = td_elements_r1[1].get_text(strip=True) if len(td_elements_r1) > 1 else None
                  related_cases_link_tag = td_elements_r1[2].find("a", href=True) if len(td_elements_r1) > 2 else None

                  related_cases_url_str: Optional[str] = None
                  karar_id_from_related: Optional[str] = None
                  if related_cases_link_tag and related_cases_link_tag.has_attr('href'):
                      related_cases_url_str = urljoin(self.BASE_URL, related_cases_link_tag['href'])
                      qs_related = parse_qs(urlparse(related_cases_link_tag['href']).query)
                      if 'kararId' in qs_related and qs_related['kararId']:
                          karar_id_from_related = qs_related['kararId'][0]

                  # Row 2: Decision Date, Decision Type
                  td_elements_r2 = rows[1].find_all("td")
                  dec_date = td_elements_r2[0].get_text(strip=True) if len(td_elements_r2) > 0 else None
                  dec_type_text = td_elements_r2[1].get_text(strip=True) if len(td_elements_r2) > 1 else None

                  # Row 3: Title and Main Decision Link
                  title_cell = rows[2].find("td", colspan="5")
                  decision_link_tag = title_cell.find("a", href=True) if title_cell else None

                  title_text: Optional[str] = None
                  decision_landing_url_str: Optional[str] = None
                  karar_id_from_main_link: Optional[str] = None
                  if decision_link_tag and decision_link_tag.has_attr('href'):
                      title_text = decision_link_tag.get_text(strip=True)
                      href_val = decision_link_tag['href']
                      if href_val.startswith(self.DECISION_LANDING_PATH_TEMPLATE + "?kararId="):  # Ensure it's a decision link
                          decision_landing_url_str = urljoin(self.BASE_URL, href_val)
                          qs_main = parse_qs(urlparse(href_val).query)
                          if 'kararId' in qs_main and qs_main['kararId']:
                              karar_id_from_main_link = qs_main['kararId'][0]
                      else:
                          logger.warning(f"Table {idx+1} decision link has unexpected format: {href_val}")
                  else:
                      logger.warning(f"Table {idx+1} could not find title/decision link tag.")

                  current_karar_id = karar_id_from_main_link or karar_id_from_related
                  if not current_karar_id:
                      logger.warning(f"Table {idx+1} Karar ID not found. Skipping. Title (if any): {title_text}")
                      continue

                  # Convert string URLs to HttpUrl for the model
                  final_decision_url = HttpUrl(decision_landing_url_str) if decision_landing_url_str else None
                  final_related_cases_url = HttpUrl(related_cases_url_str) if related_cases_url_str else None

                  processed_decisions.append(RekabetDecisionSummary(
                      publication_date=pub_date,
                      decision_number=dec_num,
                      decision_date=dec_date,
                      decision_type_text=dec_type_text,
                      title=title_text,
                      decision_url=final_decision_url,
                      karar_id=current_karar_id,
                      related_cases_url=final_related_cases_url
                  ))
                  logger.debug(f"Table {idx+1} parsed successfully: Karar ID '{current_karar_id}', Title '{title_text[:50] if title_text else 'N/A'}...'")
              except Exception as e:
                  logger.warning(f"RekabetKurumuApiClient: Error parsing decision summary {idx+1}: {e}. Problematic Table HTML:\n{table.prettify()}", exc_info=True)
                  continue

      return RekabetSearchResult(
          decisions=processed_decisions,
          total_records_found=total_records,
          retrieved_page_number=params.page,
          total_pages=total_pages if total_pages is not None else 0
      )
  ```
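  The page math assumes 10 results per page: the "Toplam : N" counter in the pagination block is parsed with a regex and the page count derived with `math.ceil`. A worked example:

  ```python
  import math
  import re

  text_content = "Toplam : 37"  # as extracted from div.yazi01
  total_records = int(re.search(r"Toplam\s*:\s*(\d+)", text_content).group(1))
  total_pages = math.ceil(total_records / 10)  # ceil(37 / 10) == 4
  ```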
- rekabet_mcp_module/models.py:8-15 (helper): Enum defining GUID values for Rekabet Kurumu decision types, used in search query parameters.

  ```python
  class RekabetKararTuruGuidEnum(str, Enum):
      TUMU = "ALL"  # Represents "All" or "Select Decision Type"
      BIRLESME_DEVRALMA = "2fff0979-9f9d-42d7-8c2e-a30705889542"      # Merger and Acquisition
      DIGER = "dda8feaf-c919-405c-9da1-823f22b45ad9"                  # Other
      MENFI_TESPIT_MUAFIYET = "95ccd210-5304-49c5-b9e0-8ee53c50d4e8"  # Negative Clearance and Exemption
      OZELLESTIRME = "e1f14505-842b-4af5-95d1-312d6de1a541"           # Privatization
      REKABET_IHLALI = "720614bf-efd1-4dca-9785-b98eb65f2677"         # Competition Infringement
  ```
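  Because the enum subclasses `str`, each member serializes directly to the GUID string (or the "ALL" sentinel), which is presumably what `_build_search_query_params` places in the query string:

  ```python
  print(RekabetKararTuruGuidEnum.REKABET_IHLALI.value)  # 720614bf-efd1-4dca-9785-b98eb65f2677
  print(RekabetKararTuruGuidEnum.TUMU.value)            # ALL
  ```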
- mcp_auth/policy.py:176-176 (registration): Access policy granting read permission to tools matching the 'search_rekabet.*' pattern, including search_rekabet_kurumu_decisions.

  ```python
  engine.add_tool_scope_policy("search_rekabet.*", ["mcp:tools:read"])
  ```