# get_car_details
Retrieve detailed vehicle information (specifications, pricing, and seller details) from a Turbo.az automotive listing, identified by listing ID or full URL.
## Instructions
Fetches detailed listing info from Turbo.az. Requires listing ID or URL.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| listing_id | Yes | Listing ID (e.g. 1234567) or full URL | — |
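Both accepted input forms resolve to the same listing URL. A minimal sketch of that normalization, assuming the scraper's `BASE_URL` is `https://turbo.az` (the constant is defined elsewhere in `src/scraper.py` and not shown in this excerpt):

```python
BASE_URL = "https://turbo.az"  # assumed value; defined outside this excerpt

def normalize_listing_url(listing_id: str) -> str:
    """Expand a bare listing ID to a full Turbo.az URL; pass full URLs through.

    Mirrors the input handling at the top of get_car_details.
    """
    if listing_id.startswith("http"):
        return listing_id
    return f"{BASE_URL}/autos/{listing_id}"

print(normalize_listing_url("1234567"))                     # https://turbo.az/autos/1234567
print(normalize_listing_url("https://turbo.az/autos/999"))  # https://turbo.az/autos/999
```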
## Implementation Reference
- `src/scraper.py:374-412` (handler): The main handler function that executes the web scraping logic to fetch car details.
```python
async def get_car_details(self, listing_id: str) -> dict:
    """Gets detailed information of a specific listing."""
    # Can be URL or ID
    if listing_id.startswith("http"):
        url = listing_id
    else:
        url = f"{BASE_URL}/autos/{listing_id}"

    logger.info(f"Fetching details: {url}")

    def _scrape():
        driver = self._get_driver()
        try:
            driver.get(url)
            WebDriverWait(driver, 20).until(
                EC.presence_of_element_located((By.CLASS_NAME, "product"))
            )

            details = {"url": url}

            # Title
            try:
                title = driver.find_element(By.CLASS_NAME, "product-title")
                details["title"] = title.text.strip()
            except NoSuchElementException:
                details["title"] = "N/A"

            # Price (sidebar: product-price__i--bold)
            try:
                price = driver.find_element(By.CSS_SELECTOR, ".product-price__i--bold")
                details["price"] = price.text.strip()
            except NoSuchElementException:
                try:
                    price = driver.find_element(By.CLASS_NAME, "product-price__i")
                    details["price"] = price.text.strip()
                except NoSuchElementException:
                    details["price"] = "N/A"
            # (excerpt ends here; the full handler spans lines 374-412)
```

- `src/server.py:133-146` (registration): Registration of the `get_car_details` tool in the MCP server definitions.
```python
Tool(
    name="get_car_details",
    description="Fetches detailed listing info from Turbo.az. Requires listing ID or URL.",
    inputSchema={
        "type": "object",
        "properties": {
            "listing_id": {
                "type": "string",
                "description": "Listing ID (e.g. 1234567) or full URL"
            }
        },
        "required": ["listing_id"]
    }
),
```

- `src/server.py:204-213` (handler): Logic within the server's `call_tool` function that routes the `get_car_details` request to the scraper.
```python
elif name == "get_car_details":
    listing_id = arguments.get("listing_id")
    if not listing_id:
        return [TextContent(type="text", text="Error: listing_id is required")]

    details = await scraper.get_car_details(listing_id)

    # Fetch images and include them as ImageContent
    content_list = [TextContent(type="text", text=json.dumps(details, ensure_ascii=False, indent=2))]
    # (excerpt ends here; the handler continues through line 213)
```
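The routing code above serializes the details dict with `ensure_ascii=False`, which keeps Azerbaijani characters in titles and place names readable instead of emitting `\uXXXX` escapes. A small illustration, using a hypothetical payload (the real keys come from the scraper: `url`, `title`, `price`, ...):

```python
import json

# Hypothetical details payload; shape only, not real scraper output.
details = {"title": "Mercedes E 220 d", "price": "45 500 AZN", "city": "Bakı"}

payload = json.dumps(details, ensure_ascii=False, indent=2)
print("Bakı" in payload)   # True: non-ASCII characters are kept literal
print("\\u" in payload)    # False: no \uXXXX escape sequences in the output
```

With the default `ensure_ascii=True`, the same call would render `"Bakı"` as `"Bak\u0131"`, which is harder for a model or a human to read back out of the tool response.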