# list_companies
Retrieve and paginate through all companies stored in Freshdesk to streamline company data management and support operations.
## Instructions

List all companies in Freshdesk with pagination support.
## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number to fetch (must be ≥ 1) | 1 |
| per_page | No | Number of companies per page (1–100) | 30 |
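The handler validates both parameters before calling the Freshdesk API (page ≥ 1, per_page between 1 and 100). A minimal standalone sketch of that validation; the helper name `validate_paging` is hypothetical, but the bounds and error messages match the handler:

```python
from typing import Any, Dict, Optional

def validate_paging(page: int = 1, per_page: int = 30) -> Optional[Dict[str, Any]]:
    """Return an error dict if the paging parameters are out of range, else None.

    Mirrors the bounds enforced by the list_companies handler:
    page >= 1 and 1 <= per_page <= 100.
    """
    if page < 1:
        return {"error": "Page number must be greater than 0"}
    if per_page < 1 or per_page > 100:
        return {"error": "Page size must be between 1 and 100"}
    return None

print(validate_paging())        # None (defaults are valid)
print(validate_paging(page=0))  # {'error': 'Page number must be greater than 0'}
```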
## Implementation Reference
- `src/freshdesk_mcp/server.py:1057-1103` (handler) — the primary handler function for the `list_companies` MCP tool. It is decorated with `@mcp.tool()`, which both defines and registers the tool. It fetches companies from the Freshdesk API endpoint `/api/v2/companies` with pagination, input validation, error handling, and Link-header parsing.

  ```python
  async def list_companies(page: Optional[int] = 1, per_page: Optional[int] = 30) -> Dict[str, Any]:
      """List all companies in Freshdesk with pagination support."""
      # Validate input parameters
      if page < 1:
          return {"error": "Page number must be greater than 0"}
      if per_page < 1 or per_page > 100:
          return {"error": "Page size must be between 1 and 100"}

      url = f"https://{FRESHDESK_DOMAIN}/api/v2/companies"
      params = {
          "page": page,
          "per_page": per_page
      }
      headers = {
          "Authorization": f"Basic {base64.b64encode(f'{FRESHDESK_API_KEY}:X'.encode()).decode()}",
          "Content-Type": "application/json"
      }

      async with httpx.AsyncClient() as client:
          try:
              response = await client.get(url, headers=headers, params=params)
              response.raise_for_status()

              # Parse pagination from Link header
              link_header = response.headers.get('Link', '')
              pagination_info = parse_link_header(link_header)

              companies = response.json()
              return {
                  "companies": companies,
                  "pagination": {
                      "current_page": page,
                      "next_page": pagination_info.get("next"),
                      "prev_page": pagination_info.get("prev"),
                      "per_page": per_page
                  }
              }
          except httpx.HTTPStatusError as e:
              return {"error": f"Failed to fetch companies: {str(e)}"}
          except Exception as e:
              return {"error": f"An unexpected error occurred: {str(e)}"}
  ```
- `src/freshdesk_mcp/server.py:21-52` (helper) — utility function used by `list_companies` (and other tools) to parse the HTTP `Link` header for pagination information (next/prev pages). Called within the handler to enrich the response with pagination metadata.

  ```python
  def parse_link_header(link_header: str) -> Dict[str, Optional[int]]:
      """Parse the Link header to extract pagination information.

      Args:
          link_header: The Link header string from the response

      Returns:
          Dictionary containing next and prev page numbers
      """
      pagination = {
          "next": None,
          "prev": None
      }

      if not link_header:
          return pagination

      # Split multiple links if present
      links = link_header.split(',')
      for link in links:
          # Extract URL and rel
          match = re.search(r'<(.+?)>;\s*rel="(.+?)"', link)
          if match:
              url, rel = match.groups()
              # Extract page number from URL
              page_match = re.search(r'page=(\d+)', url)
              if page_match:
                  page_num = int(page_match.group(1))
                  pagination[rel] = page_num

      return pagination
  ```
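As a quick sanity check, the Link-header parsing can be exercised standalone. The block below is a self-contained copy of the helper's logic (with the `re` and typing imports the excerpt relies on) fed a representative Freshdesk-style header; the example domain is made up:

```python
import re
from typing import Dict, Optional

def parse_link_header(link_header: str) -> Dict[str, Optional[int]]:
    """Map rel="next"/rel="prev" links in a Link header to page numbers."""
    pagination: Dict[str, Optional[int]] = {"next": None, "prev": None}
    if not link_header:
        return pagination
    for link in link_header.split(','):
        match = re.search(r'<(.+?)>;\s*rel="(.+?)"', link)
        if match:
            url, rel = match.groups()
            # Caveat: re.search finds the FIRST "page=" in the URL, so a URL
            # where "per_page" precedes "page" would be misread. The URLs
            # shown here place "page" first.
            page_match = re.search(r'page=(\d+)', url)
            if page_match:
                pagination[rel] = int(page_match.group(1))
    return pagination

header = ('<https://example.freshdesk.com/api/v2/companies?page=3&per_page=30>; rel="next", '
          '<https://example.freshdesk.com/api/v2/companies?page=1&per_page=30>; rel="prev"')
print(parse_link_header(header))  # {'next': 3, 'prev': 1}
```

An empty or missing `Link` header (e.g. a single page of results) yields `{"next": None, "prev": None}`, which the handler surfaces as `next_page`/`prev_page` of `None`.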