# get_tax_article
Retrieve official French tax law articles from legifrance.gouv.fr by specifying article IDs to access current legal information.
## Instructions
Get information about a tax law article from legifrance.gouv.fr
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| article_id | Yes | Article identifier (e.g., `200`, `4B`) | |
| ctx | No | MCP context for logging | `None` |
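
A minimal sketch of invoking this tool from a Python MCP client over stdio, using the official `mcp` SDK. The launch command (`python -m french_tax_mcp.server`) is an assumption and should be replaced with however the server is actually started:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumed launch command for the server; adjust to the real entry point.
    server = StdioServerParameters(command="python", args=["-m", "french_tax_mcp.server"])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the tool with an article identifier such as '200'.
            result = await session.call_tool("get_tax_article", {"article_id": "200"})
            print(result.content)


asyncio.run(main())
```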
## Implementation Reference
- french_tax_mcp/server.py:496-499 (registration): registers the `get_tax_article` MCP tool with the `@mcp.tool` decorator, setting its name and description.

  ```python
  @mcp.tool(
      name="get_tax_article",
      description="Get information about a tax law article from legifrance.gouv.fr",
  )
  ```
- french_tax_mcp/server.py:500-526 (handler): the wrapper function registered as the MCP tool. It logs the request via the MCP context, delegates to the scraper-level `get_tax_article` function, and converts exceptions into an error payload.

  ```python
  async def get_tax_article_wrapper(
      article_id: str,
      ctx: Optional[Context] = None,
  ) -> Optional[Dict]:
      """Get information about a tax law article from legifrance.gouv.fr.

      Args:
          article_id: Article identifier (e.g., '200', '4B')
          ctx: MCP context for logging

      Returns:
          Dict: Dictionary containing article information
      """
      try:
          if ctx:
              await ctx.info(f"Getting tax article information for {article_id}")

          result = await get_tax_article(article_id)
          return result
      except Exception as e:
          if ctx:
              await ctx.error(f"Failed to get tax article: {e}")
          return {
              "status": "error",
              "message": f"Error getting tax article: {str(e)}",
          }
  ```
- LegalScraper.get_tax_article (core scraping logic): builds the Legifrance URL for the article, fetches the page, parses the HTML, extracts the article information, and formats the result. A sketch of the helper methods it relies on appears after this list.

  ```python
  async def get_tax_article(self, article_id: str) -> Dict:
      """Scrape information about a tax law article from legifrance.gouv.fr.

      Args:
          article_id: Article identifier (e.g., '200', '4B')

      Returns:
          Dictionary containing information about the article
      """
      logger.info(f"Scraping information for article {article_id}")

      try:
          # Construct URL
          url = f"/codes/id/LEGITEXT000006069577/LEGIARTI000{article_id}"

          # Get the page
          response = await self.get_page(url)

          # Parse HTML
          soup = self.parse_html(response.text)

          # Extract article information
          article_info = self._extract_article_info(soup, article_id)

          return self.format_result(
              status="success",
              data=article_info,
              message=f"Successfully retrieved information for article {article_id}",
              source_url=f"{BASE_URL}{url}",
          )
      except Exception as e:
          logger.error(f"Error scraping article information: {e}")
          return self.format_result(
              status="error",
              message=f"Failed to retrieve article information: {str(e)}",
              data={"article": article_id},
              error=e,
          )
  ```
- Module-level get_tax_article (convenience function): delegates to the singleton `LegalScraper` instance's `get_tax_article` method. A direct-usage example appears after this list.

  ```python
  async def get_tax_article(article_id: str) -> Dict:
      """Scrape information about a tax law article from legifrance.gouv.fr.

      Args:
          article_id: Article identifier (e.g., '200', '4B')

      Returns:
          Dictionary containing information about the article
      """
      return await legal_scraper.get_tax_article(article_id)
  ```
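
The `LegalScraper.get_tax_article` method shown above depends on `get_page`, `parse_html`, `_extract_article_info`, and `format_result` helpers that are not reproduced here. As a rough illustration only, assuming the scraper is built on `httpx` and BeautifulSoup (neither is confirmed by the snippets above), the fetch-and-parse helpers could look something like this:

```python
import httpx
from bs4 import BeautifulSoup

# Assumed value of the BASE_URL constant referenced in the scraper.
BASE_URL = "https://www.legifrance.gouv.fr"


class BaseScraperSketch:
    """Hypothetical base class illustrating the contract get_tax_article relies on."""

    async def get_page(self, path: str) -> httpx.Response:
        # Fetch a page relative to BASE_URL and fail loudly on HTTP errors.
        async with httpx.AsyncClient(base_url=BASE_URL, timeout=30.0) as client:
            response = await client.get(path)
            response.raise_for_status()
            return response

    def parse_html(self, html: str) -> BeautifulSoup:
        # Parse raw HTML into a BeautifulSoup tree for downstream extraction.
        return BeautifulSoup(html, "html.parser")
```

The real base class may differ (shared client, retries, caching); the sketch only shows the contract the method relies on: an awaitable HTTP response plus a parsed soup object.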
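
For completeness, a hypothetical direct use of the module-level convenience function, e.g. from a test or another service. The import path `french_tax_mcp.scrapers.legal_scraper` is assumed, not confirmed:

```python
import asyncio

# Assumed import path for the module-level convenience function.
from french_tax_mcp.scrapers.legal_scraper import get_tax_article


async def main() -> None:
    # Fetch CGI article 200 and inspect the formatted result dictionary.
    result = await get_tax_article("200")
    print(result.get("status"), result.get("message"))


asyncio.run(main())
```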