list_conversions
Find previously converted Office documents by viewing cached conversion records with output paths and extracted image counts.
Instructions
List all cached document conversions.
Shows all documents that have been converted, including their output paths and number of extracted images. Useful for finding previously converted files.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
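Since the tool takes no arguments, a call only needs the tool name and an empty argument object. The snippet below is a minimal sketch using the official `mcp` Python client over stdio; the `officereader-mcp` launch command is an assumption and may differ in your setup.

```python
# Sketch: invoking list_conversions from an MCP client over stdio.
# The server launch command ("officereader-mcp") is an assumption.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="officereader-mcp")  # hypothetical entry point
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The tool takes no arguments, so pass an empty dict.
            result = await session.call_tool("list_conversions", {})
            for item in result.content:
                print(item.text)


asyncio.run(main())
```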
Implementation Reference
- src/officereader_mcp/server.py:245-291 (handler): the tool handler for 'list_conversions', which retrieves cache information from the converter and formats a detailed textual summary including total counts, file listings, and raw JSON data (a client-side parsing sketch follows this reference list).

```python
elif name == "list_conversions":
    cache_info = converter.get_cache_info()

    # Build a formatted summary
    summary_lines = [
        "=" * 50,
        " OfficeReader-MCP Cache Information",
        "=" * 50,
        "",
        cache_notice,
        f"Output directory: {cache_info.get('output_dir', 'N/A')}",
        "",
        f"Total cached conversions: {cache_info['total_conversions']}",
        f"Total cache size: {cache_info.get('total_size_human', 'N/A')}",
        "",
    ]

    if cache_info['conversions']:
        summary_lines.append("-" * 50)
        summary_lines.append(" Cached Documents")
        summary_lines.append("-" * 50)
        for i, conv in enumerate(cache_info['conversions'], 1):
            summary_lines.append(f"\n[{i}] {conv['name']}")
            summary_lines.append(f" Directory: {conv['path']}")
            if conv['markdown_files']:
                summary_lines.append(f" Markdown: {conv['markdown_files'][0]}")
            summary_lines.append(f" Images: {conv['image_count']} files")
            summary_lines.append(f" Size: {conv.get('size_human', 'N/A')}")
            if conv.get('modified'):
                summary_lines.append(f" Modified: {conv['modified']}")
    else:
        summary_lines.append("-" * 50)
        summary_lines.append("No cached conversions found.")
        summary_lines.append("")
        summary_lines.append("To convert a document, use the 'convert_document' tool:")
        summary_lines.append(" file_path: <path to your .docx or .doc file>")

    summary_lines.append("")
    summary_lines.append("=" * 50)
    summary_lines.append("\n--- Raw JSON Data ---")

    return [TextContent(
        type="text",
        text="\n".join(summary_lines) + "\n" + json.dumps(cache_info, ensure_ascii=False, indent=2)
    )]
```
- src/officereader_mcp/server.py:117-128 (registration): registration of the 'list_conversions' tool in the list_tools handler, including its description and empty input schema.

```python
Tool(
    name="list_conversions",
    description="""List all cached document conversions.

Shows all documents that have been converted, including their output paths and number of extracted images. Useful for finding previously converted files.""",
    inputSchema={
        "type": "object",
        "properties": {},
    },
),
```
- Core helper method in DocxConverter that scans the cache output directory, lists conversion subdirectories, computes sizes and metadata for markdown files and images, and returns structured cache information. OfficeConverter proxies this method (a usage sketch follows this reference list).

```python
def get_cache_info(self) -> dict:
    """Get information about cached conversions."""
    conversions = []
    total_size = 0

    if not self.output_dir.exists():
        return {
            "cache_dir": str(self.cache_dir),
            "output_dir": str(self.output_dir),
            "conversions": [],
            "total_conversions": 0,
            "total_size_bytes": 0,
            "total_size_human": "0 B",
        }

    for item in self.output_dir.iterdir():
        if item.is_dir():
            md_files = list(item.glob("*.md"))
            images_dir = item / "images"
            images = list(images_dir.glob("*")) if images_dir.exists() else []

            # Calculate sizes
            md_size = sum(f.stat().st_size for f in md_files if f.exists())
            img_size = sum(f.stat().st_size for f in images if f.exists())
            dir_size = md_size + img_size
            total_size += dir_size

            # Get modification time
            mod_time = ""
            if md_files:
                mod_time = datetime.fromtimestamp(md_files[0].stat().st_mtime).isoformat()

            conversions.append({
                "name": item.name,
                "path": str(item),
                "markdown_files": [str(f) for f in md_files],
                "image_count": len(images),
                "image_paths": [str(f) for f in images],
                "size_bytes": dir_size,
                "size_human": self._human_readable_size(dir_size),
                "modified": mod_time,
            })

    return {
        "cache_dir": str(self.cache_dir),
        "output_dir": str(self.output_dir),
        "conversions": conversions,
        "total_conversions": len(conversions),
        "total_size_bytes": total_size,
        "total_size_human": self._human_readable_size(total_size),
    }
```
- Proxy method in OfficeConverter that delegates cache info retrieval to the underlying DocxConverter instance.

```python
def get_cache_info(self) -> dict:
    """Get information about cached conversions."""
    return self._docx_converter.get_cache_info()
```
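The handler above returns a single text block: the human-readable summary, a '--- Raw JSON Data ---' marker, then the raw cache JSON. The sketch below splits the two parts on the client side; the helper name is illustrative, not part of the codebase.

```python
# Sketch: separating the readable summary from the raw JSON that the
# list_conversions handler appends after the "--- Raw JSON Data ---" marker.
import json


def split_list_conversions_output(text: str) -> tuple[str, dict]:
    """Return (summary, cache_info) parsed from the tool's text payload."""
    marker = "--- Raw JSON Data ---"
    summary, _, raw_json = text.partition(marker)
    return summary.rstrip(), json.loads(raw_json)
```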
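Because get_cache_info() returns a plain dict, callers outside the MCP handler can reuse it directly. The sketch below prints a compact report from the keys documented above; 'converter' stands for any object exposing this method, such as an OfficeConverter instance.

```python
# Illustrative sketch: summarising the dict returned by get_cache_info().
# `converter` is any object exposing get_cache_info() with the keys shown above.
def print_cache_summary(converter) -> None:
    info = converter.get_cache_info()
    print(f"{info['total_conversions']} conversions, {info['total_size_human']} total")
    for conv in info["conversions"]:
        print(f"- {conv['name']}: {conv['image_count']} images ({conv['size_human']})")
```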