list_data_files
Discover available CSV and SQLite files in your project data directory to begin analyzing tabular datasets with the MCP Tabular Data Analysis Server.
Instructions
List available data files in the project data directory.
Args:
- data_dir: Relative path to data directory (default: "data")

Returns:
- Dictionary containing list of available CSV and SQLite files
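To see how a client would call this tool, here is a minimal sketch using the official MCP Python SDK over stdio. The launch command (`python -m mcp_tabular.server`) is an assumption about how the server is started; adjust it to match your actual entry point.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command for the server; replace with the real entry point.
server_params = StdioServerParameters(
    command="python",
    args=["-m", "mcp_tabular.server"],
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "data_dir" is optional and defaults to "data".
            result = await session.call_tool(
                "list_data_files",
                arguments={"data_dir": "data"},
            )
            print(result.content)

asyncio.run(main())
```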
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| data_dir | No | Relative path to the data directory | data |
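The table above corresponds to the JSON Schema the server advertises for this tool. The sketch below is reconstructed from the function signature, not copied from the server, so the generated schema may carry extra metadata such as titles or descriptions.

```python
# Approximate input schema derived from the signature
# list_data_files(data_dir: str = "data"); the server-generated
# schema may differ in details.
LIST_DATA_FILES_INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "data_dir": {
            "type": "string",
            "default": "data",
        },
    },
    "required": [],  # data_dir is optional
}
```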
Implementation Reference
- src/mcp_tabular/server.py:536-585 (handler): the handler function for the 'list_data_files' MCP tool. It scans the specified data directory for CSV and SQLite files, collects metadata (size, relative path), peeks at CSV column headers, and returns a structured list separated by file type.

```python
def list_data_files(data_dir: str = "data") -> dict[str, Any]:
    """
    List available data files in the project data directory.

    Args:
        data_dir: Relative path to data directory (default: "data")

    Returns:
        Dictionary containing list of available CSV and SQLite files
    """
    data_path = _resolve_path(data_dir)

    if not data_path.exists():
        return {
            "data_directory": str(data_path),
            "exists": False,
            "files": []
        }

    csv_files = []
    db_files = []

    for file_path in sorted(data_path.iterdir()):
        if file_path.is_file():
            suffix = file_path.suffix.lower()
            file_info = {
                "name": file_path.name,
                "path": str(file_path.relative_to(_PROJECT_ROOT)),
                "size_bytes": file_path.stat().st_size,
            }

            if suffix == ".csv":
                # Try to get basic info about CSV
                try:
                    df = pd.read_csv(str(file_path), nrows=0)
                    file_info["columns"] = df.columns.tolist()
                    file_info["column_count"] = len(df.columns)
                except Exception:
                    pass
                csv_files.append(file_info)
            elif suffix in (".db", ".sqlite", ".sqlite3"):
                db_files.append(file_info)

    return {
        "data_directory": str(data_path.relative_to(_PROJECT_ROOT)),
        "absolute_path": str(data_path),
        "csv_files": csv_files,
        "sqlite_files": db_files,
        "total_files": len(csv_files) + len(db_files),
    }
```
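Based on the return statement above, a response for a directory containing one CSV and one SQLite file would look roughly like the following. The file names, sizes, and absolute path are made up for illustration.

```python
# Illustrative result only; names, sizes, and paths are hypothetical.
example_result = {
    "data_directory": "data",
    "absolute_path": "/home/user/project/data",  # depends on the project location
    "csv_files": [
        {
            "name": "sales.csv",
            "path": "data/sales.csv",
            "size_bytes": 10_240,
            "columns": ["order_id", "region", "amount"],
            "column_count": 3,
        },
    ],
    "sqlite_files": [
        {
            "name": "warehouse.db",
            "path": "data/warehouse.db",
            "size_bytes": 524_288,
        },
    ],
    "total_files": 2,
}
```

Note that CSV headers are read with `pd.read_csv(..., nrows=0)`, which parses only the header row, so listing a directory of large CSV files stays cheap.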