Excel Analyser MCP
A Node.js MCP server for reading and analyzing Excel (.xlsx), CSV (.csv), and JSON (.json) files. It supports multiple transport protocols (stdio, HTTP, SSE) and is designed for scalable, chunked, and column/field-specific data access, making it ideal for AI agents and automation workflows that need to process large datasets efficiently.
Quick Start - Configuration
Excel Analyser MCP supports multiple transport protocols: stdio (npm/CLI), HTTP streamable, and SSE.
Ready-to-Use HTTP Server (Recommended)
The fastest way to get started! Use our deployed server without any installation:
MCP Client Configuration (HTTP - Ready to Use):
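A minimal `mcp.json` sketch; the URL below is a placeholder for the project's actual deployed endpoint, and the server key name is illustrative:

```json
{
  "mcpServers": {
    "excel-analyser-mcp": {
      "url": "https://<deployed-server>/mcp"
    }
  }
}
```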
That's it! No installation required. Start analyzing files immediately.
Example Usage Prompt (HTTP - Use Cloud URLs):
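For example (an illustrative prompt; the URL is a placeholder for any publicly accessible raw file):

> Analyze the CSV at https://raw.githubusercontent.com/<user>/<repo>/main/data.csv and give me the column names and the first few rows.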
Important for HTTP: Use cloud URLs (GitHub raw, Google Drive public links, etc.) since the server runs remotely and cannot access your local files.
NPM/Stdio Transport (Self-hosted)
Perfect for MCP clients like Claude Desktop, Cursor, and other CLI-based integrations.
mcp.json Configuration:
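A sketch assuming the server is published on npm as `excel-analyser-mcp` (adjust the package name if it differs):

```json
{
  "mcpServers": {
    "excel-analyser-mcp": {
      "command": "npx",
      "args": ["-y", "excel-analyser-mcp", "--stdio"]
    }
  }
}
```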
Example Usage Prompt (Stdio - Use Local Paths):
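For example (an illustrative prompt):

> Analyze the file at /home/john/documents/dummy_excel_file.xlsx and give me the column names and the first few rows.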
Important for Stdio: Use absolute local file paths since the server runs on your machine and can access your local files directly.
HTTP Transport (Self-hosted)
Ideal for web applications, REST API integrations, and serverless deployments.
Start HTTP Server:
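A sketch; the entry file name is an assumption, and the `--http` flag mirrors the transport options described above:

```bash
node index.js --http
```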
MCP Client Configuration (HTTP):
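A sketch assuming the self-hosted server listens on port 3000 and serves the MCP endpoint at `/mcp` (both assumptions):

```json
{
  "mcpServers": {
    "excel-analyser-mcp": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```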
Example Usage Prompt (Self-hosted HTTP - Use Local or Cloud URLs):
Important for Self-hosted HTTP: You can use local absolute paths or cloud URLs since your server can access both local files and remote URLs.
SSE Transport (Self-hosted)
For real-time streaming applications (deprecated but still supported).
Start SSE Server:
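As with HTTP, the entry file name and flag are assumptions:

```bash
node index.js --sse
```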
MCP Client Configuration (SSE):
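A sketch assuming an `/sse` endpoint on the same port (an assumption):

```json
{
  "mcpServers": {
    "excel-analyser-mcp": {
      "url": "http://localhost:3000/sse"
    }
  }
}
```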
Development Scripts (Self-hosted)
What's New in v2.1.0
Multi-Transport Support: Now supports stdio (npm), HTTP streamable, and SSE transports for maximum flexibility
HTTP Transport: Perfect for web applications and REST API integrations
SSE Transport: Real-time streaming capabilities for advanced use cases
Easy Configuration: Simple command-line arguments to choose your preferred transport
What's New in v2.0.0
New `query_json` tool: A powerful new tool for efficiently searching large JSON files based on field values.
Efficient Streaming: All JSON tools (`read_json`, `query_json`, `get_json_chunk`) have been re-architected to use streaming. This means they can process gigabyte-sized files with minimal memory usage, preventing crashes and ensuring scalability.
Features
Multi-Transport Support: Choose between stdio (npm), HTTP streamable, or SSE transports.
File Reading: Read Excel/CSV/JSON files and output all or selected columns/fields as JSON.
Efficient Streaming: Handle multi-gigabyte JSON files with constant, low memory usage.
Powerful JSON Querying: Quickly search and filter large JSON files without loading the entire file into memory.
Chunked Access: Process large files iteratively by fetching data in configurable chunks.
Column/Field Filtering: Extract only the columns or fields you need.
MCP Server Integration: Expose tools for AI agents and automation.
Getting Started
Prerequisites
Node.js (v18 or higher recommended)
Installation
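Assuming the npm package name matches the project title (verify the name before installing):

```bash
npm install -g excel-analyser-mcp
```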
Running the MCP Server
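For example, via npx (package name assumed as above; `--stdio` selects the stdio transport):

```bash
npx -y excel-analyser-mcp --stdio
```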
Or configure your MCP agent to launch this file with Node.js and --stdio.
MCP Tools
1. read_excel
Description: Reads an Excel or CSV file and returns a preview (first 100 rows) and metadata for large files, or the full data for small files.
Parameters:
`filePath` (string, required): Path to the Excel or CSV file on disk (.xlsx or .csv)
`columns` (array of strings, optional): Columns to include in the output. If not specified, all columns are included.
Returns:
For large files: `{ preview: [...], totalRows, columns, message }`
For small files: Full data as an array
Example Request:
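An illustrative call; the exact envelope depends on your MCP client, and the column names are hypothetical:

```json
{
  "tool": "read_excel",
  "arguments": {
    "filePath": "/home/john/documents/dummy_excel_file.xlsx",
    "columns": ["Name", "Email"]
  }
}
```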
2. get_chunk
Description: Fetches a chunk of rows from a CSV or Excel file, with optional column filtering. Useful for processing large files in batches.
Parameters:
`filePath` (string, required): Path to the Excel or CSV file on disk (.xlsx or .csv)
`columns` (array of strings, optional): Columns to include in the output
`start` (integer, optional, default 0): Row index to start from (0-based)
`limit` (integer, optional, default 1000): Number of rows to return in the chunk
Returns:
`{ chunk: [...], start, limit, totalRows }`
Example Request:
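An illustrative call with hypothetical values:

```json
{
  "tool": "get_chunk",
  "arguments": {
    "filePath": "/home/john/documents/dummy_excel_file.xlsx",
    "start": 0,
    "limit": 1000
  }
}
```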
Example Response:
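The shape follows the Returns contract above; row data and counts are hypothetical:

```json
{
  "chunk": [
    { "Name": "Alice", "Email": "alice@example.com" },
    { "Name": "Bob", "Email": "bob@example.com" }
  ],
  "start": 0,
  "limit": 1000,
  "totalRows": 50000
}
```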
3. read_json
Description: Efficiently reads a large JSON file to provide a quick preview (first 100 entries) and metadata without loading the entire file into memory. This is the recommended first step for analyzing a new JSON file.
Parameters:
`filePath` (string, required): Path to the JSON file on disk (.json)
`fields` (array of strings, optional): Fields to include in the output. If not specified, all fields are included.
Returns:
For large files (>1000 entries): `{ preview: [...], totalEntries, fields, message }`
For small files: Full data as an array
Example Request:
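An illustrative call; the field names are hypothetical:

```json
{
  "tool": "read_json",
  "arguments": {
    "filePath": "/home/john/data/employees.json",
    "fields": ["name", "department"]
  }
}
```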
Example Response (large file):
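Shape per the Returns contract; entries, counts, and message text are hypothetical:

```json
{
  "preview": [
    { "name": "Alice", "department": "Engineering" }
  ],
  "totalEntries": 25000,
  "fields": ["name", "department"],
  "message": "Showing first 100 of 25000 entries. Use get_json_chunk to read more."
}
```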
4. query_json
Description: Performs a fast, memory-efficient search on a large JSON file. It streams the file and returns all entries that match the specified query, up to a limit of 1000 results. This is the ideal tool for finding specific data within a large dataset.
Parameters:
`filePath` (string, required): Path to the JSON file on disk (.json)
`query` (object, required): The query to execute on the JSON data, with:
  `field` (string): The field to query (e.g., 'trading_symbol').
  `operator` (enum): The query operator. Can be `contains`, `equals`, `startsWith`, or `endsWith`.
  `value` (string): The value to match against.
Returns:
`{ matches: [...], matchCount, totalEntriesScanned, message }`
Example Request:
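An illustrative call using the `status`/`active` example from the tool guide below; the file path is hypothetical:

```json
{
  "tool": "query_json",
  "arguments": {
    "filePath": "/home/john/data/users.json",
    "query": {
      "field": "status",
      "operator": "equals",
      "value": "active"
    }
  }
}
```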
Example Response:
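Shape per the Returns contract; the matches array is truncated and all values are hypothetical:

```json
{
  "matches": [
    { "id": 42, "status": "active" },
    { "id": 77, "status": "active" }
  ],
  "matchCount": 312,
  "totalEntriesScanned": 25000,
  "message": "Found 312 matching entries."
}
```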
5. get_json_chunk
Description: Fetches a specific chunk of entries from a JSON file. This tool is designed for iterative analysis, where you need to process every entry in the file sequentially, one chunk at a time. It uses efficient streaming to access the requested chunk without re-reading the whole file.
Parameters:
`filePath` (string, required): Path to the JSON file on disk (.json)
`fields` (array of strings, optional): Fields to include in the output
`start` (integer, optional, default 0): Entry index to start from (0-based)
`limit` (integer, optional, default 1000): Number of entries to return in the chunk
Returns:
`{ chunk: [...], start, limit, totalEntries }`
Example Request:
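An illustrative call with hypothetical values, using `fields` to trim each entry:

```json
{
  "tool": "get_json_chunk",
  "arguments": {
    "filePath": "/data/NSE.json",
    "fields": ["trading_symbol"],
    "start": 1000,
    "limit": 1000
  }
}
```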
Example Response:
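Shape per the Returns contract; values are hypothetical:

```json
{
  "chunk": [
    { "trading_symbol": "TITAN" },
    { "trading_symbol": "RELIANCE" }
  ],
  "start": 1000,
  "limit": 1000,
  "totalEntries": 80000
}
```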
How to Choose the Right JSON Tool
Use this guide to select the most efficient tool for your task:
To explore a new JSON file:
First: Use `read_json`. It will give you the total number of entries, all available fields, and a preview of the first 100 entries.
To find specific data:
Use `query_json`. It's the fastest and most memory-efficient way to search for entries that match a specific condition (e.g., find all users where `status` is `active`).
To process every entry:
Use `get_json_chunk`. This is for when you need to perform an action on every single entry in the file, such as categorizing support tickets or performing a complex calculation. Call it in a loop, incrementing the `start` parameter, until you have processed all `totalEntries` (see the sketch below).
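A minimal client-side sketch of that loop; `callTool` is a hypothetical stand-in for however your MCP client invokes tools:

```js
// Iterate over every entry in a large JSON file, one chunk at a time.
// callTool(name, args) is a placeholder for your MCP client's tool-call API.
const limit = 1000;
let start = 0;
let totalEntries = Infinity;

while (start < totalEntries) {
  const { chunk, totalEntries: total } = await callTool("get_json_chunk", {
    filePath: "/home/john/data/employees.json",
    start,
    limit,
  });
  totalEntries = total;
  for (const entry of chunk) {
    // process each entry here (e.g., categorize, aggregate, transform)
  }
  start += limit;
}
```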
Usage with AI Agents
Configure your AI agent (e.g., Cursor AI, Copilot) to connect to this MCP server.
Use `read_excel` or `read_json` for a quick preview and metadata.
Use `get_chunk` or `get_json_chunk` to iterate through large files in batches for scalable analysis.
JSON files with more than 1000 entries automatically use pagination for optimal performance.
Example Usage
Here's an example of how you can use this MCP server with an AI agent to analyze files.
Important: The MCP server requires absolute file paths for security and reliability reasons.
Analyzing an Excel/CSV File
Scenario: You want to get a summary of dummy_excel_file.xlsx.
1. Initial Request to AI Agent:
You: Can you analyze the file at `/home/john/documents/dummy_excel_file.xlsx` and give me the column names and the first few rows?
2. AI Agent uses the `read_excel` tool
The agent would make a tool call similar to this:
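A call of roughly this shape (the envelope varies by client):

```json
{
  "tool": "read_excel",
  "arguments": {
    "filePath": "/home/john/documents/dummy_excel_file.xlsx"
  }
}
```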
3. Response from the MCP Server:
If the file is large, the server will return a preview:
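An illustrative preview; column names and counts are hypothetical:

```json
{
  "preview": [
    { "Name": "Alice", "Email": "alice@example.com" },
    { "Name": "Bob", "Email": "bob@example.com" }
  ],
  "totalRows": 12000,
  "columns": ["Name", "Email"],
  "message": "Showing first 100 of 12000 rows. Use get_chunk to read more."
}
```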
Searching a Large JSON File
Scenario: You want to find all stocks with "TITAN" in their trading symbol from a very large JSON file.
1. Initial Request to AI Agent:
You: Can you find all entries in `/data/NSE.json` where the `trading_symbol` contains `TITAN`?
2. AI Agent uses the `query_json` tool:
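An illustrative call built from the scenario above:

```json
{
  "tool": "query_json",
  "arguments": {
    "filePath": "/data/NSE.json",
    "query": {
      "field": "trading_symbol",
      "operator": "contains",
      "value": "TITAN"
    }
  }
}
```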
3. Response from the MCP Server:
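An illustrative response; the matched entries and counts are hypothetical:

```json
{
  "matches": [
    { "trading_symbol": "TITAN", "name": "TITAN COMPANY LIMITED" },
    { "trading_symbol": "TITAN24SEPFUT", "name": "TITAN SEP FUTURES" }
  ],
  "matchCount": 2,
  "totalEntriesScanned": 80000,
  "message": "Found 2 matching entries."
}
```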
Analyzing a JSON File Iteratively
Scenario: You want to analyze a large JSON dataset of employee records, chunk by chunk.
1. Initial Request to AI Agent:
You: Can you analyze the employee data in `/home/john/data/employees.json` and show me the first chunk?
2. AI Agent uses the `get_json_chunk` tool:
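An illustrative call for the first chunk:

```json
{
  "tool": "get_json_chunk",
  "arguments": {
    "filePath": "/home/john/data/employees.json",
    "start": 0,
    "limit": 1000
  }
}
```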
3. Response for Large JSON File:
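An illustrative response; the records and counts are hypothetical:

```json
{
  "chunk": [
    { "name": "Alice", "department": "Engineering", "salary": 95000 },
    { "name": "Bob", "department": "Sales", "salary": 70000 }
  ],
  "start": 0,
  "limit": 1000,
  "totalEntries": 25000
}
```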
Cloud Deployment
Deploy your MCP server to the internet so others can use it via HTTP transport!
Quick Deploy to Railway
Fork/clone this repository
Connect to Railway: railway.app → New Project → Deploy from GitHub
Access your server: `https://your-app.railway.app/mcp`
Use Your Deployed Server
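Point your MCP client at the deployed endpoint (using the Railway URL pattern shown above):

```json
{
  "mcpServers": {
    "excel-analyser-mcp": {
      "url": "https://your-app.railway.app/mcp"
    }
  }
}
```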
Full deployment guide: See docs/DEPLOYMENT.md for detailed instructions, security considerations, and alternative platforms.
Documentation
Additional documentation is available in the docs/ directory:
`docs/DEPLOYMENT.md` - Complete deployment guide for Railway and other cloud platforms
`docs/URL_SUPPORT.md` - Guide for adding URL support to handle cloud-based file access
Notes
Supported file types: `.xlsx`, `.csv`, and `.json` files
JSON file requirements: Must contain an array of objects
Excel files: Only the first sheet is used by default in chunked operations
Automatic pagination: JSON files with >1000 entries automatically use pagination
Chunk sizes: Default 1000 for optimal performance, configurable per request
Error handling: Comprehensive error messages for file not found, invalid formats, etc.
Testing
The project includes test files for both Excel/CSV and JSON functionality in the tests/ directory:
`tests/test-readExcelFile.js` - Test Excel/CSV reading functionality
`tests/test-readJsonFile.js` - Test JSON reading functionality
`tests/test-http-transport.js` - Test HTTP transport connectivity
`tests/test-deployment.js` - Test deployed server functionality
`tests/test-data.json` - Sample JSON file for testing
`tests/dummy_excel_file.xlsx` - Sample Excel file for testing
To run tests:
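The test files listed above can be run directly with Node, for example:

```bash
node tests/test-readExcelFile.js
node tests/test-readJsonFile.js
node tests/test-http-transport.js
```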
Feedback & Support
We value your feedback and are committed to improving Excel Analyser MCP. Here are several ways you can reach out:
Found a Bug?
GitHub Issues: Report bugs here
Please include:
Steps to reproduce the issue
Expected vs actual behavior
File type and size (if applicable)
Error messages or logs
Feature Requests & Enhancements
GitHub Issues: Request features here
GitHub Discussions: Start a discussion for ideas and general feedback
General Feedback
Email: contactakagrawal@gmail.com
GitHub Discussions: General feedback
Contributing
We welcome contributions! Please see our Contributing Guidelines for more information on how to:
Submit pull requests
Report issues
Suggest improvements
Help with documentation
Show Your Support
If you find this project helpful, please consider:
Giving it a star on GitHub
Sharing it with others who might benefit
Contributing to the codebase
License
ISC