Case Study Generator MCP Server
A Model Context Protocol (MCP) server that processes document content and GitHub repositories with Gemma3 to extract structured business insights for case study generation.
Overview
This MCP server provides three main capabilities:
Document Processing - Extract business insights from documents (proposals, case studies, contracts)
GitHub Repository Analysis - Analyze repositories for business value and technical benefits
Company Research - Real-time web research using Tavily + AI analysis for company insights
The server uses Gemma3n (the gemma3n:e4b model) via Ollama for local LLM processing, ensuring privacy and control over your data.
Architecture
Claude Desktop: Handles document retrieval, reasoning, writing, and saving
MCP Server: Processes data with Gemma3 and returns structured insights
Ollama/Gemma3: Local LLM for business analysis and insight extraction
Prerequisites
Required Software
Python 3.11+ - Programming language runtime
Ollama - Local LLM inference server
Gemma3 Model - Language model for analysis
Install Ollama
Visit ollama.ai and install Ollama for your platform.
After installation, pull the Gemma3 model:
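The model tag below matches the one used in the troubleshooting section:

```bash
ollama pull gemma3n:e4b
```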
Verify Ollama is running:
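Either of the following confirms the server is up (assuming Ollama's default port, 11434):

```bash
ollama list                             # the pulled model should appear here
curl http://localhost:11434/api/tags    # or query the local API directly
```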
Installation
Option 1: Using venv (Recommended)
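A typical venv setup, assuming a requirements.txt at the project root (the troubleshooting section references one):

```bash
python3.11 -m venv .venv
source .venv/bin/activate        # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```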
Option 2: Using Poetry
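With Poetry the equivalent is (server.py is an assumed entry point; substitute the actual script name):

```bash
poetry install
poetry run python server.py
```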
Configuration
Environment Variables (Optional)
Create a .env file in the project root:
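A sketch of its contents — TAVILY_API_KEY is named later in this README, while GITHUB_TOKEN is an assumed name (match whatever variable the server code actually reads):

```bash
# Optional: raises GitHub API rate limits (variable name assumed)
GITHUB_TOKEN=your_github_token_here
# Optional: enables real web research for company insights
TAVILY_API_KEY=your_key_here
```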
GitHub Token Setup
For better GitHub API rate limits, create a personal access token:
Go to GitHub Settings → Developer settings → Personal access tokens
Generate a new token with the public_repo scope
Add it to your .env file or set it as an environment variable
Tavily API Setup (For Company Research)
For real company research capabilities, get a Tavily API key:
Sign up at tavily.com
Get your API key from the dashboard
Add TAVILY_API_KEY=your_key_here to your .env file
Note: Without Tavily, company research will use LLM pattern matching only.
Usage
Starting the MCP Server
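A minimal launch, assuming the entry point is a server.py at the project root (check project_config.md for the actual script name):

```bash
python server.py
```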
The server communicates via stdio and will wait for MCP protocol messages.
Integration with Claude Desktop
Add to your Claude Desktop MCP configuration:
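A sketch of the claude_desktop_config.json entry; the server name and paths are placeholders, so point command and args at your actual interpreter and script:

```json
{
  "mcpServers": {
    "case-study-generator": {
      "command": "/path/to/.venv/bin/python",
      "args": ["/path/to/case-study-generator/server.py"]
    }
  }
}
```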
Example Usage in Claude Desktop
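Some illustrative prompts (repository and company names are placeholders), one per tool described below:

"Analyze github.com/acme/ecommerce-platform for business insights"
"Extract business insights from this proposal text: ..."
"Research Acme Corp and summarize the company"

The server returns structured insights that Claude can then turn into a case study.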
MCP Tools
1. process_document_text
Extract business insights from document content.
Parameters:
text (required): Document content text
doc_type (optional): Type of document - "proposal", "case_study", "contract", or "general"
Returns:
2. analyze_github_repo
Analyze GitHub repository for business value.
Parameters:
repo_url(required): GitHub repository URL
Returns:
3. research_company_basic
Real company research using web search + AI analysis.
Parameters:
company_name (required): Name of the company
company_context (optional): Additional context about the company
Returns:
Testing
Manual Testing
Test each tool individually:
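One way to exercise a tool outside Claude Desktop is a small client script. This sketch uses the official mcp Python SDK and assumes the server entry point is server.py:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server as a stdio subprocess, the same way Claude Desktop does.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call one tool; repeat with the other tools and their parameters.
            result = await session.call_tool(
                "analyze_github_repo",
                {"repo_url": "https://github.com/acme/ecommerce-platform"},
            )
            print(result.content)


asyncio.run(main())
```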
Health Check
The server provides a health check resource:
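Reading it from the testing script above might look like this — the health:// URI is a guess, so list the server's resources to find the real one:

```python
# Inside the initialized ClientSession from the testing sketch:
resources = await session.list_resources()                # shows the actual resource URI
health = await session.read_resource("health://status")   # URI is hypothetical
```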
Returns status of all components including Gemma3, GitHub API, and processors.
Troubleshooting
Common Issues
1. Ollama Connection Error
Solution: Ensure Ollama is running (ollama serve) and the model is pulled (ollama pull gemma3n:e4b).
2. GitHub Rate Limit
Solution: Add a GitHub token to your .env file for higher limits.
3. Model Not Found
Solution: Pull the model with ollama pull gemma3n:e4b.
4. Import Errors
Solution: Install dependencies with pip install -r requirements.txt.
5. Company Research Limited
Solution: Get a Tavily API key from tavily.com and add to .env file.
Performance Optimization
Memory Usage: Gemma3n E4B requires ~4-6 GB of RAM for optimal performance
Processing Time: Document processing typically takes 5-15 seconds
Concurrent Requests: Server handles one request at a time by design
Logging
Enable debug logging:
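The README does not pin down the mechanism; a common pattern (an assumption here) is an environment variable read at startup:

```bash
export LOG_LEVEL=DEBUG    # variable name is assumed; match the server's logging setup
python server.py
```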
Project Structure
Development
Contributing
Fork the repository
Create a feature branch
Make changes with tests
Submit a pull request
Code Style
Use Black for formatting: black .
Use isort for imports: isort .
Use mypy for type checking: mypy .
License
MIT License - see LICENSE file for details.
Support
For issues and questions:
Check the troubleshooting section above
Review the project configuration in project_config.md
Open an issue with detailed error logs and steps to reproduce