# Megatest: Three Major Use Cases

## Development, GitHub CI/CD, and End-User Validation

## 🎯 Overview

The megatest framework serves **three distinct audiences** with different needs:

1. 🔧 **DEVELOPERS** - Building and debugging the MCP server
2. 🤖 **GITHUB CI/CD** - Automated quality validation
3. 👀 **END USERS** - Validating their MCPB installation works

Each use case requires different configuration, expectations, and outputs.

---

## 🔧 USE CASE 1: DEVELOPMENT

### Who: Developers building the MCP server

### Goal: Fast feedback during development, easy debugging

### Configuration

```bash
# Location: Local repo (easy to find)
export MEGATEST_LOCATION=local

# Cleanup: Keep failures (for debugging)
export MEGATEST_CLEANUP=on-success

# Level: Quick smoke tests, usually
pytest tests/megatest/ -m megatest_smoke  # 2 min
```

### Typical Workflow

```bash
# 1. Make code changes
vim src/advanced_memory/services/search_service.py

# 2. Run quick smoke test (2 min)
pytest tests/megatest/ -m megatest_smoke

# 3. If it passes, continue development.
#    If it fails, check test-results/megatest/latest/ for artifacts.

# 4. Before committing, run the standard test (10 min)
pytest tests/megatest/ -m megatest_standard

# 5. Commit if tests pass
git commit -m "feat: improved search"
```

### Output Location

```
repo/test-results/megatest/
├── 2025-10-15_09-30-45/   (deleted - passed)
├── 2025-10-15_10-15-20/   (deleted - passed)
└── 2025-10-15_11-45-30/   (KEPT - failed, for debugging)
    ├── test_data/
    ├── artifacts/
    ├── logs/
    │   └── errors.log     ← Check this for the failure reason
    └── report.html
```

### Benefits for Developers

- ✅ **Fast feedback** (2-10 minutes)
- ✅ **Easy debugging** (artifacts preserved on failure)
- ✅ **Local access** (same directory as the code)
- ✅ **No quota waste** (automated, not manual testing)
- ✅ **Confidence** (systematic validation before a PR)

### Developer Commands

```bash
# Quick check (2 min)
just megatest-smoke

# Standard check (10 min)
just megatest-standard

# Full validation (90 min - before major changes)
just megatest-full

# With debugging (keep all artifacts)
MEGATEST_CLEANUP=archive pytest tests/megatest/ -m megatest_full
```
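The `MEGATEST_LOCATION` values used throughout this document (`local`, `hidden`, `visible`) each map to a concrete output directory. As a minimal sketch of how a test fixture might resolve them - the helper name and exact paths are illustrative assumptions, not the shipped implementation:

```python
# Illustrative only -- resolve MEGATEST_LOCATION to a base output directory.
import os
import tempfile
from pathlib import Path

def resolve_results_dir() -> Path:
    location = os.environ.get("MEGATEST_LOCATION", "local")
    if location == "local":
        # Developer mode: alongside the code, e.g. repo/test-results/megatest/
        return Path("test-results") / "megatest"
    if location == "hidden":
        # CI mode: throwaway temp directory (matches the /tmp/megatest_* globs
        # used in the CI workflows below)
        return Path(tempfile.mkdtemp(prefix="megatest_"))
    if location == "visible":
        # End-user mode: easy to find and inspect
        return Path.home() / "Documents" / "megatest-results"
    raise ValueError(f"Unknown MEGATEST_LOCATION: {location!r}")
```

---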
## 🤖 USE CASE 2: GITHUB CI/CD

### Who: Automated quality gates in GitHub Actions

### Goal: Prevent broken code from merging, validate every PR

### Configuration

```bash
# Location: Hidden (temp, clean)
export MEGATEST_LOCATION=hidden

# Cleanup: Immediate (save disk space)
export MEGATEST_CLEANUP=immediate

# Artifacts: Upload to GitHub
# (Artifacts saved BEFORE cleanup via separate step)
```

### GitHub Workflows (Multi-Level)

#### Fast CI (Every Push/PR)

```yaml
# .github/workflows/ci.yml
name: CI - Fast Path

on: [push, pull_request]

jobs:
  megatest-smoke:
    name: Megatest Smoke Test
    runs-on: ubuntu-latest
    timeout-minutes: 5

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: |
          pip install uv
          uv sync --dev

      - name: Run Smoke Test
        env:
          MEGATEST_LOCATION: hidden
          MEGATEST_CLEANUP: immediate
        run: |
          pytest tests/megatest/ -v -m megatest_smoke --tb=short

      - name: Upload artifacts (if test fails)
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: smoke-test-failure
          path: /tmp/megatest_*/artifacts/
```

**Triggers**: Every push, every PR
**Time**: 2-3 minutes
**Fails build**: Yes (blocks merge)

---

#### Standard CI (PR Validation)

```yaml
# .github/workflows/pr-validation.yml
name: PR Validation

on:
  pull_request:
    branches: [main, master]

jobs:
  megatest-standard:
    name: Megatest Standard Test
    runs-on: ubuntu-latest
    timeout-minutes: 15

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: |
          pip install uv
          uv sync --dev

      - name: Run Standard Test
        env:
          MEGATEST_LOCATION: hidden
          MEGATEST_CLEANUP: immediate
        run: |
          pytest tests/megatest/ -v -m megatest_standard --tb=short

      - name: Save artifacts before cleanup
        if: always()
        run: |
          mkdir -p artifacts
          cp -r /tmp/megatest_*/artifacts/* artifacts/ || true

      - name: Upload test report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: megatest-standard-report
          path: artifacts/

      - name: Comment PR with results
        if: always()
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('artifacts/report.html', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `### 🧪 Megatest Standard Results\n\n${report}`
            });
```

**Triggers**: Every PR
**Time**: 10-12 minutes
**Fails build**: Yes (prevents merge of broken code)
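All of these workflows select tests by pytest marker (`-m megatest_smoke`, `-m megatest_standard`, and so on). Those markers have to be registered somewhere, or pytest emits unknown-marker warnings (and errors out under `--strict-markers`). A minimal registration sketch, assuming a `conftest.py` inside the megatest tree (the file location is an assumption):

```python
# tests/megatest/conftest.py (sketch) -- register the megatest level markers
def pytest_configure(config):
    for level in ("smoke", "standard", "full", "integration"):
        config.addinivalue_line(
            "markers",
            f"megatest_{level}: megatest level '{level}'",
        )
```

---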
#### Weekly Full Validation

```yaml
# .github/workflows/weekly-megatest.yml
name: Weekly Full Megatest

on:
  schedule:
    - cron: '0 2 * * 0'  # 2 AM every Sunday
  workflow_dispatch:     # Manual trigger

jobs:
  megatest-full:
    name: Megatest Full Blast
    runs-on: ubuntu-latest
    timeout-minutes: 120

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: |
          pip install uv
          uv sync --dev
          npm install -g docsify-cli  # For Docsify validation

      - name: Run Full Blast Test
        env:
          MEGATEST_LOCATION: hidden
          MEGATEST_CLEANUP: immediate
        run: |
          pytest tests/megatest/ -v -m megatest_full --tb=short --durations=20

      - name: Save all artifacts
        if: always()
        run: |
          mkdir -p full-blast-artifacts
          cp -r /tmp/megatest_*/artifacts/* full-blast-artifacts/ || true

      - name: Upload complete report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: megatest-full-blast-report
          path: full-blast-artifacts/
          retention-days: 90  # Keep for 3 months

      - name: Create release quality badge
        if: success()
        uses: schneegans/dynamic-badges-action@v1.7.0
        with:
          auth: ${{ secrets.GIST_SECRET }}
          gistID: <gist-id>
          filename: megatest-status.json
          label: Megatest
          message: PASSING
          color: green
```

**Triggers**: Weekly (Sunday 2 AM), manual
**Time**: 90-120 minutes
**Creates**: Release quality badge, complete artifacts

---

### Release Validation (Before Publishing)

```yaml
# .github/workflows/release.yml
name: Release

on:
  push:
    tags:
      - 'v*'

jobs:
  megatest-integration:
    name: Pre-Release Megatest
    runs-on: ubuntu-latest
    timeout-minutes: 60

    steps:
      - uses: actions/checkout@v4

      - name: Run Integration Test
        env:
          MEGATEST_LOCATION: hidden
          MEGATEST_CLEANUP: archive  # Keep for release records
        run: |
          pytest tests/megatest/ -v -m megatest_integration

      - name: Upload release validation report
        uses: actions/upload-artifact@v4
        with:
          name: release-validation-report-${{ github.ref_name }}
          path: artifacts/
          retention-days: 365  # Keep for 1 year (release records)

  publish:
    needs: megatest-integration
    if: success()
    # ... publish to PyPI, etc.
```

**Triggers**: Version tag push
**Time**: 45-60 minutes
**Blocks release**: Yes (must pass to publish)

### Benefits for GitHub CI/CD

- ✅ **Automated quality gates** (no manual testing)
- ✅ **Prevents broken releases** (blocks bad code)
- ✅ **Comprehensive validation** (all tools tested)
- ✅ **Artifact preservation** (uploaded to GitHub)
- ✅ **Performance tracking** (trends over time)
- ✅ **PR feedback** (results commented automatically)

---

## 👀 USE CASE 3: DEPLOYED AT USER (MCPB VALIDATION)

### Who: End users who installed your MCPB package

### Goal: **Prove the MCP server works in their environment**

### The Power of User Validation

When a user installs your MCPB (e.g., in Claude Desktop):

```bash
# User installs your MCPB
# Drag-and-drop advanced-memory-mcp.mcpb into Claude Desktop
# Or: install via marketplace
```

**User's question**: *"How do I know this actually works?"*

**Your answer**: *"Run the validation test!"*

```bash
# User runs validation (from their terminal)
advanced-memory validate

# Or via pytest if they have it
pytest tests/megatest/ -m megatest_smoke
```

### Configuration for User Validation

```bash
# Location: Visible (Documents - they can inspect it)
export MEGATEST_LOCATION=visible

# Cleanup: Archive (proof it worked)
export MEGATEST_CLEANUP=archive

# Level: Smoke (quick validation)
# Time: 2 minutes
```
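At this point all three `MEGATEST_CLEANUP` modes have appeared: `on-success` (development), `immediate` (CI), and `archive` (user validation). A rough sketch of the policy they imply - the function name and the archive naming scheme are assumptions inferred from the examples in this document, not the real code:

```python
# Illustrative cleanup policy -- the actual megatest behavior may differ.
import os
import shutil
from datetime import date
from pathlib import Path

def finish_run(run_dir: Path, passed: bool) -> None:
    mode = os.environ.get("MEGATEST_CLEANUP", "on-success")
    if mode == "immediate":
        # CI: always delete; artifacts were already uploaded in a separate step
        shutil.rmtree(run_dir, ignore_errors=True)
    elif mode == "on-success" and passed:
        # Development: keep only failed runs for debugging
        shutil.rmtree(run_dir, ignore_errors=True)
    elif mode == "archive":
        # User validation: keep everything, stamped PASS/FAIL for easy scanning
        verdict = "PASS" if passed else "FAIL"
        stamp = date.today().isoformat()
        run_dir.rename(run_dir.with_name(f"{stamp}_validation_{verdict}"))
```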
### User Experience

#### Before Validation

```
User: "I installed Advanced Memory MCP via MCPB..."
User: "...but how do I know it's working?"
```

#### Run Validation

```bash
$ advanced-memory validate

╔══════════════════════════════════════════════════════════╗
║             ADVANCED MEMORY VALIDATION TEST              ║
╠══════════════════════════════════════════════════════════╣
║  This will test your installation is working correctly  ║
║  Time: ~2 minutes                                        ║
║  Safe: Uses isolated test environment                    ║
╚══════════════════════════════════════════════════════════╝

🔍 Running validation tests...

✅ Test 1/10: Server initializes
✅ Test 2/10: Can create notes
✅ Test 3/10: Can read notes
✅ Test 4/10: Can search notes
✅ Test 5/10: Can update notes
✅ Test 6/10: Can delete notes
✅ Test 7/10: Multi-project works
✅ Test 8/10: Tags work
✅ Test 9/10: Relations work
✅ Test 10/10: Knowledge graph works

╔══════════════════════════════════════════════════════════╗
║          VALIDATION COMPLETE - ALL TESTS PASSED          ║
╠══════════════════════════════════════════════════════════╣
║  Status: ✅ YOUR INSTALLATION IS WORKING PERFECTLY!      ║
║                                                          ║
║  Tests run: 10/10 passed (100%)                          ║
║  Time: 2m 15s                                            ║
║                                                          ║
║  Report saved to:                                        ║
║    C:\Users\sandr\Documents\megatest-results\...         ║
║                                                          ║
║  🎉 Advanced Memory MCP is ready to use!                 ║
╚══════════════════════════════════════════════════════════╝
```

#### After Validation

```
User: "WOW! It actually works! I can see the test results!"
User: *Opens Documents/megatest-results/2025-10-15_validation_PASS/*
User: *Sees generated test notes, artifacts, HTML report*
User: "This MCP server is HIGH QUALITY - they test everything!"
```

### Why This Is Powerful

#### 1. **Proves Quality** (Trust)

- User sees actual test execution
- User sees all tests pass
- User sees generated artifacts
- **Proof of quality** > marketing claims

#### 2. **Troubleshooting** (Support)

```bash
User: "MCP server doesn't work for me"
Support: "Please run: advanced-memory validate"

# If validation passes:
Support: "Your installation is fine; the issue is configuration"

# If validation fails:
Support: "Please send the report from Documents/megatest-results/"
Support: "We can diagnose the exact issue"
```

#### 3. **Demonstrates Capabilities** (Discovery)

- User sees what the MCP server can do
- Test creates sample notes, searches, exports
- **Shows features** through working examples
- User learns by inspecting test artifacts

#### 4. **Environment Validation** (Compatibility)

- Tests in the **user's actual environment**
- Catches environment-specific issues
- Validates that dependencies are installed
- Proves it works on their OS/configuration

---

## 🎯 User Validation Implementation

### CLI Command: `advanced-memory validate`

```python
# src/advanced_memory/cli/commands/validate.py
import os
import subprocess

import typer
from rich.console import Console

app = typer.Typer()
console = Console()

@app.command()
def validate(
    level: str = typer.Option("smoke", help="Test level: smoke, standard, full"),
    keep_artifacts: bool = typer.Option(True, help="Keep test artifacts for inspection"),
    open_report: bool = typer.Option(True, help="Open HTML report after completion"),
):
    """
    Validate your Advanced Memory installation.

    This runs a comprehensive test to ensure everything is working correctly.
    Test data is created in an isolated environment (safe).
    """
    console.print("\n[bold cyan]Advanced Memory Validation Test[/bold cyan]")
    console.print("=" * 60)
    console.print(f"Level: {level}")
    console.print(f"Time: ~{get_level_time(level)}")
    console.print("Safe: Uses isolated test environment")
    console.print("=" * 60 + "\n")

    # Set environment for user validation
    os.environ["MEGATEST_LOCATION"] = "visible"  # Documents folder
    os.environ["MEGATEST_CLEANUP"] = "archive" if keep_artifacts else "immediate"

    # Run pytest with the appropriate marker
    marker = f"megatest_{level}"
    result = subprocess.run(
        ["pytest", "tests/megatest/", "-v", "-m", marker, "--tb=short"],
        capture_output=True,
        text=True
    )

    # Display results
    if result.returncode == 0:
        console.print("\n[bold green]✅ VALIDATION PASSED[/bold green]")
        console.print("\n🎉 Your Advanced Memory installation is working perfectly!")

        if keep_artifacts:
            report_path = find_latest_report()
            console.print(f"\n📊 Report saved to: {report_path}")

            if open_report:
                import webbrowser
                webbrowser.open(str(report_path))
    else:
        console.print("\n[bold red]❌ VALIDATION FAILED[/bold red]")
        console.print("\n⚠️ Some tests failed. Please check the report for details.")

        report_path = find_latest_report()
        console.print(f"\n📊 Report saved to: {report_path}")
        console.print("\nPlease share this report when requesting support.")
```
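The command above calls two helpers, `get_level_time()` and `find_latest_report()`, that are not shown. Hypothetical sketches consistent with the rest of this document (the durations and search path come from the examples here, not from real code):

```python
# Hypothetical helper sketches for the validate command above.
from pathlib import Path

def get_level_time(level: str) -> str:
    """Rough durations as quoted throughout this document."""
    times = {"smoke": "2 minutes", "standard": "10 minutes", "full": "90 minutes"}
    return times.get(level, "unknown")

def find_latest_report() -> Path:
    """Newest HTML report under the user-visible results directory."""
    results = Path.home() / "Documents" / "megatest-results"
    reports = list(results.rglob("*.html"))
    if not reports:
        raise FileNotFoundError(f"No validation reports found under {results}")
    return max(reports, key=lambda p: p.stat().st_mtime)
```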
### User Documentation

#### In README.md

```markdown
## Installation Validation

After installing Advanced Memory MCP, validate it works:

\`\`\`bash
# Quick validation (2 minutes)
advanced-memory validate

# Standard validation (10 minutes)
advanced-memory validate --level=standard

# Full validation (90 minutes)
advanced-memory validate --level=full
\`\`\`

This will:
- ✅ Test all core functionality
- ✅ Create sample notes in an isolated environment
- ✅ Generate an HTML report
- ✅ Save artifacts to Documents/megatest-results/
- ✅ Open the report in a browser (shows what passed/failed)

**Safe**: Uses an isolated test environment, never touches your data.
```

#### In MCPB Package Description

```markdown
## Quality Assurance

This MCP server includes comprehensive validation testing:

- 🧪 **Built-in validation**: `advanced-memory validate`
- ⚡ **Quick smoke test**: 2 minutes
- 🔧 **Standard test**: 10 minutes
- 💥 **Full validation**: 90 minutes

**Prove it works** before you use it!
```

### Benefits for End Users

- ✅ **Confidence** (proof the MCP works in their environment)
- ✅ **Troubleshooting** (clear diagnostics)
- ✅ **Discovery** (see what the MCP can do)
- ✅ **Quality signal** (shows professional development)
- ✅ **Support** (easy to share a diagnostic report)

---

## 📊 Use Case Comparison Matrix

| Aspect | Development | GitHub CI/CD | End User |
|--------|-------------|--------------|----------|
| **Who** | Developers | Automation | Users |
| **Goal** | Fast feedback | Quality gate | Prove it works |
| **Location** | Local (repo) | Hidden (temp) | Visible (Documents) |
| **Cleanup** | On-success | Immediate | Archive |
| **Level** | Smoke/Standard | Smoke/Standard | Smoke |
| **Time** | 2-10 min | 2-10 min | 2 min |
| **Frequency** | Per change | Per PR | Once (install) |
| **Artifacts** | Keep on fail | Upload to GitHub | Always keep |
| **Report** | Optional | Always | Always (proof) |
| **Open report** | No | No | Yes (browser) |

---

## 🎯 Configuration Profiles

### Profile 1: Development (Default)

`tests/megatest/profiles/development.json`:

```json
{
  "location": "local",
  "cleanup": "on-success",
  "default_level": "smoke",
  "open_report_on_failure": true,
  "keep_recent": 3,
  "artifacts": {
    "save_screenshots": false,
    "save_exports": false,
    "save_logs": true
  }
}
```

### Profile 2: GitHub CI/CD

`tests/megatest/profiles/ci.json`:

```json
{
  "location": "hidden",
  "cleanup": "immediate",
  "default_level": "standard",
  "upload_artifacts": true,
  "fail_fast": true,
  "artifacts": {
    "save_screenshots": true,
    "save_exports": true,
    "save_logs": true,
    "compress": true
  }
}
```

### Profile 3: User Validation

`tests/megatest/profiles/user-validation.json`:

```json
{
  "location": "visible",
  "cleanup": "archive",
  "default_level": "smoke",
  "open_report": true,
  "show_progress": true,
  "friendly_messages": true,
  "artifacts": {
    "save_screenshots": true,
    "save_exports": true,
    "save_test_data": true,
    "generate_html_report": true
  }
}
```

### Load Profile

```bash
# Use a specific profile
advanced-memory validate --profile=user-validation

# Or set the environment variable
export MEGATEST_PROFILE=development
pytest tests/megatest/ -m megatest_smoke
```
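One way to wire these profiles up is to load the JSON and project its settings onto the environment variables the suite already reads. A hypothetical loader - the field names mirror the profiles above, but the function itself is an assumption, not the shipped code:

```python
# Hypothetical profile loader -- projects profile settings onto env vars.
import json
import os
from pathlib import Path

PROFILE_DIR = Path("tests/megatest/profiles")

def load_profile(name: str) -> dict:
    profile = json.loads((PROFILE_DIR / f"{name}.json").read_text())
    os.environ["MEGATEST_LOCATION"] = profile["location"]
    os.environ["MEGATEST_CLEANUP"] = profile["cleanup"]
    return profile

# Honoring MEGATEST_PROFILE before a pytest run, defaulting to development:
# load_profile(os.environ.get("MEGATEST_PROFILE", "development"))
```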
---

## 🚀 User Validation Features

### Feature 1: Friendly Output

```python
# User-facing validation uses rich, friendly output
from rich.console import Console
from rich.progress import track

console = Console()

console.print("[bold cyan]Testing your installation...[/bold cyan]")

for test in track(tests, description="Running tests..."):
    result = run_test(test)
    if result.passed:
        console.print(f"✅ {test.name}")
    else:
        console.print(f"❌ {test.name}: {result.error}")
```

### Feature 2: HTML Report (Beautiful)

```html
<!-- megatest_report.html -->
<!DOCTYPE html>
<html>
<head>
  <title>Advanced Memory Validation Report</title>
  <style>
    body { font-family: system-ui; max-width: 1200px; margin: 0 auto; padding: 20px; }
    .pass { color: green; }
    .fail { color: red; }
    .summary { background: #f0f0f0; padding: 20px; border-radius: 8px; }
  </style>
</head>
<body>
  <h1>🎉 Advanced Memory Validation Report</h1>

  <div class="summary">
    <h2>Summary</h2>
    <p><strong>Status:</strong> <span class="pass">✅ ALL TESTS PASSED</span></p>
    <p><strong>Tests Run:</strong> 10/10</p>
    <p><strong>Duration:</strong> 2m 15s</p>
    <p><strong>Timestamp:</strong> 2025-10-15 14:30:45</p>
  </div>

  <h2>Test Results</h2>
  <ul>
    <li class="pass">✅ Server initialization</li>
    <li class="pass">✅ Note creation</li>
    <li class="pass">✅ Note reading</li>
    <!-- ... -->
  </ul>

  <h2>Sample Artifacts</h2>
  <p>Test created sample notes to demonstrate functionality:</p>
  <ul>
    <li><a href="artifacts/test_data/note1.md">Sample Note 1</a></li>
    <li><a href="artifacts/test_data/note2.md">Sample Note 2</a></li>
  </ul>

  <h2>What This Means</h2>
  <p><strong>✅ Your Advanced Memory installation is working correctly!</strong></p>
  <p>You can now use it with confidence. All core features have been validated.</p>
</body>
</html>
```
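A report like this is straightforward to emit from collected results. A minimal writer sketch, assuming results arrive as `(test name, passed)` pairs - the real report generator is presumably templated with more detail:

```python
# Minimal HTML report writer sketch -- not the actual report generator.
from pathlib import Path

def write_report(results: list[tuple[str, bool]], out: Path) -> None:
    items = []
    for name, ok in results:
        cls = "pass" if ok else "fail"
        mark = "✅" if ok else "❌"
        items.append(f'<li class="{cls}">{mark} {name}</li>')
    passed_all = all(ok for _, ok in results)
    status = "✅ ALL TESTS PASSED" if passed_all else "❌ SOME TESTS FAILED"
    html = (
        "<!DOCTYPE html><html><head><title>Validation Report</title></head>"
        f"<body><h1>Validation Report</h1><p><strong>{status}</strong></p>"
        f"<ul>{''.join(items)}</ul></body></html>"
    )
    out.write_text(html, encoding="utf-8")

# e.g. write_report([("Server initializes", True)], Path("megatest_report.html"))
```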
### Feature 3: Sample Artifacts (Learning)

Users can inspect test artifacts to see capabilities:

```
~/Documents/megatest-results/2025-10-15_validation_PASS/
├── test_data/
│   ├── sample_note_1.md   ← User can read these
│   ├── sample_note_2.md
│   └── (10 sample notes showing features)
│
├── artifacts/
│   └── screenshots/       (if Level 5)
│
└── megatest_report.html   ← Opens in browser automatically
```

**User learns**:
- "Oh, this is how notes are formatted!"
- "I can see wikilinks work like this"
- "The search feature found this note"
- "Tags work this way"

### Feature 4: Troubleshooting Support

**When a user reports issues**:

```
User: "Advanced Memory doesn't work for me"
Support: "Please run: advanced-memory validate"
Support: "Send the report from Documents/megatest-results/"

# User sends report
# Support team sees:
- Exact error messages
- Environment details
- Which tests failed
- Complete diagnostics

Support: "I see the issue - you need to install X dependency"
# Or: "Your Python version is incompatible"
# Or: "Your file permissions are wrong"
```

**Benefit**: **10x faster support** with clear diagnostics

---

## 📦 MCPB Package Integration

### Include Validation in MCPB

#### manifest.json

```json
{
  "name": "advanced-memory-mcp",
  "version": "0.13.0",
  "mcpServers": {
    "advanced-memory": {
      "command": "uv",
      "args": ["run", "advanced-memory", "mcp"]
    }
  },
  "scripts": {
    "validate": "uv run pytest tests/megatest/ -m megatest_smoke",
    "validate-full": "uv run pytest tests/megatest/ -m megatest_full"
  }
}
```

#### README for MCPB Users

```markdown
## Validation

After installing this MCPB package, validate it works:

\`\`\`bash
# From the MCPB directory
npm run validate

# Or if you have pytest installed
pytest tests/megatest/ -m megatest_smoke
\`\`\`

This will:
- ✅ Create an isolated test environment
- ✅ Test all core functionality
- ✅ Generate a validation report
- ✅ Open the report in a browser

**Time**: ~2 minutes
**Safe**: Never touches your data
**Proof**: Shows your installation works!
```

### Marketing Value

**In your MCPB listing**:

> **✨ Quality Guaranteed**
>
> This MCP server includes built-in validation testing.
> After installation, run `npm run validate` to prove it works!
>
> - 🧪 10 comprehensive tests
> - ⚡ 2-minute validation
> - 📊 Beautiful HTML report
> - 🛡️ Production-safe (isolated environment)
>
> **Try before you trust!** We're confident enough to let you test everything.

**Marketing impact**: Shows you stand behind your code!

---

## 🎉 The Three Use Cases in Action

### Use Case 1: Developer (You)

```bash
# Monday morning: quick check
cd advanced-memory-mcp
pytest tests/megatest/ -m megatest_smoke  # 2 min

# Tuesday: before a PR
pytest tests/megatest/ -m megatest_standard  # 10 min

# Friday: before a release
pytest tests/megatest/ -m megatest_full  # 90 min
# → Check Documents/megatest-results/ for the complete report
```

### Use Case 2: GitHub (Automation)

```yaml
# Automatically on every push
- name: Smoke Test
  run: pytest tests/megatest/ -m megatest_smoke  # 2 min - blocks bad commits

# Automatically on every PR
- name: Standard Test
  run: pytest tests/megatest/ -m megatest_standard  # 10 min - blocks bad PRs

# Automatically weekly
- name: Full Blast Test
  run: pytest tests/megatest/ -m megatest_full  # 90 min - comprehensive validation

# Automatically on release
- name: Integration Test
  run: pytest tests/megatest/ -m megatest_integration  # 45 min - blocks bad releases
```

### Use Case 3: End User (Customer)

```bash
# After installing the MCPB
$ advanced-memory validate

# Or from the MCPB directory
$ npm run validate

# Result: 2 minutes later
✅ ALL TESTS PASSED
📊 Report: Documents/megatest-results/2025-10-15_validation_PASS/
🎉 Your installation is working perfectly!

# User opens the report and sees:
# - Beautiful HTML with all tests green
# - Sample notes demonstrating features
# - Performance metrics showing speed
# - Proof the MCP server is high quality
```

---

## 💡 Competitive Advantage

### Most MCP Servers (Competitors)

```
User: "How do I know this works?"
MCP: "Just try it and see..."
User: "But what if it's broken?"
MCP: "File an issue on GitHub..."
User: 😕 (uncertainty, low trust)
```

### Your MCP Server (With Megatest)

```
User: "How do I know this works?"
MCP: "Run 'npm run validate' - 2 minutes"
User: *Runs test, sees all green*
User: 🎉 (confidence, high trust, sees quality)
User: "WOW! This is professional! They actually test everything!"
User: *Shares on social media*
User: "Best MCP server I've found - has built-in validation!"
```

### Trust Signals

1. ✅ **Built-in testing** (shows confidence)
2. ✅ **Quick validation** (removes uncertainty)
3. ✅ **Visible results** (transparent quality)
4. ✅ **Professional** (industry best practice)
5. ✅ **User-friendly** (easy to run)

---

## 📋 Implementation Checklist

### For Development Use

- [ ] Create megatest structure
- [ ] Implement Level 1 (smoke)
- [ ] Configure local + on-success cleanup
- [ ] Add to justfile: `just megatest-smoke`

### For GitHub CI/CD Use

- [ ] Add smoke test to ci.yml (every push)
- [ ] Add standard test to pr-validation.yml
- [ ] Add full test to weekly.yml
- [ ] Configure artifact upload
- [ ] Add status badges

### For End User Validation

- [ ] Create `advanced-memory validate` CLI command
- [ ] Configure visible + archive mode
- [ ] Generate a beautiful HTML report
- [ ] Auto-open the report in a browser
- [ ] Add to MCPB scripts
- [ ] Document in README
- [ ] Market as a quality feature

---

## 🎯 Example User Scenarios

### Scenario 1: New User Install

```bash
# User just installed the MCPB
User: *Drags advanced-memory-mcp.mcpb into Claude*
User: *Sees "Advanced Memory installed"*

User: "Let me validate it works..."
User: $ advanced-memory validate

# 2 minutes later
✅ ALL TESTS PASSED
🎉 Your installation works perfectly!
📊 Report: C:\Users\sandr\Documents\megatest-results\...

User: "Awesome! I can trust this now."
```
### Scenario 2: Troubleshooting

```bash
# User has issues
User: "My searches aren't working"
Support: "Run: advanced-memory validate"

User: *Runs validation*
❌ Test 4/10: Search failed
   Error: Missing dependency 'ripgrep'

User: "Ah! I need to install ripgrep"
User: *Installs ripgrep*
User: *Re-runs validation*
✅ ALL TESTS PASSED

User: "Fixed! Thanks for the diagnostic tool!"
```

### Scenario 3: Environment Issues

```bash
# User on an unusual OS/configuration
User: "Does this work on Windows ARM?"
Support: "Run validation and send the report"

User: *Runs on Windows ARM*
✅ 9/10 tests passed
❌ 1/10 failed: PDF export (LaTeX not available)

Support: "Your MCP works! PDF export needs LaTeX, but everything else works."
User: "Perfect - I don't need PDF export anyway."
```

---

## 🎉 Summary: Three Perfect Use Cases

### 1. 🔧 Development

- **Who**: You and your team
- **What**: Fast feedback, easy debugging
- **Where**: repo/test-results/
- **When**: Every change
- **Value**: Development velocity

### 2. 🤖 GitHub CI/CD

- **Who**: Automated quality gates
- **What**: Prevent broken code
- **Where**: GitHub Actions (temp)
- **When**: Every PR, weekly, releases
- **Value**: Quality assurance

### 3. 👀 End User Validation

- **Who**: People who installed your MCPB
- **What**: Prove it works in their environment
- **Where**: Documents/megatest-results/
- **When**: After installation
- **Value**: **Trust, confidence, quality signal**

## 🏆 The Power of User Validation

**When a user drops your MCPB into Claude Desktop**:

Instead of wondering "Does this work?", they can **PROVE it works** in 2 minutes!

**This is HUGE for**:
- ✅ User confidence
- ✅ Professional image
- ✅ Support efficiency
- ✅ Marketing differentiation
- ✅ Quality signaling

**Your MCP server becomes**: "The one that actually tests itself!"

---

*Three use cases documented: October 15, 2025*
*Development + GitHub + User Validation = Complete coverage*
*Your megatest framework serves everyone!*
