Memtech MCP Server
An MCP (Model Context Protocol) server that exposes Memtech's memory-ASIC engineering knowledge base to AI assistants. It connects an MCP-capable client (Claude Code, Claude Desktop, Cursor, etc.) to a Qdrant vector database holding RTL, lab logs, DFM rules, and silicon design knowledge ingested through Memtech's platform.
Status: v0.1.0 — early access. This release ships one tool (`search_memtech_kb`). Five additional tools are coming in v0.2 (see Roadmap).
What this is
This server lets an AI assistant search Memtech's engineering corpus using natural language. Ask Claude "show me the DDR2 controller code that handles read latency timing" and the assistant calls this server, which embeds the query, searches the configured Qdrant collection, and returns the top-matching chunks with their metadata. The assistant then grounds its answer in the retrieved chunks instead of hallucinating from generic training data.
The server is intentionally a thin shim: no LLM logic, no business rules, just retrieval. This is the layer of the Memtech platform that customers can audit, fork, and self-host. The reasoning, eval harness, and corpus management live in Memtech's platform; this MCP server connects to them.
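For context, a query like the one above reaches the shim as a standard MCP `tools/call` request. The shape below follows the MCP specification; the `query` argument name is an assumption for illustration, not taken from this repo.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_memtech_kb",
    "arguments": { "query": "DDR2 controller read latency timing" }
  }
}
```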
Who this is for
This repository is for developers integrating Memtech into an AI workflow. There are two main audiences:
Application developers building memory-IC engineering tools that should call Memtech as part of an LLM-driven workflow. You will run this server locally (or as a Docker container) and register it with your MCP-capable client. The Apache-2.0 license lets you fork, modify, and redistribute.
Platform teams at organizations evaluating Memtech for a self-hosted deployment. You want to read the source, validate the architecture, and run integration tests against your own Qdrant cluster before recommending Memtech to your engineering teams.
If you are an end user — a memory-ASIC engineer who wants to ask Memtech questions through your AI chat — you do not run this server directly. Your organization's platform team will configure it on your behalf, or you will use Memtech's hosted endpoint when it goes GA. This repository is for the developers who set up the connection.
Prerequisites
Before you can run this server, you need:
- A Qdrant cluster with a populated collection of Memtech-format chunks. If you do not yet have one, contact Memtech for an evaluation environment.
- A Voyage AI API key for the `voyage-code-2` embedding model. Get one at dash.voyageai.com.
- Python 3.11 or newer, or Docker if you prefer the containerized deployment.
Quick start
```bash
git clone https://github.com/California-Memtech/mcp.git memtech-mcp
cd memtech-mcp

# Install with uv (recommended) or pip
uv sync
# or: pip install -e .

# Configure credentials
cp .env.example .env
# Edit .env with your Qdrant URL, Qdrant API key, Voyage API key, and target collection

# Smoke-test that the server runs and connects
python -m memtech_mcp.server
# Should print "Connected to Qdrant", then hang waiting for MCP traffic. Ctrl+C to exit.
```

Then register with your MCP client. For Claude Code:

```bash
claude mcp add-json memtech-kb '{"command":"python","args":["-m","memtech_mcp.server"]}' -s user
claude mcp list
# Expected: memtech-kb: python -m memtech_mcp.server - ✓ Connected
```

For Claude Desktop, Cursor, GitHub Copilot, Gemini CLI, and other clients, see docs/CLIENT_SETUP.md.
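For reference, a filled-in `.env` might look like the sketch below. The variable names here are assumptions for illustration; the repo's `.env.example` is authoritative.

```shell
# Illustrative .env (variable names are assumptions; see .env.example)
QDRANT_URL=https://your-cluster.example.com:6333
QDRANT_API_KEY=your-qdrant-api-key
VOYAGE_API_KEY=your-voyage-api-key
MEMTECH_COLLECTION=your-collection-name
```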
Architecture (one paragraph)
The server uses FastMCP for the MCP protocol layer. On a tool call, it embeds the query with Voyage AI's voyage-code-2 (1536-dim, code-aware), searches the configured Qdrant collection, and returns the top-K matching chunks with their metadata (file path, symbol, source type, classification). Credentials and the target collection name come from environment variables — no secrets in code, no defaults that might leak data across deployments. Full details in docs/ARCHITECTURE.md.
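The embed-then-search flow above can be sketched in a few lines of Python. Everything here is illustrative: the function names, environment-variable names, and payload keys are assumptions rather than code from this repo; only the `voyageai` and `qdrant-client` calls follow those SDKs' public APIs.

```python
import os


def format_hits(hits):
    """Flatten Qdrant scored points into ranked chunk dicts with metadata.

    Payload keys ("text", "file_path", "symbol") are assumed, not confirmed.
    """
    return [
        {
            "text": hit.payload.get("text"),
            "file_path": hit.payload.get("file_path"),
            "symbol": hit.payload.get("symbol"),
            "score": hit.score,
        }
        for hit in hits
    ]


def search_memtech_kb(query: str, top_k: int = 5):
    # SDK imports are deferred so the helper above can be read and tested
    # without voyageai / qdrant-client installed.
    import voyageai
    from qdrant_client import QdrantClient

    # Embed the query with the code-aware Voyage model.
    vo = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])
    vector = vo.embed([query], model="voyage-code-2").embeddings[0]

    # Search the configured collection; credentials come from the environment,
    # never from defaults baked into the code.
    client = QdrantClient(
        url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"]
    )
    hits = client.search(
        collection_name=os.environ["MEMTECH_COLLECTION"],
        query_vector=vector,
        limit=top_k,
    )
    return format_hits(hits)
```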
Tools (v0.1.0)
| Tool | Description | Status |
|------|-------------|--------|
| `search_memtech_kb` | Semantic search over the configured Memtech collection. Returns ranked chunks with text, file path, symbol, and similarity score. | ✅ Available |
| | Yield-rate prediction with failure-mechanism analysis. | 🚧 v0.2 |
| | Root-cause hypothesis ranking from a symptom description. | 🚧 v0.2 |
| | Patch suggestions for RTL bugs based on lab evidence. | 🚧 v0.2 |
| | ATE (automated test equipment) plan generation. | 🚧 v0.2 |
| | Admin-scoped ingest of new chunks (reserved for v0.2 with proper scope enforcement). | 🚧 v0.2 |
The v0.2 tools call reasoning endpoints rather than raw retrieval and require the Memtech platform's gateway to be in place. They are stubbed in memtech_mcp/tools/ with NotImplementedError for forward compatibility.
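The forward-compatibility pattern might look like the sketch below. The tool name `predict_yield_rate` and its signature are invented for illustration; the real stubs live in `memtech_mcp/tools/`.

```python
# Hypothetical sketch of a v0.2 stub; name and signature are assumptions.
def predict_yield_rate(design_summary: str) -> dict:
    """Yield-rate prediction (v0.2). Registered now so the tool surface
    stays stable; calling it before v0.2 raises."""
    raise NotImplementedError("predict_yield_rate ships in v0.2")
```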
Project status
This is v0.1.0 — minimal, working, suitable for evaluation deployments and as the foundation for v0.2.
✅ Working `search_memtech_kb` over a Qdrant + Voyage stack
✅ Unit tests with mocked dependencies; integration smoke test gated on credentials
✅ Apache-2.0 licensed
✅ Multi-stage Dockerfile patterned after ARM's MCP server
⏳ Memtech platform gateway integration (v0.2)
⏳ The remaining five tools (v0.2)
⏳ Pre-built Docker images on a public registry (v0.2)
See docs/ROADMAP.md for the detailed plan.
Documentation
- docs/ARCHITECTURE.md — Why the code is shaped this way and how it relates to the rest of the Memtech platform
- docs/CLIENT_SETUP.md — Per-client setup (Claude Code, Claude Desktop, Cursor, Copilot, Gemini)
- docs/ROADMAP.md — v0.2, v1.0, and beyond
- docs/CONTRIBUTING.md — Development workflow, testing, code style
License
Apache License 2.0 — see LICENSE.
Copyright © 2026 California Memtech and Contributors. All rights reserved.
The Apache-2.0 license is a deliberate choice. It allows external organizations to fork and self-host this MCP shim while permitting Memtech to keep the rest of the platform (gateway, eval harness, re-ranker, audit logging, corpus management) proprietary in separate repositories. The shim is the contract between Memtech and the LLM ecosystem; the platform's reasoning and operations layer is Memtech's accumulated engineering moat.
Acknowledgments
This server's architecture and Docker packaging follow the patterns established by ARM's MCP server, which is a useful reference for any MCP server in a chip-design context. We've adapted their structure to Memtech's specifics: a hosted-Qdrant backend (rather than embedded vectors) and a tenant-scoped collection model.