# 1 MCP Server 🚀
> **MCP of MCPs**: automatically discover and configure MCP servers on your machine (remote or local).
After setup, you can usually just say:
> “I want to perform [your task]. Call the `deep_search` tool and follow the outlined steps.”
The goal is that you only install **this** MCP server, and it handles the rest (searching servers, selecting servers, configuring servers, etc.).
### Demo video 🎥: [https://youtu.be/W4EAmaTTb2A](https://youtu.be/W4EAmaTTb2A)
## Quick Setup
Choose **one** of the following:
1. **Remote** (simplest & fastest ⚡💨)
2. **Local (prebuilt)**: **Docker**, **uvx**, or **npx**
3. **Local (from source)**: run this repo directly
### 1) Remote 🌐⚡💨
Use the hosted endpoint (recommended for the simplest setup).
**Docs + guided setup:** [https://mcp.1mcpserver.com/](https://mcp.1mcpserver.com/)
#### Configure your MCP client
Add the following entry to your client config file:
* **Cursor**: `./.cursor/mcp.json`
* **Gemini CLI**: `~/.gemini/settings.json` (see Gemini docs)
* **Claude Desktop**:
* macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
* Windows: `%APPDATA%\Claude\claude_desktop_config.json`
* **Codex**:
* macOS: `~/.codex/config.toml`
* Windows: `%USERPROFILE%\.codex\config.toml`
**Remote config (JSON):**
```json
{
"mcpServers": {
"1mcpserver": {
"url": "https://mcp.1mcpserver.com/mcp/",
"headers": {
"Accept": "text/event-stream",
"Cache-Control": "no-cache"
}
}
}
}
```
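Note that the JSON above is for JSON-based clients. Codex reads `config.toml` instead, so the entry cannot be pasted verbatim. A hypothetical TOML equivalent using the `mcp-remote` npm package as a stdio bridge is sketched below; verify the table and key names against your Codex version's documentation:

```toml
# Hypothetical ~/.codex/config.toml entry; key names assumed, check your Codex docs.
[mcp_servers.1mcpserver]
command = "npx"
args = ["-y", "mcp-remote", "https://mcp.1mcpserver.com/mcp/"]
```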
If you already have other servers configured, just merge this entry under `mcpServers`. For example:
```json
{
"mcpServers": {
"1mcpserver": {
"url": "https://mcp.1mcpserver.com/mcp/",
"headers": {
"Accept": "text/event-stream",
"Cache-Control": "no-cache"
}
},
"file-system": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
}
}
}
```
**Tip:** If your client supports it, move the config file into your **home directory** to apply globally.
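If you prefer to script the merge, here is a minimal Python sketch. The config path is just an example (Cursor's project-level file); point it at your own client's config:

```python
import json
from pathlib import Path

# Example path (Cursor's project-level config); adjust for your client.
CONFIG_PATH = Path(".cursor/mcp.json")

NEW_ENTRY = {
    "1mcpserver": {
        "url": "https://mcp.1mcpserver.com/mcp/",
        "headers": {"Accept": "text/event-stream", "Cache-Control": "no-cache"},
    }
}

def merge_entry(config: dict, entry: dict) -> dict:
    """Add a server under mcpServers without clobbering existing entries."""
    config.setdefault("mcpServers", {}).update(entry)
    return config

# Read the existing config if present, merge, and write it back.
config = json.loads(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
CONFIG_PATH.write_text(json.dumps(merge_entry(config, NEW_ENTRY), indent=2))
```

The `setdefault` + `update` pattern keeps any servers you already had, so re-running the script is safe.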
---
### 2) Local (prebuilt) 💻
Use this when you want everything local, or when your MCP client only supports **STDIO**.
#### 2A) Docker 🐳
> Use this if you want an isolated runtime and a single, reproducible command.
```bash
docker run --rm -i \
-e DATADIR=/data \
-v "$PWD/db:/data" \
<YOUR_DOCKER_IMAGE_HERE>
```
```json
{
"mcpServers": {
"1mcpserver": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"-e",
"DATADIR=/data",
"-v",
"${PWD}/db:/data",
"<YOUR_DOCKER_IMAGE_HERE>"
]
}
}
}
```
#### 2B) uvx 🐍
> Use this if you publish the server as a Python package and want a one-liner.
```bash
uvx <YOUR_PACKAGE_NAME> --local
```
```json
{
"mcpServers": {
"1mcpserver": {
"command": "uvx",
"args": ["<YOUR_PACKAGE_NAME>", "--local"]
}
}
}
```
#### 2C) npx 📦
> Use this if you publish a Node wrapper / launcher and want a one-liner.
```bash
npx -y <YOUR_NPM_PACKAGE_NAME>
```
```json
{
"mcpServers": {
"1mcpserver": {
"command": "npx",
"args": ["-y", "<YOUR_NPM_PACKAGE_NAME>"]
}
}
}
```
---
### 3) Local (from source) 🧩
Clone this repo and run directly.
```bash
git clone https://github.com/particlefuture/MCPDiscovery.git
cd MCPDiscovery
uv sync
uv run server.py --local
```
```json
{
"mcpServers": {
"1mcpserver": {
"command": "/path/to/uv",
"args": [
"--directory",
"<PATH_TO_CLONED_REPO>",
"run",
"server.py",
"--local"
]
}
}
}
```
> If your client supports remote `url` servers, you can use the **Remote** setup instead.
#### Optional: grant file-system access 🔐
If you want your LLM to have file-system access, add an MCP filesystem server and point it at the directory you want to allow:
```json
{
"mcpServers": {
"file-system": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "~/"]
}
}
}
```
---
## Architecture 🧠
There are two search modes:
### Quick Search ⚡
For explicit requests like: “I want an MCP server that handles payments.”
Returns a shortlist of relevant MCP servers.
### Deep Search 🔍
For higher-level or complex goals like: “Build a website that analyzes other websites.”
The LLM breaks the goal into components/steps, finds MCP servers for each part, and, if something is missing, asks whether to:
* ignore that part,
* break it down further, or
* implement it directly.
Deep Search stages:
1. **Planning**: identify servers, keys, and config changes
2. **Testing**: verify servers (via `test_server_template_code`)
3. **Acting**: execute the workflow using the configured servers
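The three stages can be pictured as a simple loop. This is an illustrative sketch only; the function names below are hypothetical and not the server's actual API:

```python
# Illustrative sketch of the Deep Search stages; all names are hypothetical.

def plan(goal: str) -> list[dict]:
    """Planning: break the goal into components and pick a candidate server for each."""
    return [{"component": c, "server": f"{c.replace(' ', '-')}-server"}
            for c in goal.split(" and ")]

def verify(step: dict) -> bool:
    """Testing: stand-in for test_server_template_code -- check the server responds."""
    return bool(step["server"])

def act(steps: list[dict]) -> list[str]:
    """Acting: execute the workflow using only the verified servers."""
    return [s["component"] for s in steps if verify(s)]

steps = plan("scrape pages and summarize content")
print(act(steps))  # components whose candidate servers passed verification
```

In the real flow, a failed `verify` is where the LLM would ask you whether to ignore, decompose, or hand-implement that component.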
---
## Change Log 📝
* July 31, 2025: Upgraded to 0.2.0; added agentic planning.
* Dec 12, 2025: Support for Gemini + Codex.
* Dec 13, 2025: Easier local setup with Docker, npm, and uvx.
## Future 🔮
* Better demo videos (new domain, narrated walkthrough)
* Model Context Communication Protocol (MCCP): standard server-to-server messaging
* Avoid calling tools with an `internal_` prefix unless instructed
* Improve the MCP server database schema: server, description, URL, config JSON, extra setup (Docker, API keys, etc.)
## Credits 🙏
Data sources:
* wong2/awesome-mcp-servers
* metorial/mcp-containers
* punkpeye/awesome-mcp-servers
* modelcontextprotocol/servers
Published to:
* [https://mcpservers.org/](https://mcpservers.org/)
* [https://glama.ai/mcp/servers](https://glama.ai/mcp/servers)
## Troubleshooting 🧰
* If you're using a venv and still get `ModuleNotFoundError` after installing dependencies, delete the venv and recreate it.
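For example, assuming the standard `.venv` layout (swap in your own reinstall step):

```bash
# Remove the stale environment and create a fresh one.
rm -rf .venv
python3 -m venv .venv
# Then reactivate and reinstall, e.g.:
#   source .venv/bin/activate && uv sync
```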