BrowseAI Dev
Reliable research infrastructure for AI agents — real-time web search, evidence extraction, and structured citations. Every claim is backed by a URL. Every answer has a confidence score.
Agent → BrowseAI → Internet → Verified answers + sources

Website · Playground · API Docs · Discord
How It Works
search → fetch pages → extract claims → build evidence graph → cited answer

Every answer goes through this 5-step verification pipeline, so every claim is backed by a real source rather than hallucinated. Confidence scores are evidence-based, computed from source count, domain diversity, claim grounding, and citation depth.
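As a rough illustration of how such an evidence-based score could combine those four signals (this is a hedged sketch; the pipeline's actual weights and formula are not documented here):

```python
# Hypothetical confidence score combining the four signals named above:
# source count, domain diversity, claim grounding, and citation depth.
# Weights and saturation points are made up for the sketch.
from urllib.parse import urlparse

def confidence(sources: list[str], grounded_claims: int, total_claims: int,
               citations_per_claim: float) -> float:
    """Return a score in [0, 1] from evidence statistics."""
    if not sources or total_claims == 0:
        return 0.0
    domains = {urlparse(url).netloc for url in sources}
    source_score = min(len(sources) / 5, 1.0)   # saturates at 5 sources
    diversity = len(domains) / len(sources)     # 1.0 = every source a distinct domain
    grounding = grounded_claims / total_claims  # share of claims with evidence
    depth = min(citations_per_claim / 2, 1.0)   # saturates at 2 citations per claim
    return 0.25 * (source_score + diversity + grounding + depth)
```

With two distinct-domain sources, fully grounded claims, and two citations per claim, this sketch would yield a mid-80s percentage rather than a perfect score, reflecting the small source count.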
Quick Start
Python SDK
```shell
pip install browseai
```

```python
from browseai import BrowseAI

client = BrowseAI(api_key="bai_xxx")

# Research with citations
result = client.ask("What is quantum computing?")
print(result.answer)
print(f"Confidence: {result.confidence:.0%}")
for source in result.sources:
    print(f" - {source.title}: {source.url}")
```

Framework integrations:
```shell
pip install browseai[langchain]  # LangChain tools
pip install browseai[crewai]     # CrewAI integration
```

```python
# LangChain
from browseai.integrations.langchain import BrowseAIAskTool
tools = [BrowseAIAskTool(api_key="bai_xxx")]

# CrewAI
from crewai import Agent
from browseai.integrations.crewai import BrowseAITool
researcher = Agent(tools=[BrowseAITool(api_key="bai_xxx")])
```

MCP Server (Claude Desktop, Cursor, Windsurf)
```shell
npx browse-ai setup
```

Or manually add to your MCP config:
```json
{
  "mcpServers": {
    "browse-ai": {
      "command": "npx",
      "args": ["-y", "browse-ai"],
      "env": {
        "SERP_API_KEY": "your-search-key",
        "OPENROUTER_API_KEY": "your-llm-key"
      }
    }
  }
}
```

REST API
```shell
# With your own keys (BYOK — free, no limits)
curl -X POST https://browseai.dev/api/browse/answer \
  -H "Content-Type: application/json" \
  -H "X-Tavily-Key: tvly-xxx" \
  -H "X-OpenRouter-Key: sk-or-xxx" \
  -d '{"query": "What is quantum computing?"}'
```
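The same BYOK call can be made from Python with only the standard library (a sketch; the header names are taken from the curl example, and `build_answer_request` is a helper invented here, not part of the SDK):

```python
# Sketch: building the BYOK POST request shown above with urllib.
# Key values are placeholders.
import json
import urllib.request

def build_answer_request(query: str, tavily_key: str,
                         openrouter_key: str) -> urllib.request.Request:
    """Build the POST request for /api/browse/answer with BYOK headers."""
    return urllib.request.Request(
        "https://browseai.dev/api/browse/answer",
        data=json.dumps({"query": query}).encode(),
        headers={
            "Content-Type": "application/json",
            "X-Tavily-Key": tavily_key,
            "X-OpenRouter-Key": openrouter_key,
        },
        method="POST",
    )

# To actually send it (requires network access and real keys):
# req = build_answer_request("What is quantum computing?", "tvly-xxx", "sk-or-xxx")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["answer"])
```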
```shell
# With a BrowseAI API key
curl -X POST https://browseai.dev/api/browse/answer \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer bai_xxx" \
  -d '{"query": "What is quantum computing?"}'
```

Self-Host
```shell
git clone https://github.com/BrowseAI-HQ/BrowserAI-Dev.git
cd BrowserAI-Dev
pnpm install
cp .env.example .env
# Fill in: SERP_API_KEY, OPENROUTER_API_KEY
pnpm dev
```

API Keys
Three ways to authenticate:
| Method | How | Limits |
|---|---|---|
| BYOK (recommended) | Pass `X-Tavily-Key` + `X-OpenRouter-Key` headers | Unlimited, free |
| BrowseAI API Key | Pass `Authorization: Bearer bai_xxx` | Unlimited (uses your stored keys) |
| Demo | No auth needed | 5 queries/hour per IP |
Get a BrowseAI API key from the dashboard — it bundles your Tavily + OpenRouter keys into one key for CLI, MCP, and API use.
Project Structure
```
/apps/api             Fastify API server (port 3001)
/apps/mcp             MCP server (stdio transport, npm: browse-ai)
/packages/shared      Shared types, Zod schemas, constants
/packages/python-sdk  Python SDK (PyPI: browseai)
/src                  React frontend (Vite, port 8080)
/supabase             Database migrations
```

API Endpoints
| Endpoint | Description |
|---|---|
| | Search the web |
| | Fetch and parse a page |
| | Extract structured claims from a page |
| `POST /api/browse/answer` | Full pipeline: search + extract + cite |
| | Compare raw LLM vs evidence-backed answer |
| | Get a shared result |
| | Total queries answered |
| | Top cited source domains |
| | Usage analytics (authenticated) |
MCP Tools
| Tool | Description |
|---|---|
| | Search the web for information on any topic |
| | Fetch and parse a web page into clean text |
| | Extract structured claims from a page |
| | Full pipeline: search + extract + cite |
| | Compare raw LLM vs evidence-backed answer |
Python SDK
| Method | Description |
|---|---|
| | Search the web |
| | Fetch and parse a page |
| | Extract claims from a page |
| `ask()` | Full pipeline with citations |
| | Raw LLM vs evidence-backed |
Async support: AsyncBrowseAI with the same API.
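A minimal async sketch, assuming `AsyncBrowseAI` is importable from the top-level `browseai` package (like `BrowseAI`) and mirrors `ask()` as stated above:

```python
# Sketch: running several research queries concurrently.
# The AsyncBrowseAI import path is an assumption based on the sync client.
import asyncio
from browseai import AsyncBrowseAI

async def main():
    client = AsyncBrowseAI(api_key="bai_xxx")
    queries = [
        "What is quantum computing?",
        "What is post-quantum cryptography?",
    ]
    # Fire both research calls at once instead of awaiting them one by one
    results = await asyncio.gather(*(client.ask(q) for q in queries))
    for query, result in zip(queries, results):
        print(f"{query} -> confidence {result.confidence:.0%}")

asyncio.run(main())
```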
Examples
See the examples/ directory for ready-to-run agent recipes:
| Example | Description |
|---|---|
| | Simple research agent with citations |
| | Research libraries/docs before writing code |
| | Compare raw LLM vs evidence-backed answers |
| | BrowseAI as a LangChain tool |
| | Multi-agent research team with CrewAI |
Environment Variables
| Variable | Required | Description |
|---|---|---|
| `SERP_API_KEY` | Yes | Web search API key (Tavily) |
| `OPENROUTER_API_KEY` | Yes | LLM API key (OpenRouter) |
| | No | Redis URL (falls back to in-memory cache) |
| | No | Supabase project URL |
| | No | Supabase service role key |
| | No | API server port (default: 3001) |
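For self-hosting, the two required variables above map to a `.env` like this (values are placeholders; see `.env.example` for the full list of optional names):

```shell
SERP_API_KEY=your-search-key
OPENROUTER_API_KEY=your-llm-key
```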
Tech Stack
API: Node.js, TypeScript, Fastify, Zod
Search: Tavily API
Parsing: @mozilla/readability + linkedom
AI: Gemini 2.5 Flash via OpenRouter
Caching: Redis when configured, otherwise in-memory, with adaptive TTLs (time-sensitive queries get shorter TTLs)
Frontend: React, Tailwind CSS, shadcn/ui, Framer Motion
MCP: @modelcontextprotocol/sdk
Python SDK: httpx, Pydantic
Database: Supabase (PostgreSQL)
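The adaptive-TTL idea in the caching layer can be sketched roughly as follows; the keyword heuristic, durations, and `TTLCache` class are illustrative assumptions, not the project's actual code:

```python
# Sketch: time-sensitive queries get a shorter cache TTL.
# Keywords and durations below are invented for the illustration.
import time

TIME_SENSITIVE = ("latest", "today", "news", "price", "score", "current")

def ttl_for(query: str) -> int:
    """Pick a cache TTL in seconds based on how time-sensitive the query looks."""
    q = query.lower()
    if any(word in q for word in TIME_SENSITIVE):
        return 300    # 5 minutes for fresh-data queries
    return 3600       # 1 hour for evergreen questions

class TTLCache:
    """Minimal in-memory cache with per-entry expiry."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # evict expired entry lazily on read
            return default
        return value
```

A Redis-backed variant would simply pass `ttl_for(query)` as the `EX` argument when setting the key.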
Community
Discord — questions, feedback, showcase
GitHub Issues — bugs, feature requests
Contributing
See CONTRIBUTING.md for setup instructions, coding conventions, and PR process.
License