BGPT - Scientific Paper Search
Server Details
Search scientific papers with structured experimental data from full-text studies
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: connerlambden/bgpt-mcp
- GitHub Stars: 12
- Server Listing: BGPT
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
2 tools

lookup_paper
Look up a single paper by its DOI.
Args:
- doi: The DOI of the paper (e.g. "10.1038/s41586-024-07386-0").
- api_key: Optional. Your Stripe subscription ID for paid access. Get one at https://bgpt.pro/mcp
Returns: Paper with title, DOI, Raw Data, methods, results, quality scores, and 25+ metadata fields — or an error if not found. Costs $0.02 if found, free if not.
| Name | Required | Description | Default |
|---|---|---|---|
| doi | Yes | | |
| api_key | No | | |
Output Schema
No output parameters.
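For orientation, the sketch below shows one way an MCP client might call this tool over Streamable HTTP using the official MCP Python SDK. The server URL is a placeholder (the listing above does not publish one), and the DOI comes from the tool's own docstring example; treat this as an illustration, not a verified client for this server.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint: the listing above does not publish the server URL.
SERVER_URL = "https://example.invalid/mcp"

async def main() -> None:
    # Open a Streamable HTTP transport and an MCP session over it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # DOI taken from the tool's docstring example; api_key is
            # optional and only needed for paid access.
            result = await session.call_tool(
                "lookup_paper",
                {"doi": "10.1038/s41586-024-07386-0"},
            )
            for block in result.content:
                print(block)

asyncio.run(main())
```

Note the cost semantics from the docstring: a successful lookup is billed at $0.02, while a miss returns an error and is free.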
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it specifies cost implications ('Costs $0.02 if found, free if not'), error handling ('or an error if not found'), and output details. It also hints at authentication (api_key for paid access) and rate/access limits via the cost note. No contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by structured sections for Args and Returns. Every sentence adds value: the first states the action, the Args explain parameters, and the Returns detail output and cost/error behavior. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, but with output schema), the description is complete. It covers purpose, parameters, output content, cost, error handling, and implies authentication. With an output schema present, it appropriately doesn't need to detail return structure further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It does so by explaining both parameters: 'doi' is described with an example and purpose, and 'api_key' is clarified as optional for paid access with a URL for obtaining it. This adds essential meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Look up a single paper') and resource ('by its DOI'), distinguishing it from the sibling 'search_papers' which implies broader searching rather than direct lookup by identifier. The purpose is precise and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying it's for looking up a single paper by DOI, which naturally distinguishes it from 'search_papers' (likely for broader queries). However, it does not explicitly state when not to use this tool or name alternatives beyond the sibling, nor does it mention prerequisites like authentication needs beyond the optional api_key.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_papers
Search BGPT's database of scientific papers by keyword.
Args:
- query: Search terms (e.g. "CRISPR gene editing efficiency"). Short, concise queries are best. English language only. Don't include years or filters in the query; use the days_back and num_results params instead.
- num_results: Number of results to return (1-100, default 16). First 50 results are free, then billed at $0.01/result for paid users.
- days_back: Only return papers published within the last N days.
- api_key: Optional. Your Stripe subscription ID for paid access. Get one at https://bgpt.pro/mcp
Returns: Papers with title, DOI, Raw Data, methods, results, quality scores, and 25+ metadata fields.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| api_key | No | | |
| days_back | No | | |
| num_results | No | | |
Output Schema
No output parameters.
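As with lookup_paper, here is a minimal sketch of a keyword search through the MCP Python SDK. The URL is again a placeholder, and the argument values simply restate the docstring's guidance (short English query, recency and result-count filters passed as parameters rather than embedded in the query string).

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.invalid/mcp"  # placeholder; the listing omits the URL

async def search() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "search_papers",
                {
                    "query": "CRISPR gene editing efficiency",  # short, English-only
                    "num_results": 16,  # 1-100, default 16; first 50 results free
                    "days_back": 90,    # recency filter goes here, not in the query
                },
            )
            for block in result.content:
                print(block)

asyncio.run(search())
```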
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool requires a query, has billing implications (first 50 results free, then $0.01/result for paid users), supports optional API key for paid access, and returns papers with specific metadata fields. It doesn't mention rate limits or authentication requirements beyond the API key.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Args, Returns) and front-loaded with the core purpose. Most sentences earn their place by providing essential information, though the billing details could be slightly more concise. Overall, it's appropriately sized for a 4-parameter tool with complex behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search functionality with billing implications), no annotations, and the presence of an output schema, the description is complete enough. It covers purpose, usage, parameters, behavioral traits, and return content. The output schema existence means the description doesn't need to detail return values, which it appropriately references without over-explaining.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description fully compensates by providing detailed semantic information for all parameters: query (search terms, format guidance), num_results (range, default, billing implications), days_back (time filtering), and api_key (optional, purpose, how to obtain). This adds substantial value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches BGPT's database of scientific papers by keyword, providing a specific verb ('search') and resource ('scientific papers'). It distinguishes from the sibling tool 'lookup_paper' by focusing on keyword-based searching rather than direct lookup, though the distinction could be more explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool (searching by keyword) and includes some guidance on query formulation (short, concise, English-only). It mentions not to include years or filters in the query, directing users to use other parameters instead. However, it doesn't explicitly contrast with the sibling tool 'lookup_paper' or provide when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.