
qubitsok — Quantum Computing Jobs, Papers & Researchers

Server Details

Quantum computing jobs, arXiv papers & researcher profiles from qubitsok.com

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

7 tools
getJobDetails: Job Details (Grade: A)
Read-only

Get full details for a specific quantum computing job by its numeric ID. Use after searchJobs when the user wants more information about a specific position. Returns: job summary, required skills, nice-to-have skills, responsibilities, visa sponsorship, salary, location, and apply URL. Requires a valid job_id from searchJobs results. Returns error if ID not found.

Parameters (JSON Schema)
job_id (required): Numeric job ID from searchJobs results
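The search-then-fetch workflow in the description can be sketched as a pair of JSON-RPC tools/call payloads. The envelope shape follows the MCP specification; the query string and the job_id value (1234) are illustrative placeholders, and real IDs must come from actual searchJobs results.

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 tools/call request as sent by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Step 1: search for positions; job_id values come from these results.
search = tool_call(1, "searchJobs", {"query": "senior QEC engineer in Europe", "limit": 5})

# Step 2: fetch full details for one hit; 1234 is a placeholder ID.
details = tool_call(2, "getJobDetails", {"job_id": 1234})

print(json.dumps(details, indent=2))
```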
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm readOnlyHint=true (safe read), while description adds valuable return value structure (lists specific fields: visa sponsorship, salary, etc.) and error behavior ('Returns error if ID not found'). Adds meaningful context beyond structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four efficient sentences cover purpose, usage context, return payload, prerequisites, and error handling. Front-loaded with action verb, no redundancy, every sentence earns its place despite listing multiple return fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates effectively for missing output schema by enumerating return fields (job summary, skills, salary, etc.) and error conditions. Complete coverage for a single-parameter retrieval tool with clear annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage but description adds semantic context that the ID comes from searchJobs results ('Requires a valid job_id from searchJobs results'), reinforcing parameter relationships and workflow that pure schema doesn't convey.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') + resource ('full details for a specific quantum computing job') + scope ('by its numeric ID'). Explicitly distinguishes from sibling searchJobs by specifying this retrieves a specific job by ID versus searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Use after searchJobs when the user wants more information about a specific position' and notes prerequisite 'Requires a valid job_id from searchJobs results.' Clear workflow guidance distinguishing from the search sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getLatestPapers: Today's Papers (Grade: A)
Read-only

Get today's quantum computing papers from arXiv — no parameters needed. Use when the user asks "what's new in quantum computing?" or wants a daily paper briefing. Returns the most recent day's papers with title, authors, date, AI-generated hook (one-line summary), and tags. For date-range or topic-filtered search, use searchPapers instead. Use getPaperDetails for full abstract and analysis of a specific paper.

Parameters (JSON Schema)
limit (optional): Max results (1-50, default 10)
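Since the only parameter is an optional limit (1-50, default 10), the arguments block is trivial to build. A minimal client-side sketch, mirroring the schema's range constraint (the helper name is hypothetical):

```python
def latest_papers_arguments(limit: int = 10) -> dict:
    """Arguments for getLatestPapers; limit must stay in the schema's 1-50 range."""
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50")
    return {"limit": limit}

# A daily-briefing call can simply use the default:
print(latest_papers_arguments())  # {'limit': 10}
```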
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With readOnlyHint=true in annotations, the description adds valuable context about the return structure (title, authors, AI-generated hook, tags) and scope (most recent day only). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences: purpose, usage trigger, return values, alternatives. Zero waste. Front-loaded with the core action and scoped domain.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description compensates by detailing return fields (hook, tags, etc.). Single optional parameter is fully documented in schema. Sufficient for a simple read-only aggregator tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. The phrase 'no parameters needed' adds critical semantic context that the tool operates with zero required configuration, guiding the agent to invoke without user input gathering.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'quantum computing papers' + source 'arXiv'. Explicitly distinguishes from sibling 'searchPapers' (for date-range/topic-filtered) and 'getPaperDetails' (for full abstracts), making selection unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit trigger phrases ('what's new in quantum computing?', 'daily paper briefing') tell exactly when to invoke. Names specific alternatives for different use cases: 'searchPapers' for filtering and 'getPaperDetails' for specific paper analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getMarketOverview: Quantum Market Overview (Grade: A)
Read-only

Get a snapshot of the quantum computing landscape — no parameters needed. Use when the user asks broad questions like "how's the quantum job market?", "what are trending topics?", or wants an overview of the quantum computing industry. Returns: total active jobs, top hiring companies, jobs by role type, papers published this week, total researchers tracked, and trending technology tags. For specific job/paper/researcher searches, use the dedicated search tools instead.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only safety; description adds valuable behavioral context by enumerating exact return fields (total active jobs, top hiring companies, papers published, etc.) and clarifying the aggregate 'snapshot' nature of the data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose with constraint, usage with examples, and returns with alternatives. Zero waste, properly front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite missing output schema, description fully compensates by listing all return data points. Covers scope, usage triggers, and sibling differentiation, making it complete for this simple read-only aggregate tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema contains zero parameters; description appropriately notes 'no parameters needed' which satisfies the baseline requirement for zero-parameter tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' with clear resource 'snapshot of the quantum computing landscape' and explicitly distinguishes from sibling search tools by stating this is for broad questions vs specific searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use with concrete examples ("how's the quantum job market?", "trending topics?") and clear when-not-to-use instruction directing users to 'dedicated search tools instead' for specific lookups.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getPaperDetails: Paper Details (Grade: A)
Read-only

Get full details for a specific quantum computing paper by its arXiv ID (e.g., "2401.12345"). Use after searchPapers or getLatestPapers when the user wants to dive deep into a specific paper. Returns: complete abstract, all authors, publication date, AI-generated tags with reasons, hook (one-line summary), methodology, gist, and key findings. Requires a valid paper_id from search results. Returns error if not found.

Parameters (JSON Schema)
paper_id (required): ArXiv paper ID (e.g., "2401.12345")
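The schema's example ("2401.12345") matches the post-2007 arXiv identifier scheme (a 4-digit YYMM prefix, a dot, then a 4-5 digit sequence number). A client might pre-validate IDs before calling; this regex is an assumption about accepted input, since the server may also accept versioned or legacy IDs:

```python
import re

# New-style arXiv IDs: 4-digit YYMM, a dot, then 4-5 digits (e.g. "2401.12345").
ARXIV_ID = re.compile(r"^\d{4}\.\d{4,5}$")

def paper_details_arguments(paper_id: str) -> dict:
    """Arguments for getPaperDetails, rejecting IDs that can't match the schema example."""
    if not ARXIV_ID.match(paper_id):
        raise ValueError(f"not a new-style arXiv ID: {paper_id!r}")
    return {"paper_id": paper_id}

print(paper_details_arguments("2401.12345"))
```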
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true (safe read), description adds valuable behavioral context: enumerates specific return fields (AI-generated tags with reasons, hook, methodology, etc.) and error handling ('Returns error if not found'). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences arranged logically: purpose → usage guidance → output contract → input requirement → error handling. No redundant information; each sentence advances understanding of how/when to invoke.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read tool, description compensates for missing output schema by detailing return structure (abstract, authors, tags, etc.). Domain context (quantum computing) and sibling relationships are clear. No gaps given tool simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete parameter documentation. Description adds semantic context beyond schema: validity constraint ('valid paper_id') and provenance guidance ('from search results'), helping the agent understand proper parameter sourcing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' with clear resource 'full details for a specific quantum computing paper' and scope 'by its arXiv ID'. Distinguishes from sibling searchPapers by specifying this is for 'dive deep' vs search/listing behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('Use after searchPapers or getLatestPapers'), names specific sibling alternatives, and clarifies prerequisite ('Requires a valid paper_id from search results'). Also documents error condition.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

searchCollaborators: Find Researchers (Grade: A)
Read-only

Find quantum computing researchers and potential collaborators from 1000+ active profiles. Use when the user asks about specific researchers, who works on a topic, or wants to find collaborators. NOT for jobs (use searchJobs) or papers (use searchPapers). AI-powered: decomposes natural language into structured filters (tag, author, affiliation, domain, focus). Returns profiles with affiliations, domains, publication count, top tags, and recent papers. Data from arXiv papers published in the last 12 months. Max 50 results. Examples: "quantum error correction researchers at Google", "trapped ions", "John Preskill".

Parameters (JSON Schema)
limit (optional): Max results (1-50, default 10)
query (required): Search term: researcher name, affiliation, tag, or research topic. Examples: "quantum error correction", "MIT", "John Preskill"
affiliation_type (optional): Filter by affiliation type
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Rich disclosure beyond annotations: explains AI-powered decomposition of natural language, data provenance ('arXiv papers published in the last 12 months'), return structure ('profiles with affiliations, domains, publication count...'), and result limits ('Max 50'). Annotations only indicate read-only safety; description carries full burden for behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense structure flows logically: scope → usage conditions → exclusions → mechanism → data source → limits → examples. No redundant sentences; each clause adds distinct decision-critical information for an agent selecting between multiple search tools.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive despite no output schema. Description fully compensates by detailing return values ('profiles with affiliations... recent papers') and explaining the 12-month data window, which is essential for temporal relevance expectations. Adequately covers the 3-parameter complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% establishing baseline 3. Description adds significant value by explaining the NLP processing ('decomposes natural language into structured filters') and providing concrete query patterns ('quantum error correction researchers at Google', 'John Preskill') that clarify how the query parameter handles complex semantic searches beyond simple keyword matching.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Find'), resource ('quantum computing researchers'), and scope ('1000+ active profiles'). Explicitly distinguishes from sibling tools searchJobs and searchPapers by name, clarifying this is for people, not positions or publications.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use ('user asks about specific researchers, who works on a topic') and when-not-to-use ('NOT for jobs... or papers') plus named alternatives. Clear trigger conditions minimize confusion with sibling search tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

searchJobs: Search Quantum Jobs (Grade: A)
Read-only

Search 500+ quantum computing job listings using natural language. Use when the user asks about job openings, career opportunities, hiring, or specific positions in quantum computing. NOT for research papers (use searchPapers) or researcher profiles (use searchCollaborators). Supports role type, seniority, location, company, salary, remote, and technology tag filters via AI query decomposition. Limitations: quantum computing jobs only, last 90 days, max 20 results. Promoted listings appear first (marked). After finding jobs, suggest getJobDetails for full info. Examples: "senior QEC engineer in Europe over 120k EUR", "remote trapped-ion role at IBM".

Parameters (JSON Schema)
limit (optional): Max results to return (1-20, default 5)
query (required): Natural language job search query. Examples: "quantum error correction engineer in Europe", "remote senior researcher at IBM", "entry-level trapped ion jobs over 100k USD"
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only operation; description adds crucial behavioral context: temporal limits (90 days), result limits (max 20), ranking behavior (promoted listings first), and workflow guidance (suggest getJobDetails for full info).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense with clear structure: purpose → usage → exclusions → capabilities → limitations → workflow → examples. Every sentence provides actionable guidance with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without output schema, description adequately covers result scope (max 20), data freshness (90 days), special handling (promoted listings), and clear next-step guidance (getJobDetails). Complete for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), description adds valuable semantic context about the 'query' parameter: it supports AI query decomposition for filters like role type, seniority, location, salary, remote status, and technology tags.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Search) + resource (quantum computing job listings) + scope (500+). Explicitly distinguishes from siblings searchPapers and searchCollaborators with clear exclusion clauses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'Use when' clause covering job openings/career opportunities/hiring. Provides clear 'NOT for' guidance naming specific alternatives (searchPapers, searchCollaborators).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

searchPapers: Search Quantum Papers (Grade: A)
Read-only

Search quantum computing research papers from arXiv. Use when the user asks about recent research, specific papers, or academic topics in quantum computing. NOT for jobs (use searchJobs) or researcher profiles (use searchCollaborators). Supports natural language queries decomposed via AI into structured filters (topic, tag, author, affiliation, domain). Date range defaults to last 7 days; max lookback 12 months. Returns newest first, max 50 results. Use getPaperDetails for full abstract and analysis of a specific paper. Examples: "trapped ion papers from Google", "QEC review papers this month", "quantum error correction".

Parameters (JSON Schema)
limit (optional): Max results (1-50, default 10)
query (optional): Natural language query to filter papers by topic, author, affiliation, or tag. Uses Gemini AI to decompose into structured filters. Examples: "quantum error correction", "trapped ion papers from Google", "review papers on QEC"
end_date (optional): End date in YYYY-MM-DD format. Default: today
start_date (optional): Start date in YYYY-MM-DD format. Default: 7 days ago
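The date parameters interact with two documented constraints: a default window of the last 7 days and a maximum lookback of 12 months. A client-side sketch of how those defaults and limits might be applied before calling (field names match the schema; the validation logic itself is an assumption about sensible pre-checks, not the server's behavior):

```python
from datetime import date, timedelta

def search_papers_arguments(query=None, start_date=None, end_date=None, limit=10):
    """Arguments for searchPapers with the documented defaults applied."""
    today = date.today()
    start = date.fromisoformat(start_date) if start_date else today - timedelta(days=7)
    end = date.fromisoformat(end_date) if end_date else today
    if start > end:
        raise ValueError("start_date must not be after end_date")
    if start < today - timedelta(days=365):  # documented 12-month max lookback
        raise ValueError("start_date exceeds the 12-month lookback")
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50")
    args = {"start_date": start.isoformat(), "end_date": end.isoformat(), "limit": limit}
    if query:
        args["query"] = query
    return args

print(search_papers_arguments(query="quantum error correction"))
```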
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; description adds critical constraints missing from annotations: 'max lookback 12 months', 'Returns newest first', and explains AI decomposition behavior. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured progression: purpose → usage → exclusions → behavioral constraints → related tools → examples. Each sentence provides unique value; no redundancy despite information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by documenting result ordering, limits (max 50), and directing to getPaperDetails for full data. All 4 parameters well-covered; completeness is high for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). Description adds 'max lookback 12 months' constraint not in schema, and consolidates query examples that illustrate natural language capabilities, adding semantic context beyond structured descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Search' + specific resource 'quantum computing research papers from arXiv'. Distinguishes from siblings by explicitly stating 'NOT for jobs (use searchJobs) or researcher profiles (use searchCollaborators)' and clarifies relationship to getPaperDetails.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('when the user asks about recent research, specific papers, or academic topics') and when not to use with named alternatives. Also directs to getPaperDetails for full abstracts, creating clear decision boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
