# jobs_search
Search for job listings on X.com using specific keywords to find relevant employment opportunities.
## Instructions

Search for jobs on x.com/jobs.
## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Example value: `developer` | |
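Since the schema documents only a single required parameter, an invocation is straightforward. The following is a minimal sketch of the JSON-RPC `tools/call` payload an MCP client might send for this tool; the request `id` and framing are illustrative assumptions, not something this page specifies.

```python
import json

# Hypothetical JSON-RPC request for the jobs_search tool.
# "query" is the only documented (and required) parameter.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "jobs_search",
        "arguments": {"query": "developer"},
    },
}

print(json.dumps(request, indent=2))
```

Note that nothing on this page specifies the shape of the response (job IDs versus full listings), so the result of such a call should be inspected rather than assumed.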
**Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?**
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention return format (job IDs? full listings?), pagination behavior, rate limits, or whether the search is real-time. It only specifies the target domain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
**Is the description appropriately sized, front-loaded, and free of redundancy?**
The single sentence is front-loaded and contains no filler words. However, given the complete absence of annotations and output schema, the description is arguably too brief to stand alone as sufficient documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
**Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?**
For a single-parameter tool, the description covers the basic operational scope but lacks necessary context about response structure and behavioral constraints that would typically be provided by annotations or an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
**Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?**
Schema has 100% coverage with the 'query' parameter example 'developer'. The description adds no semantic context about what the query accepts (keywords, company names, locations), but since the schema is fully documented, it meets the baseline without requiring compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
**Does the description clearly state what the tool does and how it differs from similar tools?**
The description clearly states the verb (search) and resource (jobs on x.com/jobs), specifying the exact platform domain. It implicitly distinguishes from the general 'search' sibling by restricting scope to X's job platform, though it doesn't explicitly contrast the two tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
**Does the description explain when to use this tool, when not to, or what alternatives exist?**
No guidance provided on when to use this versus the general 'search' tool or other discovery methods. No mention of prerequisites like authentication requirements or geographic limitations of X's job listings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/BACH-AI-Tools/bachai-twitter-api45'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.
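For scripts that prefer Python over curl, the same lookup can be sketched with the standard library. The network call is left commented out; whether this endpoint requires authentication is not stated on this page, so treat the unauthenticated GET as an assumption to verify.

```python
from urllib.request import Request, urlopen

# Same server-metadata lookup as the curl example above: a plain GET.
url = "https://glama.ai/api/mcp/v1/servers/BACH-AI-Tools/bachai-twitter-api45"
req = Request(url, method="GET")

# Uncomment to actually issue the request:
# with urlopen(req) as resp:
#     print(resp.read().decode())

print(req.full_url)
```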