X Twitter Scraper - Xquik MCP Server
Server Details
Real-time X (Twitter) data: 120+ API endpoints, search, bulk extraction, writes
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Xquik-dev/x-twitter-scraper
- GitHub Stars: 39
- Server Listing: X Twitter Scraper
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.2/5 across 2 of 2 tools scored.
The two tools have entirely distinct purposes with zero overlap: 'explore' is strictly for browsing the local API specification catalog (no network calls), while 'xquik' performs actual API execution. An agent cannot confuse the documentation browser with the execution engine.
The naming follows no consistent pattern: 'explore' uses a generic descriptive verb while 'xquik' uses a branded/product name as a noun. There is no shared verb_noun schema, prefix convention, or casing standard tying them together.
The server claims to wrap 121 endpoints across 12 major categories (tweet operations, user management, DMs, media extraction, monitoring, AI features) yet exposes only 1 functional execution tool. Cramming this scope into a single monolithic tool represents an extreme mismatch with MCP granularity best practices; this domain warrants 10-15 discrete tools.
Despite the monolithic presentation, the 'xquik' tool description claims lifecycle coverage for the domain: create/read tweets, user lookup, engagement actions (like/follow), DMs, bulk extraction, monitoring, and AI composition. Only minor gaps like list management or advanced search filters might be unclear from the description.
Available Tools
2 tools

explore (Read-only, Idempotent)
Search the Xquik X (Twitter) API specification. Browse 122 endpoints across 12 categories: tweet search, user lookup, user tweets, user likes, user media, favoriters, mutual followers, bookmarks, notifications, timeline, DM history, media download, articles, trends, radar, 23 extraction tools (followers, following, replies, quotes, retweets, mentions, community members, list members, space participants), giveaway draws, account monitoring, webhooks, AI composition, style analysis, drafts, write actions, credits. No network calls - searches an in-memory endpoint catalog. Free, no authentication needed to browse.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | JavaScript async arrow function to execute. For explore: filters spec.endpoints (EndpointInfo[]). For xquik: calls xquik.request(path, options?) to execute X/Twitter API operations. Auth is injected automatically. | |
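As a rough illustration of what the `code` parameter might look like for the explore tool: the parameter docs say the function filters `spec.endpoints` (EndpointInfo[]), but the exact field names (`path`, `method`, `category`) and how `spec` enters scope are assumptions, so this is a local sketch against a mock catalog, not the real spec.

```javascript
// Hypothetical sketch of the `code` parameter for the explore tool.
// Field names and the way `spec` is passed in are assumptions.
const code = async (spec) =>
  spec.endpoints.filter((e) => e.category === 'user lookup');

// Local illustration against a mock catalog (not the real spec):
const mockSpec = {
  endpoints: [
    { path: '/user/by-username', method: 'GET', category: 'user lookup' },
    { path: '/tweets/search', method: 'GET', category: 'tweet search' },
  ],
};

code(mockSpec).then((hits) => console.log(hits.map((h) => h.path)));
// prints [ '/user/by-username' ]
```

Because the server never documents the return-value contract, an agent would have to guess whether to return the filtered array, a count, or formatted text.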
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context that the operation is local ('in-memory endpoint catalog', 'no network calls') which complements the readOnly/destructive annotations. However, it completely omits the critical behavioral detail that the tool apparently executes arbitrary asynchronous code provided by the user. The execution model, sandboxing constraints, and error handling for this code execution are undisclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core purpose reasonably well, but the exhaustive enumeration of all 121 endpoints and 23 extraction tools consumes significant space without clarifying how the 'code' parameter interacts with these categories. The list borders on reference material rather than descriptive guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool apparently executes user-provided async functions—a high-complexity operation—the description is incomplete. It lacks explanation of the execution environment, return value structure, error handling for malformed code, or examples of valid 'code' inputs. The annotations confirm safety but the description does not explain the execution mechanism.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds no context for the 'code' parameter. More critically, the description's framing (browsing categories) does not align with the parameter's nature (executing async functions). A baseline of 3 would apply for silence, but the conceptual mismatch between the description and the parameter's execution semantics warrants a lower score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description claims the tool searches/browses an API specification catalog, but fails to reconcile this with the input parameter named 'code' that accepts an 'Async arrow function to execute.' This creates a fundamental ambiguity: is this a search interface or a code execution sandbox? The agent cannot determine the actual purpose from the description alone given this mismatch.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description notes 'No network calls' and 'no authentication needed,' it provides no guidance on when to use this tool versus the sibling 'xquik' tool, nor does it explain how to construct the required 'code' parameter or what the async function should return. The agent lacks critical usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xquik (Destructive)
Execute X (Twitter) API calls via Xquik. Search tweets by keyword. Get user profiles by username. Fetch tweet threads and articles. Post tweets, reply to tweets, like and retweet, follow and unfollow users, send DMs. Run bulk extractions: scrape followers, extract reply threads, collect retweets, download media. Monitor accounts for new tweets with webhook delivery. Run giveaway draws from tweet replies. Compose algorithm-optimized tweets with AI scoring. Analyze and clone writing styles. Browse trending topics from 7 news sources. 122 endpoints, reads from $0.00015/call. Requires API key authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | JavaScript async arrow function to execute. For explore: filters spec.endpoints (EndpointInfo[]). For xquik: calls xquik.request(path, options?) to execute X/Twitter API operations. Auth is injected automatically. | |
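The parameter docs give only the signature `xquik.request(path, options?)` with auth injected automatically. A sketch of a plausible call follows; the endpoint path, the `options` shape, and the mock client standing in for the injected `xquik` object are all assumptions, since the description documents none of them.

```javascript
// Hypothetical sketch of the `code` parameter for the xquik tool.
// The real server injects an authenticated `xquik` client; this mock
// stands in for it so the sketch runs locally. The endpoint path and
// options shape are illustrative guesses.
const xquik = {
  request: async (path, options = {}) => ({ path, options, ok: true }),
};

const code = async () =>
  xquik.request('/user/by-username', { params: { username: 'jack' } });

code().then((res) => console.log(res.ok, res.path));
// prints true /user/by-username
```

This is exactly the gap the review below flags: without documented paths, option shapes, or error semantics, a first-attempt call is guesswork.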
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds pricing ($0.00015/call), endpoint count (121), and authentication requirements beyond the annotations. However, fails to disclose critical execution model details: sandbox security, available libraries in the runtime, or what variables/global objects the code can access.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a bloated run-on that dumps 15+ capabilities in dense prose. While information-dense, it lacks prioritization: the critical 'code execution' nature is buried under feature marketing, and the pricing/auth information, though useful, is appended awkwardly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool that executes arbitrary code (high complexity), the description omits essential context: return value structure, error handling patterns, rate limiting behavior, and the execution environment's API surface. Without output schema, these absences are critical gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single 'code' parameter, establishing a baseline of 3. The description mentions '121 endpoints' which implicitly hints at the function's purpose, but provides no syntax guidance, examples, or explanation of how to structure the async arrow function to interact with the Twitter API.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description lists extensive API capabilities (search, post, DM, scrape, etc.) but obscures the actual mechanism: the tool accepts arbitrary JavaScript code via a single 'code' parameter to execute these operations. It fails to distinguish from sibling 'explore'—likely a discovery tool versus this execution wrapper.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use 'xquik' versus sibling 'explore', nor how to select between the 121 endpoints. No prerequisites mentioned (e.g., when API key auth is required vs optional).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server successfully. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.