X Twitter Scraper - Xquik MCP Server

Ownership verified

Server Details

Real-time X (Twitter) data: 120+ API endpoints, search, bulk extraction, writes

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
Xquik-dev/x-twitter-scraper
GitHub Stars
39
Server Listing
X Twitter Scraper

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 2.2/5 across 2 of 2 tools scored.

Server Coherence: B
Disambiguation: 5/5

The two tools have entirely distinct purposes with zero overlap: 'explore' is strictly for browsing the local API specification catalog (no network calls), while 'xquik' performs actual API execution. An agent cannot confuse the documentation browser with the execution engine.

Naming Consistency: 2/5

The naming follows no consistent pattern: 'explore' uses a generic descriptive verb while 'xquik' uses a branded/product name as a noun. There is no shared verb_noun schema, prefix convention, or casing standard tying them together.

Tool Count: 1/5

The server claims to wrap 121 endpoints across 12 major categories (tweet operations, user management, DMs, media extraction, monitoring, AI features) yet exposes only 1 functional execution tool. Cramming this scope into a single monolithic tool represents an extreme mismatch with MCP granularity best practices; this domain warrants 10-15 discrete tools.

Completeness: 4/5

Despite the monolithic presentation, the 'xquik' tool description claims lifecycle coverage for the domain: create/read tweets, user lookup, engagement actions (like/follow), DMs, bulk extraction, monitoring, and AI composition. Only minor gaps like list management or advanced search filters might be unclear from the description.

Available Tools

2 tools
explore: C
Read-only · Idempotent

Search the Xquik X (Twitter) API specification. Browse 122 endpoints across 12 categories: tweet search, user lookup, user tweets, user likes, user media, favoriters, mutual followers, bookmarks, notifications, timeline, DM history, media download, articles, trends, radar, 23 extraction tools (followers, following, replies, quotes, retweets, mentions, community members, list members, space participants), giveaway draws, account monitoring, webhooks, AI composition, style analysis, drafts, write actions, credits. No network calls - searches an in-memory endpoint catalog. Free, no authentication needed to browse.

Parameters (JSON Schema)

Name: code
Required: Yes
Description: JavaScript async arrow function to execute. For explore: filters spec.endpoints (EndpointInfo[]). For xquik: calls xquik.request(path, options?) to execute X/Twitter API operations. Auth is injected automatically.
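The schema only names the 'code' parameter; it gives no example. As an illustration, here is what a value passed to explore might look like. Only spec.endpoints and the EndpointInfo[] type come from the description; the field names (method, path, category, description) and the mock catalog below are assumptions made so the sketch runs standalone.

```javascript
// Mock catalog standing in for the server's in-memory spec.
// The field names here are guesses, not the real EndpointInfo shape.
const spec = {
  endpoints: [
    { method: "GET",  path: "/tweets/search",   category: "search", description: "Search tweets by keyword" },
    { method: "GET",  path: "/users/:username", category: "users",  description: "Look up a user profile" },
    { method: "POST", path: "/tweets",          category: "write",  description: "Post a tweet" },
  ],
};

// The 'code' argument would be an async arrow function like this one,
// which filters the catalog for search-related endpoints.
const code = async () =>
  spec.endpoints.filter((e) => e.category === "search");

code().then((hits) => console.log(hits.map((e) => e.path)));
```

Because the tool makes no network calls, a function like this simply returns the matching catalog entries.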
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context that the operation is local ('in-memory endpoint catalog', 'no network calls') which complements the readOnly/destructive annotations. However, it completely omits the critical behavioral detail that the tool apparently executes arbitrary asynchronous code provided by the user. The execution model, sandboxing constraints, and error handling for this code execution are undisclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description front-loads the core purpose reasonably well, but the exhaustive enumeration of all 121 endpoints and 23 extraction tools consumes significant space without clarifying how the 'code' parameter interacts with these categories. The list borders on reference material rather than descriptive guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool apparently executes user-provided async functions—a high-complexity operation—the description is incomplete. It lacks explanation of the execution environment, return value structure, error handling for malformed code, or examples of valid 'code' inputs. The annotations confirm safety but the description does not explain the execution mechanism.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, the description adds no context for the 'code' parameter. More critically, the description's framing (browsing categories) does not align with the parameter's nature (executing async functions). Silence alone would merit a baseline of 3, but the conceptual mismatch between the description and the parameter's execution semantics warrants a lower score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description claims the tool searches/browses an API specification catalog, but fails to reconcile this with the input parameter named 'code' that accepts an 'Async arrow function to execute.' This creates a fundamental ambiguity: is this a search interface or a code execution sandbox? The agent cannot determine the actual purpose from the description alone given this mismatch.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description notes 'No network calls' and 'no authentication needed,' it provides no guidance on when to use this tool versus the sibling 'xquik' tool, nor does it explain how to construct the required 'code' parameter or what the async function should return. The agent lacks critical usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xquik: C
Destructive

Execute X (Twitter) API calls via Xquik. Search tweets by keyword. Get user profiles by username. Fetch tweet threads and articles. Post tweets, reply to tweets, like and retweet, follow and unfollow users, send DMs. Run bulk extractions: scrape followers, extract reply threads, collect retweets, download media. Monitor accounts for new tweets with webhook delivery. Run giveaway draws from tweet replies. Compose algorithm-optimized tweets with AI scoring. Analyze and clone writing styles. Browse trending topics from 7 news sources. 122 endpoints, reads from $0.00015/call. Requires API key authentication.

Parameters (JSON Schema)

Name: code
Required: Yes
Description: JavaScript async arrow function to execute. For explore: filters spec.endpoints (EndpointInfo[]). For xquik: calls xquik.request(path, options?) to execute X/Twitter API operations. Auth is injected automatically.
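Again the schema names only the 'code' parameter. The lone call the description documents is xquik.request(path, options?), with auth injected automatically; the endpoint path, the options shape, and the return structure in the sketch below are illustrative guesses, with a mock standing in for the injected xquik object so the snippet runs standalone.

```javascript
// Mock of the injected client. The real xquik.request presumably hits
// the remote API; this stub just echoes its arguments with empty data.
const xquik = {
  request: async (path, options = {}) => ({ path, options, data: [] }),
};

// A 'code' value might search tweets and return only the payload it needs.
// The "/tweets/search" path and params shape are hypothetical.
const code = async () => {
  const res = await xquik.request("/tweets/search", {
    params: { query: "mcp servers", limit: 10 },
  });
  return res.data;
};

code().then((tweets) => console.log(tweets.length));
```

Unlike explore, a function like this would perform real network calls, consume paid credits, and can trigger destructive write actions, so the returned value should be inspected before chaining further requests.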
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds pricing ($0.00015/call), endpoint count (121), and authentication requirements beyond the annotations. However, fails to disclose critical execution model details: sandbox security, available libraries in the runtime, or what variables/global objects the code can access.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is an extremely bloated run-on that dumps 15+ capabilities into dense prose. While information-dense, it lacks prioritization: the critical code-execution nature of the tool is buried under feature marketing. The pricing and auth information is useful but awkwardly appended.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool that executes arbitrary code (high complexity), the description omits essential context: return value structure, error handling patterns, rate limiting behavior, and the execution environment's API surface. Without output schema, these absences are critical gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single 'code' parameter, establishing a baseline of 3. The description mentions '121 endpoints' which implicitly hints at the function's purpose, but provides no syntax guidance, examples, or explanation of how to structure the async arrow function to interact with the Twitter API.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description lists extensive API capabilities (search, post, DM, scrape, etc.) but obscures the actual mechanism: the tool accepts arbitrary JavaScript code via a single 'code' parameter to execute these operations. It fails to distinguish from sibling 'explore'—likely a discovery tool versus this execution wrapper.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use 'xquik' versus sibling 'explore', nor how to select between the 121 endpoints. No prerequisites mentioned (e.g., when API key auth is required vs optional).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
