Glama

Server Details

Twitter/X, Instagram, TikTok data for AI agents. 1.5B+ posts, profiles & comments. No API keys.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Available Tools

34 tools
cancelOperation

Cancel a running operation. Required: operationId. Gracefully stops the operation at its next checkpoint and returns a confirmation. Use checkOperationStatus to verify that the cancellation completed.

Parameters
- _requestId (optional)
- operationId (required): The operation ID.
checkAccessKeyStatus

Check access key status without revealing the key itself. Required: authentication. Returns: status, metadata.

Parameters
- _requestId (optional)
checkOperationStatus

Check status and retrieve results from background operations. Required: operationId. HANDLES TWO TYPES: (1) Query operations (op_toolname_xxx): returns paginated results plus dataDumpExportOperationId. (2) Export operations (op_datadump_xxx): returns a download URL for the CSV file. CRITICAL: You MUST keep polling until the operation finishes. DO NOT stop until status is completed/failed/cancelled. POLLING LOOP: (1) Call immediately after getting the operation ID. (2) If status=running, wait exactly 5 seconds. (3) Call again. (4) Repeat steps 2-3 until status changes to completed/failed/cancelled. (5) Stop only when the operation is finished. Returns: for queries - results, pagination, dataDumpExportOperationId; for exports - downloadUrl, fileName, totalRows. NEVER make calls without 5-second waits between them.

Parameters
- _requestId (optional)
- operationId (required): The operation ID.
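The polling contract above can be sketched in Python. `call_tool` is a hypothetical stand-in for however your MCP client invokes a server tool (the real client API is not specified on this page); it is stubbed here so the loop's logic is visible and testable.

```python
import time

def poll_operation(call_tool, operation_id, interval=5.0, max_polls=120):
    """Poll checkOperationStatus until the operation leaves the 'running' state."""
    terminal = {"completed", "failed", "cancelled"}
    # (1) Call immediately after receiving the operation ID.
    result = call_tool("checkOperationStatus", {"operationId": operation_id})
    polls = 1
    while result.get("status") not in terminal and polls < max_polls:
        # (2) Wait between calls, per the tool's 5-second contract.
        time.sleep(interval)
        result = call_tool("checkOperationStatus", {"operationId": operation_id})
        polls += 1
    return result

# Stubbed client: reports 'running' twice, then 'completed'.
_responses = iter([
    {"status": "running"},
    {"status": "running"},
    {"status": "completed", "results": [], "dataDumpExportOperationId": "op_datadump_xyz"},
])

def fake_call_tool(name, args):
    assert name == "checkOperationStatus" and "operationId" in args
    return next(_responses)

final = poll_operation(fake_call_tool, "op_counttweets_123", interval=0.0)
print(final["status"])
```

In production code, keep `interval=5.0` as the description requires; the zero interval here only keeps the demonstration fast.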
countTweets

Count tweets containing a specific phrase within a date range. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt. Returns the total count of matching tweets (int), or zero if none found. QUERY SYNTAX: Use double quotes for an exact phrase match (e.g., "climate crisis"). Without quotes, matches any word. ALWAYS use quotes when the user requests an exact count or a specific phrase. Examples: "machine learning" (exact phrase count), bitcoin (any tweet containing bitcoin). Filters: date range (startDate/endDate in YYYY-MM-DD). IMPORTANT: The current year is 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Default: startDate = 6 months ago if not provided. Use for analytics and trend analysis without retrieving full tweet data.

Parameters
- phrase (required): Count only tweets containing the phrase.
- endDate (optional): End date in YYYY-MM-DD format. Default: current date.
- startDate (optional): Start date in YYYY-MM-DD format. Default: 6 months ago.
- _requestId (optional)
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
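A minimal sketch of assembling countTweets arguments, assuming the parameter names listed above. The helper is illustrative, not part of the server: it quotes the phrase for an exact match and makes the six-months-ago startDate default explicit, which sidesteps the year-off-by-one mistake the description warns about.

```python
from datetime import date, timedelta

def build_count_tweets_args(phrase, exact=True, start=None, end=None):
    """Assemble arguments for countTweets. Quoting the phrase forces an
    exact-phrase match; unquoted text matches any of its words."""
    query = f'"{phrase}"' if exact else phrase
    today = date.today()
    # The tool defaults startDate to ~6 months ago; computing it explicitly
    # keeps the year consistent with the actual current date.
    start = start or (today - timedelta(days=182)).isoformat()
    end = end or today.isoformat()
    return {"phrase": query, "startDate": start, "endDate": end}

args = build_count_tweets_args("climate crisis")
print(args["phrase"])
```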
getInstagramCommentsByPostId

Get COMMENT CONTENT (text, likes) for an Instagram post. Returns the actual comment objects with text and metadata. RETURNS COMMENT DATA: id, text, username, createdAtDate, likeCount, childCommentCount. Use for reading what people said. NOT FOR USER PROFILES: To get detailed user profiles (bio, followerCount, followingCount) of commenters, use getInstagramPostInteractingUsers with interactionType="commenters" instead. IMPORTANT: postId must be in strong_id format (e.g., "3606450040306139062_4836333238") - use the full "id" value from other Instagram tools, NOT just the media_id. Server-side pagination (100 comments per page). Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CSV EXPORT: Response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Ideal for: sentiment analysis, reading discussions, analyzing comment content, engagement patterns. FIRST CALL: Omit pageNumber and tableName. Creates a cached table, returns page 1 with pagination metadata. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. Date filters: OMIT startDate/endDate parameters by default. ONLY pass these if the user explicitly requests a specific date range (YYYY-MM-DD format). IMPORTANT: The current year is 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Optional fields: ["id", "text", "username", "createdAtDate", "likeCount"]. This is a safe, read-only tool for analyzing searchable information.

Parameters
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "text", "username", "createdAtDate"]. AVAILABLE FIELDS: Core: id, text, parentPostId, type, parentCommentId, repliedToCommentId, childCommentCount, userId, username, fullName, createdAt, createdAtTimestamp, createdAtDate. Engagement: likeCount. Status: status, isSpam, hasTranslation. EXAMPLES: ["id", "text"] for minimal, ["id", "text", "username", "createdAtDate", "likeCount"] for basic analysis, or specify all fields if needed.
- postId (required): REQUIRED FORMAT: strong_id (e.g., "3606450040306139062_4836333238"). This is the complete post identifier consisting of media_id + underscore + user_id. When receiving the post id from other Instagram tools, use the full "id" value. DO NOT use only the media_id portion.
- endDate (optional)
- startDate (optional)
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Optional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
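The first-call/subsequent-pages pagination dance can be sketched as below. `call_tool` is again a hypothetical client shim, and for brevity the stub returns results directly, eliding the operation-ID/checkOperationStatus polling step that the real tool requires.

```python
def fetch_all_comments(call_tool, post_id, max_pages=50):
    """Walk server-side pagination: the first call omits pageNumber/tableName,
    and later calls reuse the tableName returned with page 1."""
    first = call_tool("getInstagramCommentsByPostId", {"postId": post_id})
    rows = list(first["results"])
    table = first["pagination"]["tableName"]
    total_pages = first["pagination"]["totalPages"]
    for page in range(2, min(total_pages, max_pages) + 1):
        nxt = call_tool("getInstagramCommentsByPostId",
                        {"postId": post_id, "tableName": table, "pageNumber": page})
        rows.extend(nxt["results"])
    return rows

# Stub returning two pages of fake comments.
def fake_call_tool(name, args):
    page = args.get("pageNumber", 1)
    if page > 1:
        # Per the docs, pageNumber > 1 is invalid without tableName.
        assert args.get("tableName"), "pageNumber > 1 requires tableName"
    return {"results": [f"comment-{page}"],
            "pagination": {"tableName": "tbl_demo", "totalPages": 2}}

comments = fetch_all_comments(fake_call_tool, "3606450040306139062_4836333238")
print(comments)
```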
getInstagramPostInteractingUsers

Get USER PROFILES of people who interacted with an Instagram post. Returns full user data (bio, followerCount, followingCount, etc.). RETURNS USER PROFILES: id, username, fullName, biography, followerCount, followingCount, isVerified, profilePicUrl. Use for analyzing WHO engaged with a post. NOT FOR COMMENT TEXT: To read the actual comment content (what people wrote), use getInstagramCommentsByPostId instead. INTERACTION TYPES: "commenters" (users who commented), "likers" (users who liked). WHEN TO USE THIS TOOL: Analyzing commenters/likers demographics, finding influencers who engaged, building audience profiles, network analysis of who interacts with posts. WHEN TO USE getInstagramCommentsByPostId: Reading comment text, sentiment analysis of what was said, analyzing discussion content. Server-side pagination (1000 users per page with default fields). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata. SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Optional fields (default: ["id", "username", "fullName"]). Available: biography, isPrivate, isVerified, followerCount, followingCount, mediaCount, profilePicUrl. This is a safe, read-only tool for analyzing searchable information.

Parameters
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "username", "fullName"]. AVAILABLE FIELDS: Core: id, username, fullName, biography, isPrivate, isVerified. Engagement: followerCount, followingCount, mediaCount. Profile: profilePicUrl, profilePicId, profileUrl, externalUrl, hasAnonymousProfilePicture. Timestamps: lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "username"] for minimal, ["username", "fullName", "followerCount"] for basic info, or specify all fields if needed.
- postId (required): REQUIRED FORMAT: strong_id (e.g., "3606450040306139062_4836333238"). This is the complete post identifier consisting of media_id + underscore + user_id. When receiving the post id from other Instagram tools, use the full "id" value. DO NOT use only the media_id portion.
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Optional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
- interactionType (required): Type of interaction to retrieve users for. Options: "commenters" (users who commented on the post), "likers" (users who liked the post). Each type queries different relationships in the data.
getInstagramPostsByIds

Get multiple Instagram posts by IDs (1-100 IDs per request). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. Returns only found posts, omitting not-found IDs for flexibility. First searches database, then external API for missing/stale data in parallel. Use when you have multiple exact post IDs. NOT for search - use getInstagramPostsByKeywords. PERFORMANCE: Much more efficient than multiple single-ID calls. Batches database queries and parallelizes API calls. IMPORTANT: postIds must be in strong_id format (e.g., "3606450040306139062_4836333238") - use the full "id" value from other Instagram tools, NOT just the media_id. Optional fields parameter for performance: ["id", "caption", "likeCount"]. Returns: results array with id, caption, userId, username, createdAtDate, engagement metrics, count, dataSource. This is a safe, read-only tool for analyzing searchable information.

Parameters
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "caption", "username", "createdAtDate"]. AVAILABLE FIELDS: Core: id, postType, userId, username, fullName, caption, createdAt, createdAtTimestamp, createdAtDate. Engagement: likeCount, commentCount, reshareCount, videoPlayCount. Media: mediaType, codeUrl, imageUrl, videoUrl, audioOnlyUrl, profilePicUrl, videoSubtitlesUri, subtitles, videoDuration. EXAMPLES: ["id", "caption"] for minimal, ["id", "caption", "username", "createdAtDate", "likeCount"] for basic analysis, or specify all fields if needed.
- postIds (required): Array of Instagram post IDs to fetch (1-100 IDs). Returns only found posts, omitting not-found IDs. REQUIRED FORMAT: strong_id (e.g., "3606450040306139062_4836333238"). This is the complete post identifier consisting of media_id + underscore + user_id. When receiving the post id from other Instagram tools, use the full "id" value. DO NOT use only the media_id portion.
- _requestId (optional)
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
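Since postIds accepts at most 100 IDs per request and each must be in strong_id form, a caller-side helper (illustrative, not part of the server) can validate the format and split larger lists into compliant batches:

```python
import re

STRONG_ID = re.compile(r"^\d+_\d+$")  # media_id + "_" + user_id

def chunk_post_ids(post_ids, batch_size=100):
    """Validate strong_id format and split into batches of at most 100 IDs,
    the per-request limit for getInstagramPostsByIds."""
    bad = [p for p in post_ids if not STRONG_ID.match(p)]
    if bad:
        raise ValueError(f"not in strong_id (media_id_user_id) format: {bad}")
    return [post_ids[i:i + batch_size] for i in range(0, len(post_ids), batch_size)]

# 250 synthetic IDs -> three batches of 100, 100, and 50.
ids = [f"{1000 + n}_{42}" for n in range(250)]
batches = chunk_post_ids(ids)
print([len(b) for b in batches])
```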
getInstagramPostsByKeywords

Search Instagram posts by keywords with server-side pagination (100 posts per page). Searches both post captions and video subtitles. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download the CSV and use code execution to analyze the full dataset without pagination limits. Ideal for: content analysis, hashtag trends, brand monitoring across thousands of posts. FIRST CALL: Omit pageNumber and tableName. Creates a cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). First searches the database, then the external API if data is stale (>1 week) or no results are found. QUERY SYNTAX: Use double quotes for an exact phrase match (e.g., "travel photography"). Without quotes, matches any word. ALWAYS use quotes when the user requests an exact search or a specific phrase. CRITICAL: MUST use AND/OR operators between phrases, including quotes and parentheses. Examples: ("fashion week" OR "style trends"), ("sunset beach" AND "california coast"), (travel AND (food OR dining)). Simple queries: "fashion week" (single exact phrase), travel food (any word). Date filters: OMIT startDate/endDate parameters by default. ONLY pass these if the user explicitly requests a specific date range (YYYY-MM-DD format). IMPORTANT: The current year is 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Optional fields parameter for performance: ["id", "caption", "username", "createdAtDate", "likeCount", "subtitles"]. Returns: results array, count, query, pagination object, dataDumpExportOperationId for the CSV file URL. This is a safe, read-only tool for analyzing searchable information.

Parameters
- query (required): Search query string. Supports keywords and phrases for searching Instagram posts.
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "caption", "username", "createdAtDate"]. AVAILABLE FIELDS: Core: id, postType, userId, username, fullName, caption, createdAt, createdAtTimestamp, createdAtDate. Engagement: likeCount, commentCount, reshareCount, videoPlayCount. Media: mediaType, codeUrl, imageUrl, videoUrl, audioOnlyUrl, profilePicUrl, videoSubtitlesUri, subtitles, videoDuration. EXAMPLES: ["id", "caption"] for minimal, ["id", "caption", "username", "createdAtDate", "likeCount"] for basic analysis, or specify all fields if needed.
- endDate (optional)
- startDate (optional)
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Optional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
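The query syntax rules (quoted exact phrases, explicit AND/OR between phrases, parentheses for grouping) lend themselves to two tiny illustrative helpers; these are caller-side conveniences, not part of the server:

```python
def phrase(p):
    """Exact-phrase term: wrap in double quotes."""
    return f'"{p}"'

def group(op, *terms):
    """Join terms with an explicit AND/OR operator, parenthesized,
    as the query syntax requires between phrases."""
    return "(" + f" {op} ".join(terms) + ")"

# ("fashion week" OR "style trends"), narrowed with an unquoted keyword.
q = group("AND", group("OR", phrase("fashion week"), phrase("style trends")), "milan")
print(q)
```

Composing queries this way guarantees the explicit operators the description demands, rather than relying on implicit behavior between adjacent words.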
getInstagramPostsByUser

Get posts by Instagram user ID or username with server-side pagination (100 posts per page). Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. Use identifierType="id" for a numeric user ID, identifierType="username" for a username. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download the CSV and use code execution to analyze the full dataset without pagination limits. Ideal for: engagement analysis, content trends, posting patterns across all posts. FIRST CALL: Omit pageNumber and tableName. Creates a cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). Date filters: OMIT startDate/endDate parameters by default to retrieve all posts. ONLY pass these if the user explicitly requests a specific date range (YYYY-MM-DD format). IMPORTANT: The current year is 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Optional fields parameter for performance: ["id", "caption", "username", "createdAtDate", "likeCount"]. Returns: results array, count, pagination object, dataDumpExportOperationId for the CSV file URL. This is a safe, read-only tool for analyzing searchable information.

Parameters
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "caption", "username", "createdAtDate"]. AVAILABLE FIELDS: Core: id, postType, userId, username, fullName, caption, createdAt, createdAtTimestamp, createdAtDate. Engagement: likeCount, commentCount, reshareCount, videoPlayCount. Media: mediaType, codeUrl, imageUrl, videoUrl, audioOnlyUrl, profilePicUrl, videoSubtitlesUri, subtitles, videoDuration. EXAMPLES: ["id", "caption"] for minimal, ["id", "caption", "username", "createdAtDate", "likeCount"] for basic analysis, or specify all fields if needed.
- endDate (optional)
- startDate (optional)
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- identifier (required): User ID (numeric) or username, depending on identifierType.
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Optional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
- identifierType (required): Type of identifier provided. Use "id" for a numeric user ID, "username" for a username.
getInstagramUser

Get Instagram user profile by ID or username. Use identifierType="id" for numeric user ID, identifierType="username" for username. For username: Use ONLY when you have the precise username (e.g., "cristiano"). For person names or fuzzy search, use searchInstagramUsers instead. Optional fields parameter for performance (default: ["id", "username", "fullName"]). Available fields: id, username, fullName, biography, isPrivate, isVerified, followerCount, followingCount, mediaCount, profilePicUrl, and more. Returns: single user profile with userId, username, fullName, followerCount, followingCount, mediaCount, biography, isVerified, isPrivate, profilePicUrl. This is a safe, read-only tool for analyzing searchable information.

Parameters
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "username", "fullName"]. AVAILABLE FIELDS: Core: id, username, fullName, biography, isPrivate, isVerified. Engagement: followerCount, followingCount, mediaCount. Profile: profilePicUrl, profilePicId, profileUrl, externalUrl, hasAnonymousProfilePicture. Timestamps: lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "username"] for minimal, ["username", "fullName", "followerCount"] for basic info, or specify all fields if needed.
- _requestId (optional)
- identifier (required): User ID (numeric) or username, depending on identifierType.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- identifierType (required): Type of identifier provided. Use "id" for a numeric user ID, "username" for a username.
getInstagramUserConnections

Get Instagram user connections (followers or following) with server-side pagination (100 users per page). Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. Use connectionType="followers" for users who follow them, connectionType="following" for users they follow. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download the CSV and use code execution to analyze the full dataset without pagination limits. Ideal for: network analysis, audience demographics, engagement patterns. FIRST CALL: Omit pageNumber and tableName. Creates a cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows). SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). Optional fields parameter for performance (default: ["id", "username", "fullName"]). Available fields: id, username, fullName, biography, isPrivate, isVerified, followerCount, followingCount, mediaCount, profilePicUrl, and more. DATA FRESHNESS: Automatically checks data age (> 1 week triggers a refresh from the API). FORCE LATEST: Use sparingly - forceLatest=true bypasses cache for real-time data (increases latency/costs). Returns: results array, count (users in the current page), pagination object (resultsCount, totalRows, totalPages, pageSize, tableName), totalDataCount (actual total on Instagram), dataSource. CRITICAL - Understanding totalRows vs totalDataCount: totalRows indicates ONLY what we have in our database. totalDataCount (when present) shows the actual count from Instagram. If totalDataCount is missing or undefined, you CANNOT claim totalRows represents all connections - it only shows our partial database data. If totalDataCount > totalRows, we only have partial data. Always check whether totalDataCount exists before making claims about total counts. This is a safe, read-only tool for analyzing searchable information.

Parameters
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "username", "fullName"]. AVAILABLE FIELDS: Core: id, username, fullName, biography, isPrivate, isVerified. Engagement: followerCount, followingCount, mediaCount. Profile: profilePicUrl, profilePicId, profileUrl, externalUrl, hasAnonymousProfilePicture. Timestamps: lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "username"] for minimal, ["username", "fullName", "followerCount"] for basic info, or specify all fields if needed.
- username (required): Instagram username (without the @ symbol).
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Optional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
- connectionType (required): Type of connection to retrieve. Use "followers" for users who follow this account, "following" for users this account follows.
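The totalRows-vs-totalDataCount caveat reduces to a small decision function. This is an illustrative caller-side check, assuming the response field names documented above:

```python
def connection_data_coverage(pagination, total_data_count=None):
    """Decide what can honestly be claimed about a connections result.
    totalRows is only what the database holds; totalDataCount (when
    present) is the actual count on Instagram."""
    rows = pagination["totalRows"]
    if total_data_count is None:
        # Without totalDataCount we cannot claim rows == all connections.
        return rows, "partial-or-unknown"
    if total_data_count > rows:
        return rows, "partial"
    return rows, "complete"

print(connection_data_coverage({"totalRows": 900}))
print(connection_data_coverage({"totalRows": 900}, total_data_count=1200))
print(connection_data_coverage({"totalRows": 1200}, total_data_count=1200))
```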
getInstagramUsersByKeywords

Search for USERS who authored Instagram posts matching keywords. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt. USE CASE: Find users who have posted content about specific topics, keywords, or phrases. Returns unique, deduplicated user profiles. CSV EXPORT: Response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download the CSV and use code execution to analyze the full dataset without pagination limits. Ideal for: audience discovery, influencer identification, community analysis. FIRST CALL: Omit pageNumber and tableName. Creates a cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). QUERY SYNTAX: Use double quotes for an exact phrase match (e.g., "sustainable fashion"). Without quotes, matches any word. ALWAYS use quotes when the user requests an exact search or a specific phrase. Examples: "travel photography" (exact), fashion style (any word), "digital nomad" lifestyle (exact phrase + keyword). FILTERS: startDate/endDate: Filter by post date (YYYY-MM-DD format). OMIT by default; only use if the user explicitly requests a date range. IMPORTANT: The current year is 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Optional fields parameter for performance (default: ["id", "username", "fullName"]). Available fields: id, username, fullName, biography, isPrivate, isVerified, followerCount, followingCount, mediaCount, profilePicUrl, and more. AGGREGATE FIELDS (from matching posts) - MUST BE EXPLICITLY REQUESTED IN FIELDS: aggRelevance (relevance score for sorting), relevantPostsCount (count of matching posts per user), relevantPostsLikesSum, relevantPostsCommentsSum, relevantPostsResharesSum, relevantPostsVideoPlaysSum. These return aggregated metrics from all matched posts for each user. Returns: results array of unique user profiles, count, pagination object, dataDumpExportOperationId for CSV. This is a safe, read-only tool for analyzing searchable information.

Parameters
- query (required): Full-text search of Instagram post captions to find users who authored matching posts. Searches posts, returns UNIQUE user authors (deduplicated). EXACT PHRASES: Wrap in double quotes - "sustainable fashion" matches that exact phrase. KEYWORDS: Without quotes, matches posts containing any of the words - travel food photography. BOOLEAN OPERATORS: MUST explicitly use the keywords AND, OR, NOT (uppercase or lowercase). NO implicit operators - a space between words means OR by default. Examples requiring explicit operators: Use "travel photography" AND nature (not "travel photography nature"). Use fashion OR style (not "fashion style"). PARENTHESES: Group terms for precise logic - (travel OR adventure) AND ("sustainable living" NOT luxury). FORBIDDEN: DO NOT use filter operators with colons (from:, to:, since:, until:) - use dedicated parameters instead. Query examples: "climate change" | fashion OR beauty | "digital nomad" AND remote | (startup OR entrepreneur) NOT "venture capital"
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "username", "fullName"]. AVAILABLE FIELDS: Core: id, username, fullName, biography, isPrivate, isVerified. Engagement: followerCount, followingCount, mediaCount. Profile: profilePicUrl, profilePicId, profileUrl, externalUrl, hasAnonymousProfilePicture. Timestamps: lastFetch, lastFetchDatetime, xLastUpdated. Aggregations (from matching posts, not all posts of the user): aggRelevance (relevance score), relevantPostsCount (count of matching posts), relevantPostsLikesSum, relevantPostsCommentsSum, relevantPostsResharesSum, relevantPostsVideoPlaysSum. EXAMPLES: ["id", "username"] for minimal, ["username", "fullName", "followerCount", "relevantPostsLikesSum", "relevantPostsCount"] to include engagement aggregations.
- endDate (optional)
- startDate (optional)
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Optional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
getRedditCommentsByKeywords

Search Reddit comments by keywords with server-side pagination (100 comments per page). Searches comment body text. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt.

CSV EXPORT: The response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download the CSV and use code execution to analyze the full dataset without pagination limits. Ideal for: sentiment analysis, discussion trends, community opinions across thousands of comments.

FIRST CALL: Omit pageNumber and tableName. Creates a cached table and returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). DATABASE-ONLY: Searches existing database records only.

QUERY SYNTAX: Use double quotes for an exact phrase match (e.g., "artificial intelligence"). Without quotes, matches any word. ALWAYS use quotes when the user requests an exact search or specific phrase. CRITICAL: MUST use AND/OR operators between phrases, including quotes and parentheses. Examples: ("machine learning" OR "artificial intelligence"), ("crypto" AND "investment"), (AI AND (gaming OR esports)). Simple queries: "machine learning" (single exact phrase), AI crypto (any word).

DATE FILTERS: Omit startDate/endDate by default. Only pass them if the user explicitly requests a specific date range (YYYY-MM-DD format). IMPORTANT: THE CURRENT YEAR IS 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended.

FILTERS: subreddit (limit to a specific subreddit, without the r/ prefix). Optional fields parameter for performance: ["id", "body", "authorUsername", "postSubredditName", "score", "createdAtDate"]. Returns: results array, count, query, pagination object, and dataDumpExportOperationId for the CSV file URL. This is a safe, read-only tool for analyzing searchable information.

Parameters

- query (required): Full-text search of comment content. Searches comment body text. EXACT PHRASES: Wrap in double quotes - "machine learning" matches that exact phrase. KEYWORDS: Without quotes, matches comments containing any of the words - AI robotics blockchain. BOOLEAN OPERATORS: MUST explicitly use the keywords AND, OR (uppercase or lowercase). NO implicit operators - space between words means OR by default. Examples requiring explicit operators: Use "deep learning" AND python (not "deep learning python"). Use tensorflow OR pytorch (not "tensorflow pytorch"). PARENTHESES: Group terms for precise logic - (AI OR "artificial intelligence") AND ethics. Query examples: "climate change" | AI OR blockchain | "neural networks" AND python | (startup OR entrepreneur)
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the comment fields you need. DEFAULT (if omitted): ["id", "body", "authorUsername", "createdAtDate"]. AVAILABLE FIELDS: Core: id, body, parentPostId, parentId. Author: authorId, authorUsername. Subreddit: postSubredditName, postSubredditId. Engagement: score, upvotes, downvotes, controversiality. Meta: depth, isSubmitter, stickied, collapsed, edited, distinguished. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated.
- endDate (optional)
- startDate (optional)
- subreddit (optional): Filter comments by subreddit name (without the r/ prefix). Use this parameter to limit the search to comments from a specific subreddit. Example: subreddit="wallstreetbets" finds all comments in r/wallstreetbets.
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- pageNumberEnd (optional): Ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
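The start-then-poll contract described above can be sketched in Python. Here `call_tool` is a hypothetical stand-in for whatever MCP client callable you use to invoke these tools over the Streamable HTTP transport; only the tool names, parameters, and terminal statuses come from the descriptions above.

```python
import time

def run_and_poll(call_tool, query, subreddit=None, poll_seconds=5):
    """Start a getRedditCommentsByKeywords search, then poll
    checkOperationStatus until the operation reaches a terminal state.

    call_tool(name, args) is a hypothetical MCP client callable that
    returns the tool's JSON response as a dict.
    """
    args = {"query": query}
    if subreddit is not None:
        args["subreddit"] = subreddit  # without the r/ prefix
    started = call_tool("getRedditCommentsByKeywords", args)
    op_id = started["operationId"]

    while True:
        status = call_tool("checkOperationStatus", {"operationId": op_id})
        # Stop only on completed/failed/cancelled, as the polling
        # loop in checkOperationStatus requires.
        if status["status"] in ("completed", "failed", "cancelled"):
            return status
        time.sleep(poll_seconds)  # wait 5 seconds between polls
```

The same loop works for any of the query tools here, since they all return an operation ID and resolve through checkOperationStatus.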
getRedditPostsByKeywords

Search Reddit posts by keywords with server-side pagination (100 posts per page). Searches post titles and selftext. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt.

CSV EXPORT: The response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download the CSV and use code execution to analyze the full dataset without pagination limits. Ideal for: sentiment analysis, subreddit trends, community discussions across thousands of posts.

FIRST CALL: Omit pageNumber and tableName. Creates a cached table and returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). First searches the database, then the external API if data is stale or missing.

QUERY SYNTAX: Use double quotes for an exact phrase match (e.g., "artificial intelligence"). Without quotes, matches any word. ALWAYS use quotes when the user requests an exact search or specific phrase. CRITICAL: MUST use AND/OR operators between phrases, including quotes and parentheses. Examples: ("machine learning" OR "artificial intelligence"), ("crypto" AND "investment"), (AI AND (gaming OR esports)). Simple queries: "machine learning" (single exact phrase), AI crypto (any word).

DATE FILTERS: Omit startDate/endDate by default. Only pass them if the user explicitly requests a specific date range (YYYY-MM-DD format). IMPORTANT: THE CURRENT YEAR IS 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended.

FILTERS: sort (relevance, hot, top, new, comments), time (hour, day, week, month, year, all), subreddit (limit to a specific subreddit). Optional fields parameter for performance: ["id", "title", "selftext", "authorUsername", "subredditName", "score", "commentsCount", "createdAtDate"]. Returns: results array, count, query, pagination object, and dataDumpExportOperationId for the CSV file URL. This is a safe, read-only tool for analyzing searchable information.

Parameters

- sort (optional): Sort order for results. Default: relevance.
- time (optional): Time filter for results. Default: all.
- query (required): Full-text search of post content. Searches post titles and selftext. EXACT PHRASES: Wrap in double quotes - "machine learning" matches that exact phrase. KEYWORDS: Without quotes, matches posts containing any of the words - AI robotics blockchain. BOOLEAN OPERATORS: MUST explicitly use the keywords AND, OR (uppercase or lowercase). NO implicit operators - space between words means OR by default. Examples requiring explicit operators: Use "deep learning" AND python (not "deep learning python"). Use tensorflow OR pytorch (not "tensorflow pytorch"). PARENTHESES: Group terms for precise logic - (AI OR "artificial intelligence") AND ethics. Query examples: "climate change" | AI OR blockchain | "neural networks" AND python | (startup OR entrepreneur)
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "title", "authorUsername", "subredditName", "createdAtDate"]. AVAILABLE FIELDS: Core: id, title, selftext, url, permalink, postUrl, thumbnail. Author: authorId, authorUsername. Subreddit: subredditName, subredditId. Engagement: score, upvotes, downvotes, upvoteRatio, commentsCount, crosspostsCount. Flags: isSelf, isVideo, isOriginalContent, over18, spoiler, locked, stickied, archived. Meta: linkFlairText, postHint, domain, crosspostParent. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "title", "score"] for minimal; ["title", "selftext", "score", "commentsCount"] for content analysis.
- endDate (optional)
- startDate (optional)
- subreddit (optional): Filter posts by subreddit name (without the r/ prefix). Use this parameter to limit the search to a specific subreddit. Example: subreddit="wallstreetbets" finds all posts in r/wallstreetbets.
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
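The explicit-operator query syntax these tools require (quotes for exact phrases, spelled-out AND/OR, parentheses for grouping) lends itself to tiny helper functions. A minimal sketch; these helpers are illustrative, not part of the server's API:

```python
def phrase(text: str) -> str:
    """Exact-phrase match: wrap the phrase in double quotes."""
    return f'"{text}"'

def any_of(*terms: str) -> str:
    """OR-group. Bare spaces already mean OR, but being explicit
    keeps the query unambiguous."""
    return "(" + " OR ".join(terms) + ")"

def all_of(*terms: str) -> str:
    """AND-group. Required, since AND is never implicit."""
    return "(" + " AND ".join(terms) + ")"

# Builds: (("machine learning" OR "artificial intelligence") AND python)
query = all_of(any_of(phrase("machine learning"),
                      phrase("artificial intelligence")),
               "python")
```

The resulting string can be passed directly as the `query` parameter of any of the keyword-search tools above.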
getRedditPostWithCommentsById

Get a Reddit post by ID with its comments. Returns both the post data and paginated comments in a single response. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt.

CSV EXPORT: The response includes dataDumpExportOperationId for downloading the COMPLETE comments dataset as CSV. The CSV contains ONLY comments (not the post). Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link. CODE EXECUTION: Download the CSV and use code execution to analyze the full comments dataset without pagination limits. Ideal for: sentiment analysis, discussion patterns, community engagement analysis.

RESPONSE STRUCTURE: Returns { results: { post: {...}, comments: [...] }, pagination: {...} }. The post is a single object (first page only); comments are paginated (100 per page). FIRST CALL: Omit pageNumber and tableName. Creates a cached table for comments and returns page 1 with the post data and pagination metadata. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional comment pages. Post data is NOT returned on subsequent pages.

FIELD SELECTION: Use postFields to optimize post data and commentFields to optimize comment data. First searches the database for both the post and comments, then the external API if data is stale or missing. This is a safe, read-only tool for analyzing searchable information.

Parameters

- postId (required): Reddit post ID to fetch. Returns the post data along with its comments.
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- postFields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "title", "authorUsername", "subredditName", "createdAtDate"]. AVAILABLE FIELDS: Core: id, title, selftext, url, permalink, postUrl, thumbnail. Author: authorId, authorUsername. Subreddit: subredditName, subredditId. Engagement: score, upvotes, downvotes, upvoteRatio, commentsCount, crosspostsCount. Flags: isSelf, isVideo, isOriginalContent, over18, spoiler, locked, stickied, archived. Meta: linkFlairText, postHint, domain, crosspostParent. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "title", "score"] for minimal; ["title", "selftext", "score", "commentsCount"] for content analysis.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- commentFields (optional): PERFORMANCE OPTIMIZATION: Specify the comment fields you need. DEFAULT (if omitted): ["id", "body", "authorUsername", "createdAtDate"]. AVAILABLE FIELDS: Core: id, body, parentPostId, parentId. Author: authorId, authorUsername. Subreddit: postSubredditName, postSubredditId. Engagement: score, upvotes, downvotes, controversiality. Meta: depth, isSubmitter, stickied, collapsed, edited, distinguished. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated.
- pageNumberEnd (optional): Ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
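Given the documented response shape ({ results: { post, comments }, pagination }), a first-page getRedditPostWithCommentsById response can be unpacked as below. The helper and the sample field values are illustrative assumptions, not part of the server's API:

```python
def unpack_first_page(response: dict):
    """Split a first-page response into the single post object and the
    first page of comments. Post data is only present on page 1; later
    pages carry comments only, so post may be None there."""
    results = response["results"]
    post = results.get("post")           # None on pages > 1
    comments = results.get("comments", [])
    pagination = response["pagination"]
    return post, comments, pagination

# Hypothetical first-page response, shaped as documented above.
sample = {
    "results": {
        "post": {"id": "abc123", "title": "Example thread"},
        "comments": [{"id": "c1", "body": "First comment"}],
    },
    "pagination": {"tableName": "tbl_x", "totalPages": 3, "totalRows": 250},
}
post, comments, pagination = unpack_first_page(sample)
```

On subsequent pages the same helper returns `post=None`, which makes the "post data is NOT returned on subsequent pages" rule explicit in code.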
getRedditSubredditsByKeywords

Search for SUBREDDITS where Reddit posts match keywords. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt. USE CASE: Find subreddits that have content about specific topics, keywords, or phrases. Returns unique, deduplicated subreddit profiles.

CSV EXPORT: The response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download the CSV and use code execution to analyze the full dataset without pagination limits. Ideal for: community discovery, topic mapping, niche identification.

FIRST CALL: Omit pageNumber and tableName. Creates a cached table and returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5).

QUERY SYNTAX: Use double quotes for an exact phrase match (e.g., "machine learning"). Without quotes, matches any word. ALWAYS use quotes when the user requests an exact search or specific phrase. Examples: "data science" | python help | "web development" AND tutorial | (gaming OR esports) NOT "mobile games".

FILTERS: startDate/endDate filter by post date (YYYY-MM-DD format). Omit them by default; only use them if the user explicitly requests a date range. IMPORTANT: THE CURRENT YEAR IS 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended.

Optional fields parameter for performance (default: ["id", "displayName", "title", "subscribersCount"]). Available fields: id, displayName, title, publicDescription, description, subscribersCount, activeUserCount, and more. AGGREGATE FIELDS (from matching posts) - MUST BE EXPLICITLY REQUESTED IN FIELDS: aggRelevance (relevance score for sorting), relevantPostsCount (count of matching posts per subreddit), relevantPostsUpvotesSum, relevantPostsCommentsCountSum. These return aggregated metrics from all matched posts for each subreddit. Returns: results array of unique subreddit profiles, count, pagination object, and dataDumpExportOperationId for CSV. This is a safe, read-only tool for analyzing searchable information.

Parameters

- query (required): Full-text search of Reddit post titles and content to find subreddits where matching posts were made. Searches posts, returns UNIQUE subreddits (deduplicated). EXACT PHRASES: Wrap in double quotes - "machine learning" matches that exact phrase. KEYWORDS: Without quotes, matches posts containing any of the words - python tutorial help. BOOLEAN OPERATORS: MUST explicitly use the keywords AND, OR, NOT (uppercase or lowercase). NO implicit operators - space between words means OR by default. Examples requiring explicit operators: Use "python tutorial" AND beginner (not "python tutorial beginner"). Use programming OR coding (not "programming coding"). PARENTHESES: Group terms for precise logic - (python OR javascript) AND ("web development" NOT framework). FORBIDDEN: DO NOT use filter operators with colons (from:, to:, since:, until:) - use the dedicated parameters instead. Query examples: "machine learning" | python help | "data science" AND visualization | (gaming OR esports) NOT "mobile games"
- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "displayName", "title", "subscribersCount"]. AVAILABLE FIELDS: Core: id, displayName, title, publicDescription, description. Stats: subscribersCount, activeUserCount. Meta: subredditType, over18, lang, url, subredditUrl. Images: iconImg, bannerImg, headerImg, communityIcon. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. Aggregations (from matching posts): aggRelevance (relevance score), relevantPostsCount (count of matching posts), relevantPostsUpvotesSum, relevantPostsCommentsCountSum. EXAMPLES: ["id", "displayName", "subscribersCount"] for minimal; ["displayName", "subscribersCount", "relevantPostsCount", "relevantPostsUpvotesSum"] to include engagement aggregations.
- endDate (optional): End date filter (YYYY-MM-DD format)
- startDate (optional): Start date filter (YYYY-MM-DD format)
- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
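The first-call / subsequent-pages contract (omit tableName on page 1, then pass it back with pageNumber) can be sketched as a loop. As before, `call_tool` is a hypothetical MCP client callable, simplified here to return the final, already-polled result dict with the `results` and `pagination` keys named above:

```python
def fetch_all_subreddits(call_tool, query, max_pages=10):
    """Walk every page of a getRedditSubredditsByKeywords result set.

    Assumes call_tool(name, args) returns the resolved result dict
    (i.e. polling via checkOperationStatus already happened inside it).
    """
    # First call: no tableName, no pageNumber - this creates the cached table.
    first = call_tool("getRedditSubredditsByKeywords", {"query": query})
    rows = list(first["results"])
    table = first["pagination"]["tableName"]
    total = first["pagination"]["totalPages"]

    # Subsequent pages must carry tableName alongside pageNumber.
    for page in range(2, min(total, max_pages) + 1):
        resp = call_tool("getRedditSubredditsByKeywords",
                         {"query": query, "tableName": table,
                          "pageNumber": page})
        rows.extend(resp["results"])
    return rows
```

For large result sets the bulk-fetch variant (pageNumber plus pageNumberEnd in a single call) or the CSV export is usually cheaper than looping page by page.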
getRedditSubredditWithPostsByName

Get a Reddit subreddit by name with its posts. Returns both the subreddit data and paginated posts in a single response. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt.

CSV EXPORT: The response includes dataDumpExportOperationId for downloading the COMPLETE posts dataset as CSV. The CSV contains ONLY posts (not the subreddit). Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link. CODE EXECUTION: Download the CSV and use code execution to analyze the full posts dataset without pagination limits. Ideal for: subreddit analysis, content trends, community activity analysis.

RESPONSE STRUCTURE: Returns { results: { subreddit: {...}, posts: [...] }, pagination: {...} }. The subreddit is a single object (first page only); posts are paginated (100 per page). FIRST CALL: Omit pageNumber and tableName. Creates a cached table for posts and returns page 1 with the subreddit data and pagination metadata. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional post pages. Subreddit data is NOT returned on subsequent pages.

FIELD SELECTION: Use subredditFields to optimize subreddit data and postFields to optimize post data. First searches the database for both the subreddit and posts, then the external API if data is stale or missing. This is a safe, read-only tool for analyzing searchable information.

Parameters

- tableName (optional): Cached table name from a previous pagination request. Required when fetching pageNumber > 1. Returned in the first page response.
- _requestId (optional)
- pageNumber (optional): Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for the first page.
- postFields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "title", "authorUsername", "subredditName", "createdAtDate"]. AVAILABLE FIELDS: Core: id, title, selftext, url, permalink, postUrl, thumbnail. Author: authorId, authorUsername. Subreddit: subredditName, subredditId. Engagement: score, upvotes, downvotes, upvoteRatio, commentsCount, crosspostsCount. Flags: isSelf, isVideo, isOriginalContent, over18, spoiler, locked, stickied, archived. Meta: linkFlairText, postHint, domain, crosspostParent. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "title", "score"] for minimal; ["title", "selftext", "score", "commentsCount"] for content analysis.
- forceLatest (optional): USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
- pageNumberEnd (optional): Ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch a single page only. Requires tableName.
- subredditName (required): Reddit subreddit name to fetch (without the r/ prefix). Example: "wallstreetbets", "programming". Returns the subreddit data along with its posts.
- subredditFields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "displayName", "title", "subscribersCount"]. AVAILABLE FIELDS: Core: id, displayName, title, publicDescription, description. Stats: subscribersCount, activeUserCount. Meta: subredditType, over18, lang, url, subredditUrl. Images: iconImg, bannerImg, headerImg, communityIcon. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "displayName", "subscribersCount"] for minimal; ["displayName", "publicDescription", "subscribersCount", "activeUserCount"] for discovery.
getRedditUser

Get Reddit user profile by username. Returns user profile including karma breakdown (link, comment, total), account status (gold, mod, employee), and profile info. Use without u/ prefix (e.g., "spez" not "u/spez"). Optional fields parameter for performance (default: ["id", "username", "totalKarma"]). Available fields: id, username, profileUrl, profilePicUrl, snoovatarImg, linkKarma, commentKarma, totalKarma, awardeeKarma, awarderKarma, isGold, isMod, isEmployee, hasVerifiedEmail, isSuspended, verified, isBlocked, acceptFollowers, hasSubscribed, hideFromRobots, prefShowSnoovatar, profileDescription, profileBannerUrl, profileTitle, createdAt. Returns: single user profile with id, username, karma metrics, account flags, and profile details. This is a safe, read-only tool for analyzing searchable information.

Parameters

- fields (optional): PERFORMANCE OPTIMIZATION: Specify the fields you need. DEFAULT (if omitted): ["id", "username", "totalKarma"]. AVAILABLE FIELDS: Core: id, username, profileUrl, profilePicUrl, snoovatarImg. Karma: linkKarma, commentKarma, totalKarma, awardeeKarma, awarderKarma. Status: isGold, isMod, isEmployee, hasVerifiedEmail, isSuspended, verified, isBlocked, acceptFollowers, hasSubscribed, hideFromRobots, prefShowSnoovatar. Profile: profileDescription, profileBannerUrl, profileTitle. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "username"] for minimal; ["username", "totalKarma", "profileDescription"] for basic info.
- username (required): Reddit username (without the u/ prefix)
- _requestId (optional)
- userPrompt (optional): CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
getRedditUsersByKeywords

Search for USERS who authored Reddit posts matching keywords. Returns an operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for a user prompt. USE CASE: Find users who have posted content about specific topics, keywords, or phrases. Returns unique, deduplicated user profiles.

CSV EXPORT: The response includes dataDumpExportOperationId for downloading the COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get an S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download the CSV and use code execution to analyze the full dataset without pagination limits. Ideal for: audience discovery, influencer identification, community analysis.

FIRST CALL: Omit pageNumber and tableName. Creates a cached table and returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from the first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5).

QUERY SYNTAX: Use double quotes for an exact phrase match (e.g., "machine learning"). Without quotes, matches any word. ALWAYS use quotes when the user requests an exact search or specific phrase. Examples: "data science" | python help | "web development" AND tutorial | (gaming OR esports) NOT "mobile games".

FILTERS: startDate/endDate filter by post date (YYYY-MM-DD format) - omit them by default and only use them if the user explicitly requests a date range; subreddit limits the search to a specific subreddit (without the r/ prefix). IMPORTANT: THE CURRENT YEAR IS 2026. When the user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended.

Optional fields parameter for performance (default: ["id", "username", "totalKarma"]). Available fields: id, username, profileUrl, profilePicUrl, snoovatarImg, linkKarma, commentKarma, totalKarma, profileDescription, and more. AGGREGATE FIELDS (from matching posts) - MUST BE EXPLICITLY REQUESTED IN FIELDS: aggRelevance (relevance score for sorting), relevantPostsCount (count of matching posts per user), relevantPostsUpvotesSum, relevantPostsCommentsCountSum. These return aggregated metrics from all matched posts for each user. Returns: results array of unique user profiles, count, pagination object, and dataDumpExportOperationId for CSV. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
queryYesFull-text search of Reddit post titles and content to find users who authored matching posts. Searches posts, returns UNIQUE user authors (deduplicated). EXACT PHRASES: Wrap in double quotes - "machine learning" matches that exact phrase. KEYWORDS: Without quotes, matches posts containing any of the words - python tutorial help. BOOLEAN OPERATORS: MUST explicitly use the keywords AND, OR, NOT (uppercase or lowercase). NO implicit operators - space between words means OR by default. Examples requiring explicit operators: Use "python tutorial" AND beginner (not "python tutorial beginner"). Use programming OR coding (not "programming coding"). PARENTHESES: Group terms for precise logic - (python OR javascript) AND ("web development" NOT framework). FORBIDDEN: DO NOT use filter operators with colons (from:, to:, since:, until:) - use dedicated parameters instead. Query examples: "machine learning" | python help | "data science" AND visualization | (gaming OR esports) NOT "mobile games"
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "username", "totalKarma"]. AVAILABLE FIELDS: Core: id, username, profileUrl, profilePicUrl, snoovatarImg. Karma: linkKarma, commentKarma, totalKarma, awardeeKarma, awarderKarma. Status: isGold, isMod, isEmployee, hasVerifiedEmail, isSuspended, verified, isBlocked, acceptFollowers, hasSubscribed, hideFromRobots, prefShowSnoovatar. Profile: profileDescription, profileBannerUrl, profileTitle. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. Aggregations (from matching posts): aggRelevance (relevance score), relevantPostsCount (count of matching posts), relevantPostsUpvotesSum, relevantPostsCommentsCountSum. EXAMPLES: ["id", "username"] for minimal, ["username", "totalKarma", "relevantPostsCount", "relevantPostsUpvotesSum"] to include engagement aggregations.
endDateNoEnd date filter (YYYY-MM-DD format)
startDateNoStart date filter (YYYY-MM-DD format)
subredditNoFilter results to a specific subreddit (without r/ prefix)
tableNameNoCached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
_requestIdNo
pageNumberNoPage number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
forceLatestNoUSE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
pageNumberEndNoOptional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
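The query rules above (exact phrases in double quotes, explicit AND/OR/NOT keywords, parentheses for grouping, no implicit operators) can be enforced with a tiny client-side builder. This is an illustrative sketch, not part of the server; the helper names `exact`, `all_of`, and `any_of` are invented for the example:

```python
def exact(phrase: str) -> str:
    """Wrap a phrase in double quotes so it matches exactly."""
    return f'"{phrase}"'

def all_of(*terms: str) -> str:
    """Join terms with an explicit AND (a bare space would mean OR)."""
    return "(" + " AND ".join(terms) + ")"

def any_of(*terms: str) -> str:
    """Join terms with an explicit OR."""
    return "(" + " OR ".join(terms) + ")"

# Builds: ((python OR javascript) AND "web development")
query = all_of(any_of("python", "javascript"), exact("web development"))
```

Subreddit, author, and date constraints go in their dedicated parameters, never as colon operators inside the query string.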
getTwitterPostCommentsTry in Inspector

Get comments (replies) to specific post with server-side pagination (100 posts per page). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download CSV and use code execution to analyze all comments. Ideal for: sentiment analysis, discussion themes, community engagement analysis. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). First searches database, then external API if data is stale (>10 days). Date filter: OMIT startDate by default. ONLY pass if user explicitly requests filtering from specific date (YYYY-MM-DD format). IMPORTANT!!!!!: THE CURRENT YEAR IS 2026. When user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Use to analyze community response and discussion. NOT for quotes - use getTwitterPostQuotes. Optional fields parameter for performance: ["id", "text", "authorUsername", "createdAt"]. Returns: results array, pagination object, dataDumpExportOperationId for CSV file url. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "text", "authorUsername", "createdAtDate"]. AVAILABLE FIELDS: Core: id, text, authorId, authorUsername, createdAt, createdAtDate. Engagement: retweetCount, replyCount, likeCount, quoteCount, impressionCount, bookmarkCount. Metadata: lang, possiblySensitive, suspended, deleted, source. Relations: conversationId, quotedTweetId, retweetedTweetId, replyToTweetId, replyToUserId, replyToUsername, originalTweetId (original tweet ID if edited, equals own ID if unedited), editedTweets (array of edited version IDs). Content: hashtags, mentions, mediaUrls, grokGeneratedContent (array of Grok AI generated content grok_post_id, grok_url, media_id). Location: country, region, city. EXAMPLES: ["id", "text"] for minimal, ["id", "text", "retweetCount", "likeCount", "hashtags"] for basic analysis, or specify all fields if needed.
postIdYes
startDateNo
tableNameNoCached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
_requestIdNo
pageNumberNoPage number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
forceLatestNoUSE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
pageNumberEndNoOptional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
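The operation workflow these tools share (get an operation ID, then poll checkOperationStatus with a 5-second wait between calls until the status is completed, failed, or cancelled) can be sketched as a small client-side loop. `check_status` here is a hypothetical callable standing in for an actual checkOperationStatus tool call that returns the operation's result dict:

```python
import time

def poll_operation(check_status, operation_id, interval=5.0, max_polls=120):
    """Poll until the operation reaches a terminal state.

    check_status: callable(operation_id) -> dict with a "status" key,
    mirroring checkOperationStatus. Waits `interval` seconds between
    calls, per the server's guidance of exactly 5 seconds.
    """
    for _ in range(max_polls):
        result = check_status(operation_id)
        if result.get("status") in ("completed", "failed", "cancelled"):
            return result
        time.sleep(interval)  # never re-poll without waiting
    raise TimeoutError(f"operation {operation_id} did not finish")
```

The `max_polls` cap is an assumption added for safety; the server itself only asks that you keep polling until a terminal status.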
getTwitterPostInteractingUsersTry in Inspector

Get users who interacted with a specific Twitter post (commenters, quoters, or retweeters) with server-side pagination (1000 users per page with default fields). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. INTERACTION TYPES: "commenters" (users who replied to the post), "quoters" (users who quoted the post), "retweeters" (users who retweeted the post). CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download CSV and use code execution to analyze full dataset without pagination limits. Ideal for: audience analysis, engagement patterns, network graphs. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows). SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). Optional fields parameter for performance (default: ["id", "username", "name"]). If default fields are used, page size will be 1000; if additional fields are added, it will be 100. Available fields: id, username, name, description, location, followersCount, followingCount, verified, profileImageUrl, and more. DATA FRESHNESS: Automatically checks data age (> 1 week triggers refresh from API). FORCE LATEST: Use sparingly - forceLatest=true bypasses cache for real-time data (increases latency/costs). Returns: results array, count (number of users in response), pagination object (resultsCount, totalPages, totalRows, pageSize, tableName), totalDataCount (when available), dataSource. Use for: Finding who engaged with a specific post, analyzing post reach and audience, building engagement networks. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "username", "name"]. AVAILABLE FIELDS: Core: id, username, name, description, location, verified, verifiedType, protected. Engagement: followersCount, followingCount, tweetCount, listedCount, likesCount, mediaCount. Profile: profileImageUrl, profileBannerUrl, profileInterstitialType. Metadata: source, status, pinnedTweetId, isVerified, accountBasedIn, locationAccurate, label, labelType. Analytics: collectedFollowingCount, collectedFollowersCount, collectedFollowersCoverage, collectedFollowingCoverage, avgTweetsPerDayLastMonth. Advanced: nLang, nLangsFiltered, inauthenticType, isInauthentic, isInauthenticProbScore, isInauthenticCalculatedAt. Timestamps: xFetchedAt, modifiedAt, createdAt, xModifiedAt. Account History: verifiedSinceDatetime, usernameChanges, lastUsernameChangeDatetime. EXAMPLES: ["id", "username"] for minimal, ["username", "name", "description", "followersCount"] for basic info, or specify all fields if needed.
postIdYes
tableNameNoCached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
_requestIdNo
pageNumberNoPage number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
forceLatestNoUSE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
pageNumberEndNoOptional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
interactionTypeYesType of interaction to retrieve users for. Options: "commenters" (users who replied), "quoters" (users who quoted), "retweeters" (users who retweeted). Each type queries different relationships in the data.
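The pagination contract above (first call with no pageNumber/tableName, then reuse the returned tableName for pages 2..totalPages) can be sketched as a client helper. `call_tool` is a hypothetical function that invokes a tool and returns its completed result dict (i.e., it already handles the operation-ID polling):

```python
def fetch_all_pages(call_tool, tool_name, params):
    """Fetch every page of a paginated tool result.

    First call omits pageNumber and tableName; subsequent calls pass
    the tableName returned on page 1, as the tool requires.
    """
    first = call_tool(tool_name, dict(params))
    rows = list(first["results"])
    pagination = first["pagination"]
    table = pagination["tableName"]
    for page in range(2, pagination["totalPages"] + 1):
        nxt = call_tool(tool_name, {**params, "tableName": table, "pageNumber": page})
        rows.extend(nxt["results"])
    return rows
```

For large result sets, passing pageNumberEnd (bulk fetch) or downloading the CSV export is cheaper than looping one page at a time.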
getTwitterPostQuotesTry in Inspector

Get quote posts of specific post with server-side pagination (100 posts per page). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download CSV and use code execution to analyze all quote tweets. Ideal for: sentiment analysis on reactions, commentary patterns, viral spread analysis. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). First searches database, then external API if data is stale (>10 days). Date filter: OMIT startDate by default. ONLY pass if user explicitly requests filtering from specific date (YYYY-MM-DD format). IMPORTANT!!!!!: THE CURRENT YEAR IS 2026. When user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Use to analyze commentary on post. NOT for retweets - use getTwitterPostRetweets. Optional fields parameter for performance: ["id", "text", "authorUsername", "createdAt"]. Returns: results array, pagination object, dataDumpExportOperationId for CSV file url. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "text", "authorUsername", "createdAtDate"]. AVAILABLE FIELDS: Core: id, text, authorId, authorUsername, createdAt, createdAtDate. Engagement: retweetCount, replyCount, likeCount, quoteCount, impressionCount, bookmarkCount. Metadata: lang, possiblySensitive, suspended, deleted, source. Relations: conversationId, quotedTweetId, retweetedTweetId, replyToTweetId, replyToUserId, replyToUsername, originalTweetId (original tweet ID if edited, equals own ID if unedited), editedTweets (array of edited version IDs). Content: hashtags, mentions, mediaUrls, grokGeneratedContent (array of Grok AI generated content grok_post_id, grok_url, media_id). Location: country, region, city. EXAMPLES: ["id", "text"] for minimal, ["id", "text", "retweetCount", "likeCount", "hashtags"] for basic analysis, or specify all fields if needed.
postIdYes
startDateNo
tableNameNoCached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
_requestIdNo
pageNumberNoPage number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
forceLatestNoUSE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
pageNumberEndNoOptional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
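The CSV export path (poll the dataDumpExportOperationId until it yields a downloadUrl, fetch the file, analyze with code execution) reduces to a download-and-parse step once the export completes. A minimal sketch using only the standard library; the exact CSV columns depend on the fields you requested:

```python
import csv
import io
import urllib.request

def parse_export_csv(text: str) -> list[dict]:
    """Parse CSV export text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def fetch_export_rows(download_url: str) -> list[dict]:
    """Download a completed CSV export and parse it.

    Assumes the export operation already reported status "completed"
    with a downloadUrl (an S3 link, per the tool description).
    """
    with urllib.request.urlopen(download_url) as resp:
        text = resp.read().decode("utf-8")
    return parse_export_csv(text)
```

From here the full dataset is available for sentiment analysis or aggregation without any pagination limits.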
getTwitterPostRetweetsTry in Inspector

Get retweets of specific post with server-side pagination (100 posts per page, database-only). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download CSV and use code execution to analyze full retweet dataset. Ideal for: amplification analysis, reach studies, engagement patterns. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). Database-only search for historical retweet data. Date filter: OMIT startDate by default. ONLY pass if user explicitly requests filtering from specific date (YYYY-MM-DD format). IMPORTANT!!!!!: THE CURRENT YEAR IS 2026. When user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Use to analyze post amplification patterns. NOT for quotes - use getTwitterPostQuotes. Optional fields parameter for performance: ["id", "authorUsername", "createdAt"]. Returns: results array, pagination object, dataDumpExportOperationId for CSV file url. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "text", "authorUsername", "createdAtDate"]. AVAILABLE FIELDS: Core: id, text, authorId, authorUsername, createdAt, createdAtDate. Engagement: retweetCount, replyCount, likeCount, quoteCount, impressionCount, bookmarkCount. Metadata: lang, possiblySensitive, suspended, deleted, source. Relations: conversationId, quotedTweetId, retweetedTweetId, replyToTweetId, replyToUserId, replyToUsername, originalTweetId (original tweet ID if edited, equals own ID if unedited), editedTweets (array of edited version IDs). Content: hashtags, mentions, mediaUrls, grokGeneratedContent (array of Grok AI generated content grok_post_id, grok_url, media_id). Location: country, region, city. EXAMPLES: ["id", "text"] for minimal, ["id", "text", "retweetCount", "likeCount", "hashtags"] for basic analysis, or specify all fields if needed.
postIdYes
startDateNo
tableNameNoCached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
_requestIdNo
pageNumberNoPage number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
pageNumberEndNoOptional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
getTwitterPostsByAuthorTry in Inspector

Get posts from author by ID or username with server-side pagination (100 posts per page). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. Use identifierType="id" for numeric author ID, identifierType="username" for username. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download CSV and use code execution to analyze full dataset without pagination limits. Ideal for: statistical analysis, trend detection, data visualization, processing thousands of posts. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). Date filters: OMIT startDate/endDate parameters by default to retrieve all posts. ONLY pass these if user explicitly requests specific date range (YYYY-MM-DD format). IMPORTANT!!!!!: THE CURRENT YEAR IS 2026. When user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. Optional fields parameter for performance: ["id", "text", "createdAt", "retweetCount"]. Returns: results array, count, pagination object, dataDumpExportOperationId for CSV file url. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "text", "authorUsername", "createdAtDate"]. AVAILABLE FIELDS: Core: id, text, authorId, authorUsername, createdAt, createdAtDate. Engagement: retweetCount, replyCount, likeCount, quoteCount, impressionCount, bookmarkCount. Metadata: lang, possiblySensitive, suspended, deleted, source. Relations: conversationId, quotedTweetId, retweetedTweetId, replyToTweetId, replyToUserId, replyToUsername, originalTweetId (original tweet ID if edited, equals own ID if unedited), editedTweets (array of edited version IDs). Content: hashtags, mentions, mediaUrls, grokGeneratedContent (array of Grok AI generated content grok_post_id, grok_url, media_id). Location: country, region, city. EXAMPLES: ["id", "text"] for minimal, ["id", "text", "retweetCount", "likeCount", "hashtags"] for basic analysis, or specify all fields if needed.
endDateNo
startDateNo
tableNameNoCached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
_requestIdNo
identifierYesAuthor ID (numeric) or username depending on identifierType.
pageNumberNoPage number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
forceLatestNoUSE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
pageNumberEndNoOptional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
identifierTypeYesType of identifier provided. Use "id" for numeric author ID, "username" for username.
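The date warning above (relative ranges like "last week" are often computed with the wrong year) is easiest to avoid by deriving both bounds from a single trusted `today` value rather than writing dates by hand. A minimal sketch:

```python
from datetime import date, timedelta

def last_n_days(today: date, n: int) -> tuple[str, str]:
    """Return (startDate, endDate) in YYYY-MM-DD for the last n days.

    `today` should come from your system context, so the year is
    always correct for relative ranges like "last week".
    """
    start = today - timedelta(days=n)
    return start.isoformat(), today.isoformat()
```

For example, "last week" relative to 2026-03-10 yields startDate="2026-03-03" and endDate="2026-03-10"; both parameters are then omitted entirely unless the user asked for a date range.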
getTwitterPostsByIdsTry in Inspector

Get multiple Twitter posts by numeric IDs (1-100 IDs per request). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. Returns only found tweets, omitting not-found IDs for flexibility. First searches database, then external API for missing/stale data in parallel. Use when you have multiple exact post IDs. NOT for search - use getTwitterPostsByKeywords. PERFORMANCE: Much more efficient than multiple single-ID calls. Batches database queries and parallelizes API calls. Optional fields parameter for performance: ["id", "text", "retweetCount"]. Returns: results array with id, text, authorId, createdAt, metrics (retweets, replies, quotes), count, dataSource. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "text", "authorUsername", "createdAtDate"]. AVAILABLE FIELDS: Core: id, text, authorId, authorUsername, createdAt, createdAtDate. Engagement: retweetCount, replyCount, likeCount, quoteCount, impressionCount, bookmarkCount. Metadata: lang, possiblySensitive, suspended, deleted, source. Relations: conversationId, quotedTweetId, retweetedTweetId, replyToTweetId, replyToUserId, replyToUsername, originalTweetId (original tweet ID if edited, equals own ID if unedited), editedTweets (array of edited version IDs). Content: hashtags, mentions, mediaUrls, grokGeneratedContent (array of Grok AI generated content grok_post_id, grok_url, media_id). Location: country, region, city. EXAMPLES: ["id", "text"] for minimal, ["id", "text", "retweetCount", "likeCount", "hashtags"] for basic analysis, or specify all fields if needed.
postIdsYesArray of Twitter post IDs to fetch (1-100 IDs). Returns only found tweets, omitting not-found IDs.
_requestIdNo
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
forceLatestNoUSE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
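Since getTwitterPostsByIds accepts 1-100 IDs per request, a larger ID list has to be split into batches client-side. A minimal sketch:

```python
def batch_post_ids(post_ids, batch_size=100):
    """Split post IDs into batches of at most `batch_size` (100 is the
    per-request limit for getTwitterPostsByIds)."""
    return [post_ids[i:i + batch_size] for i in range(0, len(post_ids), batch_size)]
```

Each batch then becomes one tool call, which is still far cheaper than one call per ID since the tool batches database queries and parallelizes API lookups.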
getTwitterPostsByKeywordsTry in Inspector

Search posts by keywords with server-side pagination (100 posts per page). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download CSV and use code execution to analyze full dataset without pagination limits. Ideal for: sentiment analysis, trend detection, content analysis across thousands of posts. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). Returns by default: id, text, authorUsername, createdAtDate. First searches database, then external API if data is stale or missing. QUERY SYNTAX: Use double quotes for exact phrase match (e.g., "artificial intelligence"). Without quotes, matches any word. ALWAYS use quotes when user requests exact search or specific phrase. CRITICAL: MUST use AND/OR operators between phrases, including quotes and parentheses. Examples: ("machine learning" OR "artificial intelligence"), ("climate change" AND "environmental policy"), (AI AND (crypto OR blockchain)). Simple queries: "machine learning" (single exact phrase), AI crypto (any word). Filters: language, authorId/authorUsername. Date filters: OMIT startDate/endDate by default. ONLY pass if user explicitly requests specific date range (YYYY-MM-DD format). IMPORTANT!!!!!: THE CURRENT YEAR IS 2026. When user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended. FIELDS parameter (optional): Specify to get additional/different fields. Available: Core (id, text, authorId, authorUsername, createdAt), Engagement (retweetCount, replyCount, quoteCount, impressionCount, bookmarkCount), Metadata (lang, source, suspended, deleted), Relations (conversationId, quotedTweetId, retweetedTweetId, replyToTweetId, replyToUserId, replyToUsername), Content (hashtags, mentions, mediaUrls), Location (country, region, city). Returns: results array, pagination object, dataDumpExportOperationId for CSV file url. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
queryYesFull-text search of post content ONLY. Searches the text/content of posts, NOT author information. EXACT PHRASES: Wrap in double quotes - "machine learning" matches that exact phrase. KEYWORDS: Without quotes, matches posts containing any of the words - AI robotics blockchain. BOOLEAN OPERATORS: MUST explicitly use the keywords AND, OR, NOT (uppercase or lowercase). NO implicit operators - space between words means OR by default. Examples requiring explicit operators: Use "deep learning" AND python (not "deep learning python"). Use tensorflow OR pytorch (not "tensorflow pytorch"). PARENTHESES: Group terms for precise logic - (AI OR "artificial intelligence") AND ethics. FORBIDDEN: NEVER use from:username or author filters in this parameter. Use the authorUsername parameter instead. FORBIDDEN: DO NOT use filter operators with colons (from:, to:, lang:, since:, until:) - use dedicated parameters instead. Query examples: "climate change" | AI OR blockchain | "neural networks" AND python | (startup OR entrepreneur) NOT "venture capital"
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "text", "authorUsername", "createdAtDate"]. AVAILABLE FIELDS: Core: id, text, authorId, authorUsername, createdAt, createdAtDate. Engagement: retweetCount, replyCount, likeCount, quoteCount, impressionCount, bookmarkCount. Metadata: lang, possiblySensitive, suspended, deleted, source. Relations: conversationId, quotedTweetId, retweetedTweetId, replyToTweetId, replyToUserId, replyToUsername, originalTweetId (original tweet ID if edited, equals own ID if unedited), editedTweets (array of edited version IDs). Content: hashtags, mentions, mediaUrls, grokGeneratedContent (array of Grok AI generated content grok_post_id, grok_url, media_id). Location: country, region, city. EXAMPLES: ["id", "text"] for minimal, ["id", "text", "retweetCount", "likeCount", "hashtags"] for basic analysis, or specify all fields if needed.
endDateNo
authorIdNoFilter posts by author ID (numeric string). Alternative to authorUsername.
languageNo
startDateNo
tableNameNoCached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
_requestIdNo
pageNumberNoPage number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
forceLatestNoUSE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
pageNumberEndNoOptional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
authorUsernameNoFilter posts by author username. Use this parameter to search posts from a specific user. Example: authorUsername="elonmusk" finds all posts by @elonmusk. NEVER use from:username in query - always use this parameter instead.
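The FORBIDDEN rule above (no colon-style operators such as from:, to:, lang:, since:, until: inside the query; use the dedicated parameters instead) can be guarded client-side before a call is made. An illustrative sketch, not part of the server:

```python
import re

# Colon filters the query parameter explicitly forbids.
FORBIDDEN = re.compile(r"\b(from|to|lang|since|until):", re.IGNORECASE)

def validate_query(query: str) -> str:
    """Reject colon-style filter operators in a keyword query.

    Author, language, and date constraints belong in dedicated
    parameters (authorUsername, language, startDate/endDate).
    """
    if FORBIDDEN.search(query):
        raise ValueError("use dedicated parameters instead of colon filters")
    return query
```

For example, `from:elonmusk AI` is rejected; the correct call uses query="AI" with authorUsername="elonmusk".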
getTwitterUserTry in Inspector

Get Twitter user profile by ID or username. Use identifierType="id" for numeric user ID, identifierType="username" for username. For username: Use ONLY when you have the precise username (e.g., "elonmusk"). For person names or fuzzy search, use searchTwitterUsers instead. Optional fields parameter for performance (default: ["id", "username", "name"]). Available fields: id, username, name, description, location, followersCount, followingCount, verified, profileImageUrl, and more. Returns: single user profile with id, username, name, bio, followers_count, following_count, tweet_count, created_at, authenticity_score, inauthentic_type. This is a safe, read-only tool for analyzing searchable information.

ParametersJSON Schema
NameRequiredDescriptionDefault
fieldsNoPERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "username", "name"]. AVAILABLE FIELDS: Core: id, username, name, description, location, verified, verifiedType, protected. Engagement: followersCount, followingCount, tweetCount, listedCount, likesCount, mediaCount. Profile: profileImageUrl, profileBannerUrl, profileInterstitialType. Metadata: source, status, pinnedTweetId, isVerified, accountBasedIn, locationAccurate, label, labelType. Analytics: collectedFollowingCount, collectedFollowersCount, collectedFollowersCoverage, collectedFollowingCoverage, avgTweetsPerDayLastMonth. Advanced: nLang, nLangsFiltered, inauthenticType, isInauthentic, isInauthenticProbScore, isInauthenticCalculatedAt. Timestamps: xFetchedAt, modifiedAt, createdAt, xModifiedAt. Account History: verifiedSinceDatetime, usernameChanges, lastUsernameChangeDatetime. EXAMPLES: ["id", "username"] for minimal, ["username", "name", "description", "followersCount"] for basic info, or specify all fields if needed.
_requestIdNo
identifierYesUser ID (numeric) or username depending on identifierType.
userPromptNoCRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
identifierTypeYesType of identifier provided. Use "id" for numeric user ID, "username" for username.
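The argument shape above can be sketched in Python. This is a minimal illustration only: `build_get_twitter_user_args` is a hypothetical helper (not part of the server), and the payload would be passed to whatever MCP client you use; only the parameter names mirror the documentation.

```python
def build_get_twitter_user_args(identifier, identifier_type, fields=None, user_prompt=None):
    """Assemble the argument payload for a getTwitterUser tool call."""
    if identifier_type not in ("id", "username"):
        raise ValueError('identifierType must be "id" or "username"')
    args = {"identifier": identifier, "identifierType": identifier_type}
    if fields is not None:
        args["fields"] = fields          # server default: ["id", "username", "name"]
    if user_prompt is not None:
        args["userPrompt"] = user_prompt  # full user question, for query optimization
    return args

# Exact-username lookup with a trimmed field list for performance.
args = build_get_twitter_user_args(
    "elonmusk", "username",
    fields=["username", "name", "followersCount"],
    user_prompt="How many followers does @elonmusk have?",
)
```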
getTwitterUserConnections

Get Twitter user connections (followers or following) with server-side pagination (1000 users per page with default fields). Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. Use connectionType="followers" for users who follow them, connectionType="following" for users they follow. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download CSV and use code execution to analyze full dataset without pagination limits. Ideal for: network analysis, audience demographics, engagement patterns. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows). SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). Optional fields parameter for performance (default: ["id", "username", "name"]). If default fields are used page size will be 1000, if additional fields are added it will be 100. Available fields: id, username, name, description, location, followersCount, followingCount, verified, profileImageUrl, and more. DATA FRESHNESS: Automatically checks data age (> 1 week triggers refresh from API). FORCE LATEST: Use sparingly - forceLatest=true bypasses cache for real-time data (increases latency/costs). Returns: results array, count (number of users in response), pagination object (resultsCount, totalPages, totalRows, pageSize, tableName), totalDataCount (actual total on Twitter), dataSource. 
CRITICAL - Understanding totalRows vs totalDataCount: totalRows indicates ONLY what we have in our database. totalDataCount (when present) shows the actual count from Twitter. If totalDataCount is missing or undefined, you CANNOT claim totalRows represents all connections - it only shows our partial database data. If totalDataCount > totalRows, we only have partial data. Always check if totalDataCount exists before making claims about total counts. This is a safe, read-only tool for analyzing searchable information.

Parameters (JSON Schema)
  • username (required) — Twitter username (without @ symbol).
  • connectionType (required) — Type of connection to retrieve. Use "followers" for users who follow this account, "following" for users this account follows.
  • fields (optional) — PERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "username", "name"]. AVAILABLE FIELDS: Core: id, username, name, description, location, verified, verifiedType, protected. Engagement: followersCount, followingCount, tweetCount, listedCount, likesCount, mediaCount. Profile: profileImageUrl, profileBannerUrl, profileInterstitialType. Metadata: source, status, pinnedTweetId, isVerified, accountBasedIn, locationAccurate, label, labelType. Analytics: collectedFollowingCount, collectedFollowersCount, collectedFollowersCoverage, collectedFollowingCoverage, avgTweetsPerDayLastMonth. Advanced: nLang, nLangsFiltered, inauthenticType, isInauthentic, isInauthenticProbScore, isInauthenticCalculatedAt. Timestamps: xFetchedAt, modifiedAt, createdAt, xModifiedAt. Account History: verifiedSinceDatetime, usernameChanges, lastUsernameChangeDatetime. EXAMPLES: ["id", "username"] for minimal, ["username", "name", "description", "followersCount"] for basic info, or specify all fields if needed.
  • tableName (optional) — Cached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
  • pageNumber (optional) — Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
  • pageNumberEnd (optional) — Optional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
  • forceLatest (optional) — USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
  • userPrompt (optional) — CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
  • _requestId (optional)
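The pagination protocol described above (first call creates a cached table, later calls reuse tableName, totalRows vs totalDataCount must be compared) can be sketched as follows. Here `call_tool` is a local stub that fakes server responses so the flow is runnable; the real tool also returns an operation ID that must be polled via checkOperationStatus, which is omitted for brevity.

```python
def call_tool(name, args):
    """Stub standing in for an MCP client call; response shapes mirror the docs."""
    per_page, total_rows = 1000, 2500          # pretend 2500 followers are cached
    page = args.get("pageNumber", 1)
    end = args.get("pageNumberEnd", page)
    start, stop = (page - 1) * per_page, min(end * per_page, total_rows)
    return {
        "results": [{"id": str(i), "username": f"user{i}"} for i in range(start, stop)],
        "pagination": {"tableName": "tbl_demo", "totalPages": 3,
                       "totalRows": total_rows, "pageSize": per_page},
        "totalDataCount": 3100,                # actual count on Twitter
    }

# First call: omit pageNumber and tableName to create the cached table.
first = call_tool("getTwitterUserConnections",
                  {"username": "someuser", "connectionType": "followers"})
meta = first["pagination"]

# Bulk fetch: reuse tableName and request pages 2..totalPages in one call.
rest = call_tool("getTwitterUserConnections",
                 {"username": "someuser", "connectionType": "followers",
                  "tableName": meta["tableName"], "pageNumber": 2,
                  "pageNumberEnd": meta["totalPages"]})
all_users = first["results"] + rest["results"]

# totalRows is only what the database holds; totalDataCount is Twitter's figure.
have_everything = (first.get("totalDataCount") is not None
                   and first["totalDataCount"] <= meta["totalRows"])
```

In this stub totalDataCount (3100) exceeds totalRows (2500), so the data is partial and no claim about "all followers" should be made.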
getTwitterUsersByKeywords

Search for USERS who authored tweets/comments/quotes/retweets matching keywords. Returns operation ID - IMMEDIATELY call checkOperationStatus to get results. CRITICAL: Results are ONLY available via checkOperationStatus - do not try other tools or wait for user prompt. USE CASE: Find users who have posted content about specific topics, keywords, or phrases. Returns unique, deduplicated user profiles. CSV EXPORT: Response includes dataDumpExportOperationId for downloading COMPLETE dataset as CSV. Use checkOperationStatus with dataDumpExportOperationId to get S3 download link (ready in ~30-60 seconds). CODE EXECUTION: Download CSV and use code execution to analyze full dataset without pagination limits. Ideal for: audience discovery, influencer identification, community analysis. FIRST CALL: Omit pageNumber and tableName. Creates cached table, returns page 1 with pagination metadata (tableName, totalPages, totalRows) plus dataDumpExportOperationId. SUBSEQUENT PAGES: Use tableName from first response with pageNumber (2, 3, etc.) to fetch additional pages. Cannot pass pageNumber without tableName. BULK FETCH: Optionally use pageNumberEnd with pageNumber and tableName to fetch multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 returns pages 1-5). QUERY SYNTAX: Use double quotes for exact phrase match (e.g., "artificial intelligence"). Without quotes, matches any word. ALWAYS use quotes when user requests exact search or specific phrase. Examples: "machine learning" (exact), AI crypto (any word), "climate change" policy (exact phrase + keyword). FILTERS: - startDate/endDate: Filter by tweet date (YYYY-MM-DD format). OMIT by default, only use if user explicitly requests date range. IMPORTANT!!!!!: THE CURRENT YEAR IS 2026. When user requests relative dates (last week, last month), verify the current date from your system context and double-check the calculated dates - models often get the year wrong, searching one year earlier than intended.

  • language: Filter tweets by language (en, EN, English, es, Spanish, etc.).

Optional fields parameter for performance (default: ["id", "username", "name"]). Available fields: id, username, name, description, location, followersCount, followingCount, verified, profileImageUrl, and more. AGGREGATE FIELDS (from matching tweets) - MUST BE EXPLICITLY REQUESTED IN FIELDS: aggRelevance (relevance score for sorting), relevantTweetsCount (count of matching tweets per user), relevantTweetsImpressionsSum, relevantTweetsLikesSum, relevantTweetsQuotesSum, relevantTweetsRepliesSum, relevantTweetsRetweetsSum. These return aggregated metrics from all matched tweets for each user. Returns: results array of unique user profiles, count, pagination object, dataDumpExportOperationId for CSV. This is a safe, read-only tool for analyzing searchable information.

Parameters (JSON Schema)
  • query (required) — Full-text search of tweet content to find users who authored matching posts. Searches tweets/comments/quotes/retweets, returns UNIQUE user authors (deduplicated). EXACT PHRASES: Wrap in double quotes - "machine learning" matches that exact phrase. KEYWORDS: Without quotes, matches posts containing any of the words - AI robotics blockchain. BOOLEAN OPERATORS: MUST explicitly use the keywords AND, OR, NOT (uppercase or lowercase). NO implicit operators - space between words means OR by default. Examples requiring explicit operators: Use "deep learning" AND python (not "deep learning python"). Use tensorflow OR pytorch (not "tensorflow pytorch"). PARENTHESES: Group terms for precise logic - (AI OR "artificial intelligence") AND ethics. FORBIDDEN: DO NOT use filter operators with colons (from:, to:, lang:, since:, until:) - use dedicated parameters instead. Query examples: "climate change" | AI OR blockchain | "neural networks" AND python | (startup OR entrepreneur) NOT "venture capital"
  • fields (optional) — PERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "username", "name"]. AVAILABLE FIELDS: Core: id, username, name, description, location, verified, verifiedType, protected. Engagement: followersCount, followingCount, tweetCount, listedCount, likesCount, mediaCount. Profile: profileImageUrl, profileBannerUrl, profileInterstitialType. Metadata: source, status, pinnedTweetId, isVerified, accountBasedIn, locationAccurate, label, labelType. Analytics: collectedFollowingCount, collectedFollowersCount, collectedFollowersCoverage, collectedFollowingCoverage, avgTweetsPerDayLastMonth. Advanced: nLang, nLangsFiltered, inauthenticType, isInauthentic, isInauthenticProbScore, isInauthenticCalculatedAt. Timestamps: xFetchedAt, modifiedAt, createdAt, xModifiedAt. Account History: verifiedSinceDatetime, usernameChanges, lastUsernameChangeDatetime. Aggregations (from matching tweets, not all tweets of the user): aggRelevance (relevance score), relevantTweetsCount (count of matching tweets), relevantTweetsImpressionsSum, relevantTweetsLikesSum, relevantTweetsQuotesSum, relevantTweetsRepliesSum, relevantTweetsRetweetsSum. EXAMPLES: ["id", "username"] for minimal, ["username", "name", "followersCount", "relevantTweetsLikesSum", "relevantTweetsCount"] to include engagement aggregations.
  • startDate (optional)
  • endDate (optional)
  • language (optional)
  • tableName (optional) — Cached table name from previous pagination request. Required when fetching pageNumber > 1. Returned in first page response.
  • pageNumber (optional) — Page number to fetch (1-indexed). Must be provided with tableName to fetch subsequent pages. Omit for first page.
  • pageNumberEnd (optional) — Optional ending page number for fetching multiple consecutive pages at once (e.g., pageNumber=1, pageNumberEnd=5 fetches pages 1-5). Must be >= pageNumber. Omit to fetch single page only. Requires tableName.
  • forceLatest (optional) — USE SPARINGLY: Force fetching the latest data from the API, bypassing cache checks. Only use when explicitly required (e.g., "get the latest", "most recent", "real-time"). WARNING: Increases latency and API costs. Default: false (uses intelligent caching).
  • userPrompt (optional) — CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
  • _requestId (optional)
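The query-syntax rules above (double quotes for exact phrases, explicit AND/OR/NOT, no colon filters) can be made concrete with a small helper. The helper functions are hypothetical conveniences for illustration, not part of the server API.

```python
# Colon operators belong in dedicated parameters, never inside the query string.
FORBIDDEN_OPERATORS = ("from:", "to:", "lang:", "since:", "until:")

def phrase(text):
    """Wrap text in double quotes so it matches as an exact phrase."""
    return f'"{text}"'

def validate_query(query):
    """Reject colon filter operators per the query rules above."""
    lowered = query.lower()
    for op in FORBIDDEN_OPERATORS:
        if op in lowered:
            raise ValueError(f"use a dedicated parameter instead of {op!r}")
    return query

# Exact phrase OR keyword, combined with an explicit AND.
q = validate_query(f'({phrase("machine learning")} OR AI) AND ethics')
# q == '("machine learning" OR AI) AND ethics'
```

Note that a plain space between words means OR by default, so `"deep learning" AND python` and `"deep learning" python` are not equivalent queries.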
getUserAccessKey

Retrieve authenticated user access key. Required: authentication, confirmation. Returns: access key, metadata.

Parameters (JSON Schema)
  • confirmRetrieval (required) — Must be true to retrieve key. Security confirmation required.
  • _requestId (optional)
searchInstagramUsers

Search users by person name, partial username, or fuzzy match using real-time external API. PRIMARY USE: When given person's name (e.g., "Cristiano Ronaldo", "Kim Kardashian"), partial info, or uncertain username. Use for: Name-based search, finding multiple candidates, fuzzy matching, discovering users. NOT for: Exact username lookup (use getInstagramUserByUsername when username is certain). Optional fields parameter for performance (default: ["id", "username", "fullName"]). Available fields: id, username, fullName, biography, isPrivate, isVerified, followerCount, followingCount, mediaCount, profilePicUrl, and more. Returns: array of matching users (default 10, max 10) with userId, username, fullName, followerCount, biography, profilePicUrl. This is a safe, read-only tool for analyzing searchable information.

Parameters (JSON Schema)
  • name (required) — Search query for Instagram users. Supports partial name or username matching.
  • limit (optional) — Maximum number of users to return. Default: 10, Max: 10.
  • fields (optional) — PERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "username", "fullName"]. AVAILABLE FIELDS: Core: id, username, fullName, biography, isPrivate, isVerified. Engagement: followerCount, followingCount, mediaCount. Profile: profilePicUrl, profilePicId, profileUrl, externalUrl, hasAnonymousProfilePicture. Timestamps: lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "username"] for minimal, ["username", "fullName", "followerCount"] for basic info, or specify all fields if needed.
  • userPrompt (optional) — CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
  • _requestId (optional)
searchRedditSubreddits

Search Reddit subreddits by keywords using real-time external API. Searches subreddit names and descriptions to find communities. Use for: Discovering communities about topics, finding niche subreddits, exploring Reddit communities. Optional fields parameter for performance (default: ["id", "displayName", "title", "subscribersCount"]). Available fields: id, displayName, title, publicDescription, description, subscribersCount, activeUserCount, subredditType, over18, lang, url, subredditUrl, iconImg, bannerImg, headerImg, communityIcon, createdAt. Returns: array of matching subreddits (default 50, max 50) with id, name, description, subscriber count. This is a safe, read-only tool for analyzing searchable information.

Parameters (JSON Schema)
  • query (required) — Search query for Reddit subreddits. Searches subreddit names and descriptions.
  • limit (optional) — Maximum number of subreddits to return. Default: 50, Max: 50.
  • fields (optional) — PERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "displayName", "title", "subscribersCount"]. AVAILABLE FIELDS: Core: id, displayName, title, publicDescription, description. Stats: subscribersCount, activeUserCount. Meta: subredditType, over18, lang, url, subredditUrl. Images: iconImg, bannerImg, headerImg, communityIcon. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "displayName", "subscribersCount"] for minimal, ["displayName", "publicDescription", "subscribersCount", "activeUserCount"] for discovery.
  • userPrompt (optional) — CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
  • _requestId (optional)
searchRedditUsers

Search Reddit users by name, username, or profile description using real-time external API. PRIMARY USE: When given a person's name (e.g., "spez", "GallowBoob"), partial info, or uncertain username. Use for: Name-based search, finding multiple candidates, fuzzy matching, discovering users. NOT for: Exact username lookup (use getRedditUser when username is certain). Optional fields parameter for performance (default: ["id", "username", "totalKarma"]). Available fields: id, username, profileUrl, profilePicUrl, snoovatarImg, linkKarma, commentKarma, totalKarma, awardeeKarma, awarderKarma, isGold, isMod, isEmployee, hasVerifiedEmail, isSuspended, verified, isBlocked, acceptFollowers, hasSubscribed, hideFromRobots, prefShowSnoovatar, profileDescription, profileBannerUrl, profileTitle, createdAt. Returns: array of matching users (default 50, max 50) with id, username, karma metrics, profile info. This is a safe, read-only tool for analyzing searchable information.

Parameters (JSON Schema)
  • name (required) — Search query for Reddit users. Can be username, name, or keywords from profile.
  • limit (optional) — Maximum number of users to return. Default: 50, Max: 50.
  • fields (optional) — PERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "username", "totalKarma"]. AVAILABLE FIELDS: Core: id, username, profileUrl, profilePicUrl, snoovatarImg. Karma: linkKarma, commentKarma, totalKarma, awardeeKarma, awarderKarma. Status: isGold, isMod, isEmployee, hasVerifiedEmail, isSuspended, verified, isBlocked, acceptFollowers, hasSubscribed, hideFromRobots, prefShowSnoovatar. Profile: profileDescription, profileBannerUrl, profileTitle. Timestamps: createdAt, createdAtTimestamp, createdAtDate, lastFetch, lastFetchDatetime, xLastUpdated. EXAMPLES: ["id", "username"] for minimal, ["username", "totalKarma", "profileDescription"] for basic info.
  • userPrompt (optional) — CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
  • _requestId (optional)
searchTwitterUsers

Search users by person name, partial username, or fuzzy match using real-time external API. PRIMARY USE: When given person's name (e.g., "Elon Musk", "Sam Altman"), partial info, or uncertain username. Use for: Name-based search, finding multiple candidates, fuzzy matching, discovering users. NOT for: Exact username lookup (use getTwitterUserByUsername when username is certain). Optional fields parameter for performance (default: ["id", "username", "name"]). Available fields: id, username, name, description, location, followersCount, followingCount, verified, profileImageUrl, and more. Returns: array of matching users (default 10, max 10) with id, username, name, bio, followers_count. This is a safe, read-only tool for analyzing searchable information.

Parameters (JSON Schema)
  • name (required)
  • limit (optional)
  • fields (optional) — PERFORMANCE OPTIMIZATION: Specify fields you need. DEFAULT (if omitted): ["id", "username", "name"]. AVAILABLE FIELDS: Core: id, username, name, description, location, verified, verifiedType, protected. Engagement: followersCount, followingCount, tweetCount, listedCount, likesCount, mediaCount. Profile: profileImageUrl, profileBannerUrl, profileInterstitialType. Metadata: source, status, pinnedTweetId, isVerified, accountBasedIn, locationAccurate, label, labelType. Analytics: collectedFollowingCount, collectedFollowersCount, collectedFollowersCoverage, collectedFollowingCoverage, avgTweetsPerDayLastMonth. Advanced: nLang, nLangsFiltered, inauthenticType, isInauthentic, isInauthenticProbScore, isInauthenticCalculatedAt. Timestamps: xFetchedAt, modifiedAt, createdAt, xModifiedAt. Account History: verifiedSinceDatetime, usernameChanges, lastUsernameChangeDatetime. EXAMPLES: ["id", "username"] for minimal, ["username", "name", "description", "followersCount"] for basic info, or specify all fields if needed.
  • userPrompt (optional) — CRITICAL FOR ACCURACY: Include the complete user question to enable query optimization and context-aware filtering. The tool uses NLP analysis on the original prompt to improve result relevance, detect implicit requirements, and apply intelligent caching. Omitting this may result in suboptimal or incomplete results.
  • _requestId (optional)
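The routing rule that recurs in these user tools (exact username → direct lookup, person name or fuzzy input → search) can be sketched as a small dispatcher. The regex heuristic and function names are assumptions for illustration; the direct lookup is shown via the getTwitterUser tool documented above.

```python
import re

# Twitter handles are 1-15 chars of letters, digits, and underscores.
USERNAME_RE = re.compile(r"^@?[A-Za-z0-9_]{1,15}$")

def pick_lookup_tool(query, username_is_certain):
    """Return (tool name, arguments) following the guidance in the docs above."""
    if username_is_certain and USERNAME_RE.match(query):
        return "getTwitterUser", {
            "identifier": query.lstrip("@"),
            "identifierType": "username",
        }
    # Person names, partial info, or uncertain handles go to fuzzy search.
    return "searchTwitterUsers", {"name": query}

tool, args = pick_lookup_tool("elonmusk", username_is_certain=True)
tool2, args2 = pick_lookup_tool("Elon Musk", username_is_certain=False)
```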

FAQ

How do I claim this server?

To claim this server, publish a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    { "email": "your-email@example.com" }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the server will appear as claimed by you.
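The claim file above can be generated with a few lines of Python. This is a minimal sketch: the web-root path is a placeholder, and only the email value is yours to fill in; the `$schema` URL comes from the FAQ.

```python
import json
import tempfile
from pathlib import Path

def write_claim_file(webroot, email):
    """Write .well-known/glama.json under the server's web root."""
    doc = {
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],
    }
    path = Path(webroot) / ".well-known" / "glama.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(doc, indent=2))
    return path

# Demo against a temporary directory standing in for the real web root.
root = tempfile.mkdtemp()
path = write_claim_file(root, "your-email@example.com")
```

After deploying, the file must be reachable at https://your-domain/.well-known/glama.json and the email must match your Glama account.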

What are the benefits of claiming a server?
  • Control your server's listing on Glama, including description and metadata
  • Receive usage reports showing how your server is being used
  • Get monitoring and health status updates for your server