
web_data_walmart_seller

Extract structured Walmart seller data from URLs using cached lookups for reliable access to product listings and seller information.

Instructions

Quickly read structured Walmart seller data. Requires a valid Walmart seller URL. This can be served from a cache lookup, so it can be more reliable than scraping the page directly.

Input Schema

Name    Required    Description    Default
url     Yes         —              —
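
For reference, the tool takes a single argument object with a url field. A minimal sketch of the arguments a client might pass — the seller URL below is illustrative, not a real listing:

    // Arguments for a web_data_walmart_seller call.
    // The value must parse as a valid URL (z.string().url());
    // the seller path shown here is a made-up example.
    const args = {
        url: 'https://www.walmart.com/seller/12345',
    };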

Implementation Reference

  • server.js:315-323 (registration)
    Defines the tool metadata for 'walmart_seller', which is used to register the tool as 'web_data_walmart_seller' with dataset_id 'gd_m7ke48w81ocyu4hhz0', description, and input schema requiring 'url'.
    id: 'walmart_seller',
    dataset_id: 'gd_m7ke48w81ocyu4hhz0',
    description: [
        'Quickly read structured walmart seller data.',
        'Requires a valid walmart seller URL.',
        'This can be a cache lookup, so it can be more reliable than scraping',
    ].join('\n'),
    inputs: ['url'],
    }, {
  • Dynamically constructs the Zod input schema for the tool based on the 'inputs' array (['url'] for this tool), using z.string().url() for 'url'. A sketch of the resulting schema for this tool appears after this list.
    let parameters = {};
    for (let input of inputs)
    {
        let param_schema = input=='url' ? z.string().url() : z.string();
        parameters[input] = defaults[input] !== undefined
            ? param_schema.default(defaults[input])
            : param_schema;
    }
    addTool({
        name: `web_data_${id}`,
        description,
        parameters: z.object(parameters),
  • server.js:683-744 (registration)
    Registers the MCP tool 'web_data_walmart_seller' via addTool, including name construction `web_data_walmart_seller`, schema, and execute handler.
    addTool({
        name: `web_data_${id}`,
        description,
        parameters: z.object(parameters),
        execute: tool_fn(`web_data_${id}`, async(data, ctx)=>{
            let trigger_response = await axios({
                url: 'https://api.brightdata.com/datasets/v3/trigger',
                params: {dataset_id, include_errors: true},
                method: 'POST',
                data: [data],
                headers: api_headers(),
            });
            if (!trigger_response.data?.snapshot_id)
                throw new Error('No snapshot ID returned from request');
            let snapshot_id = trigger_response.data.snapshot_id;
            console.error(`[web_data_${id}] triggered collection with `
                +`snapshot ID: ${snapshot_id}`);
            let max_attempts = 600;
            let attempts = 0;
            while (attempts < max_attempts)
            {
                try {
                    if (ctx && ctx.reportProgress)
                    {
                        await ctx.reportProgress({
                            progress: attempts,
                            total: max_attempts,
                            message: `Polling for data (attempt `
                                +`${attempts + 1}/${max_attempts})`,
                        });
                    }
                    let snapshot_response = await axios({
                        url: `https://api.brightdata.com/datasets/v3`
                            +`/snapshot/${snapshot_id}`,
                        params: {format: 'json'},
                        method: 'GET',
                        headers: api_headers(),
                    });
                    if (['running', 'building'].includes(snapshot_response.data?.status))
                    {
                        console.error(`[web_data_${id}] snapshot not ready, `
                            +`polling again (attempt `
                            +`${attempts + 1}/${max_attempts})`);
                        attempts++;
                        await new Promise(resolve=>setTimeout(resolve, 1000));
                        continue;
                    }
                    console.error(`[web_data_${id}] snapshot data received `
                        +`after ${attempts + 1} attempts`);
                    let result_data = JSON.stringify(snapshot_response.data);
                    return result_data;
                } catch(e){
                    console.error(`[web_data_${id}] polling error: `
                        +`${e.message}`);
                    attempts++;
                    await new Promise(resolve=>setTimeout(resolve, 1000));
                }
            }
            throw new Error(`Timeout after ${max_attempts} seconds waiting `
                +`for data`);
        }),
    });
  • The core handler logic for 'web_data_walmart_seller': it triggers the Bright Data dataset 'gd_m7ke48w81ocyu4hhz0' with the input data, polls the snapshot status once per second for up to 10 minutes, and returns the collected structured data as a JSON string. A simplified sketch of this trigger-and-poll flow appears after this list.
    (The execute handler is the tool_fn(...) callback shown in full in the registration snippet above.)
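
For this tool, the dynamic schema construction reduces to a single required, URL-validated string. A minimal sketch of the effective input schema — the walmartSellerInput name is illustrative, not taken from server.js:

    import {z} from 'zod';

    // Effective input schema for web_data_walmart_seller:
    // inputs = ['url'] with no default, so the loop above yields a
    // single required string field validated as a URL.
    const walmartSellerInput = z.object({
        url: z.string().url(),
    });

    // walmartSellerInput.parse({url: 'not-a-url'}) throws a ZodError;
    // a well-formed seller URL passes.

The trigger-and-poll flow in the execute handler can also be read as a standalone sketch. The endpoints, parameters, and dataset ID below come from the handler above; the authorization header is an assumption, since api_headers() is defined elsewhere in server.js:

    // Standalone sketch of the handler's trigger-and-poll flow.
    // Assumption: api_headers() supplies a bearer token; here it is read
    // from an API_TOKEN environment variable instead.
    const axios = require('axios');

    const headers = {authorization: `Bearer ${process.env.API_TOKEN}`};
    const dataset_id = 'gd_m7ke48w81ocyu4hhz0'; // walmart_seller dataset

    async function collect_walmart_seller(url){
        // 1. Trigger a collection for the given seller URL.
        let trigger = await axios({
            url: 'https://api.brightdata.com/datasets/v3/trigger',
            method: 'POST',
            params: {dataset_id, include_errors: true},
            data: [{url}],
            headers,
        });
        let snapshot_id = trigger.data?.snapshot_id;
        if (!snapshot_id)
            throw new Error('No snapshot ID returned from request');
        // 2. Poll the snapshot once per second until it leaves the
        //    'running'/'building' states (the server caps this at 600 tries).
        for (let attempt = 0; attempt < 600; attempt++)
        {
            let snapshot = await axios({
                url: `https://api.brightdata.com/datasets/v3/snapshot/${snapshot_id}`,
                method: 'GET',
                params: {format: 'json'},
                headers,
            });
            if (!['running', 'building'].includes(snapshot.data?.status))
                return snapshot.data; // structured seller records
            await new Promise(resolve=>setTimeout(resolve, 1000));
        }
        throw new Error('Timeout waiting for data');
    }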

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/dsouza-anush/brightdata-mcp-heroku'
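
The same lookup can be made from code. A minimal sketch using Node's built-in fetch (Node 18+); the response is printed as-is, since its shape is not documented on this page:

    // Fetch this server's MCP directory entry from the Glama API.
    (async()=>{
        let res = await fetch('https://glama.ai/api/mcp/v1/servers/'
            +'dsouza-anush/brightdata-mcp-heroku');
        console.log(await res.json());
    })();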

If you have feedback or need assistance with the MCP directory API, please join our Discord server.