{
"Extract structured data from websites using AI with natural language prompts": "Extract structured data from websites using AI with natural language prompts",
"\nFollow these steps to obtain your Firecrawl API Key:\n\n1. Visit [Firecrawl](https://firecrawl.dev) and create an account.\n2. Log in and navigate to your dashboard.\n3. Locate and copy your API key from the API settings section.\n": "\nFollow these steps to obtain your Firecrawl API Key:\n\n1. Visit [Firecrawl](https://firecrawl.dev) and create an account.\n2. Log in and navigate to your dashboard.\n3. Locate and copy your API key from the API settings section.\n",
"Scrape Website": "Scrape Website",
"Start Crawl": "Start Crawl",
"Crawl Results": "Crawl Results",
"Custom API Call": "Custom API Call",
"Scrape a website by performing a series of actions like clicking, typing, taking screenshots, and extracting data.": "Scrape a website by performing a series of actions like clicking, typing, taking screenshots, and extracting data.",
"Start crawling multiple pages from a website based on specified rules and patterns.": "Start crawling multiple pages from a website based on specified rules and patterns.",
"Get the results of a crawl job.": "Get the results of a crawl job.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Website URL": "Website URL",
"Timeout (ms)": "Timeout (ms)",
"Perform Actions Before Scraping": "Perform Actions Before Scraping",
"Action Properties": "Action Properties",
"Extraction Type": "Extraction Type",
"Extraction Properties": "Extraction Properties",
"URL": "URL",
"Exclude Paths": "Exclude Paths",
"Include Paths": "Include Paths",
"Maximum Path Depth": "Maximum Path Depth",
"Maximum Discovery Depth": "Maximum Discovery Depth",
"Ignore Sitemap": "Ignore Sitemap",
"Ignore Query Parameters": "Ignore Query Parameters",
"Limit": "Limit",
"Allow Backward Links": "Allow Backward Links",
"Allow External Links": "Allow External Links",
"Deliver Results to Webhook": "Deliver Results to Webhook",
"Webhook Properties": "Webhook Properties",
"Crawl ID": "Crawl ID",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The webpage URL to scrape.": "The webpage URL to scrape.",
"Maximum time to wait for the page to load (in milliseconds).": "Maximum time to wait for the page to load (in milliseconds).",
"Enable to perform a sequence of actions on the page before scraping (like clicking buttons, filling forms, etc.). See [Firecrawl Actions Documentation](https://docs.firecrawl.dev/api-reference/endpoint/scrape#body-actions) for details on available actions and their parameters.": "Enable to perform a sequence of actions on the page before scraping (like clicking buttons, filling forms, etc.). See [Firecrawl Actions Documentation](https://docs.firecrawl.dev/api-reference/endpoint/scrape#body-actions) for details on available actions and their parameters.",
"Properties for actions that will be performed on the page.": "Properties for actions that will be performed on the page.",
"Choose how to extract data from the webpage.": "Choose how to extract data from the webpage.",
"Properties for data extraction from the webpage.": "Properties for data extraction from the webpage.",
"The base URL to start crawling from.": "The base URL to start crawling from.",
"URL pathname regex patterns that exclude matching URLs from the crawl. For example, if you set \"excludePaths\": [\"blog/.*\"] for the base URL firecrawl.dev, any results matching that pattern will be excluded, such as https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap.": "URL pathname regex patterns that exclude matching URLs from the crawl. For example, if you set \"excludePaths\": [\"blog/.*\"] for the base URL firecrawl.dev, any results matching that pattern will be excluded, such as https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap.",
"URL pathname regex patterns that include matching URLs in the crawl. Only the paths that match the specified patterns will be included in the response. For example, if you set \"includePaths\": [\"blog/.*\"] for the base URL firecrawl.dev, only results matching that pattern will be included, such as https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap.": "URL pathname regex patterns that include matching URLs in the crawl. Only the paths that match the specified patterns will be included in the response. For example, if you set \"includePaths\": [\"blog/.*\"] for the base URL firecrawl.dev, only results matching that pattern will be included, such as https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap.",
"Maximum depth to crawl relative to the base URL. Basically, the max number of slashes the pathname of a scraped URL may contain.": "Maximum depth to crawl relative to the base URL. Basically, the max number of slashes the pathname of a scraped URL may contain.",
"Maximum depth to crawl based on discovery order. The root site and sitemapped pages has a discovery depth of 0. For example, if you set it to 1, and you set ignoreSitemap, you will only crawl the entered URL and all URLs that are linked on that page.": "Maximum depth to crawl based on discovery order. The root site and sitemapped pages has a discovery depth of 0. For example, if you set it to 1, and you set ignoreSitemap, you will only crawl the entered URL and all URLs that are linked on that page.",
"Ignore the website sitemap when crawling": "Ignore the website sitemap when crawling",
"Do not re-scrape the same path with different (or none) query parameters": "Do not re-scrape the same path with different (or none) query parameters",
"Maximum number of pages to crawl. Default limit is 10000.": "Maximum number of pages to crawl. Default limit is 10000.",
"Enables the crawler to navigate from a specific URL to previously linked pages.": "Enables the crawler to navigate from a specific URL to previously linked pages.",
"Allows the crawler to follow links to external websites.": "Allows the crawler to follow links to external websites.",
"Enable to send crawl results to a webhook URL.": "Enable to send crawl results to a webhook URL.",
"Properties for webhook configuration.": "Properties for webhook configuration.",
"The ID of the crawl job to check.": "The ID of the crawl job to check.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
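
These entries read as an i18next-style resource in which each English source string doubles as its own lookup key, so display-text fixes belong on the value side only. Below is a minimal consumption sketch under that assumption; the "./translation.json" import path, the "en" locale, and the i18next setup are illustrative, not something this file prescribes. Note that many keys here contain literal "." and ":" characters, so the default key and namespace separators must be disabled for lookups to work.

```ts
// A minimal sketch, assuming an i18next-style setup where the English
// source string is the key. Importing JSON requires "resolveJsonModule"
// in tsconfig; the file path and locale name are assumptions.
import i18next from "i18next";
import translation from "./translation.json";

async function main() {
  await i18next.init({
    lng: "en",
    // Keys like "Get the results of a crawl job." contain "." and ":",
    // so disable i18next's default key/namespace separators.
    keySeparator: false,
    nsSeparator: false,
    resources: {
      en: { translation },
    },
  });

  // Lookups use the original key; the value carries any corrected text.
  console.log(i18next.t("Scrape Website"));       // "Scrape Website"
  console.log(i18next.t("Response is Binary ?")); // "Response is Binary?"
}

main();
```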