{
"Extract structured data from websites using AI with natural language prompts": "Extract structured data from websites using AI with natural language prompts",
"\nFollow these steps to obtain your Firecrawl API Key:\n\n1. Visit [Firecrawl](https://firecrawl.dev) and create an account.\n2. Log in and navigate to your dashboard.\n3. Locate and copy your API key from the API settings section.\n": "\nFollow these steps to obtain your Firecrawl API Key:\n\n1. Visit [Firecrawl](https://firecrawl.dev) and create an account.\n2. Log in and navigate to your dashboard.\n3. Locate and copy your API key from the API settings section.\n",
"Scrape Website": "Scrape Website",
"Extract Structured Data": "Extract Structured Data",
"Crawl": "Crawl",
"Crawl Results": "Crawl Results",
"Map Websites": "Map Websites",
"Custom API Call": "自定义 API 呼叫",
"Scrape a website by performing a series of actions like clicking, typing, taking screenshots, and extracting data.": "Scrape a website by performing a series of actions like clicking, typing, taking screenshots, and extracting data.",
"Extract structured data from multiple URLs using AI.": "Extract structured data from multiple URLs using AI.",
"Crawl multiple pages from a website based on specified rules and patterns.": "Crawl multiple pages from a website based on specified rules and patterns.",
"Get the results of a crawl job.": "Get the results of a crawl job.",
"Input a website and get all the urls on the website.": "Input a website and get all the urls on the website.",
"Make a custom API call to a specific endpoint": "将一个自定义 API 调用到一个特定的终点",
"Website URL": "Website URL",
"Timeout (ms)": "Timeout (ms)",
"Perform Actions Before Scraping": "Perform Actions Before Scraping",
"Action Properties": "Action Properties",
"Output Format": "Output Format",
"Extraction Prompt": "Extraction Prompt",
"Schema Mode": "Schema Mode",
"Data Definition": "Data Definition",
"URLs": "URLs",
"Enable Web Search": "Enable Web Search",
"Timeout (seconds)": "Timeout (seconds)",
"Data Schema Type": "Data Schema Type",
"URL": "URL",
"Prompt": "Prompt",
"Limit": "Limit",
"Only Main Content": "Only Main Content",
"Deliver Results to Webhook": "Deliver Results to Webhook",
"Webhook Properties": "Webhook Properties",
"Crawl ID": "Crawl ID",
"Main Website URL": "Main Website URL",
"Include subdomain": "Include subdomain",
"Method": "方法",
"Headers": "信头",
"Query Parameters": "查询参数",
"Body": "正文内容",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "失败时没有错误",
"Timeout (in seconds)": "超时(秒)",
"The webpage URL to scrape.": "The webpage URL to scrape.",
"Maximum time to wait for the page to load (in milliseconds).": "Maximum time to wait for the page to load (in milliseconds).",
"Enable to perform a sequence of actions on the page before scraping (like clicking buttons, filling forms, etc.). See [Firecrawl Actions Documentation](https://docs.firecrawl.dev/api-reference/endpoint/scrape#body-actions) for details on available actions and their parameters.": "Enable to perform a sequence of actions on the page before scraping (like clicking buttons, filling forms, etc.). See [Firecrawl Actions Documentation](https://docs.firecrawl.dev/api-reference/endpoint/scrape#body-actions) for details on available actions and their parameters.",
"Properties for actions that will be performed on the page.": "Properties for actions that will be performed on the page.",
"Choose what format you want your output in.": "Choose what format you want your output in.",
"Prompt for extracting data.": "Prompt for extracting data.",
"Data schema type.": "Data schema type.",
"Add one or more URLs to extract data from.": "Add one or more URLs to extract data from.",
"Describe what information you want to extract.": "Describe what information you want to extract.",
"Enable web search to find additional context.": "Enable web search to find additional context.",
"Timeout in seconds after which the task will be cancelled": "Timeout in seconds after which the task will be cancelled",
"For complex schema, you can use advanced mode.": "For complex schema, you can use advanced mode.",
"The base URL to start crawling from.": "The base URL to start crawling from.",
"Maximum number of pages to crawl. Default limit is 10.": "Maximum number of pages to crawl. Default limit is 10.",
"Only return the main content of the page, excluding headers, navs, footers, etc.": "Only return the main content of the page, excluding headers, navs, footers, etc.",
"Enable to send crawl results to a webhook URL.": "Enable to send crawl results to a webhook URL.",
"Properties for webhook configuration.": "Properties for webhook configuration.",
"The ID of the crawl job to check.": "The ID of the crawl job to check.",
"The webpage URL to start scraping from.": "The webpage URL to start scraping from.",
"Include and crawl pages from subdomains of the target website (e.g., blog.example.com, shop.example.com) in addition to the main domain.": "Include and crawl pages from subdomains of the target website (e.g., blog.example.com, shop.example.com) in addition to the main domain.",
"Maximum number of links to return (max: 100,000)": "Maximum number of links to return (max: 100,000)",
"Authorization headers are injected automatically from your connection.": "授权头自动从您的连接中注入。",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"Simple": "Simple",
"Advanced": "Advanced",
"GET": "获取",
"POST": "帖子",
"PATCH": "PATCH",
"PUT": "弹出",
"DELETE": "删除",
"HEAD": "黑色"
}