zh.json
{
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.",
"Server URL": "服务器 URL",
"Access Token": "Access Token",
"LocalAI Instance URL": "LocalAI Instance URL",
"LocalAI Access Token": "LocalAI Access Token",
"Ask LocalAI": "Ask LocalAI",
"Custom API Call": "自定义 API 呼叫",
"Ask LocalAI anything you want!": "Ask LocalAI anything you want!",
"Make a custom API call to a specific endpoint": "将一个自定义 API 调用到一个特定的终点",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Roles": "角色",
"Method": "方法",
"Headers": "信头",
"Query Parameters": "查询参数",
"Body": "正文内容",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "失败时没有错误",
"Timeout (in seconds)": "超时(秒)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"Authorization headers are injected automatically from your connection.": "授权头自动从您的连接中注入。",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"GET": "获取",
"POST": "帖子",
"PATCH": "PATCH",
"PUT": "弹出",
"DELETE": "删除",
"HEAD": "黑色"
}