hi.json
{
"OpenRouter": "OpenRouter",
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Use any AI model to generate code, text, or images via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Ask LLM",
"Custom API Call": "Custom API Call",
"Ask any model supported by Open Router.": "Ask any model supported by Open Router.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Prompt": "Prompt",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
"The prompt to send to the model.": "The prompt to send to the model.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
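
These strings label the parameters of the "Ask LLM" action (Model, Prompt, Temperature, Maximum Tokens, Top P). As a rough illustration of how they map onto a request, here is a minimal sketch assuming OpenRouter's OpenAI-compatible chat completions endpoint; the function name and defaults are hypothetical, and the Authorization header stands in for the one the integration injects from your connection.

```python
# Minimal sketch of the "Ask LLM" action, assuming OpenRouter's
# OpenAI-compatible chat completions endpoint. Function name and
# defaults are hypothetical illustrations of the labeled fields.
import requests

def ask_llm(api_key: str, model: str, prompt: str,
            temperature: float = 1.0, max_tokens: int = 512,
            top_p: float = 1.0) -> str:
    """Send a single-prompt completion request to OpenRouter."""
    response = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        # In the integration this header is injected from your connection.
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": model,              # e.g. "openai/gpt-4o"
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,  # lower = less random
            "max_tokens": max_tokens,    # cap on the completion length
            "top_p": top_p,              # nucleus sampling probability mass
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```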
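
Similarly, the "Custom API Call" strings (Method, Headers, Query Parameters, Body, No Error on Failure, Timeout) suggest a thin wrapper around an arbitrary HTTP request. A sketch under that assumption, with hypothetical parameter names derived from the labels:

```python
# Sketch of the "Custom API Call" action, assuming the labeled fields
# are forwarded directly to an HTTP request. The no_error_on_failure
# flag name is taken from the "No Error on Failure" label.
import requests

def custom_api_call(api_key: str, url: str, method: str = "GET",
                    headers: dict | None = None, params: dict | None = None,
                    body: dict | None = None, timeout: float = 30.0,
                    no_error_on_failure: bool = False) -> requests.Response:
    # Authorization is merged in automatically, as the strings above note.
    merged_headers = {"Authorization": f"Bearer {api_key}", **(headers or {})}
    response = requests.request(
        method,              # GET, POST, PATCH, PUT, DELETE, or HEAD
        url,
        headers=merged_headers,
        params=params,       # "Query Parameters"
        json=body,           # "Body"
        timeout=timeout,     # "Timeout (in seconds)"
    )
    if not no_error_on_failure:
        response.raise_for_status()  # surface 4xx/5xx unless suppressed
    return response
```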