Title: ZHIPU AI OPEN PLATFORM
URL Source: https://bigmodel.cn/dev/api/normal-model/glm-4
Published Time: Wed, 09 Jul 2025 03:20:58 GMT

GLM-4
-----

GLM-4 offers multiple models suitable for various application scenarios. View the GLM-4 series [model comparison](https://bigmodel.cn/dev/howuse/model) to select the most suitable model.

Model Codes: glm-4-plus, glm-4-0520, glm-4, glm-4-air, glm-4-airx, glm-4-long, glm-4-flash

Synchronous Call
----------------

**Interface Request**

| Type | Description |
| --- | --- |
| Method | HTTPS |
| Request URL | https://open.bigmodel.cn/api/paas/v4/chat/completions |
| Call Method | Synchronous call: wait for the model to finish and return the final result, or use an SSE call |
| Character Encoding | UTF-8 |
| Request Format | JSON |
| Response Format | JSON or standard stream event |
| Request Type | POST |
| Development Language | Any development language capable of making HTTP requests |

### Request Parameters

| Parameter Name | Type | Required | Parameter Description |
| --- | --- | --- | --- |
| model | String | Yes | The model code to be called. |
| messages | List<Object> | Yes | The current conversation message list as the model's prompt input, provided in JSON array format, e.g., {"role": "user", "content": "Hello"}. Possible message types include system messages, user messages, assistant messages, and tool messages. |
| request_id | String | No | Passed by the client and must be unique; used to distinguish each request. If not provided, the platform generates one by default. |
| do_sample | Boolean | No | When true, the sampling strategy is enabled; when false, sampling parameters such as temperature and top_p have no effect. Default is true. |
| stream | Boolean | No | Set to false or omit for synchronous calls: the model returns all content at once after generation completes. Default is false. If set to true, the model returns generated content in chunks via a standard Event Stream; when the stream ends, a data: [DONE] message is returned. |
| temperature | Float | No | Sampling temperature; controls the randomness of the output. Must be a positive number within the range [0.0, 1.0]; default is 0.95. |
| top_p | Float | No | Another method of temperature sampling (nucleus sampling). Range: [0.0, 1.0]; default is 0.7. |
| max_tokens | Integer | No | Maximum number of tokens for model output; maximum 4095, default 1024. |
| stop | List | No | The model stops generating when it encounters the specified stop characters. Currently only a single stop word is supported, e.g. ["stop_word1"]. |
| tools | List | No | Tools that the model can call. |
| type | String | No | Tool type: function calling, knowledge base retrieval, or web search. For parameter configuration, see the **Tools format**. |
| tool_choice | String or Object | No | Controls which function the model chooses to call; only applies when the tool type is function. Default is auto; currently only auto is supported. |
| user_id | String | No | Unique ID of the end user; helps the platform intervene in abuse by the end user, such as illegal activities or generation of illegal or improper information. Length: 6 to 128 characters. |

### Message Field Description

**System Message Format**

| Parameter Name | Type | Required | Parameter Description |
| --- | --- | --- | --- |
| role | String | Yes | The role of the message; should be `system`. |
| content | String | Yes | Message content. |

**User Message Format**

| Parameter Name | Type | Required | Parameter Description |
| --- | --- | --- | --- |
| role | String | Yes | The role of the message; should be `user`. |
| content | String | Yes | Message content. |

**Assistant Message Format**

| Parameter Name | Type | Required | Parameter Description |
| --- | --- | --- | --- |
| role | String | Yes | The role of the message; should be `assistant`. |
| content | String | Yes | Either `content` or `tool_calls` is required. Message content; empty when the `tool_calls` field is present. |
| tool_calls | List | Yes | Either `content` or `tool_calls` is required. Tool call messages generated by the model. |
| id | String | Yes | Tool ID. |
| type | String | Yes | Tool type; supports `web_search`, `retrieval`, `function`. |
| function | Object | No | Not empty when type is `function`. |
| name | String | Yes | Function name. |
| arguments | String | Yes | Model-generated function call arguments in JSON format. Note that the model may generate invalid JSON or fabricate parameters not in your function specification; validate the arguments in your code before calling the function. |

**Tool Message Format**

`Tool Message` represents the result returned after calling a tool. The model then outputs a natural language message to the user based on the `tool message`.

| Parameter Name | Type | Required | Parameter Description |
| --- | --- | --- | --- |
| role | String | Yes | The role of the message; should be `tool`. |
| content | String | Yes | The content of the tool message: the result returned after calling the tool. |
| tool_call_id | String | Yes | The record of the tool call. |
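Taken together, the message formats above compose a single `messages` list. Below is a minimal sketch of one tool-call round trip; the function name `get_weather`, the id `call_123`, and the returned weather data are hypothetical, invented purely for illustration:

```python
# Hypothetical tool-call exchange illustrating the four message formats.
# "get_weather", "call_123", and the weather payload are made up for this sketch.
messages = [
    {"role": "system", "content": "You are a helpful weather assistant."},
    {"role": "user", "content": "What's the weather in Beijing?"},
    # Assistant message: when tool_calls is present, content is empty.
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "call_123",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"location": "Beijing"}',
                },
            }
        ],
    },
    # Tool message: the tool's return value, linked back via tool_call_id.
    {
        "role": "tool",
        "content": '{"temp_c": 21, "condition": "sunny"}',
        "tool_call_id": "call_123",
    },
]
```

Sending this list back as `messages` lets the model turn the tool result into a natural language reply.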
### Tools Format

**[Web_Search](https://www.bigmodel.cn/dev/howuse/websearch)**

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `type` | string | Yes | `"web_search"`; indicates the tool type is web search. |
| `enable` | boolean | No | Whether to enable search functionality. Default is `false`; set to `true` to enable. |
| `search_engine` | string | Yes | Type of search engine. Default is `search_std`. Supports: `search_std`, `search_pro`, `search_pro_sogou`, `search_pro_quark`, `search_pro_jina`, `search_pro_bing`. |
| `search_query` | string | No | Force-trigger a search. |
| `count` | Int | No | Number of returned results. Range: 1-50, max 50 results per search; default is `10`. Supported engines: `search_std`, `search_pro`, `search_pro_sogou`. For `search_pro_sogou`, allowed values are 10, 20, 30, 40, 50. |
| `search_domain_filter` | String | No | Limits search results to whitelisted domains. Enter domains directly (e.g., `www.example.com`). Supported engines: `search_std`, `search_pro`, `search_pro_sogou`, `search_pro_jina`. |
| `search_recency_filter` | String | No | Limits search to a specific time range. Default is `noLimit`. Values: `oneDay` (within a day), `oneWeek` (within a week), `oneMonth` (within a month), `oneYear` (within a year), `noLimit` (no limit). Supported engines: `search_std`, `search_pro`, `search_pro_sogou`, `search_pro_quark`. |
| `content_size` | String | No | Length of webpage summaries. Default is `medium`. `medium`: balanced mode for most queries, 400-600 characters. `high`: maximizes context for comprehensive answers, 2500 characters. |
| `result_sequence` | string | No | Whether search results are shown before or after the model response. Options: `"before"`, `"after"`. Default is `"after"`. |
| `search_result` | boolean | No | Whether to return detailed sources of search results. Default is `false`. |
| `require_search` | boolean | No | Whether to force the model to respond based on search results. Default is `false`. |
| `search_prompt` | string | No | Prompt to customize how search results are processed. See the default prompt below. |

Default `search_prompt`:

```
You are an intelligent Q&A expert with the ability to synthesize information, recognize time, understand semantics, and clean contradictory data. The current date is {{current_date}}. Use this as the only time reference. Based on the following information, provide a comprehensive and accurate answer to the user's question. Only extract valuable content for the answer. Ensure the answer is timely and authoritative. State the answer directly without citing data sources or internal processes.
```

**[Function Call](https://www.bigmodel.cn/dev/howuse/functioncall)**

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `type` | string | Yes | `"function"`; indicates the tool type is function call. |
| `name` | string | Yes | Function name. May only contain letters, digits, underscores, and hyphens; max length 64. |
| `description` | string | Yes | Function description, explaining its purpose and behavior. |
| `parameters` | object | Yes | Parameters defined using JSON Schema. Pass a JSON Schema object to accurately define the accepted parameters; omit if the function takes no parameters. See the example below. |

Example `parameters` object:

```
{
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City, e.g., Beijing"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"]
      }
    },
    "required": ["location"]
  }
}
```

It is recommended to disable `do_sample` or lower `temperature` and `top_p` to improve the success rate when using function calls.

**[Retrieval](https://www.bigmodel.cn/dev/howuse/retrieval)**

| Parameter Name | Type | Required | Description |
| --- | --- | --- | --- |
| `type` | string | Yes | `"retrieval"`; indicates the tool type is knowledge base retrieval. |
| `knowledge_id` | string | Yes | Knowledge base ID, created on or obtained from the platform. |
| `prompt_template` | string | No | Custom request template for the model, containing the placeholders `{{knowledge}}` and `{{question}}`. See the default template below. |

Default template:

```
Search for the answer to the question "{{question}}" in the document "{{knowledge}}". If an answer is found, respond only using statements from the document; if no answer is found, use your own knowledge to answer and inform the user that the information is not from the document. Do not repeat the question; start the answer directly.
```

### Response Content

| Parameter Name | Type | Parameter Description |
| --- | --- | --- |
| id | String | Task ID. |
| created | Long | Request creation time, Unix timestamp in seconds. |
| model | String | Model name. |
| choices | List | Model output content for the current conversation. |
| index | Integer | Result index. |
| finish_reason | String | Reason for model inference termination: 'stop', 'tool_calls', 'length', 'sensitive', or 'network_error'. |
| message | Object | Text message returned by the model. |
| role | String | Current conversation role; default is 'assistant' (the model). |
| content | String | Current conversation content. Null when a function call is hit; otherwise the model inference result. |
| tool_calls | List<Object> | Function names and arguments generated by the model that should be called. |
| function | Object | Contains the function name and JSON-format arguments generated by the model. |
| name | String | Model-generated function name. |
| arguments | Object | JSON-format function call arguments generated by the model. Validate the arguments before calling the function. |
| id | String | Unique identifier for the hit function. |
| type | String | Tool type called by the model; currently only 'function' is supported. |
| usage | Object | Token usage statistics returned when the model call ends. |
| prompt_tokens | Integer | Number of tokens in user input. |
| completion_tokens | Integer | Number of tokens in model output. |
| total_tokens | Integer | Total number of tokens. |
| web_search | List | Information related to web search results. |
| icon | String | Icon of the source website. |
| title | String | Title of the search result. |
| link | String | Web link of the search result. |
| media | String | Media source name of the search result webpage. |
| publish_date | String | Website publication date. |
| content | String | Text content quoted from the search result webpage. |
| refer | String | Reference index. |
### Request Example

```
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="")  # Please fill in your own API key
response = client.chat.completions.create(
    model="glm-4",  # Please fill in the model name you want to call
    messages=[
        {"role": "user", "content": "As a marketing expert, please create an attractive slogan for my product"},
        {"role": "assistant", "content": "Sure, to create an attractive slogan, please tell me some information about your product"},
        {"role": "user", "content": "ZhipuAI Open Platform"},
        {"role": "assistant", "content": "Ignite the future, ZhipuAI paints the infinite, making innovation within reach!"},
        {"role": "user", "content": "Create a more precise and attractive slogan"}
    ],
)
print(response)
```

**Response Example:**

```
{
    "created": 1703487403,
    "id": "8239375684858666781",
    "model": "glm-4",
    "request_id": "8239375684858666781",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "With AI painting the blueprint — ZhipuAI, making every moment of innovation possible.",
                "role": "assistant"
            }
        }
    ],
    "usage": {
        "completion_tokens": 217,
        "prompt_tokens": 31,
        "total_tokens": 248
    }
}
```

Streaming Output
----------------

### Response Content

| Parameter Name | Type | Parameter Description |
| --- | --- | --- |
| id | String | Task ID generated by the ZhipuAI Open Platform, used when querying the request result. |
| created | Long | Request creation time, Unix timestamp in seconds. |
| choices | List | Model output content for the current conversation. |
| index | Integer | Result index. |
| finish_reason | String | Reason for model inference termination. 'stop' indicates a natural end or stop word trigger; 'tool_calls' indicates a function hit; 'length' indicates the token length limit was reached; 'sensitive' indicates content was intercepted by the security review interface (the user should judge and decide whether to retract public content); 'network_error' indicates a model inference exception. |
| delta | Object | Incremental text information returned by the model. |
| role | String | Current dialogue role; default is 'assistant' (the model). |
| content | String | Current dialogue content. Null when a function is hit; otherwise the model inference result. |
| tool_calls | List | Function names and arguments generated by the model to be called. |
| function | Object | Contains the function name and JSON-format arguments generated by the model. |
| name | String | Function name generated by the model. |
| arguments | Object | JSON-format function call arguments generated by the model. Validate the arguments before calling the function. |
| id | String | Unique identifier for the function hit. |
| type | String | Tool type called by the model; currently only 'function' is supported. |
| usage | Object | Token usage statistics returned when the model call ends. |
| prompt_tokens | Integer | Number of tokens in user input. |
| completion_tokens | Integer | Number of tokens in model output. |
| total_tokens | Integer | Total number of tokens. |
| web_search | List | Information related to web search results. |
| icon | String | Icon of the source website. |
| title | String | Title of the search result. |
| link | String | Web link of the search result. |
| media | String | Media source name of the search result webpage. |
| publish_date | String | Website publication date. |
| content | String | Quoted text content from the search result webpage. |

### Request Example

The latest GLM-4 series models support new features such as system prompts, function calling, retrieval, and Web_Search. To use these features, upgrade to the latest version of the Python SDK:

```
pip install --upgrade zhipuai
```

```
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="")  # Please fill in your own API key
response = client.chat.completions.create(
    model="glm-4",  # Please fill in the model name you want to call
    messages=[
        {"role": "system", "content": "You are a helpful assistant that answers various questions, providing professional, accurate, and insightful advice."},
        {"role": "user", "content": "I am very interested in the planets of the solar system, especially Saturn. Please provide basic information about Saturn, including its size, composition, ring system, and any unique astronomical phenomena."},
    ],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta)
```

**Response Example:**

```
data: {"id":"8313807536837492492","created":1706092316,"model":"glm-4","choices":[{"index":0,"delta":{"role":"assistant","content":"Saturn"}}]}
data: {"id":"8313807536837492492","created":1706092316,"model":"glm-4","choices":[{"index":0,"delta":{"role":"assistant","content":" is"}}]}
....
data: {"id":"8313807536837492492","created":1706092316,"model":"glm-4","choices":[{"index":0,"delta":{"role":"assistant","content":" a"}}]}
data: {"id":"8313807536837492492","created":1706092316,"model":"glm-4","choices":[{"index":0,"delta":{"role":"assistant","content":" gas"}}]}
data: {"id":"8313807536837492492","created":1706092316,"model":"glm-4","choices":[{"index":0,"finish_reason":"length","delta":{"role":"assistant","content":""}}],"usage":{"prompt_tokens":60,"completion_tokens":100,"total_tokens":160}}
data: [DONE]
```

Function Call
-------------

**Request Example:**

```
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="")  # Please fill in your own API key

tools = [
    {
        "type": "function",
        "function": {
            "name": "query_train_info",
            "description": "Query train schedules based on user-provided information",
            "parameters": {
                "type": "object",
                "properties": {
                    "departure": {
                        "type": "string",
                        "description": "Departure city or station",
                    },
                    "destination": {
                        "type": "string",
                        "description": "Destination city or station",
                    },
                    "date": {
                        "type": "string",
                        "description": "Date of the train to be queried",
                    },
                },
                "required": ["departure", "destination", "date"],
            },
        }
    }
]
messages = [
    {
        "role": "user",
        "content": "Can you help me check the train tickets from Beijing South Station to Shanghai on January 1, 2024?"
    }
]
response = client.chat.completions.create(
    model="glm-4",  # Please fill in the model name you want to call
    messages=messages,
    tools=tools,
    tool_choice="auto",
)
print(response)
```

**Response Example:**

```
{
    "id": "8231168139794583938",
    "model": "glm-4",
    "request_id": "8231168139794583938",
    "created": 1703490288,
    "choices": [
        {
            "finish_reason": "tool_calls",
            "index": 0,
            "message": {
                "role": "assistant",
                "tool_calls": [
                    {
                        "id": "call_8231168139794583938",
                        "index": 0,
                        "type": "function",
                        "function": {
                            "arguments": "{\"date\": \"2024-01-01\", \"departure\": \"Beijing South Station\", \"destination\": \"Shanghai\"}",
                            "name": "query_train_info"
                        }
                    }
                ]
            }
        }
    ],
    "usage": {
        "completion_tokens": 31,
        "prompt_tokens": 120,
        "total_tokens": 151
    }
}
```

Asynchronous Call
-----------------

**Interface Request**

| Type | Description |
| --- | --- |
| Transfer Method | HTTPS |
| Request URL | `https://open.bigmodel.cn/api/paas/v4/async/chat/completions` |
| Call Method | Asynchronous; results must be obtained through the query interface |
| Character Encoding | UTF-8 |
| Request Format | JSON |
| Response Format | JSON |
| HTTP Method | POST |
| Development Language | Any development language that can initiate HTTP requests |

### Request Parameters

Request parameters are the same as for synchronous API calls.

### Response Parameters

| Parameter Name | Type | Description |
| --- | --- | --- |
| request_id | String | Task ID submitted by the client when initiating the request, or generated by the platform. |
| id | String | Task ID generated by the ZhipuAI Open Platform, used when querying the result. |
| model | String | Model name called in the API request. |
| task_status | String | Request processing status: `PROCESSING`, `SUCCESS`, or `FAIL`. This status must be queried to determine the result. |

### Call Example

```
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="")  # Please fill in your own API key
response = client.chat.asyncCompletions.create(
    model="glm-4",  # Please fill in the model name you want to call
    messages=[
        {
            "role": "user",
            "content": "As the king of fairy tales, please write a short fairy tale with the theme of always keeping a kind heart. The story should inspire children's interest in learning and imagination, while helping them better understand and accept the moral and values contained in the story."
        }
    ],
)
print(response)
```

**Response Example**

```
id='123456789' request_id='654321' model='glm-4' task_status='PROCESSING'
```

Task Result Query
-----------------

**Interface Request**

| Type | Description |
| --- | --- |
| Transfer Method | HTTPS |
| Request URL | `https://open.bigmodel.cn/api/paas/v4/async-result/{id}` |
| Call Method | Synchronous call; waits for the model to fully execute and returns the final result |
| Character Encoding | UTF-8 |
| Request Format | JSON |
| Response Format | JSON |
| HTTP Method | GET |
| Development Language | Any development language that can initiate HTTP requests |

### Request Parameters

| Parameter Name | Type | Required | Description |
| --- | --- | --- | --- |
| id | String | Yes | Task ID |

### Response Parameters

| Parameter Name | Type | Description |
| --- | --- | --- |
| model | String | Model name. |
| choices | List | Model output for the current dialogue; currently only one result is returned. |
| index | Integer | Result index. |
| finish_reason | String | Reason for model inference termination. "stop" indicates a natural end or stop word trigger; "length" indicates the token length limit was reached. |
| message | Object | Text message returned by the model. |
| role | String | Current dialogue role; currently defaults to assistant (the model). |
| content | String | Current dialogue content. |
| tool_calls | List | Function names and arguments generated by the model to be called. |
| function | Object | Contains the function name and arguments generated by the model. |
| name | String | Function name generated by the model. |
| arguments | String | JSON-format function call arguments generated by the model. Note that the generated JSON may be invalid and may contain parameters not defined in the function schema; validate the arguments in your code before calling the function. |
| id | String | Unique identifier for the function hit. |
| type | String | Tool type called by the model; currently only 'function' is supported. |
| task_status | String | Processing status: PROCESSING, SUCCESS, or FAIL. |
| request_id | String | Task ID submitted by the client when initiating the request, or generated by the platform. |
| id | String | Task ID generated by the ZhipuAI Open Platform, used when querying the request result. |
| usage | Object | Token statistics for this model call. |
| prompt_tokens | int | Number of tokens in user input. |
| completion_tokens | int | Number of tokens in model output. |
| total_tokens | int | Total number of tokens. |

### Request Example

```
import time
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="")  # Please fill in your own API key
response = client.chat.asyncCompletions.create(
    model="glm-4",  # Please fill in the model name you want to call
    messages=[
        {
            "role": "user",
            "content": "As the king of fairy tales, please write a short fairy tale with the theme of always keeping a kind heart. The story should inspire children's interest in learning and imagination, while helping them better understand and accept the moral and values contained in the story."
        }
    ],
)
task_id = response.id
task_status = ''
get_cnt = 0
# Poll until the task succeeds or fails, up to 40 times.
while task_status != 'SUCCESS' and task_status != 'FAIL' and get_cnt <= 40:
    result_response = client.chat.asyncCompletions.retrieve_completion_result(id=task_id)
    print(result_response)
    task_status = result_response.task_status
    time.sleep(2)
    get_cnt += 1
```

**Response Example:**

```
{"id":"123456789","request_id":"123123123","model":null,"task_status":"PROCESSING"}
{"id":"123456789","request_id":"123123123","model":null,"task_status":"PROCESSING"}
...
{"id":"123456789","request_id":"123123123","model":null,"task_status":"PROCESSING"}
{
    "id": "123456789",
    "request_id": "123123123",
    "model": "glm-4",
    "task_status": "SUCCESS",
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {
                "content": "Once upon a time, there was a beautiful village where children loved to play, learn, and explore together. Among them was a little boy named Xiaoming, who had a kind heart and was always willing to help others. One day, Xiaoming found a little bird with a wounded wing in the forest, unable to fly. Xiaoming felt sorry for the bird and took it home, caring for it with warmth and love. Under Xiaoming's careful care, the bird's wing gradually healed, and it began to fly around the room. Seeing the bird's miraculous recovery, Xiaoming became very interested in birds and wanted to learn more about them. He started reading books about birds, studying their habits and lifestyles. Through his studies, Xiaoming gained a deep understanding of birds and formed a deep friendship with the bird. One day, while walking in the forest, Xiaoming found a little rabbit trapped in a hunter's trap. Without hesitation, Xiaoming rescued the rabbit. The rabbit looked at Xiaoming gratefully and told him about a mysterious treasure in the forest—a magic gem that could grant wishes. Curious, Xiaoming decided to find the gem. He set out on an adventure with the bird and the rabbit. Along the way, they encountered many challenges, but Xiaoming always kept a kind heart and bravely faced every difficulty. He not only learned how to get along with the animals in the forest but also acquired many survival skills. After some time, Xiaoming finally found the magic gem. The gem emitted a dazzling light, taking Xiaoming and his friends to a beautiful world. There, they met an intelligent old man who told Xiaoming that the power of the gem came from a kind heart. Only those with a kind heart could activate the gem's power and fulfill their wishes. Xiaoming understood this truth, thanked the old man, and returned to the real world with the gem. He used the gem's power to help others, making the village a better place. Xiaoming became a role model in the village. Through his actions, the children understood the importance of always keeping a kind heart. From then on, Xiaoming and the villagers lived happily together. The children who heard Xiaoming's story understood the importance of a kind heart. They took Xiaoming as an example and strived to become loving and responsible individuals. In this process, their interest in learning and imagination was also stimulated, growing into excellent children. This story tells us that always keeping a kind heart and using our actions to influence those around us can unlock our potential and achieve our dreams. Let's all strive to be people with kind hearts.",
                "role": "assistant",
                "tool_calls": null
            }
        }
    ],
    "usage": {
        "prompt_tokens": 52,
        "completion_tokens": 470,
        "total_tokens": 522
    }
}
```
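Since any language that can issue HTTPS requests can call the interface, the synchronous endpoint can also be reached without the SDK. Below is a minimal sketch using only the Python standard library; it assumes the v4 endpoints accept the API key as a Bearer token, and no request is sent until a key has been filled in:

```python
import json
import urllib.request

API_KEY = ""  # Please fill in your own API key

url = "https://open.bigmodel.cn/api/paas/v4/chat/completions"
payload = {
    "model": "glm-4",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.95,
    "max_tokens": 1024,
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        # Assumption: the API key is passed as a Bearer token.
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

if API_KEY:  # only send once a key has been filled in
    with urllib.request.urlopen(req, timeout=60) as resp:
        data = json.loads(resp.read().decode("utf-8"))
        print(data["choices"][0]["message"]["content"])
```

The same request shape works for the asynchronous endpoint by switching the URL to `https://open.bigmodel.cn/api/paas/v4/async/chat/completions`.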
