ClickUp Operator

The remainder of this file is a compiled-bytecode dump of `anthropic/resources/messages.py` from the project's virtual environment (`.venv/Lib/site-packages/anthropic/resources/messages.py`). The bytecode itself is not human-readable, but the embedded docstrings and symbol table are, and they describe the module's public surface:

- `Messages` and `AsyncMessages` — the synchronous and asynchronous Messages API resources, each exposing `create(...)` and `stream(...)` plus `with_raw_response` and `with_streaming_response` accessors.
- `MessagesWithRawResponse`, `AsyncMessagesWithRawResponse`, `MessagesWithStreamingResponse`, `AsyncMessagesWithStreamingResponse` — thin wrappers that adapt `create` for raw and streamed response handling.

Per the embedded docstrings, `create` sends a structured list of input messages to `/v1/messages` and the model generates the next message in the conversation; it can be used for single queries or stateless multi-turn conversations. Its parameters:

- `max_tokens` — the maximum number of tokens to generate before stopping. Models may stop before reaching this maximum; it only specifies an absolute cap, and different models support different maximum values.
- `messages` — input messages. Models are trained to operate on alternating `user` and `assistant` conversational turns. Each message is an object with a `role` and `content`; the first message must use the `user` role, and if the final message uses the `assistant` role the response continues immediately from that content (useful for constraining part of the model's response). Example with multiple conversational turns:

  ```json
  [
    { "role": "user", "content": "Hello there." },
    { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
    { "role": "user", "content": "Can you explain LLMs in plain English?" }
  ]
  ```

  `content` may be a single string — shorthand for an array containing one content block of type `"text"` — or an array of content blocks. Starting with Claude 3 models, `"image"` blocks are also accepted, with a `base64` source in `image/jpeg`, `image/png`, `image/gif`, or `image/webp` media types. There is no `"system"` role; use the top-level `system` parameter instead.
- `model` — the model that will complete the prompt. The embedded type literals include `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307`, `claude-2.1`, `claude-2.0`, and `claude-instant-1.2`.
- `metadata` — an object describing metadata about the request.
- `stop_sequences` — custom text sequences that cause the model to stop generating. A natural end of turn yields `stop_reason` `"end_turn"`; a matched custom sequence yields `stop_reason` `"stop_sequence"` with the matched text in the response's `stop_sequence` field.
- `stream` — whether to incrementally stream the response using server-sent events.
- `system` — a system prompt providing context and instructions to Claude, such as a particular goal or role.
- `temperature` — amount of randomness injected into the response. Defaults to `1.0`, ranges `0.0`–`1.0`; use values closer to `0.0` for analytical / multiple-choice tasks and closer to `1.0` for creative tasks. Even at `0.0`, results are not fully deterministic.
- `top_k` / `top_p` — advanced sampling controls (top-K filtering and nucleus sampling). Alter `temperature` or `top_p`, but not both; most callers only need `temperature`.
- `extra_headers`, `extra_query`, `extra_body`, `timeout` — per-request transport overrides.

The `stream` helper issues the same request with `stream` forced to `True`, adds `X-Stainless-Stream-Helper: messages` (and `X-Stainless-Custom-Event-Handler` when a custom `event_handler` class is passed), and returns a `MessageStreamManager` (or `AsyncMessageStreamManager`) yielding `MessageStreamEvent` objects.
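The request shape described by the embedded docstrings can be sketched offline. The helpers below (`normalize_content`, `build_payload`) are illustrative, not SDK APIs: they mirror the documented rules — alternating roles starting with `user`, string `content` as shorthand for one `"text"` block, and optional parameters omitted from the body when not given (the SDK's `NOT_GIVEN` handling).

```python
# Offline sketch of the JSON body that Messages.create() POSTs to /v1/messages.
# build_payload and normalize_content are hypothetical helpers for illustration.

def normalize_content(content):
    """Expand the string shorthand into a list with one text content block."""
    if isinstance(content, str):
        return [{"type": "text", "text": content}]
    return content

def build_payload(model, max_tokens, messages, **options):
    if not messages:
        raise ValueError("messages must be non-empty")
    if messages[0]["role"] != "user":
        raise ValueError('the first message must use the "user" role')
    # Turns must alternate between "user" and "assistant".
    for prev, cur in zip(messages, messages[1:]):
        if prev["role"] == cur["role"]:
            raise ValueError("user and assistant turns must alternate")
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {"role": m["role"], "content": normalize_content(m["content"])}
            for m in messages
        ],
    }
    # Optional parameters (system, temperature, stop_sequences, ...) are
    # included only when actually provided.
    body.update({k: v for k, v in options.items() if v is not None})
    return body

payload = build_payload(
    "claude-3-haiku-20240307",
    1024,
    [{"role": "user", "content": "Hello, Claude"}],
    system="Respond concisely.",
    temperature=None,  # treated as not given; omitted from the body
)
```

A real call would pass the equivalent arguments to `client.messages.create(...)`; the sketch only shows how the documented shorthand and validation rules compose into the wire format.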