ClickUp Operator

The section below was extracted from compiled Python bytecode (`__pycache__` residue) of the Anthropic Python SDK module `anthropic/resources/completions.py`, vendored in the project's `.venv`. The binary payload is unrecoverable; what follows summarizes the information recoverable from its embedded symbols and docstrings.

The module exports `Completions` and `AsyncCompletions` (plus `CompletionsWithRawResponse`, `AsyncCompletionsWithRawResponse`, `CompletionsWithStreamingResponse`, and `AsyncCompletionsWithStreamingResponse` wrappers). Both resources expose a `create()` method that POSTs to `/v1/complete`.

[Legacy] Create a Text Completion. The Text Completions API is a legacy API; Anthropic recommends using the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) going forward. Future models and features will not be compatible with Text Completions. See the [migration guide](https://docs.anthropic.com/claude/reference/migrating-from-text-completions-to-messages) for guidance on migrating from Text Completions to Messages.

Parameters of `create()`:

- `max_tokens_to_sample`: The maximum number of tokens to generate before stopping. Models may stop _before_ reaching this maximum; it specifies only the absolute maximum number of tokens to generate.
- `model`: The model that will complete the prompt (e.g. `claude-2.0`, `claude-2.1`, `claude-instant-1.2`). See [models](https://docs.anthropic.com/claude/docs/models-overview) for additional details and options.
- `prompt`: The prompt you want Claude to complete, formatted with alternating `\n\nHuman:` and `\n\nAssistant:` conversational turns, for example `"\n\nHuman: {userQuestion}\n\nAssistant:"`. See [prompt validation](https://anthropic.readme.io/claude/reference/prompt-validation) and the guide to [prompt design](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design) for more details.
- `metadata`: An object describing metadata about the request.
- `stop_sequences`: Additional strings that will cause the model to stop generating. Models stop on `"\n\nHuman:"` and may include additional built-in stop sequences in the future.
- `stream`: Whether to incrementally stream the response using server-sent events. See [streaming](https://docs.anthropic.com/claude/reference/text-completions-streaming) for details.
- `temperature`: Amount of randomness injected into the response. Defaults to `1.0`; ranges from `0.0` to `1.0`. Use values closer to `0.0` for analytical / multiple-choice tasks and closer to `1.0` for creative and generative tasks. Even with a `temperature` of `0.0`, results will not be fully deterministic.
- `top_k`: Only sample from the top K options for each subsequent token, removing "long tail" low-probability responses. Recommended for advanced use cases only; you usually only need `temperature`.
- `top_p`: Nucleus sampling. The cumulative probability distribution over options for each subsequent token is computed in decreasing probability order and cut off once it reaches `top_p`. Alter either `temperature` or `top_p`, but not both. Recommended for advanced use cases only.
- `extra_headers` / `extra_query` / `extra_body`: Extra headers, additional query parameters, or additional JSON properties to send with the request.
- `timeout`: Override the client-level default timeout for this request, in seconds.
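As a minimal sketch of how the docstrings above intend the API to be used: the helper below builds a prompt in the required alternating-turn format, and the commented-out call shows roughly what invoking `create()` would look like with the SDK installed and an API key configured (the call itself is an illustration, not executed here).

```python
def build_prompt(user_question: str) -> str:
    # Legacy Text Completions prompts must use alternating
    # "\n\nHuman:" and "\n\nAssistant:" conversational turns,
    # ending with "\n\nAssistant:" so the model completes that turn.
    return f"\n\nHuman: {user_question}\n\nAssistant:"

prompt = build_prompt("Why is the sky blue?")

# With the anthropic package installed and ANTHROPIC_API_KEY set,
# the request would look roughly like this:
#
# import anthropic
# client = anthropic.Anthropic()
# completion = client.completions.create(
#     model="claude-2.1",
#     max_tokens_to_sample=256,
#     prompt=prompt,
# )
# print(completion.completion)
```

Setting `stream=True` on the same call would instead return an iterable of completion events, per the streaming docs referenced above.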