
ClickUp Operator

by noah-vh
caching.cpython-312.pyc (37.8 kB)
(Binary preview omitted: this file is the compiled bytecode of fsspec's caching module. Its recoverable docstrings describe the cache implementations it registers: BaseCache, a pass-through cache and base class; MMapCache, a memory-mapped sparse temporary file; ReadAheadCache, simple sequential read-ahead; FirstChunkCache, which caches only the first block for header-heavy formats; BlockCache, an LRU cache of fixed-size blocks; BytesCache, an in-memory bytes buffer with read-ahead; AllBytes, which holds the entire file; KnownPartsOfAFile, for pre-known byte ranges; UpdatableLRU, an updatable LRU helper used by BackgroundBlockCache; and BackgroundBlockCache, a block cache that pre-fetches the next block in a background thread. A register_cache() helper adds each class to the module-level caches registry keyed by name.)

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/noah-vh/mcp-server-clickup'
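
The same lookup can be scripted. A minimal sketch using Python's requests library; the field names printed at the end are assumptions for illustration, not a documented response schema:

import requests

# Fetch metadata for this server from the Glama MCP directory API.
url = "https://glama.ai/api/mcp/v1/servers/noah-vh/mcp-server-clickup"
response = requests.get(url, timeout=10)
response.raise_for_status()

server = response.json()
# Inspect the JSON to see the actual schema; these keys are illustrative.
print(server.get("name"), server.get("description"))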

If you have feedback or need assistance with the MCP directory API, please join our Discord server.