Glama

Browser Use MCP Server

models.cpython-311.pyc (13 kB)
Source of browser_use_mcp/models.py as recovered from the compiled bytecode; values the bytecode does not preserve are flagged as assumptions in comments:

"""Module containing LLM provider selection and initialization logic."""

import logging
import os
from typing import Optional

from langchain.chat_models.base import BaseChatModel

logger = logging.getLogger(__name__)


def get_llm(model_name: Optional[str] = None) -> BaseChatModel:
    """Initialize and return a LangChain chat model based on available API keys.

    The function checks for various API keys in the environment and initializes
    the appropriate model if the key is found. Only one model will be
    initialized based on priority. All models in this function support both
    tool calling and structured output.

    Args:
        model_name: Optional model name to override the default model for the
            provider. Examples: "gpt-4", "claude-3-haiku-20240307",
            "gemini-1.0-pro", etc.

    Returns:
        BaseChatModel: An instance of a LangChain chat model
    """
    if os.environ.get("OPENAI_API_KEY"):
        from langchain_openai import ChatOpenAI

        use_model = model_name if model_name else "gpt-4o"
        logger.info(f"Using OpenAI {use_model}")
        return ChatOpenAI(model=use_model)

    elif os.environ.get("ANTHROPIC_API_KEY"):
        from langchain_anthropic import ChatAnthropic

        use_model = model_name if model_name else "claude-3-opus-20240229"
        logger.info(f"Using Anthropic {use_model}")
        return ChatAnthropic(model=use_model)

    elif os.environ.get("GOOGLE_API_KEY"):
        from langchain_google_genai import ChatGoogleGenerativeAI

        use_model = model_name if model_name else "gemini-1.5-pro"
        logger.info(f"Using Google {use_model}")
        return ChatGoogleGenerativeAI(model=use_model)

    elif os.environ.get("COHERE_API_KEY"):
        from langchain_cohere import ChatCohere

        use_model = model_name if model_name else "command-r-plus"
        logger.info(f"Using Cohere {use_model}")
        return ChatCohere(model=use_model)

    elif os.environ.get("MISTRAL_API_KEY"):
        from langchain_mistralai import ChatMistralAI

        use_model = model_name if model_name else "mistral-large-latest"
        logger.info(f"Using Mistral {use_model}")
        return ChatMistralAI(model=use_model)

    elif os.environ.get("GROQ_API_KEY"):
        from langchain_groq import ChatGroq

        use_model = model_name if model_name else "llama3-70b-8192"
        logger.info(f"Using Groq {use_model}")
        return ChatGroq(model=use_model)

    elif os.environ.get("TOGETHER_API_KEY"):
        from langchain_together import ChatTogether

        use_model = model_name if model_name else "meta-llama/Llama-3-70b-chat"
        logger.info(f"Using Together AI {use_model}")
        return ChatTogether(model=use_model)

    elif os.environ.get("AWS_ACCESS_KEY_ID") and os.environ.get("AWS_SECRET_ACCESS_KEY"):
        from langchain_aws import ChatBedrock

        use_model = model_name if model_name else "anthropic.claude-3-sonnet-20240229"
        logger.info(f"Using AWS Bedrock {use_model}")
        return ChatBedrock(model_id=use_model)

    elif os.environ.get("FIREWORKS_API_KEY"):
        from langchain_fireworks import ChatFireworks

        use_model = model_name if model_name else "accounts/fireworks/models/llama-v3-70b-chat"
        logger.info(f"Using Fireworks {use_model}")
        return ChatFireworks(model=use_model)

    elif os.environ.get("AZURE_OPENAI_API_KEY") and os.environ.get("AZURE_OPENAI_ENDPOINT"):
        from langchain_openai import AzureChatOpenAI

        use_model = model_name if model_name else os.environ.get(
            "AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4"
        )
        logger.info(f"Using Azure OpenAI {use_model}")
        return AzureChatOpenAI(
            azure_deployment=use_model,
            openai_api_version=os.environ.get("AZURE_OPENAI_API_VERSION", "2023-05-15"),
        )

    elif os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"):
        try:
            from langchain_google_vertexai import ChatVertexAI

            # Default model name is not preserved in the bytecode; "gemini-1.5-pro" is assumed.
            use_model = model_name if model_name else "gemini-1.5-pro"
            logger.info(f"Using Google Vertex AI {use_model}")
            return ChatVertexAI(model=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize Google Vertex AI: {e}")
            return None

    elif os.environ.get("NVIDIA_API_KEY"):
        try:
            from langchain_nvidia_ai_endpoints import ChatNVIDIA

            use_model = model_name if model_name else "meta/llama3-70b-instruct"
            logger.info(f"Using NVIDIA AI {use_model}")
            return ChatNVIDIA(model=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize NVIDIA AI: {e}")
            return None

    elif os.environ.get("AI21_API_KEY"):
        try:
            from langchain_ai21 import ChatAI21

            use_model = model_name if model_name else "j2-ultra"
            logger.info(f"Using AI21 {use_model}")
            return ChatAI21(model=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize AI21: {e}")
            return None

    elif os.environ.get("DATABRICKS_HOST") and os.environ.get("DATABRICKS_TOKEN"):
        try:
            from langchain_databricks import ChatDatabricks

            use_model = model_name if model_name else "databricks-llama-3-70b"
            logger.info(f"Using Databricks {use_model}")
            return ChatDatabricks(endpoint=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize Databricks: {e}")
            return None

    elif os.environ.get("WATSONX_API_KEY"):
        try:
            from langchain_ibm import ChatWatsonx

            use_model = model_name if model_name else "meta-llama/llama-3-70b-instruct"
            logger.info(f"Using IBM Watsonx {use_model}")
            return ChatWatsonx(model_id=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize IBM Watsonx: {e}")
            return None

    elif os.environ.get("XAI_API_KEY"):
        try:
            from langchain_xai import ChatXAI

            use_model = model_name if model_name else "grok-1"
            logger.info(f"Using xAI {use_model}")
            return ChatXAI(model=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize xAI: {e}")
            return None

    elif os.environ.get("UPSTAGE_API_KEY"):
        try:
            from langchain_upstage import ChatUpstage

            use_model = model_name if model_name else "solar-1-mini-chat"
            logger.info(f"Using Upstage {use_model}")
            return ChatUpstage(model=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize Upstage: {e}")
            return None

    elif os.environ.get("HUGGINGFACEHUB_API_TOKEN"):
        from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

        use_model = model_name if model_name else "meta-llama/Llama-3-8b-chat-hf"
        logger.info(f"Using Hugging Face {use_model}")
        llm = HuggingFaceEndpoint(
            repo_id=use_model,
            task="text-generation",
            max_new_tokens=512,  # original value not preserved in the bytecode; 512 is an assumption
        )
        return ChatHuggingFace(llm=llm)

    elif (
        os.environ.get("OLLAMA_HOST")
        or os.path.exists("/usr/local/bin/ollama")
        or os.path.exists("/usr/bin/ollama")
    ):
        try:
            from langchain_ollama import ChatOllama

            use_model = model_name if model_name else "llama3"
            logger.info(f"Using Ollama {use_model}")
            return ChatOllama(model=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize Ollama: {e}")
            return None

    elif os.environ.get("LLAMA_CPP_MODEL_PATH"):
        try:
            from langchain_community.chat_models import ChatLlamaCpp

            use_model = model_name if model_name else os.environ.get("LLAMA_CPP_MODEL_PATH")
            logger.info(f"Using Llama.cpp with model at {use_model}")
            return ChatLlamaCpp(model_path=use_model)
        except Exception as e:
            logger.warning(f"Failed to initialize Llama.cpp: {e}")
            return None

    raise ValueError(
        "No API keys found. Please set one of the following environment variables:\n"
        "- OPENAI_API_KEY\n"
        "- ANTHROPIC_API_KEY\n"
        "- GOOGLE_API_KEY\n"
        "- COHERE_API_KEY\n"
        "- MISTRAL_API_KEY\n"
        "- GROQ_API_KEY\n"
        "- TOGETHER_API_KEY\n"
        "- AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY\n"
        "- FIREWORKS_API_KEY\n"
        "- AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT\n"
        "- GOOGLE_APPLICATION_CREDENTIALS (for Vertex AI)\n"
        "- NVIDIA_API_KEY\n"
        "- AI21_API_KEY\n"
        "- DATABRICKS_HOST and DATABRICKS_TOKEN\n"
        "- WATSONX_API_KEY\n"
        "- XAI_API_KEY\n"
        "- UPSTAGE_API_KEY\n"
        "- HUGGINGFACEHUB_API_TOKEN\n"
        "- OLLAMA_HOST (for local models)\n"
        "- LLAMA_CPP_MODEL_PATH (for local models)"
    )

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pietrozullo/browser-use-mcp'
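The same lookup can be scripted; below is a minimal Python sketch using only the standard library. It prints the raw JSON response rather than assuming any particular response schema:

import json
import urllib.request

# MCP directory entry for this server (same endpoint as the curl example above).
URL = "https://glama.ai/api/mcp/v1/servers/pietrozullo/browser-use-mcp"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))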

If you have feedback or need assistance with the MCP directory API, please join our Discord server.