
shivonai-mcp

by shivonai

ShivonAI

A Python package for integrating AI recruitment tools with various AI agent frameworks.

Features

  • Access custom recruitment tools for AI agents
  • Integrate MCP tools with popular AI agent frameworks:
    • LangChain
    • LlamaIndex
    • CrewAI
    • Agno

Generating an auth_token

To generate an auth_token, visit https://shivonai.com.
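The examples below hard-code the token for brevity. A minimal sketch of reading it from an environment variable instead, so it stays out of source control (the `SHIVONAI_AUTH_TOKEN` variable name and the `get_auth_token` helper are our assumptions, not part of the package):

```python
import os


def get_auth_token(env_var: str = "SHIVONAI_AUTH_TOKEN") -> str:
    """Read the ShivonAI auth token from the environment.

    Raises a RuntimeError with a clear message if the variable is unset,
    instead of letting a None token fail deep inside a toolkit call.
    """
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"Set {env_var} before running the examples below.")
    return token
```

Any of the `*_toolkit(...)` calls shown below could then take `get_auth_token()` in place of a literal string.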

Installation

```shell
pip install shivonai[langchain]   # For LangChain
pip install shivonai[llamaindex]  # For LlamaIndex
pip install shivonai[crewai]      # For CrewAI
pip install shivonai[agno]        # For Agno
pip install shivonai[all]         # For all frameworks
```

Getting Started

LangChain integration

```python
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from shivonai.lyra import langchain_toolkit

# Replace with your actual MCP server details
auth_token = "shivonai_auth_token"

# Get LangChain tools
tools = langchain_toolkit(auth_token)

# Print available tools
print(f"Available tools: {[tool.name for tool in tools]}")

# Initialize a LangChain agent with the tools
llm = ChatOpenAI(
    temperature=0,
    model_name="gpt-4-turbo",
    openai_api_key="openai-api-key"
)

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Try running the agent with a simple task
try:
    result = agent.run("What listings do I have?")
    print(f"Result: {result}")
except Exception as e:
    print(f"Error: {e}")
```

LlamaIndex integration

```python
import os

from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent
from shivonai.lyra import llamaindex_toolkit

# Set up your OpenAI API key - you'll need this to use OpenAI models with LlamaIndex
os.environ["OPENAI_API_KEY"] = "openai_api_key"

# Your MCP server authentication details
MCP_AUTH_TOKEN = "shivonai_auth_token"


def main():
    """Test the LlamaIndex integration with ShivonAI."""
    print("Testing LlamaIndex integration with ShivonAI...")

    # Get LlamaIndex tools from your MCP server
    tools = llamaindex_toolkit(MCP_AUTH_TOKEN)
    print(f"Found {len(tools)} MCP tools for LlamaIndex:")
    for name, tool in tools.items():
        print(f"  - {name}: {tool.metadata.description[:60]}...")

    # Create a LlamaIndex agent with these tools
    llm = OpenAI(model="gpt-4")

    # Convert the tools dictionary to a list
    tool_list = list(tools.values())

    # Create the ReAct agent
    agent = ReActAgent.from_tools(
        tools=tool_list,
        llm=llm,
        verbose=True
    )

    # Test the agent with a simple query that should use one of your tools
    # Replace this with a query that's relevant to your tools
    query = "What listings do I have?"
    print("\nTesting agent with query:", query)
    response = agent.chat(query)
    print("\nAgent response:")
    print(response)


if __name__ == "__main__":
    main()
```

CrewAI integration

```python
import os

from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI  # or any other LLM you prefer
from shivonai.lyra import crew_toolkit

os.environ["OPENAI_API_KEY"] = "openai_api_key"

llm = ChatOpenAI(temperature=0.7, model="gpt-4")

# Get CrewAI tools
tools = crew_toolkit("shivonai_auth_token")

# Print available tools
print(f"Available tools: {[tool.name for tool in tools]}")

# Create an agent with these tools
agent = Agent(
    role="Data Analyst",
    goal="Analyze data using custom tools",
    backstory="You're an expert data analyst with access to custom tools",
    tools=tools,
    llm=llm  # Provide the LLM here
)

# Create a task - note the expected_output field
task = Task(
    description="What listings do I have?",
    expected_output="A detailed report with key insights and recommendations",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)
```

Agno integration

```python
import os

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from shivonai.lyra import agno_toolkit

# Replace with your actual MCP server details
auth_token = "shivonai_auth_token"
os.environ["OPENAI_API_KEY"] = "openai_api_key"

# Get Agno tools
tools = agno_toolkit(auth_token)

# Print available tools
print(f"Available MCP tools: {list(tools.keys())}")

# Create an Agno agent with the tools
agent = Agent(
    model=OpenAIChat(id="gpt-3.5-turbo"),
    tools=list(tools.values()),
    markdown=True,
    show_tool_calls=True
)

# Try the agent with a simple task
try:
    agent.print_response("What listings are there?", stream=True)
except Exception as e:
    print(f"Error: {e}")
```

License

This project is licensed under a proprietary license. See the LICENSE file for details.


remote-capable server

The server can be hosted and run remotely because it primarily relies on remote services or has no dependency on the local environment.

Our MCP tools are designed to enhance AI-powered automated interview services by enabling a seamless, context-aware candidate evaluation process. These tools leverage advanced AI models to analyze responses, assess competencies, and provide real-time feedback.


