shivonai-mcp

Integrations

  • Provides custom hiring tools for CrewAI agents, facilitating recruitment-related tasks and data analysis within CrewAI workflows

  • Enables AI agents to use custom hiring tools through LangChain, allowing integration of recruitment capabilities with LangChain agents

  • Integrates with OpenAI models like GPT-4 to power the AI agent capabilities for recruitment tools and data analysis tasks

ShivonAI

A Python package for integrating AI recruitment tools with various AI agent frameworks.

Features

  • Access custom recruitment tools for AI agents
  • Integrate MCP tools with popular AI agent frameworks (a quick import overview follows this list):
    • LangChain
    • LlamaIndex
    • CrewAI
    • Agno
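As a quick orientation, here is a minimal sketch that only mirrors the integration examples further down: each supported framework has a matching helper in shivonai.lyra that turns your auth_token into framework-native tool objects.

from shivonai.lyra import (
    langchain_toolkit,   # tools for LangChain agents
    llamaindex_toolkit,  # tools for LlamaIndex agents
    crew_toolkit,        # tools for CrewAI agents
    agno_toolkit,        # tools for Agno agents
)

# Each helper takes the auth_token generated at https://shivonai.com
# and returns tools ready to hand to that framework's agent.
tools = langchain_toolkit("shivonai_auth_token")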

Generate auth_token

Visit https://shivonai.com to generate your auth_token.
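The examples below pass the token as a plain string. As a minimal sketch (the SHIVONAI_AUTH_TOKEN variable name is illustrative, not part of the package), you can keep the token out of source code by reading it from an environment variable:

import os

# Hypothetical variable name; export SHIVONAI_AUTH_TOKEN before running your script
auth_token = os.environ["SHIVONAI_AUTH_TOKEN"]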

Installation

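Assuming the distribution is published on PyPI (the import path used throughout the examples is shivonai.lyra), the package can be installed with pip:

pip install shivonai  # assumed PyPI name; use the actual distribution name (e.g. shivonai-mcp) if it differs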

Getting Started

LangChain Integration

from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from shivonai.lyra import langchain_toolkit

# Replace with your actual MCP server details
auth_token = "shivonai_auth_token"

# Get LangChain tools
tools = langchain_toolkit(auth_token)

# Print available tools
print(f"Available tools: {[tool.name for tool in tools]}")

# Initialize LangChain agent with tools
llm = ChatOpenAI(
    temperature=0,
    model_name="gpt-4-turbo",
    openai_api_key="openai-api-key"
)

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Try running the agent with a simple task
try:
    result = agent.run("What listings do I have?")
    print(f"Result: {result}")
except Exception as e:
    print(f"Error: {e}")

LlamaIndex Integration

import os

from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent
from shivonai.lyra import llamaindex_toolkit

# Set up OpenAI API key - you'll need this to use OpenAI models with LlamaIndex
os.environ["OPENAI_API_KEY"] = "openai_api_key"

# Your MCP server authentication details
MCP_AUTH_TOKEN = "shivonai_auth_token"


def main():
    """Test LlamaIndex integration with ShivonAI."""
    print("Testing LlamaIndex integration with ShivonAI...")

    # Get LlamaIndex tools from your MCP server
    tools = llamaindex_toolkit(MCP_AUTH_TOKEN)
    print(f"Found {len(tools)} MCP tools for LlamaIndex:")
    for name, tool in tools.items():
        print(f"  - {name}: {tool.metadata.description[:60]}...")

    # Create a LlamaIndex agent with these tools
    llm = OpenAI(model="gpt-4")

    # Convert tools dictionary to a list
    tool_list = list(tools.values())

    # Create the ReAct agent
    agent = ReActAgent.from_tools(
        tools=tool_list,
        llm=llm,
        verbose=True
    )

    # Test the agent with a simple query that should use one of your tools
    # Replace this with a query that's relevant to your tools
    query = "What listings do I have?"
    print("\nTesting agent with query:", query)
    response = agent.chat(query)

    print("\nAgent response:")
    print(response)


if __name__ == "__main__":
    main()

CrewAI Integration

import os

from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI  # or any other LLM you prefer
from shivonai.lyra import crew_toolkit

os.environ["OPENAI_API_KEY"] = "openai_api_key"

llm = ChatOpenAI(temperature=0.7, model="gpt-4")

# Get CrewAI tools
tools = crew_toolkit("shivonai_auth_token")

# Print available tools
print(f"Available tools: {[tool.name for tool in tools]}")

# Create an agent with these tools
agent = Agent(
    role="Data Analyst",
    goal="Analyze data using custom tools",
    backstory="You're an expert data analyst with access to custom tools",
    tools=tools,
    llm=llm  # Provide the LLM here
)

# Create a task - note the expected_output field
task = Task(
    description="What listings do I have?",
    expected_output="A detailed report with key insights and recommendations",
    agent=agent
)

crew = Crew(
    agents=[agent],
    tasks=[task]
)

result = crew.kickoff()
print(result)

Agno Integration

import os

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from shivonai.lyra import agno_toolkit

# Replace with your actual MCP server details
auth_token = "shivonai_auth_token"
os.environ["OPENAI_API_KEY"] = "openai_api_key"

# Get Agno tools
tools = agno_toolkit(auth_token)

# Print available tools
print(f"Available MCP tools: {list(tools.keys())}")

# Create an Agno agent with tools
agent = Agent(
    model=OpenAIChat(id="gpt-3.5-turbo"),
    tools=list(tools.values()),
    markdown=True,
    show_tool_calls=True
)

# Try the agent with a simple task
try:
    agent.print_response("What listings are there?", stream=True)
except Exception as e:
    print(f"Error: {e}")

License

This project is licensed under a proprietary license. See the LICENSE file for details.


Our MCP tools are designed to enhance AI-powered automated interview services, ensuring a smooth, context-aware candidate evaluation process. They leverage advanced AI models to analyze responses, assess competencies, and provide real-time feedback.
