
shivonai-mcp

by shivonai

ShivonAI

A Python package for integrating AI recruitment tools with popular AI agent frameworks.

Features

  • Access custom recruitment tools built for AI agents
  • Integrate MCP tools with popular AI agent frameworks:
    • LangChain
    • LlamaIndex
    • CrewAI
    • Agno

Generate an auth_token

Visit https://shivonai.com to generate your auth_token.
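The examples below hardcode the token for brevity. In practice you may prefer to read it from the environment; here is a minimal sketch, assuming you store it in a variable named `SHIVONAI_AUTH_TOKEN` (the variable name and helper function are illustrative, not part of the package):

```python
import os


def load_auth_token(var_name: str = "SHIVONAI_AUTH_TOKEN") -> str:
    """Read the ShivonAI auth token from an environment variable."""
    token = os.environ.get(var_name)
    if not token:
        raise RuntimeError(
            f"Set {var_name} to the auth_token generated at https://shivonai.com"
        )
    return token
```

Then pass `load_auth_token()` wherever the examples below use a literal `"shivonai_auth_token"` string.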

Installation

```shell
pip install shivonai[langchain]   # For LangChain
pip install shivonai[llamaindex]  # For LlamaIndex
pip install shivonai[crewai]      # For CrewAI
pip install shivonai[agno]        # For Agno
pip install shivonai[all]         # For all frameworks
```

Getting Started

LangChain Integration

```python
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType

from shivonai.lyra import langchain_toolkit

# Replace with your actual MCP server details
auth_token = "shivonai_auth_token"

# Get LangChain tools
tools = langchain_toolkit(auth_token)

# Print available tools
print(f"Available tools: {[tool.name for tool in tools]}")

# Initialize a LangChain agent with the tools
llm = ChatOpenAI(
    temperature=0,
    model_name="gpt-4-turbo",
    openai_api_key="openai-api-key"
)

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Try running the agent with a simple task
try:
    result = agent.run("what listings do I have?")
    print(f"Result: {result}")
except Exception as e:
    print(f"Error: {e}")
```

LlamaIndex Integration

```python
import os

from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent

from shivonai.lyra import llamaindex_toolkit

# Set up your OpenAI API key - you'll need this to use OpenAI models with LlamaIndex
os.environ["OPENAI_API_KEY"] = "openai_api_key"

# Your MCP server authentication details
MCP_AUTH_TOKEN = "shivonai_auth_token"


def main():
    """Test LlamaIndex integration with ShivonAI."""
    print("Testing LlamaIndex integration with ShivonAI...")

    # Get LlamaIndex tools from your MCP server
    tools = llamaindex_toolkit(MCP_AUTH_TOKEN)
    print(f"Found {len(tools)} MCP tools for LlamaIndex:")
    for name, tool in tools.items():
        print(f"  - {name}: {tool.metadata.description[:60]}...")

    # Create a LlamaIndex agent with these tools
    llm = OpenAI(model="gpt-4")

    # Convert the tools dictionary to a list
    tool_list = list(tools.values())

    # Create the ReAct agent
    agent = ReActAgent.from_tools(
        tools=tool_list,
        llm=llm,
        verbose=True
    )

    # Test the agent with a simple query that should use one of your tools.
    # Replace this with a query that's relevant to your tools.
    query = "what listings do I have?"
    print("\nTesting agent with query:", query)
    response = agent.chat(query)

    print("\nAgent response:")
    print(response)


if __name__ == "__main__":
    main()
```

CrewAI Integration

```python
import os

from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI  # or any other LLM you prefer

from shivonai.lyra import crew_toolkit

os.environ["OPENAI_API_KEY"] = "openai_api_key"

llm = ChatOpenAI(temperature=0.7, model="gpt-4")

# Get CrewAI tools
tools = crew_toolkit("shivonai_auth_token")

# Print available tools
print(f"Available tools: {[tool.name for tool in tools]}")

# Create an agent with these tools
agent = Agent(
    role="Data Analyst",
    goal="Analyze data using custom tools",
    backstory="You're an expert data analyst with access to custom tools",
    tools=tools,
    llm=llm  # Provide the LLM here
)

# Create a task - note the expected_output field
task = Task(
    description="what listings do I have?",
    expected_output="A detailed report with key insights and recommendations",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])

result = crew.kickoff()
print(result)
```

Agno Integration

```python
import os

from agno.agent import Agent
from agno.models.openai import OpenAIChat

from shivonai.lyra import agno_toolkit

# Replace with your actual MCP server details
auth_token = "shivonai_auth_token"
os.environ["OPENAI_API_KEY"] = "openai_api_key"

# Get Agno tools
tools = agno_toolkit(auth_token)

# Print available tools
print(f"Available MCP tools: {list(tools.keys())}")

# Create an Agno agent with the tools
agent = Agent(
    model=OpenAIChat(id="gpt-3.5-turbo"),
    tools=list(tools.values()),
    markdown=True,
    show_tool_calls=True
)

# Try the agent with a simple task
try:
    agent.print_response("what listings are there?", stream=True)
except Exception as e:
    print(f"Error: {e}")
```

License

This project is under a proprietary license - see the LICENSE file for details.


Our MCP tools are designed to enhance AI-driven automated interview services, keeping the candidate assessment process seamless and closely aligned with real-world requirements. These tools leverage advanced AI models to analyze responses, evaluate competencies, and provide real-time feedback.
