Glama

searchInteractions

Identify and retrieve specific interactions from emails, calendars, social platforms, and messaging apps. Use for queries like "Who did I meet most?" or to find recently added contacts, best friends by relevance, or interactions with specific job titles, companies, or locations.

Instructions

Search for interactions and return matching interactions. Use for questions about specific interactions, "who" questions (e.g. "Who did I meet most?"), finding best friends based on relevance score, or finding recently added/created contacts. Returns actual contact records for queries needing specific interactions.

Input Schema

company_name (optional): If the query refers to a company or an acronym of a company, list company names as they would appear on a LinkedIn profile.

exclude_contact_ids (optional): Used to exclude previously returned contact IDs when the user asks for more results (e.g. "who else" or "show me more"). Pass all contact IDs from previous searchContacts responses to ensure new results are shown.

job_title (optional): If the query refers to a job title, position, or industry, list relevant job titles as they would appear on a LinkedIn profile. Examples: "Developer" should return positions such as 'Software Engineer', 'Full Stack Developer', and 'Data Scientist'; "Banker" should return 'Financial Analyst', 'Investment Banker', and 'Credit Analyst'; the healthcare industry should return 'Registered Nurse', 'Physician', and 'Medical Director'; the legal industry should return 'Attorney', 'Legal Counsel', and 'Paralegal'.

keywords (optional): Specific keywords related to professional expertise, skills, interests, or hobbies that the user is searching for. For example, 'people who know about machine learning or play tennis' yields the keywords ['machine learning', 'tennis']. Do not include job titles or company names here, as those have dedicated fields; focus on domain expertise, technical skills, personal interests, and hobby-related terms that help identify relevant contacts.

limit (optional): The number of contacts to return if the user asks for an amount.

location (optional): If the query refers to a location (city, state, country, region) where people are located or based, list the locations as they would appear on a LinkedIn profile. For example, "people in New York" should return "New York City Metropolitan Area", and "contacts in California" should return "San Francisco Bay Area", "Greater Los Angeles Area", etc.

query (required): The raw search query from the user. Must preserve exact intent and details to enable accurate searching, including relationship qualifiers, interaction metrics, relationship strength, names, companies, locations, dates (specific dates, date ranges, or relative dates like "last week" are required if mentioned by the user), job titles, skills, and logical conditions (OR/AND).

sort_instructions (optional): How the results should be sorted. For example, "most recent contacts" sorts by last interaction date, "closest connections" sorts by interaction count, and "alphabetical" sorts by name. Leave empty if no sort preference is given.
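Putting the schema together, a follow-up query like "who else works at Acme in New York?" might be expressed as the arguments below. This is only a sketch: the field names come from the schema above, but all values (and the assumption that company_name, location, and exclude_contact_ids take lists) are invented for illustration.

```python
# Hypothetical arguments for a searchInteractions call, following the
# input schema above. All values are invented for illustration.
followup_args = {
    # Required: the raw user query, with intent and details preserved.
    "query": "who else works at Acme in New York?",
    # Optional structured hints extracted from the query.
    "company_name": ["Acme"],
    "location": ["New York City Metropolitan Area"],
    # "who else" implies paging past earlier results, so contact IDs
    # returned by previous searches are excluded.
    "exclude_contact_ids": ["c_101", "c_102"],
    "sort_instructions": "closest connections",
}

# Only "query" is required; every other field may be omitted.
minimal_args = {"query": "who did I meet most last week?"}
```

Note that the relative date "last week" stays inside the raw query string, as the query field's description requires.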
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns 'actual contact records' and mentions 'relevance score' for ranking, which adds useful context. However, it doesn't describe important behavioral aspects like pagination, rate limits, error conditions, or whether this is a read-only operation (implied by 'search' but not explicit).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise at two sentences. The first states the core purpose; the second provides usage examples and clarifies the return type. Every sentence adds value, though the second could separate its use cases more clearly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 8 parameters, 100% schema coverage, and no output schema, the description provides adequate context about purpose and usage. However, it lacks details about the return format (beyond 'actual contact records'), result limitations, or how the search algorithm works. Given the complexity of the parameter schema, more behavioral context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so parameters are well-documented in the schema itself. The description doesn't add any specific parameter information beyond what's in the schema, but it does provide context about what types of queries the tool handles (e.g., 'who' questions, finding best friends), which helps understand how to use the parameters effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search for interactions and return matching interactions.' It specifies the verb ('search') and resource ('interactions'), and provides concrete use cases like answering 'who' questions or finding best friends. However, it doesn't explicitly differentiate from the sibling 'searchContacts' tool, which appears to be a related search function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use this tool: 'for questions about specific interactions, "who" questions, finding best friends based on relevance score, or finding recently added/created contacts.' It gives specific scenarios but doesn't explicitly state when NOT to use it or mention alternatives like 'searchContacts' for comparison, which would be needed for a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/clay-inc/clay-mcp'
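The same GET request can be issued from Python with only the standard library. This is a sketch: the endpoint comes from the curl example above, but the assumption that the response body is JSON (and any keys it contains) is not confirmed here.

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
URL = "https://glama.ai/api/mcp/v1/servers/clay-inc/clay-mcp"

def fetch_server_info(url: str = URL) -> dict:
    """GET the server record and parse it as JSON (assumes a JSON body)."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires network access):
# info = fetch_server_info()
# print(json.dumps(info, indent=2))
```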

If you have feedback or need assistance with the MCP directory API, please join our Discord server