# Input Mapping

Evaluators are defined with a specific input schema, and the input payload is expected to take a certain shape. However, input data is not always structured to match, so evaluators can be bound with an optional `input_mapping` that maps or transforms the input into the shape they require. Input mappings let you extract and transform data from complex nested structures.

### Summary

* Use `input_mapping` to map or transform your input data onto the field names an evaluator requires.
* You can bind an `input_mapping` to an evaluator for reuse with multiple inputs using `.bind` or `bind_evaluator`.

### Why do evaluators accept a payload and an `input_mapping` vs. kwargs?

Different evaluators require different keyword arguments to operate, and those arguments may not match the keys in your example or dataset. Say our example looks like this, where the inputs and outputs contain nested dictionaries:

```python
eval_input = {
    "input": {
        "query": "user input query",
        "documents": ["doc A", "doc B"]
    },
    "output": {"response": "model answer"},
    "expected": "correct answer"
}
```

We want to run two evaluators over this example:

* `Hallucination`, which requires `query`, `context`, and `response`
* `exact_match`, which requires `expected` and `output`

Rather than modifying our data to fit the two evaluators, we make the evaluators fit the data. Binding an `input_mapping` enables the evaluators to run on the same payload; the map/transform steps are handled by the evaluator itself.

```python
# define an input_mapping that maps the fields required by the
# hallucination evaluator to our data
input_mapping = {
    "query": "input.query",  # dot notation to access nested keys
    "response": "output.response",
    "context": lambda x: " ".join(
        x["input"]["documents"]
    ),  # lambda function to combine the document chunks
}

# the evaluator uses the input_mapping to transform the eval_input
# into its expected input schema
result = hallucination_evaluator.evaluate(eval_input, input_mapping)
```

### Input Mapping Types

The `input_mapping` parameter accepts several types of mappings:

1. **Simple key mapping**: `{"field": "key"}` - maps an evaluator field to an input key
2. **Path mapping**: `{"field": "nested.path"}` - uses JSON path syntax from [jsonpath-ng](https://pypi.org/project/jsonpath-ng/)
3. **Callable mapping**: `{"field": lambda x: x["key"]}` - custom extraction logic

#### Path Mapping Examples

```python
# Nested dictionary access
input_mapping = {
    "query": "input.query",
    "context": "input.documents",
    "response": "output.answer"
}

# Array indexing
input_mapping = {
    "first_doc": "input.documents[0]",
    "last_doc": "input.documents[-1]"
}

# Combined nesting and list indexing
input_mapping = {
    "user_query": "data.user.messages[0].content",
}
```

#### Callable Mappings

For complex transformations, use callable functions that accept the `eval_input` payload:

```python
# Callable example
def extract_context(eval_input):
    docs = eval_input.get("input", {}).get("documents", [])
    return " ".join(docs[:3])  # Join first 3 documents

input_mapping = {
    "query": "input.query",
    "context": extract_context,
    "response": "output.answer"
}

# Lambda example
input_mapping = {
    "user_query": lambda x: x["input"]["query"].lower(),
    "context": lambda x: " ".join(x["documents"][:3])
}
```
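Since path strings use [jsonpath-ng](https://pypi.org/project/jsonpath-ng/) syntax, you can sanity-check a path against a payload before wiring it into an `input_mapping`. A minimal sketch using jsonpath-ng's standard `parse`/`find` API (the payload is the `eval_input` from earlier):

```python
from jsonpath_ng import parse

eval_input = {"input": {"query": "user input query", "documents": ["doc A", "doc B"]}}

# parse the same path string you would put in an input_mapping...
expr = parse("input.documents[0]")

# ...and evaluate it against the payload; find() returns a list of matches
values = [match.value for match in expr.find(eval_input)]
print(values)  # ['doc A']
```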
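Putting the pieces together: returning to the two-evaluator example at the top of this page, `exact_match` runs over the same payload with its own mapping. This sketch assumes an evaluator instance named `exact_match_evaluator` (the name is illustrative) and reuses the `.evaluate(payload, mapping)` call shape from the hallucination example:

```python
# exact_match requires `expected` and `output` (see above)
exact_match_mapping = {
    "output": "output.response",  # path mapping into the nested output dict
    "expected": "expected",  # simple key mapping: already top-level
}

# same eval_input, different evaluator, different mapping
result = exact_match_evaluator.evaluate(eval_input, exact_match_mapping)
```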
### Pydantic Input Schemas

Evaluators use Pydantic models for input validation and type safety. Most of the time (e.g. for `ClassificationEvaluator` or functions decorated with `create_evaluator`), the input schema is inferred, but you can always define your own. The Pydantic model lets you annotate input fields with additional information such as aliases or descriptions.

```python
from pydantic import BaseModel
from typing import List

class HallucinationInput(BaseModel):
    query: str
    context: List[str]
    response: str

evaluator = HallucinationEvaluator(
    name="hallucination",
    llm=llm,
    prompt_template="...",
    input_schema=HallucinationInput
)
```

#### Schema Inference

Most evaluators automatically infer schemas if one is not provided at instantiation. LLM evaluators infer schemas from prompt templates:

```python
# This creates a schema with required str fields: query, context, response
evaluator = LLMEvaluator(
    name="hallucination",
    llm=llm,
    prompt_template="Query: {query}\nContext: {context}\nResponse: {response}"
)
```

Decorated function evaluators infer schemas from the function signature:

```python
@create_evaluator(name="exact_match")
def exact_match(output: str, expected: str) -> Score: ...

# creates an input_schema with required str fields: output, expected
{
    'properties': {
        'output': {'title': 'Output', 'type': 'string'},
        'expected': {'title': 'Expected', 'type': 'string'}
    },
    'required': ['output', 'expected']
}
```

### Binding System

Use `bind_evaluator` or `.bind` to create a pre-configured evaluator with a fixed input mapping. At evaluation time, you only need to provide the `eval_input`; the mapping is handled internally.

```python
from phoenix.evals import bind_evaluator

# Create a bound evaluator with a fixed mapping
bound_evaluator = bind_evaluator(
    evaluator,
    {
        "query": "input.query",
        "context": "input.documents",
        "response": "output.answer"
    }
)

# Run evaluation
scores = bound_evaluator({
    "input": {"query": "How do I reset?", "documents": ["Manual", "Guide"]},
    "output": {"answer": " Go to settings > reset. "}
})
```
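The summary above also mentions `.bind`. Assuming it accepts the same mapping dict as `bind_evaluator` and likewise returns a callable, the method-style equivalent of the example above would be:

```python
# method-style binding; assumes .bind mirrors bind_evaluator
bound_evaluator = evaluator.bind({
    "query": "input.query",
    "context": "input.documents",
    "response": "output.answer"
})

scores = bound_evaluator({
    "input": {"query": "How do I reset?", "documents": ["Manual", "Guide"]},
    "output": {"answer": " Go to settings > reset. "}
})
```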
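Because the mapping is fixed at bind time, a bound evaluator is convenient for sweeping a dataset of identically shaped payloads. A sketch (`rows` is an illustrative stand-in for your dataset; the per-row call shape follows the example above):

```python
rows = [
    {"input": {"query": "q1", "documents": ["doc 1"]}, "output": {"answer": "a1"}},
    {"input": {"query": "q2", "documents": ["doc 2"]}, "output": {"answer": "a2"}},
]

# the bound input_mapping is re-applied to each payload
all_scores = [bound_evaluator(row) for row in rows]
```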
