# 05.20.2025: Datasets and Experiment Evaluations in the JS Client
{% embed url="https://storage.googleapis.com/arize-phoenix-assets/assets/images/TS-experiments.png" %}
We've added a host of new methods to the JS client:

* [getExperiment](https://arize-ai.github.io/phoenix/functions/experiments.getExperiment.html) - retrieve an Experiment to view its results and run evaluations on it
* [evaluateExperiment](https://arize-ai.github.io/phoenix/functions/experiments.evaluateExperiment.html) - evaluate previously run Experiments using LLM-as-a-Judge or code-based evaluators
* [createDataset](https://arize-ai.github.io/phoenix/functions/datasets.createDataset.html) - create Datasets in Phoenix directly from the client
* [appendDatasetExamples](https://arize-ai.github.io/phoenix/functions/datasets.appendDatasetExamples.html) - append additional examples to an existing Dataset
### Full list of supported JS/TS client methods
{% embed url="https://arize-ai.github.io/phoenix/modules.html" %}