Atla
Enable AI agents to interact with the Atla API for state-of-the-art LLMJ (LLM-as-a-Judge) evaluation.
What is Atla MCP Server?
Atla MCP Server is a standardized interface for LLM agents to interact with the Atla API for state-of-the-art LLM evaluation. It enables AI agents to evaluate LLM responses against specific criteria using Atla's advanced evaluation models.
How to use Atla MCP Server?
- Obtain an Atla API key from Atla's website or create a new one here.
- Install dependencies using the `uv` package manager (recommended).
- Run the server manually with `ATLA_API_KEY=<your-api-key> uvx atla-mcp-server`, or configure it with one of the supported clients:
  - OpenAI Agents SDK, by installing the SDK and connecting via the MCP protocol
  - Claude Desktop, by adding configuration to `claude_desktop_config.json`
  - Cursor, by adding configuration to `.cursor/mcp.json`
Key features of Atla MCP Server
- Standardized `evaluate_llm_response` tool for single-criterion evaluation
- Advanced `evaluate_llm_response_on_multiple_criteria` tool for multi-dimensional evaluation
- Integration with multiple AI platforms (OpenAI, Claude, Cursor)
- Returns structured evaluation results with scores and critiques
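
For illustration, a structured result from the multi-criteria tool might be shaped as follows. This is a minimal sketch: the field names (`criteria`, `score`, `critique`) are assumptions inferred from the feature descriptions above, not the server's actual response schema.

```python
# Hypothetical multi-criteria evaluation result; field names ("criteria",
# "score", "critique") are assumed from the descriptions above, not taken
# from the server's actual schema.
results = [
    {"criteria": "accuracy", "score": "5", "critique": "Factually correct."},
    {"criteria": "conciseness", "score": "3", "critique": "Somewhat verbose."},
]

def failing_criteria(results, threshold=4):
    """Return the names of criteria whose score falls below the threshold."""
    return [r["criteria"] for r in results if int(r["score"]) < threshold]

print(failing_criteria(results))  # ['conciseness']
```

A structure like this makes it straightforward to gate workflows on individual criteria rather than a single aggregate score.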
Use cases of Atla MCP Server
- Objective benchmarking of LLM performance against industry standards
- Multi-criteria evaluation of complex LLM responses
- Integrating Atla's evaluation models into existing AI workflows
- Comparative analysis of different LLM outputs
FAQ from Atla MCP Server
- **What is the Model Context Protocol (MCP)?** MCP is a standardized protocol for communication between LLM agents and tools/services like Atla MCP Server. Learn more here.
- **Which platforms support MCP integration?** Currently supported platforms include the OpenAI Agents SDK, Claude Desktop, and Cursor. More integrations are coming.
- **What evaluation metrics does Atla provide?** Atla provides comprehensive evaluation metrics, including accuracy scores and detailed critiques for LLM responses, using state-of-the-art evaluation models.
- **Is Atla MCP Server open source?** Yes, Atla MCP Server is licensed under the MIT License and contributions are welcome. See CONTRIBUTING.md for details.
# Atla MCP Server

An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API for state-of-the-art LLMJ evaluation.
Learn more about Atla here. Learn more about the Model Context Protocol here.
## Available Tools

- `evaluate_llm_response`: Evaluate an LLM's response to a prompt using a given evaluation criterion. This tool uses an Atla evaluation model under the hood to return a dictionary containing a score for the model's response and a textual critique with feedback on the model's response.
- `evaluate_llm_response_on_multiple_criteria`: Evaluate an LLM's response to a prompt across multiple evaluation criteria. This tool uses an Atla evaluation model under the hood to return a list of dictionaries, each containing an evaluation score and critique for a given criterion.
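
To make the contract concrete, here is a minimal sketch of what a call to `evaluate_llm_response` might carry over MCP. The argument names (`evaluation_criteria`, `llm_prompt`, `llm_response`) are assumptions inferred from the tool description, not the server's published schema.

```python
import json

# Hypothetical MCP tool-call payload for evaluate_llm_response; the argument
# names are assumptions based on the tool description, not the actual schema.
request = {
    "name": "evaluate_llm_response",
    "arguments": {
        "evaluation_criteria": "The response should be factually accurate.",
        "llm_prompt": "What is the capital of France?",
        "llm_response": "The capital of France is Paris.",
    },
}

# The tool is documented to return a score plus a textual critique;
# a result could therefore be shaped like this.
response = {"score": "5", "critique": "The response correctly identifies Paris."}

def summarize_evaluation(result: dict) -> str:
    """Render an evaluation result as a one-line summary."""
    return f"score={result['score']} | {result['critique']}"

print(json.dumps(request["arguments"], indent=2))
print(summarize_evaluation(response))
```

In practice an MCP client (such as the ones shown in the connection examples below) serializes and dispatches this call for you; the sketch only illustrates the likely shape of the data crossing the boundary.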
## Usage

To use the MCP server, you will need an Atla API key. You can find your existing API key here or create a new one here.

### Installation

We recommend using `uv` to manage the Python environment. See here for installation instructions.
### Manually running the server

Once you have `uv` installed and have your Atla API key, you can manually run the MCP server using `uvx` (which is provided by `uv`):

```shell
ATLA_API_KEY=<your-atla-api-key> uvx atla-mcp-server
```

### Connecting to the server

> Having issues or need help connecting to another client? Feel free to open an issue or [contact us](mailto:support@atla-ai.com)!

#### OpenAI Agents SDK

> For more details on using the OpenAI Agents SDK with MCP servers, refer to the [official documentation](https://openai.github.io/openai-agents-python/).

1. Install the OpenAI Agents SDK:

   ```shell
   pip install openai-agents
   ```

2. Use the OpenAI Agents SDK to connect to the server:

   ```python
   import os

   from agents import Agent
   from agents.mcp import MCPServerStdio

   async with MCPServerStdio(
       params={
           "command": "uvx",
           "args": ["atla-mcp-server"],
           "env": {"ATLA_API_KEY": os.environ.get("ATLA_API_KEY")},
       }
   ) as atla_mcp_server:
       ...
   ```

#### Claude Desktop

> For more details on configuring MCP servers in Claude Desktop, refer to the [official MCP quickstart guide](https://modelcontextprotocol.io/quickstart/user).

1. Add the following to your `claude_desktop_config.json` file:

   ```json
   {
     "mcpServers": {
       "atla-mcp-server": {
         "command": "uvx",
         "args": ["atla-mcp-server"],
         "env": {
           "ATLA_API_KEY": "<your-atla-api-key>"
         }
       }
     }
   }
   ```

2. **Restart Claude Desktop** to apply the changes. You should now see options from `atla-mcp-server` in the list of available MCP tools.

#### Cursor

> For more details on configuring MCP servers in Cursor, refer to the [official documentation](https://docs.cursor.com/context/model-context-protocol).

1. Add the following to your `.cursor/mcp.json` file:

   ```json
   {
     "mcpServers": {
       "atla-mcp-server": {
         "command": "uvx",
         "args": ["atla-mcp-server"],
         "env": {
           "ATLA_API_KEY": "<your-atla-api-key>"
         }
       }
     }
   }
   ```

You should now see `atla-mcp-server` in the list of available MCP servers.

## Contributing

Contributions are welcome! Please see the [CONTRIBUTING.md](CONTRIBUTING.md) file for details.

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.