curl --request GET \
  --url https://api.openai.com/v1/evals/{eval_id} \
  --header 'Authorization: Bearer <token>'
{
  "object": "eval",
  "id": "<string>",
  "name": "Chatbot Effectiveness Evaluation",
  "data_source_config": {
    "type": "custom",
    "schema": "{\n  \"type\": \"object\",\n  \"properties\": {\n    \"item\": {\n      \"type\": \"object\",\n      \"properties\": {\n        \"label\": {\"type\": \"string\"}\n      },\n      \"required\": [\"label\"]\n    }\n  },\n  \"required\": [\"item\"]\n}\n"
  },
  "testing_criteria": [],
  "created_at": 123,
  "metadata": {},
  "share_with_openai": true
}
Get an evaluation by ID.
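The same lookup can be made from code. Below is a minimal Python sketch using the requests library; the endpoint and Bearer auth come from the curl example above, while the eval ID is a placeholder and the API key is assumed to be in the OPENAI_API_KEY environment variable:

import os

import requests

EVAL_ID = "eval_abc123"  # placeholder, not a real eval ID

response = requests.get(
    f"https://api.openai.com/v1/evals/{EVAL_ID}",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    timeout=30,
)
response.raise_for_status()  # surface 4xx/5xx errors early

evaluation = response.json()
print(evaluation["name"], evaluation["created_at"])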
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
eval_id: The ID of the evaluation to retrieve.
Returns the evaluation: an Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration, such as improving the quality of your chatbot or checking how well it handles customer support.
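The schema string inside data_source_config is itself a JSON Schema describing each data row. A minimal sketch, assuming the jsonschema package is available, of checking a hypothetical row against the schema shown in the response above:

import jsonschema

# The schema from data_source_config, decoded from its JSON string form.
schema = {
    "type": "object",
    "properties": {
        "item": {
            "type": "object",
            "properties": {"label": {"type": "string"}},
            "required": ["label"],
        }
    },
    "required": ["item"],
}

# Hypothetical data row; only item.label is required by the schema.
row = {"item": {"label": "helpful"}}

jsonschema.validate(instance=row, schema=schema)  # raises ValidationError on mismatch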