POST /v1/{team_id}/verify
Example request:
curl --request POST \
  --url https://api.naviml.com/v1/{team_id}/verify \
  --header 'Content-Type: application/json' \
  --data '{
  "messages": [
    {
      "role": "user",
      "content": "<string>"
    }
  ],
  "verifiers": [
    {
      "id": "<string>",
      "type": "policy-verifier",
      "parameters": {
        "threshold": 0.5
      }
    }
  ],
  "explain": true
}'
Example response:
{
  "request_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "action": "allow",
  "result": {
    "verifiers": [
      {
        "id": "<string>",
        "result": "<string>",
        "explanation": {
          "text": "<string>"
        },
        "action": "allow",
        "details": {},
        "usage": {
          "prompt_tokens": 123,
          "completion_tokens": 123,
          "total_tokens": 123
        }
      }
    ],
    "action": "allow"
  }
}

This endpoint allows you to run multiple verifiers against a single LLM response. The response contains the result of each verifier and the final action based on the verifiers’ results.
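For example, a minimal Python sketch using the requests library (the team ID and verifier ID are placeholders; no authentication headers are shown here, matching the curl example above):

import requests

TEAM_ID = "your-team-id"  # placeholder team ID
url = f"https://api.naviml.com/v1/{TEAM_ID}/verify"

payload = {
    "messages": [
        {"role": "user", "content": "How do I reset my password?"}
    ],
    "verifiers": [
        {
            "id": "your-verifier-id",   # placeholder verifier ID
            "type": "policy-verifier",
            "parameters": {"threshold": 0.5},
        }
    ],
    "explain": True,
}

response = requests.post(url, json=payload)  # sends Content-Type: application/json
response.raise_for_status()
print(response.json()["action"])  # "allow", "modify", or "deny"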

Path Parameters

team_id (string, required)
ID of the team that owns the resources and under which the verifiers are executed.

Body (application/json)

messages (object[], required)
List of messages to run the verifiers against.

verifiers (object[], required)
List of verifiers to execute, together with their configurations.

explain (boolean)
If true, the response includes an explanation for each verifier's result.
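For example, a request body that runs two verifiers against the same messages and asks for explanations might look like the following (the verifier IDs are placeholders):

{
  "messages": [
    {"role": "user", "content": "How do I reset my password?"}
  ],
  "verifiers": [
    {
      "id": "<first-verifier-id>",
      "type": "policy-verifier",
      "parameters": {"threshold": 0.5}
    },
    {
      "id": "<second-verifier-id>",
      "type": "policy-verifier",
      "parameters": {"threshold": 0.8}
    }
  ],
  "explain": true
}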

Response

200 OK (*/*)

request_id (string, required)
Unique identifier for the request.

action (enum<string>, required)
Suggested action based on the result. Available options: allow, modify, deny.
result (object, required)
Per-verifier results and the overall action (see the example response above).
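Continuing from the Python request above, one plausible way for a client to act on the suggested action and inspect per-verifier results (field names are taken from the example response):

result = response.json()

if result["action"] == "allow":
    print("Response passed all verifiers")
elif result["action"] == "modify":
    print("Response should be modified before delivery")
else:  # "deny"
    print("Response was rejected by one or more verifiers")

# Per-verifier outcomes are listed under result["result"]["verifiers"]
for verifier in result["result"]["verifiers"]:
    print(verifier["id"], verifier["action"], verifier["result"])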