# Use LiteLLM Proxy to Log OpenAI, Azure, Vertex, Bedrock (100+ LLMs) to Arize
Use LiteLLM Proxy to send traffic to any of these providers through one endpoint and log every request to Arize with a single callback setting.

## Step 1. Setup config.yaml

LiteLLM requires a config with all your models defined - we will call this file `litellm_config.yaml`. Detailed docs on how to set up the LiteLLM config are here.
```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/fake
      api_key: fake-key
      api_base: https://exampleopenaiendpoint-production.up.railway.app/

litellm_settings:
  success_callback: ["arize"] # 👈 Set Arize AI as a callback

environment_variables: # 👈 Set Arize AI env vars
  ARIZE_SPACE_KEY: "d0*****"
  ARIZE_API_KEY: "141a****"
```
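If you would rather keep the credentials out of the config file, a sketch of the alternative is below: LiteLLM also picks up `ARIZE_SPACE_KEY` and `ARIZE_API_KEY` from the process environment, so you can export them in the shell that launches the proxy (the placeholder values mirror the ones above).

```shell
# Sketch: export the Arize credentials in the launching shell instead of
# committing them to litellm_config.yaml. LiteLLM falls back to the
# process environment for these variable names.
export ARIZE_SPACE_KEY="d0*****"
export ARIZE_API_KEY="141a****"
```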
## Step 2. Start LiteLLM proxy
```shell
docker run \
  -v $(pwd)/litellm_config.yaml:/app/config.yaml \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml --detailed_debug
```
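If you exported the Arize keys in your shell rather than hardcoding them in the config, pass them through to the container. This is a sketch using Docker's standard `-e` flag; the variable names are the ones from Step 1.

```shell
# Sketch: forward Arize credentials from the host shell into the container
# with docker's -e flag, so they don't need to live in the mounted config.
docker run \
  -v $(pwd)/litellm_config.yaml:/app/config.yaml \
  -e ARIZE_SPACE_KEY=$ARIZE_SPACE_KEY \
  -e ARIZE_API_KEY=$ARIZE_API_KEY \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml --detailed_debug
```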
## Step 3. Test it - make a /chat/completions request to the LiteLLM proxy
```shell
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Hello, Claude gm!"}
    ]
  }'
```
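Streaming requests go through the same callback path. As a quick follow-up check, the sketch below repeats the request with the standard OpenAI `stream` parameter (assuming the proxy is still running as in Step 2); the completed call should also appear as a trace in Arize.

```shell
# Sketch: the same request with streaming enabled; chunks arrive as
# server-sent events, and the finished call is still logged to Arize.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-4",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Hello, Claude gm!"}
    ]
  }'
```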