Logging
To get started with Athina's Monitoring, the first step is to start logging your inferences.
Quick Start
- OpenAI with Python: Just 3 lines of code.
- LiteLLM: Just 2 lines of code.
- Langchain: Set up in 2 minutes.
- Log using a single POST Request
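For the single-POST option, the request can be sketched roughly as below. The endpoint URL, header name, and payload field names here are assumptions for illustration only, not Athina's documented schema; consult the API reference for the exact shape.

```python
import json
import urllib.request

# Hypothetical endpoint -- check Athina's API reference for the real one.
ATHINA_API_URL = "https://log.athina.ai/api/v1/log/inference"


def log_inference(api_key: str, model: str, prompt: str, response: str):
    """Build (and normally send) one inference log as a single POST."""
    payload = {
        "language_model_id": model,  # assumed field name
        "prompt": prompt,            # assumed field name
        "response": response,        # assumed field name
    }
    req = urllib.request.Request(
        ATHINA_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "athina-api-key": api_key,  # assumed header name
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # urllib.request.urlopen(req)  # uncomment to actually send
    return req
```

Because logging is a single fire-and-forget request, it can also be sent from a background thread or task queue so it never blocks the user-facing response.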
If you are using OpenAI with streaming, then follow these instructions:
- OpenAI Chat Completion (1.x)
- OpenAI Chat Completion (0.x)
- OpenAI Completion (1.x)
- OpenAI Completion (0.x)
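Streaming needs special handling because the response arrives as incremental deltas, and the full completion must be reassembled before it can be logged. A minimal sketch of that accumulation, assuming the OpenAI 1.x chat-completion chunk shape (`chunk.choices[0].delta.content`):

```python
def collect_stream(stream):
    """Reassemble a streamed chat completion into the full response text.

    Accepts any iterable of OpenAI 1.x ChatCompletionChunk objects,
    e.g. the return value of
    client.chat.completions.create(..., stream=True).
    """
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta is not None:  # role-only and final chunks carry no text
            parts.append(delta)
    return "".join(parts)
```

Once the chunks are joined, the complete response can be logged exactly like a non-streaming inference.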
If you are using the OpenAI Assistants API, then follow these instructions:
For all other models, follow these instructions:
FAQs
- Will Athina logging add additional latency?
- Do I have to use Athina as a proxy for logging?
- Where is my data stored?
Tracing
Modern LLM applications are built from many layered abstractions: chains, agents equipped with tools, and sophisticated prompts. Athina's nested tracing makes it easy to see what your application is doing and to pinpoint where a problem originates. The Athina SDK and UI support tracing complex LLM features, such as vector database queries and multiple LLM calls, by letting you chain them together easily in the SDK.
Head over to Tracing to learn more.
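Conceptually, a nested trace records each step of a pipeline (a retrieval query, each LLM call) as a span under a parent trace, so timings and failures can be attributed to a specific step. The toy `Tracer` below illustrates that structure; it is a conceptual stand-in, not Athina's SDK, whose actual class and method names are covered in the Tracing docs.

```python
import time
from contextlib import contextmanager


class Tracer:
    """Toy nested tracer illustrating the span tree a tracing tool records.

    Conceptual stand-in only; Athina's SDK defines its own interfaces.
    """

    def __init__(self):
        self.spans = []   # top-level spans (traces)
        self._stack = []  # currently open spans, innermost last

    @contextmanager
    def span(self, name):
        record = {"name": name, "children": [], "start": time.time()}
        if self._stack:
            self._stack[-1]["children"].append(record)  # nest under parent
        else:
            self.spans.append(record)                   # new top-level trace
        self._stack.append(record)
        try:
            yield record
        finally:
            record["duration"] = time.time() - record["start"]
            self._stack.pop()


tracer = Tracer()
with tracer.span("rag_pipeline"):
    with tracer.span("vector_db_query"):
        pass  # e.g. retrieve documents from the vector store
    with tracer.span("llm_call"):
        pass  # e.g. call the model with the retrieved context
```

Because spans nest, a slow or failing `llm_call` shows up in context under `rag_pipeline` rather than as an isolated event.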