Taking a Deeper Look
Observability is essential to understanding how LLMs perform over time. Evaluable AI divides observability into two parts: inference-related monitoring and evaluation-related analysis. The following documents highlight the components of our platform that enable development teams to dive deeper into cost, latency, token usage, and other important aspects of LLM observability:
#1: Reviewing Run Analytics on the Run Analytics page
#2: Reviewing Evaluation Analytics on the Evaluation Analytics page
#3: Comparing Models on the Evaluation Comparison page