Taking a Deeper Look

Observability is essential for understanding how LLMs perform over time. Evaluable AI divides observability into two parts: inference-related monitoring and evaluation-related analysis. The following documents describe the components of our platform that let development teams dig deeper into cost, latency, token usage, and other key aspects of LLM observability:
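To make the inference-related side of this split concrete, here is a minimal sketch of logging cost, latency, and token usage per LLM call. All names (`InferenceRecord`, `InferenceMonitor`) are hypothetical illustrations, not the platform's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class InferenceRecord:
    # One LLM call's observability data: latency in seconds,
    # prompt/completion token counts, and estimated cost in USD.
    latency_s: float
    prompt_tokens: int
    completion_tokens: int
    cost_usd: float

@dataclass
class InferenceMonitor:
    # Collects records over time so aggregate cost, latency,
    # and token usage can be inspected later.
    records: list = field(default_factory=list)

    def log(self, record: InferenceRecord) -> None:
        self.records.append(record)

    def summary(self) -> dict:
        n = len(self.records)
        return {
            "calls": n,
            "total_cost_usd": sum(r.cost_usd for r in self.records),
            "avg_latency_s": sum(r.latency_s for r in self.records) / n,
            "total_tokens": sum(
                r.prompt_tokens + r.completion_tokens for r in self.records
            ),
        }

monitor = InferenceMonitor()
monitor.log(InferenceRecord(latency_s=0.8, prompt_tokens=120,
                            completion_tokens=40, cost_usd=0.002))
monitor.log(InferenceRecord(latency_s=1.2, prompt_tokens=200,
                            completion_tokens=80, cost_usd=0.004))
print(monitor.summary())
```

The evaluation-related side would layer quality scores on top of records like these; the monitoring side, as sketched here, is what supplies the raw cost, latency, and token figures.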
