X-Rays

AI Cost Observability in Practice: Allocating and Optimizing Token Spend

Breakout
Level: 200
AI for FinOps
FinOps for AI

AI introduces a new class of cloud spend. Token-based usage is dynamic, hard to attribute, and often invisible to traditional FinOps workflows.

In this session, Demandbase shares how they’re operationalizing AI cost visibility in production. Using LiteLLM, they track AI usage across their customers, agents, and models, then use Vantage to allocate underlying cloud costs and understand what’s driving spend. We’ll show how this enables Demandbase to analyze usage patterns, compare model behavior, and begin aligning pricing with actual consumption – and how these same practices can be applied to track the emerging world of R&D token usage.
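The allocation step described above — turning raw token usage into per-customer, per-model cost — can be sketched in a few lines. This is an illustrative example only: the price table, record shape, and customer names are hypothetical, not Demandbase's actual data or LiteLLM's log format.

```python
# Sketch of per-customer token-cost allocation, the kind of breakdown
# a usage log from a proxy like LiteLLM enables. Prices and records
# below are hypothetical placeholders.
from collections import defaultdict

# Hypothetical per-1M-token prices (input, output) in USD.
PRICES = {
    "gpt-4o": (2.50, 10.00),
    "claude-3-5-sonnet": (3.00, 15.00),
}

def allocate_costs(usage_records):
    """Aggregate token spend by (customer, model)."""
    totals = defaultdict(float)
    for rec in usage_records:
        in_price, out_price = PRICES[rec["model"]]
        cost = (rec["input_tokens"] * in_price +
                rec["output_tokens"] * out_price) / 1_000_000
        totals[(rec["customer"], rec["model"])] += cost
    return dict(totals)

records = [
    {"customer": "acme", "model": "gpt-4o",
     "input_tokens": 120_000, "output_tokens": 30_000},
    {"customer": "acme", "model": "gpt-4o",
     "input_tokens": 80_000, "output_tokens": 20_000},
    {"customer": "globex", "model": "claude-3-5-sonnet",
     "input_tokens": 50_000, "output_tokens": 10_000},
]

for (customer, model), cost in sorted(allocate_costs(records).items()):
    print(f"{customer:8s} {model:20s} ${cost:.4f}")
```

Once usage is keyed by customer and model like this, the same aggregation supports comparing model behavior and aligning pricing with actual consumption.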

We’ll also touch on how AI is being applied to FinOps workflows, including MCP and agents that drive efficiency in FinOps.

Speakers