Chatbots backed by large language models (LLMs) are gaining a lot of popularity right now, and we’re seeing emerging techniques around productionizing and productizing them. One such productization challenge is understanding how users are conversing with a chatbot that is driven by something as generic as an LLM, where the conversation can go in many directions. Understanding the reality of conversation flows is crucial to improving the product and increasing conversion rates. One technique to tackle this problem is to use graph analysis for LLM-backed chats. The agents that support a chat with a specific desired outcome — such as a shopping action or a successful resolution of a customer's problem — can usually be represented as a desired state machine. By loading all conversations into a graph, you can analyze actual patterns and look for discrepancies from the expected state machine. This helps find bugs and opportunities for product improvement.
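
As a minimal sketch of the idea — assuming each conversation turn has already been tagged with a state, and using networkx with hypothetical state names and sample transitions — the following Python snippet loads observed transitions into a graph and flags discrepancies from the expected state machine:

```python
import networkx as nx

# Expected state machine for a shopping assistant (hypothetical states).
expected = nx.DiGraph()
expected.add_edges_from([
    ("greeting", "product_search"),
    ("product_search", "product_details"),
    ("product_details", "add_to_cart"),
    ("add_to_cart", "checkout"),
    ("checkout", "order_confirmed"),
])

# Observed transitions, e.g. extracted from chat logs by tagging each turn
# with the state the agent was in (sample data for illustration).
observed_transitions = [
    ("greeting", "product_search"),
    ("product_search", "product_details"),
    ("product_details", "product_search"),  # user went back to searching
    ("product_search", "abandoned"),        # conversation dropped off
    ("product_details", "add_to_cart"),
    ("add_to_cart", "checkout"),
    ("checkout", "order_confirmed"),
]

# Load all conversations into a graph, counting how often each transition occurs.
actual = nx.DiGraph()
for src, dst in observed_transitions:
    if actual.has_edge(src, dst):
        actual[src][dst]["weight"] += 1
    else:
        actual.add_edge(src, dst, weight=1)

# Transitions that happen in practice but are not part of the expected state
# machine point at bugs or product-improvement opportunities.
for src, dst, data in actual.edges(data=True):
    if not expected.has_edge(src, dst):
        print(f"Unexpected transition {src} -> {dst} seen {data['weight']} time(s)")

# Expected transitions that never occur may indicate dead paths in the flow.
missing = [edge for edge in expected.edges() if not actual.has_edge(*edge)]
print("Expected but never observed:", missing)
```

Because the observed graph carries edge weights, the same structure can also be used to rank the most common off-script paths or to visualize where conversions drop off.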