
Transforming exhibition experiences with AI


A good exhibition can be an incredible experience for attendees, combining opportunities to network with the ability to learn. But sometimes their scale can be overwhelming — finding the talks and exhibitors that are most relevant to you can be challenging. This is something one of our clients in the exhibition space is acutely aware of; you can plan and execute something incredible, but if attendees can’t navigate and discover what’s most valuable to them the experience will be underwhelming at best.

We were approached to solve this problem: the client wanted to work with us to improve exhibition experiences for visitors and to help guests identify and connect with the exhibitors that mattered most to them.

Bringing AI into the exhibition experience

 

After working through the core problem with the client, we realized one way to solve it was through something akin to a matchmaking service that would help attendees find relevant events and exhibitors. We envisioned that such a tool could be integrated into existing systems like exhibition applications, so user journeys were as smooth and familiar as possible for attendees of the client's exhibitions.

 

Because we needed to combine ease of use, familiarity and information accuracy, we decided AI could prove a useful technical solution. Thoughtworks has significant experience in this space, and applications like Jugalbandi — a tool used by millions to access public services in India — highlight what’s possible with effective AI-driven chat interfaces. 

 

Implementation and key challenges

 

Data quality

 

At the core of the project was exhibitor data — information about who they were, what they were offering and where they could be found, for instance. 

 

It’s become a bit of a cliché to say that data quality is fundamental to AI success, but this project brought the issue into full view; as we were preparing our data set for the model, we found many inconsistencies. Some exhibitors provided a large quantity of information while others offered very little. This meant we had to augment the data through our own research, sometimes using other sources, to ensure we had everything relevant. Without this, the AI chatbot wouldn’t work.

 

The tech stack

 

Once the data set (which was all text) had been collected, it was transformed into embeddings, numerical representations of unstructured data, using the Amazon Titan Embeddings model (native to Amazon Bedrock, our hosting platform of choice) and then saved into a vector database.
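As a rough sketch of this step: each exhibitor record is flattened into a single text chunk and sent to the Titan embeddings model through the Bedrock runtime API. The model ID and the `exhibitor_to_document` helper below are illustrative assumptions, not the project's actual code; check your region's Bedrock catalog for available model IDs.

```python
import json


def exhibitor_to_document(exhibitor: dict) -> str:
    """Flatten an exhibitor record into one text chunk before embedding.

    Illustrative helper: the real field names depend on the exhibitor data set.
    """
    return " | ".join(f"{key}: {value}" for key, value in exhibitor.items() if value)


def embed_text(bedrock_runtime, text: str) -> list[float]:
    """Vectorize one text chunk with the Amazon Titan embeddings model on Bedrock."""
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v1",  # assumed model ID; verify in your account
        body=json.dumps({"inputText": text}),
    )
    payload = json.loads(response["body"].read())
    return payload["embedding"]
```

The resulting vectors are then written to the vector database alongside the source text, so retrieved matches can be shown back to the user.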

 

The vector database consisted of PostgreSQL with the pgvector extension, which allowed us to index and later retrieve the relevant text embeddings efficiently.
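A nearest-neighbour lookup in pgvector can be expressed with its distance operators; a minimal sketch of such a query builder is below. The table and column names are hypothetical (the extension itself is enabled with `CREATE EXTENSION vector`, and large tables are typically indexed with IVFFlat or HNSW).

```python
def build_similarity_query(table: str = "exhibitor_embeddings", top_k: int = 5) -> str:
    """Build a pgvector nearest-neighbour query.

    `<=>` is pgvector's cosine-distance operator; the query embedding is bound
    as a parameter and cast to the vector type. Table/column names are assumed.
    """
    return (
        f"SELECT exhibitor_id, content "
        f"FROM {table} "
        f"ORDER BY embedding <=> %s::vector "
        f"LIMIT {top_k}"
    )
```

The query string would then be executed through a PostgreSQL driver such as psycopg, passing the visitor-query embedding as the parameter.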

 

Efficiency was necessary because we had implemented retrieval-augmented generation (RAG). This takes contextual data and the visitor’s objective as input and generates relevant recommendations; the more contextually rich the data, the more effective RAG would be.
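In a RAG flow like this, the retrieved exhibitor chunks and the visitor's stated objective are combined into one prompt for the language model. The prompt wording below is purely illustrative:

```python
def build_rag_prompt(visitor_objective: str, retrieved_chunks: list[str]) -> str:
    """Combine the visitor's objective with retrieved exhibitor context.

    Illustrative prompt template, not the production wording.
    """
    context = "\n".join(f"- {chunk}" for chunk in retrieved_chunks)
    return (
        "You are an exhibition assistant. Using only the exhibitor "
        "information below, recommend the most relevant exhibitors and "
        "briefly explain why each one matches the visitor's objective.\n\n"
        f"Exhibitor information:\n{context}\n\n"
        f"Visitor objective: {visitor_objective}"
    )
```

Note that the template explicitly asks for reasons alongside recommendations, which matters for the transparency point discussed later in this article.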

 

We used Claude Sonnet 3.5, hosted on Amazon Bedrock, as the large language model to generate responses.
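Invoking Claude on Bedrock might look roughly like the sketch below, using the Anthropic messages format that Bedrock expects. The model ID is an assumption (these vary by region and version), so treat this as a shape, not a drop-in implementation:

```python
import json


def generate_recommendations(bedrock_runtime, prompt: str) -> str:
    """Send a RAG prompt to Claude on Bedrock and return the text reply.

    Model ID is assumed; check your region's Bedrock model catalog.
    """
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed ID
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

The same Bedrock runtime client used for embeddings can be reused here, which keeps hosting and access control in one place.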

 

Translation challenges

 

A further challenge, discovered later, was translation. The chat application needed to work in multiple languages, as the client wanted to deploy it globally.

 

We faced this problem in regions where the predominant language of communication isn’t English, such as France and Japan.


To tackle this, we first translated our textual data into the relevant language to make sure our model worked correctly in terms of semantic similarity — the means through which the model retrieves data — and also that the recommendations that were delivered through the virtual assistant were in the correct local language. 

    

However, this alone wasn’t enough; we also needed to verify that our translations were correct and that our recommendations made sense. This was particularly difficult as we aren’t language experts!

 
The solution combined human oversight (involving relevant client stakeholders and subject matter experts to ensure responses were appropriate for each language) with model selection. Specifically, we compared the translation capabilities of various models, from open-source models on Hugging Face to Anthropic's Claude and Google's Gemma. We ultimately landed on Claude Sonnet 3.5, already in use for generating responses, for its command of language.


To be clear, this wasn’t just a question of linguistic accuracy; it was also about tone. Translations needed to align both with the client’s corporate tone and with what was appropriate for each market, appearing professional and natural.
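When translation is delegated to an LLM, both the target language and the desired tone can be carried in the prompt. A minimal, hypothetical template:

```python
def build_translation_prompt(
    text: str,
    target_language: str,
    tone: str = "professional and natural",
) -> str:
    """Prompt an LLM to translate while preserving brand tone.

    Illustrative wording only; the production prompt and tone guidance
    would come from the client's style guidelines.
    """
    return (
        f"Translate the following exhibition content into {target_language}. "
        f"Keep the tone {tone}, preserve exhibitor and product names as-is, "
        "and return only the translation.\n\n"
        f"{text}"
    )
```

Instructing the model to leave exhibitor and product names untranslated is one simple guard against the semantic-similarity issues mentioned above, since those names are also what the retrieval step matches on.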

 

What we learned

 

The application we developed has proven very successful in its initial rollout. Along the way, we learned a lot:

 

  • The application particularly helps high-value exhibition visitors, who hold financial decision-making power in their businesses; it saves them valuable time and helps them better connect with relevant exhibitors.

  • For exhibitors, meanwhile, high-value visitors are important leads. That means improved connectivity is a win-win for both exhibitors and visitors.

  • Visitors aren't just interested in recommendations; they want the reasons behind them too, as explanations help establish credibility. Striking a balance between seamlessness and transparency is therefore critical to engaging visitors.

 

What’s next?

 

One of the most important pieces of feedback we received is that visitors want quick results; any latency in the application’s response can frustrate users. Consequently, the next phase of work will focus on reducing the latency of generated responses.

 

To do this, we’ve experimented with a number of approaches already, including caching, lightweight LLMs and precomputing recommendations. We haven’t yet landed on the best approach but we hope, in time, to find an effective solution that works in a real-world exhibition context.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
