
Combining traditional NLP with LLMs

Published: Apr 03, 2024
Apr 2024
Trial

Large language models (LLMs) are the Swiss Army knives of natural language processing (NLP). But they’re also quite expensive and not always the best tool for the job — sometimes it's more effective to use a proper corkscrew. There’s a lot of potential in combining traditional NLP with LLMs: use several NLP techniques in conjunction with an LLM and reserve the LLM for the steps that actually need its capabilities. Traditional data science and NLP approaches for document clustering, topic identification, classification and even summarization are cheaper and can be more effective at solving part of the problem. We then bring in the LLM when we need to generate or summarize longer texts, or combine multiple large documents, taking advantage of its superior attention span and memory. For example, we’ve successfully used this combination of techniques to generate a comprehensive trends report for a domain from a large corpus of individual trend documents, using traditional clustering alongside the generative power of LLMs.
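To illustrate this division of labor, here is a minimal sketch (not the exact pipeline behind the trends-report example, and assuming scikit-learn is available): documents are clustered locally with TF-IDF and k-means, and a hypothetical `summarize_with_llm` callable, standing in for whichever LLM API you use, is invoked only once per cluster.

```python
# Minimal sketch of the split described above: cheap, local clustering with
# traditional NLP, with the LLM reserved for per-cluster summarization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_documents(documents, n_clusters=5):
    """Group documents by topic with TF-IDF + k-means -- no LLM calls needed."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=42).fit_predict(vectors)
    clusters = {}
    for doc, label in zip(documents, labels):
        clusters.setdefault(label, []).append(doc)
    return clusters

def build_trends_report(documents, summarize_with_llm, n_clusters=5):
    """Call the LLM once per cluster, not once per document.

    `summarize_with_llm` is a hypothetical callable wrapping whichever LLM
    API you use; it takes a list of related documents and returns a summary.
    """
    clusters = cluster_documents(documents, n_clusters)
    return [summarize_with_llm(docs) for docs in clusters.values()]
```

The cost-relevant point is that LLM calls scale with the number of clusters rather than the number of documents, while the clustering itself runs cheaply on commodity hardware.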
