The relentless hype surrounding generative AI in the past few months has been accompanied by equally loud anguish over the supposed perils — just look at the open letter calling for a pause in AI experiments. This tumult risks blinding us to more immediate risks — think sustainability and bias — and clouds our ability to appreciate the real value of these systems: not as generalist chatbots, but instead as a class of tools that can be applied to niche domains and offer novel ways of finding and exploring highly specific information.
This shouldn’t come as a surprise. The news that a number of companies have developed ChatGPT plugins is a clear demonstration of the likely direction of travel. You won’t ask a “generalized” chatbot to do everything for you, but if you’re, say, Expedia, being able to offer customers a simple way to organize their travel plans is undeniably going to give you an edge in a marketplace where information discovery is so important.
Whether this really amounts to an “iPhone moment” or a serious threat to Google search isn’t obvious at present. While it will likely prompt a change in user behaviors and expectations, in the first instance the shift will be driven by organizations bringing tools built on large language models (LLMs) to bear on their own data and services.
And this, ultimately, is the key — the significance and value of generative AI today is not really a question of societal or industry-wide transformation, it’s instead a question of how it can open up new ways of interacting with large and unwieldy amounts of data and information.
OpenAI is clearly attuned to this fact and senses a commercial opportunity: although the list of organizations taking part in the ChatGPT plugin initiative is small, OpenAI has opened up a waiting list where companies can sign up to gain access to the plugins. In the months to come we will no doubt see many new products and interfaces backed by OpenAI’s generative AI systems.
It’s easy to fall into the trap of seeing OpenAI as the sole gatekeeper of this technology, and ChatGPT as the go-to generative AI tool. Fortunately, this is far from the case. You don’t need to sign up to a waiting list or have vast amounts of cash available to hand over to Sam Altman; instead, it’s possible to self-host LLMs.
This is something we’re starting to see at Thoughtworks. In the latest volume of the Technology Radar — what we describe as an opinionated guide to the techniques, platforms, languages and tools being used across the industry today — we’ve identified a number of interrelated tools and practices that indicate the future of generative AI is niche and specialized, contrary to what much mainstream conversation would have you believe.
Unfortunately, we don’t think this is something many business and technology leaders have yet recognized. The industry’s focus has been fixed on OpenAI, which means the emerging ecosystem of tools beyond it — exemplified by open source projects like GPT-J and GPT-Neo — and the more DIY approach they can facilitate have so far been somewhat neglected. This is a shame, because these tools offer real benefits. For example, a self-hosted LLM sidesteps the very real privacy issues that can come from connecting data with an OpenAI product. In other words, if you want to deploy an LLM to your own enterprise data, you can do precisely that yourself; the data doesn’t need to go elsewhere. This benefit can’t be overstated. Given both industry and public concerns with privacy and data management, being cautious rather than being seduced by the marketing efforts of big tech is eminently sensible.
A related trend we’ve seen is domain-specific language models. Although this is also only just beginning to emerge, fine-tuning publicly available, general-purpose LLMs on your own data could form a foundation for developing incredibly useful information retrieval tools — applied, for example, to product information, content or internal documentation. In the months to come, we think you’ll see more examples of these being used to do things like help customer support staff or enable content creators to experiment more freely and productively.
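To make the information-retrieval idea concrete, here is a deliberately minimal sketch. It does not fine-tune an LLM — it stands in for one with simple token overlap — but it illustrates the shape of the problem: given a query and a corpus of internal documents, surface the most relevant one. The corpus and scoring function are illustrative assumptions, not a production approach.

```python
# Toy sketch of domain-specific retrieval over internal documentation.
# A real system would fine-tune or prompt an LLM over these documents;
# token overlap stands in for relevance scoring here.

def tokenize(text):
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().split())

def best_match(query, documents):
    """Return the document sharing the most tokens with the query."""
    query_tokens = tokenize(query)
    return max(documents, key=lambda doc: len(query_tokens & tokenize(doc)))

# Hypothetical internal documentation snippets.
docs = [
    "Refund policy: customers may request a refund within 30 days.",
    "Deployment guide: run the build pipeline before releasing.",
    "Travel expenses must be approved by a line manager.",
]

print(best_match("how do I request a refund", docs))
```

In a fine-tuned or retrieval-augmented deployment, the scoring step would be replaced by a model that actually understands the domain — but the workflow, querying your own data rather than a general-purpose chatbot, is the same.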
If generative AI does become more domain-specific, the question of what this actually means for humans remains. However, I’d suggest this view of the medium-term future of AI is far less threatening and frightening than many of today’s doom-mongering visions. By better bridging the gap between generative AI and more specific and niche datasets, people should, over time, build a subtly different relationship with the technology. It will lose its mystique as something that ostensibly knows everything, and instead become embedded in our context.
Indeed, this isn’t that novel. GitHub Copilot is a great example of AI being used by software developers in very specific contexts to solve problems. Despite being billed as "your AI pair programmer," we would not call what it does "pairing" — it’s much better described as a supercharged, context-sensitive Stack Overflow.
As an example, one of my colleagues uses Copilot not to do work but as a way of supporting him as he explores a new programming language — it helps him to understand the syntax or structure of a language in a way that makes sense in the context of his existing knowledge and experience.
We will know that generative AI is succeeding when we stop noticing it and the pronouncements about what it might do die down. In fact, we should be willing to accept that its success might actually look quite prosaic. This shouldn’t matter, of course; once we’ve realized it doesn’t know everything — and never will — that will be when it starts to become really useful.