Interacting with computers and other devices to get things done has become part of daily life—we search for information on food, clothes and new jobs; we tell our phones to find directions for us; and we no longer need to speak to a human to book a trip. These functions continue to become easier thanks to Natural Language Interfaces.
Natural Language Interfaces allow you to interact with a computer system using regular language, either typed or spoken.
Applications and devices are adapting to the way we speak, rather than requiring us to give formulaic or robotic commands.
You can ask your computer to perform tasks the same way you might ask a friend, and it will do them. You may have already held a conversation with your device using Siri or Amazon Alexa.
Conversational Natural Language Interfaces continue to change the way people interact with services by providing a much more natural way of doing so. Remember the first time you saw characters in Star Trek interacting with the computer using their natural voices? You may have thought “Wow! I can’t wait to be able to use a real computer like that.” Technology is starting to reach the point where this is possible.
The Turing Test was designed to see if it is possible to make a system whose interactions are indistinguishable from those of a human. The test has yet to be passed, but breakthroughs continue to be made.
Natural language systems first went mainstream with the introduction of Siri on the iPhone in 2011. Before Siri, such systems were mostly limited to dictation services and curiosities such as ELIZA, which simply attempted to hold a conversation with the user.
Natural Language Interfaces are now increasingly involved in services such as flight and taxi bookings, searching for restaurants, and paying credit card bills. Some services also exist to put people into interactive games and novels. It is becoming more common to control smart home/office devices and music players. Some businesses are also planning to use this technology to provide a natural interface for data analytics, for example, to call up specific information quickly and efficiently.
It is now possible to use Natural Language Interfaces to make systems do increasingly useful things. Consider the following scenarios:
Healthcare. You could create a device that will allow a surgeon to call up various data and information using a natural voice interface. This would allow the surgeon to interact with the system, even while performing surgery.
Retail. Here, a device could suggest various clothes or accessories based on style preferences that you offer. You could even combine this with image recognition services so that the system learns what styles and colours are popular with people of different body shapes and skin tones.
Finance. You could make services that allow users to invest in various markets. Using further AI technology, such as robo-advisers, the system could even give investment tips and suggestions.
Cookery. How about creating a handy virtual chef that guides you through recipes? It could even suggest dishes based on foods you tell it you have to hand.
In their current state, most chatbots and intelligent assistants work on a transactional basis. You ask them a question, or ask them to perform a task, and they respond. They can also be made to keep some context or ask follow-up questions.
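This transactional loop can be sketched in a few lines. The intent and slot names below are made up for illustration, not any real framework's API; the point is how the bot keeps context between turns and asks a follow-up question when information is missing:

```python
# A minimal sketch of a transactional bot that keeps conversation context
# and asks a follow-up question. Intent/slot names are illustrative only.

def handle(message, context):
    """Update the conversation context and return the bot's reply."""
    if context.get("pending") == "city":
        # The previous turn asked for a city; treat this message as the answer.
        context["city"] = message
        context.pop("pending")
    elif "book a taxi" in message.lower():
        context["intent"] = "book_taxi"

    if context.get("intent") == "book_taxi" and "city" not in context:
        context["pending"] = "city"          # remember what we asked for
        return "Sure - which city are you in?"
    if context.get("intent") == "book_taxi":
        return f"Booking a taxi in {context['city']}."
    return "Sorry, I didn't understand that."

context = {}
print(handle("Book a taxi, please", context))  # bot asks a follow-up question
print(handle("London", context))               # bot uses the kept context
```

Real assistants replace the string matching with a trained language-understanding model, but the turn-by-turn context handling works along these lines.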
With Natural Language Interfaces, the user can simply ask in the same way as they would a person. There is no need to look up a set command; the user can phrase a request in a hundred different ways and still be understood. This is made possible by advances in Natural Language Understanding, which allows computers to interpret natural language inputs, written or spoken, as well as by advances in computing power, specifically cloud computing. The technology becomes a more natural, seamless part of our daily activities.
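To make the "hundred different phrasings, one meaning" idea concrete, here is a toy illustration. The keyword rules stand in for a trained Natural Language Understanding model (in practice usually a cloud service); the intent names are invented for the example:

```python
# Toy intent recognition: many different phrasings map to the same intent.
# Real NLU systems use trained models; these keyword rules are a stand-in.

INTENT_KEYWORDS = {
    "get_directions": {"directions", "route", "how do i get", "way to"},
    "book_flight":    {"flight", "fly"},
    "play_music":     {"play", "music", "song"},
}

def recognise_intent(utterance):
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"

# Three different wordings, all understood as the same request:
for phrase in ("Give me directions to the station",
               "What's the best way to the station?",
               "How do I get to the station?"):
    assert recognise_intent(phrase) == "get_directions"
```

A statistical model generalises far beyond fixed keywords, which is what lets users phrase the same request however feels natural to them.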
A music player is a good example of how natural language understanding is humanizing our interactions with services. Imagine we want to listen to some music by Coldplay. Instead of issuing a fixed command such as “play next”, we can speak to the device as we would to a friend: “skip this song, I don’t like it.”
With a Natural Language Interface, we may give a reason, much as we would to a friend. In this example, the request tells the player to skip the song, but also that we dislike it. This makes it less likely that the player will select this song, or similar ones, in future. The player can learn and refine its understanding of your individual musical tastes.
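A hypothetical sketch of that two-part interpretation, splitting one utterance into an action and a preference signal; the phrase matching and scoring scheme here are purely illustrative:

```python
# Sketch: split "skip this song, I don't like it" into an action (skip)
# and a preference signal (dislike), then lower the song's score so it is
# less likely to be chosen again. Parsing and scoring are illustrative only.

DISLIKE_PHRASES = ("don't like", "hate", "not a fan")

def interpret(utterance, current_song, scores):
    text = utterance.lower()
    actions = []
    if "skip" in text:
        actions.append("skip")
    if any(phrase in text for phrase in DISLIKE_PHRASES):
        # Remember the reason, not just the command: this song (and, in a
        # real system, similar ones) becomes less likely in future.
        scores[current_song] = scores.get(current_song, 0) - 1
        actions.append("record_dislike")
    return actions

scores = {}
actions = interpret("Skip this song, I don't like it", "Yellow", scores)
# The player both skips the track and records the negative preference.
```

A production player would feed such signals into a recommendation model rather than a simple score table, but the principle of mining the phrasing for extra intent is the same.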
The music player can truly become “smart”. By taking cues from the way we ask for things, the system can use Artificial Intelligence technologies to learn about us and provide even more natural experiences. Although Artificial Intelligence and Machine Learning are complex, online cloud services exist that make them simple to implement.
Why this is important
It is widely predicted that “chatbots are the new apps”. Whether chatbots actually take over from apps remains to be seen, but demand for services to become available in this way is increasing. While the number of customers using this technology regularly is currently quite low, it is an expanding segment. Amazon Echo devices are selling well in the US and have recently launched in Europe. Google Home has also been released recently, and Apple has revealed plans for a competing device. Microsoft technology is also starting to be used in some “smart home” products to add a Natural Language Interface.
Early adopters are buying such devices out of curiosity, but as the number of apps and devices with Natural Language Interfaces increases, they are becoming capable of doing more things and, hence, becoming more useful. Natural voice interfaces are also being built into more everyday technologies, including smartphones, computers, TVs and smartwatches. Although the current set of skills and services provided by these devices is rather basic, it is still early in their technological evolution, and developers are still learning how users like to interact with them.
Soon, we will come to expect this capability, much as we now expect a touch screen on a mobile phone. Developers should have a strategy for building Natural Language Interfaces into their services before the technology reaches that point.
What should I do?
Services now exist that make it easier than ever to create natural language and voice-based interfaces. At this point it is important to think about the way people interact with our services and use our devices. We should also think about how these interactions need to evolve in order to become more natural, which will help give a competitive edge in the marketplace.
If we can get the interactions right, the technology blends into the background. It is no longer a special task to load an application or website and perform the required action. We can simply ask for it without having to specifically think about the steps involved. This allows us to do what we need without breaking our flow or having to switch context, effectively humanizing our entire experience with technology.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.