Disruption in capital markets is nothing new. But now, technological advances have dramatically accelerated the pace of disruption.
The cloud offers cheap computing power with almost unlimited scalability. Organizations are building data science and machine learning capabilities to maximize the business value of their data. Meanwhile, rapid advances in AI are enabling machines to take over routine manual tasks, freeing humans to focus on more complex, higher-value work.
Together, these advances are increasing the value and speed of insights, helping ambitious business leaders disrupt their domains. The technological laggards, however, face an increasing risk of irrelevance.
In capital markets, incumbents face stiff competition from fintechs and technology companies. These newcomers are agile in their approach, disruptive in their products, and innovative in their execution.
Part of the challenge for capital markets firms is that many of their core processes are still heavily manual, iterative, and conversational. Most of these interactive processes, including negotiations and sales, haven't changed in decades and suffer from high processing costs, inconsistent outcomes, and inefficient operations. Digitizing these front office processes with smarter technologies can reduce operating costs and save time.
One emerging technology in particular — conversational AI — can have a profound impact on a firm’s ability to automate traditional processes. In this article, we’ll examine the most promising use cases for conversational AI in capital markets, and outline the key principles to guide your transformation strategy.
Conversational AI uses machine learning algorithms for natural language processing (NLP) to understand and generate natural human language. It gives machines the ability to hold meaningful dialogs with humans in spoken or written form.
Few fields in AI have made such amazing progress in recent years as conversational AI. Personal assistants such as Siri and Alexa have rapidly grown in popularity, both with consumers and organizations, as their performance has continuously improved.
One of the keys to the boom in conversational AI was the invention of the transformer architecture. Transformer-based models such as Google’s BERT and OpenAI’s GPT-2 and GPT-3 are trained on huge volumes of unlabeled text from across the internet, producing very large natural language models, sometimes with billions of parameters. BERT now powers almost every English-based search on Google.
Because they learn from unlabeled text, these models remove much of the expensive manual process of labeling training data, and they provide a massive performance boost for the NLP technology that underpins conversational AI.
So, how can capital markets firms take advantage of these new levels of conversational AI performance and accessibility? Let’s explore the potential use cases.
The margin call and settlement process for over-the-counter (OTC) derivatives trades can be a highly iterative and interactive process when the counterparties dispute margin calls. Operations teams send emails back and forth until an agreement is reached. If disputes remain unresolved, banks face increased dispute capital charges, and counterparties are expected to resolve them promptly.
The recent COVID-19 related market volatility has dramatically increased collateral volumes and highlighted the challenges in this area. In 2020, there was a 76% increase in the average number of margin calls, and the value of margin calls grew by over 200%, massively increasing the workload of collateral managers.
Conversational AI systems that can read emails and automatically resolve simpler disputes can help reduce operations teams’ workload. These AI systems, such as IVP Treasury and Synechron Automated Margin Call Management, can also detect patterns in disputes to help improve future trade structures and contracts.
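To make the idea concrete, here is a minimal sketch of how automated dispute triage might work: extract the counterparty's call amount from a free-text email and auto-resolve small discrepancies, escalating everything else to the operations team. The function names, the `USD` message format, and the 1% tolerance are illustrative assumptions, not taken from any vendor product mentioned above; a production system would use a trained NLP model rather than a regex.

```python
import re
from typing import Optional

def extract_call_amount(email_body: str) -> Optional[float]:
    """Pull the first USD amount out of a free-text margin call email.
    (Illustrative: a real system would use a trained entity extractor.)"""
    match = re.search(r"USD\s*([\d,]+(?:\.\d+)?)", email_body)
    return float(match.group(1).replace(",", "")) if match else None

def triage_dispute(our_call: float, email_body: str,
                   tolerance: float = 0.01) -> str:
    """Auto-resolve discrepancies within tolerance; escalate the rest."""
    their_call = extract_call_amount(email_body)
    if their_call is None:
        return "escalate"          # no amount found -> manual handling
    if abs(our_call - their_call) <= tolerance * our_call:
        return "auto-resolve"      # within agreed tolerance
    return "escalate"              # genuine dispute for the ops team
```

Only the unresolvable cases then reach a human, which is where the workload reduction comes from.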
With the Markets in Financial Instruments Directive (MiFID II) requiring sell-side firms to charge for research separately, many firms are under growing pressure to reduce research costs while enhancing quality. Some are even reducing the number of research analysts, which has made producing and consuming research more challenging on both the sell side and the buy side.
In response to these challenges, we’re now beginning to see increased use of conversational AI to make high-quality research more cost-effective. Morgan Stanley, for example, has developed a chatbot to help analysts and sales teams search the more than 50,000 reports it generates each year. And JP Morgan has made its Corporate and Investment Banking research available over Alexa.
These technologies not only reduce costs on both the sell side and the buy side; they also streamline research content, enhancing its quality and increasing productivity.
Wider adoption of Straight Through Processing (STP) has led to an increasing end-to-end optimization of trading operations. However, structured securities trading remains largely manual and high touch, especially during pre-sales.
Structured products generate high commissions for investment banks, but at high costs. With conversational AI, firms can now optimize structured trading operations to increase margins.
Conversational AI systems can capture semi-structured request-for-quote (RFQ) conversations into a schema that firms can analyze across the lifecycle of many trades. These systems can provide real-time insights to help sales and trading teams optimize processes and standardize frequently traded bespoke products.
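As a sketch of what "capturing an RFQ conversation into a schema" might look like, the snippet below parses a semi-structured RFQ message into a small dataclass. The schema fields (`side`, `notional`, `product`) and the message format are hypothetical assumptions for illustration; there is no single industry-standard RFQ schema implied here.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class RFQ:
    side: str        # "buy" or "sell"
    notional: float  # in millions, per the assumed "25m" convention
    product: str     # free-text product description

def parse_rfq(message: str) -> Optional[RFQ]:
    """Extract side, notional, and product from a semi-structured RFQ.
    (Illustrative rule; a real system would use a trained NLP model.)"""
    match = re.search(
        r"(buy|sell)\s+([\d,]+(?:\.\d+)?)m?\s+(?:of\s+)?(.+)",
        message, re.IGNORECASE)
    if not match:
        return None
    side, notional, product = match.groups()
    return RFQ(side.lower(), float(notional.replace(",", "")), product.strip())
```

Once RFQs land in a structure like this, analyzing them across many trades — to spot frequently requested bespoke payoffs worth standardizing — becomes a straightforward data query.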
The entire corporate actions lifecycle — from the announcement of a corporate action event to entitlements being made available to shareholders — is based on manual processes that create significant inefficiencies.
Event announcements are not always made in standardized formats, and processing a huge and ever-increasing volume of manual and electronic responses to corporate action events is complex and tedious. Failure to process a single event can lead to losses in the tens of millions of dollars, along with reputational damage to investment management firms. The cost of corporate actions processing also runs into the hundreds of millions of dollars.
While conversational AI may not be able to help resolve all the issues in corporate actions processing, it has the potential to optimize some areas. For example, AI can gather and normalize corporate actions data from multiple sources such as newswires, the internet, and data providers. Another potential area for efficiency gains is processing customer responses to event notifications. A conversational AI system could read electronic responses received from customers and process them, sending only the complicated, incomplete, inconsistent, and incorrect responses to the operations team for manual processing.
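The routing logic described above — auto-process clean responses, hand the rest to operations — can be sketched as a simple validation gate. The field names and option codes below are hypothetical; real corporate action elections follow the event's own terms.

```python
# Hypothetical option codes for a corporate action election.
VALID_OPTIONS = {"CASH", "STOCK", "MIXED"}

def route_response(response: dict) -> str:
    """Return 'auto-process' for a complete, consistent election;
    otherwise 'manual-review' for the operations team."""
    required = {"account", "event_id", "option", "quantity"}
    if not required.issubset(response):
        return "manual-review"   # incomplete response
    if response["option"] not in VALID_OPTIONS:
        return "manual-review"   # inconsistent or incorrect election
    if not isinstance(response["quantity"], (int, float)) or response["quantity"] <= 0:
        return "manual-review"   # implausible quantity
    return "auto-process"
```

In this design the operations team sees only the exceptions, which is exactly the efficiency gain the article describes.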
To successfully introduce conversational AI in your organization, alignment with the business is critical. Choose use cases that will create real business value and then start small with minimum viable products (MVPs) to help manage the risk of introducing a new technology.
Continuously assess your MVPs against meaningful metrics to determine if you’re achieving the desired business value and to identify how and when you should adapt, scale, or decommission them.
To ensure smooth collaboration between your data science, machine learning, and IT teams, use a frictionless delivery process for conversational AI to bring high-quality models into production safely, reliably, and continuously.
In general, it’s helpful to break your conversational AI program into three stages: Initiate, Extend, and Enhance.
The diagram below shows the technical composition of a chatbot. In the first step, the intent recognizer tries to understand the intent of the input. For example, in the sentence “My phone number is 120 1234 567”, the intent is “provide phone number”.
Next, the named entity recognizer identifies named entities in the sentence — in our example, that’s the phone number. Then the dialog execution module decides the next actions during the dialog, based on the status of the conversation. These next actions could be asking for missing information, an API call to an external system, or a finishing message when the task has been accomplished. These actions are then performed by the action resolver.
In the last step, the natural language generator formulates the response to the user.
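The five components described above can be wired together in a toy end-to-end pipeline, using the phone-number example from the text. Simple rules stand in for each component here; in a real chatbot, the intent recognizer and named entity recognizer would be trained models.

```python
import re

def recognize_intent(text: str) -> str:
    """Intent recognizer (rule-based stand-in for a trained model)."""
    if re.search(r"phone number", text, re.IGNORECASE):
        return "provide_phone_number"
    return "unknown"

def recognize_entities(text: str) -> dict:
    """Named entity recognizer: find a phone-number-like digit string."""
    match = re.search(r"(\d[\d ]{6,}\d)", text)
    return {"phone_number": match.group(1)} if match else {}

def decide_next_action(intent: str, entities: dict) -> str:
    """Dialog execution: choose the next action from conversation state."""
    if intent == "provide_phone_number":
        return "save_phone_number" if "phone_number" in entities else "ask_phone_number"
    return "clarify"

def resolve_action(action: str, entities: dict) -> str:
    """Action resolver: perform the action (an API call is simulated)."""
    if action == "save_phone_number":
        # A call to an external system (e.g. a CRM) would go here.
        return f"saved:{entities['phone_number']}"
    return action

def generate_response(result: str) -> str:
    """Natural language generator: turn the result into a reply."""
    if result.startswith("saved:"):
        return "Thanks, I have recorded your phone number."
    if result == "ask_phone_number":
        return "Could you tell me your phone number?"
    return "Sorry, could you rephrase that?"

def chatbot(text: str) -> str:
    intent = recognize_intent(text)
    entities = recognize_entities(text)
    action = decide_next_action(intent, entities)
    result = resolve_action(action, entities)
    return generate_response(result)
```

Notice how the dialog execution step asks for missing information (no phone number found) and finishes with a closing message once the task is accomplished, mirroring the flow described above.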
To improve the quality of your chatbot during its development and operation, it’s vital to use an agile development process based on continuous delivery principles, such as MLOps or Continuous Delivery for Machine Learning (CD4ML).
The diagram below shows the continuous development and deployment process for a chatbot project. This process uses a continuous delivery platform to closely monitor the chatbot’s behavior in production and optimize it continuously. With this platform, you can automate and optimize most chatbot development tasks, including testing, deploying, monitoring, and patching.
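A pipeline of this kind might be expressed as a configuration like the sketch below. The stage names, scripts, and flags are entirely hypothetical and illustrate the shape of a CD4ML-style pipeline, not the syntax of any specific platform.

```yaml
# Hypothetical chatbot delivery pipeline (illustrative only)
stages:
  - name: train
    run: python train.py --data data/conversations/
  - name: evaluate
    # Quality gate: block deployment if intent accuracy regresses
    run: python evaluate.py --min-intent-accuracy 0.90
  - name: deploy
    # Canary release: expose the new model to a small share of traffic first
    run: python deploy.py --canary-percent 10
  - name: monitor
    # Production metrics feed back into the next training cycle
    run: python monitor.py --metrics intent_accuracy,fallback_rate
```

The key idea is that evaluation and monitoring are automated stages of the same pipeline, so a degraded model can be caught before, and after, it reaches users.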