The CFPB's Chatbot Worries and 3 Ways Vectari's Bank-Grade AI Can Help

June 12, 2023

When it comes to banks and chatbots, it may feel like what’s old is new again. Just six years ago, chatbot buzz seemed to be everywhere as banks considered how the then-current natural language processing technology would power a new generation of digital customer interactions. The buzz quickly receded, though, as the limitations of that technology became clear. With the rise of large language models (LLMs) such as ChatGPT, the chatbot narrative for banks has been rebooted. While these contemporary techniques for building chatbots are much more robust, they also introduce new risks. The Consumer Financial Protection Bureau (CFPB) has taken note.

Last week, the CFPB issued a new Issue Spotlight titled Chatbots in Consumer Finance, highlighting how financial institutions are using chatbots today and setting out a series of concerns regarding the limitations of those technologies. At Rivet, our co-founders have experience advising and helping to troubleshoot some of the industry’s chatbot builds, including by identifying and addressing some of the very problems the Bureau’s report highlights. We have brought the lessons learned from that experience to our approach in building each of the compliance solutions that touch chatbots. Several of the concerns highlighted by the Bureau relate less to the concept of a chatbot itself and more to the data with which the bots are trained and the resulting lack of intuitive, contextual responses. Notably, those are exactly the kinds of issues that can be addressed through the thoughtfully trained, robustly governed, and carefully monitored approach to generative AI that Rivet has at its core.

Here are a few of the key issues the Bureau flagged and how Rivet can help address them:

· “Difficulty in recognizing and resolving people’s disputes”

As the Bureau suggests, rules-based models are limited by the imagination and knowledge of the rule creators, restricting the set of useful responses. By contrast, bank-grade AI must be trained on real-world interactions between customers and customer service representatives that capture not only a fuller set of potential concerns, but also far more nuance in the way that a customer may present a question. As the report discusses, this is particularly important in the context of Spanish-language content, where there may be a significant gap between the dictionary definition of a term and how a consumer may refer to it. For example, a rules-based model may only recognize that a customer is asking about a wire transfer if the customer uses the formal term for the product, “transferencia electrónica.” However, a customer is far more likely to use the common colloquial term for a wire, “giro,” which translates literally to a turn (as in the money making a half-loop from sender to recipient). A model trained on real-life interactions can learn to map “giro” onto “wire,” such that the chatbot responds more accurately when the term comes up.
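To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of term-to-concept mapping a model can pick up from real transcripts. The dictionary, concept names, and helper function are hypothetical; in practice the mapping would be learned from labeled interactions rather than hand-coded.

```python
# Illustrative only: a toy normalization layer that maps colloquial Spanish
# terms onto canonical product concepts before intent routing. In a trained
# model this mapping is learned from real transcripts, not hand-written.
COLLOQUIAL_TO_CONCEPT = {
    "transferencia electrónica": "wire_transfer",  # formal term
    "giro": "wire_transfer",                       # common colloquial term
    "plata": "funds",                              # hypothetical additional example
}

def concepts_in(utterance: str) -> set[str]:
    """Return the canonical concepts mentioned in a customer utterance."""
    text = utterance.lower()
    return {concept for term, concept in COLLOQUIAL_TO_CONCEPT.items() if term in text}

# Both phrasings resolve to the same concept, so the bot can answer either one.
print(concepts_in("¿Dónde está mi giro?"))                       # {'wire_transfer'}
print(concepts_in("¿Dónde está mi transferencia electrónica?"))  # {'wire_transfer'}
```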

To address this concern, Rivet’s approach to its translation products is designed not only to incorporate real-life customer inquiries and responses, but also to rely on thorough and continuous tuning by native Spanish speakers with regulatory expertise and extensive translation experience. This tuning allows the models not only to account for colloquialisms and dialects, but also to have heightened language robustness around the concepts that experts know to carry the widest variety of potential terms, such as payments.

· “[I]naccurate, unreliable, or insufficient information”

The Bureau also highlighted concerns regarding the quality of the responses provided by the chatbot, noting:

[C]hatbots sometimes get the answer wrong. When a person’s financial life is at risk, the consequences of being wrong can be grave. In instances where financial institutions are relying on chatbots to provide people with certain information that is legally required to be accurate, being wrong may violate those legal obligations.

Financial institutions have been dealing with the accuracy problem for far longer than chatbots have existed; every call to a call center agent runs the same risk of producing potentially inaccurate or incomplete information. After all, like models, human agents are only as strong as the training and tools available to them. For that reason, Rivet’s approach to our compliance solutions always starts with the question, “How would we train a human to do this well?” We then ensure that the models are trained on, at the very least, the same types of job aids, training manuals, and other materials available to humans. More importantly, we are able to train models on actual transcripts of interactions between customer service agents and customers, in order to better understand the nuance of those conversations and ensure that the information provided is complete and accurate.
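As a rough illustration of the “train the model like a human” idea, the sketch below assembles the same kinds of materials an agent would study (job aids, manuals, transcripts) into a small grounding corpus and pulls the most relevant passages for a question. The document names, contents, and naive keyword retrieval are hypothetical placeholders; a production system would use a proper search or embedding index.

```python
# Illustrative sketch: the same materials a human agent trains on become the
# grounding corpus the model draws from. Names and contents are hypothetical.
from dataclasses import dataclass

@dataclass
class GroundingDoc:
    source: str   # e.g. a job aid, training manual, or agent transcript
    text: str

CORPUS = [
    GroundingDoc("dispute_job_aid", "Steps for logging a billing dispute and the applicable timelines."),
    GroundingDoc("agent_training_manual", "How to explain a pending transaction versus a posted one."),
    GroundingDoc("agent_transcript_2023_04", "Agent: I located the charge; here is how to dispute it."),
]

def retrieve(question: str, corpus: list[GroundingDoc], k: int = 2) -> list[GroundingDoc]:
    """Naive keyword overlap; stands in for a real retrieval or embedding index."""
    words = set(question.lower().split())
    return sorted(corpus,
                  key=lambda d: len(words & set(d.text.lower().split())),
                  reverse=True)[:k]

for doc in retrieve("how do I dispute a charge", CORPUS):
    print(doc.source)
```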

However, even the best training tools will fall short unless the models are robustly trained by humans with extensive experience resolving complex customer inquiries and addressing consumer financial regulatory issues. For that reason, Rivet’s approach is designed to always include extensive and ongoing feedback and training from individuals with such experience, who are not only able to confirm accuracy, but can also advise on the types of questions or issues that raise the greatest risk of accuracy concerns, allowing model development to focus even further on those areas.

In addition, Rivet recommends that clients using chatbots consider, wherever possible, having the answer lead the customer to a specific document or page that provides the background for the answer. For example, if the customer is asking about a particular transaction, it should be easy for the customer to access a link to the relevant statement or account page where they can see the transaction in question. This helps ensure accuracy by allowing the customer to review a “single source of truth” rather than having to rely on the bot’s description of the source.
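A minimal sketch of that “single source of truth” pattern is below: every bot answer carries a pointer to the underlying record the customer can check directly. The field names and URL are hypothetical.

```python
# Illustrative sketch: the answer object always links to the document or
# account page it is summarizing, so the customer can verify it directly.
from dataclasses import dataclass

@dataclass
class BotAnswer:
    text: str          # the bot's natural-language answer
    source_label: str  # human-readable description of the source
    source_url: str    # deep link to the statement or account page

answer = BotAnswer(
    text="That $42.10 charge posted on May 3 from ACME COFFEE.",
    source_label="May 2023 statement, page 2",
    source_url="https://bank.example.com/statements/2023-05#txn-18273",
)
print(f"{answer.text} (See: {answer.source_label}, {answer.source_url})")
```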

· “Hindering Access to Timely Human Intervention”

The Bureau also focused on issues surrounding inadequate bot-to-human handoffs:

Chatbots are programmed to resolve specific tasks or retrieve information and sometimes cannot meaningfully tailor services for a distressed customer. When customers are seeking assistance with financial matters, they may feel anxious, stressed, confused, or frustrated. Research has shown that when people experience anxiety their perspectives on risk and decisions shift. A chatbot’s limitations may leave the customer unable to access their basic financial information and increase their frustration. …

One way to help prevent this concern is to use complaint and interaction training data to proactively recognize which topics are less well suited to chat interaction. Rivet’s approach to complaint analysis includes learning to identify particularly thorny or complex issues, or issues that may require more back-and-forth with the customer than is typical. By training the model on actual interactions and agent notes, it is possible to identify the set of topics that may be particularly stressful, time-sensitive, or hard to explain to customers, and for which a human chat tool or phone option may be more efficient.

However, such identification is only helpful if the chatbot strategy includes risk-based proactive handoffs to a human agent. By helping the model identify when to offer human assistance before the customer even requests it, the financial institution can not only build goodwill and trust with the customer, but can also help ensure that sensitive topics get additional attention. Moreover, the model can use those more complex interactions to continue refining and learning how and when to present certain information. It may also make customers more comfortable using the tool in the future if they know they don’t have to worry about being stuck in a loop when the bot cannot provide the answer they need.
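As a rough sketch of what a risk-based proactive handoff policy could look like, the snippet below combines a topic risk score with simple frustration and looping signals. The topic list, thresholds, and signals are hypothetical and would, in practice, be derived from the complaint and interaction data described above.

```python
# Illustrative sketch of a risk-based proactive handoff policy. Topic scores,
# thresholds, and signals are hypothetical placeholders.
HIGH_RISK_TOPICS = {"fraud_dispute": 0.9, "frozen_funds": 0.85, "bereavement": 0.95}

def should_offer_human(topic: str, unresolved_turns: int, sentiment: float) -> bool:
    """Offer a human agent before the customer has to ask for one."""
    topic_risk = HIGH_RISK_TOPICS.get(topic, 0.2)   # default: low-risk topic
    frustrated = sentiment < -0.5                   # e.g. from a sentiment model
    looping = unresolved_turns >= 3                 # bot keeps missing the question
    return topic_risk >= 0.8 or frustrated or looping

# A fraud dispute triggers the offer immediately, even on the first turn.
print(should_offer_human("fraud_dispute", unresolved_turns=0, sentiment=0.0))    # True
print(should_offer_human("balance_inquiry", unresolved_turns=1, sentiment=0.1))  # False
```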

When using a human offramp, institutions should consider giving agents fast and easy access to the conversation the consumer just had with the bot, both so the consumer doesn’t feel like they are starting over from scratch and to give the agent valuable context to best serve the customer. Rivet also recommends ensuring that any bot-to-agent transitions are documented and retained, creating additional training data that can give the model further feedback to improve future interactions.
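A handoff record along these lines, sketched below with hypothetical field names, can give the agent the full prior conversation and preserve the transition as future training data.

```python
# Illustrative sketch of a bot-to-agent handoff record: the agent sees the
# prior conversation, and the transition is retained as training feedback.
# Field names and values are hypothetical.
import json
from datetime import datetime, timezone

def build_handoff_record(session_id: str, transcript: list[dict], reason: str) -> dict:
    return {
        "session_id": session_id,
        "handed_off_at": datetime.now(timezone.utc).isoformat(),
        "reason": reason,                 # e.g. "high-risk topic: possible fraud"
        "transcript": transcript,         # full bot conversation, shown to the agent
        "retained_for_training": True,    # feeds future model refinement
    }

record = build_handoff_record(
    session_id="abc-123",
    transcript=[
        {"role": "customer", "text": "I don't recognize this charge."},
        {"role": "bot", "text": "I'm connecting you with an agent who can help."},
    ],
    reason="high-risk topic: possible fraud",
)
print(json.dumps(record, indent=2))
```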

Rivet sees the CFPB’s report not as an indictment of all chatbot use, but rather as an opportunity to identify and address specific problem spots and areas that need additional safeguards and attention, so that chatbot use enhances rather than detracts from the customer’s experience. Rivet’s iterative, compliance-expert-guided approach to AI solution design allows us to incorporate the Bureau’s feedback and data to further strengthen and refine our tools and ensure they are in line with regulatory expectations and best practices.

To see if Rivet’s tools may be right for your company, send us an email at hello@rivet.ai.

