The last couple of months have been crammed full of AI-related announcements. Generative AI capabilities, like those showcased by OpenAI's ChatGPT, have fueled a tremendous amount of excitement, along with a healthy dose of hype.
As a consequence of the hype, many businesses have found themselves asking two seemingly opposing questions:
1. How much of this AI stuff is real enough to enable - or threaten - our business?
2. Even if the best examples of the current batch of AI capabilities are somehow fundamentally flawed, can we risk ignoring this trend?
The best way to answer these questions is to understand the technology. But how quickly should a business act? How should it pace its activities and determine where to focus? Is this a sprint or a marathon? Is there one finish line or many?
As much as it may seem like the full force of AI disruption is upon us, we are still in the very early days of AI's capabilities. The pace of advancement is accelerating as several forces compound.
If you consider that today's AI capabilities are the absolute worst and most expensive example of "state of the art" AI we will ever see, it's clear we're in for an exciting ride.
And at the same time, we need to consider how long waves of disruption actually take to ripple through enterprises, particularly highly regulated ones. Many banks continue to spend more on cloud transformation than they did the previous year, despite the first notable public cloud product - Amazon's S3 - being released seventeen _years_ ago.
We are still very, very early in understanding how - specifically - AI will disrupt enterprises, including the business of banking.
The most important thing every business can do is... just start.
But, how?
Some might be tempted to wait until an overarching framework or strategy can be created. The reality is that this space is evolving too fast: any grand strategy you _start_ creating now won't be done for months, at which point it will already be outdated.
Banks need a more iterative approach that creates a tighter feedback loop, informed by actually getting their hands dirty. To that end, I propose three core tactics for an AI strategy I'll call "accelerate safely":
You won't know what you don't know when you're just getting started. Rather than trying to figure it out through theory or tabletop exercises, find a way to experiment safely.
This means involving stakeholders from compliance, information security, technology, and product teams to agree on the guardrails that allow them to say "yes." That has to be the framing: as an organization, we have to take action to learn more about AI.
With that premise, ask: what rules, if satisfied, would let everyone agree to move forward with some small form of action - a pilot, a prototype, a proof-of-concept, whatever it is?
These rules can be refined, improved, and potentially loosened with more experience (and with controls that will withstand regulatory scrutiny). Great learning can happen even under initially restrictive rules.
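To make the idea concrete, here's a minimal sketch of what codified guardrails might look like, assuming a team that wants to review and version its rules like any other artifact. Every name, provider, and threshold below is an illustrative assumption, not a recommendation:

```python
# Hypothetical experiment guardrails expressed as data, so compliance,
# security, and technology stakeholders can review them together.
GUARDRAILS = {
    "allowed_data_classes": {"public", "synthetic"},  # no real customer data
    "approved_providers": {"azure-openai"},           # vetted vendors only
    "human_review_required": True,                    # a person checks every output
    "max_monthly_spend_usd": 5_000,                   # cap the blast radius
    "log_all_prompts_and_outputs": True,              # evidence for later review
}

def experiment_allowed(data_class: str, provider: str) -> bool:
    """Return True only if a proposed experiment satisfies every rule."""
    return (
        data_class in GUARDRAILS["allowed_data_classes"]
        and provider in GUARDRAILS["approved_providers"]
    )
```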
Of course, eventually, and with repeated success and learning, these rules need to be relaxed. To reach the point where doing so is comfortable, you'll want to have lots of experience and evidence to draw on to support your decision. The point at the early stage is to get everyone feeling good about saying "yes".
Performing this kind of exploration requires access to modern, elastic computing infrastructure. If you've already vetted a public cloud provider, their AI service offerings - like Microsoft Azure's Cognitive Services - can provide a jump-start.
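As an illustration, a first experiment against one of those managed services can be just a few lines. This sketch uses Azure's Text Analytics client to score sentiment on a synthetic complaint; the endpoint and key placeholders are assumptions about your own vetted subscription:

```python
# Requires `pip install azure-ai-textanalytics`.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                      # placeholder
)

# Synthetic, non-customer text keeps the experiment inside the guardrails.
documents = ["The mobile check deposit failed twice before it finally worked."]
for result in client.analyze_sentiment(documents):
    if not result.is_error:
        print(result.sentiment, result.confidence_scores)
```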
If you haven't already vetted a public cloud provider, now is the time. While elastic compute environments have always had benefits for certain workloads, access to elastic, virtualized GPU environments is now crucial. This type of infrastructure is likely to become more constrained, not less, as more organizations invest in the space. Capital investments in private, on-premises, enterprise-scale GPU environments may make sense for some organizations, but predicting that need in the experimentation phase would be incredibly difficult.
Every stakeholder should have some learning, accomplishment, or insight they take away from AI experimentation. These "wins" don't have to come from something that works. In fact, quite the opposite - learning what doesn't work is valuable insight, especially when the stakes are low!
Compliance teams can use experiments like this to test how controls can work and what types of reviews are needed to surface potential issues. It also creates a foundation for conversation with regulatory partners about how you're thoughtfully and intentionally creating an environment for experimentation.
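One hypothetical example of such a control is a thin wrapper that records every prompt and response before anything reaches a user, giving compliance an audit trail to review. The `call_model` stub below stands in for whichever approved service you actually use:

```python
import datetime
import json

AUDIT_LOG = "ai_experiment_audit.jsonl"

def call_model(prompt: str) -> str:
    """Placeholder for a real call to an approved AI provider."""
    return "stub response"

def audited_call(prompt: str, user: str) -> str:
    """Log every prompt/response pair so reviews have evidence to work with."""
    response = call_model(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```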
Business teams can start to extrapolate how experiments with AI capabilities like LLM-powered chatbots could help in a specific business context, such as assisting call center agents or surfacing deeper insights from customer feedback. These early explorations can spark more focused proofs-of-concept or pilots with more targeted outcomes and more defensible business cases.
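Here's an illustrative sketch of one such proof-of-concept: asking an LLM to pull themes out of synthetic customer feedback. It assumes the `openai` package and an `OPENAI_API_KEY` in the environment; the model name and prompt are assumptions, not endorsements:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Synthetic feedback only - no real customer data during experimentation.
feedback = [
    "The wire transfer screen times out constantly.",
    "Love the new budgeting view, but alerts arrive a day late.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever your provider offers
    messages=[
        {"role": "system", "content": "Summarize bank customer feedback into themes."},
        {"role": "user", "content": "\n".join(feedback)},
    ],
)
print(response.choices[0].message.content)
```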
And finally, technology teams should come away from these exercises empowered. This is a quickly evolving space, and everyone is learning more day by day. By creating an environment that safely provides access to current technology, with the backing of business and compliance partners, technology teams can better support the business.