Product-based AI vs. AI Products: Making Sense of AI in the Enterprise
Enterprise IT leaders need to drive the conversation around AI in the enterprise. But first, you'll need to understand and make some critical distinctions.
I'm coming to dread talking about AI these days, but an important distinction came up in a recent discussion in our mastermind group — and it made me think about things a bit differently.
In case you don't know, I run a mastermind group for enterprise IT executives. We meet once a month to discuss emerging and pressing topics affecting enterprise IT leaders. There are no vendors, no agendas, just the important conversations that need to happen — and recently, that's been a lot of talk about generative AI.
(Note: If you're an enterprise IT executive, feel free to reach out for an invitation to this free group.)
Whenever a topic is a source of frothy hype like generative AI is at the moment, I tend to avoid it. I prefer to let it settle a bit so that we can have meaningful conversations about how to apply it rather than spend energy on all the what-ifs.
Still, it's hard to avoid the impact that generative AI and large language models (LLMs) are having on the conversation — or the fact that enterprise IT leaders will be asked about it.
So, in the midst of a conversation about the need to build a digital transformation platform, a discussion erupted around the steps that enterprise IT leaders are taking with AI.
As I expected, the answer was mostly that there's a lot of cautious experimentation taking place. But comments from two of the participating executives really made me think about the various approaches to and types of AI — and the distinctions enterprise IT leaders must bring to this conversation to make it meaningful.
Distinguishing Between Product-based AI and AI Products
The first light bulb moment for me was the idea that there is what I'll call product-based AI, which we need to distinguish from what I'm calling AI Products. An AI Product may not be a product, per se, but is essentially a bespoke application of LLMs to your proprietary data to create something of unique value for your organization.
A healthcare CTO described how they were experimenting with applying LLMs to patient data to create specific value for their organization (I'm purposefully not going into any details here).
But that approach to applying AI was distinct from leveraging the AI features that so many of the enterprise tech companies are now embedding within their products — thus, product-based AI.
This aligns well with what I'm seeing in the market — and is an important distinction for enterprise IT leaders to make as they craft their AI strategy.
For instance, I recently spoke with the team at Iterate.ai. A significant part of their focus is helping enterprise IT organizations build these types of bespoke, AI-powered apps (using a wide range of AI-based technologies, which we'll get to in a moment).
On the other hand, I've been pretty impressed with what companies like BigPanda are doing in leveraging LLMs to make their products better, more effective, and easier to use. This is product-based AI.
Both approaches will have an important role to play in the enterprise IT leader's toolbox, so both need a place at the table.
Distinguishing Between Generative AI, Conversational AI, and Everything Else
During our session, I also expressed my frustration that generative AI technologies have consumed the entire conversation around AI.
The challenge is that generative AI and LLMs are just one flavor of AI.
Moreover, while I remain bullish on generative AI, the nature of the technology introduces all sorts of challenges and risks — bluntly, it is not the magical answer to all problems.
I also think it's unfortunate that the hype around generative AI is creating confusion around the more established domain of conversational AI. Companies like Amelia, which have been at the forefront of this space, are now having to help enterprise IT leaders understand and then distinguish between the use cases for LLMs — which have no contextual understanding — and those for contextually aware conversational AI technologies.
But this challenge extends far beyond just the confusion between conversational AI and LLMs. The domain of artificial intelligence is vast and encompasses countless forms, including machine learning, deep neural networks, computer vision, and many others.
The point is that there are many facets and types of AI that will play an important role in the enterprise tech stack — and it is shortsighted to put too much (or your only) focus on generative AI.
An AI World of Opportunity
Despite my frustration and concern that we need to see greater nuance in our conversations around AI in the enterprise, I am also super excited about the opportunity it presents.
The hype of generative AI has delivered at least one good thing: it's opened the door to meaningful conversations about the role of AI in the enterprise in ways that probably couldn't have happened just a year ago.
Likewise, the fear of its ramifications and the resulting regulatory actions (read Casey Newton's great breakdown of the Biden administration's recent executive order, as an example) will keep it a top-of-mind issue for many non-IT executives.
If you can take advantage of this moment and help others understand its nuances, it will create an opportunity for you to elevate the conversation and move your organization forward.