Should You Be Putting Critical Data and AI Assets in the Hands of Non-Technical Users?
Low-code approaches have permeated even the most complex and impactful technologies and promise to increase your development capacity. Should you use this new generation of tools?
👋 Hi and welcome to The DX Report — all about Digital Transformation, the Digital Experience, and the Digital Enterprise. I’m industry analyst, author, and speaker Charles Araujo, and I’m all about providing insights and analysis for enterprise IT leaders as you make the big bets about your organization’s future!
The Bet: Putting AI and Critical Data in the Hands of Less-Technical Users
I recently had conversations with two start-ups, Iterate.ai and HumanFirst, that represent the intersection of three trends in enterprise IT: AI, data, and low-code/no-code tools. Low-code and no-code platforms, of course, have been around for quite some time and have been very effective at democratizing the development of applications, either by enabling so-called citizen development (business users building apps) or by enabling less-technical IT staff (e.g., business analysts) to develop applications.
In either case, the objective has been to put powerful technology in the hands of users who would otherwise be unable to participate in the development process, thereby increasing the enterprise's development capacity.
In fact, this broad movement has been so successful that low-code and no-code is on the verge of ceasing to be a distinct category and is, instead, becoming a feature-set employed in any number of technologies.
So, it was only a matter of time before tech companies dealing with some of the most complex technologies, such as data management and AI, would begin leveraging low-code and no-code techniques to democratize their use. Which begs the question: should you leverage these sorts of low-code-enabled tools to put these technologies in the hands of non-technical or less-technical users?
HumanFirst and Iterate.ai operate in very different spaces. HumanFirst is focused on transforming unstructured, conversational data (e.g., the transcripts of customer service calls) into structured data that can then be used as part of the broader data estate, but specifically to help improve AI and natural language understanding (NLU) models. Iterate.ai, on the other hand, is effectively an AI-centric development environment. Supporting nine different types of "AI applications," the company aims to help enterprises speed the development and deployment of AI.
What these two have in common, however, is that both rely heavily on a low-code approach to enable non-technical or less-technical users to participate in the development of these cutting-edge technologies.
On the surface, it has obvious appeal.
Every enterprise is in a race to both harness the competitive value of their data and to leverage AI to create competitive advantage. The bottleneck is having the technical capacity (in terms of qualified and trained staff) to act on all this demand.
But those resources are hard — and getting harder (heard about the $900k AI engineer?) — to find.
For companies like Iterate.ai and HumanFirst, the answer is obvious: leverage codeless approaches to make these technologies more accessible. In HumanFirst's case, this is about letting users who are versed in whatever domain the data represents (e.g., customer service engagements) participate in the processing and structuring of the data. For Iterate.ai, this isn't about letting business users develop AI apps, it's about transforming an enterprise's army of traditional developers into AI developers by leveraging their knowledge of basic application development and then giving them access to AI-enablement without having to come up to speed on new languages or the intricacies of model training.
It's hard to argue with the logic and the underlying value proposition, but is it a good idea?
The answer to that question is going to come down to a few critical factors, starting with demand:
Demand Volume: The first critical factor is whether there is enough demand to justify any investment whatsoever. If your organization is moving more slowly down the data-enablement or AI-enablement paths, then you may not need the additional capacity that these sorts of tools can bring to the table. That said, I'd be cautious about simply dismissing the demand. These are two critical areas that, if executed well, will lead to competitive advantage. Fall behind and there may be no catching up.
Demand Criticality: The second big question is the criticality of the demand. The more critical the demand, the more risk you should be prepared to take in the form of on-boarding new tools and opening the door to this sort of democratization.
If there's insufficient demand, you're done. But that's probably not the case. So, if you're thinking of going down this road, the next set of factors come down to risk management.
The reality is that bringing on any new technology that abstracts out complexity increases risk. It gives significant power to practitioners who may have the requisite domain knowledge, but who lack the technical knowledge to understand potential ramifications of poor decisions (or that they're poor decisions in the first place).
In some cases, the potential ramifications are just inefficient processes, duplicative efforts, and so on. In the worst cases, however, these sorts of efforts can expose you to data breaches, data corruption, or significant business decisions being made with faulty data. To address these potential risks, you need to be looking at your guardrails and governance practices:
Guardrails: The first, and probably most important, factor is the degree to which any given tool enables you to implement and maintain guardrails. Ideally, this takes the form of a collaborative approach that enables domain experts (e.g., non-technical/less-technical users) to coordinate activities with experts in the given technology. This collaborative model should result in guardrails that protect the non-technical users from making impactful mistakes, but without limiting their ability to move quickly and innovate. The collaborative nature of this process is essential because these guardrails cannot be static. Your needs and environment will change too quickly. Static guardrails will either become easily skipped speed bumps or barriers that destroy the usefulness of the tool altogether.
Governance: The second factor is governance. First and foremost, you need to have a mature governance model that you can leverage to oversee this sort of democratized development. This is purely a process and cultural consideration. Still, the tool should make this process easier by providing reporting and access to controls that allow a governance function to operate with as little friction as possible.
The Brass Tacks: Invest, Pass, or Hold?
🚀 Invest (Cautiously) or ⚠️ Hold
First, let me be clear that my "investment rating" is for the general practice and concept and is not specific to either Iterate.ai or HumanFirst. Both companies recognize the importance of the guardrail and governance controls, and incorporate them to varying degrees. Do your own due diligence, but both companies seem to be viable options in their respective domains if you choose to go down this road.
The reason for the hesitancy about the overall approach is that there's no avoiding the risk factor — even with proper guardrails and controls. These are incredibly powerful technologies that will likely have deep and meaningful impacts on your organization's competitive footing. Any steps you take in these directions should, therefore, be slow and cautious.
That said, there is a strong case to be made that the organizations that will win the day are those that can seize the moment to deploy data-driven and AI-powered apps, harness the power of customer and employee conversations, and unleash their data using AI. Democratizing the development of these high-impact apps and enabling those with domain expertise to participate in their development makes a ton of sense.
You just need to recognize the power you're playing with.
So, that’s the brass tacks for my point of view, but what do you think? Agree? Think I’m completely off? Let me know!
And don’t keep this conversation to yourself. Invite your friends and associates to weigh in!