The Risk-Fraught Future of Augmented Intelligence and Its Impact on the Enterprise
Artificial intelligence (AI), in its various forms, is already having a significant impact within the enterprise, particularly for specific use cases. That success has led some to fear the impact AI may have on the so-called future of work.
It’s a topic that has been bubbling just beneath the surface for as long as humans have imagined creating a machine that might one day look and act like a human.
While this class of technology is unquestionably developing at an incredible rate, few of us who follow this sector believe there is any real reason to fear its development. But that isn’t to say that it won’t have a significant impact on the future of work in the enterprise — it will.
As I and others have pointed out, however, that impact will be in the form of what is being called augmented intelligence, in which organizations leverage AI-based technologies to enhance, assist, and otherwise augment their human workers. Still, there are three challenges with how most of us have addressed this augmented future: the true nature of augmentation, the leadership and management impact, and the massive technology issue staring down enterprise leaders.
What Does Augmented Intelligence Really Mean to the Future of Work?
Let’s tackle these one at a time, beginning with the first: what do we really mean when we talk about augmented intelligence?
If you read my article in CIO from a few years ago, you’ll see that I describe augmented intelligence technologies as doing precisely what it might sound like: augmenting humans in their day-to-day tasks.
And that’s how most vendors using this phrase mean it — and how most of us are using AI today. We go about our tasks and, at different points, some algorithm pops up a notice with some advice or insight, or we use an AI-powered app to make a decision. The common thread is that we, the humans, are in control.
But is this what the future of work with augmented intelligence really looks like? Probably not.
In a recent Harvard Business Review article, David De Cremer and Garry Kasparov argue that the future of work is one in which humans and machines are “working in tandem” to take advantage of their respective talents.
Unlike most of us who are merely positing theory, Kasparov speaks from experience. You may recognize him as the chess grandmaster who, for all his accolades, is perhaps most famous for his 1997 loss to IBM’s Deep Blue. He has been studying the dynamics of this relationship between human and machine ever since.
De Cremer and Kasparov argue that we realize the most significant benefits when we optimize the collaboration between man and machine. They point to a 2005 online chess tournament as an example: in that event, a team of amateur chess players, collaborating effectively with their three computers, beat grandmasters who were playing with supercomputers.
The AI Leadership Challenge
It’s almost impossible to overstate the significance of that outcome. Up to now, the application of AI has been in the service of human workers — just another, albeit much more powerful, tool that an organization could provide to its employees.
However, what is emerging is that it will be the ability of organizations to create human-machine collaborative partnerships that will determine the eventual effectiveness of their AI investments.
And that will mean a fundamental shift in the nature of leadership and management in the enterprise.
In another CIO article, I suggested that the most likely use of AI in the enterprise (related to work) might be to automate managers. While I still believe this may be true for the management of quantifiable activities, this need to manage human-machine collaborations will radically alter the demands on enterprise leaders.
“Teams will gradually become composed of humans and non-humans working together, which we refer to as the ‘new diversity,’” explain De Cremer and Kasparov. “The new shape of teams will call for leaders who are skilled in bringing different parties together. In the future, creating inclusive teams by aligning man and machine will be an important ability to be trained and developed.”
I believe it’s fair to argue that this human-machine alignment and management is a skill set that is essentially non-existent in the enterprise today.
Yet, as De Cremer and Kasparov point out, it will be essential in the relatively near future. The question is whether or not enterprise executives can themselves adapt fast enough and then develop the leadership bench they will need for this future.
The Technology Challenge Around the Corner
But even if enterprises can rise to the leadership challenge, that may still not be enough.
Underlying this view of the future is the technology itself. And we are only beginning to truly understand the limitations and unintended consequences embedded in this often opaque technology.
The fact remains that most of today’s applications of AI are in relatively controlled environments and use cases. But that is rapidly beginning to change.
As it does, highly consequential issues are beginning to emerge. One such issue is data shift, which occurs when there is a mismatch between the data used to train machine learning models and the real-world situations in which organizations apply them, something that happens all too often. As organizations place AI into more dynamic, real-world environments, data shift challenges are multiplying.
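To make the idea concrete, here is a minimal, hypothetical sketch in Python (using NumPy and scikit-learn) of how a model that looks accurate on held-out test data can degrade once the inputs it sees in production drift away from the data it was trained on. The feature, the labeling rule, and the amount of drift are all invented for illustration.

```python
# A minimal, hypothetical sketch of data shift.
# The labeling rule stays fixed; only the input distribution drifts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def label(X):
    # Fixed, nonlinear "ground truth" the model only ever sees locally.
    return (np.sin(X[:, 0]) > 0).astype(int)

# Training and held-out test inputs come from the region the model was built for.
X_train = rng.normal(loc=0.0, scale=1.5, size=(5000, 1))
X_test = rng.normal(loc=0.0, scale=1.5, size=(1000, 1))

# "Production" inputs have drifted away from the training distribution.
X_prod = rng.normal(loc=4.0, scale=1.5, size=(1000, 1))

model = LogisticRegression().fit(X_train, label(X_train))

print("In-distribution accuracy:", accuracy_score(label(X_test), model.predict(X_test)))
print("Post-shift accuracy:     ", accuracy_score(label(X_prod), model.predict(X_prod)))
```

The test-set score looks healthy right up until the input distribution moves, which is precisely why controlled pilots can mask problems that only appear at production scale.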
But even that may not be the worst of it.
In a recent MIT Technology Review article, senior editor Will Douglas Heaven explains that a phenomenon called underspecification renders AI-driven behaviors and actions unpredictable in the real world. He explains:
“The [machine learning] training process can produce many different models that all pass the test but—and this is the crucial part—these models will differ in small, arbitrary ways…These small, often random, differences are typically overlooked if they don’t affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world.
“In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won’t.
“This is not the same as data shift, where training fails to produce a good model because the training data does not match real-world examples. Underspecification means something different: even if a training process can produce a good model, it could still spit out a bad one because it won’t know the difference. Neither would we.”
The point is that as market demands push enterprises to adopt more AI-based technologies, and as they begin to deploy them at scale in a collaborative fashion with their human workers in the real world, the results will likely become increasingly less predictable.
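As a hedged illustration of the underspecification idea (not of any particular vendor’s pipeline), the sketch below trains several small models that differ only in their random seed, checks that they look interchangeable on a held-out test set, and then measures how often they disagree with one another on stress-test inputs drawn from outside the training distribution. All data, model choices, and thresholds here are invented for illustration.

```python
# Hypothetical sketch of underspecification: models that differ only in their
# random seed can pass the same test yet diverge on out-of-distribution inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)

def make_data(n, spread):
    # Fixed, nonlinear labeling rule; only the input spread changes.
    X = rng.normal(scale=spread, size=(n, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)
    return X, y

X_train, y_train = make_data(4000, spread=1.0)
X_test, y_test = make_data(1000, spread=1.0)      # same distribution as training
X_stress, y_stress = make_data(1000, spread=4.0)  # inputs far from the training data

test_preds, stress_preds = [], []
for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=seed).fit(X_train, y_train)
    print(f"seed={seed}: test accuracy = "
          f"{accuracy_score(y_test, model.predict(X_test)):.3f}")
    test_preds.append(model.predict(X_test))
    stress_preds.append(model.predict(X_stress))

def mean_disagreement(preds):
    # Average pairwise disagreement between "equivalent" models.
    pairs = [(i, j) for i in range(len(preds)) for j in range(i + 1, len(preds))]
    return float(np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs]))

print("disagreement on test inputs:  ", mean_disagreement(test_preds))
print("disagreement on stress inputs:", mean_disagreement(stress_preds))
```

In a toy setting like this the disagreement may stay small, but the mechanism is the point: nothing in the test-set scores tells you which of these equally “good” models to trust once conditions change.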
The Intellyx Take: A Rocky Road to the Future
Central to this challenge is that every enterprise is inherently unique. There are, of course, many aspects of enterprise operations that are common, at least within a given vertical — and it is within these use cases that we are seeing great success from AI-based technologies today.
But as those use cases expand and the collaboration of humans and machines in day-to-day activities proliferates, the risk to the enterprise will likewise expand exponentially.
The need to develop leadership skills to manage this collaboration effectively, combined with the challenges of both data shift and underspecification, will likely put enterprise leaders between the proverbial rock and a hard place. You will need to leverage this technology to remain competitive, but doing so may unpredictably and uncontrollably increase your operational risk.
While there are no clear-cut answers, there are two critical steps you can begin taking now to give yourself a leg-up as this future emerges.
First, start developing human-machine leadership skills now. Don’t wait. While this may feel wildly futuristic, you need to begin preparing your management ranks for a future in which they are managing hybrid, collaborative teams made up of humans and machines — and to deal with all of the ethical, cultural, and interpersonal challenges that will entail.
Second, you must choose your AI partners carefully. The sexiest-looking technology may not, in fact, be the best for your enterprise when it comes to dealing with the issues of data shift and underspecification. Instead, favor those technology partners who are most focused on applying their technologies in the real world and at scale.
Making these investments now will align your organization with the leaders and the technology partners that will be most likely to help you navigate successfully in this risk-fraught future world of work.