Breakthroughs in machine learning have triggered a frenzy of artificial intelligence (AI) investments. Today it is no longer a question of if, but when robots will infiltrate every aspect of our lives. Self-driving cars, virtual assistants, preventive medicine and smart home devices—everywhere we look, computers are automating human tasks.
Despite warnings of potentially devastating repercussions, the replication of human intelligence into computer form is an unavoidable evolution. Like moths drawn to the light, we seem incapable of stopping the forward path of progress, even as some speculate it will lead to the demise of our species.
But has artificial intelligence set its sights on the right target? AI is commonly defined as the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
It is the ambition of many researchers to reverse engineer the brain in order to rebuild human intelligence (and even consciousness) in machine form by translating the principal operations of human intelligence into the corresponding components of software, algorithms and content. Prominent voices on artificial intelligence, including Elon Musk, Stephen Hawking and Raymond Kurzweil, have predicted that machines will develop consciousness through the application of human intelligence within the next 10-20 years.
The pursuit of AI assumes that human intelligence is worth replicating and will create a benefit for end users. But do we really want to replicate the flaws of human intelligence, such as prejudice, greed, and procrastination? Or the shortcomings of our processing capabilities? Merely replicating human intelligence using known patterns and outcomes might unburden us from menial tasks, but it won’t solve the most pressing problems of the future.
We live in a world of unprecedented complexity and rapidly evolving technology. A global business is an amalgamation of millions of dynamic parts and interdependencies, yet the human brain is not wired to handle more than seven, or in exceptional cases eight, dependencies at the same time.
So in an age of hyperconnectivity and hyper-risk, humans are forced to make generalizations that ignore factors which may have profound impacts. Statistical models make the same generalizations to arrive at a predicted outcome, which may vary greatly from reality and support poor decisions. Certainly, replicating bad decisions using incomplete data is not the end goal of any business or AI innovation, but are we on the right path to prevent this?
If we look at the risk management problems our clients face, it is clear that we need to aim higher than human intelligence. Our clients want to expose unknown risks and move beyond entrenched biases. They want to understand the impacts of constantly evolving business dynamics, and to identify clearly which opportunistic actions to take, and when, to keep business operations efficient and cost-effective. Artificial intelligence, which merely recycles known information, will never help them achieve this goal.
Instead, we seek to develop generative intelligence, which pairs human perception and decision-making capabilities with the scientific disciplines of dynamic complexity and perturbation theory to synthesize new intelligence. The ultimate goal of our approach is to augment intelligence by making the unknowns known, and, with this knowledge, to create self-healing systems.