The Value of Digital Twin Technology

Interest in digital twin technology is surging as use cases demonstrate its problem-solving value across diverse applications including operations, manufacturing, supply chains, utilities and enterprise management.

A digital twin is commonly defined as a software representation of a physical asset, system or process designed to detect, prevent, predict and optimize the system being studied through real-time analytics. Using the algorithmic capabilities of X-ACT, we extend the definition of the digital twin to cover the replication of process dynamics. This means the digital twin created by X-ACT captures the time-dependent behaviors of complex relationships.

Gain Confidence in Decisions and Control Risks

Digital twins let users understand the present and predict the future. Many industries are applying these capabilities to gain confidence in decisions, maintain better control of risk or find opportunities for improvement. A virtual model can span the full lifecycle of any asset, process, product or service, and provides a straightforward way to build and validate new projects or planned changes.

Once it has been verified that the digital twin produces the same results as the target physical asset, the model can be used to determine what is happening or what could happen in the future. This information is gained within minutes by testing a full range of scenarios—including events that would be difficult, costly or even impossible to test in the real world.

Through stress testing and sensitivity analysis, users can quickly identify any conditions that might cause a physical asset to deviate from its expected behavior or find the root cause of problems. As risks are revealed, fixes or controls can be identified using the digital twin and implemented when needed. Users can also gain confidence in decisions and see how different choices play out under various conditions.

For example, a business that wants to move critical applications to the cloud can use a digital twin to see what would happen if transaction volumes unexpectedly double. Or a retailer might use a digital twin to analyze the fallout of a multi-day shutdown of a supplier’s manufacturing plant. A car manufacturer could use a digital twin to help answer, “What will be the realistic cost benefits and performance improvements gained by automating or digitalizing our processes?”
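
To make the cloud example concrete, the short sketch below shows one way such a what-if question might be framed, using a simple single-server queueing approximation. It is a minimal illustration of the reasoning, not the X-ACT emulation itself, and every parameter value in it is hypothetical.

```python
# Minimal what-if sketch (not the X-ACT method): approximate how response time
# reacts if transaction volume doubles, using an M/M/1 single-server queue model.
# All parameter values below are hypothetical.

def mm1_response_time(arrival_rate: float, service_time: float) -> float:
    """Mean response time of an M/M/1 queue; unbounded once utilization >= 1."""
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        return float("inf")  # saturation: the system can no longer keep up
    return service_time / (1.0 - utilization)

baseline_tps = 40.0    # hypothetical transactions per second today
service_time_s = 0.02  # hypothetical mean service time per transaction (20 ms)

for label, tps in [("baseline", baseline_tps), ("doubled volume", 2 * baseline_tps)]:
    rt = mm1_response_time(tps, service_time_s)
    print(f"{label:>14}: utilization={tps * service_time_s:.0%}, "
          f"mean response time={rt * 1000:.1f} ms")
```

Even this toy model shows the non-linear character of the question: doubling the volume does not simply double the response time, it can push the system past saturation.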

No matter the problem, the information gained from digital twins allows organizations to learn and make decisions faster, with more certainty in the outcome. The speed and agility enabled by digital twin technologies are expected to become increasingly important as the complexity and pace of business continue to accelerate.


Causal Deconstruction Method

Our causal deconstruction method is a seven-stage scientific methodology used to understand the constituent components of a system and their dependencies: establishing the base dynamics, deconstructing complexity, constructing an emulator, predicting singularities, comparing to the actual system, defining improvements and monitoring the execution.

Causal deconstruction allows us to uncover results that often defy common wisdom, which tends to stop at the wrong level of analysis and produce a host of misleading conclusions. Using this method, we can apply the right level of analysis and the mathematics capable of solving the problem in an environment where dynamic complexity has become the major risk.

Optimal Business Control

Optimal business control (OBC) is a set of management, data collection, analytics, machine learning and automation processes through which management predicts, evaluates and, when necessary, responds to mitigate the dynamic complexity-related risks that hinder the realization of business goals.

OBC is enabled by the X-Act OBC Platform to support the goals of universal risk management (URM) through the predictive analysis and prescriptive treatment of business risks. Using the quantitative and qualitative metrics supported by the X-Act OBC Platform, users can proactively discover risks that may cause system deterioration. With this knowledge, systems can then be placed under surveillance to enable right-time risk alerts and the preemptive fixing of any identified problems.

Optimal Business Control (OBC) Diagram

Through the use of a knowledge library and machine-learning sciences, X-Act OBC Platform enables users to define the optimal treatment of risk and use this knowledge to feed a decision engine that organically evolves to cover new and increasingly complex behavioral scenarios.

X-Act OBC Platform uses situational data revealed through causal analysis and stress testing to provide surveillance of systems and identify cases of increasing risk. These cases are unknowns to big data analytical methods, which are limited to prediction based on data collected through experience. Within the OBC database, diagnosis definitions and remediation plans are stored together, covering both the experience-based knowns and the cases that were previously unknown. This allows for the rapid identification of a potential risk with immediate analysis of root causes and proposed remedial actions.
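
As a rough illustration of how such a store of diagnoses and remediations might be organized, consider the sketch below. The schema, risk signatures and actions are simplified assumptions of our own for illustration, not the actual structure of the OBC database.

```python
# Illustrative sketch only: one way a diagnosis-and-remediation store could be
# organized. The signatures, diagnoses and actions are hypothetical, not the
# actual X-Act OBC Platform schema.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    diagnosis: str    # root-cause description
    remediation: str  # proposed remedial action
    source: str       # "experience" (known) or "emulation" (previously unknown)

obc_database = {
    "queue_depth_rising+cpu_flat": RiskEntry(
        diagnosis="Contention on a shared resource",
        remediation="Partition the workload to reduce contention",
        source="emulation"),
    "error_rate_spike+recent_release": RiskEntry(
        diagnosis="Regression introduced by the latest release",
        remediation="Roll back and rerun the stress scenarios",
        source="experience"),
}

def surveil(observed_signature: str) -> None:
    """Right-time alert: match an observed pattern against the stored knowledge."""
    entry = obc_database.get(observed_signature)
    if entry:
        print(f"ALERT: {entry.diagnosis} -> {entry.remediation} [{entry.source}]")
    else:
        print("Unknown pattern: trigger causal analysis, then enrich the database")

surveil("queue_depth_rising+cpu_flat")
surveil("latency_oscillation")
```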

This approach to right-time risk surveillance represents a real breakthrough that alleviates many of the pains created by the traditional long cycle of risk management, which starts with problem analysis and diagnosis and ends with an eventual fix well beyond the point of optimal action. OBC offers a clear advantage by shortening the time between the discovery and remediation of undesirable risks.

As the database is continuously enriched with the dynamic characteristics that evolve during a system’s lifetime, the knowledge it contains becomes more advanced. OBC is also adaptive. By continuously recording foundational or circumstantial system changes within the OBC database, the predictive platform can identify any new risk, determine the diagnosis, define the remedial actions and, finally, enhance the OBC database with this new knowledge.

Companies with the most mature OBC practices and robust knowledge bases will be able to confidently define and make the right moves at the right time to achieve better economy, control risks and ultimately create and maintain a competitive advantage.

Perturbation Theory

Perturbation theory provides a mathematical method for finding an approximate solution to a problem by starting from the exact solution of a related problem. A critical feature of the technique is a middle step that breaks the problem into “solvable” and “perturbation” parts. Perturbation theory is applicable if the problem at hand cannot be solved exactly, but can be formulated by adding a “small” term to the mathematical description of the exactly solvable problem.
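
In its simplest textbook form, the idea can be written as a series expansion around the exactly solvable problem (a generic illustration, not tied to any particular model discussed here):

```latex
% Generic perturbation expansion (textbook form): P_0 is the exactly solvable
% part, \varepsilon P_1 the small perturbing term, and the solution is expanded
% in powers of \varepsilon.
\begin{align*}
  P &= P_0 + \varepsilon P_1, \qquad |\varepsilon| \ll 1, \\
  x(\varepsilon) &= x_0 + \varepsilon\, x_1 + \varepsilon^{2} x_2 + \cdots
\end{align*}
% Substituting the series into the full problem and matching powers of
% \varepsilon gives a sequence of corrections x_1, x_2, \ldots, each obtained
% from the exact solution x_0 of the unperturbed problem.
```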

Background

Perturbation theory supports a variety of applications, including Poincaré’s chaos theory, and provides a strong platform for dealing with dynamic behavior problems. However, the success of this method depends on our ability to preserve the analytical representation and solution as far as we can afford to (conceptually and computationally). As an example, I successfully applied these methods in 1978 to create a full analytical solution for the three-body lunar problem[1].

In 1687, Isaac Newton’s work on lunar theory attempted to explain the motion of the moon under the gravitational influence of the earth and the sun (known as the three-body problem), but Newton could not account for variations in the moon’s orbit. In the mid-1700s, Lagrange and Laplace advanced the view that the constants which describe the motion of a planet around the Sun are perturbed by the motion of other planets and vary as a function of time. This led to further discoveries by Charles-Eugène Delaunay (1816-1872) and Henri Poincaré (1854-1912). More recently, I used predictive computation of direct and indirect planetary perturbations on lunar motion to achieve greater accuracy and a much wider representation. This work has paved the way for space exploration and further scientific advances, including quantum mechanics.

How Perturbation Theory is Used to Solve a Dynamic Complexity Problem

The three-body problem of the sun, moon and earth is an eloquent expression of dynamic complexity, whereby the motion of each body is perturbed by the motion of the others and varies as a function of time. While we have not solved all the mysteries of our universe, we can predict the movement of a planetary body with great accuracy using perturbation theory.

During my doctorate studies, I found that while Newton’s law is ostensibly true in a simple lab setting, its usefulness decreases as complexity increases. When trying to predict the trajectory (and coordinates at a point in time) of the three heavenly bodies, the solution must account for the fact that gravity attracts these bodies to each other depending on their mass, distance, and direction. Their path or trajectory therefore undergoes constant minute changes in velocity and direction, which must be taken into account at every step of the calculation. I found that the problem was solvable using common celestial mechanics if you start by taking only two celestial bodies, e.g. the earth and the moon.

But of course this solution is not correct, because the sun was omitted from the equation. So the incorrect solution is then perturbed by adding the influence of the sun. Note that it is the result that is modified, not the problem, because there is no formula for solving a problem with three bodies. Now we are closer to reality but still far from precision, because the position and speed of the sun that we used were not its actual values. Its actual position is calculated using the same common celestial mechanics as above, applied this time to the sun and the earth and then perturbed by the influence of the moon, and so on until an accurate representation of the system is achieved.
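
The back-and-forth refinement described above can be mimicked on a deliberately simple algebraic analogue: switch the perturbing term off to get an exactly solvable problem, then repeatedly re-solve with the perturbation evaluated at the previous approximation. The sketch below is only an analogy for the iteration scheme, not the actual lunar-theory computation.

```python
# Toy analogue of the successive-perturbation scheme described above (not the
# actual celestial-mechanics computation). We solve x = 1 + eps*sin(x): the
# eps = 0 case is exactly solvable (x = 1), and each pass re-applies the
# perturbation to the previous approximation, the way the two-body solution is
# repeatedly corrected by the third body's influence.
import math

eps = 0.3  # strength of the "perturbing body" (hypothetical)
x = 1.0    # exact solution of the unperturbed problem (eps = 0)

for iteration in range(1, 9):
    x_new = 1.0 + eps * math.sin(x)  # perturb the previous approximation
    print(f"iteration {iteration}: x = {x_new:.6f} (change {abs(x_new - x):.2e})")
    x = x_new
```

The corrections shrink rapidly from one pass to the next, which is what makes the successive approximations converge toward an accurate representation.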

Applicability to Risk Management

The notion that the future rests on more than just a whim of the gods is a revolutionary idea. A mere 350 years separate today’s risk-assessment and hedging techniques from decisions guided by superstition, blind faith, and instinct. During this time, we have made significant gains. We now augment our risk perception with empirical data and probabilistic methods to identify repeating patterns and expose potential risks, but we are still missing a critical piece of the puzzle. Inconsistencies still exist and we can only predict risk with limited success. In essence, we have matured risk management practices to the level achieved by Newton, but we cannot yet account for the variances between the predicted and actual outcomes of our risk management exercises.

This is because most modern systems are dynamically complex—meaning system components are subject to the interactions, interdependencies, feedback, locks, conflicts, contentions, prioritizations, and enforcements of other components both internal and external to the system in the same way planets are perturbed by other planets. But capturing these influences either conceptually or in a spreadsheet is impossible, so current risk management practices pretend that systems are static and operate in a closed-loop environment. As a result, our risk management capabilities are limited to known risks within unchanging systems. And so, we remain heavily reliant on perception and intuition for the assessment and remediation of risk.

I experienced this problem firsthand as the Chief Technology Officer of First Data Corporation, when I found that business and technology systems do not always exhibit predictable behaviors. Despite the company’s wealth of experience, mature risk management practices and deep domain expertise, we would sometimes be caught off guard by an unexpected risk or a sudden decrease in system performance. And so I began to wonder whether the hidden effects that made the prediction of satellite orbits difficult also created challenges in the predictable management of a business. Through my research and experiences, I found that the mathematical solution provided by perturbation theory was universally applicable to any dynamically complex system, including business and IT systems.

Applying Perturbation Theory to Solve Risk Management Problems

Without the ability to identify and assess the weight of dynamic complexity as a contributing factor to risk, uncertainty remains inherent in current risk management and prediction methods. When applied to prediction, probability and experience will always lead to uncertainties and prohibit decision makers from achieving the optimal trade-off between risk and reward. We can escape this predicament by using the advanced methods of perturbation mathematics I discovered, now that computer processing power has advanced sufficiently to support perturbation-based emulation that efficiently and effectively exposes dynamic complexity and predicts its future impacts.

Emulation is used in many industries to reproduce the behavior of systems and explore unknowns. Take for instance space exploration. We cannot successfully construct and send satellites, space stations, or rovers into unexplored regions of space based merely on historical data. While the known data from past endeavors is certainly important, we must construct the data which is unknown by emulating the spacecraft and conducting sensitivity analysis. This allows us to predict the unpredicted and prepare for the unknown. While the unexpected may still happen, using emulation we will be better prepared to spot new patterns earlier and respond more appropriately to these new scenarios.

Using Perturbation Theory to Predict and Determine the Risk of Singularity

Perturbation theory seems to be the best-fit solution for providing an accurate formulation of dynamic complexity, one that is representative of the web of dependencies and inequalities. Additionally, perturbation theory allows for predictions that reflect variations in initial conditions and in the intensity patterns of influences. In a variety of scientific areas, we have successfully applied perturbation theory to make accurate predictions.

After numerous applications of perturbation theory-based mathematics, we can affirm its problem-solving power. Philosophically, there exists a strong affinity between dynamic complexity and its discovery through perturbation-based solutions. At the origin, we used perturbation theory to solve gravitational interactions. Then we used it to reveal interdependencies in mechanics and dynamic systems that produce dynamic complexity. We feel strongly that perturbation theory is the right foundational solution to dynamic complexity, which arises across a large spectrum of dynamics: gravitational, mechanical, nuclear, chemical, etc. All of them represent a dynamic complexity dilemma. All of them have an exact solution if and only if all, or at least a majority of, the individual and significant inequalities are explicitly represented in the solution.

An inequality is the dynamic expression of an interdependency between two components. Such a dependency could be direct (an explicit connection, always first order) or indirect (a connection through a third component, which may be of any order, on the basis that the perturbed in turn perturbs). As we can see, the solutions based on Newton’s work were only approximations of reality, as Newton’s principles considered only the direct pairs of interdependencies as the fundamental forces.

We have successfully applied perturbation theory across a diverse range of cases, from economic, healthcare and corporate management modeling to industry transformation and information technology optimization. In each case, we were able to determine with sufficient accuracy the singularity point, beyond which dynamic complexity becomes predominant and the behavior of the system becomes chaotic and unpredictable.

Our approach computes the three metrics of dynamic complexity and determines the component, link or pattern that will cause a singularity. It also allows users to build scenarios to fix or optimize the system, or to push the singularity point further out. It is our ambition to demonstrate clearly that perturbation theory can be used not only to accurately represent system dynamics and predict its limit or singularity, but also to reverse engineer a situation and provide prescriptive support for risk avoidance.

More details on how we apply perturbation theory to solve risk management problems and associated case studies are provided in my book, The Tyranny of Uncertainty.

[1] Abu el Ata, Nabil. Analytical Solution of the Planetary Perturbation on the Moon. Doctor of Mathematical Sciences, Sorbonne Publication, France, 1978.


Understanding Dynamic Complexity

Complexity is a subject that everyone intuitively understands. If you add more components, more requirements or more of anything, a system apparently becomes more complex. In the digital age, as globalization and rapid technology advances create an ever-changing world at a faster and faster pace, it would be hard not to see the impacts of complexity, but dynamic complexity is less obvious. It lies hidden until the symptoms reveal themselves, their cause remaining undiscovered until their root is diagnosed. Unfortunately, diagnosis often comes too late for proper remediation. We have observed in the current business climate that the window of opportunity to discover and react to dynamic complexity and thereby avoid negative business impacts is shrinking.

Dynamic Complexity Defined

Dynamic complexity is a detrimental property of any complex system in which the behaviorally determinant influences between its constituents change over time. The change can be due to either internal events or external influences. Influences generally occur when a set of constituents (1…n) is stressed enough to exert an effect on a dependent constituent, e.g. a non-trivial force against a mechanical part, or a delay or limit at some stage in a process. Dynamic complexity creates what were previously considered unexpected effects, effects that are impossible to predict from historical data, no matter the amount, because the number of possible states tends to be too large for any given set of samples.

Dynamic complexity—over any reasonable period of time—always produces a negative effect (loss, time elongation, or shortage), causing inefficiencies and side effects, similar to friction, collision or drag. Dynamic complexity cannot be observed directly, only its effects can be measured.

Static vs. Dynamic Complexity

To understand the difference between complexity (a.k.a. static complexity) and dynamic complexity, it is helpful to consider static complexity as something that can be counted (a number of something), while dynamic complexity is something that is produced (often at a moment we do not expect). Dynamic complexity is formed through interactions, interdependencies, feedback, locks, conflicts, contentions, prioritizations, enforcements, etc. Subsequently, dynamic complexity is revealed through forming congestions, inflations, degradations, latencies, overhead, chaos, singularities, strange behavior, etc.
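
A toy example of our own (not drawn from any X-Act model) helps separate the two notions: in the sketch below the static complexity never changes, one queue and one server, yet a brief shift in the interaction between arrival and service rates produces a backlog whose effects persist long after the cause has disappeared.

```python
# Toy illustration (our simplified example): the static complexity here never
# changes (one queue, one server), yet a short burst of arrivals produces a
# backlog whose latency effects persist after the burst ends, a dynamic effect
# invisible in any static count of components.

service_rate = 10.0  # items the server can finish per minute (static capacity)
backlog = 0.0        # items waiting

for minute in range(12):
    arrival_rate = 14.0 if 3 <= minute < 6 else 8.0  # brief overload window
    backlog = max(0.0, backlog + arrival_rate - service_rate)
    wait_estimate = backlog / service_rate  # minutes needed to drain the queue
    print(f"minute {minute:2d}: arrivals={arrival_rate:4.1f}, "
          f"backlog={backlog:4.1f}, est. wait={wait_estimate:.2f} min")
```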

Human thinking is usually based on linear models, direct impacts, static references, and two-dimensional movements. This reflects the vast majority of our universe of experiences. Exponential, non-linear, dynamic, multi-dimensional, and open systems are challenges to our human understanding. This is one of the natural reasons we are tempted to fall back on simple models rather than confront open systems and dynamic complexity. But simplifying a dynamic system into a closed-loop model doesn’t make our problems go away.

Dynamic Complexity Creates Risk

With increasing frequency, businesses, governments and economies are surprised by a sudden manifestation of a previously unknown risk (the Fukushima Daiichi Nuclear Disaster, the 2007-2008 Financial Crisis, or Obamacare’s Website Launch Failure are a few recent examples). In most cases the unknown risks are caused by dynamic complexity, which lies hidden like a cancer until the symptoms reveal themselves.

Often knowledge of the risk comes too late to avoid negative impacts on business outcomes and forces businesses to reactively manage the risk. As the pace of business accelerates and decision windows shrink, popular methods of prediction and risk management are becoming increasingly inadequate. Real-time prediction based on historical reference models is no longer enough.  To achieve better results, businesses must be able to expose new, dangerous patterns of behavior with sufficient time to take corrective actions—and be able to determine with confidence which actions will yield the best results.