Solving Blockchain Distributed Transaction Challenges

Blockchains are ideal for shared databases in which every user can read everything, but no single user controls who can write what. By contrast, in traditional databases a single entity controls all read and write operations. However, issues of scalability, enforcement of business constraints, and aggregation can arise when shared-ledger structures are used for multi-party or distributed transaction models. To overcome these issues, the blockchain protocol must be augmented with a collector or aggregator mechanism.

Interorganizational Record Keeping

The chain acts as a mechanism for collectively recording and notarizing any type of data, whose meaning can be financial or otherwise. An example is an audit trail of critical communications between two or more organizations, say in the healthcare or legal sectors. No individual organization in the group can be trusted to maintain this archive of records, because falsified or deleted information would significantly damage the others. Nonetheless, it is vital that all agree on the archive’s contents in order to prevent disputes.

Multiparty Aggregation

This use case is similar to the previous one, in that multiple parties are writing data to a collectively managed record. However, in this case the motivation is different – to overcome the infrastructural difficulty of combining information from a large number of separate sources.

Imagine two banks with internal databases of customer identity verifications. At some point they notice that they share a lot of customers, so they enter a reciprocal sharing arrangement in which they exchange verification data to avoid duplicated work. Technically, the agreement is implemented using standard master–slave data replication, in which each bank maintains a live read-only copy of the other’s database, and runs queries in parallel against its own database and the replica.

Now imagine these two banks invite three others to participate in this circle of sharing. Each of the 5 banks runs its own master database, along with 4 read-only replicas of the others. With 5 masters and 20 replicas, we have 25 database instances in total. While doable, this consumes noticeable time and resources in each bank’s IT department.

Fast forward to the point where 20 banks are sharing information in this way, and we’re looking at 400 database instances in total. For 100 banks, we reach 10,000 instances. In general, if every party is sharing information with every other, the total number of database instances grows with the square of the number of participants. At some point in this process, the system is bound to break down.
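A short script makes the arithmetic above concrete. With every party replicating every other party's master, the instance count grows as the square of the number of participants:

```python
def instance_count(n_parties: int) -> int:
    """Total database instances when every party runs its own master
    plus a read-only replica of every other party's master."""
    masters = n_parties
    replicas = n_parties * (n_parties - 1)  # each party replicates all the others
    return masters + replicas  # equals n_parties ** 2

for n in (2, 5, 20, 100):
    print(n, instance_count(n))  # 2 -> 4, 5 -> 25, 20 -> 400, 100 -> 10000
```

A shared ledger replaces this n-squared mesh with a single collectively maintained structure, so each party operates one node regardless of how many others participate.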

Multi-party and Distributed Transaction Challenges

In either a multi-party record-keeping or distributed transaction model, a party may need to find related blocks and order them chronologically to support a decision, or to enforce constraints before taking action. Using patient care as a specific example, a doctor may want to pull all of the medical records for a patient before deciding whether to perform a medical procedure. These records may include hospital records, test results, medical history, insurance authorizations, etc. Proceeding with patient care may depend on factors such as a primary care doctor’s referral, appropriate insurance authorizations, and blood test results obtained within 72 hours of the planned procedure.

By automating the retrieval and sequencing of events through an aggregator mechanism, we can deliver the necessary information in correct sequence at the right time—without cumbersome manual manipulation of chains or custom coding.
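As an illustration of what such an aggregator does, the sketch below merges records from several domain-specific sources, orders them chronologically, and checks the constraints mentioned above before approving a procedure. All record types, field names, and timestamps here are hypothetical; a real implementation would read from the underlying chains rather than from an in-memory list:

```python
from datetime import datetime, timedelta

# Hypothetical records drawn from separate domain-specific sources.
records = [
    {"source": "hospital", "type": "referral",      "time": datetime(2024, 3, 1, 9)},
    {"source": "insurer",  "type": "authorization", "time": datetime(2024, 3, 2, 14)},
    {"source": "lab",      "type": "blood_test",    "time": datetime(2024, 3, 4, 8)},
]
procedure_time = datetime(2024, 3, 5, 10)

# Step 1: aggregate the events and order them chronologically.
timeline = sorted(records, key=lambda r: r["time"])

# Step 2: enforce the business constraints before the procedure proceeds.
def constraints_met(timeline, procedure_time):
    types = {r["type"] for r in timeline}
    if not {"referral", "authorization"} <= types:
        return False  # missing referral or insurance authorization
    # The blood test must fall within the 72 hours before the procedure.
    return any(
        r["type"] == "blood_test"
        and timedelta(0) <= procedure_time - r["time"] <= timedelta(hours=72)
        for r in timeline
    )

print(constraints_met(timeline, procedure_time))  # True: all constraints satisfied
```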

Shared-Ledger Algorithm

We have developed a mathematical algorithm that collects and dispatches the right sequence of events and time-sensitive priorities to aggregate multiple domain-specific blockchains into a purpose-oriented blockchain. Tested across a variety of cases to determine its wide applicability, the patented algorithm complements the blockchain protocol to provide the necessary aggregation solution for multi-party transaction processes that span multiple shared blockchains, such as:

  • Healthcare: Patient treatment and admission, preventive medicine, research, etc.
  • Government-citizen services
  • Regulations
  • Supply chain management
  • Corporate actions
  • Multiple-supplier, right-time processing
  • Food production
  • Research and development
  • Banking and capital markets

Modeling Economic Dynamics

Examining the Problems with Traditional Risk Modeling Methods

Traditional financial risk management methods were formulated by analogy with the early foundational principles of thermodynamics. However, traditional economic models are incomplete models of reality, because economic systems (like meteorological and most nuclear or gravitational systems) are not inclined to attain equilibrium states except over very short windows of time.

Problems with risk modeling methods based on the laws of thermodynamics:

  • Predictability is limited to short windows, in which the initial conditions vary with small amplitudes and low frequencies
  • Complexities are dealt with only once recognized, rather than as outcomes of structural evolution and the systemic behavior of multi-level interactions
  • Only closed systems that reach equilibrium are treated; no ability to adapt to an external or internal modification is allowed
  • Complex systems do not systematically exhibit equilibrium
  • Stochastic models that deal with randomness have difficulty capturing small resonances and therefore do not tend toward a long-term representation

A New Way to Look at Economy and Risk

Financial systems are not wholly physical. They do not always behave in the manner predicted by their patterns of past behavior. They are immature. They can sometimes exhibit unexpected and unknown behavior because we do not understand their complexity and how it changes.

To avoid a future crisis of the proportions of 2008, we must identify new methods of economic risk analysis that more accurately model the dynamic reality of financial systems. To this end, we promote determinism: the view that every event, including human cognition, behavior, decision, and action, is causally determined by an unbroken sequence of prior occurrences.

Determinists believe the universe is fully governed by causal laws, resulting in only one possible state at any point in time. Pierre-Simon Laplace’s theory is generally referred to as “scientific determinism” and is predicated on the supposition that all events have a cause and effect, and that the precise combination of events at a particular time engenders a particular outcome.

How the impact of dynamic complexity leads to economic non-equilibrium:

  • Different instruments within a portfolio have different dynamic patterns and evolution speeds, producing different impacts on risk
  • They also influence each other, sharing, affecting, and operating on both the frequency and the amplitude of the behavior of discriminant factors (econometrics, the relation between economy and finance, long-term repercussions, etc.)
  • In addition, each has a different reaction and interaction toward an external or internal event.

Consequently, modeling economic dynamics is the right foundation for ensuring the predictability of such self-organized evolutionary systems, which may evolve toward several points of singularity and involve a far larger number of degrees of freedom than the small number assumed by traditional methods.

Using this method, we will be able to address most of the drawbacks of the traditional methods:

  • Both the need for predictable determinism and the pervasive presence of high levels of dynamic complexity justify the use of perturbation theory
  • Success in approaching an exact solution at any moment in time relies on deconstruction theory, which separates the constituents and finds the proper mathematical expression of each prior to deploying the perturbed expression (i.e., a two-level solution)
  • The evolutionary process guarantees a wider window of representativeness and adaptability for dynamic complexity economics
  • The method tends toward an exact solution
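To make the two-level idea concrete, here is a textbook first-order perturbation expansion (a generic illustration, not the patented algorithm): the unperturbed expression is solved first, then a correction term accounts for the perturbation, and the two-level result closely tracks the exact solution:

```python
import math

def exact_root(eps: float) -> float:
    """Positive root of the perturbed equation x**2 + eps*x - 1 = 0."""
    return (-eps + math.sqrt(eps**2 + 4)) / 2

def perturbative_root(eps: float) -> float:
    """Two-level solution: the unperturbed base x0 = 1 (root of x**2 - 1 = 0)
    plus the first-order correction x1 = -1/2, found by substituting
    x = x0 + eps*x1 and keeping only the terms linear in eps."""
    x0, x1 = 1.0, -0.5
    return x0 + eps * x1

for eps in (0.01, 0.1):
    print(eps, exact_root(eps), perturbative_root(eps))
```

For eps = 0.1 the two values agree to about three decimal places; the smaller the perturbation, the closer the two-level solution tends toward the exact one.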

Table: Dynamic Complexity versus Traditional Economics

Dynamic Complexity Economics | Traditional Economics
Open, dynamic, non-linear, in non-equilibrium | Closed, static, linear, in equilibrium
Each constituent of the system is modeled individually, then aggregated through perturbation theory | The system is modeled collectively in one step
No separation between micro- and macro-level behaviors | Separation between micro- and macro-level behaviors
The evolutionary process guarantees a wider window of representativeness and adaptability | Unstable for wider windows of time
Allows continuous interactions of external and internal agents | Does not allow continuous interactions of external and internal agents
Optimal control is possible as a byproduct of dynamic complexity modeling | Optimal control is not possible


From a scientific standpoint, the subject of financial dynamics and the best method of risk analysis remains open; further mathematical, physical, and engineering developments, as well as advances in economic risk analysis, are necessary. A great body of contributions already exists, covering a wide spectrum of preferences and expertise, from the deeply theoretical to the profoundly pragmatic. All show the interest, and the urgency, of finding a solution that can help us avoid singularities like those of 2008. To progress, we must continuously recognize the failures of past methods and strive to find solutions.

Blockchain: Navigating the Disruption

After years of theoretical debates and abstract use cases, it is no longer a question of if blockchain will cause market disruption, but rather when and how widely the impact will be felt. Now is the time to remove any outstanding doubts about blockchain’s applicability and to strategically manage the business and operational risks that inevitably come with innovation. The main barrier is how best to plan for and manage the disruption. In all cases, correctly quantifying the threats and opportunities is a requirement for success.

Using a range of specific cases across various markets and industries, including financial services, supply chain, and healthcare, we are actively conducting research and collaborating with other industry leaders. The goal is to verify how best to apply a scientific method of predictive emulation to reveal where blockchain’s capabilities are best suited to solving business problems, to quantify the expected improvements, and to manage the risks of delivering the solution.

Blockchain technology is best known as the magic behind Bitcoin, but scores of other industries can benefit from this revolutionary technology. The benefits include driving cost savings by reducing labor-intensive processes and eliminating duplicated effort, as well as creating new markets by exposing previously untapped sources of supply.

Funded by eager venture capitalists, start-ups can easily pursue blockchain initiatives, but convincing stakeholders of global corporations and financial institutions to go all-in on a new technology that could overturn the very fundamentals of the business and the organization’s biggest profit drivers is not easy. Alternatively, taking a wait-and-see approach may place laggards at a significant competitive disadvantage, as the move to blockchain requires a well-devised plan and sufficient time for its execution.

Given the dynamic complexity of modern systems, it can be difficult to identify the right plan across operations to manage disruption and create a sustainable business model. With technological innovations, the most dangerous risks are often posed by unknowns that cannot be predicted with historical reference models and that often escape the imagination of risk committees. A scientific method to predictively quantify opportunities and universally manage risks can help stakeholders strategically time, justify, and manage a disruptive move.

X-Act® OBC Platform is useful in these cases as it is the only mathematical dynamic complexity emulator that can realistically model business services and infrastructures. We use X-Act OBC Platform to replicate the dynamics and complexity of business implementations—allowing us to predictively compute system behaviors at different points in time and under various operational conditions. These insights can then be used to plan for and manage a disruptive move by supporting the series of complex decisions necessary to make the right trade-offs between sometimes conflicting objectives, allow acceptable time to market and preserve business continuity.

Using the emulator capabilities of X-Act OBC Platform, we tested various blockchain scenarios under different patterns of initial conditions and dynamic constraints to identify the conditions under which risk will increase, as well as the possible mitigation strategies.

By modifying the parameters of each scenario within the emulator, one by one, by group, or by domain, to represent possible changes, we are able to extrapolate the point at which the system will hit a singularity and use the corresponding information to diagnose the case. Additional scenarios can then be created to explore viable, proactive remedial options that secure an acceptable risk mitigation strategy.
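As a toy illustration of this kind of parameter sweep (not the X-Act OBC Platform itself), the sketch below uses a simple M/M/1 queueing formula, whose response time diverges as demand approaches capacity, and sweeps one parameter to locate the point where behavior exceeds an acceptable limit:

```python
def response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue; it diverges as the
    arrival rate approaches the service capacity."""
    if arrival_rate >= service_rate:
        return float("inf")  # past the singularity: the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

def find_singularity(service_rate: float, limit: float, step: float = 0.1) -> float:
    """Sweep the arrival rate and report the first point at which the
    modeled behavior exceeds the acceptable operating limit."""
    rate = 0.0
    while rate < service_rate:
        if response_time(rate, service_rate) > limit:
            return rate
        rate += step
    return service_rate

# With capacity 10 and a response-time limit of 2, the sweep stops just
# past rate 9.5, where 1/(10 - rate) first exceeds the limit.
print(find_singularity(service_rate=10.0, limit=2.0))
```

Repeating the sweep under different assumed conditions, one parameter, group, or domain at a time, mirrors the diagnostic process described above, with each run exposing how close the system sits to its breaking point.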

Understanding a System through Deconstruction

A system, whether organizational, industrial, biological, environmental, or IT, is composed of components, objects, or members, each of which has specific properties that characterize its behavior in space and time. All members interact with, impact, serve, and receive from other members in time and space. We can think of this as connectivity, or more specifically time-and-space connectivity, from which many possible combinations and dependencies result. Depending on the intensities of these intra- and inter-relations among components and their configuration, the overall system will exhibit particular behavior patterns and characteristics.

From this we can produce a set of quantitative and qualitative metrics that provides a synthesis of what happens. These metrics show the global characteristics of the system, but the ultimate target is the contribution of each individual component and their interactions. This knowledge allows us to properly identify the causal configuration. Here deconstruction theory becomes important to our goal of identifying the component, or components, that expose the system to a risk, whether in terms of limits beyond which the system will no longer work, service quality, or cost. Simply put, if you want to understand the behavior of a system, you must deconstruct it and look at its components.

It is important to perform deconstruction in a way that allows the shortest path to identifying the risk component(s), the dynamic signature of what happens or may happen, the conditions under which a component will reveal the risk, and, above all, the actions required to proactively fix the problem while a solution is still possible.
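A minimal sketch of this idea, with entirely hypothetical components and figures, deconstructs a system into its constituents and flags the ones whose utilization exceeds an acceptable limit, giving the shortest path to the risk component:

```python
# Hypothetical deconstructed system: each component has a capacity
# and a current demand placed on it by the rest of the system.
components = {
    "web": {"capacity": 100.0, "demand": 60.0},
    "app": {"capacity": 80.0,  "demand": 50.0},
    "db":  {"capacity": 70.0,  "demand": 65.0},
}

def risk_components(components: dict, limit: float = 0.85) -> list:
    """Return (name, utilization) for components past the acceptable limit,
    worst first: these are the constituents exposing the system to risk."""
    risky = [
        (name, round(c["demand"] / c["capacity"], 2))
        for name, c in components.items()
        if c["demand"] / c["capacity"] > limit
    ]
    return sorted(risky, key=lambda item: -item[1])

print(risk_components(components))  # [('db', 0.93)]: the database is the risk component
```

Global metrics alone (average utilization here is well under the limit) would hide the problem; only the component-level view reveals where the system will break first.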

Over the last 10 years, we have been able to confirm that this approach yields significant contributions to risk determination and risk management in comparison with traditional methods. The suggested process of causal deconstruction has been applied many times across different business, industrial, economic, and service activities, and the results have been significant and exhaustive.

A Complex System under Optimal Control

By combining causal deconstruction theory and perturbation theory, a dynamic complexity problem can be accurately solved with the right level of representation and a good level of certainty about reproducibility. This method shows great promise as a powerful process for risk identification, evaluation, management, and avoidance.

To determine performance and accurately identify risky components within an open structure involving multiple orders of perturbation, we use a layered hierarchical process based on causal deconstruction to feed a mathematical hierarchy of specialized algorithms, which are computed and aggregated using perturbation theory. Through this approach, the behavior of a component determines its status, which, with respect to the others, determines the characteristics of the component, its ability to deliver its service to the system, and to what extent. The environment is composed of the ensemble of components, the demand structures from each component to all the others, and the possible combinations that deliver a service based on multiple interactions.

From this point, the solution can be extended to meet the goals of optimal business control (OBC). In this case, a knowledge base and other automation technologies are used to observe the system in operation and identify dynamic characteristics that may lead to a risk. The ambition of these methods is to place the system under permanent control, so that it becomes possible to slow the adverse effects of dynamic complexity or prepare to avoid an eventual risk.

Identifying Cost Saving Opportunities in Healthcare

The goal of many governments is to continuously improve public health system efficiency by increasing preventive and proactive intervention, reducing the unnecessary overhead of repeated analyses and diagnoses for the same case, and consolidating patient history to improve preparedness.

In a client case involving a government-sponsored healthcare program, each citizen was assigned a binder that included all of his or her health and drug history, time-series analyses, medical procedures, medical attributes, and some projections that could help with follow-up tracking. This binder was available to medical and pharmaceutical personnel and was synchronized in cases where multiple doctors were involved. The gigantic national infrastructure necessary to support this level of information sharing formed one of the earliest applications of big data (even before the popularization of the term).

Once the binders were implemented, the next challenge was to improve the speed of record updates and allow access to patient records 24/7 from anywhere in the medical network grid (around 37 large academic hospitals organized into 12 groups, plus tens of clinics and nursing homes). The introduction of smart cards made these goals a reality. With this improvement, the risks associated with late diagnosis, surveillance, and patient record management were reduced.

X-Act OBC Platform predictive emulation and risk management were used to evaluate the cost efficiency of a network of public hospitals serving a large metropolitan area and its suburbs. The hospital system offered healthcare to more than 7 million individuals, with 5 million external consultations, 1.2 million beds, 1.1 million urgent care visits (one every 30 seconds), 38,000 births, and 1,200 organ transplants each year. Ninety thousand professionals, including 22,000 doctors, 51,000 hospital personnel, 16,000 nurses, and 15,000 administrative personnel, served the needs of the constituents. Additionally, an average of 2,700 biomedical research projects with strong connections to the academic world were included under the same management structure.

The risk in this environment is predominantly operational. However, as the system involves human safety, the management of possible pandemics, and professional errors, legal, economic, reputational, and administrative risks are also present, requiring strong predictive analytics to control them and alert stakeholders to any performance problems. Using our X-Act OBC Platform technologies and optimal business control (OBC) methodologies, we were able to help the government reduce the cost of healthcare by 9%, with plans for an additional reduction of 10% through the smart use of a universal database.

Through this project, we were able to construct a predictive platform that allowed the management to test decision scenarios and explore options to implement right-time control and surveillance. The technologies allowed stakeholders to anticipate risk and enhance mitigation plans as the system dynamics evolve or change. Having proven the solution through this project, it is our ambition to generalize the approach to cover the whole country. The expanded usage would allow a host of studies and research projects to take place in order to understand the origin, evolution, risk factors and correlations to internal and external influences of both rare and more recognizable maladies.

Dynamic Complexity in Healthcare

A healthcare system can be defined as the organization of people, institutions, and resources that deliver healthcare services to meet the health needs of target populations. Worldwide we have a diverse variety of complex, arduous, and multifaceted healthcare systems. Nations design and develop healthcare systems in accordance with their needs and resources, but their choices affect social and political dimensions as well as every governmental department, corporation, and individual they are built to serve. Currently many governments are struggling to contain the cost of reliable and equitable healthcare systems. The efficiency of the system is necessary to support the wellness of citizens as well as the economic and social progress of the country. We can therefore consider healthcare both a cost to taxpayers and an investment in the future.

If we consider the risk dimension of healthcare, we can anticipate a spectrum of risk factors, each of which can become preponderant over the others at any point in time. Operational risk, economic risk, pandemic management, and right-time interventions are just a few of the critical risk considerations. But we must also consider public safety, medication shortages, and the lack of healthcare professionals, as well as inefficient management of health environments and associated research.

Over the last decades, several government sponsored healthcare mega-projects have been undertaken to add more automation to healthcare management systems. The scope of these projects has varied based on the country’s willingness to invest in the effort, but in each case the main objectives have been the containment of healthcare costs and improvements in the quality of healthcare services. So far, the results have been mixed. Any measurable program success is often tempered with considerable financial burdens and less than expected efficiency gains. From the management of patients, care infrastructure, medical records, and medical research to preventative and palliative care, the spectrum of contributing risk factors is wide and hampered by both static complexity (number of items and attributes) and dynamic complexity (dependencies, time series, case evolution, historical changes).

There is no doubt that the impact of dynamic complexity causes a great number of healthcare transformation project failures. Project outcomes are typically marred by costs several times higher than originally planned and by significant delays, which further inflate the overall costs of the change program. In general, these problems arise when dynamic complexity is ignored during the business analysis phase that precedes information technology system transformation plans. The inability to express dynamics in natural language, the difficulty of gaining an end-to-end picture of system dynamics, variations in healthcare procedures and practices, and, finally, a lack of clarity about the required care, prevention, and speed of treatment versus the expected results are major roadblocks in automating healthcare systems.

Reconstructing the 2007-2008 Financial Crisis

Using X-Act® OBC Platform, we were able to reconstruct the global financial singularity of 2007 and 2008. Obviously, the constructed solution came too late to spare the world from economic turmoil, and it is not our intent to join the after-the-event agents of prophecy. Rather, our goal is to use a scientific approach to reverse engineer what happened and, in doing so, prove the usefulness of mathematical emulation as a preventative solution.

Financial Dynamics in 2007

To analyze the root cause of the 2007-2008 financial crisis, we built a mathematical emulator representing the financial market dynamics prior to the crash. This included the financial engines and the dynamic flows among them, and explicitly the dependencies on the internal structure and on the external influencers that impact market performance.

In building the mathematical emulator, we put particular emphasis on the first category of direct dependencies (in couples: edges, vertices, or a mix), as well as on the indirect dependencies that arise because every member of the first category can in turn be influenced by components that have already been impacted.

A perturbed structural model can be mathematically expressed as a set of participating inequalities. Each inequality contributes, through a different amplitude and frequency, to the overall solution. Mathematically based solutions of this class are generally validated against three criteria:

  1. The solution should be representative of the process dynamics
  2. The solution should be accurate with respect to a real environment’s outcome under identical initial conditions
  3. The solution should allow a predictable precision that provides sufficient confidence for decision making and execution under different initial conditions
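The notion of many inequalities, each contributing through its own amplitude and frequency, can be illustrated with a simple superposition (a toy, not the validated model). Individually bounded terms can occasionally align in phase, so the aggregate approaches the sum of all amplitudes, which is how small resonances produce behavior that no single term predicts:

```python
import math

# Three illustrative contributions, each with its own amplitude and frequency.
contributions = [
    {"amplitude": 1.0, "frequency": 0.5},
    {"amplitude": 0.6, "frequency": 1.3},
    {"amplitude": 0.3, "frequency": 2.9},
]

def aggregate(t: float) -> float:
    """Overall solution at time t: the superposition of all contributions."""
    return sum(c["amplitude"] * math.sin(c["frequency"] * t) for c in contributions)

# Scan the aggregate over time: near t = 5*pi all three terms peak together,
# so the maximum approaches 1.0 + 0.6 + 0.3 = 1.9.
peak = max(abs(aggregate(t / 100)) for t in range(10000))
print(round(peak, 2))  # 1.9
```

Watching only one contribution at a time, each looks harmless; the risk appears only in the aggregate, which is the point of validating the full perturbed solution rather than its parts.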

Mathematically speaking, we can consider that the coupling of business dynamics and initial conditions expresses and produces speeds, while the influencers provoke accelerations. If we project these principles onto the financial meltdown of 2007, we find that the system was stable (at least apparently) until the foreclosure rate went from 2 to 3 percent, an increase corresponding to more than 50% for subprime mortgages, which represented 10.5% of the US housing market (and were supposedly a low-risk financial instrument). As is the case with mortgages, this amount was not distributed; the full amount represented a direct loss to the financial institutions.

The Singularity is Precipitated by a Heating of the Market

Treasury Secretary Henry Paulson said, “The housing bubble was driven by a big increase in loans to less credit worthy, or subprime, borrowers that lifted homeownership rates to historic levels.” But foreclosure propagation alone is not sufficient to explain the collapse of the whole financial system.

Our discovery of dynamic complexity through mathematical emulation allowed us to point clearly to the real cause of the crisis: the market dynamics produced a singularity, driven by dynamic complexity. This was not identifiable by the risk management and mitigation methods used pre-crisis. In short, the housing crisis was quickly overshadowed by a much bigger crisis caused by the dependencies of intertwined financial structures (connected through financial instruments such as mortgage-backed securities) that, by design or by accumulation, caused the market collapse.

In other words, if someone wanted to design a system that would lead to a devastating crisis, the financial system of 2006 through 2008 was the perfect example of doing just that. Take a similar structure with the same level of dynamic complexity, replace housing with credit cards as the predominant financial instrument, back it up with securities (along the lines of mortgage-backed securities), and you will have the same recipe for disaster.

Using the X-Act® OBC Platform service, we were able to model how the crisis propagated from mortgage and home equity origination to contaminate the whole financial market, with a much larger amplitude than the variance in home mortgage interest rates over 20 years. Mortgage and housing market conditions create cycles of economic crisis that repeat approximately every six years. However, the singularity that surprised even the U.S. Federal Reserve Chairman in 2007 had much deeper and wider causes, namely the effects of dynamic complexity, than was the case in previous cycles.

The Singularity Hits When the System Becomes Out of Control

We used the causal emulation step of our methodology to measure the impact of the system contamination agent, which had been identified as mortgage-backed securities (MBS). After MBSs hit the financial markets, they were reshaped into a wide variety of financial instruments with varying levels of risk. This bundling of activities blurred the traceability of the original collateral assets. Interest-only derivatives divided the interest payments made on a mortgage among investors. When interest rates rose, the return was good. If rates fell and homeowners refinanced, then the security lost value. Other derivatives repaid investors at a fixed interest rate, so investors lost out when interest rates rose since they weren’t making any money from the increase. Subprime mortgage-backed securities were created from pools of loans made to subprime borrowers. These were even riskier investments, but they also offered higher dividends based on a higher interest rate to make the investment more attractive to investors.

In August 2008, one out of every 416 households in the United States had a new foreclosure filed against it. When borrowers stopped making payments on their mortgages, MBSs began to perform poorly. The average collateralized debt obligation (CDO) lost about half of its value between 2006 and 2008. And since the riskiest (and highest-returning) CDOs were comprised of subprime mortgages, they became worthless when nationwide loan defaults began to increase. This was the first domino in the series that fell throughout the U.S. economy.

How MBSs Brought Down the Economy

When the foreclosure rate began to increase late in 2006, it released more homes onto the market. New home construction had already outpaced demand, and when a large number of foreclosed homes (representing up to 50% of the subprime mortgages) became available at deeply discounted prices, builders found that they couldn’t sell the homes they’d built. A homebuilder can’t afford to compete with foreclosures priced at 40 to 50 percent below the expected sales price. The glut of homes on the market drove housing prices down, and some homeowners came to owe more than their homes were worth. Simply walking away from a house they couldn’t afford became an increasingly attractive option, and foreclosures increased even more.

Had a situation like this occurred before the advent of mortgage-backed securities, a rise in mortgage defaults would still have created a ripple effect on the national economy, but possibly without reaching a singularity. It was the presence of MBSs that amplified the effect on the U.S. economy and made escaping the singularity impossible.

Since MBSs were purchased and sold as investments, mortgage defaults impacted all dimensions of the financial system. Huge investment banks whose portfolios held large, predominant MBS positions saw their net worth sink as the MBSs began to lose value. This was the case with Bear Stearns. The giant investment bank’s worth sank so far that competitor JPMorgan purchased it in March 2008 for $2 per share. Seven days before the buyout, Bear Stearns shares had traded at $70[2].

Because MBSs were so prevalent in the market, it wasn’t immediately clear how widespread the damage from the subprime mortgage fallout would be. During 2008, new write-downs of billions of dollars on one institution’s or another’s balance sheet made headlines daily and weekly. Fannie Mae and Freddie Mac, the government-chartered corporations that fund mortgages by guaranteeing them or purchasing them outright, sought help from the federal government in August 2008. Combined, the two institutions owned about $3 trillion in mortgage investments[3]. Both are so entrenched in the U.S. economy that the federal government seized control of the corporations in September 2008 amid sliding values; Freddie Mac posted a $38 billion loss from July to August of 2008[4].

When Fannie Mae and Freddie Mac won’t lend money or purchase loans, direct lenders become less likely to lend money to consumers. If consumers can’t borrow money, they can’t spend it. When consumers can’t spend money, companies can’t sell products; low sales lessen a company’s value, and its stock price declines. So on one side the capital market tightens because of the MBSs and CDOs, while on the other side corporations suffer as consumers cut their consumption and money and credit gradually tighten. Businesses then trim costs by laying off workers, so unemployment increases and consumers spend even less. When enough companies (not only banks and other financial institutions but also corporations and ultimately investors) lose value at once, the stock market crashes. A crash can lead to a recession. A bad enough crash can lead to a depression; in other words, an economy brought to its knees[5].

Preparing to Avoid the Next Financial Singularity

The predictive emulation shows us that excessive integration of financial domains, without understanding how dynamic complexity will be generated, is equivalent to creating a major hidden risk that will always surprise everyone involved: either the predictive tools cannot easily identify the role dynamic complexity plays, or imprudent constructs seem to be acceptable options, provided the world is flat and the dynamics are linear. Both assumptions are obviously wrong. Science teaches us the concept of determinism (all events, including human action, are ultimately determined by causes external to the will).

Undoubtedly, financial markets will continue to pose grave risks to the welfare of the global economy as long as economic stakeholders are unable to accurately measure dynamic complexity or understand all the steps required to protect their assets. We must test and expand upon the presented universal risk management methods to define a better way forward, one that allows economic stakeholders to predict a potential singularity with sufficient time to act and avoid crises of this magnitude in the future.



Dynamic Complexity’s Role in the 2007-2008 Financial Crisis

After the economic events of 2007 and 2008, many economic experts claimed that they had predicted such a disaster would occur, but none were able to preemptively answer the key questions that would have helped us prepare for the event or even lessen its impacts: When will it occur? What will be the cause? How will it spread? And how widely will its impacts be felt?

The then-U.S. Treasury Secretary, Henry Paulson, recognized that the credit market boom obscured the real danger to the economy. Despite all the claims of knowing the true triggers of the economic recession, we believe the importance of dynamic complexity has been overlooked in everyone’s analysis. The real cause of the economic meltdown can be traced to intertwined financial domains, which generated considerable dynamic complexity that in turn made it difficult to determine the possible outcomes. There is no doubt that the subprime foreclosure rate started the domino effect, but had that degree of inter-domain dependency not already existed, the effect on the market would have been much less severe.

While some seasoned experts have alluded to the same conclusion, most have treated market complexity (in a broad and immeasurable sense) as a significant factor in creating the risk that ultimately caused a global recession. Yet most also conclude that this aggregate risk of complexity was not something the market could be expected to predict, control, and mitigate in time to avoid the disaster.

While dynamic complexity can be identified after the fact as the origin of many unknowns that ultimately lead to disaster, most financial services and economic risk management models treat market fluctuations as quantifiable only on the basis of past experience or historical data. However, the next economic shock will come from a never-before-seen risk. And the distance between economic shocks will continue to shrink as banks add more technology and more products and services, further compounding the inherent risk of dynamic complexity.

A Better Path Forward

Revealing the unknowns through the joint power of deconstruction theory and mathematical perturbation theory allows us both to determine the potential origins of a cause (letting the evolution unfold as reactions to an influencer’s changes) and to predict the singularity/chaos point and the distance to that point in time. Because we privilege determinism, we consider that any observation points to a cause, and that such a cause should be discoverable by the tools we possess. Plainly, we are trying to convince you that, “If we know the cause, then we will be able to predict when it will occur, the severity of the risk, and the amplitude of a possible singularity.” This will then afford us the time needed to mitigate the risk.

Figure 1. Spread between 3-month LIBOR and 3-month Expected Federal Funds Rate (January 2007 – May 2008 daily)

By reviewing graphs of the financial market from 2007 to 2008, we discovered that market changes happened at the vertices as well as at the edges, as we would normally expect. The example in Figure 1 illustrates part of the story.

According to Stephen G. Cecchetti[1], the divergence between the two rates is typically less than 10 basis points. This small gap arises from an arbitrage that allows a bank to borrow at LIBOR (the London Interbank Offered Rate), lend for three months, and hedge the risk that the comparable overnight index swap (OIS) rate will move by trading in the federal funds futures market, leaving only a small residual level of credit and liquidity risk that accounts for the usually small spread. But on August 9, 2007, the divergence between these two interest rates jumped to 40 basis points.
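The spread itself is simple arithmetic: one basis point is 0.01 percentage points, so the gap between two annualized rates converts to basis points by multiplying the percentage-point difference by 100. A minimal sketch follows; the rate quotes are hypothetical round numbers chosen only to match the magnitudes discussed above, not historical data:

```python
def libor_ois_spread_bps(libor_pct: float, ois_pct: float) -> float:
    """Spread between two annualized rates quoted in percent, in basis points.

    One basis point (bp) = 0.01 percentage points, so a 0.40-point gap
    between LIBOR and the OIS rate is a 40 bp spread.
    """
    return (libor_pct - ois_pct) * 100

# Hypothetical illustrative quotes (percent per annum), not historical values:
normal_bps = libor_ois_spread_bps(5.36, 5.30)    # a typical sub-10 bp gap
stressed_bps = libor_ois_spread_bps(5.50, 5.10)  # a 40 bp gap, the size of
                                                 # the jump seen on Aug 9, 2007
```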

The problem lies in the worst case. Each vertex and each edge directly connects to every other vertex and every other edge, and therefore represents the direct effects covered by perturbation theory, as presented in Figure 2. But because each one is perturbed, the analysis will not be sufficient to understand the full picture unless we add the indirect effects on top of the direct ones. This points precisely to the difference between Paulson’s analysis and ours.
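The distinction between direct and indirect effects can be made concrete with a toy numerical sketch. This is our own illustration with invented coupling strengths, not the calibrated model behind the analysis: a shock to one domain propagates directly through a dependency matrix, and applying the matrix a second time captures the indirect effects that those direct effects themselves induce.

```python
# Toy dependency matrix for three coupled domains (housing, banking, equity).
# M[i][j] is the fraction of a shock to domain j that hits domain i directly.
# All coupling strengths are invented for illustration only.
M = [
    [0.0, 0.5, 0.2],  # housing
    [0.4, 0.0, 0.3],  # banking
    [0.3, 0.6, 0.0],  # equity
]

def matvec(A, x):
    """Multiply matrix A by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

shock = [1.0, 0.0, 0.0]        # unit shock to the housing domain
direct = matvec(M, shock)      # first-order (direct) effects
indirect = matvec(M, direct)   # indirect effects induced by the direct ones
total = [s + d + i for s, d, i in zip(shock, direct, indirect)]
```

Stopping at `direct` is the incomplete analysis the passage warns about; adding `indirect` (and, in a fuller treatment, still higher-order terms) is what captures the compounding that the dependency graph creates.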


Figure 2. Schematic Representation of Financial Market Dependencies and Crisis Points

In short, Paulson attributed the crisis to the housing bubble, while we attribute it to dynamic complexity, which encompasses multiple dependencies across the whole market: the housing market, equities, the capital market, corporate health, and banking solvency. These in turn impacted the money supply, causing massive unemployment and a severe recession.

A major result of our analysis was still not obvious or fully elucidated when Henry Paulson offered his own analysis: the housing bubble indeed precipitated the crisis, but the real cause was the large proportion of dynamic complexity hidden in the overall construct of the financial system. This means that any regulation, organization, and consequently any surveillance of the system should measure the impact of dynamic complexity if we hope to adequately predict and mitigate its risk.

[1] Cecchetti, Stephen G. “Crisis and Responses: The Federal Reserve in the Early Stages of the Financial Crisis.” Journal of Economic Perspectives, vol. 23, no. 1 (Winter 2009): 51-75.