Preventing financial crises has become a concern of average citizens all over the world and an aspiration of academics from disciplines outside finance. In many ways, better management of

financial risks can be achieved by more effective use of information in financial institutions. In

this paper, we develop a network-based framework for modeling and analyzing systemic risks

in banking systems by viewing the interactive relationships among banks as a financial network.

Our research method integrates business intelligence (BI) and simulation techniques, leading to

three main research contributions in this paper. First, by studying techniques such as the HITS algorithm used to estimate the relative importance of web pages, we discover a network-based analytical principle called the Correlative Rank-In-Network Principle (CRINP), which can guide an analytical process for estimating the relative importance of nodes in many types of networks beyond web pages. Second, based on the CRINP principle, we develop a novel risk estimation algorithm, Link-Aware Systemic Estimation of Risks (LASER), for understanding relative financial risks in a banking network and reducing systemic risks. To validate

the LASER approach, we evaluate its merits by comparing it with conventional

approaches such as Capital Asset Ratio and Loan to Asset Ratio as well as simulating the effect

of capital injection guided by the LASER algorithm. The simulation results show that LASER

significantly outperforms the two conventional approaches in both predicting and preventing

possible contagious bank failures. Third, we develop a novel method for effectively modeling one major source of bank systemic risk, correlated financial asset portfolios, as banking network links. Another innovative aspect of our research is that the simulation of systemic risk scenarios is based on real-world data from Call Reports in the U.S. In those scenarios, we observe that the U.S.

banking system can sustain mild simulated economic shocks until the magnitude of the shock

reaches a threshold. We suggest our framework can provide researchers with new methods and insights for developing theories about bank systemic risk. The BI algorithm, LASER, offers financial

regulators and other stakeholders a set of effective tools for identifying systemic risk in the

banking system and supporting decision making in systemic risk mitigation.

Keywords: Systemic risk, contagious bank failures, business intelligence, financial asset

portfolios, simulation




                                     1. INTRODUCTION

Many economists consider the recent global financial tsunami (2007 – present) as the worst

financial crisis since the Great Depression in the 1930s (Bullard et al. 2009; Pendery 2009). It

was triggered by a liquidity shortfall in the United States (U.S.) banking system and resulted in

the bankruptcy of major financial institutions like Lehman Brothers, pushing the system to the

brink of a system-wide collapse. Eventually the U.S. Government implemented the Troubled

Asset Relief Program (TARP) to bail out major banks by purchasing their troubled assets. Even

so, more than 160 U.S. banks failed and were taken over by the Federal Deposit Insurance

Corporation (FDIC) in 2008 and 2009, while only 11 banks failed between 2003 and 2007

(Federal Deposit Insurance Corporation 2010). To prevent a system-wide breakdown of the

banking system in the future, the 2010 Dodd–Frank Wall Street Reform and Consumer

Protection Act passed by the U.S. Congress created the Financial Stability Oversight Council to

monitor and mitigate the systemic risk of the U.S. banking system. However, most people,

including the financial regulators, failed to predict this banking crisis due to the lack of efficient

tools and methods to model and monitor bank systemic risk. Moreover, research on modeling

and analyzing systemic risk in the banking system is limited, especially studies that use business

intelligence techniques.

Although intensively used in describing the current financial crisis, the term systemic risk is not

yet well defined, especially in banking systems (Elsinger et al. 2006; Kaufman et al. 2003).

Systemic risk in general refers to the risk of the breakdown of an entire system rather than

individual components (Kaufman et al. 2003). Its existence in banking is often indicated by

correlated bank failures in a single country, for instance, the collapse of all three major

commercial banks in Iceland during the 2007 global financial crisis due to their difficulties in

refinancing short-term debts on deposits in the United Kingdom. While the precise meaning of

systemic risk in banking systems remains ambiguous, various definitions appear in finance

literature. Two major types of systemic risk in banking have been identified (Kaufman et al. 2003).

The first type is defined by the Bank for International Settlements (BIS) as “the risk that the

failure of a participant to meet its contractual obligations may in turn cause other participants to

default with a chain reaction leading to broader financial difficulties” (BIS 1994). Accordingly,

the U.S. Federal Reserve (the Fed) provides an operational definition for this type of systemic

risk in its interbank payment system: “systemic risk may occur if an institution participating in a

private large dollar payments network was unable or unwilling to settle its net debt position.

When such a settlement failure occurred, the institution’s creditors might also be unable to settle

their commitments. Serious repercussions could, as a result, spread to other participants in the

private network” (Federal Reserve System 2001). As these definitions show, this type of systemic

risk in banking is mainly based on correlation with causation among banks through direct

interbank payment obligations. When the first “domino” bank fails, it will default on its

interbank payment obligations towards other banks, causing more banks to fail, which in turn

knock down even more banks in a chain reaction (Kaufman et al. 2003). In short, this type of

systemic risk and its contagious bank failures can be summarized as “the risk of a chain reaction

of falling interconnected dominos” (Kaufman 1995).

The second type of systemic risk also involves contagious bank failures through less direct

interbank relationships than interbank payment obligations. This type of risk is mainly based on

the shared third-party risk exposures among banks (e.g., holding the same stock such as IBM).

When the first “domino” bank suffers from an external economic shock, e.g., the default of a

specific type of mortgage-backed security (MBS) that causes severe losses in its equity capital,

doubts and uncertainty about the banks holding the same MBS and subject to the same adverse

effects will quickly emerge among financial market participants (e.g., banks and investors). To

avoid further loss, these market participants will reassess whether and to what extent the banks

with correlated financial asset (e.g., security and bond) portfolios will be affected by the original

economic shock. The more similar the risk exposure for these banks’ correlated financial asset

portfolios to that of the first “domino” bank, the greater their possible loss, and the more likely

the market participants will withdraw funds from these banks. This chain reaction may lead to a

liquidity shortfall for these banks, and could even induce system-wide insolvency problems as

more and more banks are closely connected through their portfolios of shared financial products

(Elsinger et al. 2006; Kaufman et al. 2003).

These two types of risk reveal the two major sources of systemic risk in banking systems

–correlation in the value of bank financial asset portfolios and interbank payment links that can

contagiously transmit insolvency of a single bank to other linked banks in a chain reaction. These

two types of risk are not independent of each other and often occur simultaneously. Figure 1

shows an example of how these two types of systemic risks occur and cause the contagious

failures of three banks. In this example, pairs of the three banks A, B and C share a set of

similar financial products, represented by X, Y and Z, respectively. The solid arrows represent

interbank payment obligations and the dotted arrows indicate the ownership relationships

between the shared sets and the three banks. An economic shock can reduce the value of the set of financial products X that Bank A holds, causing A to default on an interbank payment to Bank

B. Since B also holds certain shares of the financial products in set X, this shock then generates a

loss for bank B both from the reduced value of X it holds and Bank A’s defaulted payment

obligation. If this loss from these two sources together is greater than B’s capital, it will force B

to default on its interbank payment to Bank C, thereby producing a loss greater than C’s capital,

and so on, causing more bank failures through the interbank payment process. The above

example illustrates how contagious bank failures happen at a micro level on a small scale. It was

suggested that when such contagious failures affect more and more banks and degrade the

interbank credit market in banking systems, the quality of available information about the banks

also deteriorates and the information costs will increase drastically (Kaufman et al. 2003). The

lack of quality bank information and appropriate systematic assessment methodology means the

financial market participants (e.g., banks and individual investors) are unable to accurately

estimate each bank’s potential loss and differentiate banks on the basis of their systemic risk.

Without adequate knowledge about the magnitude of each bank’s systemic risk, market

participants will quickly transfer their funds to safer financial instruments such as U.S. treasury

bills and will not lend at any rate. At this step, the initial shock and the subsequent contagious

failures will have a major negative impact on the solvency of the whole banking system and

could even result in a credit freeze in global financial markets. This is the crucial time for the

relevant financial authorities (e.g., central banks such as the Fed) to take immediate action to

provide liquidity to the banking system. One of the most often used strategies by financial

authorities is to inject a large amount of emergency funds into key banks. Since such funds are

often limited, a key question is how to determine which banks should receive the capital

(injection) in order to stabilize the whole banking system.

[Figure omitted: banks A, B, and C linked by interbank payment obligations (solid arrows) and by correlated financial asset sets X, Y, and Z (dotted arrows).]

                       Figure 1. An Example of Contagious Bank Failures
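The cascade just described can be sketched as a small simulation. This is a toy illustration of the narrative above, not the paper's model: the capital figures, exposures, the failure rule (accumulated loss exceeding equity capital), and the assumption that a failed bank defaults on its full payment obligation are all our own simplifications.

```python
def simulate_contagion(capital, holdings, payments, shocked_asset, shock_loss):
    """Propagate an economic shock through asset holdings and interbank
    payment links; return the set of failed banks."""
    # Direct losses: each bank loses a fraction of its shocked-asset exposure
    loss = {b: holdings[b].get(shocked_asset, 0) * shock_loss for b in capital}
    failed, changed = set(), True
    while changed:
        changed = False
        for debtor, creditor, amount in payments:
            if debtor not in failed and loss[debtor] > capital[debtor]:
                failed.add(debtor)            # debtor becomes insolvent ...
                loss[creditor] += amount      # ... and defaults on its payment
                changed = True
        for b in capital:                     # banks failing from losses alone
            if b not in failed and loss[b] > capital[b]:
                failed.add(b)
                changed = True
    return failed

# Hypothetical figures for the three banks of Figure 1
capital  = {"A": 10, "B": 12, "C": 8}
holdings = {"A": {"X": 30}, "B": {"X": 10}, "C": {}}   # exposure to asset set X
payments = [("A", "B", 9), ("B", "C", 9)]              # (debtor, creditor, amount)
print(sorted(simulate_contagion(capital, holdings, payments, "X", 0.5)))
# ['A', 'B', 'C']: a 50% shock to X topples all three banks in sequence
```

With a milder shock (e.g., `shock_loss=0.1`), no bank's loss exceeds its capital and the returned set is empty, mirroring the threshold behavior discussed later in the paper.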

In order to address this question, various financial and accounting methods have been proposed

to model and measure the level of systemic risk a bank has, such as the capital adequacy ratio.

However, existing financial and accounting methods suffer from three major problems that

prevent them from effectively modeling and measuring systemic risk in banking systems. First,

as its name indicates, modeling systemic risk requires a system perspective rather than focusing on

risk at an individual bank level as most existing financial and accounting measures do. Second,

since the two major sources of systemic risk are identified as the interbank payment obligations

and correlated risk exposures in the shared set of financial products (i.e., correlated financial

asset portfolio) between two banks (Eisenberg and Noe 2001; Elsinger et al. 2006; Freixas et al.

2000; Kaufman et al. 2003), a relational perspective is also critical in modeling systemic risk in

banking systems. However, there is a lack of systematic approaches for modeling correlated

financial asset portfolios. Third, system-level data for banking systems, such as interbank payment transaction records, is often not publicly available. Moreover, systemic risk in banking

systems is the risk of occurrences of very rare and extreme events like the contagious bank

failures we described in Figure 1. However, most banks’ historical data do not typically include

such extreme events that cause contagious bank failures. Therefore, traditional financial risk

management methods such as historical Value at Risk (VaR) analysis, which mainly utilizes

historical data, are not appropriate for analyzing systemic risk. These data- and methodology-

related issues largely limit researchers’ ability to conduct empirical research on bank systemic risk.


In this research we develop a network-based framework for modeling and analysis of bank

systemic risk to address the three challenges mentioned above. This framework first includes a

banking network model in which banks are nodes and the interbank payment obligations as well

as correlated financial asset portfolios are links. This network model provides us with both a

system and relational perspective to study systemic risks in a banking network. In addition, we

distinguish our model by developing a systematic approach for modeling correlated financial

asset portfolios as bank network links. At the core of our framework is a new Business

Intelligence algorithm for risk estimation in a banking network called Link-Aware Systemic

Estimation of Risks (LASER), which measures the relative systemic risks of banks in banking

systems. We adopt a BI approach mainly because it is data-centric. The

implementations of our empirically derived network model, risk estimation algorithm and risk

scenario simulation all require large-scale real-world data collection, extraction and analysis.

Moreover, the banking industry is a data-rich environment that suits the BI approach well and provides various kinds of information about public bank holding companies, such as income statements, balance sheets, and financial regulatory reports.

The design of the LASER algorithm is inspired by a network principle best represented by

Kleinberg’s HITS (1999b) algorithm. That is, the importance of a network node depends on the

number of its incoming links and the importance of its linked nodes. Before HITS, this principle

had been used in citation analysis to develop the famous “impact factor” for measuring the

prominence of scientific journals (Garfield 1972), and in social network analysis to identify

cliques (Hubbell 1965). Kleinberg operationalized this principle for measuring the importance of web pages on the World Wide Web by assigning a global weight score to each page, in such a way

that “a node's global weight equals to the sum of its internal weight and the global weights of all

nodes that link to it, scaled by their connection strengths” (Kleinberg 1999a).
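This recursive definition can be illustrated with a short fixed-point iteration. The following is a generic sketch of the quoted principle, not Kleinberg's exact HITS formulation; the toy network, the max-normalization step, and the iteration count are our own assumptions.

```python
def global_weights(internal, links, iterations=50):
    """Iterate the quoted rule: a node's global weight is its internal
    weight plus the global weights of nodes linking to it, scaled by
    connection strength; normalized each round to stay bounded."""
    w = dict(internal)
    for _ in range(iterations):
        new = dict(internal)                      # start from internal weights
        for src, dst, strength in links:
            new[dst] += strength * w[src]         # add scaled incoming weight
        norm = max(new.values())
        w = {node: v / norm for node, v in new.items()}
    return w

# Hypothetical 3-node network: node C receives links from A and B
internal = {"A": 1.0, "B": 1.0, "C": 1.0}
links = [("A", "C", 0.5), ("B", "C", 0.8)]        # (source, target, strength)
scores = global_weights(internal, links)
print(max(scores, key=scores.get))                # C: only node with in-links
```

The node with the most (and strongest) incoming links from weighty neighbors ends up with the highest global weight, which is the intuition CRINP generalizes beyond web pages.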

We term this conceptual principle the Correlative Rank-In-Network Principle (CRINP). Here

the web page importance could be replaced by other node characteristics depending on the nature

of nodes. We believe CRINP is applicable to the context of banks’ systemic risk, mainly because, like the importance of a web page, a bank’s systemic risk also originates from other nodes (i.e., banks) in the bank network through their interconnections. The level of a

bank’s systemic risk largely depends on its possible loss due to other banks’ failures. Contrary to

this principle, traditional financial risk management approaches rely heavily on information

from a bank’s accounting statements (Deventer et al. 2004) at an individual bank level to

generate various risk measures. Such top-down risk measures mainly reflect a centralized

perspective of a bank’s endogenous risk, but fail to model the systemic risk from its peers and

their interrelationship. On the other hand, based on the CRINP, our LASER algorithm is a

bottom-up approach that effectively captures 1) a bank’s systemic risk originating from

collective bank nodes and 2) the mutually reinforcing relationships among banks’ systemic risks.


Further, we study systemic risk scenarios in the banking system using real-world U.S. bank data

in order to simulate contagious bank failures and the effects of different capital injection

strategies on preventing such failures during financial crises. Our simulation results

demonstrate that new capital injection strategies based on the LASER algorithm can more

effectively reduce the possibility of contagious bank failures than the existing strategies using

traditional bank risk measures such as capital adequacy ratio (CAR).

We claim three major contributions for this research. First, by studying techniques such as the HITS algorithm used to estimate the relative importance of web pages, we discover a

network-based analytical principle called the Correlative Rank-In-Network Principle (CRINP),

which can guide an analytical process for estimating relative importance of nodes in many types

of networks beyond web pages. Second, based on the CRINP principle, we develop a novel risk estimation algorithm, Link-Aware Systemic Estimation of Risks (LASER), for understanding relative financial risks in a banking network and reducing systemic risks.

To validate the LASER approach, we evaluate its merits by comparing it with conventional approaches such as Capital Asset Ratio and Loan to Asset Ratio as well as simulating the effect of capital injection guided by the LASER algorithm. The simulation

results show that LASER significantly outperforms the two conventional approaches in both

predicting and preventing possible contagious bank failures. Third, we develop a novel method for effectively modeling one major source of bank systemic risk, correlated financial asset portfolios, as banking network links.

In addition, our scenario-based simulation provides finance researchers and practitioners an alternative to traditional risk management methods like historical Value at Risk (VaR) for modeling and evaluating the impacts of contagious bank failures under extreme market conditions.

From the technical perspective, our LASER algorithm also extends the Hyperlink-Induced Topic Search (HITS) algorithm, which mainly deals with simple graphs (i.e., networks with a single type of link), to handle multigraphs (i.e., networks with multiple types of links), since there are

two major types of interbank relationships that cause systemic risk in banking systems.
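This section does not spell out LASER's actual update rule, but the multigraph extension can be illustrated by letting one HITS-style iteration draw on both link types at once. The mixing weight alpha, the unit internal weights, and the toy data below are our own assumptions, not the LASER formulation.

```python
def multigraph_weights(internal, payment_links, correlation_links,
                       alpha=0.5, iterations=100):
    """HITS-style iteration over a multigraph: contributions from the two
    link types are blended by the mixing weight alpha."""
    w = dict(internal)
    for _ in range(iterations):
        new = dict(internal)
        for src, dst, s in payment_links:         # interbank payment links
            new[dst] += alpha * s * w[src]
        for src, dst, s in correlation_links:     # portfolio correlation links
            new[dst] += (1 - alpha) * s * w[src]
        norm = max(new.values())
        w = {b: v / norm for b, v in new.items()}
    return w

internal = {"A": 1.0, "B": 1.0, "C": 1.0}
payments = [("A", "B", 1.0), ("B", "C", 1.0)]         # A owes B, B owes C
correlations = [("A", "B", 0.9), ("B", "A", 0.9)]     # symmetric portfolio link
w = multigraph_weights(internal, payments, correlations)
print(max(w, key=w.get))  # B: exposed through both link types
```

Bank B scores highest because it accumulates risk through both a payment link and a portfolio-correlation link, illustrating why a single-link-type algorithm would understate its exposure.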

The remainder of this paper is structured as follows. In Section 2, we briefly review the two

streams of literature relevant to this study. Section 2.1 reviews the literature on bank systemic

risk. Section 2.2 briefly surveys the business intelligence literature on bank risk management

technologies and discusses representative network-based algorithms that are relevant to our

research. Section 3 describes our bank network model with interbank clearing mechanisms and

illustrates how to construct bank network links based on their correlated portfolios of financial

products. In Section 4, we discuss our BI algorithm LASER which measures and ranks banks

based on the level of their systemic risk. In Section 5, using real-world U.S. bank data, we

present the empirical and simulation-based findings on the effectiveness of our proposed

algorithm in measuring and mitigating the systemic risk in banking systems. Then we discuss the

implications of our findings in the context of bank systemic risk management. We summarize

our findings in Section 6 and discuss future research directions.


                                  2. LITERATURE REVIEW

               2.1 Modeling Sources of Systemic Risk in Banking Systems

As mentioned in the previous section, systemic risk is not a well-defined concept in the existing

finance literature (Kaufman et al. 2003). This risk is rooted in the interrelationships among banks.

Existing bank risk management techniques or measurements, mainly developed for individual

banks, therefore are not very useful in modeling and managing systemic risk. Elsinger et al.

(2006) identified the major challenge for modeling bank systemic risk as capturing the two major

risk sources: 1) an insolvent bank may default on its interbank payment obligations to another

bank and cause it to fail, thereby triggering a domino effect which is often called contagious

bank failure (Aghion et al. 2000); 2) an adverse economic shock may cause significant losses in

banks’ correlated financial asset portfolios and result in simultaneous multiple bank failures.

These two risk sources are not independent of each other and often happen together in reality.

The first source of bank systemic risk – the interbank payment obligations - has been intensively

modeled and studied in finance literature. Rochet and Tirole (1996) identified the relationship

between interbank loans and systemic risk. Angelini et al. (1996) empirically studied an

interbank clearing network and found that on average 4% of network participants were able to

trigger contagious failures. Eisenberg and Noe (2001) first analyzed the properties of inter-firm

directional cash flows featuring cyclical interdependence. In addition, since contagious bank

failure is very rare in banking systems, there is little historical data available for studying

systemic risk. Thus simulation approaches are popular for analyzing interbank exposures that

may cause systemic risk. For instance, Sheldon and Maurer (1998), Degryse and Nguyen (2004),

Wells (2002), and Upper and Worms (2004) all use simulation methods to study contagious bank

failures through interbank exposures that originally result from the simulated failure of a single

bank. These studies all focus on one source of systemic risk, the interbank payment relationships,

largely ignoring the other major source – banks’ correlated financial asset portfolios.

On the other hand, Elsinger et al. (2006) suggested that it is necessary to study both sources to

gain a full understanding of the systemic risk in banking systems. They adopted a historical

simulation approach and used real-world data on banks’ exposures in common categories of

financial products to simulate banks’ financial asset portfolios. However, the relationships

among specific financial products in different banks’ portfolios are not explicitly modeled. This

is primarily due to the fact that the information about a bank’s exposure to specific financial

products is often confidential and available only to the bank itself. It is nearly impossible to gain

such information from multiple banks, not to mention all the banks in the whole banking system.

Therefore, there is a lack of effective methods for modeling the second source of bank systemic

risk - the relationships among the correlated banks’ financial asset portfolios.

          2.2 Measuring Systemic Risk with Business Intelligence Techniques

Another important challenge is the lack of effective measurement of systemic risk in banking

systems. Modern financial risk management techniques and measures are often based on

probability distributions of various risk events. However, this approach may not apply to

systemic risk since events are rare and little relevant historical data are available to generate risk

probability distributions. Moreover, existing financial and accounting measures mainly focus on

banks’ endogenous risks at the individual level rather than risks involving the interbank

relationships at the system level, such as systemic risk. Thus, in spite of the rich research data

sources in the banking industry (e.g., financial and accounting statements, regulatory reports),

there is a lack of sophisticated methods to model and measure bank systemic risk in general. A

stream of finance research attempting to model bank systemic risk using network notation

(Eisenberg and Noe 2001; Elsinger et al. 2006; Rochet and Tirole 1996; Sheldon and Maurer

1998) mainly used networks as a mathematical representation of interbank payment relationships.

The network topological information and other link-based information like the nodes’ authority

scores (Kleinberg 1999b) are often not used in such research due to the lack of advanced data

analysis techniques.

Business intelligence (BI), as a data-centric approach, includes many methodologies, tools and

technologies that specialize in large-scale data collection, modeling and analysis. BI-related

techniques have been applied in many industries such as airlines (Wixom et al. 2008) and health

care (Carte et al. 2005), business domains like financial credit rating (Huang et al. 2004), and

particularly in the E-Commerce domain (Abbasi and Chen 2008; Huang et al. 2007; Marshall et

al. 2004). In addition, most relevant to our research, there is a stream of studies that adopted

business intelligence techniques to analyze real-world business data for predicting bank or firm

failures. In statistics, risk is defined as the probability of an event seen as undesirable. Therefore,

if we define a bank failure as the undesirable event, the bank failure prediction problem

then can be recast as one that measures the risk of potential bank failures. The BI techniques

used in these studies include various Artificial Intelligence (AI) and Data Mining (DM)

algorithms such as Neural Networks (NN), Support Vector Machines (SVMs), Discriminant

Analysis (DA), Genetic Algorithm (GA), and Bayesian Networks (BN).

We briefly review these studies in bank failure predictions in terms of the BI techniques they

used. Discriminant analysis (DA) was one of the first techniques adopted (Altman 1968) to

predict firm failures across different industries. Later Sinkey (1975) applied it to bank failure

predictions. However, recent studies (Lee et al. 2005; Tsukuda and Baba 1994) have shown that

the back propagation neural network (BPNN) algorithm outperformed it in predicting firm

bankruptcy. NN-related techniques were first adopted by Odom and Sharda (1990) to predict

firm failures. Then Tam and Kiang (1990) applied NN and demonstrated its effectiveness in

predicting bank failures. Later Tam (1991) adopted a variation of NN - the back propagation

neural network (BPNN) algorithm - for predicting bank bankruptcy. Another set of more recent

data mining methods termed as Support Vector Machines (SVMs) were also used in bank failure

prediction research. Like the two previous techniques, SVM was initially used for predicting

corporate bankruptcy (Shin et al. 2005). Then Wang et al. (2005) developed and employed a

fuzzy support vector machine method for credit risk assessment for the banking industry.

In addition, recent years have witnessed a growing trend in combining different types of BI

models and techniques for predicting bank failures. Min et al. (2006) proposed to use genetic

algorithms (GA) to optimize the feature subsets and parameters of SVM in order to improve its

performance in bankruptcy prediction. A similar study done by Wu et al. (2007) also used GA to

optimize the parameters of SVMs for predicting bankruptcy. They empirically examined the

performance of their combined GA-SVM model against that of other methods, such as DA, NN and

standard SVMs, to predict financial crises in Taiwan. Their results showed that the GA-SVM

model outperforms other methods, implying the integration of different BI techniques may be an

effective approach for improving the prediction performances of single techniques.

In summary, these studies adopted various BI techniques on real-world business data to analyze

the relationships between the data items and bank failures. Based on the relationship knowledge

learned from the data, these techniques aim to measure the risk of future bank failures or firm

bankruptcy (i.e., predicting bank failures or firm bankruptcy). However, the BI techniques

adopted or developed in these studies mainly analyze bank or firm data at the individual

organization level. Their models and algorithms may not be directly applied to study and analyze

bank systemic risk, which is rooted in the interbank relationships. Therefore, a set of BI

techniques that can model and measure bank systemic risks largely based on relational data is

greatly needed.

                                    2.3 Research Questions

To summarize, research on bank systemic risk lacks methods for modeling the relationships

among the correlated banks’ financial asset portfolios. In addition, BI techniques that can

analyze relational data and measure bank systemic risk are greatly needed. Corresponding to

these two research gaps in the literature relevant to our research, we aim to answer the following

two research questions:

•   How can correlated bank financial asset portfolios be effectively modeled in order to study
    systemic risk in banking systems?

•   How can we effectively identify the level of a bank’s systemic risk through its interbank
    payment relationships and correlated bank financial asset portfolio?

To answer the first question, we develop a method, described in Section 3.1, that models the

relationships among bank financial asset portfolios through the correlations in their returns. To

answer the second question, we develop an algorithm called Link-Aware Systemic Estimation of

Risks (LASER) to measure systemic risk in banking systems based on information about

interbank payments and correlated bank financial asset portfolios.

                          3. MODELING A BANK NETWORK

In this section, we present a systematic approach to modeling a banking system as a bank

network, in which nodes are banks and links are correlated financial asset portfolios as well as

interbank payment obligations. In our bank network model, there are two major types of links.

First, we propose a new method to model correlations between banks via financial product

portfolios. Second, we introduce interbank payment obligations as another type of link between

banks in the network. We also summarize the known clearing mechanisms found in existing

interbank payment network models and present our approach that incorporates asset portfolios in

a bank network.

             3.1 Modeling Bank Correlation via Financial Asset Portfolios

While various interbank payment models have been proposed in previous research, the major

challenge for modeling the sources of bank systemic risk lies in how to untangle the complex

bank interrelationships via asset portfolios. A bank’s asset portfolio consists of various types of

financial products that can be traded in major financial markets, such as U.S. Treasury securities,

mortgage-backed securities, cash instruments, and financial derivatives. Unlike interbank

transaction data that can be obtained from central settlement systems like the Fedwire,

information about bank holdings or exposure to specific financial products is not available to

regulators and researchers at a single data source. Further, a bank constantly changes its

holdings of various financial products due to market volatility. This limitation on data sources

makes it very difficult, if not impossible, to conduct empirical studies about complex

interrelationships among asset portfolios at a financial product level. For this reason, we aim to

model bank relationships at a portfolio level.

According to modern portfolio theory in finance (Markowitz 1952), specific risks associated with

individual financial products in a portfolio can be reduced through diversification. However,

there exist system-wide risks common to all financial products within a portfolio that cannot be

diversified away. Such system-wide risk is the cause of correlations among banks in the

returns of their asset portfolios and thereby induces systemic risks in banking systems, which

may lead to contagious bank failures. Based on this important feature of systemic risk in a bank

network, we use correlations in asset portfolios as one type of link in our bank network model.

In particular, we assume that a stronger correlation indicates a more similar composition of the two banks'
portfolios and therefore exposure to similar system-wide risks. In our approach, information

about asset portfolio links is generated by filtering relevant information in the time series of bank
portfolio returns using a correlation coefficient matrix. This is done by 1)

determining the synchronous correlation coefficient of the logarithmic value difference of a bank

portfolio at a selected time horizon, and 2) selecting correlation coefficients greater than 0.5 as

the asset portfolio links in the bank network model. For this purpose, we adopt the operational

definition of correlation coefficient on returns of financial securities (Bonanno et al. 2004):

\rho_{ij} = \frac{\langle r_i r_j \rangle - \langle r_i \rangle \langle r_j \rangle}{\sqrt{\left(\langle r_i^2 \rangle - \langle r_i \rangle^2\right)\left(\langle r_j^2 \rangle - \langle r_j \rangle^2\right)}} \qquad (1)

where i and j are bank IDs, r_i = \ln F_i(t) - \ln F_i(t - \Delta t), F_i(t) is the value of the asset portfolio held by bank i at trading time t, and \Delta t is the time horizon, which in our empirical simulation is chosen as one quarter. The logarithmic value difference of a portfolio between two subsequent time periods is calculated as a proxy for the percentage change in portfolio returns. The correlation

coefficient \rho_{ij} is computed between all possible pairs of portfolios (banks) in the dataset. The statistical average, indicated in this paper by the notation \langle \cdot \rangle, is a temporal average calculated over the entire period under study. By definition, \rho_{ij} ranges from -1 (fully negatively correlated pairs of bank portfolios) to 1 (fully positively correlated pairs) and measures the strength of the linear relationship between two banks' periodic changes in portfolio returns; as \rho_{ij} approaches 0, portfolios i and j become uncorrelated. The matrix of correlation coefficients C = [\rho_{ij}] is symmetric with \rho_{ii} = 1 on the main diagonal. In this study, we assume that only banks with strong positive correlations in their asset portfolio returns are adversely affected by the same economic shocks. Accordingly, we establish a portfolio link between two banks i and j if and only if their correlation coefficient exceeds a positive threshold. As suggested by our domain expert, we set this threshold to 0.5 (i.e., \rho_{ij} > 0.5) to represent a significant positive correlation between two banks' financial asset portfolios.
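The link-construction procedure above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function name `portfolio_links` and the quarterly value layout of the input are our assumptions.

```python
import numpy as np

def portfolio_links(F, threshold=0.5):
    """Build asset-portfolio links from quarterly portfolio values.

    F: array of shape (T, N) -- portfolio value F_i(t) for N banks over T quarters.
    Returns an N x N boolean matrix (True where rho_ij exceeds the threshold).
    """
    # r_i(t) = ln F_i(t) - ln F_i(t - dt): log value differences as a return proxy
    r = np.diff(np.log(F), axis=0)          # shape (T-1, N)
    C = np.corrcoef(r, rowvar=False)        # synchronous correlation matrix [rho_ij]
    links = C > threshold                   # keep only strongly positive correlations
    np.fill_diagonal(links, False)          # no self-links (rho_ii = 1 by definition)
    return links
```

A link is established for a pair of banks only when the temporal correlation of their log returns exceeds the 0.5 threshold suggested by the domain expert.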

       3.2 Modeling Interbank Payment Obligations and Clearing Mechanisms

Interbank payment obligation has been identified as another major source of bank systemic risk

(Eisenberg and Noe 2001; Elsinger et al. 2006; Kaufman et al. 2003). Similar to Elsinger’s

(2001) approach, we define bank i's interbank payment obligation to bank j as an interbank

payment link in our bank network model. All interbank payment obligations are bilaterally netted

before clearing. Thus, an interbank payment link between two banks exists if and only if the

netted payment obligation is larger than zero (from the sender bank to the receiver bank) over the

period under study. To mathematically model the interbank payment links, we use an N \times N matrix L, in which l_{ij} represents bank i's (netted) nominal payment obligation towards bank j. Therefore, the value of bank i's total obligations towards the rest of the banking system can be computed as d_i = \sum_{j=1}^{N} l_{ij}. We further define a normalized matrix \Pi \in [0,1]^{N \times N} by dividing each entry by the corresponding total obligation d_i:

\pi_{ij} = \begin{cases} l_{ij} / d_i & \text{if } d_i > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (2)
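Equation (2) can be sketched as a small routine; the function name `normalize_obligations` is ours, for illustration only.

```python
import numpy as np

def normalize_obligations(L):
    """Normalize the nominal obligation matrix L into Pi (Equation 2).

    L[i, j] = bank i's netted payment obligation towards bank j.
    Returns (Pi, d), where d[i] = sum_j L[i, j] is bank i's total obligation and
    Pi[i, j] = L[i, j] / d[i] when d[i] > 0, else 0.
    """
    L = np.asarray(L, dtype=float)
    d = L.sum(axis=1)                               # d_i = sum_j l_ij
    # divide row-wise by d_i where d_i > 0; rows with no obligations stay zero
    Pi = np.divide(L, d[:, None], out=np.zeros_like(L), where=d[:, None] > 0)
    return Pi, d
```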

To operationalize a bank network with two types of links, we develop a clearing mechanism for

settling all interbank payment obligations. As mentioned in Section 2, existing interbank

payment network models, as well as their clearing mechanisms, do not have a means for

modeling correlated bank portfolios. This new clearing mechanism improves on previous work

by explicitly including financial portfolios in bank network models. This mechanism works as follows.


Consider a set of N banks; each bank i \in N has a clearing payment p_i^*, which represents bank i's ability to pay off its obligations to other banks. p_i^* consists of three major components. First, since i's financial asset portfolio consists of financial products that can be quickly sold for cash, it is included in the clearing payment p_i^*. Second, payments received by bank i from other banks can also be used to pay off i's obligations; thus, p_i^* also includes \sum_{j} \pi_{ji} p_j^*. Third, p_i^* includes i's capital reserve e_i, which is required by governmental financial authorities to offset a bank's unexpected liquidity shortfall. Therefore, using the notation defined in the previous subsections, bank i's clearing payment p_i^* can be mathematically defined as follows:

p_i^* = \begin{cases} d_i & \text{if } F_i + \sum_{j=1}^{N} \pi_{ji} p_j^* + e_i \ge d_i \\ F_i + \sum_{j=1}^{N} \pi_{ji} p_j^* + e_i & \text{if } d_i > F_i + \sum_{j=1}^{N} \pi_{ji} p_j^* + e_i \ge 0 \\ 0 & \text{if } F_i + \sum_{j=1}^{N} \pi_{ji} p_j^* + e_i < 0 \end{cases} \qquad (3)

where F_i is the value of bank i's asset portfolio and \sum_{j=1}^{N} \pi_{ji} p_j^* is the amount of payments bank i can receive from other banks. We assume that each bank has limited liability and that assets are shared proportionally among creditors in case of a bank failure. Therefore, the amount of payment bank i can receive from its counterparty bank j depends on j's actual payment ability \pi_{ji} p_j^* instead of the nominal value of the payment obligation \pi_{ji} d_j. Through simulation-based experiments, we

implement this payment vector by developing a variant of the fictitious default algorithm

(Eisenberg and Noe 2001). We explain this algorithm in detail as follows.

Input: (1) number of banks N, (2) interbank payment matrix L = [l_{ij}], (3) bank capital reserve vector e = \{e_1, ..., e_N\}, (4) vector of banks' financial asset portfolios F = \{F_1, ..., F_N\}

Output: (1) clearing payment vector p^* = \{p_1^*, ..., p_N^*\} for the input banking system, (2) defaulting sequence of banks Def

Step 1. Initialization
     1.1. Set the initial clearing payment vector p^* = d, where d_i = \sum_{j=1}^{N} l_{ij}, i.e., the total nominal value that bank i owes to other banks.
     1.2. Normalize the interbank payment matrix L into \Pi = [\pi_{ij}]:
          For each i \in N,
               For each j \in N,
                    If d_i = 0, set \pi_{ij} = 0,
                    Else set \pi_{ij} = l_{ij} / d_i

Step 2. Repeat the following sub-steps until there are no new bank failures.
     2.1. Try to clear the banking system with the current clearing payment vector p^*.
     2.2. If there is at least one new bank failure under p^*, add the defaulting banks to the defaulting sequence Def; otherwise, report no new bank failures and terminate the algorithm.
     2.3. Update the clearing payment vector as:

          p^* := \Lambda(p^*)\left(\Pi^{T}\left(\Lambda(p^*) p^* + (I - \Lambda(p^*)) d\right) + e + F\right) + (I - \Lambda(p^*)) d,

          where \Pi is the normalized payment obligation matrix and \Lambda(p^*) is a diagonal matrix in which all elements are zero except that \Lambda(p^*)_{ii} = 1 when bank i fails under the current clearing payment vector p^*.

Step 3. Output the final value of p^* and the defaulting sequence of banks Def.
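A minimal sketch of this clearing procedure is given below. It uses a simple fixed-point (Picard) iteration of Equation (3) starting from full nominal payment, rather than the exact matrix update of Step 2.3; the function name `clearing_vector` and the convergence tolerance are our assumptions.

```python
import numpy as np

def clearing_vector(L, e, F, max_iter=1000, tol=1e-10):
    """Sketch of a fictitious-default-style clearing computation.

    L: N x N nominal obligation matrix, e: capital reserves, F: portfolio values.
    Returns (p_star, default_sequence), where default_sequence lists the banks
    that newly fail at each round of the iteration.
    """
    L = np.asarray(L, float); e = np.asarray(e, float); F = np.asarray(F, float)
    d = L.sum(axis=1)                              # total nominal obligations d_i
    Pi = np.divide(L, d[:, None], out=np.zeros_like(L), where=d[:, None] > 0)
    p = d.copy()                                   # Step 1: start from full payment
    defaulted = np.zeros(len(d), dtype=bool)
    default_seq = []
    for _ in range(max_iter):                      # Step 2: iterate Equation (3)
        # p_i = min(d_i, max(0, F_i + sum_j pi_ji p_j + e_i))
        p_new = np.minimum(d, np.maximum(0.0, F + Pi.T @ p + e))
        new_defaults = (p_new < d - tol) & ~defaulted
        if new_defaults.any():                     # record newly failing banks
            default_seq.append(sorted(np.flatnonzero(new_defaults).tolist()))
            defaulted |= new_defaults
        if np.allclose(p_new, p, atol=tol):        # no change: clearing reached
            break
        p = p_new
    return p, default_seq                          # Step 3
```

For example, a bank owing 10 while holding only 5 in portfolio value and reserves clears at 5 and enters the defaulting sequence.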

This payment vector is a critical instrument in our simulation analysis since it helps determine the conditions under which a bank defaults on its payment obligations. Fundamentally, a bank fails if it cannot fully meet its payment obligations even under the assumption that all other banks meet theirs in full. This condition can be formulated as F_i + \sum_{j=1}^{N} \pi_{ji} d_j + e_i - d_i < 0. Corresponding to the two major sources of bank systemic risk, a contagious bank default may happen under one of the following conditions:

• First, other banks default on their payment obligations to bank i, causing i's failure, i.e., F_i + \sum_{j=1}^{N} \pi_{ji} d_j + e_i - d_i \ge 0, but F_i + \sum_{j=1}^{N} \pi_{ji} p_j^* + e_i - d_i < 0.

• Second, a system-wide economic shock may significantly reduce the value of bank i's own asset portfolio and those of its correlated banks. These correlated banks may default due to the shock and fail to pay bank i in time, causing i's default, i.e., F_i + \sum_{j=1}^{N} \pi_{ji} d_j + e_i - d_i \ge 0, but F_i^* + \sum_{j=1}^{N} \pi_{ji} p_j^* + e_i - d_i < 0, where F_i^* denotes the value of bank i's asset portfolio after the shock.

In this payment vector, we assume that F is a random variable, thereby enabling our model to handle uncertainty. However, under this assumption, no closed-form solution can be found for the distribution of the payment vector p^*. Following Elsinger (2006), we adopt an iterative approach under which each draw from the distribution of F is defined as a scenario. As mentioned earlier, Eisenberg and Noe (2001) proved that there exists a unique clearing payment vector p^* for each scenario. Therefore, we can assess the expected bank failure rate and the scale of losses due to contagious failures across different systemic risk scenarios, given a distribution of F. More detail about how we determine the distribution of F is given in Section 5.



                    4. MEASURING SYSTEMIC RISK IN A BANK NETWORK

Once we construct a bank network model with interbank payments and correlation of asset

portfolios as links among banks, the next challenge is how to quantitatively measure systemic

risks associated with nodes and links in this network model. Previous research in network

analysis has developed various link analysis algorithms to rank the importance of nodes. These

algorithms are widely adopted in web search engines that effectively search and rank web pages

based on their relative importance following the incoming hyperlinks from other web pages.

Among these algorithms, the most famous two are Google’s PageRank algorithm (Brin and Page

1998) and the Hyperlink-Induced Topic Search (HITS) algorithm developed by Kleinberg

(1999b). HITS is most relevant to our study as it identifies the importance of a web page based on its incoming hyperlinks from other web pages. In this case, a hyperlink transmits the

recognition from one web page to another web page. The collective recognition from all

incoming network links to a web page builds up its relative importance in the World Wide Web.

We believe that this approach of ranking relative importance of web pages can also be applied in

ranking systemic risk of banks in a bank network.      We refer to this correlative approach to

ranking node importance in a network as the Correlative Rank-In-Network Principle (CRINP). In

this paper, we generalize this principle and extend it to liquidity risk management in the banking

industry. However, a bank network is more complex than the web page network because the former contains two types of links between banks while the latter has only one type of link between web pages. The interbank payment links and asset portfolio links in a bank network model serve as two channels for transmitting a bank failure's negative impacts to the other banks linked with the failing bank.

Next, we introduce the HITS algorithm in detail. The HITS algorithm introduces two scores for

each web page node: 1) the authority score, which estimates the importance or value of this web

page, and 2) the hub score, which estimates the importance of its hyperlinks to other pages.

Authority and hub scores are computed in terms of one another in a mutual recursion. The

authority score of a web page M is computed as the sum of the normalized hub scores of web

pages that have hyperlinks pointing to M. M’s hub score is the sum of the normalized authority

scores of the pages M points to. The HITS algorithm will update and normalize these two scores

for multiple iterations until they converge. The intuition for this mechanism is that a web page is

important if it is pointed to by many other important web pages. However, the original HITS

algorithm suffers from two major problems when applied to our bank network model. First,

HITS, as well as many other link analysis algorithms, was initially designed to only rank web

pages and deal with one type of links – the web page hyperlinks. Our bank network model

consists of two types of links which are the sources of bank systemic risk. Second, when

calculating the authority and hub scores, HITS treats all links as equally important, since hyperlinks are indistinguishable from one another and cannot express the strength of recognition.

However, in the banking system interbank payment links and asset portfolio links are not

homogeneous. Big banks often have larger interbank transactions with each other and higher

correlated exposure in their asset portfolios than smaller banks.
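The mutual recursion of HITS described above can be sketched in a few lines; this minimal version, with our own function name `hits`, operates on a plain hyperlink adjacency matrix and ignores the two limitations just discussed.

```python
import numpy as np

def hits(A, iters=50):
    """Minimal HITS sketch. A[i, j] = 1 if page i links to page j.

    Returns (authority, hub) score vectors, normalized to unit length.
    """
    n = A.shape[0]
    auth, hub = np.ones(n), np.ones(n)
    for _ in range(iters):
        auth = A.T @ hub                       # authority: sum of in-linking hub scores
        hub = A @ auth                         # hub: sum of authority scores pointed to
        auth /= np.linalg.norm(auth) or 1.0    # normalize each iteration
        hub /= np.linalg.norm(hub) or 1.0
    return auth, hub
```

On a small graph where pages 0 and 1 both link to page 2, page 2 receives the highest authority score while pages 0 and 1 share equal hub scores.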

Based on the aforementioned CRINP principle, we propose a new ranking algorithm called Link-Aware Systemic Estimation of Risks (LASER). LASER aims to quantitatively measure a bank's

systemic risk level and rank the banks in terms of systemic risk. Similar to HITS, LASER has

two types of scores for each bank – the authority scores and the hub scores. Furthermore, since

there are two types of links in our bank network model, we calculate both scores for each link type in our algorithm. The authority scores of a bank represent the systemic risk it receives from other

banks in the system due to a possible failure under extreme conditions. On the other hand, a

bank’s hub scores represent the systemic risk it introduces to the system. We describe how

LASER quantitatively measures bank systemic risk with these scores in the following subsections.
          4.1 Measuring Systemic Risk Associated with Asset Portfolio Links

In our LASER algorithm, we model systemic risk as a bank’s loss (relative to its payment ability)

caused by another bank’s failure through two types of links. Thus, bank i’s authority score for

asset portfolio links is the aggregated negative impacts in its portfolio if all correlated banks fail.

Using the notation defined in our network model, we calculate bank i's authority score for financial asset portfolio links, fp_au_i, as:

fp\_au_i = \sum_{j \in N} \frac{\rho_{ji} \beta_{ji} F_i}{p_i^*} \, fp\_hub_j \qquad (4)

where \rho_{ji} is the correlation coefficient of portfolio returns between banks j and i, measuring the strength of the linear relationship between the two return series, and \beta_{ji} is the linear regression coefficient between bank j's and bank i's returns. F_i is the value of i's financial asset portfolio and p_i^* is i's payment ability. F_i / p_i^* is the fraction of i's payment ability that would be affected if all value in i's asset portfolio were lost. Together, \rho_{ji} \beta_{ji} F_i / p_i^* is the relative impact of bank j's failure on bank i's payment ability when their asset portfolios are strongly correlated (\rho_{ji} > 0.5). N is the set of banks in the banking system.

Similarly, we define bank i's hub score for financial asset portfolio links, fp_hub_i, as the total negative impact that i's failure would cause on its correlated banks' asset portfolios. It is calculated as:

fp\_hub_i = \sum_{k \in N} \frac{\rho_{ki} \beta_{ki} F_k}{p_k^*} \, fp\_au_k \qquad (5)

where \rho_{ki} \beta_{ki} F_k / p_k^* is the relative impact of bank i's failure on the correlated financial asset portfolio held by bank k.

        4.2 Measuring Systemic Risk Associated with Interbank Payment Links

In our LASER algorithm, we define bank i's authority score for interbank payment links, ip_au_i, as the total negative impact on i's payment ability if all banks that have interbank payment obligations towards i fail. It is calculated as:

ip\_au_i = \sum_{j \in N} \frac{l_{ji}}{p_i^*} \, ip\_hub_j \qquad (6)

where l_{ji} is bank j's payment obligation toward bank i. Accordingly, bank i's hub score for interbank payment links, ip_hub_i, is defined as the total negative impact of i's failure on all banks to which it has payment obligations. It is calculated as:

ip\_hub_i = \sum_{k \in N} \frac{l_{ik}}{p_k^*} \, ip\_au_k \qquad (7)

Similar to the HITS algorithm, these four scores are calculated for multiple iterations. In each

iteration, the two authority scores for the two types of links are calculated simultaneously. Then

two hub scores are computed. All four scores are then normalized. These two types of scores

mutually reinforce each other. Since all scores are defined as relative impacts on a bank's payment ability, we can use a total bank authority score, denoted total_au_i, to represent the total negative impact on bank i if all banks that link to i fail. Thus, total_au_i is calculated as total_au_i = fp_au_i + ip_au_i. Similarly, a total bank hub score for bank i, total_hub_i, which represents the total negative impact of bank i's default on the banking system, is calculated as total_hub_i = fp_hub_i + ip_hub_i.

The total authority and hub scores are then used to rank banks. The higher the total authority

score a bank has, the bigger the total negative impacts from other banks’ failures on its payment

ability and the more likely it is to fail. This score can be used to rank the banks most likely to fail during a financial crisis. The higher the total hub score a bank has, the bigger the total

negative impacts its failure will have on all relevant banks. The total hub score can be used to

identify banks that will have the largest negative impacts on the banking system during a

financial crisis. The main computational steps of the LASER algorithm are as follows.

Input: (1) number of banks N, (2) vector of the values of banks' financial asset portfolios F = \{F_1, ..., F_N\}, (3) correlation coefficient matrix of the changes in portfolio returns between banks [\rho_{ij}], (4) matrix of linear regression coefficients between the returns of banks [\beta_{ij}], (5) interbank payment matrix L = [l_{ij}], and (6) interbank payment clearing vector p^* = \{p_1^*, ..., p_N^*\}.

Output: (1) a ranked list of banks in terms of total bank authority scores in descending order, and (2) a ranked list of banks in terms of total bank hub scores in descending order.

Step 1. Initialization: Set the initial values of the banks' authority and hub scores for both types of links to the identity vector 1.

Step 2. Repeat the following sub-steps until the scores converge.
     2.1. Set fp_au_i = 0 and ip_au_i = 0.
     2.2. For each bank j \in N (j \ne i),
          2.2.1. If \rho_{ij} > 0.5, fp\_au_i := fp\_au_i + \frac{\rho_{ji} \beta_{ji} F_i}{p_i^*} fp\_hub_j,
          2.2.2. ip\_au_i := ip\_au_i + \frac{l_{ji}}{p_i^*} ip\_hub_j.
     2.3. Set fp_hub_i = 0 and ip_hub_i = 0.
     2.4. For each bank k \in N (k \ne i),
          2.4.1. If \rho_{ik} > 0.5, fp\_hub_i := fp\_hub_i + \frac{\rho_{ki} \beta_{ki} F_k}{p_k^*} fp\_au_k,
          2.4.2. ip\_hub_i := ip\_hub_i + \frac{l_{ik}}{p_k^*} ip\_au_k.
     2.5. Normalize fp_au, fp_hub, ip_au, and ip_hub.

Step 3. Ranking
     3.1. For each j \in N,
          3.1.1. total_au_j := fp_au_j + ip_au_j
          3.1.2. total_hub_j := fp_hub_j + ip_hub_j
     3.2. Rank the banks by total_au in descending order.
     3.3. Rank the banks by total_hub in descending order.
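To make the iteration concrete, the following sketch implements the score updates of Equations (4)-(7) in matrix form. It is a sketch under stated assumptions: the function name `laser_scores` is ours, clearing payments are assumed strictly positive, and the coefficient matrices are taken as given per ordered pair of banks.

```python
import numpy as np

def laser_scores(rho, beta, L, F, p_star, iters=50, threshold=0.5):
    """Sketch of the LASER score iteration (Equations 4-7).

    rho, beta: N x N correlation / regression coefficient matrices,
    L: interbank obligation matrix, F: portfolio values,
    p_star: clearing payment vector (assumed strictly positive here).
    Returns (total_au, total_hub).
    """
    rho, beta = np.asarray(rho, float), np.asarray(beta, float)
    L, F, p_star = np.asarray(L, float), np.asarray(F, float), np.asarray(p_star, float)
    N = len(F)
    # portfolio-link weight j -> i: rho_ji * beta_ji * F_i / p*_i,
    # kept only where the correlation is strongly positive (rho > 0.5)
    W_fp = np.where(rho > threshold, rho * beta, 0.0) * (F / p_star)[None, :]
    np.fill_diagonal(W_fp, 0.0)
    # payment-link weight j -> i: l_ji / p*_i
    W_ip = L * (1.0 / p_star)[None, :]
    np.fill_diagonal(W_ip, 0.0)

    fp_au, ip_au = np.ones(N), np.ones(N)
    fp_hub, ip_hub = np.ones(N), np.ones(N)
    for _ in range(iters):
        fp_au = W_fp.T @ fp_hub          # Eq. (4): impact received via portfolios
        ip_au = W_ip.T @ ip_hub          # Eq. (6): impact received via payments
        fp_hub = W_fp @ fp_au            # Eq. (5): impact caused via portfolios
        ip_hub = W_ip @ ip_au            # Eq. (7): impact caused via payments
        for v in (fp_au, ip_au, fp_hub, ip_hub):
            v /= np.linalg.norm(v) or 1.0    # Step 2.5: normalize all four scores
    return fp_au + ip_au, fp_hub + ip_hub    # Step 3: total authority and hub scores
```

Banks can then be sorted by the returned total authority and hub scores to obtain the two ranked lists of Step 3.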

                       5. SIMULATION-BASED EXPERIMENTS

We conduct a simulation-based investigation using both real-world and simulated data to

demonstrate how LASER can be used to predict contagious bank failures and identify key banks with the highest systemic risk, since failing banks damage banking systems mostly through contagious failures.

This simulation-based study consists of three components. First, we generate interbank

transactions based on official U.S. bank data and use these data to create high systemic risk

scenarios, thus simulating major financial market shocks that will cause contagious bank failures.

Second, we compare the simulation-based LASER predictions on contagious bank failures with

the prediction based on well-known financial risk measures such as capital adequacy ratio (CAR).

Third, using the generated systemic risk scenarios, we estimate the effects of various capital

injection strategies on banks selected according to the LASER algorithm and standard financial

risk measures, in order to demonstrate the impact of LASER on mitigating systemic risk in

banking systems.

                              5.1 Data and Scenario Generation

5.1.1 Call Report Data

Our main data sources are major U.S. banks' balance sheets, income statements

and other supervisory data from the quarterly reports of condition and income (commonly

referred to as Call Reports) filed with the Federal Financial Institutions Examination Council

(FFIEC). In addition, we use empirical findings from a stream of research (May et al. 2008;

Soramäki et al. 2007) about the Federal Reserve Wire Network (Fedwire), the primary U.S.

interbank payment settlement network, as the basis for simulating interbank payments.

Every national bank, state member bank, and insured nonmember bank in the U.S.

is required by the FFIEC to file a quarterly report of condition and income on the last day of each

calendar quarter, i.e., the report date. The information is extensively used by bank regulatory

agencies in their daily offsite bank monitoring activities. In addition to balance sheet and income

statement data, Call Reports contain a fairly comprehensive set of data required for supervisory

purposes such as capital adequacy statistics and exposure to various financial risks. Call

Reports are widely used by the federal and state banking authorities, bank rating agencies and the

academic community as an important financial data source for monitoring and studying bank

financial conditions. The Federal Deposit Insurance Corporation (FDIC) oversees the FFIEC and

is responsible for collecting call reports under the provision of Section 1817(a)(1) of the Federal

Deposit Insurance Act. In summary, Call Reports are a timely and critical public data source of

information about the status of the U.S. banking system (FDIC 2010).

          Table 1a. Basic Statistics of the Data Sample Selected for the Simulation Study

          Time Span: March 2001 - June 2010
          Number of Reports: 4,708
          Total Number of Reporting Banks: 204
          Average Number of Reporting Banks per Quarter: 124
          Average Bank Assets per Quarter: 6.6 trillion U.S. dollars

We collected 306,195 call reports involving 10,081 banks from the FDIC web site, covering March 2001 to June 2010. Table 1a shows the basic statistics of the data used in our study. After examining the completeness of information on capital reserves and financial asset portfolios for each quarter, 4,708 call reports involving 204 banks are included in our study. On average, 124 banks filed their quarterly call reports in each quarter from March 2001 to June 2010.

Table 1b. The Top 10 U.S. Banks in Terms of Their Average Total Assets from March 2001 to June 2010

     Rank  Bank                             Average Total Assets (in billions of U.S. dollars)
     1     BANK OF AMERICA, NATIONAL        1337.82
     2                                       998.38
     3     CITIBANK, N.A.                    820.01
     4     WACHOVIA BANK, NATIONAL           419.43
     5     CHASE MANHATTAN BANK              415.89
     6     WELLS FARGO BANK, NATIONAL        389.23
     7     FIRST UNION NATIONAL BANK         230.43
     8     COMPANY OF NEW YORK               214.55
     9     U.S. BANK NATIONAL ASSOCIATION    206.44
     10    FLEET NATIONAL BANK               193.86

Our selected data sample is representative of the FDIC bank dataset since it contains various

types of banks ranging from huge investment banks like JPMorgan Chase Bank to small regional

banks like Bank of Guam. Table 1b lists the top 10 banks in terms of average total assets from

2001 to 2010. As Table 1b shows, the banking industry is quite concentrated. The total assets of

the top 3 banks are significantly larger than the rest of the banks in the table. The average total

bank liabilities for all banks in our data sample amount to 2.17 trillion dollars per quarter over these ten years. Another important bank liquidity measure, the loan-to-asset ratio, varies considerably across banks, from 0% up to 51%; the average loan-to-asset ratio across banks is 2.8%.

5.1.2 Generation of High Systemic Risk Scenarios

For each quarter, we generate high systemic risk scenarios for the banking system in question

using both empirical data extracted from bank call reports and simulated interbank payment

transactions. These scenarios are based on the bank network model, with its two types of links and the corresponding clearing mechanism, developed in Section 3. Figure 2 shows the

basic structure of the scenario generation. To generate a high systemic risk scenario for the

banking system, we first extract data on banks’ correlated financial asset portfolios and establish

the correlation relationship among these bank portfolios using Equation (1). We then generate

the distribution of interbank payment transaction value and assign them to pairs of banks based

on previous empirical findings on interbank payment networks. In addition, we extract

information about banks' capital reserves from the call reports. Finally, we generate an economic shock that adversely affects the value of the financial asset portfolios for a set of banks. We describe these steps in detail in the following subsections.

[Figure 2 depicts the scenario generation process: information about banks' correlated financial asset portfolios, interbank payment value distributions, and bank status information from call reports feed into simulated economic shocks and capital injections to banks, which in turn shock the values of banks' financial asset portfolios.]

                             Figure 2. Structure of Scenario Generation

Extracting Data on Financial Asset Portfolios

After consulting with two domain experts, we select three accounting items from the call report

to calculate the value of a bank i’s financial asset portfolio Fi : (1) held-to-maturity securities, (2)

trading assets (minus trading liabilities), and (3) available-for-sale securities. The Statement of Financial Accounting Standards No. 115 requires a company (including bank holding companies) to classify its investments in securities into these three categories. We use

the total value for these three items as the value of a bank’s financial asset portfolio for each

bank. According to our domain experts, this portfolio also illustrates a bank’s major exposure to

market risks.

Generating Distributions of Interbank Payment Transactions

Transaction-level data on interbank payments is not available from call reports or other public data sources. As a result, we need to simulate the distribution of interbank payment transactions in our scenarios. As mentioned in Section 2, Soramäki et al.

(2007) studied the Fedwire interbank payment network, which consists of more than 7,500 banks,

for 62 days. In that network, the average number of payments per day is 345,000 and average

volume per payment is about 3 million U.S. dollars. The average degree of the payment network

is about 15. That research also reveals the frequency distributions and value for the interbank

payment transactions. Since the U.S. banks in our selected data sample are also using Fedwire as

their major means for settling the interbank payments, we utilize the information about the

interbank payment network statistics and distribution from the paper by Soramäki et al. (2007) in

order to generate interbank payment networks and simulate the distribution of the interbank

payment value in our study. We then assign simulated payment value to interbank payment links

in the generated network. The assignment of a payment value to a link is constrained by the sizes

(i.e., total assets) of the linked pair of banks.
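The generation step above can be sketched as follows. The Erdős–Rényi link model, the heavy-tailed value draw, and the 1% size cap are our illustrative assumptions; only the average degree (about 15) and the average payment value (about 3 million USD) come from the Soramäki et al. (2007) statistics.

```python
import random

def generate_payment_network(total_assets, avg_degree=15, mean_value=3e6, seed=42):
    """Sketch: draw a random directed interbank payment network whose expected
    degree matches the Fedwire statistics, then assign each link a payment
    value constrained by the sizes (total assets) of the linked pair."""
    rng = random.Random(seed)
    banks = list(total_assets)
    n = len(banks)
    # Link probability chosen so the expected degree is avg_degree.
    p = min(1.0, avg_degree / (n - 1))
    links = {}
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < p:
                # Heavy-tailed draw scaled by the reported average payment
                # value, capped at 1% of the smaller bank's total assets
                # (the cap is an illustrative assumption).
                cap = 0.01 * min(total_assets[banks[i]], total_assets[banks[j]])
                value = min(rng.lognormvariate(0.0, 1.0) * mean_value, cap)
                links[(banks[i], banks[j])] = value
    return links

# Hypothetical bank sizes for illustration.
assets = {f"bank{k}": 1e9 * (k + 1) for k in range(20)}
net = generate_payment_network(assets)
```

The constraint step at the end mirrors the text: a link's payment value can never exceed what the smaller of the two banks could plausibly settle.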

Extracting Data about Bank Capital Reserves

Information about the bank capital reserve e is retrieved from the tier 1 capital item in the regulatory capital section of a call report. Tier 1 capital is defined in both the Basel I and II Accords and mainly includes a bank's equity capital and disclosed reserves. It is a major measure of a bank's ability to sustain unexpected losses and serves as a safety net for bank solvency. Financial regulators in many countries require banks to keep a certain level of tier 1 capital as protection against various banking risks, including systemic risk. In our simulation, the value of a bank's tier 1 capital is given as the input of the capital reserve in our network model.

Simulating Economic Shocks on Banks’ Correlated Financial Asset Portfolios

Banks are exposed to economic shocks because of exposures to market risk through their

financial asset portfolios. Some extreme economic shocks may cause contagious bank failures as

described in Section 1. An important step of our simulation is to generate systemic risk scenarios

by introducing extreme economic shocks that may significantly reduce the banks’ asset portfolio

value. Our approach to generating market shocks is analogous to the economic shocks in the

recent subprime mortgage crisis imposed on the U.S. banking system. The collapse of the

housing bubble in 2007 caused the value of derivative products that linked to real estate prices

such as mortgage-backed securities (MBS) to plummet. Major U.S. banks with huge exposure to

these financial products were affected first and suffered great losses. The initial failures then

affected other banks in the systems worldwide through correlated financial asset portfolios and

interbank payment obligations. To emulate this propagation of bank failures, our generated scenarios first shock the top 10 banks with the largest financial asset portfolios by reducing their portfolio value by a percentage β (the shock rate). We choose 10 banks because this is

about 10% of the banks in each scenario. We then calculate the impacts of a shock on all banks

that have correlated portfolios (ρ ≥ 0.5) with these 10 banks. The negative impact on a correlated bank's portfolio from a top 10 bank is calculated using the linear regression coefficient ρ between the returns of these two banks. Then, in each simulated scenario, before the first interbank payment clearing, bank i's initial payment ability p_i* can be estimated as:

           (1 − β) F_i + Σ_{j=1}^{N} π_ji p_j* + e_i       if i ∈ T
  p_i* =   (1 − ρβ) F_i + Σ_{j=1}^{N} π_ji p_j* + e_i      if i ∉ T and i ∈ C        (8)
           F_i + Σ_{j=1}^{N} π_ji p_j* + e_i               if i ∉ T and i ∉ C

where T is the set of top 10 banks in terms of the financial asset portfolio value and C is the set

of banks that have correlated portfolios with the top 10 banks.
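The piecewise cases of this estimate can be sketched as below. For simplicity, the sketch takes the incoming interbank payments (the summation term) as given rather than solving the clearing fixed point, and all function and parameter names are illustrative.

```python
def initial_payment_ability(F, e, incoming, beta, top_banks, corr, rho):
    """Sketch of Equation (8): shocked payment ability before the first
    clearing. F, e, incoming map bank -> portfolio value, tier 1 capital,
    and received interbank payments (taken as given here); corr is the set
    of banks correlated (rho >= 0.5) with the top banks, and rho maps each
    correlated bank to its regression coefficient."""
    p = {}
    for i in F:
        if i in top_banks:
            shocked = (1 - beta) * F[i]            # direct shock on a top bank
        elif i in corr:
            shocked = (1 - rho[i] * beta) * F[i]   # indirect shock, scaled by rho_i
        else:
            shocked = F[i]                         # portfolio untouched by the shock
        p[i] = shocked + incoming.get(i, 0.0) + e[i]
    return p

# Toy three-bank example: A is a shocked top bank, B is correlated with A.
p = initial_payment_ability(
    F={"A": 100.0, "B": 50.0, "C": 30.0},
    e={"A": 10.0, "B": 5.0, "C": 3.0},
    incoming={},
    beta=0.5,
    top_banks={"A"},
    corr={"B"},
    rho={"B": 0.6},
)
```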

In our simulation study, we draw one scenario for each quarter from March 2001 to June 2010 and use real-world data from the FFIEC call reports to populate F_i and e_i in Equation (8). We then generate 1,000 interbank payment distributions for each scenario using the method described in the previous section. After generating these 38,000 (38 quarters × 1,000 interbank payment

distributions) scenarios, we settle the interbank payment transactions using the clearing payment

mechanism (Equation (3)) for all these scenarios. As a result of the simulated major economic

shocks, contagious bank failure may happen in the banking system and cause many banks to fail.

Thus, for each simulated scenario, we can calculate a bank failure rate γ_bf = g / m, where g is the number of failed banks and m is the total number of banks in that scenario. An average bank failure rate γ across all scenarios can be calculated as γ = Σ_{i∈L} γ_bf,i / SI, where L is the set of all simulated scenarios and SI is the total number of simulated scenarios.
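These two rates can be sketched as below, assuming a bank fails when its payment ability falls short of its interbank payment obligations; the names are illustrative.

```python
def bank_failure_rate(payment_ability, obligations):
    """Per-scenario failure rate gamma_bf = g / m: a bank counts as failed
    when its payment ability is below its payment obligations."""
    failed = [b for b in payment_ability if payment_ability[b] < obligations[b]]
    return len(failed) / len(payment_ability)

def average_failure_rate(rates):
    """Average failure rate gamma across all simulated scenarios."""
    return sum(rates) / len(rates)

# Toy example: only bank A (60 < 70) fails, so the rate is 1/3.
r = bank_failure_rate({"A": 60.0, "B": 40.0, "C": 33.0},
                      {"A": 70.0, "B": 30.0, "C": 33.0})
gamma = average_failure_rate([0.1, 0.2, 0.3])
```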

Comparing Average Bank Failure Rates at Different Shock Rates

[Figure 3: line chart of average bank failure rate γ against shock rate β (0 to 5).]

Figure 3. Average Bank Failure Rates for Generated Scenarios at Different Shock Rates after the First Interbank Payment Clearing

Figure 3 shows the average bank failure rates after the first payment clearing resulting from

generated scenarios at various shock rates. In reality, the shock rate β to a bank can be larger than 1 because under economic shocks the trading liabilities in its financial asset portfolio may exceed its value due to excessive leverage in financial derivative products. As the figure shows, when the shock rate β is relatively low (0.1 to 1.3), the average bank failure rates are also relatively low, ranging from 4.9% to 12.5%. Starting at β = 1.4, γ begins to increase dramatically. On average, 54% of the banks may fail when β increases to 1.7, indicating a system-wide collapse of the banking system. When β reaches 3.0, on average, 90% of the banks in the generated scenarios fail. Beyond that point, the effect of economic shocks on the banks in the simulated scenarios is no longer interesting since the banking system has already collapsed.

These simulation results show that a banking system may sustain relatively mild economic

shocks (0 < β ≤ 1.3) on a small number of leading banks in terms of exposure to financial asset portfolios. However, when the shocks exceed a certain threshold (β ≥ 1.4), the average bank failure rates increase dramatically, causing a collapse of the banking system. Further, at the higher end of the shock rate (β > 3), the effects of the shocks become marginal since most banks (more than 90%) have already failed. Therefore, in the additional experiments that assess the performance of LASER, we focus on scenarios with shock rates between 1.4 and 3.0 to study how contagious bank failures happen due to economic shocks.

Another important finding in this first round of simulation study is that a banking system tends to

stabilize after the first interbank payment clearing (i.e., settling interbank payment obligations

among banks) if there are no further economic shocks. In our model setting, interbank payment

clearing is a daily operation. In our simulation, starting with the second clearing (day), the

average failure rate drastically decreases to less than 2% if the shock does not continue,

demonstrating that most of the damage caused by systemic risk on the banking system occurs

after the first interbank payment clearing. Therefore, in the experiments described in the next subsection, we focus only on scenarios before and after the first payment clearing.

      5.2 Evaluation of the Link-Aware Systemic Estimation of Risks (LASER)

We conducted an experimental study using the systemic risk scenarios generated in Section 5.1

to evaluate the performance of the LASER algorithm in terms of accuracy in predicting

contagious bank failures and guiding capital injection strategies relative to predictions based on

two well-adopted liquidity risk measures – capital adequacy ratio (CAR) and loan to asset ratio.

Currently there are few standard measures for bank systemic risks. These two liquidity risk

measures are often used as proxies in the banking industry to evaluate bank systemic risk. This is
mainly because, at the individual bank level, bank systemic risk takes the form of a liquidity shortfall under extreme market conditions. The two comparison measures are explained as follows:

•  Capital Adequacy Ratio (CAR), also called the Capital to Risk-Weighted Assets Ratio, is a

    measure of the amount of bank capital represented as a percentage of its risk-weighted assets.

    CAR determines the capacity of a bank in terms of absorbing unexpected loss and meeting

    various other liabilities under extreme market conditions. Bank capital serves as a "cushion"

for potential losses in order to protect depositors. Most national banking regulators use CAR to monitor banks' financial status, thereby maintaining confidence in the banking system. It is

    defined as:

Capital Adequacy Ratio = (Tier 1 Capital + Tier 2 Capital) / Risk-Weighted Assets        (9)

    where tier 1 capital (defined in Section 5.1.2) is denoted as e in our network model. Tier 2

    capital is composed of supplementary capital, which is categorized in the Basel I Accord as

    undisclosed reserves, revaluation reserves, general provisions, hybrid instruments and

    subordinated term debt. A bank’s risk weighted assets are fund-based assets such as cash,

    loans, investments and other assets.

•  The loan to asset ratio measures a bank's total loans expressed as a percentage of its total

    assets. A high ratio indicates that a bank is loaned up and thus has low liquidity. In other

    words, the higher the ratio, the more likely a bank will suffer liquidity shortage during a

    financial crisis and may fail. It is calculated as:

Loan to Asset Ratio = Total Loans / Total Assets        (10)
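The two comparison measures are straightforward to compute; a minimal sketch of Equations (9) and (10):

```python
def capital_adequacy_ratio(tier1, tier2, risk_weighted_assets):
    """Equation (9): (tier 1 + tier 2 capital) over risk-weighted assets."""
    return (tier1 + tier2) / risk_weighted_assets

def loan_to_asset_ratio(total_loans, total_assets):
    """Equation (10): total loans over total assets."""
    return total_loans / total_assets

# Toy figures for illustration (e.g. in millions of USD).
car = capital_adequacy_ratio(8.0, 4.0, 100.0)
ltar = loan_to_asset_ratio(60.0, 100.0)
```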

Predicting Contagious Bank Failures

We generated 38,000 scenarios based on the information about the constructed financial asset

links and past interbank transaction patterns as the training dataset for LASER. We used data

from simulated extreme systemic scenarios before the first interbank payment clearing as the

testing dataset for LASER. We treated contagious bank failures after the first clearing as the

“unknown” future events to evaluate the prediction capability of LASER. LASER was set to

generate a ranked list of k banks in terms of total bank authority scores in descending order.

As defined in Section 4, bank i's total bank authority score total_au_i represents the total negative impact it receives if all its partner banks (through interbank payments or correlated portfolios)

failed. Thus a bank’s total bank authority score actually represents the systemic risk it receives

from the banking system. The higher this score for a bank, the more likely it will fail because of

liquidity shortage caused by other banks’ failures. Then the list of banks based on total bank

authority generated by LASER can be deemed as a prediction of the banks that are most likely to

fail in extreme systemic scenarios. In addition, based on the other two financial risk measures,

capital adequacy ratio and loan to asset ratio, we generate two ranked lists of banks for

comparison purposes. According to their definitions, the lower the value of these two measures

for a bank, the more likely it will fail under economic shocks.

We adopted the following prediction-quality metrics (Breese et al. 1998) for evaluating the

ranked lists generated by LASER and the two aforementioned liquidity risk measures:

Precision: P_s = Number of hits / k        (11)

Recall: R_s = Number of hits / Number of failed banks in the generated scenario        (12)

F measure: F_s = (2 × P_s × R_s) / (P_s + R_s)        (13)

where s is a generated scenario in which these measures are calculated and k is the number of banks in the ranked list generated by LASER. Since 85% of the simulated extreme systemic scenarios have more than 40 contagious bank failures after the first interbank clearing, we set the predicted number of contagious bank failures for LASER, k, to 40. To check for robustness, we repeat the experiment for three other settings of k (10, 20, and 50) and obtain similar results. In addition, as

mentioned earlier, we used shock rates from 1.4 to 3.0. The results are similar across scenarios at

different shock rates. Thus, we mainly report the results for scenarios with shock rates ranging

from 1.5 to 1.9. The experimental results are presented in Table 2. The performance measures in

bold font indicate the best performance across the three ranking methods under study for the

corresponding shock rate and number of predicted bank failures. Differences between the best

and the second-best results are statistically significant at the 5% level.
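These metrics can be sketched as below, with hits counted as predicted banks that actually failed; the function name and arguments are illustrative.

```python
def prediction_metrics(predicted, actual_failures, k):
    """Sketch of Equations (11)-(13): precision divides hits by the list
    length k, recall by the number of actual failures, and the F measure
    combines the two."""
    hits = len(set(predicted[:k]) & set(actual_failures))
    precision = hits / k
    recall = hits / len(actual_failures)
    f = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f

# Toy example: 2 of the 4 predicted banks (B and D) actually failed.
p, r, f = prediction_metrics(["A", "B", "C", "D"], ["B", "D", "E"], k=4)
```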

Table 2. Prediction Performance of LASER and Bank Risk Measures

Shock   Number of       Ranking Method                         Precision   Recall    F Measure
Rate β  Predicted
        Failures (k)
1.5     40              LASER (Total Bank Authority Score)     0.4145      0.3907    0.3915
                        Capital Adequacy Ratio                 0.3349      0.3009    0.3098
                        Loan to Asset Ratio                    0.3079      0.2973    0.2936
1.6     40              LASER (Total Bank Authority Score)     0.5033      0.3568    0.4109
                        Capital Adequacy Ratio                 0.4500      0.3128    0.3633
                        Loan to Asset Ratio                    0.3974      0.2783    0.3221
1.7     40              LASER (Total Bank Authority Score)     0.5901      0.3550    0.4353
                        Capital Adequacy Ratio                 0.5408      0.3169    0.3928
                        Loan to Asset Ratio                    0.5013      0.2996    0.3684
1.8     40              LASER (Total Bank Authority Score)     0.6618      0.3471    0.4496
                        Capital Adequacy Ratio                 0.6013      0.3097    0.4040
                        Loan to Asset Ratio                    0.5678      0.2958    0.3843
1.9     40              LASER (Total Bank Authority Score)     0.7540      0.3494    0.4741
                        Capital Adequacy Ratio                 0.6921      0.3170    0.4319
                        Loan to Asset Ratio                    0.6586      0.3048    0.4136

We observe that the LASER algorithm outperformed the two widely used bank risk measures in

all three performance measures for scenarios at all shock rates. The results indicate that the LASER algorithm is useful for capturing the principles in banking systems that govern bank systemic risk through interbank payments and correlated financial asset portfolios, which are, to a certain extent, not fully captured by the other two widely used bank risk measures.

Preventing Contagious Bank Failures through Capital Injections

Most previous research on bank systemic risk has been descriptive or exploratory in nature. One

of the intended contributions of this paper is to explore ways our algorithm can be utilized by

financial regulators to improve decision making in managing systemic risk. Predicting the impact of capital injections on banking systems, particularly under systemic risk scenarios, provides an ideal context for testing our LASER approach. The Troubled Asset Relief Program (TARP), implemented during the 2007 financial crisis, is a form of capital injection intended to stabilize the U.S. financial system and prevent contagious bank failures by providing enough liquidity to the financial market. One crucial question about capital injection

policies is into which banks regulators should inject capital to prevent or reduce the likelihood of contagious bank failures caused by economic shocks.

The total bank hub score generated by the LASER algorithm is a measure of total negative

impacts of a bank’s failure on the banking system. Therefore, capital injections to banks with

high total bank hub scores can effectively reduce the possible negative effect a failure of such a

bank may have on the banking system. Based on the scenarios generated in Section 5.1.2, we

inject a certain amount of capital to the banks selected with LASER algorithm and the two other

bank risk measures – capital adequacy ratio and loan to asset ratio. In this experiment, the lower

the two measures for a bank, the more likely it will fail under extreme market conditions and

cause more contagious failures of other banks. The amount of capital injected is expressed as a

percentage of a bank's capital reserve, called the capital injection rate α. Based on our domain experts' opinion, we set α at 100%, 200%, 300%, 400% and 500%, respectively, for different experimental configurations. The relative performance results for the three methods are consistent across these configurations. When α exceeds 500%, the effect of capital injections based on LASER in reducing average bank failure rates becomes marginal.

However, if we increase the number of banks receiving capital injections, LASER's performance continues to improve in terms of reducing average bank failure rates. Therefore, for reporting

purposes, we only present the results with capital injection rates of 100%, 300% and 500%, and

shock rates ranging from 1.5 to 1.9. The number of banks in which to inject capital is selected

based on domain expert opinion, corresponding to the number of banks being shocked in the

simulated systemic risk scenarios. We then observe and compare the changes in the average bank

failure rates of the simulated scenarios based on the three methods after the first interbank

payment clearing. Table 3 reports the reduction rates (i.e., percentage changes in the average

bank failure rates) for the scenarios with different lists of injection banks and experimental

settings. The average reduction rates in bold font indicate the best performance across the three

methods for the corresponding shock rate and number of banks to inject capital. Differences

between the best and second-best results are statistically significant at the 5% level.
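The selection step described above can be sketched as below; the function and its arguments are illustrative, under the simplifying assumption that an injection at rate α scales a bank's capital reserve by (1 + α).

```python
def inject_capital(reserves, hub_scores, n_banks, alpha):
    """Sketch: pick the n_banks with the highest total bank hub scores and
    raise their capital reserves by the injection rate alpha (1.0 = 100%)."""
    targets = sorted(hub_scores, key=hub_scores.get, reverse=True)[:n_banks]
    return {b: reserves[b] * (1 + alpha) if b in targets else reserves[b]
            for b in reserves}

# Toy example: A and C have the highest hub scores, so they are doubled.
out = inject_capital({"A": 10.0, "B": 20.0, "C": 30.0},
                     {"A": 0.9, "B": 0.1, "C": 0.5},
                     n_banks=2, alpha=1.0)
```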

Table 3. Performance of LASER and Bank Risk Measures in Mitigating Systemic Risk

Shock   Number of                                            Average Reduction Rate δ
Rate β  Banks to    Ranking Method                     α = 100%     α = 300%     α = 500%
        Inject
1.5     10          LASER (Total Bank Hub Score)       -11.43%      -18.10%      -29.32%
                    Capital Adequacy Ratio              -1.06%       -4.24%       -5.20%
                    Loan to Asset Ratio                 -2.17%       -5.26%       -7.38%
1.6     10          LASER (Total Bank Hub Score)       -12.44%      -21.86%      -31.98%
                    Capital Adequacy Ratio              -1.32%       -3.07%       -4.44%
                    Loan to Asset Ratio                 -2.26%       -5.64%       -6.50%
1.7     10          LASER (Total Bank Hub Score)       -13.56%      -24.40%      -35.97%
                    Capital Adequacy Ratio              -1.08%       -3.62%       -4.26%
                    Loan to Asset Ratio                 -1.99%       -4.91%       -6.83%
1.8     10          LASER (Total Bank Hub Score)       -14.86%      -24.64%      -38.47%
                    Capital Adequacy Ratio              -0.87%       -1.90%       -3.71%
                    Loan to Asset Ratio                 -1.94%       -3.87%       -6.71%
1.9     10          LASER (Total Bank Hub Score)       -14.73%      -26.43%      -37.33%
                    Capital Adequacy Ratio              -1.00%       -2.30%       -2.19%
                    Loan to Asset Ratio                 -2.25%       -5.46%       -4.74%

The reduction rate δ for each scenario is calculated as δ = (γ_b − γ_a) / γ_a, where γ_a is the average bank failure rate in the scenarios without capital injections and γ_b is the one with capital injections, so negative values indicate fewer failures. We then calculate and report the average reduction rate δ across all simulated scenarios with the corresponding configuration. The results show that LASER's

average reduction rates are significantly larger than the ones based on the two bank risk

measures across the systemic scenarios at all shock rates and capital injection rates. In general, these results indicate that capital injections to the 10 banks selected based on LASER's total bank hub scores outperform those based on the other two measures in reducing average bank failure rates. In addition, the reduction rates for all three methods increase as more capital is injected into the banking system. In other words, injecting capital into key banks identified by LASER, which pose greater systemic risk to the banking system, can significantly reduce the possibility of contagious bank failures.
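A minimal sketch of the reduction rate, using the sign convention of Table 3 (negative values mean the injection lowered the average bank failure rate):

```python
def reduction_rate(gamma_without, gamma_with):
    """delta = (gamma_with - gamma_without) / gamma_without: the relative
    change in the average bank failure rate caused by the injection."""
    return (gamma_with - gamma_without) / gamma_without

# Toy example: the failure rate drops from 50% to 40%, a 20% reduction.
delta = reduction_rate(0.5, 0.4)
```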

To check for robustness of the results, we repeat the experiment with different shock rates, different numbers of banks to shock, different capital injection rates, and different numbers of banks receiving capital injections. The results are consistent with those presented in Table 3: LASER outperforms the other two standard risk measures in reducing the average bank failure rate.


6. Conclusion

In this paper, we develop a network-based framework for modeling and ranking systemic risk in

banking systems. This framework consists of 1) a novel method for modeling the correlations of

banks’ exposures in their financial asset portfolios as network links, 2) a network-based BI

algorithm which measures the banks’ systemic risk through two types of interbank relationships

(links), and 3) a simulation-based evaluation on the performances of LASER in terms of

predicting contagious bank failures and aiding with capital injection decisions. We show that the

total bank authority score computed by LASER outperforms two widely-used bank liquidity risk

measures – the capital adequacy ratio and the loan to asset ratio - in terms of predicting

contagious bank failures. Our simulation also demonstrated that capital injections to the banks

that are ranked by LASER’s total bank hub scores as top 10 sources of systemic risk can more

effectively prevent possible contagious bank failures. In addition, applying our framework to a

comprehensive U.S. bank dataset extracted from FDIC Call reports led to two major findings.

First, the banking system can sustain relatively mild economic shocks. However, when the

shocks exceed a certain threshold, the banking system will start to collapse due to contagious

bank failures. Second, the banking system tends to stabilize after the first interbank payment

clearing if there are no further economic shocks.

We summarize the contributions and potential implications of our framework as follows. The

innovation of the modeling method is that we can learn the similarity in banks’ exposures to

systemic risk originating from their financial asset portfolios without knowing the exact portfolio

compositions. Based on that knowledge, we can effectively quantify the strength of the

previously vague connections between these portfolios and model them as bank network links.

Our BI algorithm, LASER, combined with the bank network model, innovates by effectively
capturing and measuring the systemic risk a bank receives from the two types of interbank

relationships, and the systemic risk it generates and transmits to banking systems through the

same relationships. The simulation method we developed can be employed empirically if data

about both interbank payments and bank portfolio returns are available from financial authorities.

In the United States, such data are available from the Fedwire system and the FDIC call reports.

Simulations of systemic risk scenarios with such real-world data may help decision makers from

the relevant financial authorities to better 1) identify systemic risks in banking systems, and 2)

assess the impacts of different policies and strategies that are used to stabilize banking systems

during crisis times. Overall, our framework provides a BI-based tool for stakeholders to

effectively model, measure, and manage systemic risk in banking systems.

We acknowledge that the LASER algorithm provides relative ranking of banks in terms of the

two calculated systemic risk scores. The exact level of the systemic risk associated with a bank

either measured by failure probability or possible loss is not provided. For financial regulators,

such information is often needed to more accurately assess the status of individual banks and the

financial stability of banking systems. Therefore, our future research will focus on developing a

functional component for the LASER algorithm which estimates each bank’s probability of

contagious failure given a systemic risk scenario. We are working on improving the algorithm through further fine-tuning and large-scale comparative experiments with empirical bank datasets from other economies such as the European Union and Japan. We also intend to explore more BI

techniques to fully exploit the information associated with the two sources of systemic risk for

better ranking performance.


References

Abbasi, A., and Chen, H. 2008. "Cybergate: A Design Framework and System for Text Analysis
        of Computer-Mediated Communication," MIS Quarterly (32:4), pp 811-837.
Acharya, V.V. 2009. "A Theory of Systemic Risk and Design of Prudential Bank Regulation,"
        Journal of Financial Stability (5:3), pp 224-255.
Aghion, P., Bolton, P., and Dewatripont, M. 2000. "Contagious Bank Failures in a Free Banking
        System," European Economic Review (44:4-6), pp 713-718.
Altman, E.I. 1968. "Financial Ratios, Discriminant Analysis and the Prediction of Corporate
        Bankruptcy," The Journal of Finance (23:4), pp 589-609.
Angelini, P., Maresca, G., and Russo, D. 1996. "Systemic Risk in the Netting System," Journal of
        Banking & Finance (20:5), pp 853-868.
BIS. 1994. "64th Annual Report," Basel, Switzerland.
Bonanno, G., Caldarelli, G., Lillo, F., Micciché, S., Vandewalle, N., and Mantegna, R.N. 2004.
        "Networks of Equities in Financial Markets," The European Physical Journal B -
        Condensed Matter and Complex Systems (38:2), pp 363-371.
Breese, J., Heckerman, D., and Kadie, C. 1998. "Empirical Analysis of Predictive Algorithms for
        Collaborative Filtering," Proceedings of the Fourteenth Conference on Uncertainty in
        Artificial Intelligence, pp. 43-52.
Brin, S., and Page, L. 1998. "The Anatomy of a Large-Scale Hypertextual Web Search Engine,"
        Computer Networks and ISDN Systems (30:1-7), pp 107-117.
Bullard, J., Neely, C.J., and Wheelock, D.C. 2009. "Systemic Risk and the Financial Crisis: A
        Primer," Federal Reserve Bank of St Louis Review (91:5), Sep-Oct, pp 403-417.
Carte, T.A., Schwarzkopf, A.B., Shaft, T.M., and Zmud, R.W. 2005. "Advanced Business
        Intelligence at Cardinal Health," MIS Quarterly Executive (4:4), pp 413-424.
Degryse, H., and Nguyen, G. 2004. "Interbank Exposures: An Empirical Examination of Systemic
        Risk in the Belgian Banking System," National Bank of Belgium.
van Deventer, D.R., Imai, K., and Mesler, M. 2004. Advanced Financial Risk Management: Tools
        and Techniques for Integrated Credit Risk and Interest Rate Risk Management. John
        Wiley & Sons.
Eisenberg, L., and Noe, T.H. 2001. "Systemic Risk in Financial Systems," Management Science
        (47:2), Feb, pp 236-249.
Elsinger, H., Lehar, A., and Summer, M. 2006. "Risk Assessment for Banking Systems,"
        Management Science (52:9), Sep, pp 1301-1314.
Federal Deposit Insurance Corporation (FDIC). 2010.
Federal Reserve System. 2001. "Policy Statement on Payments System Risk," Washington, D.C.,
        pp. 1-13.
Freixas, X., Parigi, B.M., and Rochet, J.-C. 2000. "Systemic Risk, Interbank Relations, and
        Liquidity Provision by the Central Bank," Journal of Money, Credit and Banking (32:3),
        pp 611-638.
Garfield, E. 1972. "Citation Analysis as a Tool in Journal Evaluation: Journals Can Be Ranked by
        Frequency and Impact of Citations for Science Policy Studies," Science (178:4060),
        November 3, 1972, pp 471-479.

Huang, Z., Chen, H., Hsu, C.-J., Chen, W.-H., and Wu, S. 2004. "Credit Rating Analysis with
       Support Vector Machines and Neural Networks: A Market Comparative Study," Decision
       Support Systems (37:4), pp 543-558.
Huang, Z., Zeng, D.D., and Chen, H. 2007. "Analyzing Consumer-Product Graphs: Empirical
       Findings and Applications in Recommender Systems," Management Science (53:7), July
       1, 2007, pp 1146-1164.
Hubbell, C.H. 1965. "An Input-Output Approach to Clique Identification," Sociometry (28:4), pp 377-399.
Kaufman, G.G. (ed.) 1995. Comment on Systemic Risk. Greenwich, Conn: JAI.
Kaufman, G.G., and Scott, K.E. 2003. "What Is Systemic Risk, and Do Bank Regulators Retard
        or Contribute to It?," Independent Review (VII:3), pp 371-391.
Kleinberg, J. 1999a. "Hubs, Authorities, and Communities," ACM Comput. Surv. (31:4es), p 5.
Kleinberg, J.M. 1999b. "Authoritative Sources in a Hyperlinked Environment," Journal of ACM
       (46:5), pp 604-632.
Lee, K., Booth, D., and Alam, P. 2005. "A Comparison of Supervised and Unsupervised Neural
       Networks in Predicting Bankruptcy of Korean Firms," Expert Systems with Applications
       (29:1), pp 1-16.
Markowitz, H. 1952. "Portfolio Selection," The Journal of Finance (7:1), pp 77-91.
Marshall, B., McDonald, D., Chen, H., and Chung, W. 2004. "Ebizport: Collecting and Analyzing
       Business Intelligence Information," Journal of the American Society for Information
       Science and Technology (55:10), pp 873-891.
May, R.M., Levin, S.A., and Sugihara, G. 2008. "Complex Systems: Ecology for Bankers," Nature
       (451:7181), pp 893-895.
Min, S.-H., Lee, J., and Han, I. 2006. "Hybrid Genetic Algorithms and Support Vector Machines
       for Bankruptcy Prediction," Expert Systems with Applications (31:3), pp 652-660.
Odom, M.D., and Sharda, R. 1990. "A Neural Network Model for Bankruptcy Prediction," IJCNN
        International Joint Conference on Neural Networks, pp. 163-168 vol. 2.
Pendery, D. 2009. "Three Top Economists Agree 2009 Worst Financial Crisis since Great
       Depression." Reuters.
Rochet, J.-C., and Tirole, J. 1996. "Interbank Lending and Systemic Risk," Journal of Money,
       Credit and Banking (28:4), pp 733-762.
Sheldon, G., and Maurer, M. 1998. "Interbank Lending and Systemic Risk: An Empirical Analysis
       for Switzerland," Swiss Journal of Economics and Statistics (SJES) (134:IV), pp 685-704.
Shin, K.-S., Lee, T.S., and Kim, H.-j. 2005. "An Application of Support Vector Machines in
       Bankruptcy Prediction Model," Expert Systems with Applications (28:1), pp 127-135.
Sinkey, J.F. 1975. "A Multivariate Statistical Analysis of the Characteristics of Problem Banks,"
       The Journal of Finance (30:1), pp 21-36.
Soramäki, K., Bech, M.L., Arnold, J., Glass, R.J., and Beyeler, W.E. 2007. "The Topology of
       Interbank Payment Flows," Physica A: Statistical Mechanics and its Applications (379:1),
       pp 317-333.
Tam, K.Y. 1991. "Neural Network Models and the Prediction of Bank Bankruptcy," Omega
       (19:5), pp 429-445.
Tam, K.Y., and Kiang, M. 1990. "Predicting Bank Failures: A Neural Network Approach,"
       Applied Artificial Intelligence (4:4), pp 265-282.

Tsukuda, J., and Baba, S.-i. 1994. "Predicting Japanese Corporate Bankruptcy in Terms of
        Financial Data Using Neural Network," Computers & Industrial Engineering (27:1-4).
Upper, C., and Worms, A. 2004. "Estimating Bilateral Exposures in the German Interbank Market:
       Is There a Danger of Contagion?," European Economic Review (48:4), pp 827-849.
Wang, Y., Wang, S., and Lai, K.K. 2005. "A New Fuzzy Support Vector Machine to Evaluate
       Credit Risk," IEEE Transactions on Fuzzy Systems (13:6), pp 820-831.
Wells, S. 2002. "UK Interbank Exposures: Systemic Risk Implications," Bank of England,
       London, UK.
Wixom, B.H., Watson, H.J., Reynolds, A.M., and Hoffer, J.A. 2008. "Continental Airlines
       Continues to Soar with Business Intelligence," Information Systems Management (25:2),
       pp 102 - 112.
Wu, C.-H., Tzeng, G.-H., Goo, Y.-J., and Fang, W.-C. 2007. "A Real-Valued Genetic Algorithm
       to Optimize the Parameters of Support Vector Machine for Predicting Bankruptcy," Expert
       Systems with Applications (32:2), pp 397-408.

