# Report of the Correlation Working Party

Glenn Meyers, Insurance Services Office, Inc.
April 27, 2004
## Charge of the Working Party
• ERM requires the quantification of the total
risk of an enterprise. One must consider
correlation to properly combine the
individual risk components.
• Considerations
– Theoretical
– Empirical
– Computational
## Theoretical Considerations
• Conclusion – No overriding “theory of
correlation.”
• We will provide examples of multivariate
models that exhibit correlation.
• Experts prefer the term “dependencies”
rather than correlation.
– I find myself reverting to the common usage so
nonexperts will know what I am talking about.
## Empirical Considerations
• Historical problem – lack of data
– One observation per year
• If correlation matters, we should be able to
find data that exhibits that correlation.
• One approach
– Create a model that depends on a “driver” for
correlation.
– Use data from several insurers to parameterize
the driver.
– Example to follow
## Computational Considerations
• ERM demands the aggregation of
segments.
• Simulation
– The Iman-Conover method and copulas
• Fourier transforms
– Faster than simulations, but less flexible and
require more setup time.
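The Iman-Conover reordering mentioned above can be sketched in a few lines. This is a simplified version (normal scores, no correction for the reference sample's own correlation), and the lognormal/gamma marginals and the 0.6 target are arbitrary choices for illustration:

```python
import numpy as np

def iman_conover(samples, target_corr, rng):
    """Reorder each column of `samples` to follow the rank order of a
    correlated normal reference sample, inducing (approximately) the
    target rank correlation while leaving every marginal unchanged."""
    n, d = samples.shape
    chol = np.linalg.cholesky(target_corr)
    scores = rng.standard_normal((n, d)) @ chol.T   # reference with target corr
    out = np.empty_like(samples)
    for j in range(d):
        ranks = scores[:, j].argsort().argsort()    # rank of each reference value
        out[:, j] = np.sort(samples[:, j])[ranks]   # same marginal, new order
    return out

rng = np.random.default_rng(0)
# Two independent loss samples with arbitrary marginals.
x = np.column_stack([rng.lognormal(7.0, 1.0, 50_000),
                     rng.gamma(2.0, 500.0, 50_000)])
target = np.array([[1.0, 0.6],
                   [0.6, 1.0]])
y = iman_conover(x, target, rng)
print(np.corrcoef(y, rowvar=False)[0, 1])   # strong positive correlation
```

The full algorithm (see the Mildenhall chapter) also adjusts for the sampling correlation of the reference scores; this sketch omits that refinement.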
## Chapters Written by Individual Authors
• Common Shock Models – Glenn Meyers
• The Iman-Conover Method – Stephen Mildenhall
• Correlation over Time – Hans Waszink
• Aggregating Bivariate Distributions – David Homer
• Dependency in Market Risk – Younju Lee
• Modeling Time Series with Non-Constant Correlations – Dan Heyer
• Correlations in a General Stochastic Setting – Lijia Guo
• 4 CAS members and 3 non-members
## From the Meyers Chapter: The Negative Binomial Distribution

• Select α at random from a gamma distribution with mean 1 and variance c.
• Select the claim count K at random from a Poisson distribution with mean αλ.
• K has a negative binomial distribution with:

E[K] = λ and Var[K] = λ + cλ²
## Multiple Line Parameter Uncertainty
• Select β from a distribution with E[β] = 1 and Var[β] = b.
• For each line h, multiply each loss by β.
• Can calculate ρ if desired.

Var  X   E b Var  X | b   Varb E  X | b 
                                
Cov  X ,Y   E b Cov  X | b ,Y | b   Cov b E  X | b , E Y | b 
                                                    
Cov  X ,Y 
r
Std  X   Std Y 
## Multiple Line Parameter Uncertainty

A simple, but nontrivial example:

β₁ = 1 − √(3b),  β₂ = 1,  β₃ = 1 + √(3b)

Pr[β = β₁] = Pr[β = β₃] = 1/6 and Pr[β = β₂] = 2/3

E[β] = 1 and Var[β] = b
## Low Volatility: b = 0.01, ρ = 0.50

[Chart 3.3: scatter plot of Y2 = βX2 against Y1 = βX1; both axes 0–4,000]
## Low Volatility: b = 0.03, ρ = 0.75

[Scatter plot of Y2 = βX2 against Y1 = βX1; both axes 0–4,000]
## High Volatility: b = 0.01, ρ = 0.25

[Scatter plot of Y2 = βX2 against Y1 = βX1; both axes 0–4,000]
## High Volatility: b = 0.03, ρ = 0.45

[Scatter plot of Y2 = βX2 against Y1 = βX1; both axes 0–4,000]
• There is no direct connection between ρ and b.
• For the same value of b:
– Small insurers have large process risk and hence smaller correlations.
– Large insurers have small process risk and hence larger correlations.
• Pay attention to the process that generates correlations.
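This size effect can be illustrated by simulation. The gamma shock, Poisson counts, and exponential severities below are assumptions for illustration, not the chapter's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(2)

def two_line_corr(lam, b, sims=100_000):
    """Correlation between two lines whose losses share a multiplicative
    shock β (E[β] = 1, Var[β] = b); λ is the expected claim count per line."""
    beta = rng.gamma(1 / b, b, sims)
    n1, n2 = rng.poisson(lam, sims), rng.poisson(lam, sims)
    # Compound-Poisson totals: a sum of n exponential(1,000) claims is
    # gamma(shape=n, scale=1,000); zero claims means zero loss.
    s1 = rng.gamma(np.maximum(n1, 1), 1000.0) * (n1 > 0)
    s2 = rng.gamma(np.maximum(n2, 1), 1000.0) * (n2 > 0)
    return np.corrcoef(beta * s1, beta * s2)[0, 1]

print(two_line_corr(lam=10, b=0.03))     # small insurer: process risk dominates
print(two_line_corr(lam=1000, b=0.03))   # large insurer: same b, far higher ρ
```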
## Estimating b From Data
Cov  X ,Y 
 Eb Cov  X =0Y | b   Cov b E  X | b , E Y | b 
        | b,                                     

 E  X   E Y   Cov  b , b 
 b  E  X   E Y  Thus:
x  E  X  y  E Y 
b            
EX        E Y 
Reliable estimates of b are possible
with lots of data.

For example, 50 insurers with 10 years of data gives (50 choose 2) × 10 = 12,250 observations.

• Real estimates provided by Meyers, Klinker and
Lalonde
http://www.casact.org/pubs/forum/03sforum/03sf015.pdf
## Sample Calculations: Common Shocks to Frequency and Severity
• Multiply expected claim count by a random
shock.
– Negative binomial count distributions
– Var[Shock] called covariance generator
• Multiply scale of claim severity by a random
shock.
– Lognormal severity distributions
– Var[Shock] called the mixing parameter
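A simulation sketch of both shocks acting at once. The gamma shocks and the lognormal severity parameters are assumptions for illustration; only the structure (shared frequency shock, shared severity-scale shock) follows the slide:

```python
import numpy as np

rng = np.random.default_rng(4)
sims, lam = 20_000, 100.0
c = 0.04    # covariance generator: variance of the frequency shock
g = 0.02    # mixing parameter: variance of the severity-scale shock

# Both lines see the same shocks; gamma shocks with mean 1 are assumed.
freq_shock = rng.gamma(1 / c, c, sims)
sev_shock = rng.gamma(1 / g, g, sims)

n1 = rng.poisson(freq_shock * lam)   # mixed Poisson => negative binomial counts
n2 = rng.poisson(freq_shock * lam)
mu, sigma = 7.0, 1.0                 # lognormal severity
# Multiplying each year's total by sev_shock is the same as shocking the
# lognormal scale e^mu for every claim in that year.
s1 = np.array([rng.lognormal(mu, sigma, k).sum() for k in n1]) * sev_shock
s2 = np.array([rng.lognormal(mu, sigma, k).sum() for k in n2]) * sev_shock

print(np.corrcoef(s1, s2)[0, 1])     # the shared shocks correlate the two lines
```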
## Parting Message
• Build models of underlying processes.
– Common shock model illustrated here
– Other chapters build other models
• Quantify parameters of models
– Use data! (If data will never exist, why worry?)
– Express parameters in a form that has intuitive
meaning.
• Correlation is a consequence of the models.
