# Estimating Parameters for Incomplete Data
William White

Insurance Agent
• Auto Insurance Agency
• Task
– Claims in a week:
294 340 384 457 680 855 974 1193 1340 1884 2558 9743
– Boss: “Is this a good representation of the population?”

Insurance Agent
• Things to think about:
– How should it look?
– The distribution should be skewed right.

[Figure: histogram of the claims — x-axis: $ per Claim, y-axis: Frequency of Claims]
294 340 384 457 680 855 974 1193 1340 1884 2558 9743

Insurance Agent
• Exponential Distribution: f(x | λ) = λe^(−λx)
– If λ is 1, the density starts at 1 and is essentially gone by x = 10.
– If λ is 0.0001, the same shape is stretched out over x from 0 to about 80,000.

[Figure: two exponential density curves — left: λ = 1, x from 0 to 10; right: λ = 0.0001, x from 0 to 80,000]
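The two cases above differ only in scale, which is easy to check numerically. A small sketch (the helper name `exp_pdf` is ours, not from the slides):

```python
import math

def exp_pdf(x, lam):
    """Exponential density f(x | lambda) = lambda * e^(-lambda * x)."""
    return lam * math.exp(-lam * x)

# With lambda = 1, the density starts at 1 and is essentially gone by x = 10.
peak_small = exp_pdf(0.0, 1.0)        # 1.0
tail_small = exp_pdf(10.0, 1.0)       # about 4.5e-5

# With lambda = 0.0001, the same shape stretches out to tens of thousands.
peak_large = exp_pdf(0.0, 0.0001)     # 0.0001
tail_large = exp_pdf(80000.0, 0.0001) # about 3.4e-8
```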

Insurance Agent
• How can we estimate the value of λ?
– Find an estimator.
• What is an estimator?
– A rule that uses sample data to approximate the actual population parameter.

Estimator
• What do we need to look for?
– Consistent
  • The estimator value converges to the population value as the sample size grows.

[Figure: estimate vs. sample size — the error around the true parameter shrinks as the sample size increases]
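Consistency can be illustrated with a small simulation. This sketch (the parameter values and function names are ours, chosen for illustration) estimates the mean of an exponential population at two sample sizes:

```python
import random

random.seed(0)
TRUE_MEAN = 1000.0  # hypothetical population mean, i.e. lambda = 0.001

def avg_abs_error(n, trials=200):
    """Average |sample mean - true mean| over repeated samples of size n."""
    total = 0.0
    for _ in range(trials):
        draws = [random.expovariate(1.0 / TRUE_MEAN) for _ in range(n)]
        total += abs(sum(draws) / n - TRUE_MEAN)
    return total / trials

# A consistent estimator's error shrinks as the sample size grows.
small_n_error = avg_abs_error(10)
large_n_error = avg_abs_error(1000)
```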

Estimator
• What do we need to look for?
– Efficient
  • For a fixed sample size, an efficient estimator has less variability.

[Figure: sampling distributions of the sample mean and the sample median — the sample mean’s distribution is narrower]
• Sample means have less variability than sample medians.
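The mean-versus-median comparison can also be checked by simulation. A minimal sketch, assuming normally distributed data (for which the sample mean is known to be the more efficient of the two):

```python
import random
import statistics

random.seed(1)

def estimator_spread(estimator, n=25, trials=2000):
    """Standard deviation of an estimator across repeated samples of size n."""
    values = []
    for _ in range(trials):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        values.append(estimator(sample))
    return statistics.stdev(values)

mean_spread = estimator_spread(statistics.mean)
median_spread = estimator_spread(statistics.median)
# For normal data the median's spread is roughly 25% larger than the mean's.
```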

Estimator
• What do we need to look for?
– Unbiased
  • As we take more samples, the expected value of the estimator equals the population parameter.

[Figure: sampling distribution centered on the true parameter — the average of many estimates matches the true parameter]

Maximum Likelihood Estimator
Sir Ronald A. Fisher (1890-1962)
– Maximum Likelihood Estimator (MLE)
– Proposed to solve the problem of estimation
– First written about in 1912
– Fully developed by 1922

Maximum Likelihood Estimator
• Characteristics of the MLE
– Very versatile
– Applies to most types of data
– Simple
  • Can be very efficient with relatively little calculation

Maximum Likelihood Estimator
• Uses the likelihood function
– Finds the probability of obtaining the sample results that were actually obtained
– For independent random variables, it is the product of the probability density functions (pdfs)

Maximum Likelihood Estimator
[Figure: likelihood (y-axis: probability) as a function of λ — the MLE λ̂ sits at the peak]

Maximum Likelihood Estimator
• Likelihood function
– Sample data (claims):
294 340 384 457 680 855 974 1193 1340 1884 2558 9743
– Which parameter value λ is most likely to have produced our sample?

L(λ | 294, 340, 384, 457, 680, 855, 974, 1193, 1340, 1884, 2558, 9743)

– If we knew λ:

P(X₁ = 294, X₂ = 340, X₃ = 384, X₄ = 457, X₅ = 680, X₆ = 855, X₇ = 974, X₈ = 1193, X₉ = 1340, X₁₀ = 1884, X₁₁ = 2558, X₁₂ = 9743)

Note: since X is continuous, P(X₁ = x₁, ..., Xₙ = xₙ) denotes the probability density, not a probability.

Maximum Likelihood Estimator
• Likelihood function
– Probability density function
  • Our samples are independent and identically distributed
  • P(X = x) = f(x | λ)

P(X₁ = x₁, X₂ = x₂, ..., Xₙ = xₙ) = f_X(x₁, x₂, ..., xₙ | λ)

– Restated: if we had a value for the parameter, what is the likelihood we would get this sample set?
– Because the observations are independent of each other, the joint density factors:

f_X(294 | λ) f_X(340 | λ) f_X(384 | λ) f_X(457 | λ) f_X(680 | λ) f_X(855 | λ) f_X(974 | λ) f_X(1193 | λ) f_X(1340 | λ) f_X(1884 | λ) f_X(2558 | λ) f_X(9743 | λ)

Maximum Likelihood Estimator
• Likelihood function

L(λ | x₁, x₂, ..., xₙ) = P(X₁ = x₁, X₂ = x₂, ..., Xₙ = xₙ) = f_X(x₁, x₂, ..., xₙ | λ) = f_X(x₁ | λ) f_X(x₂ | λ) ... f_X(xₙ | λ)

Maximum Likelihood Estimator
• What λ makes our product maximized?

[Figure: likelihood curve as a function of λ, peaking at the MLE λ̂]

Maximum Likelihood Estimator
• Log-likelihood function
– Taking the product can be cumbersome
– The logarithm is often easier to work with, due to its properties:
  • log(ab) = log(a) + log(b)
  • log(aᵇ) = b log(a)
– Do logarithms change our evaluation?
  • No. Because the logarithm is increasing, we are still looking for the same maximizing value.

log(L(λ | x₁, x₂, ..., xₙ)) = log(f_X(x₁ | λ) f_X(x₂ | λ) ... f_X(xₙ | λ)) = Σᵢ₌₁ⁿ log(f_X(xᵢ | λ))

Maximum Likelihood Estimator
• Example using the Exponential Distribution

f(x | λ) = λe^(−λx)

L(λ | x₁, x₂, ..., xₙ) = ∏ᵢ₌₁ⁿ λe^(−λxᵢ)

log(L(λ | x₁, x₂, ..., xₙ)) = Σᵢ₌₁ⁿ log(λe^(−λxᵢ)) = n log(λ) − λ Σᵢ₌₁ⁿ xᵢ

Maximum Likelihood Estimator
log L( X 1 , X 2 ,..., X n )   n log(  )    xi
n i 1


ˆ 



Maximum Likelihood Estimator
• With calculus we can find the MLE by taking the derivative, setting it equal to 0, and solving for the parameter. (We can use the 2nd derivative to check that it is a maximum.)

d log L(λ)/dλ = d/dλ [n log(λ) − λ Σᵢ₌₁ⁿ xᵢ] = n/λ − Σᵢ₌₁ⁿ xᵢ

0 = n/λ̂ − Σᵢ₌₁ⁿ xᵢ  ⟹  λ̂ = n / Σᵢ₌₁ⁿ xᵢ = 1/x̄

Because this is our estimate for the population parameter λ, and the exponential mean is 1/λ, we are also concluding that the sample mean is an estimate for the population mean.
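The closed form λ̂ = 1/x̄ can be double-checked by evaluating the log-likelihood on a grid around it. This check is a sketch of ours, not part of the original derivation:

```python
import math

CLAIMS = [294, 340, 384, 457, 680, 855, 974, 1193, 1340, 1884, 2558, 9743]
n = len(CLAIMS)
total = sum(CLAIMS)

def loglik(lam):
    """Exponential log-likelihood: n*log(lambda) - lambda * sum(x_i)."""
    return n * math.log(lam) - lam * total

# Closed-form MLE from the derivation: lambda-hat = n / sum(x_i) = 1 / mean.
lam_hat = n / total

# No grid point from 0.5x to 1.5x of lam_hat scores better.
grid = [lam_hat * (0.5 + 0.001 * k) for k in range(1001)]
best = max(grid, key=loglik)
```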

What Do We Think?
• Let’s use our claims with the Exponential Distribution; sample mean = 1725.2

f(x | λ̂) = (1/1725.2) e^(−x/1725.2)

[Figure: fitted exponential density over x from 0 to 10,000; the density starts near 0.00058 at x = 0 and decays toward 0]

What Do We Think?
• Why are there no claims below 294?
294 340 384 457 680 855 974 1193 1340 1884 2558 9743

[Figure: fitted density over the claims — x-axis: $ per Claim, y-axis: Probability of Claim]
Deductible
• We forgot there is a $250 deductible!
– No one is going to file a claim if the damage is worth less than $250.

• Incomplete data: truncated
10 12 16 17 22 25 27 33 35 39 45 47 53 57 65 71 81 89 99 103 115 122 139 140 156 185 194 225 243 294 340 384 457 680 855 974 1193 1340 1884 2558 9743

Incomplete Data
• The MLE also works with incomplete data.
• Incomplete data occurs when specific observations are either lost or not recorded exactly.
• Two types:
– Truncated data
  • Observations beyond a set value are excluded from the data entirely.
– Censored data
  • The number of observations is known, but the exact values of some observations are unknown.

Incomplete Data
• Truncated data
– Vehicle insurance with a deductible of $250
– Claims are filed only when the loss is greater than $250

Y = { undefined, X ≤ 250;  X, X > 250 }

10 12 16 17 22 25 27 33 35 39 45 47 53 57 65 71 81 89 99 103 115 122 139 140 156 185 194 225 243 294 340 384 457 680 855 974 1193 1340 1884 2558 9743
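The truncation in this slide is just a filter on the data. A small sketch (variable names are ours):

```python
# Full population of losses, including those below the deductible.
LOSSES = [10, 12, 16, 17, 22, 25, 27, 33, 35, 39, 45, 47, 53, 57, 65, 71,
          81, 89, 99, 103, 115, 122, 139, 140, 156, 185, 194, 225, 243,
          294, 340, 384, 457, 680, 855, 974, 1193, 1340, 1884, 2558, 9743]
DEDUCTIBLE = 250

# Truncation: losses at or below the deductible never become claims,
# so they simply vanish from the observed data.
observed_claims = [x for x in LOSSES if x > DEDUCTIBLE]
```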

Incomplete Data
• This is an example of data truncated from below (the left), since the data below the set value, $250, is excluded.
• Truncated from above (the right) is when data above a set value is excluded.

[Figure: claim density from $250 to $5,000 — x-axis: $ per Claim, y-axis: Probability of Claim; below $250 the probability is undefined]

Incomplete Data
• Censored data
– Policy limit
  • All values above $1,000 are set equal to $1,000.

Y = { X, X < 1,000;  1,000, X ≥ 1,000 }

Original: 10 12 16 17 22 25 27 33 35 39 45 47 53 57 65 71 81 89 99 103 115 122 139 140 156 185 194 225 243 294 340 384 457 680 855 974 1193 1340 1884 2558 9743
Censored: 10 12 16 17 22 25 27 33 35 39 45 47 53 57 65 71 81 89 99 103 115 122 139 140 156 185 194 225 243 294 340 384 457 680 855 974 1000 1000 1000 1000 1000
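Censoring, by contrast, keeps every observation but caps its value. A sketch of the policy-limit transformation (variable names are ours):

```python
# Full set of losses (no deductible in this example, only a policy limit).
LOSSES = [10, 12, 16, 17, 22, 25, 27, 33, 35, 39, 45, 47, 53, 57, 65, 71,
          81, 89, 99, 103, 115, 122, 139, 140, 156, 185, 194, 225, 243,
          294, 340, 384, 457, 680, 855, 974, 1193, 1340, 1884, 2558, 9743]
LIMIT = 1000

# Censoring: every loss is still recorded, but anything above the limit
# is only known to be "at least 1000" -- its exact size is lost.
censored = [min(x, LIMIT) for x in LOSSES]
num_censored = sum(1 for x in LOSSES if x >= LIMIT)
```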

Incomplete Data
• This example would be considered censored from above (the right), since the data above the set value, $1,000, is capped.
• Censored from below (the left) would be the case where data below a set value is capped.

[Figure: payment density — x-axis: $ per Claim, y-axis: Probability of Claim; a point mass at $1,000 collects the probability of all claims above the limit]

Incomplete Data
• Estimate with both the deductible and the policy limit:
294 340 384 457 680 855 974 1000 1000 1000 1000 1000

• What are we estimating?
– We want to estimate λ for the entire population of losses, using only the truncated and censored data:
10 12 16 17 22 25 27 33 35 39 45 47 53 57 65 71 81 89 99 103 115 122 139 140 156 185 194 225 243 294 340 384 457 680 855 974 1193 1340 1884 2558 9743
– We want our estimate to be unbiased.

Incomplete Data
• Estimating with incomplete data
– Group X: modified values, claim amounts
  Group X: 294 340 384 457 680 855 974 1000 1000 1000 1000 1000
– Group Y: modified values, amounts paid

Y = { undefined, X ≤ 250;  X − 250, 250 < X ≤ 1,000;  750, X > 1,000 }

Group Y: 44 90 134 207 430 605 724 750 750 750 750 750

Incomplete Data
P(Y ≤ y) = P(X ≤ y + 250 | X > 250) = P(250 < X ≤ y + 250) / P(X > 250) = (F_X(y + 250) − F_X(250)) / (1 − F_X(250))

[Figure: number line marking 250 and 250 + y — conditioning on the claim exceeding the $250 deductible]

Group Y: 44 90 134 207 430 605 724 750 750 750 750 750

Incomplete Data
• Estimating with incomplete data

F_Y(y) = { 0, y < 0;  (F_X(y + 250) − F_X(250)) / (1 − F_X(250)), 0 ≤ y < 750;  1, y ≥ 750 }

[Figure: cdf of Y rising from 0 at y = 0 to a jump at y = 750, where it reaches 1]

Group Y: 44 90 134 207 430 605 724 750 750 750 750 750

Incomplete Data
• Solving with incomplete data

f_Y(y) = { f_X(y + 250) / (1 − F_X(250)), 0 ≤ y < 750;  (1 − F_X(1,000)) / (1 − F_X(250)), y = 750;  0, otherwise }

[Figure: density of Y with a point mass at y = 750]

Group Y: 44 90 134 207 430 605 724 750 750 750 750 750

Incomplete Data
Group Y: 44 90 134 207 430 605 724 750 750 750 750 750

L(λ) = f_Y(44 | λ) f_Y(90 | λ) f_Y(134 | λ) f_Y(207 | λ) f_Y(430 | λ) f_Y(605 | λ) f_Y(724 | λ) f_Y(750 | λ)⁵

Substituting the incomplete-data density:

L(λ) = [f_X(294) / (1 − F_X(250))] [f_X(340) / (1 − F_X(250))] [f_X(384) / (1 − F_X(250))] [f_X(457) / (1 − F_X(250))] [f_X(680) / (1 − F_X(250))] [f_X(855) / (1 − F_X(250))] [f_X(974) / (1 − F_X(250))] [(1 − F_X(1,000)) / (1 − F_X(250))]⁵
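For the exponential model this likelihood simplifies nicely: by the memoryless property, each uncensored payment has density λe^(−λy), and each censored payment contributes P(Y = 750) = e^(−750λ). Maximizing then gives the closed form λ̂ = (number uncensored) / (sum of all payments). A sketch of the computation (helper names are ours):

```python
import math

PAYMENTS = [44, 90, 134, 207, 430, 605, 724, 750, 750, 750, 750, 750]
CAP = 750  # policy limit of 1000 minus the 250 deductible

uncensored = [y for y in PAYMENTS if y < CAP]
num_censored = sum(1 for y in PAYMENTS if y == CAP)

def loglik(lam):
    """Incomplete-data log-likelihood, specialized to the exponential:
    each uncensored payment contributes log(lam) - lam*y, and each
    censored payment contributes log P(Y = 750) = -750 * lam."""
    ll = sum(math.log(lam) - lam * y for y in uncensored)
    ll += num_censored * (-lam * CAP)
    return ll

# Closed-form maximizer: lambda-hat = (# uncensored) / (sum of all payments).
lam_hat = len(uncensored) / sum(PAYMENTS)
estimated_mean = 1.0 / lam_hat  # 5984 / 7, about 854.86, matching the slides
```

The log-likelihood really does peak there: `loglik(lam_hat)` exceeds the values at λ̂ scaled 10% up or down.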

What’s Our Result?
Boss, “Is this a good representation of the population?”
Excel File

What do we need to tell the boss? The estimated mean is $854.86.
If we compare this to the complete data set’s mean, $565.05, we see that our estimate is too high. This may mean that a considerable number of accidents fall below the deductible.

What’s Our Result?
• The results show that the fitted distribution is a good representation of our received claims, but it is not a good representation of the full population of losses.

Incomplete Data
• Why should we use the MLE?
– “One of the major attractions of this estimator is that it is almost always available. That is, if you can write an expression for the desired probabilities, you can execute this method. If you cannot write and evaluate an expression for probabilities using your model, there is no point in postulating that model in the first place because you will not be able to use it to solve your problem.” (Klugman, Panjer, and Willmot)

Thanks!
• Dr. Troy Riggs, Project Advisor
• Dr. Matt Lunsford, Seminar Instructor

References
Klugman, Stuart A., Harry H. Panjer, and Gordon E. Willmot. Loss Models: From Data to Decisions. New York: John Wiley and Sons, Inc., 1998.
Klugman, Stuart A., Harry H. Panjer, and Gordon E. Willmot. Loss Models: From Data to Decisions. 2nd ed. New York: John Wiley and Sons, Inc., 2004.
Myung, In Jae. "Tutorial on Maximum Likelihood Estimation." Journal of Mathematical Psychology 47 (2003): 93.
