	United States Patent 
	7,599,861



 Peterson
 

 
October 6, 2009




System and method for closed loop decisionmaking in an automated care system



Abstract

There is disclosed a system functional for operating and updating an
     interactive voice response self care system which utilizes various types
     of models. The system is capable of improving the performance of the self
     care system through updating the models used in that system in a closed
     loop manner. The system is also potentially configured to utilize input
     from a human agent, such as a customer service representative, to which a
     customer call is transferred by a self care system.


 
Inventors: Peterson; Dale R. (Cincinnati, OH)

Assignee: Convergys Customer Management Group, Inc. (Cincinnati, OH)

Appl. No.: 11/276,497

Filed: March 2, 2006





  
Current U.S. Class: 705/26; 379/265.02; 705/10; 705/304; 705/35; 705/9; 717/104; 717/174

Current International Class: G06Q 30/00 (20060101)

Field of Search: 705/1,9,10,26,35; 717/104,174,200

References Cited
U.S. Patent Documents
 
 
 
5206903
April 1993
Kohler et al.

5214715
May 1993
Carpenter et al.

5345380
September 1994
Babson, III et al.

5411947
May 1995
Hostetler et al.

5581664
December 1996
Allen et al.

5586218
December 1996
Allen

5615296
March 1997
Stanford et al.

5625748
April 1997
McDonough et al.

5652897
July 1997
Linebarger et al.

5678002
October 1997
Fawcett et al.

5701399
December 1997
Lee et al.

5748711
May 1998
Scherer

5757904
May 1998
Anderson

5802526
September 1998
Fawcett et al.

5802536
September 1998
Yoshii et al.

5825869
October 1998
Brooks et al.

5852814
December 1998
Allen

5867562
February 1999
Scherer

5872833
February 1999
Scherer

5895466
April 1999
Goldberg et al.

5944839
August 1999
Isenberg

5956683
September 1999
Jacobs et al.

5960399
September 1999
Barclay et al.

5963940
October 1999
Liddy et al.

5966429
October 1999
Scherer

5987415
November 1999
Breese et al.

5991394
November 1999
Dezonno et al.

6021403
February 2000
Horvitz et al.

6029099
February 2000
Brown

6038544
March 2000
Machin et al.

6044142
March 2000
Hammarstrom et al.

6044146
March 2000
Gisby et al.

6070142
May 2000
McDonough et al.

6094673
July 2000
Dilip et al.

6122632
September 2000
Botts et al.

6137870
October 2000
Scherer

6173266
January 2001
Marx et al.

6173279
January 2001
Levin et al.

6177932
January 2001
Galdes et al.

6182059
January 2001
Angotti et al.

6188751
February 2001
Scherer

6192110
February 2001
Abella et al.

6205207
March 2001
Scherer

6212502
April 2001
Ball et al.

6233547
May 2001
Denber

6233570
May 2001
Horvitz et al.

6243680
June 2001
Gupta et al.

6243684
June 2001
Stuart et al.

6249807
June 2001
Shaw et al.

6249809
June 2001
Bro

6253173
June 2001
Ma

6256620
July 2001
Jawahar et al.

6260035
July 2001
Horvitz et al.

6262730
July 2001
Horvitz et al.

6263066
July 2001
Shtivelman et al.

6263325
July 2001
Yoshida et al.

6275806
August 2001
Pertrushin

6278996
August 2001
Richardson et al.

6282527
August 2001
Gounares et al.

6282565
August 2001
Shaw et al.

6304864
October 2001
Liddy et al.

6307922
October 2001
Scherer

6330554
December 2001
Altschuler et al.

6337906
January 2002
Bugash et al.

6343116
January 2002
Quinton et al.

6356633
March 2002
Armstrong

6356869
March 2002
Chapados et al.

6366127
April 2002
Friedman et al.

6370526
April 2002
Agrawal et al.

6377944
April 2002
Busey et al.

6381640
April 2002
Beck et al.

6389124
May 2002
Schnarel et al.

6393428
May 2002
Miller et al.

6401061
June 2002
Zieman

6405185
June 2002
Pechanek et al.

6411692
June 2002
Scherer

6411926
June 2002
Chang et al.

6411947
June 2002
Rice et al.

6415290
July 2002
Botts et al.

6434230
August 2002
Gabriel et al.

6434550
August 2002
Warner et al.

6442519
August 2002
Kanevsky et al.

6449356
September 2002
Dezonno

6449588
September 2002
Bowman-Amuah

6449646
September 2002
Sikora et al.

6451187
September 2002
Suzuki et al.

6460037
October 2002
Weiss et al.

6480599
November 2002
Ainslie et al.

6493686
December 2002
Francone et al.

6498921
December 2002
Ho et al.

6519571
February 2003
Guheen et al.

6519580
February 2003
Johnson et al.

6519628
February 2003
Locascio

6523021
February 2003
Monberg et al.

6539419
March 2003
Beck et al.

6546087
April 2003
Shaffer et al.

6560590
May 2003
Shwe et al.

6563921
May 2003
Williams et al.

6567805
May 2003
Johnson et al.

6571225
May 2003
Oles et al.

6574599
June 2003
Lim et al.

6581048
June 2003
Werbos

6584180
June 2003
Nemoto

6584185
June 2003
Nixon

6587558
July 2003
Lo

6594684
July 2003
Hodjat et al.

6598039
July 2003
Livowsky

6604141
August 2003
Ventura et al.

6606479
August 2003
Cook et al.

6606598
August 2003
Holthouse et al.

6614885
September 2003
Polcyn

6615172
September 2003
Bennett et al.

6618725
September 2003
Fukuda et al.

6632249
October 2003
Pollock

6633846
October 2003
Bennett et al.

6643622
November 2003
Stuart

6650748
November 2003
Edwards et al.

6652283
November 2003
Van Schaack et al.

6658598
December 2003
Sullivan

6665395
December 2003
Busey et al.

6665640
December 2003
Bennett et al.

6665644
December 2003
Kanevsky et al.

6665655
December 2003
Warner et al.

6694314
February 2004
Sullivan et al.

6694482
February 2004
Arellano et al.

6701311
March 2004
Biebesheimer et al.

6704410
March 2004
McFarlane et al.

6707906
March 2004
Ben-Chanoch

6718313
April 2004
Lent et al.

6721416
April 2004
Farrell

6724887
April 2004
Eilbacher et al.

6725209
April 2004
Iliff

6732188
May 2004
Flockhart et al.

6735572
May 2004
Landesmann

6741698
May 2004
Jensen

6741699
May 2004
Flockhart et al.

6741959
May 2004
Kaiser

6741974
May 2004
Harrison et al.

6745172
June 2004
Mancisidor et al.

6754334
June 2004
Williams et al.

6760727
July 2004
Schroeder et al.

6766011
July 2004
Fromm

6766320
July 2004
Wang et al.

6771746
August 2004
Shambaugh et al.

6771765
August 2004
Crowther et al.

6772190
August 2004
Hodjat et al.

6775378
August 2004
Villena et al.

6778660
August 2004
Fromm

6778951
August 2004
Contractor

6795530
September 2004
Gilbert et al.

6798876
September 2004
Bala

6801763
October 2004
Elsey et al.

6807274
October 2004
Joseph et al.

6807544
October 2004
Morimoto et al.

6813606
November 2004
Ueyama et al.

6819748
November 2004
Weiss et al.

6819759
November 2004
Khuc et al.

6823054
November 2004
Suhm et al.

6829348
December 2004
Schroeder et al.

6829585
December 2004
Grewal et al.

6829603
December 2004
Immink et al.

6832263
December 2004
Polizzi et al.

6836540
December 2004
Falcone et al.

6839671
January 2005
Attwater et al.

6842737
January 2005
Stiles et al.

6842748
January 2005
Warner et al.

6842877
January 2005
Robarts et al.

6845154
January 2005
Cave et al.

6845155
January 2005
Elsey

6845374
January 2005
Oliver et al.

6847715
January 2005
Swartz

6850612
February 2005
Johnson et al.

6850923
February 2005
Nakisa et al.

6850949
February 2005
Warner et al.

6856680
February 2005
Mengshoel et al.

6859529
February 2005
Duncan et al.

6871174
March 2005
Dolan et al.

6871213
March 2005
Graham et al.

6873990
March 2005
Oblinger

6879685
April 2005
Peterson et al.

6879967
April 2005
Stork

6882723
April 2005
Peterson et al.

6885734
April 2005
Eberle et al.

6895083
May 2005
Bers et al.

6895558
May 2005
Loveland et al.

6898277
May 2005
Meteer et al.

6901397
May 2005
Moldenhauer et al.

6904143
June 2005
Peterson et al.

6907119
June 2005
Case et al.

6910003
June 2005
Arnold et al.

6910072
June 2005
Macleod Beck et al.

6915246
July 2005
Gusler et al.

6915270
July 2005
Young et al.

6922466
July 2005
Peterson et al.

6922689
July 2005
Shtivelman

6924828
August 2005
Hirsch

6925452
August 2005
Hellerstein et al.

6928156
August 2005
Book et al.

6931119
August 2005
Michelson et al.

6931434
August 2005
Donoho et al.

6934381
August 2005
Klein et al.

6934684
August 2005
Alpdemir et al.

6937705
August 2005
Godfrey et al.

6938000
August 2005
Joseph et al.

6941266
September 2005
Gorin et al.

6941304
September 2005
Gainey et al.

6944592
September 2005
Pickering

6950505
September 2005
Longman et al.

6950827
September 2005
Jung

6952470
October 2005
Tioe et al.

6956941
October 2005
Duncan et al.

6959080
October 2005
Dezonno et al.

6959081
October 2005
Brown et al.

6961720
November 2005
Nelken

6965865
November 2005
Pletz et al.

6967316
November 2005
Lee

6970554
November 2005
Peterson et al.

6970821
November 2005
Shambaugh et al.

6975708
December 2005
Scherer

6976019
December 2005
Davallou

6981020
December 2005
Miloslavsky et al.

6983239
January 2006
Epstein

6985862
January 2006
Strom et al.

6987846
January 2006
James

6988072
January 2006
Horvitz

6988132
January 2006
Horvitz

6990179
January 2006
Merrow et al.

6993475
January 2006
McConnell et al.

6996531
February 2006
Korall et al.

6999990
February 2006
Sullivan et al.

7003079
February 2006
McCarthy et al.

7003459
February 2006
Gorin et al.

7006607
February 2006
Garcia

7007067
February 2006
Azvine et al.

7012996
March 2006
Polcyn

7013290
March 2006
Ananian

7016056
March 2006
Skaanning

7016485
March 2006
Shtivelman

7016842
March 2006
Mills

7019749
March 2006
Guo et al.

7027586
April 2006
Bushey et al.

7027643
April 2006
Comaniciu et al.

7031951
April 2006
Mancisidor et al.

7035384
April 2006
Scherer

7035388
April 2006
Kurosaki

7039165
May 2006
Saylor et al.

7039166
May 2006
Peterson et al.

7045181
May 2006
Yoshizawa et al.

7046777
May 2006
Colson et al.

7047498
May 2006
Lui et al.

7050568
May 2006
Brown et al.

7050976
May 2006
Packingham

7050977
May 2006
Bennett

7058169
June 2006
Summer et al.

7058565
June 2006
Gusler et al.

7065188
June 2006
Mei et al.

7065202
June 2006
Statham et al.

7068774
June 2006
Judkins et al.

7076032
July 2006
Pirasteh et al.

7076051
July 2006
Brown et al.

7076427
July 2006
Scarano et al.

7076736
July 2006
Hugh

7080323
July 2006
Knott et al.

7085367
August 2006
Lang

7085755
August 2006
Bluhm et al.

7092509
August 2006
Mears et al.

7092510
August 2006
Hamilton, II et al.

7092888
August 2006
McCarthy et al.

7096219
August 2006
Karch

7099855
August 2006
Nelken et al.

7103170
September 2006
Fain et al.

7103172
September 2006
Brown et al.

7103553
September 2006
Applebaum et al.

7103562
September 2006
Kosiba et al.

7106850
September 2006
Campbell et al.

7107207
September 2006
Goodman

7107254
September 2006
Dumais et al.

7110523
September 2006
Gagle et al.

7110524
September 2006
Rupe et al.

7110525
September 2006
Heller et al.

7117158
October 2006
Weldon et al.

7117188
October 2006
Guyon et al.

7133866
November 2006
Rishel et al.

7134672
November 2006
Beishline et al.

7136851
November 2006
Ma et al.

7139555
November 2006
Apfel

7152029
December 2006
Alshawi et al.

7155158
December 2006
Iuppa et al.

7158935
January 2007
Gorin et al.

7171352
January 2007
Chang et al.

7181492
February 2007
Wen et al.

7194359
March 2007
Duffy et al.

7200675
April 2007
Wang et al.

7203635
April 2007
Oliver et al.

7203646
April 2007
Bennett

7209908
April 2007
Li et al.

7213742
May 2007
Birch et al.

7215744
May 2007
Scherer

7219085
May 2007
Buck et al.

7237137
June 2007
Goeller et al.

7237243
June 2007
Sutton et al.

7240011
July 2007
Horvitz

7240244
July 2007
Teegan et al.

7246353
July 2007
Forin et al.

7249135
July 2007
Ma et al.

7254579
August 2007
Cabrera et al.

7257203
August 2007
Quinton

7257514
August 2007
Faihe

7269516
September 2007
Brunner et al.

7275048
September 2007
Bigus et al.

2001/0010714
August 2001
Nemoto

2001/0024497
September 2001
Campbell et al.

2001/0041562
November 2001
Elsey et al.

2001/0044800
November 2001
Han

2001/0047261
November 2001
Kassan

2001/0047270
November 2001
Gusick et al.

2001/0049688
December 2001
Fratkina et al.

2001/0053977
December 2001
Schaefer

2001/0054064
December 2001
Kannan

2001/0056346
December 2001
Ueyama et al.

2002/0007356
January 2002
Rice et al.

2002/0013692
January 2002
Chandhok et al.

2002/0023144
February 2002
Linyard et al.

2002/0026435
February 2002
Wyss et al.

2002/0032591
March 2002
Mahaffy

2002/0044296
April 2002
Skaanning

2002/0046096
April 2002
Srinivasan et al.

2002/0051522
May 2002
Merrow et al.

2002/0055975
May 2002
Petrovykh

2002/0062245
May 2002
Niu et al.

2002/0062315
May 2002
Weiss et al.

2002/0072921
June 2002
Boland

2002/0087325
July 2002
Lee et al.

2002/0087385
July 2002
Vincent

2002/0104026
August 2002
Barra

2002/0111811
August 2002
Bares et al.

2002/0116243
August 2002
Mancisidor et al.

2002/0116698
August 2002
Lurie et al.

2002/0118220
August 2002
Lui et al.

2002/0123957
September 2002
Notarius et al.

2002/0143548
October 2002
Korall et al.

2002/0146668
October 2002
Burgin et al.

2002/0156776
October 2002
Davallou

2002/0161626
October 2002
Plante et al.

2002/0161896
October 2002
Wen et al.

2002/0168621
November 2002
Cook et al.

2002/0169834
November 2002
Miloslavsky et al.

2002/0174199
November 2002
Horvitz

2002/0178022
November 2002
Anderson et al.

2002/0184069
December 2002
Kosiba et al.

2003/0004717
January 2003
Strom et al.

2003/0007612
January 2003
Garcia

2003/0028451
February 2003
Ananian

2003/0031309
February 2003
Rupe et al.

2003/0035531
February 2003
Brown et al.

2003/0037177
February 2003
Sutton et al.

2003/0046297
March 2003
Mason

2003/0046311
March 2003
Baidya et al.

2003/0084009
May 2003
Bigus et al.

2003/0084066
May 2003
Waterman et al.

2003/0095652
May 2003
Mengshoel et al.

2003/0100998
May 2003
Brunner et al.

2003/0108162
June 2003
Brown et al.

2003/0108184
June 2003
Brown et al.

2003/0115056
June 2003
Gusler et al.

2003/0117434
June 2003
Hugh

2003/0120502
June 2003
Robb et al.

2003/0120653
June 2003
Brady et al.

2003/0144055
July 2003
Guo et al.

2003/0154120
August 2003
Freishtat et al.

2003/0169870
September 2003
Stanford

2003/0169942
September 2003
Comaniciu et al.

2003/0172043
September 2003
Guyon et al.

2003/0177009
September 2003
Odinak et al.

2003/0177017
September 2003
Boyer

2003/0187639
October 2003
Mills

2003/0198321
October 2003
Polcyn

2003/0200135
October 2003
Wright

2003/0204404
October 2003
Weldon et al.

2003/0212654
November 2003
Harper et al.

2003/0222897
December 2003
Moore et al.

2003/0225730
December 2003
Warner et al.

2003/0228007
December 2003
Kurosaki

2003/0233392
December 2003
Forin et al.

2003/0236662
December 2003
Goodman

2004/0002502
January 2004
Banholzer et al.

2004/0002838
January 2004
Oliver et al.

2004/0005047
January 2004
Joseph et al.

2004/0006478
January 2004
Alpdemir et al.

2004/0010429
January 2004
Vedula et al.

2004/0062364
April 2004
Dezonno et al.

2004/0066416
April 2004
Knott et al.

2004/0068497
April 2004
Rishel et al.

2004/0081183
April 2004
Monza et al.

2004/0093323
May 2004
Bluhm et al.

2004/0117185
June 2004
Scarano et al.

2004/0140630
July 2004
Beishline et al.

2004/0141508
July 2004
Schoeneberger et al.

2004/0148154
July 2004
Acero et al.

2004/0162724
August 2004
Hill et al.

2004/0162812
August 2004
Lane et al.

2004/0176968
September 2004
Syed et al.

2004/0181471
September 2004
Rogers et al.

2004/0181588
September 2004
Wang et al.

2004/0193401
September 2004
Ringger et al.

2004/0204940
October 2004
Alshawi et al.

2004/0210637
October 2004
Loveland

2004/0220772
November 2004
Cobble

2004/0226001
November 2004
Teegan et al.

2004/0228470
November 2004
Williams et al.

2004/0230689
November 2004
Loveland

2004/0234051
November 2004
Quinton

2004/0240629
December 2004
Quinton

2004/0240636
December 2004
Quinton

2004/0240639
December 2004
Colson et al.

2004/0240659
December 2004
Gagle et al.

2004/0243417
December 2004
Pitts et al.

2004/0249636
December 2004
Applebaum et al.

2004/0249650
December 2004
Freedman et al.

2004/0252822
December 2004
Statham et al.

2004/0260546
December 2004
Seo et al.

2004/0260564
December 2004
Horvitz

2004/0268229
December 2004
Paoli

2005/0002516
January 2005
Shtivelman

2005/0021485
January 2005
Nodelman et al.

2005/0021599
January 2005
Peters

2005/0027495
February 2005
Matichuk

2005/0027827
February 2005
Owhadi et al.

2005/0047583
March 2005
Summer et al.

2005/0049852
March 2005
Chao

2005/0050527
March 2005
McCrady et al.

2005/0065789
March 2005
Yacoub et al.

2005/0065899
March 2005
Li et al.

2005/0066236
March 2005
Goeller et al.

2005/0068913
March 2005
Tan et al.

2005/0071178
March 2005
Beckstrom et al.

2005/0083846
April 2005
Bahl

2005/0084082
April 2005
Horvitz et al.

2005/0091123
April 2005
Freishtat et al.

2005/0091147
April 2005
Ingargiola et al.

2005/0091219
April 2005
Karachale et al.

2005/0097028
May 2005
Watanabe et al.

2005/0097197
May 2005
Vincent

2005/0105712
May 2005
Williams et al.

2005/0114376
May 2005
Lane et al.

2005/0125229
June 2005
Kurzweil et al.

2005/0125369
June 2005
Buck et al.

2005/0125370
June 2005
Brennan et al.

2005/0125371
June 2005
Bhide et al.

2005/0125463
June 2005
Joshi et al.

2005/0132094
June 2005
Wu

2005/0135595
June 2005
Bushey et al.

2005/0143628
June 2005
Dai et al.

2005/0149520
July 2005
De Vries

2005/0152531
July 2005
Hamilton, II et al.

2005/0154591
July 2005
Lecoeuche

2005/0160060
July 2005
Swartz et al.

2005/0163302
July 2005
Mock et al.

2005/0165803
July 2005
Chopra et al.

2005/0171932
August 2005
Nandhra

2005/0175167
August 2005
Yacoub

2005/0176167
August 2005
Lee

2005/0177368
August 2005
Odinak

2005/0177414
August 2005
Priogin et al.

2005/0177601
August 2005
Chopra et al.

2005/0187944
August 2005
Acheson et al.

2005/0193102
September 2005
Horvitz

2005/0195966
September 2005
Adar et al.

2005/0198110
September 2005
Garms et al.

2005/0203747
September 2005
Lecoeuche

2005/0203760
September 2005
Gottumukkala et al.

2005/0203949
September 2005
Cabrera et al.

2005/0204051
September 2005
Box

2005/0212807
September 2005
Premchandran

2005/0228707
October 2005
Hendrickson

2005/0228796
October 2005
Jung

2005/0228803
October 2005
Farmer et al.

2005/0232409
October 2005
Fain et al.

2005/0246241
November 2005
Irizarry et al.

2005/0251382
November 2005
Chang et al.

2005/0256819
November 2005
Tibbs et al.

2005/0256850
November 2005
Ma et al.

2005/0256865
November 2005
Ma et al.

2005/0267772
December 2005
Nielsen et al.

2005/0270293
December 2005
Guo et al.

2005/0273336
December 2005
Chang et al.

2005/0273384
December 2005
Fraser

2005/0273771
December 2005
Chang et al.

2005/0278124
December 2005
Duffy et al.

2005/0278177
December 2005
Gottesman

2005/0278213
December 2005
Faihe

2005/0288006
December 2005
Apfel

2005/0288871
December 2005
Duffy et al.

2005/0288981
December 2005
Elias et al.

2006/0004845
January 2006
Kristiansen et al.

2006/0010164
January 2006
Netz et al.

2006/0010206
January 2006
Apacible et al.

2006/0015390
January 2006
Rijsinghani et al.

2006/0020692
January 2006
Jaffray et al.

2006/0026011
February 2006
Verego et al.

2006/0036445
February 2006
Horvitz

2006/0036642
February 2006
Horvitz et al.

2006/0041423
February 2006
Kline et al.

2006/0041648
February 2006
Horvitz

2006/0053204
March 2006
Sundararajan et al.

2006/0059431
March 2006
Pahud

2006/0069564
March 2006
Allison et al.

2006/0069570
March 2006
Allison et al.

2006/0069684
March 2006
Vadlamani et al.

2006/0069863
March 2006
Palmer et al.

2006/0070081
March 2006
Wang

2006/0070086
March 2006
Wang

2006/0074732
April 2006
Shukla et al.

2006/0074831
April 2006
Hyder et al.

2006/0075024
April 2006
Zircher et al.

2006/0075399
April 2006
Loh et al.

2006/0080107
April 2006
Hill et al.

2006/0080468
April 2006
Vadlamani et al.

2006/0080670
April 2006
Lomet

2006/0101077
May 2006
Warner et al.

2006/0106743
May 2006
Horvitz

2006/0109974
May 2006
Paden et al.

2006/0122834
June 2006
Bennett

2006/0122917
June 2006
Lokuge et al.

2006/0149555
July 2006
Fabbrizio et al.

2006/0161407
July 2006
Lanza et al.

2006/0167696
July 2006
Chaar et al.

2006/0167837
July 2006
Ramaswamy et al.

2006/0178883
August 2006
Acero et al.

2006/0182234
August 2006
Scherer

2006/0190226
August 2006
Jojic et al.

2006/0190253
August 2006
Hakkani-Tur et al.

2006/0195321
August 2006
Deligne et al.

2006/0195440
August 2006
Burges et al.

2006/0198504
September 2006
Shemisa et al.

2006/0200353
September 2006
Bennett

2006/0206330
September 2006
Attwater et al.

2006/0206336
September 2006
Gurram et al.

2006/0206481
September 2006
Ohkuma et al.

2006/0212286
September 2006
Pearson et al.

2006/0212446
September 2006
Hammond et al.

2006/0235861
October 2006
Yamashita et al.

2007/0005531
January 2007
George et al.

2007/0033189
February 2007
Levy et al.

2007/0041565
February 2007
Williams et al.

2007/0043571
February 2007
Michelini et al.

2007/0043696
February 2007
Haas et al.

2007/0063854
March 2007
Zhang et al.

2007/0208579
September 2007
Peterson

2008/0086402
April 2008
Patel et al.



 Foreign Patent Documents
 
 
 
2248897
Sep., 1997
CA

2301664
Jan., 1999
CA

2485238
Jan., 1999
CA

0077175
Apr., 1983
EP

0700563
Mar., 1996
EP

0977175
Feb., 2000
EP

1191772
Mar., 2002
EP

1324534
Jul., 2003
EP

1424844
Jun., 2004
EP

1494499
Jan., 2005
EP

2343772
May., 2000
GB

10133847
May., 1998
JP

2002055695
Feb., 2002
JP

2002189483
Jul., 2002
JP

2002366552
Dec., 2002
JP

2002374356
Dec., 2002
JP

2004030503
Jan., 2004
JP

2004104353
Apr., 2004
JP

2004118457
Apr., 2004
JP

2004220219
Aug., 2004
JP

2004241963
Aug., 2004
JP

2004304278
Oct., 2004
JP

2005258825
Sep., 2005
JP

WO9215951
Sep., 1992
WO

WO9321587
Oct., 1993
WO

WO95/27360
Oct., 1995
WO

WO9502221
Nov., 1995
WO

WO99/04347
Jan., 1999
WO

WO99/53676
Oct., 1999
WO

WO00/18100
Mar., 2000
WO

WO00/70481
Nov., 2000
WO

WO00/73955
Dec., 2000
WO

WO00/75851
Dec., 2000
WO

WO01/04814
Jan., 2001
WO

WO01/33414
May., 2001
WO

WO01/35617
May., 2001
WO

WO01/37136
May., 2001
WO

WO01/39028
May., 2001
WO

WO01/39082
May., 2001
WO

WO01/39086
May., 2001
WO

WO01/82123
Nov., 2001
WO

WO02/09399
Jan., 2002
WO

WO02/09399
Jan., 2002
WO

WO02/19603
Mar., 2002
WO

WO02/27426
Apr., 2002
WO

WO02/061730
Aug., 2002
WO

WO02/073331
Sep., 2002
WO

WO 03/009175
Jan., 2003
WO

WO03/009175
Jan., 2003
WO

WO03/021377
Mar., 2003
WO

WO03/069874
Aug., 2003
WO

WO2004/059805
Jul., 2004
WO

WO2004/081720
Sep., 2004
WO

WO2004/091184
Oct., 2004
WO

WO2004/107094
Dec., 2004
WO

WO2005/006116
Jan., 2005
WO

WO2005/011240
Feb., 2005
WO

WO2005/013094
Feb., 2005
WO

WO2005/069595
Jul., 2005
WO

WO2006/050503
May., 2006
WO

WO2006/062854
Jun., 2006
WO

WO2007/033300
Mar., 2007
WO



   
 Other References 

Acl.Idc.upenn.edu/W/W99/W99-0306.pdf (visited on Aug. 22, 2007). cited by other
.
Dingo.sbs.arizona.edu/.about.sandiway/ling538o/lecture1.pdf (visited on Aug. 22, 2007). cited by other
.
En.wikipedia.org/wiki/Microsoft.sub.--Agent (visited on Aug. 22, 2007). cited by other
.
en.wikipedia.org/wiki/Wizard.sub.--of.sub.--Oz.sub.--experiment (visited on Aug. 22, 2007). cited by other
.
Liveops.com/news/news.sub.--07-0116.html (visited on Aug. 22, 2007). cited by other
.
www.aumtechinc.com/CVCC/cvcc11.0.htm (visited on Aug. 22, 2007). cited by other
.
www.beamyourscreen.com/US/Welcome.aspx (visited Aug. 24, 2007). cited by other
.
www.bultreebank.org/SProLaC/paper05.pdf (visited on Aug. 22, 2007). cited by other
.
www.callcenterdemo.com (visited on Aug. 22, 2007). cited by other
.
www.changingcallcenters.com (visited on Aug. 22, 2007). cited by other
.
www.crm2day.com/news/crm/115147.php (visited on Aug. 22, 2007). cited by other
.
www.csdl2.computer.org/persagen/DLAbsToc (visited Sep. 13, 2007). cited by other
.
www.eff.org/patent (visited on Aug. 22, 2007). cited by other
.
www.eff.org/patent/wanted/patent.php?p=firepond (visited on Aug. 22, 2007). cited by other
.
www.egain.com (visited Aug. 24, 2007). cited by other
.
www.instantservice.com (visited Aug. 24, 2007). cited by other
.
www.kana.com (visited Aug. 24, 2007). cited by other
.
www.learn.serebra.com/trainingclasses/index (visited Sep. 13, 2007). cited by other
.
www.livecare.it/en/par.sub.--business (visited Aug. 24, 2007). cited by other
.
www.livehelper.com/products (visited Aug. 24, 2007). cited by other
.
www.liveperson.com/enterprise/proficient.asp (visited Aug. 24, 2007). cited by other
.
www.microsoft.com/serviceproviders/solutions/ccf.mspx (visited on Aug. 22, 2007). cited by other
.
www.oracle.com/siebel/index (visited Aug. 24, 2007). cited by other
.
www.pageshare.com (visited Aug. 24, 2007). cited by other
.
www.realmarket.com/news/firepond082703.html (visited on Aug. 22, 2007). cited by other
.
www.serebra.com/naa/index (visited Sep. 13, 2007). cited by other
.
www.speechcycle.com/about/management.sub.--team.asp (visited on Aug. 22, 2007). cited by other
.
www.spoken.com (visited on Aug. 22, 2007). cited by other
.
www.spoken.com/who/our.sub.--story.asp (visited on Aug. 22, 2007). cited by other
.
www.training-classes.com/course.sub.--hierarchy/courses/4322.sub.--call.sub.--center.sub.--structures.sub.--customer relationships.php. cited by other
.
www.velaro.com (visited Aug. 24, 2007). cited by other
.
www.virtualhold.com (visited on Aug. 22, 2007). cited by other
.
www.volusion.com (visited Aug. 24, 2007). cited by other
.
www.vyew.com/content (visited Aug. 24, 2007). cited by other
.
www.webdialogs.com (visited Aug. 24, 2007). cited by other
.
www.webmeetpro.com/technology.asp (visited Aug. 24, 2007). cited by other
.
www.wired.com/news/business/0,64038-0.html (visited on Aug. 22, 2007). cited by other
.
Adams, Scott, Dilbert cartoon. cited by other
.
Alam, Hisham; Stand and Deliver (Industry Trend or Event) (Editorial), Intelligent Enterprise, Mar. 27, 2001, pp. 44, vol. 4, No. 5, CMP Media, Inc., USA. (Abstract only reviewed and provided.). cited by other
.
Avaya. Advanced Multichannel Contact Management Avaya Interaction Center White Paper. Apr. 2005. cited by other
.
Bernstel, J.B., Speak right up! Ybank call centers, ABA Bank Marketing, Nov. 2001, pp. 16-21, vol. 33, No. 9, American Bankers Assoc., USA. (Abstract only reviewed and provided.). cited by other
.
Burbach, Stacey; Niedenthal, Ashley; Siebel Leverages Telephony@Work Technology as Part of Siebel CRM OnDemand Release 7; Siebel Incorporates Telephony@Work Technology In CRM OnDemand Release 7, Jun. 6, 2005, ProQuest, Chicago, IL. (Abstract only
reviewed and provided.). cited by other
.
Caruso, Jeff; Standards Committee to Define Call Center Terms (industry Reporting Standards Steering Committee) (Technology Information), CommunicationsWeek, Apr. 29, 1996, 1(2) pp., No. 608, USA. (Abstract only reviewed and provided.). cited by
other
.
Chan, C.; Chen, Liqiang; Chen, Lin-Li; Development of an Intelligent Case-Based System for Help Desk Operations, May 9-12, 1999, pp. 1062-1067, vol. 2, Electrical and Computer Engineering, 1999 IEEE Canadian Conference on, Edmonton, Alta., Canada.
(Abstract only reviewed and provided.). cited by other
.
Chiu, Dickson K.W.; Chan, Wesley C.W.; Lam, Gary K.W.; Cheung, S.C.; and Luk, Franklin T., An Event Driven Approach to Customer Relationship Management in E-Brokerage Industry, Jan. 2003, 36.sup.th Annual Hawaii International Conference on System
Sciences, USA. (Abstract only reviewed and provided.). cited by other
.
Choudhary, Alok; Dubey, Pradeep; Liao, Wei-Keng; Liu, Ying; Memik, Gokhan; Pisharath, Jayaprakash; Performance Evaluation and Characterization of Scalable Data Mining Algorithms, 2004, pp. 620-625, vol. 16, Proc. IASTED INT. Conf. Parall. Distrib.
Comput. Syst., USA. cited by other
.
Finke, M.; Lapata, M.; Lavie, A.; Levin, L.; Tomokiyo, L.M.; Polzin, T.; Ries, K.; Waibel, A.; Zechner, K.; Clarity: Inferring Discourse Structure from Speech, Mar. 23-25, 1998, pp. 25-32, Proceedings of 1998 Spring Symposium Series Applying Machine
Learning to Discourse Processing, USA. (Abstract only reviewed and provided.). cited by other
.
Fowler, Dennis. The Personal Touch--How E-Businesses Are Using Customer Relations Management to Thwart Competitors and Bolster Their Bottom Lines. Dec. 2000. cited by other
.
Gao, Jianfeng; Microsoft Research Asia and Chin-Yew Lin, Information Sciences Institute, Univ. of S. California, Introduction to the Special Issue on Statistical Language Modeling, ACM Transactions on Asian Language Information Processing, vol. 3,
No. 2, Jun. 2004, pp. 87-93. cited by other
.
Getting Better Every Day: How Marrying Customer Relationship Marketing to Continuous Improvement Brings Sustained Growth, Aug. 2005, pp. 24-25, vol. 21, No. 8, Strategic Direction, USA. (Abstract only reviewed and provided.). cited by other
.
Gupta, Narendra; Gokhan Tur, Dilek Hakkani-Tur, Member, IEEE, Srinivas Bangalore, Giuseppe Riccardi, Senior Member; IEEE, and Mazin Gilbert, Senior Member, IEEE. The AT&T Spoken Language Understanding System. IEEE Transaction on Audio, Speech, and
Language Processing. vol. 14, No. 1, Jan. 2006. cited by other
.
Hu, Xunlei Rose, and Eric Atwell. A survey of machine learning approaches to analysis of large corpora. School of Computing, University of Leeds, U.K. LS2 9JT. cited by other
.
IBM TDB, #7 Business Method to Improve Problem Diagnosis in Current Systems Using a Combination of XML and VoiceXML, Jan. 1, 2002, IPCOM000014964D, USA. cited by other
.
Iyer, A.V.; Deshpande, V.; Zhengping, Wu; A Postponement Model for Demand Management, Aug. 2003, pp. 983-1002, vol. 49, No. 8, Management Science, USA. (Abstract only reviewed and provided.). cited by other
.
Karahan, Mercan; Dilek Hakkani-Tur, Giuseppe Riccardi, Gokhan Tur. Combining Classifiers for Spoken Language Understanding. .COPYRGT. 2003 IEEE. cited by other
.
Langkilde, Irene; Marilyn Walker; Jerry Wright, Allen Gorin, Diane Litman. Automatic Prediction of Problematic Human-Computer Dialogues in `How May I Help You?` AT&T Labs--Research. cited by other
.
Lewis, Michael, Research note: A Dynamic Programming Approach to Customer Relationship Pricing, Jun. 2005, pp. 986-994, vol. 51, No. 6, Management Science, USA. (Abstract only reviewed and provided.). cited by other
.
Lindsay, Jeff; Schuh, James; Reade, Walter; Peterson, Karin; Mc Kinney, Christopher. The Historic Use of Computerized Tools for Marketing and Market Research: A Brief Survey, Dec. 27, 2001, www.ip.com, USA. (Abstract only reviewed and provided.).
cited by other
.
Litman, Diane J. And Shimei Pan. Designing and Evaluating an Adaptive Spoken Dialogue System. .COPYRGT. 2002 Kluwer Academic Publishers. cited by other
.
Loren Struck, Dennis. Business Rule Continuous Requirement Environment. A Dissertation Submitted to the Graduate Council in Partial Fulfillment of the Requirement for the Degree of Doctor of Computer Science. Colorado Springs, Colorado, May 1999.
cited by other
.
Marsico, K., Call Centers: Today's New Profit Centers, 1995-96, pp. 14-18, vol. 10, No. 4, AT&T Technology, USA. (Abstract only reviewed and provided.). cited by other
.
Mohri, Mehryar; Fernando Pereira, Michael Riley. Weighted Finite-State Transducers in Speech Recognition. Article submitted to Computer Speech and Language. cited by other
.
Paek, Tim & Eric Horvitz. Optimizing Automated Call Routing by Integrating Spoken Dialog Models with Queuing Models. (timpaek/horvitz@Microsoft.com) Microsoft, Redmond, WA. cited by other
.
Peng, Fuchun and Dale Schuurmans. Combining Naive Bayes and n-Gram Language Models for Text Classification. (f3peng, dale@cs.uwaterloo.ca). cited by other
.
Pradhan, Sameer S.; Ward, Wayne H.; Estimating Semantic Confidence for Spoken Dialogue Systems, 2002, pp. 233-236, ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing--Proceedings v 1, USA. (Abstract only reviewed and
provided.). cited by other
.
Riccardi, G.; Gorin, A.L.; Ljolje, A.; Riley, M.; A Spoken Language System for Automated Call Routing, Apr. 21, 1997, pp. 1143-1146, International Conference on Acoustics, Speech and Signal Processing, USA. (Abstract only reviewed and provided.).
cited by other
.
Ritter, Julie, Crossroads Customer Solutions Expands Siebel Contact OnDemand Deployment with Siebel CRM OnDemand, Apr. 12, 2004, ProQuest, Chicago, IL. (Abstract only reviewed and provided.). cited by other
.
Schapire, Robert E.; and Yoram Singer. BoosTexter: A Boosting-based System for Text Categorization. (schapire@research.att.com; singer@research.att.com). cited by other
.
Schmidt, M.S., Identifying Speakers with Support Vector Networks, Jul. 8-12, 1996, pp. 305-314, Proceedings of 28.sup.th Symposium on the Interface of Computing Science and Statistics (Graph-Image-Vision.), USA. (Abstract only reviewed and
provided.). cited by other
.
Seiya and Masaru (Toshiba), #3 Knowledge Management Improvement of Help Desk Operation by Q & A Case Referencing, Toshiba Rebya, 2001, vol. 56, No. 5, pp. 28-31. cited by other
.
Shriberg, E.; Bates, R.; Stolcke, A.; Taylor, P.; Jurafsky, D.; Ries, K.; Coccaro, N.; Martin, R.; Mateer, M.; Vaness-Dykema, C.; Can Prosody Aid the Automatic Classification of Dialog Acts in Conversational Speech?, 1998, pp. 443-492, vol. 41.
cited by other
.
SSA Global Strengthens CRM Functionality for Customers' Inbound and Outbound Marketing Initiatives; SSA Marketing 7.0 introduces major enhancements to the company's industry-leading marketing automation solution, Apr. 3, 2006, ProQuest, Chicago, IL.
(Abstract only reviewed and provided.). cited by other
.
Steinborn, D. Time flies, even waiting (bank telephone answering), Bank Systems + Technology, Sep. 1993, pp. 39, 41, vol. 30, No. 9, La Salle Nat. Bank, Chicago, IL. (Abstract only reviewed and provided.). cited by other
.
Stolcke, A.; Ries, K.; Coccaro, N.; Shriberg, E.; Bates, R.; Jurafsky, D.; Taylor, P.; Martin, R.; Van Ess-Dykema, C.; Meteer, M.; Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech, Sep. 2000, pp. 339-373. cited by
other
.
Suhm, Bernhard and Pat Peterson. Data-Driven Methodology for Evaluating and Optimizing Call Center IVRs. Revised Aug. 22, 2001. (bsuhm@bbn.com; patp@bbn.com). cited by other
.
Tang, Min; Bryan Pellom, Kadri Hacioglu. Call-Type Classification and Unsupervised Training for the Call Center Domain. (tagm,pellom, hacioglu@cslr.colorado.edu). cited by other
.
Thiel, Beth Miller; Weber, Porter Novelli Kimberly; PeopleSoft Announces General Availability of PeopleSoft Enterprise CRM 8.9 Service, Jun. 16, 2004, ProQuest, Chicago, IL. (Abstract only reviewed and provided.). cited by other
.
To use and abuse (Voice processing), What to Buy for Business, Jan. 1995, pp. 2-20, No. 166, UK. (Abstract only reviewed and provided.). cited by other
.
Wahlster, Wolfgang. The Role of Natural Language in Advanced Knowledge-Based Systems. In: H. Winter (ed.): Artificial Intelligence and Man-Machine Systems, Berlin: Springer. cited by other
.
Walker, Marilyn A.; Irene Langkilde-Geary, Helen Wright Hastie, Jerry Wright, Allen Gorin. Automatically Training a Problematic Dialogue Predictor for a Spoken Dialogue System. .COPYRGT. 2002 AI Access Foundation and Morgan Kaufmann Publishers,
published May 2002. cited by other
.
Walker, Marilyn, Learning to Predict Problematic Situations in a Spoken Dialogue System: Experiments with How May I Help You?, 2000, pp. 210-217, ACM International Conference Proceeding Series; vol. 4 archive, Proceedings of the first conference on
North American Chapter of the Association for Computational Linguistics, Seattle, WA. (Abstract only reviewed and provided.). cited by other
.
Whittaker, Scahill, Attwater and Geenhow, Interactive Voice Technology for Telecommunications Applications: #10 Practical Issues in the application of speech technology to network and customer service applications, (1998)
IVTTA '98 Proceedings, 1998 IEEE 4.sup.th Workshop, Published Sep. 1998, pp. 185-190. USA. cited by other
.
Yan, Lian; R.H. Wolniewica, R. Dodier. Abstract--Predicting Customer Behavior in Telecommunications. Intelligent Systems, IEEE Mar.-Apr. 2004. cited by other
.
Young, Alan; Innis, Rafael; System and Method for Developing Business Process Policies, Jul. 3, 2002, Patent Publication 2003005154, Computer Associates International, Inc., USA. (Abstract only reviewed and provided.). cited by other
.
Young, Howard; Adiano, Cynthia; Enand, Navin; Ernst, Martha; Thompson, Harvey; Zia, May Sun; Customer Relationship Management Business Method, Jul. 5, 2005, U.S. Appl. No. 09/723,519, USA. (Abstract only reviewed and provided.). cited by other
.
Zweig, G.; O. Siohan,G. Saon, B. Ramabhadran, D. Povey, L. Mangu and B. Kingsbu. Automated Quality Monitoring in the Call Center With ASR and Maximum Entropy. IBM T.J. Watson Research Center, Yorktown Heights, NY 10598 (ICASSP 2006). cited by other
.
Alpaydin, Ethem; Introduction to Machine Learning, Abe Books. cited by other
.
Mitchell, Tom, Machine Learning, Abe Books. cited by other
.
Russell, Stuart J. and Peter Norvig, Artificial Intelligence: A Modern Approach, Abe Books. cited by other
.
Witten, Ian H., and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques, Abe Books. cited by other
.
Gao, Jianfeng, et al.: Introduction to a Special Issue on Statistical Language Modeling; ACT Transaction on Asian Language Information Processing; vol. 3, No. 2: Jun. 2004; pp. 87-93. cited by other
.
Office Action Non-Final Rejection for U.S. Appl. No. 10/419,463, filed Oct. 23, 2007. cited by other
.
Information Disclosure Statement for U.S. Appl. No. 10/419,463, filed Apr. 21, 2003. cited by other
.
Original Application for U.S. Appl. No. 10/419,463, filed Apr. 21, 2003. cited by other
.
Information Disclosure Statement for U.S. Appl. No. 11/565,702, filed Sep. 26, 2007. cited by other
.
Original Application for U.S. Appl. No. 11/565,702, filed Dec. 1, 2006. cited by other
.
Original Application for U.S. Appl. No. 60/978,553, filed Oct. 9, 2007. cited by other
.
U.S. Appl. No. 11/565,702. cited by other
.
U.S. Appl. No. 60/978,553. cited by other
.
Office Action dated Mar. 24, 2009 for U.S. Appl. No. 10/419,463. cited by other
.
U.S. Appl. No. 11/198,934. cited by other
.
U.S. Appl. No. 11/686,812. cited by other
.
U.S. Appl. No. 11/749,983. cited by other
.
U.S. Appl. No. 11/751,680. cited by other
.
U.S. Appl. No. 11/751,851. cited by other
.
U.S. Appl. No. 11/751,976. cited by other
.
U.S. Appl. No. 60/747,896. cited by other
.
U.S. Appl. No. 60/867,030. cited by other
.
U.S. Appl. No. 60/882,906. cited by other
.
U.S. Appl. No. 60/908,044. cited by other
.
U.S. Appl. No. 09/323,718. cited by other
.
U.S. Appl. No. 09/518,916. cited by other
.
U.S. Appl. No. 09/809,142. cited by other
.
U.S. Appl. No. 09/909,250. cited by other
.
U.S. Appl. No. 09/945,032. cited by other
.
U.S. Appl. No. 10/419,463. cited by other
.
U.S. Appl. No. 10/624,283. cited by other
.
U.S. Appl. No. 10/679,192. cited by other
.
U.S. Appl. No. 10/825,645. cited by other
.
U.S. Appl. No. 10/862,482. cited by other
.
U.S. Appl. No. 11/044,848. cited by other
.
U.S. Appl. No. 11/174,827. cited by other
.
U.S. Appl. No. 11/291,562. cited by other
.
U.S. Appl. No. 11/457,048. cited by other
.
U.S. Appl. No. 60/146,515. cited by other
.
U.S. Appl. No. 60/306,142. cited by other
.
U.S. Appl. No. 60/632,437. cited by other
.
U.S. Appl. No. 60/741,291. cited by other
.
U.S. Appl. No. 60/743,493. cited by other.  
  Primary Examiner: Garg; Yogesh C


  Attorney, Agent or Firm: Morriss; William S.
Frost Brown Todd LLC



Claims  

The invention claimed is:

 1.  A system comprising: a) a processor;  b) a computer memory connected to said processor containing one or more models utilized to process a customer interaction, said
customer interaction comprising: i) one or more statements made by a customer;  ii) one or more prompts played for said customer;  c) a set of computer executable instructions stored in said computer memory and configured to: i) coordinate processing of
the customer interaction;  ii) maintain a set of context information related to said customer interaction;  iii) create a data record comprising information related to said customer interaction from said set of context information;  iv) store information
from said data record in said computer memory;  v) utilize information from said data record stored in said computer memory, along with information from a plurality of other data records comprising context information gathered during a plurality of
interactions with other customers, to automatically create one or more model updates without involvement from a human being;  vi) automatically update one or more of said models using one or more of said model updates without involvement from a human
being;  wherein the set of computer executable instructions is configured to automatically update one or more of said models using one or more of said model updates without involvement from a human being in response to a trigger, wherein said trigger
comprises at least one element selected from the group consisting of: i) a usage milestone, wherein the usage milestone is a trigger condition based at least in part on number of interactions processed by the system;  ii) the creation of the one or more
model updates;  and iii) a schedule, wherein the schedule is a trigger condition based at least in part on elapsed time.


 2.  The system of claim 1 further comprising: an agent terminal operable to allow an agent to process said customer interaction;  wherein said customer interaction further comprises one or more statements made by the agent to the customer;  and
wherein said set of computer executable instructions is further configured to: a) create a second data record;  b) determine if said data record and said second data record contain information relating to the same customer interaction;  c) if said data
record and said second data record contain information relating to the same customer interaction, correlate said data record and said second data record;  d) store information from said second data record in said computer memory.


 3.  The system of claim 1 wherein said set of computer executable instructions is further configured to make at least one recommendation regarding processing said customer interaction.


 4.  The system of claim 3 wherein said set of computer executable instructions is further configured to: a) identify a set of actions;  b) perform an evaluation of the desirability of an action in said set of actions;  wherein said set of
computer executable instructions is further configured to make said at least one recommendation based at least in part on said evaluation of the desirability of the action in said set of actions.


 5.  The system of claim 3 wherein: a) said one or more models comprises a statistical prediction model;  b) said set of computer executable instructions is further configured to make said at least one recommendation based on said statistical
prediction model.


 6.  The system of claim 5 wherein said one or more model updates created by said set of computer executable instructions comprises an update for said statistical prediction model.


 7.  The system of claim 3 wherein said set of computer executable instructions is further configured to make said at least one recommendation based on a means for evaluating options.


 8.  The system of claim 1 wherein said set of computer executable instructions is further configured to determine at least one word included in a statement from said one or more customer statements.


 9.  The system of claim 8 wherein: a) said one or more models in said computer memory comprises a statistical language model;  b) said set of computer executable instructions is further configured to determine the at least one word included in a
statement from said one or more customer statements based on said statistical language model.


 10.  The system of claim 9 wherein said one or more model updates created by said set of computer executable instructions comprises an update for said statistical language model.


 11.  The system of claim 1 wherein said set of computer executable instructions is further configured to determine a meaning of a statement from said one or more customer statements.


 12.  The system of claim 11 wherein: a) said one or more models in said computer memory comprises a semantic classification model;  b) said set of computer executable instructions is further configured to determine said meaning of a statement
from said one or more customer statements based on said semantic classification model.


 13.  The system of claim 12 wherein said one or more model updates created by said set of computer executable instructions comprises an update for said semantic classification model.


 14.  The system of claim 1 wherein said one or more model updates are comprised of one or more updated models.


 15.  The system of claim 1 wherein said computer memory further contains: a) a set of potential call dispositions;  b) an indication of the desirability of said call dispositions;  c) a record of a disposition of said customer interaction.


 16.  The system of claim 1 wherein the trigger is a usage milestone.


 17.  The system of claim 1 wherein said set of computer executable instructions is further configured to coordinate processing of said customer interaction through the implementation of one or more actions. 
Description  

FIELD OF THE INVENTION


This invention is in the field of interactive voice response (IVR) systems.


BACKGROUND OF THE INVENTION


Enterprises are continuously looking for ways to provide quality service to their customers while at the same time minimizing their customer service operations costs.  In response to these concerns, many businesses have utilized Self Care Systems
(SCSs), which are computerized systems which allow a customer interaction, such as the reporting of a problem with an existing service, to be performed in an automated manner.  In order to increase the effectiveness of SCSs, many businesses utilize SCSs
which have the ability to understand and make natural language statements, that is, to understand and make declarations or remarks in a human language such as English, French or Japanese, rather than requiring a customer to utilize a more artificial
interface such as a touch tone menu system.  Thus, when a call is received, the SCS generally begins the interaction by playing a prompt, which is an audio sequence requesting information from the customer, such as "How may I help you?" The interaction
then continues with the customer making a series of statements, which are natural language communications that supply information or make requests, and the SCS playing a series of prompts until the interaction is complete.
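
By way of illustration only, the prompt-and-statement alternation described above might be sketched in Python as follows. The prompts, the test for completion, and the canned statements are assumptions made for the sketch and are not part of any particular SCS.

def run_self_care_interaction(customer_statements):
    """Alternate prompts and customer statements until the interaction is complete."""
    transcript = [("prompt", "How may I help you?")]
    for statement in customer_statements:
        transcript.append(("statement", statement))
        if statement.strip().lower() in {"no", "that's all", "goodbye"}:
            break  # the customer indicates the interaction is complete
        transcript.append(("prompt", "Is there anything else I can help you with?"))
    return transcript


if __name__ == "__main__":
    print(run_self_care_interaction(["My internet is down.", "That's all"]))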


While these SCSs are functional, they suffer from a number of weaknesses.  While automatic speech recognition (ASR) and natural language understanding (NLU) software have reached a high level of sophistication, word recognition and context
recognition are not perfect.  This can lead to an SCS repeatedly prompting a customer for more information.  This may cause frustration on the part of the customer, lead to a greater number of abandoned calls, and, potentially, result in lost revenue.


Because of technological deficiencies, many businesses supplement SCSs with assisted care systems (ACSs).  These are systems that allow customers to interact directly with human agents.  One type of system, in which an SCS is often supplemented
with an ACS, is an interactive voice response (IVR) system.  IVR systems interact with callers using natural language.  However, augmenting SCSs with ACSs can lead to additional problems.  For example, some SCSs prematurely transfer calls to ACSs.  This
may lead to increased costs associated with running the ACS.  Even more troublesome are situations where an SCS is supplemented by an ACS which is made up of multiple agent groups, that is, by groups of one or more human operators with special training
to process certain types of interactions.  This creates the potential for an SCS to transfer a customer to an agent group which is not properly trained to process that customer's requests.  An agent within the agent group is then required to assess the
customer's needs and transfer the customer to a different group.  Such improper transfers result in customer frustration and increase the costs of maintaining the ACS.


In addition to the above weaknesses, SCSs do not include any mechanism for automatically updating the software they use for processing natural language interactions.  Thus, even if a particular type of statement is identified as particularly
likely to result in a misdirected transfer, or in an unnecessary request for additional information, it would still be necessary to undertake a cumbersome manual update process in order to improve the SCS performance.


Certain embodiments of the present invention are designed to provide a solution to certain of the weaknesses set forth above.


SUMMARY OF THE INVENTION


Some embodiments of the invention of this application comprise a system comprising a computer memory containing one or more models utilized to process a customer interaction comprising one or more statements made by a customer and one or more
prompts played for the customer.  As used in this application, computer memory should be understood to include any device or medium, or combination of devices and media, which is capable of storing computer readable and/or executable instructions and/or
data.  In such embodiments, the system might further comprise a set of computer executable instructions configured to coordinate processing of the customer interaction, to maintain a set of context information related to the customer interaction, to
create a data record comprising information related to the customer interaction, to store information from the data record in the computer memory, to utilize information stored in the computer memory to automatically create one or more model updates, and
to automatically update one or more of the models using one or more of the model updates.  As used in this application, set of computer executable instructions should be understood to include any combination of one or more computer instructions
regardless of how organized, whether into one or more modules, one or more programs, a distributed system or any other organization.
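
By way of illustration only, and not as a description of any actual implementation, the closed loop arrangement summarized above might be sketched in Python as follows. The class names, the record fields, and the assumption that each model object exposes build_update() and apply_update() methods are hypothetical.

from dataclasses import dataclass, field
from typing import Any


@dataclass
class InteractionRecord:
    interaction_id: str
    statements: list[str] = field(default_factory=list)    # statements made by the customer
    prompts: list[str] = field(default_factory=list)        # prompts played for the customer
    context: dict[str, Any] = field(default_factory=dict)   # maintained context information


class ClosedLoopCoordinator:
    def __init__(self, models):
        self.models = models      # e.g. {"language": ..., "semantic": ..., "prediction": ...}
        self.records = []         # data records gathered from prior interactions

    def store_record(self, record: InteractionRecord):
        self.records.append(record)

    def create_model_updates(self):
        # Derive updates from the accumulated records; no human involvement required.
        return {name: model.build_update(self.records)
                for name, model in self.models.items()}

    def apply_model_updates(self, updates):
        for name, update in updates.items():
            self.models[name].apply_update(update)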


Additional embodiments of the invention might include, either in addition to, or as an alternative to one or more of the components described above, an agent terminal operable to allow an agent to process a customer interaction which might, in
addition to the statements described above, comprise statements made by the agent to the customer.  Further, in some embodiments the set of computer executable instructions might be further configured to create a second data record, determine if the data
record and the second data record contain information relating to the same customer interaction, and store information from the second data record in the computer memory.  In some embodiments, the computer executable instructions might be configured such
that, if the data record and the second data record contain information related to the same customer interaction, those data records would be correlated with one another.
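
A minimal sketch, again illustrative only, of how two data records might be determined to relate to the same customer interaction and then correlated appears below. Matching on a shared interaction identifier and the particular field names are assumptions, not requirements of the embodiments described.

def same_interaction(record_a: dict, record_b: dict) -> bool:
    return record_a.get("interaction_id") == record_b.get("interaction_id")


def correlate(record_a: dict, record_b: dict) -> dict:
    # Merge the agent-side record into the self care record for the same interaction.
    merged = dict(record_a)
    merged["agent_statements"] = record_b.get("agent_statements", [])
    merged["context"] = {**record_a.get("context", {}), **record_b.get("context", {})}
    return merged


scs_record = {"interaction_id": "12345", "statements": ["My bill is wrong"],
              "context": {"channel": "IVR"}}
acs_record = {"interaction_id": "12345",
              "agent_statements": ["I have corrected the charge."],
              "context": {"agent_group": "billing"}}

if same_interaction(scs_record, acs_record):
    combined = correlate(scs_record, acs_record)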


In some embodiments, the computer executable instructions might be configured to make at least one recommendation regarding processing the customer interaction.  In some embodiments, the computer executable instructions might identify a set of
actions, and perform an evaluation of the desirability of the actions in the set of actions.  In some embodiments where the computer executable instructions are configured to make the at least one recommendation regarding processing the customer
interaction, that at least one recommendation might be based at least in part on the evaluation of the desirability of an action in the set of actions.  In some embodiments, where the set of computer executable instructions are configured to make at
least one recommendation related to processing a customer interaction, that at least one recommendation might be based on a means for evaluating options.


In some embodiments, one or more of the models stored in the computer memory might comprise a statistical prediction model, and, if the set of computer executable instructions is configured to make at least one recommendation related to the
processing of a customer interaction, that at least one recommendation might be based at least in part on the statistical prediction model.  In some embodiments where one or more of the models in the computer memory is a statistical prediction model, the
one or more model updates created by the set of computer executable instructions might comprise an update for the statistical prediction model.
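
By way of illustration, one way a recommendation might be derived from a statistical prediction model is sketched below: each candidate action is scored by the predicted desirability of its outcome, and the highest scoring action is recommended. The logistic form of the model, the feature names, and the example weights are assumptions made only for this sketch.

import math


def predict_desirability(weights, features):
    # Simple logistic model: predicted probability of a desirable disposition.
    z = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


def recommend_action(actions, context, weights):
    scored = []
    for action in actions:
        features = {"bias": 1.0, "action=" + action: 1.0, **context}
        scored.append((predict_desirability(weights, features), action))
    return max(scored)[1]   # action with the highest predicted desirability


weights = {"bias": -0.2, "action=self_serve": 0.8,
           "action=transfer_billing": 0.3, "prior_failures": -0.5}
best = recommend_action(["self_serve", "transfer_billing"],
                        {"prior_failures": 1.0}, weights)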


In some embodiments, the set of computer executable instructions might be configured to determine at least one word included in a statement from the one or more customer statements.  In some embodiments, the one or more models in the computer
memory might comprise a statistical language model.  In some embodiments in which the computer executable instructions are configured to determine at least one word included in a statement from the one or more customer statements, that determination
might be based in part on the statistical language model.  In some embodiments in which the one or more models in the computer memory comprise a statistical language model, one or more of the model updates might comprise an update for the statistical
language model.
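
A minimal sketch of a statistical language model of the kind referred to above is shown below: a bigram count model built from transcribed customer statements and refreshed as new transcriptions accumulate. A deployed recognizer would use smoothing and a far larger corpus; the class and its toy corpus are illustrative only.

from collections import Counter


class BigramLanguageModel:
    def __init__(self):
        self.bigrams = Counter()
        self.unigrams = Counter()

    def update(self, transcriptions):
        # Fold newly transcribed statements into the model counts.
        for text in transcriptions:
            tokens = ["<s>"] + text.lower().split() + ["</s>"]
            self.unigrams.update(tokens[:-1])
            self.bigrams.update(zip(tokens, tokens[1:]))

    def probability(self, previous, word):
        if self.unigrams[previous] == 0:
            return 0.0
        return self.bigrams[(previous, word)] / self.unigrams[previous]


model = BigramLanguageModel()
model.update(["I want to pay my bill", "I want to cancel my service"])
p = model.probability("want", "to")   # 1.0 in this toy corpus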


In some embodiments, the set of computer executable instructions might be configured to determine a meaning, that is, a significance, of a statement from the one or more customer statements.  In some such embodiments, one or more of the models in
the computer memory might comprise a semantic classification model, which might be used to determine the meaning of a statement from the one or more customer statements.  In some embodiments wherein the one or more models in the computer memory comprises
a semantic classification model, one or more of the model updates might comprise an update for the semantic classification model.
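
By way of illustration, a semantic classification model might be sketched as a naive-Bayes-style classifier that maps a customer statement to a meaning (for example, a call reason) and that can be retrained from accumulated labeled statements. The labels and training examples below are invented; the embodiments described do not prescribe this particular classifier.

from collections import Counter, defaultdict
import math


class SemanticClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter()

    def train(self, labeled_statements):
        for text, label in labeled_statements:
            self.label_counts[label] += 1
            self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()

        def score(label):
            total = sum(self.word_counts[label].values()) + 1
            log_p = math.log(self.label_counts[label])
            for w in words:
                log_p += math.log((self.word_counts[label][w] + 1) / total)
            return log_p

        return max(self.label_counts, key=score)


clf = SemanticClassifier()
clf.train([("my bill is too high", "billing"),
           ("my internet connection is down", "tech_support")])
meaning = clf.classify("question about my bill")   # "billing"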


In some embodiments, the one or more model updates might comprise one or more updated models.  In some embodiments, the set of computer executable instructions might be configured to automatically update one or more of the models using one or more of
the model updates in response to a trigger.  In some embodiments, that trigger might be a usage milestone.
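
The trigger conditions mentioned above (and recited in claim 1) might be checked as in the following sketch; the particular thresholds are illustrative assumptions.

import time


def update_triggered(interactions_processed, pending_updates, last_update_time,
                     usage_milestone=1000, schedule_seconds=24 * 3600, now=None):
    now = time.time() if now is None else now
    if interactions_processed >= usage_milestone:     # usage milestone reached
        return True
    if pending_updates:                               # one or more model updates created
        return True
    if now - last_update_time >= schedule_seconds:    # elapsed-time schedule
        return True
    return False


# e.g. update_triggered(1200, [], last_update_time=time.time()) returns True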


In some embodiments, the computer memory, either in addition to, or in alternative to, the contents described above, might comprise a set of potential call dispositions, an indication of the desirability of the call dispositions and/or a record
of a disposition for a customer interaction.
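
For illustration, the potential dispositions, their relative desirability, and the recorded disposition of an interaction might be held in memory roughly as follows (compare FIG. 6); the disposition names and scores are invented for the sketch.

DISPOSITION_DESIRABILITY = {
    "resolved_in_self_care": 1.00,
    "resolved_by_agent": 0.60,
    "transferred_to_wrong_agent_group": 0.20,
    "abandoned": 0.00,
}

interaction_record = {"interaction_id": "12345",
                      "disposition": "resolved_in_self_care"}

desirability = DISPOSITION_DESIRABILITY[interaction_record["disposition"]]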


Further, in some embodiments, the set of computer executable instructions might be further configured to coordinate processing of the customer interaction through implementation of one or more actions.


In addition to the above, some embodiments of the invention might comprise a computerized system for improving an interactive voice response system.  Such embodiments might comprise an interactive voice response system, a decision agent, and a
feedback system.  In such embodiments, the interactive voice response system might be activated by initiation of a customer interaction, and the decision agent might be configured with a set of computer executable instructions to determine a route
through the interactive voice response system.  In embodiments where the decision agent is configured with a set of computer executable instructions to determine a route through an interactive voice response system, the determination might be made based
on a set of information related to a customer interaction and a prediction model.  In some embodiments, upon completion of a customer interaction, the interactive voice response system might be configured to transfer the customer to a customer service
representative.  The customer service representative might then determine a customer satisfaction level, that is, a measure of the customer's thoughts and/or feelings, with the interactive voice response system.  That customer satisfaction level might be
reported to the feedback system, which might then use it to modify the prediction model used by the decision agent.
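
A hedged sketch of such a feedback step appears below: the satisfaction level reported by the customer service representative is used to shift the prediction model's expectation for the route that was taken. The additive update rule, the route names, and the 0-to-1 satisfaction scale are assumptions of the sketch, not features required by the embodiments described.

def report_satisfaction(prediction_model, route_taken, satisfaction, learning_rate=0.1):
    # prediction_model maps route name -> expected satisfaction in [0, 1];
    # nudge the entry for the route that was taken toward the reported value.
    current = prediction_model.get(route_taken, 0.5)
    prediction_model[route_taken] = current + learning_rate * (satisfaction - current)
    return prediction_model


model = {"billing_fast_path": 0.5, "full_troubleshooting": 0.5}
report_satisfaction(model, "billing_fast_path", satisfaction=0.9)
# model["billing_fast_path"] is now 0.54, influencing future routing decisions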


Some embodiments might comprise a computer readable medium having computer executable instructions for performing the method comprising the steps of: capturing a natural language statement made during an interaction with a self care system;
automatically creating a textual representation of the natural language statement using a computer program; associating the textual representation of the natural language statement and a set of context information corresponding to the natural language
statement; adding the textual representation of the natural language statement and the set of context information to a first data record; merging the first data record and a second data record to create a third data record, the second data record
comprising a set of information related to an interaction with an assisted care system; automatically generating a statistical language model based at least in part on the third data record; and updating the computer program using the statistical
language model.


Further, some embodiments might comprise a computer readable medium having computer executable instructions for performing the method comprising the steps of: capturing a natural language statement made during an interaction with a self care
system; automatically creating a textual representation of the natural language statement; automatically creating a classification corresponding to the natural language statement; adding the textual representation and the classification to a first data
record; appending the classification to a second data record, the second data record comprising a set of information related to an interaction with an assisted care system; merging the first data record and the second data record to create a third data
record; automatically generating a semantic model based at least in part on the third data record; and updating the computer program using said semantic model. 

BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 provides a high level illustration of some major components which might be present in some embodiments.


FIG. 2 provides an illustration of a conceptual architecture of an SCS which might be implemented in some embodiments.


FIG. 3 provides an illustration of a conceptual architecture of an Assisted Care System (ACS) which might be implemented in some embodiments.


FIG. 4 provides an illustration of a conceptual architecture of a feedback system which might be implemented in some embodiments.


FIG. 5 shows a combination of two data records which might take place in some embodiments.


FIG. 6 is a table which shows a format in which disposition information and relative desirability might be stored in some embodiments.


FIG. 7 is a table which shows potential data which might have been collected about the processing of 1,200 calls in some embodiments.


FIG. 8 is a table which shows potential data which might have been collected about the processing of 2,400 calls in some embodiments.


FIG. 9 is a table which shows potential data which might have been collected about the processing of 1,200 calls in some embodiments.


FIG. 10 is a diagram showing a potential method which could be used in some embodiments to update statistical language models.


FIG. 11 is a diagram showing a potential method which could be used in some embodiments to update semantic classification models.


FIG. 12A is a diagram showing a simplified design model which might be implemented in some embodiments.


FIG. 12B is a diagram showing a simplified design model which might be implemented in some embodiments.


DETAILED DESCRIPTION OF THE EMBODIMENTS


All embodiments described herein are intended to be illustrative and, therefore, non-limiting.


Some embodiments might include one or more of the major components depicted in FIG. 1.  In embodiments following FIG. 1, an interaction between a customer [100] and an SCS [101] might be initiated by a customer [100] contacting the SCS [101] over
a network [105], which might be any type of network capable of transmitting call audio information, such as the public switched telephone network (PSTN) or a broadband voice over IP (VoIP) network.  Once the call had been received, the SCS [101] might
play some prompt, such as "How may I help you?" which would be transferred back over the network [105] to the customer [100].  In determining how to respond to the customer's statement, in some embodiments, an SCS [101] might request data from some
external system, referred to in FIG. 1 as an enterprise system [104].  Additionally, in some embodiments of the present invention, the SCS [101] might have the capacity to transfer the customer call to an ACS [102].  Finally, in some embodiments, when
the customer call is terminated, a feedback system [103], which could be a set of software routines and their associated hardware designed to improve the performance of an SCS, might be able to update the SCS [101] so that the SCS [101] might be able to
better respond to customer statements in the future.


FIG. 2 expands on FIG. 1 and focuses particularly on a conceptual architecture for an SCS [101] which might be realized in some embodiments.  For the sake of easy understanding, numbering is consistent throughout all figures.  In the diagram of
FIG. 2, in addition to the boxes representing subcomponents of an SCS [101], there are also labeled arrows which describe types of data which might be passed between components in some embodiments.  It should be understood, however, that the diagram of
FIG. 2 is intended to be illustrative only of a particular type of conceptual architecture which might be implemented in an SCS [101], and therefore is not intended to be limiting of the components which might be present in all embodiments.


It should further be understood that, while certain data transfers are depicted using arrows in FIG. 2, those arrows do not necessarily depict all data transfers which might take place in an embodiment following the conceptual architecture of
FIG. 2.  For example, some embodiments following the architecture of FIG. 2 might have an SCS [101] which includes a set of software routines specialized for gathering data related to the processing of a customer call by an SCS [101] (hereinafter
collectively referred to as an Automated Session Agent (ASA) [208]).  In some embodiments, regardless of the fact that there are no arrows showing data being transmitted to an ASA [208], an ASA [208] might collect data related to the processing of a
customer call, perhaps by monitoring data transmissions between other components, perhaps by means of discrete data transmissions from other components, or perhaps by some other means, depending on the specifics of a particular embodiment.  Further, even
though FIG. 2 only shows an arrow from the ASA [208] leading to a Feedback System [103], in various embodiments an ASA [208] might communicate information to other components as well.  For example, in some embodiments, an ASA [208] might transfer context
information, that is, data related to the processing of a customer call by an SCS [101] which might include data related to the customer himself or herself, to other components of the SCS [101], either on request or automatically in response to certain
triggers.  Further, various embodiments might have different sets of information included in the context information maintained and transmitted by an ASA [208].  For example, in some embodiments, context information might include outputs from various other
components of the SCS [101], data received from an enterprise system [104], the duration of the customer interaction, and customer information such as the customer's name, emotional state, account information, lifetime value, previous interactions,
telephone number, etc. Other embodiments might maintain context information which contains only some of the above data, or might maintain context information which contains different data entirely.  Thus, while FIG. 2 is intended to provide an
illustrative example of a potential conceptual architecture which might be realized in some embodiments, it is not intended to be limiting on the scope of the invention.


In embodiments which follow FIG. 2, an SCS [101] might include a network interface [201] which might be configured to receive information, such as speech, from a network [105].  Once the network interface [201] receives information, it might
transfer any speech information to Automatic Speech Recognition software [202] (hereinafter, referred to either as ASR software, or simply as an ASR), which is computer software configured to translate audio input comprising a natural language statement
into an alternate format, such as text.  The ASR [202] might create a textual representation of the information transferred to it using a statistical model which calculates the probability that, given the context of an interaction, a phrase made up of
one or more words will be spoken by a customer.  For the sake of convenience, such models will be referred to throughout this application as Statistical Language Models (SLMs).  In some embodiments, the ASR [202] might additionally produce a confidence
level, which may be alphanumeric although other schemata are possible.  The confidence level may indicate the probability that each word in the textual representation accurately reflects the content of the call audio received by the network interface
[201].  Once the ASR [202] has finished processing, it might send data such as its textual representation of call audio, either with or without the associated confidence level, to a NLU unit (referred to throughout the application as either an NLU unit
or simply as an NLU) [203].  The NLU [203] might then attempt to classify the text provided by the ASR [202] into one of a set of predefined values, such as "Printer Problem," through the use of a semantic classification model (SCM), a technique which is
well known to one of ordinary skill in the art.  In some embodiments, in addition to classifying text provided by an ASR [202], an NLU [203] might also provide a confidence level, which might indicate the probability that the NLU's classification is
correct.


In some embodiments, once a classification has been made by an NLU [203], that classification, either with or without an associated confidence level, might be forwarded to a set of software routines specialized to coordinate the components of the
SCS [101] with each other, an ACS [102], the customer [100], the network [105], enterprise systems [104], and the feedback system [103].  For the sake of convenience, such specialized software routines will be referred to collectively as an Automated
Care Agent (ACA) [206].  In some embodiments, an ACA [206] might be implemented in a computer language such as C++, Java, or VXML.


In some embodiments, an ACA [206] might be designed as a set of actions, that is, computer executable commands or functions, and transitions, that is, triggers which lead to the execution of a new action.  As an example of such a design, consider
the diagram of FIG. 12A.  In an embodiment following FIG. 12A, when a call is initially received by an SCS [101], the ACA [206] might be programmed to execute the action of playing a prompt [1200].  After that action was complete, the ACA might
automatically transition to the next action of getting a statement from the customer [1201].  In such a case, the trigger for moving between the second and third actions is the completion of the second action.  After the ASR had transcribed the customer
statement, an ACA following the diagram of FIG. 12A might automatically transition to the action of sending the transcription to the NLU [1203].  Once the NLU was finished processing the statement, an ACA following the diagram of FIG. 12A might face a
choice: if the NLU understood the transcription, the ACA might order a service requested by the customer [1204]; if the NLU did not understand the transcription, the ACA might play a prompt asking the customer to repeat his or her statement [1205].
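

Purely as an illustration of the action/transition design just described, the following sketch shows one way such a loop might be written in Java, one of the languages mentioned above.  The class, method, and enumeration names are hypothetical, and the random stand-in for the NLU result is used only to make the sketch self contained and runnable:


    // Minimal sketch of an ACA action/transition loop, loosely following the flow of
    // FIG. 12A.  All names are hypothetical and not drawn from any actual implementation.
    public class AcaSketch {
        enum Action { PLAY_PROMPT, GET_STATEMENT, SEND_TO_NLU, ORDER_SERVICE, REPROMPT, DONE }

        public static void main(String[] args) {
            Action current = Action.PLAY_PROMPT;
            while (current != Action.DONE) {
                current = execute(current);
            }
        }

        // Each action returns the next action to execute; the return value acts as the
        // transition trigger (completion of one action leads to the next).
        static Action execute(Action action) {
            switch (action) {
                case PLAY_PROMPT:
                    System.out.println("How may I help you?");
                    return Action.GET_STATEMENT;
                case GET_STATEMENT:
                    System.out.println("(capture caller audio and send it to the ASR)");
                    return Action.SEND_TO_NLU;
                case SEND_TO_NLU: {
                    boolean understood = Math.random() > 0.5;  // stand-in for an NLU classification
                    return understood ? Action.ORDER_SERVICE : Action.REPROMPT;
                }
                case ORDER_SERVICE:
                    System.out.println("(order the service requested by the caller)");
                    return Action.DONE;
                case REPROMPT:
                    System.out.println("Could you please repeat that?");
                    return Action.GET_STATEMENT;
                default:
                    return Action.DONE;
            }
        }
    }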


It should be understood that the above example of actions and transitions based on FIG. 12A is a simplified overview designed to illustrate an ACA design which might be utilized in some embodiments.  It should further be understood that various
embodiments might differ from the simplified example based on FIG. 12A in any number of particulars.  For example, an embodiment might include a greater number of actions, such as requesting that an Enterprise System provide an account balance or adjust
a bill; or redirecting a customer interaction, perhaps by routing the customer call to an ACS.  Various embodiments might also include a greater number of transitions, or even be designed in such a way that certain actions are configured to transition
back to themselves, that is, to loop, as can be seen in the diagram of FIG. 12B.  Further, in some embodiments, there might be both actions which are composite, that is, composed of other actions, and actions which are atomic, that is, not composed of
other actions.  For example, some embodiments might include a composite action of "get customer input," which might be composed of the constituent atomic actions of "get audio data from the network," and "send audio data to the ASR." Additionally, some
embodiments might include actions which are synchronous, that is, which prevent other actions from being taken while they are being processed, and actions which are asynchronous, that is, which can be processed concurrently with other actions.  Further,
some embodiments might include an ACA [206] which is based on a different model entirely.  For example, in some embodiments in which an ACA [206] is utilized in an IVR System, rather than being implemented as a set of actions and transitions, the ACA
might be implemented as a set of algorithms which invokes, and controls the inputs to and outputs from, other components of the SCS [101].  For the sake of convenience, such an embodiment will be referred to as having an ACA which determines a route
through the IVR, though it should be understood that the discussion throughout the application which refers to ACAs using actions is equally applicable to ACAs which determine a route through an IVR.


Returning to the diagram of FIG. 2, in addition to the components described above, some embodiments might include software routines specialized to assist an ACA [206] in making decisions between options, which are methods of processing a customer
call or coordinating components of an SCS and other systems, such as actions or routes through an IVR.  For the sake of convenience, in embodiments in which they are present, such specialized software routines will be referred to as a Decisioning Agent
(DA) [207].  In some embodiments, an ACA [206] might request a recommendation from a DA [207] whenever the ACA [206] needed to transition from one action to another, or whenever the SCS [101] received additional data, either from the customer [100], an
enterprise system [104], or from some other source.  Alternatively, in some embodiments, an ACA [206] might only request a recommendation from a DA [207] in certain situations.  For example, some embodiments might be configured such that some transitions
are controlled by well defined rules, while other transitions are uncertain, and require a recommendation from the DA [207].  An example of such an embodiment might be one where the ACA [206] is configured to automatically respond if the NLU [203]
provides an understanding of a customer statement with a confidence level greater than a set threshold (say, 95% certainty), while the ACA [206] might request a recommendation from the DA [207] if the confidence level provided by the NLU [203] is below
the threshold.
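

As a minimal sketch of such a threshold-based arrangement, assuming a hypothetical 95% threshold, a hypothetical sample confidence value, and hypothetical names throughout, an ACA might defer to the DA as follows:


    // Sketch of threshold-based deferral to a Decisioning Agent.  The threshold value,
    // the sample confidence, and all names are hypothetical.
    public class ThresholdRoutingSketch {
        static final double NLU_CONFIDENCE_THRESHOLD = 0.95;

        public static void main(String[] args) {
            double nluConfidence = 0.91;                 // e.g., confidence reported by the NLU
            String classification = "Printer Problem";   // e.g., classification reported by the NLU
            if (nluConfidence >= NLU_CONFIDENCE_THRESHOLD) {
                System.out.println("ACA acts directly on classification: " + classification);
            } else {
                System.out.println("ACA requests a recommendation from the DA");
            }
        }
    }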


In embodiments in which a DA [207] is present, the DA [207] might use various methods for determining what action from a set of actions to recommend.  For example, in some embodiments, a DA [207] might make decisions as to what action to take, or
route to recommend, using predictive modeling which would indicate the desirability of likely outcomes of various actions or routes.  In some embodiments in which a DA [207] utilizes predictive modeling when making recommendations, the DA [207] might be
configured with information regarding potential dispositions, or outcomes, of a customer interaction.  Such dispositions might include call outcomes such as the customer abandoned the call, the customer purchased a service, or the customer had his or her
question answered after the call had gone on for a certain length of time.  The DA [207] might also be configured with information related to the desirability of certain dispositions, and it might further be configured to dynamically keep track of
certain information, such as the length of the phone call (also known as call hold time), in order to help measure system performance or determine the likelihood of certain dispositions.  In some embodiments in which a DA [207] includes information about
dispositions and their desirability, and uses predictive modeling to make recommendations regarding actions, the predictive models used by the DA [207] might be based on conditional probabilities.  For example, in order to determine whether to take a
specific action, in some embodiments, a DA [207] might assign an expectation score to each disposition by multiplying the conditional probability that the given action would result in that disposition occurring, given information such as the context
information gathered by the ASA [208], information maintained by the DA [207], and other information relevant to the customer interaction, by the desirability associated with that disposition.  The DA [207] might then create a similar expectation score
for each disposition, and might combine those scores into a final score, called an alignment, for an action.  The DA [207] might create a similar alignment for each action, and recommend the action whose associated alignment was greatest.  Such a
calculation, which might be undertaken in some embodiments, could be represented as follows:


Alignment(Action) ≡ Σ over all Dispositions of P(Disposition|Action, InteractionContext) × GoalValue(Disposition)  (Function 1) where P(Disposition|Action, InteractionContext) is the conditional probability that Disposition will result from Action given InteractionContext, GoalValue(Disposition) is a numeric value indicating the desirability of Disposition, and Alignment is the final score assigned to a particular action.
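

The following sketch illustrates how Function 1 might be evaluated in Java for a small set of candidate actions.  The dispositions, probabilities, and goal values shown are hypothetical placeholders rather than values from any particular embodiment:


    import java.util.Map;

    // Sketch of the alignment calculation of Function 1: for each candidate action, sum
    // P(disposition | action, context) multiplied by GoalValue(disposition), then recommend
    // the action with the highest total.  All values below are hypothetical placeholders.
    public class AlignmentSketch {
        public static void main(String[] args) {
            Map<String, Double> goalValues = Map.of(
                    "CorrectTransfer_Satisfied", 10.0,
                    "CorrectTransfer_Unsatisfied", 5.0,
                    "Abandoned", -10.0);

            Map<String, Map<String, Double>> pDispositionGivenAction = Map.of(
                    "Transfer", Map.of("CorrectTransfer_Satisfied", 0.70,
                                       "CorrectTransfer_Unsatisfied", 0.25,
                                       "Abandoned", 0.05),
                    "Prompt",   Map.of("CorrectTransfer_Satisfied", 0.60,
                                       "CorrectTransfer_Unsatisfied", 0.20,
                                       "Abandoned", 0.20));

            String recommended = null;
            double best = Double.NEGATIVE_INFINITY;
            for (var action : pDispositionGivenAction.entrySet()) {
                double alignment = 0.0;
                for (var p : action.getValue().entrySet()) {
                    alignment += p.getValue() * goalValues.get(p.getKey());
                }
                System.out.printf("Alignment(%s) = %.4f%n", action.getKey(), alignment);
                if (alignment > best) {
                    best = alignment;
                    recommended = action.getKey();
                }
            }
            System.out.println("Recommended action: " + recommended);
        }
    }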


In addition to having various methods of determining what actions from a set of actions to recommend, different embodiments might also have different methods of determining a set of actions that the DA [207] evaluates.  For example, in some
embodiments, when an ACA [206] makes a request of a DA [207], the ACA [206] might send the DA [207] a list of actions for the DA [207] to choose from.  Alternatively, some embodiments might be implemented such that the DA [207] determines a set of
actions to choose between based on context information about the customer call.  In some embodiments, there might be some situations where an ACA [206] sends a list of actions to a DA [207], while in other situations the DA [207] is required to
independently determine potential actions to judge.


Various implementations might also differ from one another in the sources and nature of the information used by the DA [207].  For example, in addition to the factors set forth above, some embodiments might be designed to allow a DA to take into account
information such as a measurement of the caller's emotional state, the number of times a caller has been asked for information to clarify an earlier response (referred to as reprompts), or various utterance keywords, that is, words with particular
significance, detected in a customer statement.  In some embodiments, this might entail making alterations in other components of an SCS [101] in order to facilitate processing by a DA [207].  For example, in some embodiments, an ASA [208] might be
specially configured to be able to measure the caller's emotional state, and to communicate that information to the DA [207] as a portion of the call's context information.  It should also be understood that additional methods of making determinations by
a DA [207] will be apparent to one of ordinary skill in the art, and that the use of such additional methods is not inconsistent with the scope and spirit of the invention.  It should be understood that the above description focused on a DA [207] which
recommends an action which might be taken by an ACA [206], but that a DA [207] could use the same techniques (i.e., predictive modeling, etc .  . . ) to recommend routes through an IVR, or for other methods of making determinations to be carried out by
the ACA [206].


Once the determination of the correct action had been made, either by the DA [207], the ACA [206], or some other component, the ACA [206] might implement that action, perhaps by sending a text string representing a prompt to be played to a Text
To Speech (TTS) system [204].  Depending on the action implemented, the customer [100] might make another statement which the SCS [101] could analyze, repeating the above described process until either the call reached disposition, or the SCS [101]
transferred the call, either to an ACS [102] or to some other system.


In some embodiments, once the SCS [101] completes its processing of the call, either because the call is transferred or for some other reason, a component of the SCS [101], for example, the ASA [208] might assemble a set of information related to
the customer's completed interaction with the SCS [101].  For the sake of convenience, in embodiments where such interaction records are created, they will be referred to as Automated Interactive Session Detail Records (AISDRs).  In some embodiments, an
AISDR might include information describing the session, information describing the interaction with the customer [100], or both.  In embodiments in which an AISDR includes information describing a session, that information might include one or more of: a
session identifier; the session type, which might be "automatic" for a session conducted by an SCS [101]; the date and time of the session's start; the date and time of the session's conclusion; the session's duration; and/or the session's disposition,
which in some embodiments might be automatically determined by the ASA [208].  In embodiments in which an AISDR includes information describing interaction with the customer, that information might include one or more of: actions performed by an ACA
[206]; inputs from the customer, such as speech or signals generated by buttons on a touch tone phone; the data produced by the ASR [202] and/or NLU [203]; data sent from enterprise systems [104] through the data interface [205]; and/or customer
information such as name, account number, customer lifetime value, the customer's previous contact history, the customer's emotional state, the customer's telephone number, and other similar information.
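

As one hypothetical illustration of how such an AISDR might be represented in Java, consider the sketch below.  The field names are illustrative only, and an actual embodiment might carry more, fewer, or different fields:


    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;

    // Sketch of an AISDR represented as a Java record.  All names and sample values
    // are hypothetical.
    public record AisdrSketch(String sessionId,
                              String sessionType,           // e.g., "automatic" for an SCS session
                              Instant sessionStart,
                              Instant sessionEnd,
                              String disposition,
                              List<String> actionsPerformed,
                              List<String> asrTranscriptions,
                              List<String> nluClassifications,
                              String customerName,
                              String customerPhoneNumber) {

        public Duration sessionDuration() {
            return Duration.between(sessionStart, sessionEnd);
        }

        public static void main(String[] args) {
            AisdrSketch example = new AisdrSketch("S-1001", "automatic",
                    Instant.now(), Instant.now().plusSeconds(90),
                    "Transferred", List.of("PlayPrompt", "GetStatement", "Transfer"),
                    List.of("my printer is broken"), List.of("Printer Problem"),
                    "Pat Caller", "555-0100");
            System.out.println(example.sessionId() + " lasted "
                    + example.sessionDuration().toSeconds() + " seconds");
        }
    }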


It should be understood that the above discussion of an SCS [101] as depicted in FIG. 2 is not intended as an exhaustive discussion of all possible embodiments, and numerous modifications and/or additions to the above discussion and diagram of
FIG. 2 will be immediately apparent to one of skill in the art.  For example, in some embodiments, an ASA [208] might store, or even collect, information in addition to the data transmitted between components shown in FIG. 2.  Thus, in some embodiments,
an ASA [208] might respond to a call arriving at an SCS [101] by collecting data about the customer [100], perhaps including the customer's name, a customer ID, the customer's telephone number, or other information, some of which might be stored on
enterprise systems [104] outside the SCS [101], which the ASA [208] might access through means of an external data interface [205].  Additionally, in various embodiments, the ACA [206], the DA [207], or other components of the SCS [101] might contact
enterprise systems [104] though a data interface [205], and/or the ASA [208], as needed in order to obtain information for decision-making purposes.  Additionally, in some embodiments, the functionality of various components of the SCS [101] might be
combined.  For example, in some embodiments, the functionality of the ACA [206] and the DA [207] might be combined into a single component which coordinated all interaction within the SCS [101] and was capable of making decisions as to actions to take or
routes to follow.  Thus, the above discussion is intended to be illustrative only, and not to be limiting.


Returning to the high level architecture of FIG. 1, in some embodiments, an SCS [101] might be configured to be able to transfer a call to an ACS [102].  In some embodiments, when a call is transferred from an SCS [101] to an ACS [102], some or
all of the information contained in the AISDR, or the AISDR itself, might be transferred to the ACS [102] as well.  To illustrate how a customer interaction might be processed in an ACS [102], either with or without an associated AISDR, consider the
diagram of FIG. 3, which presents a potential conceptual architecture for an ACS [102].


In some embodiments which follow FIG. 3, when a call arrives at an ACS [102], an automated session popup [301] or other display might appear on a computer terminal being monitored by a human agent, such as a customer service representative.  For
the sake of convenience, the computer hardware and software which an individual agent uses to process a customer call will be referred to collectively as an agent terminal.  In some embodiments, the automated session popup [301] might contain a summary
of the automated session which took place at the SCS [101].  In some embodiments, a summary provided by the automated session popup [301] might contain information such as a transcription of the customer's speech, provided by the ASR component of the SCS
[101], the reason for the call as determined by the NLU unit in the SCS [101], and/or information about the customer, which might be culled from an AISDR provided when the call was transferred from the SCS [101] to the ACS [102].  Regardless of the
contents or existence of a summary provided by the automated session popup [301], the ACS [102] would process the customer through a human agent, who would interact with the customer through an assisted care user interface [302].


In some embodiments, while the interaction between the human agent and the customer was taking place, software routines similar to the ASA [208] from FIG. 2, which will be referred to collectively as a live session agent (LSA) [303], might gather
information summarizing the processing that takes place in the ACS [102].  In some embodiments, that information might be compiled into a Live Interactive Session Detail Record (LISDR) which might contain information describing the session and/or
information describing the disposition of the customer call.  In embodiments where an LISDR contains information describing the session, that information might include one or more of: a session identifier; a type identifier, which might be "live" in some
embodiments for sessions processed by an ACS [102]; a short indication of the disposition of the call (which might be determined either by the agent processing the customer call, or by the LSA automatically); and/or a listing of actions performed while
the call was being processed by the ACS [102].  In some embodiments where an LISDR contains information describing the disposition of the customer call, that information might include one or more of: an indication of the level of customer satisfaction
with the call; and/or information as to whether or not the call was transferred correctly by the SCS [101].  In some embodiments, an LISDR might also include an indication of the agent's determination of how satisfied the customer was with the
interaction, referred to as a customer satisfaction level.  In some embodiments in which both an LISDR and an AISDR are compiled, and both the LISDR and AISDR include a session identification, the session identification in both the LISDR and AISDR might
be coordinated, and might later be used to correlate an ACS session described in an LISDR with the appropriate SCS session described in an AISDR.


In some embodiments, even after a call has reached disposition, further processing might take place, which might have the effect of improving the performance of an SCS.  To discuss one type of processing which might take place after a call had
reached disposition, consider the diagram of FIG. 4, which illustrates a potential architecture for a feedback system which might be implemented in some embodiments.


In some embodiments following the architecture of FIG. 4, once a call has reached disposition, an SCS [101] and/or an ACS [102] might transfer information describing their interactions with the customer to the feedback system [103].  In some
embodiments, that information might be contained in an LISDR, an AISDR or both.  In some embodiments, the feedback system [103] might include one or more software routines specialized to assemble and store data related to call processing undertaken by an
SCS [101] and/or an ACS [102].  For the sake of convenience, such software routines will be referred to collectively as an interactive session repository (ISR) [403].  In embodiments containing an ISR [403], the ISR [403] might be designed such that,
when a call was processed by both an SCS [101] which produces an AISDR and an ACS [102] which produces an LISDR, the ISR [403] might use the session identifications on the LISDR and AISDR to combine the two data records into a Closed Loop Interaction
Data Record (CLIDR), as shown in FIG. 5.  In some embodiments, after correlating the data provided by the SCS [101] and ACS [102], the ISR [403] might store the correlated data for later use, perhaps by other components of the feedback system [103].
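

The following sketch illustrates one way an ISR might perform such a merge, keyed on a shared session identifier.  Record contents are represented as simple maps for brevity, and all names and sample values are hypothetical:


    import java.util.HashMap;
    import java.util.Map;

    // Sketch of an ISR-style merge: an AISDR and an LISDR that share a session identifier
    // are combined into a single closed loop record (CLIDR).
    public class IsrMergeSketch {
        public static void main(String[] args) {
            Map<String, Map<String, String>> aisdrsBySession = new HashMap<>();
            Map<String, Map<String, String>> lisdrsBySession = new HashMap<>();

            aisdrsBySession.put("S-1001", Map.of("type", "automatic", "nluClass", "Printer Problem"));
            lisdrsBySession.put("S-1001", Map.of("agentClass", "Toner Replacement",
                                                 "satisfaction", "satisfied"));

            Map<String, Map<String, String>> clidrsBySession = new HashMap<>();
            for (var aisdr : aisdrsBySession.entrySet()) {
                Map<String, String> lisdr = lisdrsBySession.get(aisdr.getKey());
                if (lisdr != null) {
                    Map<String, String> clidr = new HashMap<>(aisdr.getValue());
                    clidr.putAll(lisdr);   // add the live record's fields to the automated record's fields
                    clidrsBySession.put(aisdr.getKey(), clidr);
                }
            }
            System.out.println("CLIDR for S-1001: " + clidrsBySession.get("S-1001"));
        }
    }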


One potential use for the data stored in the ISR [403] is the updating and/or generation of the models used in the SCS, which might be done by a set of specialized software routines which perform statistical correlations and calculations, which, for
the sake of convenience, will be referred to collectively as a model updater (MU) [404].  For the sake of convenience, the data created by such a MU [404] for the purpose of updating a model used by a component of an SCS [101] will be referred to as a
model update.  In some embodiments, the MU [404] might generate new models or model updates by organizing the data collected by the ISR [403] into different groups based on context information, then organizing each of those context groups according to
recommendations which might be made by a DA, and then organizing each of those recommendation groups based on call disposition, and finally using that organization of raw data to determine a conditional probability of a particular recommendation
resulting in a certain disposition given a call context.
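

As a minimal sketch of that grouping and counting process, assuming hypothetical CLIDR rows and hypothetical names, an MU might estimate the conditional probabilities as relative frequencies:


    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of a model update calculation: group hypothetical CLIDR rows by context and
    // recommendation, count the dispositions in each group, and convert the counts into
    // conditional probabilities of the form P(disposition | recommendation, context).
    public class ModelUpdaterSketch {
        record ClidrRow(String context, String recommendation, String disposition) {}

        public static void main(String[] args) {
            List<ClidrRow> rows = List.of(
                    new ClidrRow("N=1", "Transfer", "CorrectTransfer_Satisfied"),
                    new ClidrRow("N=1", "Transfer", "Abandoned"),
                    new ClidrRow("N=1", "Prompt", "CorrectTransfer_Satisfied"),
                    new ClidrRow("N=1", "Prompt", "CorrectTransfer_Satisfied"));

            Map<String, Map<String, Integer>> counts = new HashMap<>();
            for (ClidrRow row : rows) {
                String group = row.recommendation() + ", " + row.context();
                counts.computeIfAbsent(group, k -> new HashMap<>())
                      .merge(row.disposition(), 1, Integer::sum);
            }
            for (var group : counts.entrySet()) {
                int total = group.getValue().values().stream().mapToInt(Integer::intValue).sum();
                for (var disposition : group.getValue().entrySet()) {
                    System.out.printf("P(%s | %s) = %.2f%n", disposition.getKey(), group.getKey(),
                            (double) disposition.getValue() / total);
                }
            }
        }
    }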


Of course, it should be understood that, while an MU [404] might employ different algorithms for generating model updates and for generating new models, in some embodiments, the model updates generated by an MU [404] might be new models based on
models already being used by a component in an SCS [101].  In such an embodiment, the model updates might improve the performance of the SCS [101] by simply replacing the models currently being used by the SCS [101], rather than by merely modifying them. 
Thus, the above linguistic distinction between model updates and the generation of new models should be understood as one made for the sake of convenience only, and not read to imply that model updates must necessarily be distinct from new models.


Additionally, in some embodiments, data stored in the ISR [403] might be used to update/generate semantic classification models which can be used in an NLU within an SCS [101].  For example, consider an embodiment which includes a CLIDR stored in
an ISR [403] comprising a semantic classification made by an NLU within an SCS [101], a textual rendition of the caller's statement provided by an ASR within an SCS [101], and a correct semantic classification provided by a human after the call had been
transferred to an ACS [102].  In such an embodiment, if the human within the ACS [102] and the NLU within the SCS [101] had come to different conclusions about the correct classification of the caller's statements, the semantic probability models used by
the NLU within the SCS [101] could be updated by correlating the text of the caller's statements with the correct classification as provided by the human within the ACS [102].  In this way, some embodiments might provide for adjusting semantic
classification models used by an NLU within an SCS [101] in an automatic, closed loop manner.


Additionally, in some embodiments, data stored in the ISR [403] might be used to update/generate statistical language models which can be used in an ASR within an SCS [101].  For example, consider the embodiment of the previous example, which
included a CLIDR stored in an ISR [403] comprising a semantic classification made by an NLU within an SCS [101], a textual rendition of the caller's statement provided by an ASR within an SCS [101], and a correct semantic classification provided by a
human after the call had been transferred to an ACS [102].  In such an embodiment, in addition to updating the semantic classification models within the NLU, an SLM utilized by an ASR could be updated by analyzing the words and phrases which make up
textual renditions of caller statements and modifying the SLM based on the measurements of the words and phrases in the CLIDR.  In this way, some embodiments might provide for adjusting SLMs utilized by ASRs within an SCS [101] in an automatic, closed
loop manner.


In some embodiments, when an MU [404] updates a model, that model might be stored by a set of specialized software routines which archive probability models.  For the sake of convenience, such specialized software routines will be referred to
collectively as a model repository (MR) [405].  In some embodiments, an MR [405] might contain an archive of past models, as well as any new models and/or model updates provided by an MU [404].


In some embodiments, when an MU [404] creates a model update, that update might be communicated to an SCS [101] and might be incorporated into the SCS [101] immediately as it is created (i.e., in real time).  Alternatively, in some embodiments,
instead of communicating updates to an SCS [101] in real time, an MU [404] might communicate with an SCS [101] only in response to certain triggers, that is, in response to the fulfillment of a set of conditions.  In some embodiments, such triggers might
be time based, that is, a feedback system [103] might be configured to provide scheduled updates to an SCS [101].  In some embodiments, triggers for the MU [404] might be based on usage milestones, e.g., an MU [404] might send an update to an SCS [101]
if ten new CLIDRs are created within a two minute period.  In some embodiments, multiple types of triggers might be present, for example, a feedback system [103] might be configured to provide scheduled updates to an SCS [101], and to provide unscheduled
updates if certain usage milestones are met.
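

The following sketch illustrates one way the usage milestone trigger described above (ten new CLIDRs created within a two minute period) might be checked.  The class and method names are hypothetical:


    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of a usage milestone trigger: signal that a model update should be sent to the
    // SCS if ten new CLIDRs are created within a two minute period.
    public class UpdateTriggerSketch {
        private static final int MILESTONE_COUNT = 10;
        private static final Duration WINDOW = Duration.ofMinutes(2);
        private final Deque<Instant> recentClidrs = new ArrayDeque<>();

        boolean onNewClidr(Instant createdAt) {
            recentClidrs.addLast(createdAt);
            // Discard CLIDRs that fall outside the two minute window.
            while (!recentClidrs.isEmpty()
                    && Duration.between(recentClidrs.peekFirst(), createdAt).compareTo(WINDOW) > 0) {
                recentClidrs.removeFirst();
            }
            return recentClidrs.size() >= MILESTONE_COUNT;   // true means "send an update to the SCS"
        }

        public static void main(String[] args) {
            UpdateTriggerSketch trigger = new UpdateTriggerSketch();
            Instant start = Instant.now();
            for (int i = 0; i < 12; i++) {
                boolean fire = trigger.onNewClidr(start.plusSeconds(i * 5L));
                System.out.println("CLIDR " + (i + 1) + " -> trigger update? " + fire);
            }
        }
    }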


In some embodiments, in addition to components to update models used in an SCS, a feedback system [103] might also include data related to the desirability of certain call dispositions, and/or means of creating and/or modifying that data.  In
some embodiments, there might be software routines specialized for creating and/or modifying data related to the desirability of certain dispositions, which will be referred to collectively, along with any associated hardware, as a Goal Updater (GU)
[401].  In some embodiments, the GU [401] might allow a human agent, referred to as a Goal Administrator (GA) [406] to specify certain goals, such as "Minimize Call Hold Time," and then assign values, which might be alphanumeric, to dispositions
indicating their desirability vis-a-vis the defined goals.  For example, for the goal "Minimize Call Hold Time," the goal administrator might assign a goal value of 10 for a call which lasts less than one minute, a goal value of 5 for calls which last
between one and two minutes, and a goal value of -10 for calls lasting longer than two minutes.  In some embodiments, rather than having goals and dispositions, a feedback system might have only dispositions, which might be defined by the GA [406].  In
some such embodiments, the dispositions and their corresponding values indicating desirability might be stored using a table representation, as shown in FIG. 6.
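

As a small illustration of the "Minimize Call Hold Time" example above, a goal value assignment along those lines might be expressed as follows.  The class name and method signature are hypothetical:


    // Sketch of a goal value assignment mirroring the "Minimize Call Hold Time" example
    // above: 10 for calls under one minute, 5 for calls of one to two minutes, and -10 for
    // calls over two minutes.
    public class GoalValueSketch {
        static double holdTimeGoalValue(double holdTimeMinutes) {
            if (holdTimeMinutes < 1.0) {
                return 10.0;
            }
            if (holdTimeMinutes <= 2.0) {
                return 5.0;
            }
            return -10.0;
        }

        public static void main(String[] args) {
            for (double minutes : new double[] {0.5, 1.5, 3.0}) {
                System.out.printf("Hold time %.1f min -> goal value %.1f%n",
                        minutes, holdTimeGoalValue(minutes));
            }
        }
    }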


In some embodiments, when a GA [406] has utilized a GU [401] to create and/or modify goals, dispositions or their associated preference values, the updated information might be stored by a set of software routines specialized to store
dispositions, goals and preference values.  For the sake of convenience, in embodiments where such routines are present, they will be referred to collectively as a goal repository (GR) [402].  In some embodiments, in addition to storing updated
information provided by a GU [401] a GR [402] might also archive past goals, dispositions, and numeric preference values.


In some embodiments, a human administrator such as the GA [406] might have responsibilities beyond those related to updating and modifying goals, dispositions and numeric preference values.  For example, in some embodiments where model updates
created by an MU [404] are provided by a feedback system [103] to an SCS [101] in response to triggers, the update triggers might be defined by a GA [406].  Indeed, in some embodiments, a GA [406] might be responsible only for defining triggers, and not
be responsible for updating and/or creating goals, dispositions and preference values.


The foregoing should be considered as illustrative only of certain embodiments, and numerous modifications will be readily apparent to one of ordinary skill in the art.  For example, some embodiments might have multiple GAs with each GA
having responsibility for a specialized task.  Further, modifications to the conceptual architectures discussed above will be readily apparent to one of skill in the art.  For instance, different embodiments might have diverged from the discussion above
in the specifics of the types of data communicated between components, or the paths which data travels, both within and between components of an embodiment.  For example, in some embodiments, information related to dispositions and their desirability
might be stored in the DA [207] in addition to, or as an alternative to, being stored in a GR [402].  It should be understood that various data storage and redundancy schemes will be immediately apparent to one of skill in the art and are too numerous to
catalog with any thoroughness.  Thus, the foregoing should be considered illustrative only, and should not be treated as an exhaustive listing of the embodiments of the invention.


In order to examine the processes of utilizing prediction models, and of creating new prediction models, SLMs, and semantic classification models which might take place in some embodiments, it will be helpful to consider a simple embodiment which
has certain characteristics.  Please note that this embodiment is provided primarily for illustrative purposes and should not be considered limiting on other embodiments of the invention.  First, the simple embodiment should utilize a relatively
simple probability model, so that the use of that model and the updating processes can be easily followed, and expressed on paper.  Second, the simple embodiment should store only minimal context information, which will allow updates based on that
context information to be expressed simply.  Third, the simple embodiment should be concerned with only a simple task, so that a representative discussion of call flows does not mushroom beyond the bounds of being easily expressed.  Fourth, the simple
embodiment should only recognize a few disposition states, so that the process of determining the most desirable and probable disposition state for a given action does not become impractical to set down on paper.  One such simple embodiment might
comprise an SCS which provides a natural language based call routing function, discussed below.


In embodiments in which an SCS provides a natural language-based call routing function, when a caller is connected to the SCS, the caller might be greeted with a voice prompt, such as "How may I help you?" The caller then might make a statement
that describes the reason for his or her call.  An SCS might then determine whether to transfer the call to an ACS, or to play another prompt for the caller.  If the SCS transfers the call to an ACS, the SCS might also determine to which ACS, or agent
group, the call should be transferred, depending on the embodiment.  In an embodiment where an SCS provides natural language based call routing, the SCS's interaction with the call ends either when the SCS transfers the call, or the call is abandoned. 
Thus, in a simple embodiment in which an SCS provides a natural language based routing function, there might be only two possible actions: to transfer the caller or to play a prompt, and there might be only five dispositions: the call might be abandoned; the call might be correctly transferred with the caller satisfied; the call might be correctly transferred with the caller unsatisfied; the call might be incorrectly transferred with the caller satisfied; or the call might be incorrectly transferred with the caller unsatisfied.  Further, the preference values assigned to those disposition states might be set as shown in the table of FIG. 6.


In such a simple embodiment, where there are only two possible actions and five possible dispositions as set forth in FIG. 6, a DA might determine which action to take by calculating the alignment of each of the two actions, and then recommending
the action with the highest alignment.  This might be done by applying Function 1 to the available actions, and making the simplifying assumption that the probability of a compound disposition, such as a caller being properly transferred and satisfied,
can be obtained by multiplying the probability that a caller will be satisfied given that he or she was properly transferred by the probability that the caller was properly transferred given the confidence level provided by the NLU.  Thus, some
embodiments might make the determination of whether to transfer or prompt a caller by evaluating the following equations:


Alignment(Transfer) = P(S|C,N) × P(C|CL) × GoalValue(C,S) + P(U|C,N) × P(C|CL) × GoalValue(C,U) + P(S|I,N) × P(I|CL) × GoalValue(I,S) + P(U|I,N) × P(I|CL) × GoalValue(I,U)  (Equation 1)

Alignment(Prompt) = P(Abandoned|Prompt,N) × GoalValue(Abandoned) + P(C,S|Prompt,N) × GoalValue(C,S) + P(C,U|Prompt,N) × GoalValue(C,U) + P(I,S|Prompt,N) × GoalValue(I,S) + P(I,U|Prompt,N) × GoalValue(I,U)  (Equation 2)

where N is the number of prompts which have been played for the caller, CL is the confidence level assigned to a statement classification by the NLU, S is the result of the caller being satisfied, C is the result of the caller being correctly transferred, U is the result of the caller being unsatisfied, and I is the result of the caller being incorrectly transferred.  Using the above equations, and the simplifying assumption that the probability that a caller was correctly transferred given the confidence level provided by the NLU is equal to the confidence level provided by the NLU (that is, P(C|CL) = CL and P(I|CL) = 1 - CL), the data from FIG. 7 can be substituted into equations 1 and 2 to give the following results:


Alignment(Transfer) = 7.5000 and Alignment(Prompt) = 7.1195.  Based on those results, the DA would recommend the
action of transferring the caller, because the alignment score for transferring the caller (7.5000) was greater than the alignment score for prompting the caller (7.1195).  This demonstrates how some embodiments might use predictive modeling to determine
an appropriate action.


In order to understand how predictive models might be updated, assume that an embodiment following the above discussion has processed an additional 1,200 calls and that the data for the original 1,200 calls from FIG. 7, and the subsequent 1,200
calls is as shown in the table of FIG. 8.  In order to update the statistical models used in a DA, an MU might recalculate the values of the probabilities that various dispositions would result from particular actions being taken in different contexts. 
The MU might further communicate those updated probabilities to a DA for use when processing customer calls.  Thus, in some embodiments, the MU might have provided a DA with new values for the numeric substitutions, so that, based on the substitutions into equations 1 and 2, the following calculations would occur:


Alignment(Prompt) = 7.5335 and Alignment(Transfer) = 7.5000.


The DA would then instruct the ACA to play an additional prompt for the caller, since the alignment for prompting the caller (7.5335) is greater than the alignment for transferring the caller (7.5000).  This example shows how a simple
embodiment might update the predictive models of the DA based on collected call data in a closed loop fashion without intervention from a system administrator.


While the above examples used the simplifying assumption that the probability of a caller being correctly transferred given a confidence level provided by an NLU is equal to the confidence level provided by the NLU, it is also possible to relax
that assumption and calculate the probability of a caller being correctly transferred given a confidence level provided by an NLU based on historical data.  In some embodiments, such a calculation might be based on the data of the table depicted in FIG.
9.  In that table, the numbers in the CL column represent ranges of NLU Confidence Levels.  For example, the first row (CL=(0.99,1.00]) applies for N=1, and NLU confidence levels that are greater than 0.99 but less than or equal to 1.00, while the second
row (CL=(0.98,0.99]) applies for N=1 and NLU confidence levels that are greater than 0.98 but less than or equal to 0.99.  Thus, the table of FIG. 9 expands on the table of FIG. 7 because the table of FIG. 9 includes 100 rows (one for each range of confidence level)
for each number of prompts played, while the table of FIG. 7 only had one row per number of prompts.
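

As a minimal sketch of how a confidence level might be mapped to one of those 100 ranges, assuming the half-open convention described above and hypothetical names, consider the following:


    // Sketch of binning an NLU confidence level into one of 100 ranges of width 0.01, using
    // the half-open convention described above (for example, 0.9831 falls in (0.98, 0.99]).
    // The probability table itself is omitted; only the binning step is shown.
    public class ConfidenceBinSketch {
        // Returns the upper bound of the range containing the confidence level,
        // e.g., 0.99 for the range (0.98, 0.99].
        static double binUpperBound(double confidenceLevel) {
            return Math.ceil(confidenceLevel * 100.0) / 100.0;
        }

        public static void main(String[] args) {
            double cl = 0.9831;
            double upper = binUpperBound(cl);
            System.out.printf("CL = %.4f falls in the range (%.2f, %.2f]%n", cl, upper - 0.01, upper);
        }
    }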


To demonstrate the use of the data from the table of FIG. 9, consider a scenario in which, after a customer responds to an initial prompt, an NLU provides a classification with a confidence level of 0.9831.  In some embodiments, this might result
in an ACA requesting that a DA decide whether to transfer the caller or play another prompt.  The DA might then consult predictive models based on the data from the table of FIG. 9 and make the following calculations:


Alignment(Transfer) = 10.2130 and Alignment(Prompt) = 7.1195.  Thus, the DA would instruct the ACA to take the action of transferring the caller, since the
alignment score for that action (10.2130) is higher than the alignment score for the action of playing an additional prompt (7.1195).


It should be understood that, while the above examples presented the use of simplifying assumptions separately from calculations based purely on historical data, some embodiments might use both types of calculations.  For example, some
embodiments might be configured to initially use the simplifying assumption that the probability that a caller is correctly transferred given the confidence level provided by the NLU is equal to the confidence level provided by the NLU, but to switch to
a calculation based on purely historical data once sufficient calls had been processed for the determination based purely on historical data to be meaningful.  Further, it should be understood that the above examples are not intended to be exhaustive
descriptions of the particulars of utilizing and updating predictive models, and that various embodiments might use different predictive models, might use predictive models in which the parameters such as the ranges of confidence levels vary, or may use
models in which parameters of data collected or the models themselves are user defined.  Since these and other modifications will be immediately apparent to one of ordinary skill in the art, the above examples should be treated as illustrative only, and
not limiting on the scope of the invention.


While the above examples demonstrated the use of historical data to update predictive models used by a DA, some embodiments might also be configured to update SLMs which could be used by an ASR.  To illustrate this process, consider FIG. 10,
which shows a sequence of actions which might take place to update an SLM in some embodiments.  In some embodiments following FIG. 10, when a customer made a statement, that statement would initially be captured [1001] and transferred to an ASR, which
would automatically transcribe the statement [1002].  After the statement had been transcribed, it would be associated with context data from the call [1003], perhaps by an ASA or some other component of an SCS [101].  The text generated by the ASR and
the call context data would then be combined into an AISDR [1004] which might be sent to a feedback system [103].  The feedback system [103] might then merge the AISDR with an associated LISDR to create a CLIDR [1005].  That CLIDR might then be
processed using a computer program to generate a new or updated SLM [1006].  The feedback system [103] might then automatically update the SCS [101] using that new or updated SLM [1007].
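

As one hypothetical illustration of step [1006], the sketch below estimates a simple bigram language model from transcripts such as might be stored in CLIDRs.  This is only one simple way an SLM might be re-estimated, not necessarily the method used in any particular embodiment, and the transcripts shown are invented for the example:


    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of one simple way an SLM might be re-estimated from CLIDR transcripts: count
    // word bigrams and convert the counts to conditional probabilities of the form
    // P(next word | previous word).
    public class SlmUpdateSketch {
        public static void main(String[] args) {
            List<String> transcripts = List.of(
                    "my printer is broken",
                    "my printer will not print");

            Map<String, Map<String, Integer>> bigramCounts = new HashMap<>();
            for (String transcript : transcripts) {
                String[] words = transcript.toLowerCase().split("\\s+");
                for (int i = 0; i + 1 < words.length; i++) {
                    bigramCounts.computeIfAbsent(words[i], k -> new HashMap<>())
                                .merge(words[i + 1], 1, Integer::sum);
                }
            }
            for (var previous : bigramCounts.entrySet()) {
                int total = previous.getValue().values().stream().mapToInt(Integer::intValue).sum();
                for (var next : previous.getValue().entrySet()) {
                    System.out.printf("P(%s | %s) = %.2f%n", next.getKey(), previous.getKey(),
                            (double) next.getValue() / total);
                }
            }
        }
    }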


Similarly, FIG. 11 shows a method for automating the process of updating SCMs which might be incorporated into some embodiments of the invention.  In FIG. 11, the process of updating SCMs starts with the capture of call audio data [1101], and
proceeds with the automatic transcription of that data [1102].  After the data is transcribed, the process of FIG. 11 automatically classifies the utterances [1103] within the SCS [101], and adds the classification and the data to an AISDR [1104].  If
necessary, an embodiment following FIG. 11 might also append a human derived classification to an LISDR [1105], and then both the LISDR and AISDR are used to generate a CLIDR [1106].  Then, in some embodiments following FIG. 11, the CLIDR might be used
in the automatic generation of SCMs [1107] and those automatically generated SCMs might then be used in the automatic updating of one or more NLUs [1108].


Of course, it should be understood that the above examples are intended to be illustrative only, and that many variations and alterations will be immediately obvious to one of ordinary skill in the art.  For example, the process of FIG. 11 might
be altered by retaining the automatic classification of utterances, but combining that with the step of creating a corpus of thousands of utterances prior to generating updated SCMs.  Thus, all such examples should be considered illustrative only and not limiting on the scope of the invention.


The foregoing is considered as illustrative only of the principles of the invention.  Further, since numerous changes and modifications will readily occur to those skilled in the art, it is not desired to limit the invention to the exact
construction and operation shown and described, and accordingly, all such suitable changes or modifications in operation which may be resorted to are intended to fall within the scope of the invention.


* * * * *