					Towards a wider use of
experiments in policy research

  Richard Dorsett

                         National Institute
                         of Economic and
                         Social Research
What do we mean by ‘experiment’?
• Greenberg and Shroder (2004) define a
  social experiment as having at least the
  following features:
  –   Random assignment
  –   Policy intervention
  –   Follow-up data collection
  –   Evaluation
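Random assignment is the defining feature: each eligible person is allocated to the programme or to a control group by chance alone, so the two groups differ only randomly at the outset. A minimal sketch of such an allocation (the group size and 50/50 split are illustrative, not taken from any of the studies discussed here):

```python
import random

def randomly_assign(ids, treatment_share=0.5, seed=42):
    """Randomly split a list of participant IDs into treatment and control.

    Every participant has the same chance of assignment, so any later
    difference in outcomes between the groups can be attributed to the
    intervention rather than to who was selected.
    """
    rng = random.Random(seed)   # fixed seed keeps the split reproducible/auditable
    shuffled = ids[:]           # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * treatment_share)
    return shuffled[:cut], shuffled[cut:]   # (treatment, control)

treatment, control = randomly_assign(list(range(1000)))
```

In practice the seed (or an equivalent audit trail) matters: an evaluator must be able to show that assignment could not be manipulated case by case.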
Why experiments are important
• They represent the ‘gold standard’
  –   robust results
  –   easily understood, communicated and trusted
  –   unverifiable assumptions avoided
  –   statistical power maximised
• However…
  – Require careful design, planning and monitoring
  – produce ‘black-box’ estimates (though this is also true of other approaches)
  – ethical issues perceived to be greater
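The power claim can be made concrete: with 1:1 random assignment, the sample needed to detect a given effect follows directly from the standard two-proportion formula. A rough sketch (the baseline rate and target effect below are illustrative, not drawn from any of the evaluations discussed here):

```python
import math

def n_per_arm(p_control, effect, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm to detect `effect` (a change in an
    outcome rate, e.g. 0.05 = 5 percentage points) with a two-sided test
    of two proportions at 5% significance and 80% power (the default z values).
    """
    p_treat = p_control + effect
    p_bar = (p_control + p_treat) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar)) +
           z_beta * math.sqrt(p_control * (1 - p_control) +
                              p_treat * (1 - p_treat))) ** 2
    return math.ceil(num / effect ** 2)

# e.g. to detect a 5 percentage-point rise in a 30% employment rate
n = n_per_arm(0.30, 0.05)
```

A 5-point effect on a 30% baseline needs roughly 1,400 people per arm; halving the detectable effect roughly quadruples the sample, which is why large demonstrations like ERA recruit in the thousands.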
Potted history of labour market
experiments
• US has long and steady history dating
  back to Ford administration of mid-70s
• First UK example was Restart in 1989
• Very little since then until quite recently
• Now some momentum building:
  – October 2008 HMT conference 'Raising Standards in
    UK Policy Evaluation’ focused on experiments
    http://www.gsr.gov.uk/resources/policy_evaluation.asp
  – Recent/ongoing large-scale, influential experiments
Employment Retention and
Advancement demonstration
• Encourages FT work among lone parents
  and long-term unemployed people via:
  – 2 years of job coaching and support
  – Up to 6 payments of £400 for sustained FT work
  – £1,000 training bonus plus up to £1,000 for
    completing training
  – In-work emergency fund of £300/worker
• 16,000 people in 6 districts
Lessons from ERA
• Policy lesson
  – ERA increased earnings and employment, particularly
    for lone parents
• Evaluation lessons
  –   Technical advisers key to effective randomisation
  –   Qualitative interviews key to interpreting results
  –   Long-term outcomes key to observing full effects
  –   Applying nonexperimental techniques:
       • Nonparticipation study to understand validity
       • Cross-office analysis to understand what drives
         results
Considering a ‘pared-down’ RA
• Is the programme new? If not, consider:
  – less intensive monitoring
  – smaller-scale process study
  – using existing cost information for CBA
• Is the programme voluntary? If not:
  – non-participation analysis becomes less important
• Are outcomes narrowly-defined? If so:
  – use available administrative data (benefit receipt,
    benefit amount, employment status, earnings)
Example 1: ND25+ for the over-50s
• Effect on labour supply of mandating full
  ND participation for over-50s (ie applying
  same ND rules as for under-50s)
• Randomisation tool collected background
  details not available in admin data
• Outcomes taken from employment and
  benefit admin data
• Monthly monitoring reports to identify any
  problems and give emerging evidence on
  impacts
ND25+: plausible results
  [Figure: estimated impacts on Employed, JSA and IB/IS outcomes, plotted
  against days since randomisation (New Deal entry); impacts shown on a
  scale from -0.15 to +0.15]

                  http://www.dwp.gov.uk/asd/asd5/rports2007-2008/rrep500.pdf
Example 2: Work-focused interviews
for partners (WFIP)
• April 2004, mandatory WFIs introduced for
  partners of benefit claimants
• Partner records had to be downloaded
  before WFIP possible
• Capacity constraint meant downloads took
  place weekly over 2 months
  – order of download determined by last three digits of
    the national insurance number
• So, order of download randomised
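The mechanism can be sketched as follows. UK NI numbers end in six digits followed by a letter, and the last three of those digits are effectively random, so ordering downloads by them randomises who is downloaded (and hence interviewed) first. The mapping from digits to weeks below is illustrative; the study's actual schedule is not reproduced here:

```python
def download_week(ni_number, n_weeks=8):
    """Assign a record to a download week from the last three digits of the
    NI number (format e.g. 'AB123456C'). Because those digits are effectively
    random, download order -- and hence WFI start date -- is randomised.
    """
    digits = [c for c in ni_number if c.isdigit()]
    last3 = int("".join(digits[-3:]))
    return last3 * n_weeks // 1000   # week 0 .. n_weeks - 1

week = download_week("AB123456C")   # digits 456 -> week 3 of 8
```

This is an example of randomisation arising from an administrative constraint rather than by design: no one set out to run an experiment, but the staggered rollout created one.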
Randomisation worked well...
  [Figure: histograms of record identifiers (0-1000) by date of download,
  in eight weekly panels from 17 April to 5 June - roughly uniform in every
  panel, confirming that the download order was random]
...and gave plausible results
  [Figure: percentage-point difference (IV estimate with upper and lower
  95% confidence bounds), plotted for months -11 to +12 after WFIP]

                       http://www.dwp.gov.uk/asd/asd5/rports2005-2006/rrep352.pdf
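Each point on a plot like this is, at heart, a treatment-control difference in an outcome rate with its 95% confidence interval (the report uses an IV estimator to handle the staggered rollout; the sketch below shows only the simple experimental contrast, and the counts are invented, not taken from the WFIP evaluation):

```python
import math

def pct_point_diff(treat_success, treat_n, control_success, control_n):
    """Percentage-point difference in outcome rates between treatment and
    control, with a normal-approximation 95% confidence interval.
    Returns (lower bound, estimate, upper bound), all in percentage points.
    """
    p_t = treat_success / treat_n
    p_c = control_success / control_n
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / treat_n + p_c * (1 - p_c) / control_n)
    return (100 * (diff - 1.96 * se), 100 * diff, 100 * (diff + 1.96 * se))

# hypothetical: 220/1000 in work in the treatment group vs 180/1000 in control
lo, est, hi = pct_point_diff(220, 1000, 180, 1000)
```

An impact is conventionally judged significant when the whole interval sits above (or below) zero, which is what the confidence bands on the plot show month by month.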
Final points
• We have learnt a lot from the experience
  of ERA and other large experiments
• UK infrastructure undeveloped - much to
  learn from US colleagues
• Doing more experiments will strengthen
  evidence base and UK expertise
• Experiments can be affordable – room for
  smaller projects alongside ‘flagships’

				