Published by Topic Study Group 6 Adult and Lifelong Education
10th International Congress on Mathematical Education – www.icme-10.dk – Programme – TSG6
The “Skills for Life” national survey of adult numeracy in England.
What does it tell us? What further questions does it prompt?
Centre for Developing and Evaluating Lifelong Learning
University of Nottingham, Nottingham, UK
Abstract. The Skills for Life surveys of adult literacy and numeracy in England were
carried out in 2002-3 for the Department for Education and Skills, to meet its
requirements. This paper refers to the numeracy part of the survey. It outlines and
comments on the survey design; questions aspects of the progression and levels of the
Adult Numeracy Core Curriculum, which the items were required to address; reports
some of the findings; suggests ways in which the survey results can inform adult
numeracy learning and teaching; and makes some initial comparisons with the IALS UK
survey of 1994-5. The findings confirm that, for many, being ‘at a given level’ is not
meaningful for the individual, as levels embody predetermined assumptions about
progression and relative difficulty.
Introduction and aim of the survey
The Skills for Life (SfL) surveys of adult literacy and numeracy in England were
carried out in 2002-3 for the Department for Education and Skills, to meet its
requirements. This paper refers to the numeracy part of the survey, completed by a
representative sample of 8040 adults aged 16-65, chosen by postcode and interviewed
in their own homes. Discussion of the survey has already been reported (Gillespie
2002, 2003). The overall survey¹ results were published in November 2003 (Williams
et al 2003).
The aim of the survey was to produce national estimates, for the first time, of the
proportions of the adult population of England currently at each of five bands, related to
the levels of the Adult Numeracy Core Curriculum (Basic Skills Agency 2001).
at or below Entry Level 1 (≤EL1); at Entry Level 2 (EL2); at Entry Level 3 (EL3); at
Level 1 (L1); at or above Level 2 (≥L2).
General descriptions and sample activities for the five levels are
• Entry Level 1 - Understands information given by numbers and symbols in simple
graphical, numerical and written material – e.g. count items up to 10, write a
shopping list with multiple items, use comparative words (e.g. larger, lightest).
• Entry Level 2 - Understands information given by numbers, symbols, simple
diagrams and charts in graphical, numerical and written material – e.g. round prices
to nearest 10p, use and find halves, quarters, give directions, read simple scales to
nearest labelled division, extract information from lists and simple charts.
• Entry Level 3 - Understands information given by numbers, symbols, diagrams and
charts used for different purposes and in different ways in graphical, numerical and
written material – e.g. write sums of money in columns, match times in words to
clock faces, mix a baby’s feeding bottle following instructions, multiply/divide by
single digits, use 2 digit decimals in practical contexts.

¹ The survey itself was carried out by the British Market Research Bureau, using software designed by
Bradford Technology Limited. The numeracy items, their presentation and the survey algorithms were
the responsibility of the Centre for Developing and Evaluating Lifelong Learning (CDELL), School of
Education, University of Nottingham.
• Level 1 – Understands straightforward mathematical information used for different
purposes and can independently select relevant information from given graphical,
numerical and written material - e.g. find 20% increase, interpret bar charts, line
graphs, draw simple plans, use timetables, scale quantities using proportion.
• Level 2 – Understands mathematical information used for different purposes and
can independently select and compare relevant information from a variety of
graphical, numerical and written material – e.g. compare data using mean and
median, work out discounts as fractions and percentages of amounts, use scale …
Design requirements and concerns
The survey tasks were required to be multiple choice items, presented to respondents
via laptops operated by the interviewers, which also stored individual respondents’
responses. Respondents were allowed access to pencil and paper but not to calculators.
Two design objectives were to base the estimates of level on what respondents could do
rather than what they couldn’t, and to encourage and motivate respondents to engage in
and complete the survey, through their positive reactions to the survey experience.
The survey team had the following in mind.
• The full range of numeracy ability would be encountered, from very limited, up to
mathematics degree level.
• Respondents’ co-operation would be for altruistic, personal interest reasons only.
• Many respondents’ previous experience of mathematics might make them likely to
be discouraged by a succession of questions they could not cope with.
• The style of survey would need to be able to respond to individuals’ numeracy
profiles – for example, being comfortable with arithmetic of money, but not with …
• The numeracy part of the survey had to be designed to take no more than 30 minutes.
The team was required to work to a tight timescale, with less than six months from
initial planning to the development and testing of final versions of the survey on
laptops. Inevitably this affected aspects of the survey development which in ideal
circumstances would have been significantly extended. Consideration of the findings
needs to take this into account.
The use of laptops to present the items imposed limitations on the amount of text or
graphics in an item. But it allowed ‘tuning’ of the sets of items presented to individual
respondents to suit their emerging numeracy capability, with software selecting the next
group of items based on the degree of success with the last. This added complexity and
raised key issues about the survey design - e.g. the majority of items would not be
presented to all respondents. However, it addressed the two objectives. Once
respondents commenced the survey, only a very few (approximately 3%) chose not to
complete it. The adaptive routing through the items resulted in over 90% of respondents
correctly answering at least 10 of the 19 items presented, with a mean of 13.9 correct
items across all respondents. Many said they had enjoyed the experience of the survey.
Had a single set of items been used, with all respondents working through the same set,
then those with capabilities at Entry Levels 1 and 2 could have met a majority of items
beyond them, unlikely to engage their interest. Those with capabilities at or above Level
2 could have encountered only a few items at an appropriately more demanding level.
In all, 48 items were developed. Respondents were presented with 19 from these 48
items in seven groups or ‘steps’, selected by algorithm to suit the individual’s emerging
levels of proficiency. Each step targeted different aspects of numeracy – such as
interpreting charts, working with measures and scaling, working with money. In Step 1,
all respondents met the same items, two at Entry Level 1 and one each at Entry Level 2
and Entry Level 3. These were straightforward, everyday money tasks. Based on their
performance, they then met one of three overlapping groups of five items from the nine
items in Step 2 ranging from Entry Level 1 to Level 2, targeting whole number
calculations and time. Depending on their performance at Step 2, the algorithm took
respondents to an item of an appropriate level from the seven items in Step 3, again
ranging from Entry Level 1 to Level 2. This process was repeated for Steps 4 - 7.
Respondents met two items in each of Steps 3 - 7.
In general, if respondents got the first item in a step correct, they were then presented
with the second one at a higher level. If they got this one right, they met their first item
in the next step at this higher level; if they got it wrong, they reverted to their original
level for the first item in the next step.
If respondents got the first item in a step wrong, they were then presented with a second
one at a lower level. If they got this right, they met their first item in the next step at the
original level; if they got the second item wrong, they met their first item in the next
step at a further level below – that is two levels below their original level.
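The step-to-step routing rule just described can be sketched in code. The sketch below is illustrative only: the function name and the numeric level coding (1 = Entry Level 1 up to 5 = Level 2) are assumptions, and the real algorithm worked with overlapping groups of items rather than single-level slots.

```python
def next_step_level(level, first_correct, second_correct):
    """Illustrative sketch of the routing rule described above.

    `level` codes the level of the first item in the current step
    (1 = Entry Level 1 ... 5 = Level 2); the return value is the
    level of the first item in the next step.
    """
    if first_correct:
        # The second item of the step is presented one level higher.
        if second_correct:
            return min(level + 1, 5)  # promoted to the higher level
        return level                  # revert to the original level
    # First item wrong: the second item is presented one level lower.
    if second_correct:
        return level                  # stay at the original level
    return max(level - 2, 1)          # drop to two levels below original
```

For instance, a respondent at Level 1 (coded 4) who answered both items of a step correctly would meet the first item of the next step at Level 2 (coded 5).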
Thus their passage through the items was characterised by repeated chances to see if
they could manage higher level items, along with the provision of more straightforward
items when they encountered difficulties. Some high level respondents continued to
meet items mainly at Level 1 and Level 2. Others, at low levels, met items mainly at
Entry Level 1 and Entry Level 2. The majority met items at several different levels.
A proportion of the Level 1 and Level 2 items was closely based on previously-used
Adult Numeracy test items, adapted to fit the survey requirements and screen layout.
The items for the lower three levels were new. To produce balanced sets of items for
each respondent covering a range of topic areas, groups of items were required on the
same theme, pitched at different levels.
In designing new items, the team took account of tasks (for the most part short answer
or discussion-prompt) used in other surveys of standing, including
• A fresh start: Improving numeracy and literacy, DfEE (1999),
• The Basic Skills of Young Adults, Ekinsmyth and Bynner (1994),
• International Adult Literacy Survey (IALS) survey tasks,
• tasks developed at the Freudenthal Institute, University of Utrecht, by van den
Heuvel-Panhuizen, M. (1994 and 1996),
• draft tasks for the PISA 2003 mathematics assessment.
Piloting took place with groups of adult numeracy students and tutors, enabling
revisions to the wording and presentation to be made. Items were re-checked against the
Core Curriculum statements. Piloting with the revised items was then undertaken with
small numbers of adults by members of the BMRB survey team in the same way that
survey respondents would be recruited.
The whole process of ascribing individuals to particular levels includes assumptions and
begs questions. The Core Curriculum levels represent stages in an overall learning
curriculum, tied to other national curricula. The levels are not necessarily natural levels
of progression for the individual. Individuals will have their own areas of strength (their
own ‘spiky profiles’) – connected to their areas of usage of numeracy in their work and
current life experience – which will vary from individual to individual. These profiles
will not necessarily be the same as for those following numeracy learning programmes.
The survey team therefore aimed to find the level which best matched the individual’s
overall performance, taken as a whole.
Estimating overall numeracy levels of respondents
Many respondents’ performances included correct responses to items at several
different levels. Alternative schemes for setting the overall level were trialled and their
results compared. The method finally chosen was to sum overall performance, scoring 1
for a correct response to an Entry Level 1 item, up to 5 for a Level 2 item, thus taking
into account all aspects of the respondent's performance. This led to the setting of
minimum threshold scores best corresponding to each level, carefully chosen after
scrutiny of individual performances and of the performance of individual items.
Level thresholds and overall results
The starting assumption was that a respondent subsequently classified at a particular
level would respond correctly to at least 60% of the items encountered at that level and
to nearly all items at lower levels. Thus, in setting the ≥ Level 2 threshold, the starting
assumption was that respondents would correctly respond to the lower level items in
Step 1 and Step 2, then to six of the remaining ten Level 2 items, leading to a score of …
These draft thresholds were refined by scrutiny of the overall performance of a sample
of respondents scoring close to the proposed grade threshold. Respondents scoring 53
and below were found to be best described as Level 1, those scoring 60 and above as
Level 2, with the proportions for the two levels gradually changing between these
scores. The Level 2 threshold was finalised as 57. This process was repeated at the
other three thresholds.
Based on these thresholds, the proportions of the sample population in each band were
estimated as in Table 1 (from the data available).
                        ≤ EL1   EL2     EL3     L1      ≥ L2    Total
Number of respondents    392    1288    1951    2095    1844    7570
% of respondents         5.2    17.0    25.8    27.7    24.4    100
Score range             0-17   18-29   30-43   44-56   57-76

Table 1. Proportions of the survey population in each band
Too much should not be read into small differences in the percentages.
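The scoring and banding scheme described above can be expressed compactly. The sketch below uses the score thresholds from Table 1; the function and dictionary names are illustrative, not part of the survey software.

```python
# Points per correct item: 1 for an Entry Level 1 item up to 5 for a Level 2 item.
LEVEL_POINTS = {"EL1": 1, "EL2": 2, "EL3": 3, "L1": 4, "L2": 5}

def total_score(correct_item_levels):
    """Sum the points for the levels of all correctly answered items."""
    return sum(LEVEL_POINTS[lvl] for lvl in correct_item_levels)

def band(score):
    """Map a total score (maximum 76) to a band using the Table 1 thresholds."""
    if score <= 17:
        return "at or below EL1"
    if score <= 29:
        return "EL2"
    if score <= 43:
        return "EL3"
    if score <= 56:
        return "L1"
    return "at or above L2"
```

For instance, correct responses to the four Step 1 items (two at Entry Level 1, one each at Entry Levels 2 and 3) would contribute 1 + 1 + 2 + 3 = 7 points.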
The spiky profiles of individuals
In the event, nearly all respondents moved between levels (as recorded by the level of
the first item of a step) as they progressed from step to step. The records of a sub-
sample of the respondents were studied to estimate the extent to which individual
respondents moved between levels from one step to the next, after completing the
initial items in Steps 1 and 2. Table 2 shows this distribution for the sub-sample,
together with the distribution for each group of respondents determined by their overall
level.
                      Overall levels of respondents
Range of   % of whole   % of ≤ EL1    % of EL2      % of EL3      % of L1       % of ≥ L2
levels     sample       respondents   respondents   respondents   respondents   respondents
  1            4            19             0             1             0            17
  2           23           26             3             6            26            71
  3           33           52            29            30            48            12
  4           31            4            55            41            25             0
  5            9            0            12            22             1             0

Table 2. Distribution of range of levels across steps 2 to 7
Thus, in the sub-sample, 17% of those classified ≥ Level 2 only met items from Step 3
onwards at one level, while 71% of those at overall ≥ Level 2 and 26% of those at ≤
Entry Level 1 moved between two levels. Those at levels Entry Level 2 and Entry Level
3 moved between levels much more, with, for instance, 67% and 63% with ranges of 4
or 5 levels. Overall, 73% of respondents in this sub-sample moved between at least
three levels as they progressed through the survey.
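As a check, the 73% figure follows directly from the whole-sample column of Table 2 (the dictionary below simply restates that column; its name is illustrative):

```python
# % of the whole sub-sample whose items spanned 1..5 levels (Table 2, second column).
range_of_levels_pct = {1: 4, 2: 23, 3: 33, 4: 31, 5: 9}

# Respondents who moved between at least three levels.
moved_three_plus = sum(pct for rng, pct in range_of_levels_pct.items() if rng >= 3)
print(moved_three_plus)  # 73
```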
This suggests that many individuals have personal areas of strength and weakness
spanning several levels and that being ‘at a given level’ is not meaningful for them, as
levels embody predetermined assumptions about progression and relative difficulty.
Further, since most of the respondents have not undertaken either formal or informal
numeracy or mathematical learning for many years, the findings provide a glimpse of
the diversity of numeracy strengths and weaknesses that may be encountered when
adults return to learning, and of the need for numeracy support programmes to be
flexible enough to recognise individuals’ personal areas of strength and weakness and
relevant …
At first sight it might appear surprising that significant percentages of those classified at
the two extremes (17% of those at or above Level 2 and 19% of those at or below Entry
Level 1) stayed at a single level as they progressed from step to step. However, we may
consider the survey as having concentrated on a middle part of the much wider
distribution of numeracy capability, from those far above Level 2 in capability to those
well below Entry Level 1. These two extreme ‘tails’ of capability would be likely to
find even the hardest survey items unchallenging, or even the most straightforward
items beyond them. We may note in passing that the top 1% of respondents scored the
full 76 points, whilst the bottom 1% scored 11 or less, representing success at little
more than half the Entry Level 1 items.
Performance of individual items
For 43 of the 48 items, actual or inferred² percentages of correct responses for
respondents from all levels were obtained, weighted to match proportions in the whole
population, so that the overall performance on each item by the full sample, and hence
by the whole population, could be estimated.
Table 3 below shows what proportions of the whole sample responded correctly, or
could be inferred as doing so, for the 43 items. Without tying items to specific levels,
these results give indications of the proportions of the adult population who might have
difficulty with specific items. For instance, looking at the seven Level 1 items, four
were answered correctly (or could be inferred as being answered correctly based on
respondents’ overall performance) by between 50 and 59% of the whole sample, two
more by between 70 and 79% and one by between 80 and 89% of the sample. Fourteen
items which appeared to be the hardest (two at Entry Level 3, four at Level 1 and
altogether eight at Level 2) provided difficulties for approximately half the sample.
                          Overall % of respondents answering item correctly (or inferred as doing so)
                          90-100  80-89  70-79  60-69  50-59  40-49  30-39  20-29  10-19  <10   Total of 43  Out of 48
Number of items at ≥ L2                          1      1      2      1      3      1            9            11
Number of items at L1              1      2      0      4                                        7            7
Number of items at EL3      1      1      3      1      2                                        8            8
Number of items at EL2      4      3      1                                                      8            8
Number of items at ≤ EL1    9      1      1                                                      11           14
Table 3. Proportions of the respondents who answered an item correctly or could be
inferred as doing so.
Examples of SfL items
Shown below are three SfL items with different task contexts and at different levels,
together with summaries of performance by respondents meeting these items. The ways
in which best estimates of overall levels for individuals were made are explained after
the …
Item 32 - designed for Level 1 - encountered by over half the respondents.
² See the section ‘Examples of SfL items’ below for an explanation of the term ‘inferred’.
                                        ≤ EL1   EL2    EL3    L1     ≥ L2   Total
Number of respondents meeting the item   0      118    1193   1832   1110   4253
% answering correctly (est.)             0      15.3   38.4   70.7   93.4

Estimate for % of population who would answer correctly: 54.9

Table 4. Summary of correct responses for item 32
Table 4 shows that, of those who met it, 70.7% of those classified as overall Level 1
answered this item correctly. Nearly all ≥ Level 2 but only 15% of Entry Level 2
respondents answered it correctly.
Note that no Entry Level 1 respondent actually met this item. However, taking into
account the fact that 38% of Entry Level 3 respondents, but only 15% of Entry Level 2
respondents, who met the item answered it correctly, it was reasonably inferred that the
item would have been too challenging for all those classified as Entry Level 1 or below.
When the results are weighted to correspond to the overall proportions in the five
bands, the estimate is that about 55% of the whole population would answer this item
correctly.
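The weighting calculation can be reproduced from Tables 1 and 4; the variable names below are illustrative, and the 0 entered for the lowest band is the inferred value discussed above.

```python
# Band proportions from Table 1, as % of the whole sample.
band_weights = {"<=EL1": 5.2, "EL2": 17.0, "EL3": 25.8, "L1": 27.7, ">=L2": 24.4}

# Estimated % answering item 32 correctly in each band (Table 4);
# the 0 for <=EL1 is inferred, since no such respondent met the item.
item32_pct_correct = {"<=EL1": 0.0, "EL2": 15.3, "EL3": 38.4, "L1": 70.7, ">=L2": 93.4}

# A weighted average over the bands gives the whole-population estimate.
estimate = sum(band_weights[b] * item32_pct_correct[b] for b in band_weights) / 100
print(round(estimate, 1))  # about 54.9, as in Table 4
```

The same calculation applied to the figures for items 37 and 48 reproduces the 84% and 26% estimates quoted below.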
Item 37 - designed for Entry Level 2 - encountered by nearly 2000 respondents.
                                        ≤ EL1   EL2    EL3    L1     ≥ L2   Total
Number of respondents meeting the item   189    539    901    210    –      1839
% answering correctly (est.)             36.5   58.1   81.7   96.2   –

Estimate for % of population who would answer correctly: 83.8

Table 5. Summary of correct responses for item 37
Note that the screen version of this item has the 5, 10, 15, 20 and 25 lines highlighted in
a contrasting colour. Table 5 shows that, of those who met it, 96.2% of those classified
as overall Level 1 answered this item correctly; 58.1% of Entry Level 2 respondents,
but only 36.5% of ≤ Entry Level 1 respondents, answered it correctly.
Again, note that no one classified as Level 2 or above actually met this item. However,
based on the 82% of those classified as Entry Level 3, and the 96% at Level 1, who
answered correctly, it is reasonable to infer that practically all those classified at Level
2 and above would have answered it correctly. When the results are weighted to
correspond to the overall proportions in the five bands, about 84% of the overall
population would be expected to answer this item correctly.
Item 48 - designed for Level 2 - encountered by nearly 1000 respondents.
                                        ≤ EL1   EL2    EL3    L1     ≥ L2   Total
Number of respondents meeting the item   –      –      2      84     884    970
% answering correctly (est.)             –      –      0      26.2   75.9

Estimate for % of population who would answer correctly: 25.8

Table 6. Summary of correct responses for item 48
In this item, the respondent tells the interviewer to slide three of the labels on to the
appropriate fridge-freezers. The three reductions are £300 to £180, £240 to £160 and
£350 to £280. Table 6 shows that, of those who met the item, 75.9% of those classified
as overall ≥ Level 2 answered this item correctly, but only 26.2% of Level 1
respondents did so. When the results are weighted to correspond to the overall
proportions in the five bands, about 26% of the overall population would be expected to
answer this item correctly.
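Each of the three reductions in item 48 works out to a simple fraction of the original price, which is the arithmetic the item targets. The short script below checks this directly (it is illustrative, not part of the survey materials):

```python
from fractions import Fraction

# (original price, sale price) in pounds for the three fridge-freezers in item 48.
reductions = [(300, 180), (240, 160), (350, 280)]

for before, after in reductions:
    # Discount expressed as a fraction of the original price, in lowest terms.
    discount = Fraction(before - after, before)
    print(f"£{before} -> £{after}: {discount} off, i.e. {float(discount):.0%}")
```

The discounts come out as 2/5 (40%), 1/3 (33⅓%) and 1/5 (20%) respectively.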
Similar data is available for all the items, enabling estimates to be made of proportions
of the adult population who could be expected to answer each of the items correctly.
Comparisons with the IALS survey
The International Adult Literacy Survey (IALS) of 1995 aimed to find out how adults
used printed information to function in society. Its 15 short answer quantitative literacy
tasks focused on extracting information from printed materials. For many of these tasks,
the printed materials were significantly more extensive and complex than the SfL items.
The SfL design team simplified the language and diagrams on the screens, so that
performance on the items would be inhibited as little as possible by difficulties in
reading and visual comprehension. As the tasks were multiple choice items, the
respondents tended to ‘check out’ options, compare them with their own solution, and
then check their own steps where none of the four options matched, very different from
asking the respondent to provide a short answer solution, as in the IALS survey.
For IALS the results were reported at five levels as summarised below.
Level 1. Very low literacy skills – individual may have difficulty identifying correct
amount of medicine to give to a child.
Level 2. Respondents can deal with simple clearly laid out material, tasks are not too
complex; would have difficulty learning new job skills requiring higher level of
literacy; e.g. use a table and weather chart to find out how many degrees warmer
it is forecast to be in Bangkok than in Seoul.
Level 3. Considered minimum desirable threshold in some countries, some occupations
require higher skills; e.g. work out how much more energy Canada produces than
it consumes by comparing figures on two bar charts.
Levels 4&5. Increasingly higher literacy skills requiring ability to integrate several
sources of information or solve more complex problems; e.g. calculate the final
value of a $100 investment at 6% for 10 years using a compound interest table.
A limited comparison between items suggests that IALS Level 1 best corresponds to
Entry Levels 1 and 2, IALS 2 to Entry Level 3, IALS 3 to Level 1 and IALS 4 and 5 to
≥ Level 2. Table 7 shows how IALS UK results compare with SfL results for England.
Equivalent SfL survey levels   SfL England survey (n=8040), %   IALS UK survey (n=2472), %
≤ EL1                          5
EL2                            16                               23
EL3                            25                               27
L1                             28                               30
≥ L2                           25                               20

Table 7. Comparison between surveys
Findings and comparisons
The numeracy settings and contexts were ‘givens’ from the core curriculum, rather than
contexts that individuals were actually involved in. There was no opportunity to ask
follow-on questions or to find out the reasoning that lay behind a particular option
selection. The survey could not find how individuals responded to numeracy challenges
of their own, just glimpse some aspects of their numeracy capabilities. However, the
large scale of the survey enabled many comparisons to be made between different
groups who took part – by region within England, by age, by sex, and by other
categories. These are well documented by Williams (op. cit.) but also beg questions.
An interpretation from the survey results such as that ‘nearly half the adults in England
are classified as below Level 1 in numeracy’ (Williams op. cit.) begs the questions
‘what is meant by Level 1 in Numeracy?’ and ‘does this test adequately assess it?’ The
multiple choice test items can only provide an indication of capability. Wedege (2003)
draws attention to the three overlapping contexts of adults’ mathematics in work –
culture, society and work – and to the range of definitions of the term ‘numeracy’.
These definitions encompass much more than performance on multiple choice items. As
already discussed, it is also unlikely that an individual can be said to be ‘at Level 1’,
say, even if an adequate assessment had been undertaken.
It is suggested that what the survey results can do is to
• provide broad indications of the range of performance likely to be shown by the adult
English population on a graded set of multiple choice items related to the Adult
Numeracy Core Curriculum (see ‘Performance of individual items’ above),
• draw attention to and illustrate the likely actual range of capability of individuals
nominally said to be ‘at a given level’,
• provide data regarding adaptive styles of assessment that now need further
investigation and development,
• suggest modifications to the current allocation of topics to levels in the Core
Curriculum.
The wide acceptability of the combination of computer-presented items with adaptive
selection of items (even though modest in extent), evidenced by the very high level of
completion of the numeracy assessment, suggests that the process was effective in
engaging the interest and co-operation of the respondents, and indicates scope for the
wider use of this combination in initial diagnostic and formative assessments.
Williams’s analysis provides thought-provoking comparisons and prompts for further
investigation. For example, the finding that “only 21% of respondents in the North East
were at Level 2, compared with 32% in the South East” provides a measure of
comparison between the adult populations in the two regions which begs other
questions. The initial lifestyle questions in the survey also showed respondents’
personal circumstances, e.g. “33% of respondents in the North East showed high levels
of the Index of Multiple Deprivation, compared with only 4% in the South East”.
Over 90% of the respondents thought they were “very good or fairly good at
mathematics” (Williams et al 2003), suggesting that individuals have their own
‘comfort zones’, within which they can operate with some success, maybe without
being aware of differences in numeracy capability between themselves and others.
Many may not be aware of barriers to their own advancement represented by their own
limited numeracy capability, a point also noted in relation to the IALS survey.
Conclusions and suggestions for further work
The following are some among many conclusions and suggestions for further work.
• The survey was conducted within a very tight timescale. There is need to check,
modify and re-run the survey under less time pressure.
• Assuming that an individual could be ‘at a given level’ begs many questions.
• Level 1 is the level associated with, for example, the numerical components of many
Foundation Modern Apprenticeships, a widely-used range of training programmes
for those entering semi-skilled occupations. There may thus be cause for concern in
the indication from the survey that approximately 50% of the adult population of
England may be operating at numeracy levels below Level 1.
• While cause and effect links between levels of success in the survey and social and
economic backgrounds of the respondents are not clear, they do suggest that poor
numeracy skills may be hampering individuals’ success and personal advancement.
• Finding effective ways of helping individuals, satisfied by current low numeracy
levels, to improve their skills will require sensitive long-term investigation.
• Using adaptive testing, made possible by the software selection of items for each
respondent on the laptops, is a new departure for surveys of this type. It needs to be
researched further. Likely benefits include the high levels of engagement of
respondents with the survey items, but areas of concern need investigation:
• The effects of different algorithms – how are results modified by the use of
particular algorithms, or no algorithm so that all respondents met all items?
• Short answer versus multiple choice responses – how do levels of success
change when items are presented in short answer, rather than in multiple choice,
form?
• The styles of question – the items could be viewed as relatively
‘school/mathematical’ compared to actual activities with numeracy elements.
Different styles of assessed activity, let alone styles of item, need to be trialled.
• The voice of the interviewee is conspicuous by its absence. Estimates of so-called
level from this survey gave respondents no opportunity to show what they could do
in situations of interest or concern to themselves.
References
Basic Skills Agency (2001). Adult Numeracy Core Curriculum. London: Basic Skills Agency.
DfEE (1999). A fresh start: Improving numeracy and literacy. London: Department for
Education and Employment (now Department for Education and Skills).
Ekinsmyth, C. and Bynner, J. (1994). The Basic Skills of Young Adults: some findings from the
1970 British Cohort Study. London: Adult Literacy and Basic Skills Unit.
Gillespie, J. (2003). The National Survey of Numeracy in England – Findings, insights,
reflection and implications. In J. Maasz and W. Schoeglmann (Eds.) Learning
Mathematics to Live and Work in our World. Proceedings of the 10th International
Conference on Adults Learning Mathematics (pp. 102-109). Linz: Universitätsverlag.
Gillespie, J. (2002). The National Basic Skills Survey of adults in England 2002-3: the
numeracy survey to date. In J. Evans, P. Healy, D. Kaye, V. Seabright and A. Tomlin
(Eds.) Policies and Practices for Adults Learning Mathematics: Opportunities and Risks.
Proceedings of the 9th International Conference on Adults Learning Mathematics.
London: ALM and King’s College London.
Hardwick, C. (1996). International Survey on Adult Literacy. In Perspectives, Summer 1996.
Ottawa: Statistics Canada.
OECD/UNESCO (1994-5 et seq.). International Adult Literacy Survey – IALS. Organisation
for Economic Co-operation and Development (OECD). Paris: Eurostat and UNESCO.
PISA (2001). Draft Framework for the PISA 2003 mathematics assessment, August 2001.
Programme for International Student Assessment, Nijmegen.
Van den Heuvel-Panhuizen, M. (1994). New chances for paper and pencil tests. In Proceedings
of the 45th CIEAEM meeting. Italy: University of Cagliari.
Van den Heuvel-Panhuizen, M. (1996). Assessment and realistic mathematics education.
Utrecht: Freudenthal Institute.
Wedege, T. (2003). Sociomathematics: Researching Adults’ Mathematics in Work. In J. Maasz
and W. Schoeglmann (Eds.) Learning Mathematics to Live and Work in our World.
Proceedings of the 10th International Conference on Adults Learning Mathematics (pp.
38-48). Linz: Universitätsverlag.
Williams, J. et al (2003). The Skills for Life Survey, a national needs and impact survey of
literacy, numeracy and ICT skills. London: Department for Education and Skills.