
MEDA project:

Support to Conformity
Assessment Bodies Activities




                    Workshop on accreditation



          Requirements related to quality
    and technical aspects for inspection bodies



                                 Vademecum




                               Eng. Félix Gutiérrez




WORKSHOP ON ACCREDITATION


                                              REQUIREMENTS RELATED TO
                                          QUALITY and TECHNICAL ASPECTS
                                                  FOR INSPECTION BODIES

INTRODUCTION

If you are involved in performing inspections and are implementing the requirements of ISO/IEC 17020:1998
“General criteria for the operation of various types of bodies performing inspection”, particularly with the aim of
achieving accreditation for the inspections you perform, then this workshop is for you.

This course explains the criteria in a simple, accessible way, helping you set up a cost-effective system to
control your operations and achieve accreditation.



OBJECTIVES

On completion of this training workshop, participants will have an understanding of:

The requirements of the ISO/IEC 17020:1998 standard;
How to apply the criteria in your organization;
The accreditation process;
Verification and calibration of equipment;
Sampling procedures;
Determination of uncertainty;
Traceability of measurements; and
Validation of methods.



WORKSHOP OUTLINE

The workshop provides you with the following skills and knowledge:

The highlights of the standard and accreditation;
The application of ISO/IEC 17020 to a conformity assessment body;
A clause-by-clause analysis of the ISO/IEC 17020 accreditation criteria, what they are and what they mean,
and some guidance on their implementation.








                                          Abbreviations and acronyms
ANOVA     Analysis of variance
APLAC     Asia Pacific Laboratory Accreditation Cooperation
BIPM      Bureau International des Poids et Mesures
CAB       Conformity Assessment Body
CASCO     ISO Committee on Conformity Assessment
CEN       European Committee for Standardization
CENELEC   European Committee for Electrotechnical Standardization
CGPM      Conférence Générale des Poids et Mesures
EA        European Cooperation for Accreditation
EAL       European Cooperation for Accreditation of Laboratories
EN        European Norm
EU        European Union
GUM       Guide to the expression of uncertainty in measurement
IAF       International Accreditation Forum
IEC       International Electrotechnical Commission
ILAC      International Laboratory Accreditation Cooperation
ILC       Interlaboratory Comparison
ISO       International Organization for Standardization
M&TE      Measuring and test equipment
MIL-STD   Military Standard (USA)
MRA       Multilateral Recognition Agreement
NATA      National Association of Testing Authorities (Australia)
NIST      National Institute of Standards and Technology (USA)
PT        Proficiency Testing
RSS       Root-sum-of-squares
RvA       Raad voor Accreditatie (Dutch Accreditation Council)
SANAS     South African National Accreditation System
SI        International System of Units
VIM       International vocabulary of basic and general terms in metrology







                                                                          Summary

Chapter 1: Conformity assessment concepts
1.1 Introduction                                                                          7
1.2 Conformity assessment bodies (CABs) vs. accreditation bodies                          7

Chapter 2: ISO/IEC 17020:1998 Standard
2.1 The standard’s role                                                                  9
2.2 ISO/IEC 17020 requirements                                                          10
2.3 Implementing ISO/IEC 17020                                                          11
2.4 Guidance on ISO/IEC 17020 contents                                                  12

Chapter 3: The accreditation process
3.1 Scope of accreditation                                                              19
3.2 Accreditation process                                                               19
     3.2.1 Registration                                                                 21
     3.2.2 Preliminary investigation                                                    21
     3.2.3 Assessment                                                                   22
     3.2.4 Follow-up assessment                                                         23
     3.2.5 Accreditation decision                                                       23
     3.2.6 Maintaining the accreditation                                                24
3.3 Accreditation documents                                                             24

Chapter 4: Verification and calibration of equipment
4.1 Definitions                                                                         25
4.2 Quality system provisions                                                           26
4.3 Classification and identification of M&TE                                           26
4.4 Records                                                                             26
4.5 Management of M&TE                                                                  27
4.6 Intervals of calibration and verification of M&TE                                   27
4.7 Calibration and verification program                                                28
4.8 Calibration and verification procedures                                             28
4.9 Calibration and verification records                                                29
4.10 Calibration labelling                                                              30
4.11 Sealing for integrity                                                              30

Chapter 5: Sampling
5.1 Foreword                                                                            31
5.2 Why sampling                                                                        31
5.3 Systematic sampling                                                                 32
5.4 Cluster sampling                                                                    32
5.5 Stratified sampling                                                                 33
5.6 Quota sampling                                                                      34
5.7 Sampling errors                                                                     35





Chapter 6: Determination of uncertainty
6.1 Definition                                                                35
6.2 Factors contributing to uncertainty of measurements                       35
6.3 Policy and concept of uncertainty                                         37
6.5 Type A evaluation of standard uncertainty                                 37
6.6 Type B evaluation of standard uncertainty                                 38
6.7 Combined standard uncertainty                                             38
6.8 Expanded uncertainty                                                      38

Chapter 7: Traceability of measurements
7.1 Definition and hierarchy                                                  39
7.2 Elements of traceability                                                  40
7.3 International level                                                       40
7.4 National Metrology institutes                                             40
7.5 Accredited calibration laboratories                                       41
7.6 In-house calibration                                                      41
7.7 Terminology in the hierarchy of standards                                 41

Chapter 8: Validation of methods
8.1 Introduction                                                              43
8.2 General principles to be used in validation                               43
8.3 Validation procedures                                                     45
8.4 Repeatability (of results of measurements)                                46
8.5 Reproducibility (of results of measurements)                              47

Bibliography and web pages                                                    47








Chapter 1:
Conformity assessment concepts


1.1 Introduction

The process of determining whether products, processes, systems or people meet specified
requirements has been given the name conformity assessment.

The term covers such activities as inspection, testing and certification. Certification should
not be confused with accreditation:

      Accreditation is defined as a procedure by which an authoritative body gives formal
       recognition that a body or person is competent to carry out specific tasks; and
      Certification is defined as a procedure by which a third party gives written
       assurance that a product, management system or personnel conforms to specified
       requirements.

The standard ISO/IEC 17000:2004 “Conformity assessment – Vocabulary and general
principles” revises practically the whole body of terms in this field. This international
standard specifies general terms and definitions relating to conformity assessment,
including the accreditation of conformity assessment bodies, and to the use of conformity
assessment to facilitate trade.

Conformity assessment is now defined as an activity that provides demonstration that
specified requirements relating to a product, process, system, person or body are fulfilled.
It covers such activities as calibration, testing, inspection and certification, as well as the
accreditation of conformity assessment bodies.


1.2 Conformity assessment bodies (CABs) vs. accreditation bodies

A distinction is made between conformity assessment bodies (CABs) and accreditation
bodies.




A conformity assessment body is a body that performs conformity assessment services and
that can be the object of accreditation. Examples are laboratories, inspection bodies,
product certification bodies, management system certification bodies and personnel
certification bodies.

An accreditation body for conformity assessment is defined as an authoritative body that
performs accreditation. The authority of an accreditation body is typically derived from
government: either government-administered, or government-authorized by its designation
as a non-profit, public service or public benefit organization.




                              Fig. 1 Accreditation and CABs


Other definitions from ISO/IEC 17000 standard are:

      Test - determination of one or more characteristics according to a procedure;
      Inspection - examination of a product design, product, process or plant and
       determination of their conformity with specified requirements or, on the basis of
       professional judgment, general requirements;
      Certification is defined as a third-party attestation related to products, processes,
       systems or persons. Certification is applicable to all objects of conformity
       assessment except for conformity assessment bodies themselves, to which
       accreditation is applicable;
      Accreditation of conformity assessment bodies is defined as a third-party attestation
       that a conformity assessment body fulfils specified requirements and is competent
       to carry out specific conformity assessment tasks. Thus, formal recognition of the
       conformity assessment body’s competence is provided by an accreditation body.







Chapter 2:
ISO/IEC 17020:1998 Standard


2.1 The standard’s role

Accreditation bodies, in effect, supervise the proper implementation of standards by the
CABs or conformity assessment bodies (e.g. laboratories, certification bodies and
inspection bodies) and identify their competence to perform specific tasks, such as testing,
measurement, calibration, inspection and certification for given methods and product areas.




                    Fig. 2 Conformity assessment layout and standards






The Committee on Conformity Assessment (CASCO) of the International Organization for
Standardization (ISO) provides many of the accreditation and conformity assessment
standards on this subject. See figure 2 above.

The standard ISO/IEC 17020:1998 “General criteria for the operation of various types of
bodies performing inspection” has been drawn up with the objective of promoting
confidence in those bodies performing inspection which conform to it, taking into account
requirements and recommendations of European and international documents such as the
ISO 9000 (EN ISO 9000) series of standards.

The standard was prepared by CEN and CENELEC as EN 45004, and adopted by
ISO/CASCO as ISO/IEC 17020, replacing ISO/IEC Guide 39:1988 “General requirements
for the acceptance of inspection bodies” and ISO/IEC Guide 57:1991 “Guidelines for the
presentation of inspection results”.


2.2 ISO/IEC 17020 requirements

The standard’s requirements cover the functions of bodies whose work may include the
examination of materials, products, installations, plants, processes, work procedures or
services, the determination of their conformity with requirements, and the subsequent
reporting of results of these activities to clients and, when required, to supervisory
authorities.

Inspection of a product, an installation or a plant may concern all stages during the lifetime
of these items, including the design stage. Such work normally requires the exercise of
professional judgment in providing services, in particular when assessing conformity.

The standard includes 16 general criteria and 5 annexes, as follows:

   1. Scope
   2. Definitions
   3. Administrative requirements
   4. Independence, impartiality and integrity
   5. Confidentiality
   6. Organization and management
   7. Quality system
   8. Personnel
   9. Facilities and equipment
   10. Inspection methods and procedures
   11. Handling inspection samples and items
   12. Records
   13. Inspection Reports and inspection certificates
   14. Subcontracting
   15. Complaints and appeals
   16. Cooperation






The requirements for the independence of inspection bodies vary according to legislation
and market needs. ISO/IEC 17020 therefore includes, in annexes A, B and C, criteria for
independence related to clause 4.

Annexes D and ZZ are for information only. The first covers information to be included or
referenced in the quality manual and the second refers to the corresponding international
and European standards for which equivalents are not given in the text.


2.3 Implementing ISO/IEC 17020

Implementing the standard in an inspection body will require a set of documents composed
of: (i) a quality manual following the standard’s clauses and including or referencing the
information requested in annex D; (ii) a set of procedures; and (iii) quality records.

The table of contents of the quality manual should follow the clauses of the standard,
including a “chapter 0” as the first part of the manual, comprising relevant details about the
organization.

A suitable set of ISO/IEC 17020 procedures could consist of the following:

   1. Supervision of the inspections
   2. Document control
   3. Identification and control of non-conformities
   4. Corrective actions
   5. Preventive actions
   6. Internal audits
   7. Management review
   8. Training
   9. Preventive maintenance
   10. Calibration
   11. Supplier review
   12. Purchasing
   13. Verification of purchased products
   14. Contract review
   15. Sample preparation and identification
   16. Storage
   17. Control of quality and technical records
   18. Reporting of the results
   19. Subcontracting
   20. Customer complaints and feedback

Quality records could include the following forms:

   1. Definitions and terminology
   2. Quality policy and quality objectives




   3. Organization scheme
   4. Personnel cards
   5. Function descriptions
   6. Responsibilities
   7. Training
   8. Maintenance cards
   9. Maintenance schedule
   10. Maintenance survey
   11. Tests and specifications
   12. Client complaints
   13. Supplier cards
   14. Supplier problems
   15. Supplier review
   16. Approved suppliers
   17. Measuring equipment
   18. Calibration schedule
   19. Calibration survey
   20. Audit schedule
   21. Audit reports
   22. Internal problems
   23. Projects

That document set may eventually need other items, such as policies (e.g. quality,
independence or gifts policies), job descriptions, organizational charts, work and/or
technical instructions, external documents, etc.


2.4 Guidance on ISO/IEC 17020 contents

This section provides guidance notes additional to the ISO/IEC 17020 requirements. It is
intended to help inspection bodies understand issues that could be examined by
accreditation bodies during assessment.

It is to be used in conjunction with the standard. The commentary and interpretative
material is numbered with the relevant clause number of ISO/IEC 17020:1998.

1.4 Scope of accreditation. - In cases of ambiguity, final determination of whether a
particular activity may be included in the scope of accreditation as an inspection activity
should be made by the accreditation body, taking into account accepted international
practice where relevant.

2.1 Inspection.- For professional judgment to be exercised, the staff member responsible
for the inspection, referred to in clause 8.2 of ISO/IEC 17020, should personally perform
the inspection or effectively supervise the inspection.






3.5 Administrative requirements.- The conditions referred to in this clause of ISO/IEC
17020 are contractual conditions, not physical conditions at inspection sites. Items which
are commonly included in conditions of contract include:

      Access to documented inspection history
      Responsibility for safe site access
      Timely availability of key client personnel
      Preparation of items for inspection
      Response to adverse weather conditions
      Level of reporting
      Terms of payment
      Level of liability insurance, etc.

3.5 - In the case of type C inspection bodies (see annex C in the standard) the conditions of
contract should include a clear statement of the activities that prevent it being classified as
a type A inspection body.

4.1 Independence, impartiality and integrity.- An effective procedure normally requires
personnel to report and record any incidents of undue pressure they experience.

4.1 - Undue pressure on personnel may be brought to bear through financial, marketing,
customer relations and personal matters, as well as by other technical or non-technical
considerations.

4.2.3 Type C inspection body.- In the case of type C inspection bodies, contractual
conditions should include a clear statement summarising the interests or activities of the
inspection body or associated bodies that resulted in the type C classification. This
statement should be sufficiently explicit to enable potential clients to make informed
decisions on the adequacy of the level of independence offered.

6.2 Organization and management.- Details of personal or position responsibilities should
be included in the quality system documentation. This may cover clerical staff as well as
management, technical and inspection personnel.

6.3 - Functions of the technical manager may include, but not be limited to, authorization of
inspection methods, and technical support for inspectors.

7.1 Quality system.- Policy statements are intended to demonstrate senior management
commitment to the quality system. Objectives should include measurable targets, which are
reviewed at least annually. Training records should include details of the extent to which
familiarity with the quality system has been assessed.

7.5 - In cases where an inspection body has a number of offices in different locations,
responsibility for the practical maintenance of the quality system should be assigned to a
named individual in each office.





7.6 - The document control system shall be documented. A statement that documents will
be controlled is not sufficient.

7.6a - There must be a clear and authoritative means for all employees to identify the
current authorized version of any controlled document.

7.6a - Effective systems must be in place to ensure that each relevant employee has been
made aware of and understands updates to any document that could affect the conduct,
outcome, recording or reporting of an inspection.

7.6b - It must always be possible to identify the individual who is responsible for the
technical validity of any specific technical document.

7.7 - Internal audits cannot be considered to meet the requirements of ISO/IEC 17020
unless there is evidence of effective corrective action following identification of any non-
compliances.

7.8 - Feedback includes internal feedback for the purposes of improvement, as well as
complaints and preventive action.

7.8 - Procedures for feedback and corrective action should normally include but not be
limited to the following constituents:

      Description of the issue
      Investigation of the cause
      Description of immediate action taken
      Description of corrective action to be taken to prevent recurrence
      Identification of the person responsible for corrective action
      Target date for completion of corrective action
      Monitoring of progress of corrective action
      Sign off of completed corrective action.

Records of feedback and corrective action should be kept.

7.9 - An important aspect of management review is the identification of trends in all forms
of feedback that may indicate areas of the quality system that would benefit from review.
This aspect of management review need not be carried out more than once a year unless
there are large volumes of feedback suggesting an urgent need for review.

7.9 - The outcome of a management review should include the setting of objectives for the
coming period, proposed improvements to the quality system or an explicit statement that
no improvements are required.

8.2 Personnel.- For professional judgment to be exercised the staff member responsible for
inspection, referred to in clause 8.2 of ISO/IEC 17020, should personally perform the
inspection or effectively supervise the inspection.






8.4 - All stages of training, such as those detailed in clause 8.3 of ISO/IEC 17020, should
be recorded.

       NOTE 1 - Training records do not establish competence. They are a statement that
       the management considers the individual to be competent to perform specific
       inspection tasks and, where relevant, to use specific equipment.

       NOTE 2 - Training records should detail competence levels assessed in all relevant
       technical and administrative areas and should be reviewed regularly (normally
       annually).

8.6 - In cases where it is impossible to separate remuneration from the number of
inspections done, e.g. in very small inspection bodies, other means, such as recording the
duration of inspections, should be established to ensure that the quality of inspections is not
compromised by financial considerations.

9.7 Facilities and equipment.- The definition of measurement traceability given in ILAC
P10:2002 should be applied in understanding this clause.

9.7 - Where accuracy requirements permit calibrations of working instruments to be
performed in-house, traceability to national standards should be assured by the use of
reference standards of measurement for which the inspection body holds current traceable
calibration certificates. The calibration certificate should detail an uncertainty of
measurement that is appropriate for the equipment that is to be calibrated from the
reference standard. For further information on uncertainty of measurement see ISO/IEC
17025, clause 5.4.6, and EA-4/02 “Expressions of the Uncertainty of Measurements in
Calibration”.
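As an illustrative sketch only (the component values below are hypothetical, not taken from the standard), the root-sum-of-squares (RSS) combination of independent standard uncertainties and the expanded uncertainty described in the GUM and EA-4/02 can be computed as:

```python
import math

def combined_standard_uncertainty(components):
    """Combine independent standard uncertainty components by
    root-sum-of-squares (RSS), as described in the GUM."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c; a coverage factor k = 2
    gives roughly 95 % coverage for a normal distribution."""
    return k * u_c

# Hypothetical components (in mm): reference standard certificate,
# instrument resolution, repeatability of readings
u_c = combined_standard_uncertainty([0.010, 0.005, 0.008])
U = expanded_uncertainty(u_c)  # reported with k = 2
```

The component values and units are placeholders; an actual uncertainty budget must list each contribution with its source and distribution, per the GUM.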

9.9 - Records of in-service checks on equipment should be maintained.

9.10 - In situations where non-certified reference materials are used, inspection reports
should clearly state that the conclusion on conformity was based on uncertified reference
materials.

9.13a - In-house developed software such as spreadsheet programs shall be validated before
use. Validation may be accomplished by processing a known data set and performing
equivalent processing manually or by other means. The extent of the known data set should
be such that all possible outcomes of the software manipulation can be adequately checked.
Software should be protected from unauthorised alteration. Unauthorised alteration may be
detected by processing the known data set periodically. Records of software validations and
any necessary periodic checks should be maintained.
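As a minimal sketch of this approach (the routine, data set and expected value below are hypothetical, not taken from the standard), validating an in-house calculation against a known data set, and re-running the check periodically to detect alteration, could look like:

```python
def mean_of_readings(readings):
    """In-house routine under validation (hypothetical example,
    standing in for a spreadsheet calculation)."""
    return sum(readings) / len(readings)

# Known data set whose outcome has been verified by equivalent
# manual processing (hypothetical values)
KNOWN_INPUT = [9.8, 10.0, 10.2, 10.0]
EXPECTED_MEAN = 10.0

def validate():
    """Process the known data set and compare the result with the
    manually verified expected outcome."""
    result = mean_of_readings(KNOWN_INPUT)
    return abs(result - EXPECTED_MEAN) < 1e-9

# Run at validation time and periodically thereafter; a failure may
# indicate an unauthorised or accidental alteration of the software.
assert validate()
```

The known data set should be extended so that all branches and possible outcomes of the real calculation are exercised, and each validation run should be recorded.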

9.13a - All software, including proprietary products, should be controlled in an equivalent
way to hard-copy documents. Records of version numbers and dates when each version was
brought into or taken out of service should be maintained.





9.13d - Where electronic records are the primary storage medium, appropriate methods
such as regular backups and offsite safekeeping of backups should be implemented. The
frequency of backups should be set to reduce the risk of loss to an acceptable level.

10.3 Inspection methods and procedures.- All non-standard methods should be authorised
by the technical manager or another technically qualified person. Non-standard methods
should be documented, retained and referenced in relevant reports.

10.7 - Checking points should be identified in operating procedures. The extent of checking
should also be defined, e.g. checks for completeness, for technical consistency or
typographical errors.

10.7 - There should be documentary evidence that checks have been done. This evidence
should include the identity of the checker and the date of the check.

10.7 - An inspection body that performs large numbers of routine inspections may perform
less than 100% checking. In such cases, justification of the sampling method and sample
size should be documented.
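One way to document such a justification (an illustrative statistical sketch, not a requirement of the standard) is to check enough randomly selected reports that, if the true error rate exceeded a chosen threshold, at least one error would very likely be found:

```python
import math

def min_checks(defect_rate, confidence):
    """Smallest number of randomly selected reports to check so that,
    if the true error rate is at least `defect_rate`, at least one
    erroneous report is found with probability `confidence`.
    Derived from P(no error found in n checks) = (1 - p)^n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - defect_rate))

# e.g. detecting a 2 % error rate with 95 % confidence
n = min_checks(0.02, 0.95)
```

The thresholds here are arbitrary examples; the inspection body would choose and record values appropriate to the risk of its routine inspections.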

12.2 Records.- Inspection records should be detailed and comprehensive. Satisfactory
evaluation of the inspection may require the following types of information, among others,
to be recorded:

      Client instructions
      Details of job review
      Details of the items inspected
      Inspection conditions
      Information provided by the client
      Identity of the person who performed the inspection
      Equipment used
      Equipment verification records
      The inspection procedures used
      Inspection observations
      Conformity decisions (with supporting justification)
      Aspects not inspected, with reasons given, e.g. lack of safe access.

Additional information requirements should be considered at the times of the inspection
and subsequent reporting.

13.2 Inspection reports and inspection certificates.- Accreditation cannot normally be
claimed for an inspection report that relies upon material provided by a subcontractor that
does not have demonstrated competence as required by clause 14.2 of ISO/IEC 17020.
Where regulations or other authoritative requirements stipulate that a report must include a
claim of accreditation, all information, critical to the inspection decision, provided by a
body that does not have demonstrated competence, should be clearly identified as such.





Elements of inspection reports and inspection certificates:

   1. Designation of the document, i.e. as an inspection report or an inspection certificate,
       as appropriate (*);
   2. Identification of the document, i.e. date of issue and unique identification (*);
   3. Identification of the issuing body (*);
   4. Identification of the client;
   5. Description of the inspection work ordered (*);
   6. Date(s) of inspection (*);
   7. Identification of the object(s) inspected and, where applicable, identification of the
        specific components that have been inspected and identification of locations where,
        e.g., NDT methods have been applied (*);
   8. Information on what has been omitted from the original scope of work (*);
   9. Identification or brief description of the inspection method(s) and procedure(s)
       used, mentioning the deviations from, additions to or exclusions from the agreed
       methods and procedures;
   10. Identification of equipment used for measuring/testing;
   11. Where applicable, and if not specified in the inspection method or procedure,
       reference to or description of the sampling method and information on where, when,
       how and by whom the samples were taken;
   12. If any part of the inspection work has been subcontracted, the results of this work
       shall be clearly identified (*);
   13. Information on where the inspection was carried out;
   14. Information on environmental conditions during the inspection, if relevant;
   15. The results of the inspection including a declaration of conformity and any defects
       or other non-compliances found (results can be supported by tables, graphs,
       sketches and photographs) (*);
   16. A statement that the inspection results relate exclusively to the work ordered or the
       object(s) or the lot inspected;
   17. A statement that the inspection report shall not be reproduced except in full without
       the approval of the inspection body and the client;
   18. The inspector’s mark or seal;
   19. Names (or unique identification) of the staff members who have performed the
       inspection and in cases when secure electronic authentication is not undertaken,
       their signature. (See also clause 13.3 of ISO/IEC 17020) (*)

   NOTE - The elements of inspection reports/certificates that are considered to be
   mandatory for compliance with EN 45004 (ISO/IEC 17020) are marked with an asterisk
   (*). See also EA 5/01 “EA Guidance on the application of EN 45004 (ISO/IEC
   17020)”.
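As an illustration only, the mandatory (asterisked) elements above could be checked
programmatically before a report is released. The field names below are hypothetical, not
taken from ISO/IEC 17020; this is a minimal sketch of such a completeness check:

```python
# Hypothetical field names mirroring the asterisked elements
# (items 1-3, 5-8, 12, 15 and 19 in the list above).
MANDATORY_FIELDS = {
    "designation", "document_id", "issuing_body", "work_ordered",
    "inspection_dates", "objects_inspected", "omissions",
    "subcontracted_results", "results", "inspector_names",
}

def missing_mandatory(report: dict) -> set:
    """Return the mandatory report elements that are absent or empty."""
    return {f for f in MANDATORY_FIELDS if not report.get(f)}

# A draft report still missing several mandatory elements:
draft = {"designation": "inspection report", "document_id": "IR-001",
         "issuing_body": "Example Inspection Ltd", "results": "conforms"}
print(sorted(missing_mandatory(draft)))
```

A report would only be released once `missing_mandatory` returns an empty set.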

14.2 Subcontracting.- To maintain consistent standards of assessment of competence (as
required by international MRAs among accreditation bodies), where the assessment of a
subcontractor is carried out by an inspection body, it should be demonstrable that the
assessment team is technically competent and knowledgeable in the application of
ISO/IEC 17020 or ISO/IEC 17025, as appropriate, and that the assessing body complies
with the requirements of ISO/IEC 17011.






14.4 - Clause 14.4 refers to work outside the accredited scope of the inspection body, the
results of which have a critical influence on conformity decisions in the inspection body’s
reports or certificates.

14.4 - Inspection reports that rely on data or services not covered by a scope of
accreditation for their conclusions cannot include any reference to accreditation except
under the circumstances outlined in clause 14.4c of the ILAC/IAF guidance document on
ISO/IEC 17020.

14.4 - When an endorsed inspection report or certificate is required by regulations and no
providers of demonstrated competence are available for a particular supporting service, the
report or certificate shall state prominently that the conformity decision is made in good
faith, based on information of unknown reliability, and that the conformity decision is not
covered by an accreditation. The report or certificate should clearly identify which
information is of unproven reliability. The inspection body shall take full responsibility for
the conformity decision irrespective of the source of any information required to support
the decision.

15.1 Complaints and appeals.- Complaints should include all feedback from dissatisfied
clients, regulators or other stakeholders, however received.








Chapter 3
The accreditation process


3.1 Scope of accreditation

The scope of accreditation specifies the activities to which the accreditation body considers
the accreditation to apply. It also specifies which locations are covered if the accreditation
is for an organisation with more than one site.

The scope of accreditation for inspection bodies states the field of inspection (for example
products or product groups, installations, processes and services that are subject to the
inspection) for which the body is accredited and which types of inspection activities (for
example design evaluation, new-building inspection, inspection during use) are carried out.

The methods and procedures (for example standards, specifications, legal guidelines, EU
guidelines) used by the body are also specified in the scope.

The inspection body shall demonstrate to the accreditation body that it has carried out all
the types of inspections mentioned in the scope. The assessment team shall be given the
opportunity to witness the inspections onsite. If the inspections are implemented or
controlled from several branches, then these branches are listed in the scope description.
During its assessment, the assessment team will visit the branches.


3.2 Accreditation process

Inspection body accreditation is the process by which an accreditation body gives formal
recognition that an inspection body is competent and impartial to carry out inspection
services according to ISO/IEC 17020. The objective of accreditation is to assure clients
of the quality of the inspection body's services and to avoid duplicate inspections that
might otherwise be required of clients.

The diagram (Fig. 3 on the next page) presents an overview of the accreditation process 1. The
accreditation process consists of registration followed by a preliminary investigation and
an assessment. The process is completed with an accreditation decision. After the
accreditation is granted, the accreditation maintenance phase begins.

1
    Accreditation process may vary slightly from one accreditation body to another.






Fig. 3 Overview of an accreditation process (RvA)




3.2.1 Registration

The accreditation process starts as soon as the organisation (inspection body) registers. For
this purpose the organisation will return to the accreditation body an application form
including the following documents:

       Proof of legal status;
       A description of how the inspection body is organized;
       A description of the parent organization and other related bodies within whose
        framework the organization operates; and
       A statement of other activities, if any, conducted by the legal entity.

After having reviewed the application, the accreditation body confirms the registration and sends
the organisation an additional registration form (specific to the area for which
accreditation is requested), the regulations and an invoice for the registration. Reference is also
made to the web site, where an overview of relevant documents is provided per type of
accreditation.

A registration is valid for a period of three years. The organisation is notified when this
period is nearing its end and the accreditation body has not yet accredited the organisation.
During this period, the organisation is kept up to date regarding any changes in procedures
and requirements.


3.2.2 Preliminary investigation

As soon as an organisation considers itself prepared for accreditation, it contacts the
accreditation body for further action. The accreditation body will start the assessment with
a preliminary investigation. The aim of the preliminary investigation is to determine
whether the organisation is sufficiently prepared for an initial assessment to have a
reasonable chance of success. The preliminary investigation is also meant to obtain a clear
idea of the intended scope of the accreditation.

The preliminary investigation consists of a check, by a qualified lead assessor, of the
organisation’s documents for compliance with the criteria of the relevant standard. The
documents that have to be submitted by the applicant for this purpose are:

       Documents referred to in the application form, if these have changed since
        registration;
       Articles of association of the organization, in case of application for type A
        inspection body;
       Top tier document, for example the quality manual, in which the management
        system is documented;
       General management system procedures; and
       Cross reference table between accreditation criteria and documented system.





After assessing the documents, the assessor draws up a report on the preliminary
investigation in which the findings regarding the relevant standard are listed. Instead of a
report, the findings may be discussed and explained during a visit to the organisation. In the
latter case, the written report is limited to a summary of the observed nonconformities from
the standard, remarks regarding the implementation of the system and a listing of the
actions agreed upon. It is of course also possible to combine a more detailed report with a
visit.

If the preliminary investigation reveals that the system described by the applicant is not
adequate to meet the accreditation criteria, such concerns shall be resolved to the satisfaction of
the accreditation body before further resources are expended on the assessment. The
organisation is informed of this in writing. The organisation then has one opportunity to
take corrective actions that are demonstrably adequate. The period of time between the
preliminary investigation and the conclusion of the initial assessment cannot exceed one
year.

If the organisation does not succeed in resolving the deficiencies in the documentation,
the accreditation body and the organisation will jointly decide on the follow-up.


3.2.3 Assessment

Accreditation bodies usually utilise the following assessment methods:

      Document review: assessing quality manuals, procedures, etc. for compliance with
       the criteria. A document review can also involve records at the organisation, such as
       personnel files, quality control charts, audit reports, management review reports,
       audit files etc.;
      Office assessment: an assessment at the premises of the organisation in order to
       assess the implementation of the system;
      Witnessing: observing activities carried out by the organisation (such as inspections);
      Proficiency testing: comparing results obtained by various organisations during
       comparative investigations organised by the accreditation body; and
      Interviews: evaluating the expertise of the organisation's personnel via targeted
       interviews.

An accreditation body assessment usually consists of a combination of these assessment
methods.

A situation that is not in compliance with the accreditation requirements is considered to be
a nonconformity. Accreditation bodies usually distinguish two categories of
nonconformities:

      Category A: The absence of, or failure to implement or maintain one or more
       requirements of the accreditation standard, or a situation that, on the basis of
        objective observations, leads to doubts about the quality of the work carried out by
        the organisation applying for accreditation. In fact, this means that the declarations
       of conformity provided by the organisation are of little or no value.

      Category B: Not having maintained one or more requirements of the accreditation
       standard or a situation that, on the basis of objective observations, leads to doubts
       regarding the assurance of quality of the work carried out by the organisation
       applying for accreditation.

Both categories of nonconformities shall be demonstrably corrected and verified by the
accreditation body before a positive decision can be taken regarding granting an
accreditation. Organisations not yet accredited will have six months to correct
nonconformities and to request a follow-up assessment.


3.2.4 Follow-up assessment

If nonconformities have been found, the organisation shall, in consultation with the
assessment team, determine when the corrective actions will be implemented. The assessment
team determines, via a once-only follow-up assessment, whether the actions taken to
correct the nonconformities are adequate.

As soon as the accreditation body has reviewed the corrective actions regarding the
nonconformities, the final report will be drafted, in which any comments made by the
organisation regarding the draft report of the initial assessment are also taken into consideration.


3.2.5 Accreditation decision

The decision-making process regarding accreditation has three elements:

   1. Based on the results of the assessment, the assessment team delivers the final report
      and its recommendation to the accreditation body.
   2. An independent committee, the accreditation committee, studies the final report and
      the recommendation given by the team and then advises the chief executive of the
      accreditation body as to whether accreditation shall or shall not be granted.
   3. The chief executive may follow the recommendation or may decide otherwise. If the
      chief executive does not follow the committee’s recommendation, a discussion first
      takes place with the board of governors of the accreditation body.

The accreditation body informs the organisation in writing of the decision taken. If a
positive decision is taken, the accreditation body will draft the accreditation documents. In
case of a negative decision, the accreditation body will wait for a period of six months
before accepting a new application from the same organisation.






3.2.6 Maintaining the accreditation

Accreditation is valid for four years. After the accreditation has been granted, it shall be
periodically reviewed with regard to its validity. To this end, accreditation bodies normally
carry out an annual surveillance assessment and a re-assessment within four years.
During the accreditation period, the scope of the accreditation may be changed.


3.3 Accreditation documents

As soon as accreditation is granted, two documents are sent to the organisation:

   1. Accreditation agreement: this contains the rights and obligations of both parties, the
      party providing the accreditation and the party being accredited. This agreement,
      signed by both parties, remains legally binding until one of the two parties cancels
      the agreement.
   2. Accreditation certificate with accompanying appendix (scope of accreditation): the
      certificate specifies the date on which the accreditation was granted, the standard
      according to which the assessment took place and the period of validity of the
      certificate. The appendix to the certificate describes the scope of the accreditation.








Chapter 4
Verification and calibration of equipment


4.1 Definitions

Verification: Confirmation by examination and provision of evidence that specified
requirements have been met. (ISO/IEC Guide 25 - 3.8)

       NOTE 1 - In connection with the management of measuring equipment, verification
       provides a means for checking that the deviations between values indicated by a
       measuring instrument and corresponding known values of a measured quantity are
       consistently smaller than the maximum allowable error defined in a standard,
       regulation or specification peculiar to the management of the measuring equipment.

       NOTE 2 - The result of verification leads to a decision either to restore to service, or
       to perform adjustments, or to repair, or to downgrade, or to declare obsolete. In all
       cases, a written record of the verification performed should be kept on file for the
       measuring instrument.
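NOTE 1 and NOTE 2 can be summarised in a short decision sketch. This is illustrative
only; the threshold value and the wording of the follow-up actions are assumptions, not
taken from any standard:

```python
def verify(indicated: float, known: float, max_allowable_error: float) -> str:
    """Compare an instrument's indication against a known value and
    return a verification decision, as outlined in NOTE 2 above."""
    deviation = abs(indicated - known)
    if deviation <= max_allowable_error:
        return "restore to service"
    # Outside the maximum allowable error: adjust, repair,
    # downgrade or declare obsolete, depending on the case.
    return "adjust/repair/downgrade/declare obsolete"

# A thermometer reading 100.3 degrees C against a 100.0 degrees C
# reference, with a maximum allowable error of 0.5 degrees C:
print(verify(100.3, 100.0, 0.5))  # restore to service
```

In every case the decision, whatever it is, should be written to the instrument's file.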

Calibration: Set of operations that establish, under specified conditions, the relationship
between values of quantities indicated by a measuring instrument or measuring system, or
values represented by a material measure or a reference material, and the corresponding
values realised by standards. (VIM - 6.11)

       NOTE 1 - The result of a calibration permits either the assignment of values of
       measurands to the indicated values, or the determination of corrections with respect
       to the indicated values.

       NOTE 2 - A calibration may also determine other metrological properties such as
       the effect of influence quantities.

       NOTE 3 - The result of a calibration may be recorded in a document, sometimes
       called a calibration certificate or a calibration report.






4.2 Quality system provisions

The organization’s quality system definition, management, audit and review should cover
all the policies, objectives, commitments, procedures, responsibilities, etc., related to the
calibration and maintenance of measuring and test equipment.

The quality manual and related quality documentation (quality management procedures,
including calibration, test, operating, maintenance, etc.) should contain provisions for
control, calibration and verification and maintenance of measuring and test equipment
(M&TE). The organization should assign responsibilities for each of these activities, and
should maintain records of training and competency of the personnel involved.


4.3 Classification and identification of M&TE

The organization should have an inventory of all its major M&TE. The inventory could be
a list or a database with the following contents: code, M&TE description, manufacturer’s
name, type identification and serial number.

The classification of M&TE should establish:

      M&TE subjected to control in the organization;
      M&TE subjected to calibration and/or verification;
      M&TE subjected to corrective and/or preventive maintenance.

The identification should ensure that each item is under control and, when necessary, that
each item can be unambiguously referred to in a record, procedure, etc. Identification is
normally done through identification labels, marks, etc. The system selected should ensure
security and durability.

All M&TE should have a unique identification. Normally, this is ensured by assigning a
unique code to each item. The organization should define the level of items for which a
code is needed (M&TE, accessories, etc) and the information in it (number, type of item,
calibration, verification and/or maintenance requirements, allocation, etc). When a code is
defined, it should normally be included in the identification label, mark or any other
identification system.
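A minimal inventory sketch, with field names mirroring the list contents named in 4.3
above (the codes and manufacturer names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class MTEItem:
    """One inventory entry for an item of measuring and test equipment."""
    code: str              # unique identification code
    description: str
    manufacturer: str
    type_id: str
    serial_number: str

inventory = [
    MTEItem("PRS-001", "Pressure gauge 0-10 bar", "Acme", "PG-10", "SN12345"),
    MTEItem("CAL-002", "Vernier calliper 150 mm", "Acme", "VC-150", "SN67890"),
]

# Unique identification: no two items may share a code.
codes = [item.code for item in inventory]
assert len(codes) == len(set(codes))
```

The same code would normally appear on the item's identification label or mark.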


4.4 Records

The organization should maintain records for each item of M&TE relevant to the
calibrations or tests performed. Each record of M&TE should include, where appropriate:

      The name of the item of equipment;
      The manufacturer’s name, type identification, and serial number or other unique
       identification;




      Date received and date placed in service;
      Current location, if appropriate;
       Condition when received (e.g. new, used, reconditioned);
      Copy of the manufacturer’s instructions, where available;
      Dates and results of calibrations and/or verifications and date of next calibration
       and/or verification;
      Dates and results of preventive maintenance and adjustments carried out to date and
       planned for the future; and
      History of any damage, malfunction, modification or repair.


4.5 Management of M&TE

The inspection body should be furnished with all M&TE required for the correct
performance of its work.

The organization should define a purchasing process for M&TE including:

      Specifications of characteristics required, taking into account requirements on
       tolerances and uncertainties;
      Selection of suppliers, including quality requirements and procedures to ensure that
       purchased equipment, materials and services comply with specified requirements
       where no independent assurance of the quality of these services or suppliers is
       available;
      Analysis of offers against specification and selection of equipment; and
      Written order to the supplier including requirements on documentation, calibration,
       delivery period, etc.

Before the M&TE is put into service the organization should check that:

      The M&TE is as ordered;
      Documentation, including calibration or compliance certificates etc (if required), is
       provided and is in order (if it is not, the M&TE should not be put into service);
      The M&TE has not been damaged and works correctly; and
      The M&TE is coded, included in the inventory and identified as established.

   NOTE - All requirements of clause 9 of the ISO/IEC 17020 standard should be taken
   into account.


4.6 Intervals of calibration and verification of M&TE

The intervals of calibration and/or verification should be such as to minimise the risk that
the results of any inspection or test may be affected because the M&TE used has failed to
perform to specified requirements. The intervals selected must take into account the
following:






       The accuracy and permissible limits of errors;
       The stability of the M&TE;
       The purpose and usage (extent and severity, etc.);
       Any experience with similar M&TE;
       The recommendations of the manufacturer; and
       Other characteristics of the item and the organization.

An interval can be defined as: (i) a period of time; (ii) a number of times used; (iii) a
calibration and/or verification before each use; or (iv) a combination of these. The
intervals selected should be documented (e.g. in record cards, calibration and/or verification
certificates).

To ensure continued accuracy, an interval may be revised to take into account the results of
previous calibrations and verifications, changes in usage, etc. Any revision of an interval of
calibration and/or verification must be justified by documented historical evidence (e.g. in
record cards, change-of-interval reports).


4.7 Calibration and verification program

The calibration and verification program should contain, for each calibrated and/or verified
M&TE, the dates of the last and next calibration and/or verification. This program can be
defined in a list, a database, a level’s chart, etc. The content could include code,
description, activity, last date and next date.

A listing of outstanding calibrations and/or verifications could be issued regularly, and
periodic audits and daily staff checks before use could be carried out to ensure that the
program is being maintained.
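The outstanding-calibration listing mentioned above can be generated directly from the
program data. A sketch under the assumption that the program is kept as (code, next due
date) pairs; the codes and dates are invented:

```python
from datetime import date

# Hypothetical calibration program: M&TE code -> date of next calibration.
program = {
    "PRS-001": date(2024, 3, 1),
    "CAL-002": date(2025, 9, 15),
    "THM-003": date(2024, 1, 10),
}

def outstanding(program: dict, today: date) -> list:
    """List the M&TE codes whose next calibration date has passed."""
    return sorted(code for code, due in program.items() if due <= today)

print(outstanding(program, date(2024, 6, 1)))  # ['PRS-001', 'THM-003']
```

Such a listing could be issued periodically and checked during internal audits.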


4.8 Calibration and verification procedures

Where calibration or verification is carried out in-house, the staff should have the
education, training, technical knowledge and experience necessary for performing these
activities in accordance with documented procedures.

   NOTE - If an accredited laboratory is to be engaged to carry out external calibrations,
   the inspection body must ensure that the measurement uncertainty achieved is
   appropriate for the intended use of the calibrated instrument.

The documentation of calibration and verification procedures should be sufficient to ensure
proper implementation, and to ensure consistency of application from one occasion to
another. The contents should include:






      Identification of the M&TE that may be calibrated and/or verified using the
       procedure;
      Identification of all measurement standards and associated equipment required to
       perform the calibration/verification;
      Stipulated environmental conditions and, if relevant, location;
      Preparation requirements (conditioning, M&TE warming, handling, checks, etc.);
      Sequence of activities;
      Details of the measurement or calibration data to be recorded; and
       Analysis of uncertainty of measurement.

   NOTE - The organization should define a method for the analysis of uncertainties in
   accordance with internationally accepted methods. This documented method should be
   applied to estimate the limits of the uncertainties associated with each
   calibration/verification method. The analysis, including a quantification of each
   component of uncertainty considered, could be included in the calibration/verification
   procedure. Estimated uncertainties of measurement should be used to define or reduce
   permissible limits of error and limits of tolerance. (See chapter 6)
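As an illustration of this NOTE, independent standard uncertainty components are
commonly combined in quadrature (root-sum-of-squares), following the internationally
accepted GUM approach for uncorrelated inputs. The numerical values below are invented:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares of independent standard uncertainty
    components (GUM approach, uncorrelated inputs)."""
    return math.sqrt(sum(u ** 2 for u in components))

# e.g. reference standard, resolution and repeatability contributions (mm):
u_c = combined_standard_uncertainty([0.003, 0.004, 0.012])
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (approx. 95 %)
print(round(u_c, 4), round(U, 4))
```

The expanded uncertainty U would then be compared against the permissible limits of
error for the instrument being calibrated.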

The calibration and verification procedures could be in the form of a document, or a set of
documents (comprising manufacturer’s recommendations, published standard measurement
practices, etc) which would be included in the organization’s quality system.

       NOTE - Most calibration procedure documentation can be done on a computer.


4.9 Calibration and verification records

The organization should ensure that all calibration and verification records are complete
and contain sufficient information to permit their repetition (personnel, date, environmental
conditions, procedure, measurements, calculations, results, etc.).

The recorded information for each internal calibration and/or verification should include:

      The description and unique identification of the M&TE;
      The description and unique identification of each item of equipment used in the
       performance of the calibration or verification and its traceability;
      The date on which each calibration or verification was carried out;
      The calibration/verification results obtained after and, where relevant, before any
       adjustment and repair, and/or a statement of compliance with an identified
       metrological specification;
      A statement of the uncertainties assigned to the calibration/verification performed;
      Details of any maintenance or adjustment carried out;
      Any limitations in use imposed as a consequence of the calibration/verification; and
      Identification of the person(s) performing the calibration/verification.






       NOTE – When the inspection body chooses external calibration of its M&TE, the
       calibration laboratory should be duly accredited according to ISO/IEC 17025 and the
       contents of the calibration certificates2 should include all the information required in
       clause 5.10 of that standard.

A copy of the calibration certificate should be maintained as a record.


4.10 Calibration labelling

Each M&TE should be securely and durably labelled by a self-adhesive label, by a tie-on
label or by direct marking. The labelling or marking should clearly indicate the calibration
status and any restriction of use.

The organization may use different labels to:

           Indicate that the M&TE may be used within its permissible limits of error - the
            label should display code, description, calibration date, next calibration date, and
            authorised signature;
           Indicate that the M&TE has a restriction in its use (accuracy, measurement range,
            etc) - the label should display code, description, calibration date, next calibration
            date, authorised signature and limitation in use; and
           Indicate that the M&TE has been withdrawn from use.


4.11 Sealing for integrity

The organization should have procedures to prevent any adjustment of M&TE other than
that intended to be made by the user.

Tamper-proof seals (labels, solder, wire, paint) should be designed so that any unauthorised
adjustment is clearly indicated. This is also applicable to software used in the M&TE.




2
    Displaying the logo of the accreditation scheme.






Chapter 5
Sampling


5.1 Foreword

“The inspection body shall have and use adequate documented instructions […] on standard
sampling […] techniques…” (ISO/IEC 17020 – 10.2). To meet this requirement, the
organization should select adequate sampling procedures from more than 150 standards on
sampling offered by ISO, classified by products or service, or develop its own sampling
methods.

Military Standards from the USA such as MIL-STD-105, or ISO standards such as ISO
2859 3, may also prove very useful.

This chapter introduces the inspection body to different sampling methods.


5.2 Why sampling?

When organisations require data they either use data collected by somebody else
(secondary data) or collect it themselves (primary data). This is usually done by sampling,
that is, collecting data from a representative sample of the population they are interested in.

In statistics we define a population as the collection of all the items about which we want to
know some characteristics. It is usually far too expensive and too time consuming to collect
information from every member of the population, exceptions being the general elections
and the census, so instead we collect it from a sample.

If it is to be of any use the sample must represent the whole of the population we are
interested in, and not be biased in any way. This is where the skill in sampling lies: in
choosing a sample that will be as representative as possible. As a general rule the larger the
sample, the better it is for estimating characteristics of the population.

Although information about our sample will be of immediate interest, the point of
collecting it is usually to deduce information about the entire population. In statistics this is
called making inferences. If such inferences are to be reliable then the sample must be

3
    Sampling plans indexed by limiting quality (LQ) for isolated lot inspection.




truly representative of the population, i.e. free from bias. The basis for selecting any sample
is the list of all the subjects from which the sample is to be chosen - this is the sampling
frame.


5.3 Systematic sampling

This means random sampling with a system. From the sampling frame, a starting point is
chosen at random, and items are thereafter chosen at regular intervals.

For example, suppose you want to sample 8 houses from a street of 120 houses. 120/8=15,
so every 15th house is chosen after a random starting point between 1 and 15. If the
random starting point is 11, then the houses selected are 11, 26, 41, 56, 71, 86, 101, and
116.

If there were 125 houses, 125/8=15.625, so should you take every 15th house or every 16th
house? If you take every 16th house, 8*16=128 so there is a risk that the last house chosen
does not exist. To overcome this the random starting point should be between 1 and 13.

On the other hand if you take every 15th house, 8*15=120 so the last five houses will never
be selected. The random starting point should now be between 1 and 20 to ensure that
every house has some chance of being selected.

In a random sample every member of the population has an equal chance of being chosen,
which is clearly not the case here, but in practice a systematic sample is almost always
acceptable as being random.
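The house-selection arithmetic above can be sketched in a few lines of Python (a minimal illustration, not part of any standard; the function name and seed handling are our own):

```python
import random

def systematic_sample(population_size, sample_size, seed=None):
    """Draw a systematic sample: a random starting point, then units
    at a fixed interval through the sampling frame."""
    rng = random.Random(seed)
    interval = population_size // sample_size            # e.g. 120 // 8 = 15
    # The start may range up to population_size - (sample_size - 1) * interval,
    # so that every unit in the frame has some chance of selection.
    max_start = population_size - (sample_size - 1) * interval
    start = rng.randint(1, max_start)
    return [start + i * interval for i in range(sample_size)]

print(systematic_sample(120, 8))   # start between 1 and 15, interval 15
print(systematic_sample(125, 8))   # start between 1 and 20, interval 15
```

With 120 houses the starting point falls between 1 and 15, and with 125 houses between 1 and 20, as in the worked example above.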

Advantages:

      Spreads the sample more evenly over the population; and
      Easier to conduct than a simple random sample.

Disadvantages:

      The system may interact with some hidden pattern in the population, e.g. every third
       house along the street might always be the middle one of a terrace of three.


5.4 Cluster sampling

In cluster sampling the units sampled are chosen in clusters, close to each other. Examples
are households in the same street, or successive items off a production line.

The population is divided into clusters, and some of these are then chosen at random.
Within each cluster units are then chosen by simple random sampling or some other






method. Ideally the clusters chosen should be dissimilar so that the sample is as
representative of the population as possible.
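As a minimal sketch, the two-stage selection just described might look as follows in Python (the streets, households, and parameter names are all hypothetical):

```python
import random

def cluster_sample(clusters, n_clusters, n_per_cluster, seed=None):
    """Two-stage cluster sampling: choose clusters at random, then take
    a simple random sample of units within each chosen cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)      # stage 1: clusters
    sample = []
    for name in chosen:
        sample.extend(rng.sample(clusters[name], n_per_cluster))  # stage 2
    return sample

# Hypothetical data: streets (clusters) and the households in each.
streets = {
    "Elm St": [f"Elm {i}" for i in range(1, 31)],
    "Oak St": [f"Oak {i}" for i in range(1, 41)],
    "Ash St": [f"Ash {i}" for i in range(1, 26)],
    "Fir St": [f"Fir {i}" for i in range(1, 21)],
}
print(cluster_sample(streets, n_clusters=2, n_per_cluster=5, seed=42))
```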

Advantages:

      Saving of travelling time, and consequent reduction in cost; and
      Useful for surveying employees in a particular industry, where individual
       companies can form the clusters.

Disadvantages:

      Units close to each other may be very similar and so less likely to represent the
       whole population; and
      Larger sampling error than simple random sampling.


5.5 Stratified sampling

In a stratified sample the sampling frame is divided into non-overlapping groups or strata,
e.g. geographical areas, age groups, or genders. A sample is taken from each stratum; when
this sample is a simple random sample, it is referred to as stratified random sampling.

Advantages:

      Stratification will always achieve greater precision provided that the strata have
       been chosen so that members of the same stratum are as similar as possible in
       respect of the characteristic of interest. The bigger the differences between the
       strata, the greater the gain in precision;
      It is often convenient to stratify a sample. The results from each stratum may be of
       intrinsic interest and can be analysed separately; and
      It ensures better coverage of the population than simple random sampling.

Disadvantages:

      Difficulty in identifying appropriate strata.
      More complex to organise and analyse results.

   NOTE 1 - In general the size of the sample in each stratum is taken in proportion to the
   size of the stratum. This is called proportional allocation.

   NOTE 2 - Sometimes there is greater variability in some strata compared with others. In
   this case, a larger sample should be drawn from those strata with greater variability.
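The proportional allocation of NOTE 1 can be illustrated with a short Python sketch (the strata sizes are hypothetical, and the largest-remainder rounding is just one possible way to make the allocations sum exactly):

```python
def proportional_allocation(strata_sizes, total_sample):
    """Allocate a total sample across strata in proportion to stratum size,
    using largest-remainder rounding so the allocations sum exactly."""
    population = sum(strata_sizes.values())
    quotas = {k: total_sample * n / population for k, n in strata_sizes.items()}
    alloc = {k: int(q) for k, q in quotas.items()}
    # Hand the remaining units to the strata with the largest fractional parts.
    remainder = total_sample - sum(alloc.values())
    for k in sorted(quotas, key=lambda k: quotas[k] - alloc[k], reverse=True)[:remainder]:
        alloc[k] += 1
    return alloc

# Hypothetical strata: age groups in a workforce of 1000, sample of 100.
print(proportional_allocation({"18-34": 450, "35-54": 350, "55+": 200}, 100))
# → {'18-34': 45, '35-54': 35, '55+': 20}
```

For NOTE 2 (optimum rather than proportional allocation) the quotas would additionally be weighted by each stratum's variability.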






5.6 Quota sampling

In quota sampling the selection of the sample is made by the practitioner, who has been
given quotas to fill from specified sub-groups of the population. For example, an
interviewer may be told to sample 50 females between the ages of 45 and 60.

There are similarities with stratified sampling, but in quota sampling the selection of the
sample is non-random. Anyone who has had the experience of trying to interview people in
the street knows how tempting it is to ask those who look most helpful, hence it is not the
most representative of samples, but extremely useful.

Advantages:

      Quick and cheap to organise

Disadvantages:

      Not as representative of the population as a whole as other sampling methods; and
      Because the sample is non-random, it is impossible to assess the sampling error.


5.7 Sampling errors

Estimates derived from sample surveys are subject to two types of error: sampling errors
and non-sampling errors. Non-sampling errors can be attributed to many sources, such as
response differences from materials, definitional difficulties, differing interpretations, and
inability to recall information.

Sampling errors occur when estimates are derived from a sample rather than a complete list
of items to be inspected. The sample used for a particular inspection is only one of a large
number of possible samples of the same size and design that could have been selected. This
difference, termed sampling error, occurs by chance, and its variability is measured by the
standard error associated with a particular survey.

Another related term is the variance which is the square of the standard error and is
sometimes used in standard error calculations.
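The standard error of a sample mean, and the related variance, can be computed as below (a minimal sketch; the measurement values are hypothetical):

```python
import math

def standard_error(sample):
    """Standard error of the sample mean: s / sqrt(n), where s is the
    sample standard deviation (n - 1 in the denominator)."""
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
    return math.sqrt(s2 / n)

# Hypothetical inspection measurements (mm):
data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
se = standard_error(data)
print(f"standard error = {se:.4f}, variance of the mean = {se**2:.5f}")
```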








Chapter 6
Determination of uncertainty


6.1 Definition

According to the International Vocabulary of Basic and General Terms in Metrology,
uncertainty of measurement is a parameter, associated with the result of a measurement,
that characterises the dispersion of the values that could reasonably be attributed to the
measurand. This parameter may be a standard deviation (or a given multiple of it), or the
half-width of an interval having a stated level of confidence. It is important to consider not
only the single measurement but also the overall result of a test. In this case uncertainty of
measurement embraces all components of a test. Some of them may be obtained by
interpreting the statistical spread of results of a series of measurements.

Other components have to be worked out by complementary methods (sampling plans,
experience). Testing results should be the best approximation to the true value. Both
random and systematic effects contribute to the uncertainty of measurement of the testing
results. If possible, systematic effects should be eliminated, for instance by applying
correction factors.


6.2 Factors contributing to uncertainty of measurement

Consideration should be given to the different factors which may contribute to the overall
uncertainty of a measurement (not all are relevant in all cases). Some examples are given
below:

      Definition of the measurand;
      Sampling;
      Transportation, storage and handling of samples;
      Preparation of samples;
      Environmental and measurement conditions;
      The personnel carrying out the tests;
      Variations in the test procedure;
      The measuring instruments;
      Calibration standards or reference materials;
      Software and/or, in general, methods associated with the measurement; and





          Uncertainty arising from correction of the measurement results for systematic
           effects.

The above factors or components should be classified according to the method used to
estimate their numerical values:

      A. Those which are evaluated by statistical methods; and
      B. Those which are evaluated by other means.


6.3 Policy and concept of uncertainty

Uncertainty of measurement has to be taken into account when testing procedures and/or
testing results are compared with each other or against specifications. An understanding of
the concept of uncertainty of measurement is important in order to be able to choose testing
methods that are fit for purpose. The overall uncertainty of measurement should be
consistent with the given requirements.

Testing laboratories must report uncertainty estimates where specified by the method,
where required by the client and/or where the interpretation of the result could be
compromised by a lack of knowledge of the uncertainty. This should at least be the case
where testing results have to be compared to other testing results or other numerical values,
such as specifications. In any case laboratories should know the uncertainty associated with
a measurement whether it is reported or not.

Some tests are purely qualitative and consideration is still being given as to how uncertainty
of measurement applies in such cases. One approach is to estimate the probability of false
positive or false negative results. The issue of estimating uncertainty of measurement in
regard to qualitative results is recognised as an area in which further guidance is required.
As a first step, inspection bodies should concentrate on the introduction of uncertainty of
measurement for quantitative results.


6.4 Guidance on implementation

The implementation of the concept of uncertainty of measurement has to be in line with
implementation of the standard. To start with it is necessary to agree on the following
fundamental points:

      1. The statement of uncertainty of measurement should contain sufficient information
         for comparative purposes;
      2. The GUM4 and ISO/IEC 17020 form the basic documents but sector specific
         interpretations may be needed;
      3. Only uncertainty of measurement in quantitative testing is considered for the time
         being. A strategy on handling results from qualitative testing has to be developed;

4
    ISO Guide to the expression of uncertainty in measurement.




   4. The basic requirement should be either an estimation of the overall uncertainty, or
      identification of the major components followed by an attempt to estimate their size
      and the size of the combined uncertainty;
   5. The basis for the estimation of uncertainty of measurement is to use existing
      knowledge. Existing experimental data should be used (quality control charts,
      validation, round robin tests, handbooks etc.);

   6. When using a standard test method there are three cases:

              When using a method that contains guidance on uncertainty evaluation,
               testing laboratories are not expected to do more than follow the
               uncertainty evaluation procedure given in the standard;
              If a standard gives a typical uncertainty of measurement for test results,
               laboratories are allowed to quote this figure if they can demonstrate full
               compliance with the test method; and
              If a standard implicitly includes the uncertainty of measurement in the test
               results there is no further action necessary.

          Inspection bodies should not be expected to do more than take notice of, and
          apply the uncertainty-related information given in the method, i.e. quote the
          applicable figure, or perform the applicable procedure for uncertainty estimation.

   7. The required depth of the uncertainty estimations may be different in different
      fields. Factors to be taken into account include:

              Common sense;
              Influence of the uncertainty of measurement on the result (appropriateness
               of the determination);
              Classification of the degree of rigour in the determination of uncertainty of
               measurement.

   8. In certain cases it can be sufficient to report only the reproducibility;
   9. When the estimation of the uncertainty of measurement is limited any report of the
       uncertainty should make this clear; and
   10. There should be no development of new guides where usable guides already exist.

6.5 Type A evaluation of standard uncertainty

A type A evaluation of standard uncertainty may be based on any valid statistical method
for treating data.

Examples are calculating the standard deviation of the mean of a series of independent
observations; using the method of least squares to fit a curve to data in order to estimate the
parameters of the curve and their standard deviations; and carrying out an analysis of
variance (ANOVA) in order to identify and quantify random effects in certain kinds of





measurements. If the measurement situation is especially complicated, one should consider
obtaining the guidance of a statistician.
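The first of these examples, the standard deviation of the mean of a series of independent observations, can be sketched as follows (the readings are hypothetical):

```python
import math
import statistics

# Type A evaluation: the standard uncertainty of the mean of n independent
# repeated observations is s / sqrt(n), where s is the experimental
# standard deviation of the series (hypothetical readings, e.g. in °C).
readings = [20.02, 20.05, 19.98, 20.01, 20.03, 19.99, 20.04, 20.00]
n = len(readings)
mean = statistics.mean(readings)
s = statistics.stdev(readings)     # experimental standard deviation
u_a = s / math.sqrt(n)             # Type A standard uncertainty of the mean
print(f"mean = {mean:.3f}, s = {s:.4f}, u_A = {u_a:.4f}")
```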


6.6 Type B evaluation of standard uncertainty

A type B evaluation of standard uncertainty is usually based on scientific judgment using
all the relevant information available, which may include:

      Previous measurement data,
      Experience with, or general knowledge of, the behaviour and property of relevant
       materials and instruments,
      Manufacturer’s specifications,
      Data provided in calibration and other reports, and
      Uncertainties assigned to reference data taken from handbooks.


6.7 Combined standard uncertainty

The combined standard uncertainty of a measurement result is taken to represent the
estimated standard deviation of the result.

It is obtained by combining the individual standard uncertainties (and covariances as
appropriate), whether arising from a type A evaluation or a type B evaluation, using the
usual method for combining standard deviations. This method is often called the law of
propagation of uncertainty and in common parlance the “root-sum-of-squares” or square
root of the sum-of-the-squares or “RSS” method of combining uncertainty components
estimated as standard deviations.
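The "root-sum-of-squares" combination can be sketched as below (assuming independent components and unit sensitivity coefficients; the component values are hypothetical):

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard
    uncertainty components (law of propagation of uncertainty with
    unit sensitivity coefficients and no covariances)."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical components, both expressed as standard deviations in mm:
u_c = combined_standard_uncertainty([0.03, 0.04])
print(f"u_c = {u_c:.3f} mm")   # → u_c = 0.050 mm
```

With covariances, or with sensitivity coefficients other than one, the full law of propagation of uncertainty from the GUM applies instead of this simplified form.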


6.8 Expanded uncertainty

Although the combined standard uncertainty is used to express the uncertainty of many
measurement results, for some commercial, industrial, and regulatory applications, what is
often required is a measure of uncertainty that defines an interval about the measurement
result within which the value of the measurand is confidently believed to lie. The measure
of uncertainty intended to meet this requirement is termed expanded uncertainty, and is
obtained by multiplying the combined uncertainty by a coverage factor.

   NOTE – A complete guide for evaluating and expressing the uncertainty of
   measurement results is the NIST Technical Note 1297. See bibliography at the end of
   this Vademecum.
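A worked sketch of the expanded uncertainty, using the commonly chosen coverage factor k = 2 (approximately 95 % coverage for a normal distribution; the numerical values are hypothetical):

```python
# Expanded uncertainty: U = k * u_c, where u_c is the combined standard
# uncertainty and k is the coverage factor (k = 2 for ~95 % coverage).
u_c = 0.050   # combined standard uncertainty, mm (hypothetical)
k = 2         # coverage factor
U = k * u_c
print(f"result = (10.000 ± {U:.3f}) mm (k = {k})")
```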








Chapter 7
Traceability of measurements


7.1 Definition and hierarchy

The term traceability means a process whereby the indication of a measuring instrument or
a material measure can be compared, in one or more stages, with a national standard for the
measurand in question.

In each of these stages, a calibration has been performed using a standard with a
metrological quality already determined by calibration with a higher level standard. There
is therefore a calibration hierarchy, as shown in fig. 4. The figure illustrates in particular
how an in-house calibration system (right hand side of the diagram) may interact with the
existing metrological infrastructure (left hand side of the diagram).




                        Fig. 4 Metrological infrastructure interaction






7.2 Elements of traceability

Traceability is characterised by a number of essential elements:

   1. An unbroken chain of comparisons going back to a standard acceptable to the
      parties, usually a national or international standard;
   2. Measurement uncertainty: the measurement uncertainty for each step in the
      traceability chain must be calculated according to agreed methods and must be
      stated so that an overall uncertainty for the whole chain may be calculated;
   3. Documentation: each step in the chain must be performed according to documented
      and generally acknowledged procedures; the results must equally be documented;
   4. Competence: the laboratories or bodies performing one or more steps in the chain
      must supply evidence for their technical competence, e.g. by demonstrating that they
      are accredited;
   5. Reference to SI units: the chain of comparisons must end at primary standards for
      the realization of the SI units;
   6. Re-calibrations: calibrations must be repeated at appropriate intervals; the length of
      these intervals will depend on a number of variables, e.g. the uncertainty required,
      frequency of use, manner of use, and stability of the equipment.

In some fields, reference materials take the position of physical reference standards. It is
equally important that such reference materials are traceable to relevant SI units.
Certification of reference materials is a method that is often used to demonstrate
traceability to SI units.


7.3 International level

At the international level, decisions concerning the International System of Units (SI) and
the realization of the primary standards are taken by the Conférence Générale des Poids et
Mesures (CGPM). The Bureau International des Poids et Mesures (BIPM) is in charge of
coordinating the development and maintenance of primary standards and organises
intercomparisons at the highest level.


7.4 National Metrology Institutes

The National Metrology Institutes are the highest authorities in metrology in almost all
countries. In most cases they maintain the ‘national standards’ of the country which are the
sources of traceability for the associated physical quantity in that country. The National
Metrology Institutes ensure that the primary standards themselves are internationally
comparable.

They are responsible for disseminating the units of measurement to users, be they
scientists, public authorities, laboratories or industrial enterprises, and are therefore the top
level of the calibration hierarchy in a country.







7.5 Accredited calibration laboratories

Accredited laboratories are often at the top of a firm’s internal calibration hierarchy. Their
task is then to compare, at appropriate intervals, the firm’s own working standards (factory
standards) with reference standards which are calibrated by a National Metrology Institute
or an accredited laboratory with a suitable best measurement capability.

Many accredited laboratories carry out calibrations for third parties on request, e.g. for firms
that do not have calibration and measurement facilities with suitable equipment, and for
private test laboratories working in the field of product certification.


7.6 In-house calibration

An in-house calibration system5 ensures that all measuring and test equipment used in an
inspection body is calibrated regularly against its own reference standards.

The organization's reference standards shall have traceability of measurement by being
calibrated at an accredited calibration laboratory or a National Metrology Institute. The in-
house calibration may be evidenced by a calibration certificate, a calibration label, or some
other suitable method.

The nature and scope of the metrological control of in-house calibration are at the
discretion of the organization concerned. They must be adapted to the particular
applications so that the results obtained with the measuring and test equipment are
sufficiently accurate and reliable. Accreditation of organisations performing in-house
calibration is not necessary to satisfy the requirements of ISO/IEC 17020.

The hierarchy of standards and the resulting metrological organisational structure for tracing
measurement and test results within an organization to national standards are shown in Fig.
5 on the next page. The user of the standard or the measuring and test equipment is given for
each level of the hierarchy, together with its functions within the structure, the
metrological basis, and the result of its activity (documentation).


7.7 Terminology in the hierarchy of standards

The value of any standard has an uncertainty. In the calibration hierarchy, the higher-
ranking standard has a smaller uncertainty. Each additional subordinate level therefore
leads to an increase in the uncertainty of measurements.

Primary standard: Standard that is designated or widely acknowledged as having the
highest metrological qualities and whose value is accepted without reference to other
standards of the same quantity.

5
    Also called internal calibration.






International standard: Standard recognised by an international agreement to serve
internationally as the basis for assigning values to other standards of the quantity
concerned.

National standard: Standard recognised by a national decision to serve, in a country, as the
basis for assigning values to other standards of the quantity concerned.

Reference standard: Standard, generally having the highest metrological quality available
at a given location or in a given organisation, from which measurements made there are
derived.

Working standard: A standard which, usually calibrated against a reference standard, is
used routinely to calibrate or check material measures, measuring instruments or reference
materials.

   NOTE - Working standards may also at the same time be reference standards. This is
   particularly the case for working standards directly calibrated against the standards of a
   national metrology institute.




                        Fig. 5 Metrological organizational structure








Chapter 8
Validation of methods


8.1 Introduction

The definition used for validation is the confirmation by examination and provision of
objective evidence that the particular requirements for a specific intended use are fulfilled.
This definition gives the impression of confined and well-defined (exact) operations.
Inspection and test methods are normally developed for an intended application range. The
reference to particular requirements must in many cases be interpreted in a flexible way as
the requirements can be of general nature. Both standardised and non-standardised methods
are covered.

The validation of a method becomes in this context a way of demonstrating that the method
is fit for its intended purpose. The fitness for purpose includes an assessment and a
balancing of technological possibilities, risks and costs.


8.2 General principles to be used in validation

In the validation process the ultimate aim is to ensure that the test methods are good enough
with respect to representativeness, reproducibility and repeatability.

How much effort should be spent on validation must be decided on a case by case basis.
The frequency of use of the test method should also be considered when determining the
extent of validation. The total consequences of wrong results are of course larger for
methods in extensive use than for test methods used occasionally.

The validation of test methods covers to a large extent the uncertainty, repeatability and
reproducibility of the test method. As the factors affecting the results and contributing most
to the uncertainty change from one technical sector to another or even from one test method
to another, a universal solution cannot be given.

To develop a representative test method, adequate knowledge is required of the practical
use of the test results and of the real service conditions of the object of the test. Based on
such knowledge, the 'representative' properties to be determined by the test may be
identified.





The factors affecting the test results and their uncertainty may be grouped into three main
categories:

      Instrumental and technical factors;
      Human factors; and
      Environmental factors.

Instrumental and technical factors are related to the constructional and functional
characteristics of the test and measurement equipment, as well as to other technical
operations involved in the inspection. Their effect may be minimised and kept under
control by the following provisions:

      Define the equipment as precisely as necessary;
      Provide a clear description of the procedure as well as the equipment operation;
      Establish procedures for operational control and calibration; and
      Ensure where applicable traceability of measurements to the SI units.

Human factors are related to the competence of the staff and may be controlled
through:

      Education/basic knowledge; and
      On-the-job training/practical experience.

The qualification required for the personnel employed for a given test may be specified in
the test method or reference can be made to the applicable internal procedures.

Environmental factors are associated with the environment where the inspection or test is
performed. Among others, the effects of the following parameters must be assessed and
properly controlled:

      Atmospheric conditions (temperature, pressure, humidity);
      Pollution/contamination; and
      Other environmental characteristics.

The effect of the above parameters should be described in the method or reference to other
applicable documents should be made. However, for new test methods this information is
often not available. In some cases the data base for method validation is so large that
statistical methods should be applied.

The validation process must consider the expected or required uncertainty of the test results
and their intended use.

The required depth of the validation process also depends on the maturity of the test
method and the prevalence of its use. One can distinguish between the following
categories:
     Novel methods;




      Methods used by several organizations;
      Modification of established methods; and
      Standardised methods.

The ways in which validation is performed in the different cases need not be clearly
differentiated. If the fitness-for-purpose concept is maintained, it is often possible to
validate at reasonable cost but with a higher degree of uncertainty.

The aim of the validation of methods must always be to demonstrate that the method is fit
for the intended purpose and that the results have an acceptable uncertainty. It is important
that the rules of validation of methods do not prevent the natural technological development
from taking place.

The validation of test methods consists of two interrelated steps:

   1. suitability of the test to solve the problem (customer needs); and
   2. demonstration of the technical capability of the test method within the specified test
      range.

The suitability or representativeness of a test method is in many cases an attribute which is
difficult to define especially for tests related to product acceptance. The test methods must
be such that the results obtained correlate with the performance characteristics and
operational experience of the product.


8.3 Validation procedures

Both testing laboratories and accreditation bodies are looking for procedures and guidelines
for planning and controlling the test method validation process. However, the discussion
above has clearly indicated that one single procedure cannot be developed. Consequently, a
palette of different choices of validation techniques has to be developed. How detailed the
validation will be, depends on the circumstances (needs, costs, possibilities, risks, etc.).

The different validation possibilities are built up around:

      Utilization of calibration;
      Intercomparisons including the use of reference materials and reference methods;
      Well qualified staff and their professional judgement;
      Simulation and modelling; and
      Other approaches.

The validation used can be direct or comparative. Focusing the effort on the most critical
factors affecting the test method will lead to a different solution for the validation of exact
physical and chemical test methods as compared to that for product or subjective testing.






As noted above, different validation procedures may be followed; their effectiveness and
applicability depend on the type of test considered. They can be characterised as
scientific or comparative:

Scientific approach: In the scientific approach the assessment of the representativeness,
repeatability and reproducibility of the method is performed with reference to the different
constitutive elements and features. Evidence should describe the representativeness of the
selected properties and the associated uncertainty. This can be based on information
published in the scientific and technical literature or on ad hoc investigations performed by
the organization developing the method. The organization shall demonstrate that relevant
influencing factors (instrumental and technical, human, environmental) have been analysed
and that they are under control within the uncertainty associated with the method.

Comparative approach: The method is assessed by comparing its results to those obtained
by means of another already validated test method, which has been developed for the same
purposes. If this is not possible, the performance characteristics of the method may be
assessed through interlaboratory comparisons. The method is valid if the results obtained
by the different laboratories fall within the expected uncertainty limit. Deviations beyond
such limits may indicate, e.g., a lack of control of the influencing parameters. The causes of
this behaviour should be clarified and the method redefined accordingly.

The interlaboratory comparison does not always provide a comprehensive validation of the
representativeness of the method, which may be accurate and stable, though physically
wrong.

The organization should always describe the way the validation of methods is done and this
description should be a part of the quality system/manual when appropriate.

As simplified validation procedures (fast validation methods) must be used in many cases,
the ability to use professional judgement in assessing whether the validation is
comprehensive enough becomes all the more important. However, even for simplified or
fast validation procedures, the validation must be done in such depth that the method is fit
for the intended use and acceptable to the customer and/or authorities. It is clear that the
stated use and scope of the method, and the assumed uncertainty, should not be misleading
or too optimistic.


8.4 Repeatability (of results of measurements)

Closeness of the agreement between the results of successive measurements of the same
measurand carried out under the same conditions of measurement. These conditions are
called repeatability conditions.

Repeatability conditions include:

      The same measurement procedure;
      The same observer;

      The same measuring instrument, used under the same conditions;
      The same location; and
      Repetition over a short period of time.

   NOTE - Repeatability may be expressed quantitatively in terms of the dispersion
   characteristics of the results.
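As the NOTE indicates, repeatability is commonly quantified through the dispersion of the results, for instance as the experimental standard deviation of successive readings taken under repeatability conditions. The readings below are invented for illustration.

```python
import statistics

# Five successive readings of the same measurand taken with the same
# procedure, observer, instrument and location over a short period of
# time (repeatability conditions). Values are invented for illustration.
readings = [20.12, 20.15, 20.11, 20.14, 20.13]

mean = statistics.mean(readings)
s_r = statistics.stdev(readings)  # repeatability standard deviation
print(f"mean = {mean:.3f}, s_r = {s_r:.4f}")
```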


8.5 Reproducibility (of results of measurements)

Closeness of the agreement between the results of measurements of the same measurand
carried out under changed conditions of measurement. A valid statement of reproducibility
requires specification of the conditions changed.

The changed conditions may include:

      Principle of measurement;
      Method of measurement;
      Observer;
      Measuring instrument;
      Reference standard;
      Location;
      Conditions of use; and
      Time.

   NOTE - Reproducibility may be expressed quantitatively in terms of the dispersion
   characteristics of the results. Results are here usually understood to be corrected results.
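A simple quantitative view of reproducibility is the dispersion of results for the same measurand obtained under changed conditions. The sketch below pools invented results from three hypothetical observer/instrument combinations; a full precision analysis would additionally separate within- and between-condition components.

```python
import statistics

# Results for the same measurand obtained under changed conditions:
# three hypothetical observer/instrument combinations, three results
# each. All values are invented for illustration.
conditions = {
    "observer 1 / instrument A": [20.13, 20.12, 20.14],
    "observer 2 / instrument B": [20.18, 20.20, 20.19],
    "observer 3 / instrument C": [20.09, 20.10, 20.08],
}

all_results = [x for series in conditions.values() for x in series]
s_R = statistics.stdev(all_results)  # overall dispersion across conditions
print(f"s_R = {s_R:.4f} over {len(all_results)} results")
```

The dispersion across conditions (s_R) is noticeably larger than the dispersion within any single condition, which is exactly what distinguishes reproducibility from repeatability.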








Bibliography and web pages
APLAC TC 003          Management review for laboratories and inspection bodies.
DISCUSS (2004)        Coventry University. Sampling methods.
EA-4/02 (1999)        Expression of the uncertainty of measurement in calibration.
EA-5/01 (2003)        Guidance on the application of EN 45004 (ISO/IEC 17020).
EAL-G3 (1996)         Internal audits and management review for laboratories.
EAL-G12 (1995)        Traceability of measuring and test equipment to national
                      standards.
EAL-G19 (1996)        Calibration and maintenance of measuring and test equipment
                      in testing laboratories.
EAL-P11 (1997)        Validation of test methods. General principles and concepts.
ILAC-G17 (2002)       Introducing the concept of uncertainty of measurement in
                      testing.
ISO/IEC 17000:2004    Conformity assessment. Vocabulary and general principles.
ISO/IEC 17020:1998    General criteria for the operation of various types of bodies
                      performing inspection.
ISO/IEC 17025:1999    General requirements for the competence of testing and
                      calibration laboratories.
MIL-STD-105C (1963)   Sampling procedures and tables for inspection by attributes.
NATA (2004)           Course: Understanding inspection accreditation requirements
                      ISO/IEC 17020.
NIST (1994)           Technical Note 1297 – Guidelines for evaluating and
                      expressing the uncertainty of measurement results.
NIST (1995)           Guide for the use of the International System of Units (SI).
RvA-F1 (2004)         Application form.
RvA-F5 (2001)         Supplementary registration form. Inspection.
RvA-R2 (2002)         Regulations for accreditation.
SANAS F 40 (2004)     General checklist for accreditation of inspection bodies.
VIM (1993)            International vocabulary of basic and general terms in
                      metrology.

APLAC                 www.aplac.org
Coventry University   www.mis.coventry.ac.uk
EA - EAL              www.european-accreditation.org
ILAC                  www.ilac.org
ISO                   www.iso.ch
NATA                  www.nata.asn.au
NIST                  www.nist.gov
RvA                   www.rva.nl
SANAS                 www.sanas.co.za



