MV FAQ 210711

Topic                                      Date

Assumed Knowledge                          Mar-11
Alternative Assumptions                    May-11
Assumptions - Alternative Assumptions      Mar-11
Assumptions - Key Assumptions              Mar-11
Assumptions - Uncertain Assumptions        Mar-11
Data Adequacy                              Jul-11
Data Granularity                           Jul-11
Data Policy                                Jul-11
Data Scope                                 Jul-11
Dependencies                               Apr-11
Draft Validation Report                    Jul-11
Evidence Template                          Apr-11
External Model Documentation               May-11
External Models                            May-11
External Models                            May-11
External Models                            Mar-11
Future Management Actions                  Jun-11
Future Management Actions                  Jun-11
Independent Validation                     May-11
Independent Validation                     May-11
Independent Validation                     May-11
Independent Validation                     Mar-11
Independent Validation                     Mar-11
Independent Validation                     Mar-11
Independent Validation                     Mar-11
Internal Model Steering Groups             Mar-11
Materiality                                Jul-11
Materiality                                Jul-11
Materiality                                Jul-11
Model Validation                           Jul-11
Pass/Fail Test                             Mar-11
Profit and Loss Attribution                May-11
Profit & Loss Attribution                  May-11
Risk Mitigation                            Jul-11
Validation                                 Jul-11
Validation - Alternative Models            May-11
Validation - Benchmark information         May-11
Validation - Bootstrapping                 Jul-11
Validation Frequency                       May-11
Validation - Independence                  Jul-11
Validation - Model Authorisation           Jul-11
Validation Report Contents                 Jun-11
Validation Report                          May-11
Validation Report                          May-11
Validation - SCR                           May-11
Validation Tests                           May-11
Validation - Use Test                      May-11
Validation - Xchanging                     May-11
Validation Report                          Feb-11
Validation Report                          Feb-11
Validation Report                          Feb-11
Validation Report                          Feb-11
Validation Report                          Mar-11
Validation Report                          Mar-11
Validation Report                          Mar-11
Validation Tools                           Mar-11
Validation Scope                           Feb-11
Validation Scope                           Mar-11
Question
How are agents addressing issues of assumed knowledge both internally (i.e. board
and senior management's knowledge of actuarial methods) and externally (i.e.
Lloyd's analysts who validate syndicate models having appropriate modelling
experience)?
         Testing of alternative methodologies and assumptions could be very time
         consuming, given the number of assumptions and methodologies in a typical model.
         Is it acceptable for agents to prioritise and focus on key areas?

         In what circumstances should agents be testing alternative assumptions?



         Is it possible for Lloyd's to put together a list of the key assumptions used by agents
         and share these with the Market?
If a syndicate was uncertain about an assumption (perhaps due to lack of data),
would a prudent stance be appropriate, or would regulators enforce that a market
average be adopted?
         How good is good enough with data adequacy? How should we approach the risk of
         over-allocating resource to data quality, away from other areas of the model?




         What level of granularity is required for data? Is Lloyd's intending to provide
         guidance to try and achieve consistency across the market?




How is the distinction made between contributors and owners of the data policy?




How wide is the definition of data when thinking about the data policy and data
directory?

         When collecting metrics and comparing dependencies will the focus be on input or
         output?
         What level of completion is acceptable for the draft validation report submission?
         Should some or all areas be completed?




         Does the full model validation evidence template need to be completed?
External suppliers (such as RMS) are providing substantial documents for Solvency
II validation purposes. Are these sufficient for agent validation of these external
models?



It is common practice for agents to use catastrophe model output from brokers (with
adjustment to correct for perceived biases or differences in opinion). How would
agents be expected to validate this?

Back-testing of catastrophe models is not typically possible, as exposure databases
are not normally preserved through time due to the huge amount of information this
would require, and where they have been preserved, not all inputs for current model
versions are available.
It is not clear how the materiality of external models would be validated apart from
validating the output.




Does Lloyd's have an expectation that future management actions should be
included within the model?




There are many elements of the SBF that could be classed as future management
actions (as the execution of the plan lies in the future). To what extent does Lloyd's
think that the SBF process is a future management action?



Will agents be told if their approach to independent validation is unacceptable?




For small agents it is a challenge to find an independent person; can different people
be used for different areas of the validation?


How much can agents use external reports such as the SAO in the Validation Report?


What is the view on proportionality for independent validation and how much
resource are smaller managing agents expected to spend on validation?
How independent does the validation need to be?


What is the output of independent validation?




To what extent can agents rely on Lloyd's reviews as independent or external
validation?


Who should sit on committees for Internal Model Steering Groups?




How consistent does materiality need to be across the Market? Does a varying
definition of materiality across the market have any knock on implication for LIM?


Is it acceptable for an agent's definition of materiality to change over time?



Is it possible for Lloyd's to provide feedback in advance of September on the
approach that individual agents are taking to materiality?


What are the key things to focus on to validate the model?




Are agents expected to set criteria for the Pass/Fail test ahead of time?
Will Lloyd's provide any further guidance on the appropriate level of granularity for
Profit & Loss attribution?



Who would Lloyd's expect to be accountable for P&L attribution?

The scope of risk mitigation appears to have increased via the inclusion of
operational controls. Are agents expected to validate all the outcomes of risk
mitigation?
Is there a view on the involvement of non-execs in the validation process?



One of the suggested validation approaches is the use of alternative methodologies
(e.g. building a second mini model), which could be time consuming. Should agents
prioritise this over (say) P&L attribution?


Will Lloyd's provide any benchmark output to assist agents with validation?


How does a syndicate ensure that the bootstrapping method is adequate to measure
the risk in question?



Could the frequency of the validation be linked to the analysis of change?


Does Lloyd's have any view on the balance between independence and
understanding arising from familiarity with the model for validation?

Will Lloyd's continue to challenge capital numbers even following the authorisation
of internal models (and therefore the acceptance of robust validation processes at
agent level)?




What is the difference between the draft validation report expected in August and
the final submission in October 2011, and what does Lloyd's expect to see in each
of these reports?


Will Lloyd's make the LIM Validation Report available to the Market?




Will Lloyd's provide a check list of items that should be used in the Validation
Report?
What is an appropriate numerical threshold for applying model validation (e.g. +/-
5% of SCR)?




Does Lloyd's have a view on how many tests are "enough" for validation? Related to
this, how many test failures are acceptable for a final model?



What does validation mean in the context of the use test?




Is it necessary for agents to check the data they receive from Xchanging or is this
something that can be coordinated centrally?


What is the scope of the Validation report? Does it extend to Use Test and ORSA?



When will a template of the Validation Report be released?
Can Lloyd's provide a market-wide utility for providing independent validation of the
model?
Who should sign off the validation report?
Is Lloyd's expecting the validation to be a statement claiming that "the Internal Model
is reasonable/appropriate" OR "the Internal Model is not unreasonable?"
Will Lloyd's be publishing further guidance for the validation report due in August?

Would it be possible for Lloyd's to share consistent themes and questions for a
validation report with the Market?



Validation tools are very similar to parameterisation tools - how are the processes
different?




What is Lloyd's doing to make sure the scope of model validation is reasonable and
proportionate, given quite different views and often significant estimated effort in the
market and among consultants?
Is there a recommended number of assumptions that should be validated?
                                                        Answer

One of the difficulties of Solvency II involves appropriate challenge at board and senior management level, especially
for the technical aspects of Solvency II. One approach that agents have adopted is externally (or internally) run
training sessions on technical aspects of Solvency II for board and senior management. Lloyd's will ensure that the
relevant skills will be available to support the validation process.
Lloyd's would expect that agents would always apply the principle of proportionality when performing any validation
processes (not simply testing alternatives). Agents need to ensure that their work is extensive enough to provide
sufficient assurance over the model (as defined in their validation policy), and this work should necessarily focus on
the most material areas of the model.
It is Lloyd's view that agents will have considered alternatives in every case they have made an assumption (as any
decision will involve considering options for the answer). In many cases, all that Lloyd's will require is an explanation
of this process and the rationale for the selected assumption. It would only be in the case of the most material
assumptions that Lloyd's would typically expect to see detailed testing of alternative model implementations.
Lloyd's has no issue with sharing information on an anonymous basis - this depends more on whether individual
agents are comfortable with this. Lloyd's will investigate the practicality of sharing this information (and potentially
Solvency II requires that (re)insurance undertakings apply best-estimate assumptions and do not include margins.
Lloyd's would expect syndicates to be able to validate their choice of assumptions by appropriate stress / sensitivity
tests (or appropriate validation tools). However, it is not Lloyd's intention to apply a convergence to a mean when setting
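The stress / sensitivity testing mentioned above can be illustrated with a toy example. Everything here is hypothetical: the capital proxy, the numbers and the +/-10% shocks are illustrative assumptions, not Lloyd's guidance. The idea is simply to shift an uncertain assumption and record the effect on the model output.

```python
def toy_capital(mean_loss_ratio, premium=100.0, volatility=0.15):
    """Toy 1-in-200 capital proxy: expected losses plus a normal-
    approximation 99.5th percentile load, net of premium.
    Purely illustrative - not a real SCR calculation."""
    z_995 = 2.576  # approximate standard normal 99.5th percentile
    return premium * (mean_loss_ratio + z_995 * volatility) - premium

# Sensitivity test on an uncertain loss-ratio assumption:
base = toy_capital(0.65)
impacts = {shock: toy_capital(0.65 * (1 + shock)) - base
           for shock in (-0.10, 0.10)}
# `impacts` records the capital movement under each +/-10% shock
```

A syndicate can then document whether the output moves plausibly and proportionately under each shock, which supports the selected assumption without forcing convergence to a market average.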
As with all areas of validation, Lloyd's would expect agents to take a proportionate approach, and to focus efforts on
data quality in the way that will have the most impact on the robustness of their internal model outputs

Lloyd's is, however, aware that some items of data are typically very material in internal model outputs, and that
there is a risk that data quality is neglected because it is perceived as less exciting than other areas of modelling.
Agents should therefore be prepared to explain why their definition of quality is appropriate given the impact data
As in many areas, this will be a decision for agents to make, guided by the materiality of the data to the results of
their internal model. Lloyd's does not intend to mandate any specific level of granularity, as agent businesses and
internal models vary greatly in their size and complexity, and a one-size-fits-all solution would therefore not be
appropriate.

As an example, Lloyd's conducted several studies of the impact of increasing the granularity of reserving data,
between a high level of 10 reserving classes, a mid level of approximately 50 reserving classes, and full risk-code level.
Lloyd's recognises that a range of individuals throughout the business will contribute to the data policy (and
associated data governance) as owners and/or users of different data around the agent. Typically, it is practically
useful for there to be a single owner where this is pulled together (and when it is implemented), but this ownership
does not preclude significant involvement from others.

Lloyd's is not mandating any specific individual / department as the owner of the data policy, and it is the expectation
that a range of people will be involved in the production and implementation of the policy
In common with the published guidance, Lloyd's is expecting agents to take a relatively wide definition of data within
their data policy and data directory. That being said, agents should still be mindful of the principle of proportionality,
and ensure focus on data that is really material to the SCR, whether this feeds directly or indirectly into the model
Lloyd's will discuss inputs with agents but review will focus on outputs.

One of the primary purposes of the draft Validation Report is to allow Lloyd's to feed back to agents. From this
perspective, it would be helpful if at least one area was fully completed so that Lloyd's can give appropriate feedback
on the style and extent of the proposed content for the final Validation Report. Lloyd's would also expect agents to
provide a full "skeleton", so this feedback can also identify if there are any areas that agents are not proposing to
address in the final report.

Agents should note that, as the final Validation Report is now required in December with the Final Application Pack,
As a minimum, those areas covered under the first workshop, Core Validation 1, will need to be completed for the
submission due by 28 April (i.e. tabs 4, 6, 11 and 12). We would however encourage agents to complete as much of
the template as possible, as this will enable earlier review and feedback on the additional areas not covered at the
Lloyd's expects that agents will be able to place some reliance on work performed by external model providers when
performing their validation, but that this is unlikely to ever be sufficient in isolation. For example, Lloyd's would not
typically expect agents to independently verify the underlying mathematics, or to perform algebraic testing of the
model implementation, as these are areas that would be best addressed by the model provider. Lloyd's would
however expect the agents to understand how these processes had been done, and to explain why they felt able to
rely on them. Further to this, agents would always be expected to validate the output of their particular
The principles of validation would apply in the same way in these circumstances. Agents will need to be able to
demonstrate that they have a good understanding of the model and the way it has been implemented, and that the
output is appropriate for their business

Lloyd's recognises that any validation process will have limitations, and that useful testing can still be achieved using
approximations. Where full data is not available, Lloyd's would still expect agents to make use of whatever data is
available (such as historical loss ratios in the example given) to test the performance of the latest model

Lloyd's is aware of this issue with external models; however, many providers will have qualitative information about
the model. Apart from that, it is advisable to use sensitivity testing and to refer to papers to demonstrate understanding
of the methodology. Lloyd's expects that in many of the most material cases, the models are already in use (e.g.
many agents already use catastrophe models to support underwriting decisions and aggregation management).
Lloyd's would therefore expect that agents had conducted sufficient qualitative and quantitative validation on the
operation of these models to satisfy themselves that they are appropriate for these purposes. The validation of the
Future Management Actions are explicitly referred to in the EIOPA Level 2 advice (and the draft Level 2 from the
Commission) as an element of models that need particular treatment. This is particularly true for some longer-term
life-insurance products, where management actions over the term of the product (such as setting the bonus levels
on with profit policies) can have a large impact on the economic value of the policy.

Non-life models may also contain some implicit and explicit management actions (such as the purchase of run-off
reinsurance cover at a future point in the modelled time horizon, or the reduction of premium volumes in response to
a down-turn in the cycle). Lloyd's would expect that agents consider the programming of their models carefully to
Lloyd's believes that typically the execution of the approved SBF should not count as a "future management action"
within the internal model, and it is actions that lead to deviations from this SBF that are to be considered.

Agents should, however, consider any elements of their SBF that are particularly contingent on future events (e.g.
the purchase of reinsurance at 1/7) and apply the future management actions criteria in these cases

As with many other areas of Solvency II, Lloyd's is not mandating an approach to independent validation, and there
are a wide range of potentially acceptable solutions that agents could implement.

Guidance on the requirement for independence in validation was included with the Validation Report guidance,
issued in May 2011. Following this, Lloyd's will be engaging with agents on their approach to independent validation
via the Evidence Templates (first submission May 2011), and via the second stage of the model walkthrough
process (from late June). Agents can therefore expect feedback at this time.

The guiding principle is to ensure an adequate level of objective challenge during model validation - agents that
Agents should refer to the Validation Report guidance in the first instance. It is not expected that a single individual
will be responsible for performing all validation processes (indeed, this would not be a desirable position), and the
validation report would typically be expected to bring together work from a number of people (with varying degrees of
independence). For agents with a small team, Lloyd's would typically expect that independent validation would
Where external reports contain relevant and appropriate testing, agents should feel free to use this information as
part of their validation. Agents should note that it is very likely that there will be limitations in any such work in a
model validation context because of the difference in purpose. For example, Lloyd's does not expect that model
The proportionality principle requires that more time is spent on the more material risks. Another rule is that more
complex risks will generally have more complex models, so they will also require more validation time. Agents
should note, however, that Lloyd's regards validation as a key part of any model, and has a strong preference for a
relatively simple model that has been robustly and proportionately validated over a highly complex model with
relatively weak validation. Agents should note that a robust model design and build process that incorporates
ongoing testing and objective challenge may achieve a substantial proportion of the validation requirements. Agents
should always consider the work that has already been undertaken when setting the scope and extent of required
Some obvious requirements include no self-review and no conflict of interest. Lloyd's recognises that for smaller
managing agents independence may be an issue. Whilst Lloyd's will provide a benchmarking model, which may help
with sense-checking the numbers, Lloyd's cannot provide the independent validation. Every business has
The output from the independent review should form a key part of agents' validation reports.

The core purpose of independent review is to provide additional assurance on the quality of model results to all
model stakeholders, not solely to provide an output for Lloyd's. The content of the independent review will therefore
be determined according to agents' individual needs and circumstances (e.g. the extent and content of work that the
agent Board feels is necessary for them to provide sign off of the application for model approval - this will obviously
vary according to the complexity of agent models and the structure of agent teams).

Lloyd's will, however, be required to demonstrate to the FSA that the market as a whole meets the requirements of
the guidance on independent review. Lloyd's will therefore be looking for two things in agents' validation reports:
 - An independent view on the implementation of the validation policy (i.e. the quality and objectivity of validation
processes that have been performed)
 - A degree of independent testing of the most material elements of the model
Whilst agents can take comfort from their interaction with Lloyd's, and even assume Lloyd's will raise significant
concerns on aspects of the business or modelling, this is not a substitute for validation and should not be taken as
such. Lloyd's expects validation to be clearly defined, with the key responsibilities and deliverables of the individuals
(or functions) conducting validation explicitly scoped. This does not include a consideration of how
A justification should be made for an individual's level of expertise to sit on the committee. Although it can be
challenging for non-actuaries to understand assumptions, one of the requirements of Solvency II is that the Steering
Committee will consist of individuals who can and will challenge the actuary on their assumptions.

Agents should note that Solvency II formalises a wide range of stakeholders for internal models, and agents should
consider reflecting this in the make-up of their steering committees. Areas such as the use test require wide
understanding of and buy-in to internal models, and this is often best achieved by involving a wide range of people
in the design and running of a model.

There are no mandated positions on steering groups (indeed agents do not necessarily need to have an Internal
Materiality for LIM is likely to be higher than an appropriate definition of materiality for any given agent. Therefore,
as long as agent materiality definitions are appropriate for their own SCR, then LIM will not need to impose any
standard across all agents.

Yes, as with many things within the internal model, Lloyd's would expect that the definition of materiality would
evolve over time to reflect changes in agent circumstances (e.g. as a new syndicate grows and matures, we might
expect to see an increase in the materiality of reserve risk compared to premium risk). Agents should note,
however, that they would be expected to explain and justify the rationale for changes in the assessment of
Agents' approach to materiality will be considered as part of the second phase of walkthroughs, and Lloyd's will
highlight any specific concerns as and when we become aware of them.


The key processes will obviously vary between agents, depending on their model structure. A few key items to
consider are:
 - the purpose of validation is not to reproduce the SCR. Instead, it is there to support the proposed calculation
through testing the material assumptions and parameters to ensure it is not demonstrably wrong
 - have a clear idea of pass and fail criteria prior to conducting the validation test. Here, Lloyd's is genuinely requiring
agents to consider materiality and not perfection or "nice to have" adjustments to the calculated SCR.
 - This is a significant addition to the requirements beyond the standard needed under ICAS. Where agents have
stated some, perhaps obvious, bases for the calculation, the validation does need to address the reasoning behind
It may not be appropriate to set strict pass/fail criteria in advance. In some cases with large amounts of data it may
be possible to do so - for example, defining how many residuals can be +/- 2 standard deviations from the mean.
However, in many cases there will be a large degree of expert judgement involved. It is important to make sure you
are able to explain the steps you have taken which have led to the result.

Agents should note, however, that Lloyd's considers evidence of clear pass/fail criteria (whether these were set in
advance, or iterated during the testing process) as strong evidence of robust validation. Agents should therefore
expect to be able to explain why the results of their testing processes mean that they believe their model is fit for
purpose, and not just to be able to produce the results of testing processes.
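As a concrete sketch of the residual-based criterion mentioned above (the 10% tolerance and the sample residuals are illustrative assumptions, not guidance): count the proportion of residuals falling outside +/- 2 standard deviations of the mean and compare it against a pre-agreed tolerance.

```python
import statistics

def residual_pass_fail(residuals, z=2.0, tolerance=0.10):
    """Illustrative pass/fail test: fail the check if more than
    `tolerance` of the residuals lie beyond +/- z standard deviations
    of the mean (roughly 5% would be expected under normality)."""
    mean = statistics.fmean(residuals)
    sd = statistics.stdev(residuals)
    outside = sum(1 for r in residuals if abs(r - mean) > z * sd)
    fraction = outside / len(residuals)
    return fraction <= tolerance, fraction

# Made-up residuals from a fitted model:
passed, frac = residual_pass_fail(
    [0.3, -0.5, 1.1, -0.2, 0.8, -1.4, 0.1, 0.6, -0.9, 0.4])
```

The point is not the specific threshold but that the criterion, and the reasoning behind it, is written down and can be explained alongside the test results.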
Lloyd's is not intending to mandate any particular set of drivers or classes for P&L attribution - the appropriate set
will vary from agent to agent and an overall list would not therefore be right for everyone.

Agents should develop their own understanding of materiality to define material business units and risk drivers, and
should ensure that their P&L attribution is at a sufficiently granular level of detail to capture each material business
This will vary from agent to agent, although Lloyd's notes that it is expected to require input from individuals in
Finance and Business Planning as well as those on the modelling team.
As with all areas of validation, Lloyd's would expect agents to take a proportionate approach, focussing on those risk
mitigation techniques that have a material impact on their internal model output. For example, if reliance on
Operational Risk Controls causes a 15% reduction in the SCR, then detailed validation of the effectiveness of this
control might be expected.
The more detailed elements of the validation are inherently technical, and NEDs would not typically have the
appropriate skill sets to get heavily involved in these areas. NEDs are, however, required to understand the
validation process and conclusions, in order to gain comfort in the robustness of the model, so they might be
expected to get involved at this less technical level
Lloyd's does not mandate any approach to validation - examples have been provided to show the sorts of things that
may be appropriate in some circumstances. There is no requirement for any agent to adopt a particular approach to
validation (beyond adopting those techniques mentioned in the guidance)

Agents should always focus their efforts on the areas that they believe are necessary to validate their model, which
Lloyd's has no specific plans to provide any additional output (beyond that which is currently available). We
understand that the LMA is in the process of collating some information on market practice, and would suggest that
this may be a suitable forum for the sharing of benchmark information
Lloyd's is not expecting individual agents to replicate publicly available academic research into commonly-used
modelling techniques as part of their validation. It is acceptable to rely on the work done by others in this regard.
Agents should note, however, that they would still be required to validate their own implementation of the technique,
and the results coming out of it. This could involve, for example, testing the method against individual data sets or
testing the assumptions implicit in the methodology against the agent's own data. As part of validation, agents are
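One way to test a bootstrap method against an individual data set, as suggested above, is to compare the bootstrap estimate of a statistic's standard error with the analytic value where one is known - here, the standard error of the sample mean. This is a hypothetical sketch; the data and the comparison are illustrative assumptions, not a prescribed procedure.

```python
import random
import statistics

def bootstrap_se_of_mean(data, n_resamples=5000, seed=42):
    """Bootstrap the standard error of the sample mean: resample the
    data with replacement, recompute the mean each time, and take the
    standard deviation of the resampled means."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.choices(data, k=len(data)))
        for _ in range(n_resamples)
    ]
    return statistics.stdev(means)

# Adequacy check: for the sample mean, the analytic standard error is
# s / sqrt(n), so the bootstrap estimate should land close to it.
data = [12.1, 9.8, 15.3, 11.0, 8.7, 13.9, 10.5, 14.2, 9.1, 12.6]
analytic_se = statistics.stdev(data) / len(data) ** 0.5
bootstrap_se = bootstrap_se_of_mean(data)
```

Agreement here does not prove the bootstrap is adequate for tail risk, but a large discrepancy on a statistic with a known answer is a clear warning sign about the implementation.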
Validation is required to be a continuous process, with a major cycle at least annually. Notwithstanding this, Lloyd's
expects agents to focus validation efforts on the areas that pose the greatest risk of material misstatement to the
SCR (or other key model output), and would consider that analysis of change would be an appropriate consideration
One of the major purposes for requiring independent validation is to ensure that material elements of the model are
subject to robust objective challenge. Lloyd's would expect agents to reflect this goal in determining the appropriate
level for independent review and challenge in each element of their internal model
Yes. Lloyd's faces a dual mandate in this regard - both to ensure regulatory compliance and to ensure equity
between members regarding the exposure of the Central Fund.

It is the expectation that robust and transparent validation at agent level will reduce the incidence of circumstances
where there is a disagreement between Lloyd's and an agent around the appropriate level of capital. Agents should,
however, note that Lloyd's will continue to benchmark capital requirements, and require capital loadings where they
do not believe that the capital calculated by the agent is consistent with the goal of preserving equity between
The Validation Report Guidance document issued in May has been updated to clarify Lloyd’s expectations for these
two submissions and to provide some practical guidance on approaching common areas of uncertainty. Please
paste the following link into your browser. http://www.lloyds.com/The-Market/Operating-at-Lloyds/Solvency-
II/Information-for-managing-agents/Guidance-and-
workshops/~/media/Files/The%20Market/Operating%20at%20Lloyds/Solvency%20II/2011%20Guidance/SII%20Vali
No, this will be an internal document for Lloyd's. Given the difference in structure between Lloyd's and a managing
agent, it is not thought that the Lloyd's validation report would provide a useful template for agents.

The LIM team have been involved in developing the Validation Report guidance for agents, and agents should refer
to this guidance where they have any questions as to what the appropriate content for a validation report is.

The Validation Report is an internal document for agents to ensure their Boards are comfortable with their internal
model. Lloyd's is not mandating what this report needs to contain beyond those areas discussed in EIOPA
guidance, which are reflected in the existing Lloyd's guidance for agents. Therefore a more detailed 'checklist' of
contents for the Validation Report (or the underlying validation processes) would not be appropriate.

Lloyd's will issue further guidance, clarifications and/or examples where appropriate as our reviews progress, and we
Lloyd's does not believe that providing a numerical threshold for validation accuracy is helpful in this context. For
example, it is not clear what validating a binary event assumption to +/-5% would mean, as there is no data for this
assertion to be based on.

Validation should focus on developing the supporting rationale for agent selections within the acceptable range of
possibilities, with greater effort placed where either the model component is particularly material to the outcome, or
where there is a particularly wide range of acceptable selections. Agents will need to develop their own definitions to
ensure that enough validation work is completed, and should expect some level of challenge to these definitions and
In both cases, this will be down to agent judgment. Testing must be extensive enough to ensure that there is no
material misstatement in the SCR.

Lloyd's does, however, recognise that all validation processes have limitations, and that 100% accuracy in a typical
capital modelling question is neither achievable nor desirable.
EIOPA guidance specifically states that the use test is within scope of validation. Lloyd's recognises that validation
will need to be applied differently in this case, and would suggest the following considerations for validation of the
use test:

 - That the key outputs (on which the use test is based) are free from material misstatements - as per validation of
the SCR
 - That model output is actually being used in business decisions in the way the use test expects
A group has been set up via the LMA to discuss this further with the aim of doing as much of this centrally as
possible. However, agents are required to apply the principles of validation to satisfy themselves that any data they
use meet their internal definitions of complete, accurate and appropriate. Part of this may involve reliance on
centralised validation work (such as that described above), but agents should note that this is unlikely to provide the full assurance required.
Scope of the validation report will follow Level 2 and proposed Level 3 requirements as closely as possible. Lloyd's
would expect it to address all of the model tests and standards in articles 120 - 126 and the SCR number output. It
will not need to validate the wider risk management system which would fall under the ORSA and which will require
its own sign off. Lloyd's will provide further guidance around the content of the validation report by 30 April.
Lloyd's will publish the required content of the validation report by 30 April. Whilst a draft report template will be
provided, the format will not be mandated.
Each managing agent must provide their own independent validation under the terms of the Directive. Whilst
Lloyd's review of models will provide feedback and benchmarking, Lloyd's is unable to provide this independent validation.
As a key deliverable, Lloyd's will expect a properly constituted sub-committee of the managing agency board to
approve this with onward reporting to the full Board.
Lloyd's will require positive assurance from agents and as such boards must be able to sign off that the model is
reasonable/appropriate. Further guidance will be issued by Lloyd's as part of the Validation Report and final guidance.
Further guidance on the validation report will be available by the end of April. Agents should note, however, that there
is already substantial existing guidance available, and a number of agents have been able to produce first-draft validation reports.
Lloyd's has undertaken to produce more guidance on validation reports by end of April and agents are welcome to
feed into the process by helping out with the consistency of headings etc.

Lloyd's also expects to share themes and issues arising from model reviews at the workshops over the remainder of
2011. Any suggestions from the market as to the format and/or content of this feedback would be welcome.
The difference is more in how they are applied. For example, a chi-squared test can be used to parameterise a
distribution, but comparing chi-squared results for several distributions would be a part of validation. It may not be
possible in practice to separate all these processes cleanly, and agents might expect that where a greater number of
tests and processes have been applied during "parameterisation", a correspondingly smaller number might be
applied during "validation". The key output from validation will be an explanation of why the selections are
appropriate for a particular agent's business - this may be supported by a robust and extensive parameterisation
process, although agents should note the requirements for independence and objective challenge during validation.
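The parameterisation/validation distinction above can be sketched in code. The following is a minimal, hypothetical illustration (the claim-count data, the Poisson and geometric candidate families, and the method-of-moments fits are all invented for the sketch, not taken from any Lloyd's guidance): computing a chi-squared statistic while fitting a single family is parameterisation, whereas comparing chi-squared results across several candidate distributions is the validation step.

```python
# Hypothetical sketch: the same chi-squared statistic serves both
# parameterisation (fitting one family) and validation (comparing families).
# All figures are invented for illustration.
from math import exp, factorial

# Observed number of years (out of 100) with 0, 1, 2 and 3+ claims.
observed = [30, 42, 21, 7]
n = sum(observed)
# Sample mean, treating the "3+" bucket as exactly 3 claims (a simplification).
mean = sum(k * o for k, o in enumerate(observed)) / n   # 1.05

def chi_squared(obs, expected):
    """Pearson chi-squared statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(obs, expected))

def bucketed(pmf):
    """Expected counts in buckets 0, 1, 2, 3+ for a pmf on 0, 1, 2, ..."""
    p = [pmf(k) for k in range(3)]
    return [n * q for q in p] + [n * (1 - sum(p))]

# Parameterisation: fit each candidate family by the method of moments.
lam = mean                # Poisson(lam) has mean lam
q = 1 / (1 + mean)        # geometric on {0, 1, ...} with P(k) = q(1-q)^k has mean (1-q)/q
fits = {
    "poisson": bucketed(lambda k: exp(-lam) * lam ** k / factorial(k)),
    "geometric": bucketed(lambda k: q * (1 - q) ** k),
}

# Validation: compare goodness of fit across the candidate distributions.
results = {name: chi_squared(observed, e) for name, e in fits.items()}
best = min(results, key=results.get)
print(best, {name: round(v, 2) for name, v in results.items()})
```

In this sketch the Poisson family gives a markedly lower chi-squared statistic than the geometric alternative; that comparison, and the reasoning behind it, would form part of the documented rationale for the selection. In practice the comparison would cover the agent's actual candidate distributions and remain subject to independent challenge.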
It is certainly not Lloyd's intention that agents repeat parameterisation processes for validation where this does not
add value. However, we note that there may be circumstances where this does add value, for example getting an
independent measure.
As above; this will be determined by the final Level 2 and Level 3 validation requirements, to be published in 2011,
and agents/syndicates will be required to meet these. However, Lloyd's will provide additional guidance and
clarification where possible.
There is no rule of thumb for this - agents will have different definitions for what qualifies as an assumption. The key
point is to validate all assumptions in line with their materiality. Lloyd's would therefore expect detailed validation of the most material assumptions.
