
Introduction

The Quantified Risk Analysis (QRA) methodology has proven to be of substantial use for the evaluation of different decision options, e.g. different design alternatives, for the determination of major contributions to risk, and in risk tolerability decisions. Since risk tolerability decisions inherently focus on absolute risk levels, the impact of uncertainties will play a major role in the usefulness of such criteria. A key merit of QRA is that the procedure provides a structured way to determine the major contributions to the overall risk, which will prove useful in the risk management situation. Knowing the major contributions to the overall risk is a prerequisite for being able to direct efforts towards managing and reducing the risk to those areas where they will have the greatest impact, thus facilitating cost effectiveness in risk management. However, due to a lack of consensus concerning which methods, models and inputs should be used in an analysis, and how the, sometimes considerable, uncertainties that will inevitably be introduced during the process should be handled, questions arise regarding the credibility and usability of the absolute results from QRA. Without a description of, and discussion on, the uncertainties involved in such an analysis, the practical use of the results will be severely limited. For instance, comparison of the results with established risk targets, or tolerability criteria, something that is becoming increasingly common, becomes a fairly arbitrary exercise. The problem of acknowledging and treating uncertainty is central to the quality and practical usability of quantitative risk analysis. When performing a QRA, a wide range of uncertainties will inevitably be introduced during the process. The impact of these uncertainties must be addressed if the analysis is to serve as a tool in the decision-making process.

Possible ways of handling problems associated with absolute measures of risk

For QRA results to be valuable they would have to be comparable between analyses of different establishments and activities, transparent and reproducible. Two conceptually different approaches can be discerned.

Dutch Approach

At one extreme, the Dutch approach prescribes the starting points, models and default values for several parameters to be used in the analysis. To some extent, this means that the regulatory body accepts responsibility for any uncertainty involved in an assessment and the impact this might have on the regulatory decision. This approach has considerable advantages regarding consistency in risk-related decision making, since assessments using the same models and variable values will be comparable. Perhaps this level of standardisation of the risk analysis process is required for the explicit use of target risks or tolerability criteria to make sense.

This approach might have negative effects on:
- scientific progress regarding the development of new models for use in risk assessment,
- the risk assessor's motivation for finding situation-specific data to use in his/her analysis.

It should be mentioned, however, that the Dutch guideline encourages the development of situation-specific models and the use of site-specific data, as long as the deviations from the prescribed models and data are explicitly explained and justified to the authorities concerned.

Environmental Protection Agency (EPA) Approach

At the other extreme, the American Environmental Protection Agency (EPA) policy for the use of probabilistic analysis in risk assessment focuses more on providing conditions to be met in an assessment to ensure high-quality science, regarding transparency, reproducibility, and the use of sound methods. It also recognizes the fact that there are situations where a fully probabilistic approach is not called for, and it provides guidance on how to decide whether to perform a QRA or not. The strength of this approach, from a scientific point of view, is that it does not dictate any specific method or methods, but highlights the importance of transparency and of being explicit about the methods and input used in an assessment. For an approach such as that adopted by the EPA to be successful, it is vital to define methods for characterizing, quantitatively, the variability and uncertainty of a risk estimate, and for identifying the main sources of variability and uncertainty and their relative contributions to the overall uncertainty in the results.

Why be explicit about uncertainties?

It should be clear from the above discussion that uncertainties are ever present in the QRA process and will by definition affect the practical usefulness of the results.

Objectives of Uncertainty Analysis

Simplified, comprehensive uncertainty analysis can be regarded as having three major objectives:
- making clear to the decision-maker that we do not know everything, but that decisions must be based on what we do know;
- defining how uncertain we are: is the uncertainty involved acceptable in the decision-making situations we face, or is it necessary to try to reduce the uncertainty in order to be able to place enough trust in the information?
- trying to reduce the uncertainty involved to an acceptable level.

Introducing uncertainties in the QRA process

Sources / classes of uncertainty

To help understand the concept of uncertainty, and to be able to treat uncertainties in a structured manner, many attempts have been made to characterise classes of uncertainty and the underlying sources of uncertainty.

The three major groups of uncertainty, according to one such classification, are:
- parameter uncertainty
- model uncertainty
- completeness uncertainty

Parameter uncertainty is introduced when the values of the parameters used in the models are not accurately known. It is often dealt with by assigning probability distributions, or some other kind of distribution, to the parameters, representing the analyst's knowledge about them. Parameters used in a model may also be subject to natural variability, which may be dealt with in the same way. Model uncertainty arises from the fact that any model, conceptual or mathematical, will inevitably be a simplification of the reality it is designed to represent. Completeness uncertainty originates from the fact that not all contributions to risk are addressed in QRA models; for example, it will not be feasible to cover all possible initiating events in a QRA. Knowing the sources of uncertainty involved in the analysis plays an important role in the overall handling of uncertainty. First of all, different kinds of uncertainty call for different methods of treatment. Another aspect is the possibility of reducing uncertainty: if one knows why there are uncertainties and what kinds of uncertainty are involved, one has a better chance of finding the right methods for reducing them.

Epistemic vs. aleatory uncertainty

At a fundamental level, two major groups of uncertainty are recognised: on the one hand there is the aleatory, or stochastic, uncertainty, and on the other the epistemic, or knowledge-based, uncertainty. Aleatory uncertainty represents randomness in nature, and it is only in the domain of this type of uncertainty that the frequentist definition of probability is valid. Epistemic uncertainty (also referred to as ambiguity, ignorance, knowledge-based, reducible or subjective uncertainty) represents a lack of knowledge about fundamental phenomena. It is when dealing with this kind of uncertainty that one often has to rely on experts and their subjective judgment. Different techniques for eliciting information from subjective opinions given by experts, together with a discussion of some possible pitfalls, are available in the literature. Can uncertainty just be considered as uncertainty, regardless of its origin? Is there really a need to identify and separate various kinds of uncertainty? "At a fundamental level, uncertainty is uncertainty, yet the distinctions are related to very important practical aspects of modelling and obtaining information. Such aspects include decomposition in model building, bounding models, identification and incorporation of different types of information, probability assessment, value of information, and sensitivity analysis."

There is no fundamental reason for distinguishing between different types of uncertainty, but it may well be appropriate in many practical applications. The most widespread tool for quantifying uncertainties is the mathematical concept of probability. The frequentist school defines probability as a limiting frequency, which applies only if one can identify a sample of independent, identically distributed observations of the phenomenon of interest. The Bayesian school, on the other hand, regards the concept of probability as a degree of belief. This means that not only statistical data and physical models will serve as information, but also expert opinions, which will be subjective. The Bayesian framework also provides methods of updating probabilities when new data are introduced. The most obvious distinction of practical importance between the types of uncertainty is the fact that only knowledge-based uncertainty can be reduced, e.g. by gathering more information; the stochastic uncertainty is, by definition, irreducible. Another important difference is that stochastic uncertainty (random variation) partially cancels itself out in a risk analysis, whereas knowledge-based uncertainty does not. Different methods are available for representing and propagating these two types of uncertainty, either together or separately.

Treatment of model uncertainty

The use of risk estimates in a relative sense is often much less sensitive to error. Because the same methodologies and assumptions are used, to the extent possible, to evaluate the various alternatives under consideration, the resulting risk estimates are subject to similar uncertainties. Thus, the relative ranking of the various alternatives may be less affected by uncertainty than the absolute value of the risk measure. Due to a persistent lack of data, the use of expert judgment to provide estimates of unknown quantities has become a necessity. Several studies have been undertaken during the past decade regarding the impact of uncertainty on the results of quantitative risk analyses. In a benchmark exercise on major hazard analysis for a chemical plant, managed by the Joint Research Centre (JRC) during 1988-1990, 11 teams from different European countries performed an analysis of a reference object, an ammonia storage facility, to evaluate the state of the art and to obtain estimates of the degree of uncertainty in risk studies.

The results of this study showed great variability in risk estimations between the different analysis teams. It is not hard to see the practical implications these results have for the applicability of absolute risk measures in tolerability judgment situations: the level of risk could be judged to be tolerable or totally unacceptable depending on which assessment one chooses to put one's trust in.

Uncertainties introduced at the different stages of QRA

The identification stage

The identification stage includes system description as well as the actual identification of possible initiating events and scenarios. In this stage of the analysis the main objective is to produce a comprehensive list of possible initiating events, and possibly also to identify priorities between them and make decisions on which of them are to be analysed further. The dominant question regarding uncertainty at this stage will be that of completeness. Well-established methods for structured identification are used in order to facilitate completeness, e.g. HAZard and OPerability (HAZOP) procedures, what-if analysis and Failure Mode and Effects Analysis (FMEA). The Master Logic Diagram also helps in the identification of initiating events based on a structured method. During this stage of an analysis, accident and failure databases are also useful. This type of uncertainty, related to completeness, is often very difficult to quantify. Peer reviews, by experts in QRA, conducted on the lists identified can help towards achieving completeness.

Frequency estimation

Based on historical record

The approach of using historical records and incident frequencies is widely used. One major benefit of this approach is that (provided that some fundamental criteria are met, such as a sufficient number of records and applicability of the data to the process in question) the frequency estimate will include most relevant circumstances leading to the event. Such circumstances include failure modes that are inherently difficult to analyse, such as human errors and common cause failures. The obvious problems related to such an approach originate from questions of accuracy and applicability. Historical data may be inaccurate, incomplete or inappropriate. For instance, it is seldom the case that an adequate amount of data has been collected from the activity one is about to analyse, making the use of data from related activities necessary. Caution should always be used when applying this kind of generic data to one specific establishment, since local conditions may deviate considerably from those under which the generic data were gathered. Another drawback of this approach is that direct and uncritical use of historical data may fail to recognise changes in the system.

Fault and event tree analysis

Fault tree analysis is used to derive the frequency of a hazardous incident, using a logical model consisting of basic system components, safety systems and human reliability. Event tree analysis essentially constitutes a model that identifies and quantifies possible outcomes following an initiating event. Some problems associated with fault tree and event tree techniques have been identified, related to questions of completeness and simplification, as well as uncertainty regarding parameters in the model. Much effort must be devoted to developing a well-structured fault tree, and the omission of significant failure mechanisms can lead to erroneous results. Additionally, many of the parameters in the models must be determined using historical data, expert judgment or a combination of the two, making them to some extent vulnerable to the same problems as the historical record approach described above.

Consequence estimation

The consequence estimation part of the analysis consists of several interacting parts. Physical models are used to estimate, for instance, concentrations of dispersed radioactive substances at various locations around the source. Various effect models (e.g. stochastic and non-stochastic) are used to predict the effect on the object of the study, e.g. death or injury to human beings, or effects on physical property such as damage to structures. Not surprisingly, all these exercises are, to some extent, afflicted with uncertainties, both stochastic and epistemic. Some general examples are given below. The actual physical modelling is a process in which mathematical models are used to represent reality, e.g. real physical processes such as the dispersion of radionuclides. Obviously, any mathematical model of such a complex physical process can only be an approximation of that process. This kind of (knowledge-based) uncertainty is often difficult to quantify, although attempts have been made to establish uncertainty bounds on model estimates using a semi-quantitative approach. When modelling the effects on humans of exposure to toxic substances, the prevailing approach is to use results from dose-response tests performed on laboratory animals and to extrapolate these data to humans. In the case of radiation exposure, the atomic bomb survivor data and data obtained from radiation exposures during the treatment of certain diseases have been used. "Most toxicological considerations are based on the dose-response function. A fixed dose is administered to a group of test organisms and, depending on the outcome, the dose is either increased until a noticeable effect is obtained, or decreased until no effect is obtained." It is not difficult to realise that such an approach will be associated with substantial uncertainties, owing both to the extrapolation from animal data to humans (knowledge-based uncertainty) and to the fact that in any population exposed to the same dose of a substance there will be a significant spread in response (stochastic uncertainty).

Estimation of risk

The final step in the quantitative risk analysis process is to generate the actual risk measure. This is usually done by combining the probability of a certain outcome with the consequence of that particular outcome, and then aggregating the information from all the outcomes identified. Numerous risk measures have been suggested in the literature, but here only two main groups of measures will be briefly introduced, i.e. individual risk measures and societal risk measures. The term individual risk refers to the risk to which a person present at a specific location in the vicinity of a hazard is exposed. Individual risk is often expressed as the probability of fatality at that location per year. Societal risk is a measure of the risk to a group of people, and is often used to complement individual risk measures in order to account for the fact that major incidents often have the potential to affect many people. The most common form of presentation of societal risk is the FN-curve, which is the frequency distribution of multiple-casualty events identified at the object under study. The uncertainties introduced during this stage of the QRA process are principally related to assumptions and simplifications made in order to decrease the complexity of the analysis, i.e. the computational burden. Various symmetry assumptions regarding, for instance, equally probable wind directions, the distribution of ignition sources and the population distribution, together with assumptions of a single or a few wind and stability conditions, raise questions regarding the completeness of the analysis.

Methods of representing uncertainty

The probabilistic approach

The by far most common approach used to represent uncertainty regarding a quantity, either stochastic or epistemic, is to use probabilistic distributions. Due to the high degree of epistemic, or knowledge-based, uncertainty involved in the QRA process, the frequentist interpretation of probability, which is valid only if it is possible to identify a sample of independent, identically distributed observations of the phenomenon of interest, does not work in all situations, making a Bayesian approach necessary.

Interval representation

The interval representation of uncertainty is useful in situations where we are absolutely sure about the bounds of a quantity, but know little or nothing else. Interval analysis can be used to estimate the possible bounds on model outputs by using bounds (i.e. intervals) to represent uncertainty about model inputs and parameters.

The probability bounds approach

A pair of probability bounds may be used to circumscribe the uncertainty regarding a probability distribution. Probability bounds may be constructed from parametric probability distributions where the parameters are uncertain, as shown in the Figure, where parameter X is a log-normal distribution with the mean = [2.4, 3] and standard deviation σ = [0.8, 1]. It is also possible to construct probability bounds in a distribution-free context where the particular shape of the distribution cannot be specified.

In these cases, bounds on the possible distributions that are consistent with the empirical information are generated. For example, suppose the only information available on parameter Y is its min = 2, max = 3 and mean = 2.5, and nothing is known about the shape of the distribution. In the Figure, the bounds on all possible distributions given this information are shown for parameter Y. Probability bounds have been derived for various sets of information regarding the uncertain variable; examples of such sets of information are sample data, knowledge about the mean and variance, knowledge about the minimum, maximum and mode, etc.

Figure. Examples of probability bounds representing uncertainties in unknown quantities X and Y.
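To make the construction of such parametric bounds concrete, the following minimal Python sketch computes a pointwise envelope of log-normal CDFs for a parameter X whose mean lies in [2.4, 3] and whose standard deviation lies in [0.8, 1]. The details are illustrative assumptions, not taken from the source: the interval is interpreted here as the arithmetic mean, SciPy is assumed to be available, and only the corners of the parameter box are evaluated, which is a simplification (in general the envelope may have to be taken over the whole parameter region).

    import numpy as np
    from scipy import stats

    def lognormal_cdf(x, mean, sd):
        """CDF of a log-normal variable specified by its arithmetic mean and standard deviation."""
        sigma2 = np.log(1.0 + (sd / mean) ** 2)   # variance in ln-space
        mu = np.log(mean) - 0.5 * sigma2          # mean in ln-space
        return stats.lognorm.cdf(x, s=np.sqrt(sigma2), scale=np.exp(mu))

    x = np.linspace(0.01, 8.0, 400)
    corners = [(m, s) for m in (2.4, 3.0) for s in (0.8, 1.0)]       # corners of the parameter box
    cdfs = np.array([lognormal_cdf(x, m, s) for m, s in corners])
    lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)                # lower and upper probability bounds for X

A denser sampling of the parameter box gives a more faithful envelope, at the cost of more CDF evaluations.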

Fuzzy representation

The theory of fuzzy sets provides a means of modelling the uncertainty (or vagueness) of natural language. Within this framework, notions like "densely populated" and "relatively strong winds" can be formalized using so-called membership functions. The main idea is easily grasped by a comparison with classical set theory, where the membership (characteristic) function μA(x) of a set A takes only the values 0 and 1:
μA(x) = 1 if x ∈ A;  μA(x) = 0 if x ∉ A

Fuzzy set theory allows for a continuous value of μA(x) between 0 and 1:

μA(x) = 1 if x fully belongs to A;  μA(x) = 0 if x ∉ A;  μA(x) = p, with 0 < p < 1, if x partially belongs to A
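To make the contrast with the crisp characteristic function concrete, a minimal Python sketch of a triangular membership function is given below. The linguistic term and the numbers (a peak at 15 m/s with support [8, 22] m/s) are purely illustrative assumptions, not values from the source.

    def mu_strong_wind(speed, low=8.0, peak=15.0, high=22.0):
        """Triangular membership function for the fuzzy set 'relatively strong wind' (speed in m/s)."""
        if speed <= low or speed >= high:
            return 0.0                              # clearly not a member of the set
        if speed <= peak:
            return (speed - low) / (peak - low)     # rising edge: partial membership
        return (high - speed) / (high - peak)       # falling edge: partial membership

    print(mu_strong_wind(10.0))   # ~0.29, i.e. partially "relatively strong"
    print(mu_strong_wind(15.0))   # 1.0, i.e. fully a member of the set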

Fuzzy arithmetic is an offshoot of fuzzy set theory, and can also be regarded as a generalisation of interval analysis. Uncertainty can also arise due to general quality issues, such as the state of the art in science and engineering, improper definition of the assessment problem, the competence of the analyst team, etc. The scope of treating such issues formally is very broad and perhaps unattainable at a practical level. Organisational factors and the effects of managerial decisions can also affect the results of a risk analysis, and this has gained increased interest in recent years.

The use of models in risk analysis

The use of models, either conceptual or mathematical, to represent reality is by far the most common approach in the risk analysis process. Since it is literally impossible to create the "perfect" model, i.e. a model that imitates reality exactly in every detail, there will always be limitations on the use of any existing model. Being aware of this fact, and taking precautions so that the model used is valid for the specific situation under consideration, is a very important step in reducing the uncertainty caused by imperfect models. Sometimes, however, there are no models explicitly validated for the specific situation, or it may not be known which of the available models should be used to obtain the best results. In situations like these, one may, for instance, make use of several parallel models in order to compare the results and in this way enhance their credibility. All use of models will introduce subjective judgment into the analysis. Which model or models best represent reality in a specific situation will always be a question of belief when there is no, or only sparse, empirical data available to support any of them. Under these circumstances one often has to rely on subjective expert judgment.

Treatment of model uncertainty

Model validation

Model validation in this context refers to exercises where model predictions are tested against experimental data that are independent of the data set used to develop the model. The topic of model validation has been extensively discussed in many areas (e.g. standard problem exercises for various thermal hydraulic codes and aerosol codes). A general rule is that whenever a risk analyst is about to use a model, it is up to that analyst to assess the usefulness of the model in the specific situation. Situations might arise, however, where model validation is not viable, making model comparison studies, possibly using theoretical test cases, one way to evaluate model predictions. However, model comparison exercises cannot by any means replace a proper validation process, since each of the models used in the exercise might contain inaccuracies. In addition to validation of the models, model uncertainty comprising the relevance, validation and variability of the modelled phenomenon also needs to be considered.

Methods of parameter uncertainty propagation and analysis

Here, the impact that uncertainties in the input parameters will have on the model output is examined. It is possible to discern three major groups of techniques for examining the effects of uncertain inputs on model output (a minimal illustration of the first of these is given after the list below):
- Sensitivity analysis, i.e. methods of assessing the effect various changes in input parameters might have on the model output.

- Uncertainty propagation, i.e. methods of transmitting the uncertainty in the model inputs to the model output.
- Importance measures, i.e. methods of calculating the relative contribution of the uncertainty in the input parameters to the uncertainty in the model output.
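The following minimal Python sketch illustrates the one-at-a-time procedure behind the first group of techniques; the toy risk model and its nominal values are invented for the example and are not from the source.

    def risk_model(params):
        """Toy individual-risk model: release frequency x ignition probability x fatality probability."""
        return params["frequency"] * params["p_ignition"] * params["p_fatality"]

    nominal = {"frequency": 1e-4, "p_ignition": 0.1, "p_fatality": 0.3}
    base = risk_model(nominal)

    for name in nominal:
        perturbed = dict(nominal)
        perturbed[name] *= 1.10                    # change one input by +10 %, keep the others at nominal
        relative_change = (risk_model(perturbed) - base) / base
        print(f"{name}: {relative_change:+.1%} change in model output")

For this purely multiplicative toy model every input is equally influential (+10 % each); in real QRA models the ranking is rarely this symmetric, and such simple sensitivities also depend on the size of the perturbation chosen.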

These three analyses are routinely carried out in the various PSA studies performed for Indian NPPs.

Response surface methods

Most of the models commonly used in quantitative risk analysis are computer programs. Regression analysis and response surface methods may be used to produce an analytical expression, based on only a few input variables, representing the more complex computer model. When using response surface methods it is imperative to make sure that the variables used for the response surface equation are the ones of most interest for the uncertainty analysis, and that the model is not used outside the parameter range defined by the regression analysis. It should also be noted that the generation of the response surface equations introduces yet another kind of model uncertainty. However, statistical measures of the goodness of fit of the surface are available.

Sensitivity analysis

Sensitivity studies are aimed at identifying the important variables in a model, i.e. the variables that have the greatest impact on the model output. Sensitivity analysis is often performed by changing the value of one uncertain parameter at a time, while maintaining all others at their nominal values, and then assessing the relative impact each change has on the model output. However, one serious problem associated with simple sensitivity measures when comparing the importance of the uncertainty in different inputs is that they depend on the units of the inputs and output. Nevertheless, sensitivity analysis is frequently used to identify which parameters should be included in the full uncertainty analysis, i.e. propagation and importance studies.

Probabilistic uncertainty analysis

The probabilistic framework is by far the most widely used for dealing with uncertainty in most areas of risk analysis. Within this framework one primarily makes use of probability distributions to describe the parameter uncertainty. The propagation of the uncertain variables f1, f2 and f3 (here represented by their respective probability density functions, P(f)) through the model function f(f1, f2, f3) is schematically described in the figure shown below.

Fig: Propagation of uncertainty through a model

Analytical methods

The use of analytical methods for propagating uncertainty is still widely recognised, notwithstanding the fact that they are often only approximate methods with somewhat constrained validity, and the fact that developments in personal computers have made computationally expensive sampling methods more feasible.

Here, only a selection of the available methods will be presented and discussed.

Approximation from the Taylor series

Exact analytical methods of propagating uncertainty are rarely employed in risk analysis since they are tractable only for simple cases, such as linear combinations of normal variables. The approximate techniques, often referred to as the "method of moments", are based on a Taylor series expansion of the function. The name "method of moments" refers to the fact that with these methods one propagates and analyses uncertainty using mostly the mean and variance, but sometimes also higher-order moments of the probability distributions. Consider X, a vector of n uncertain inputs, and f(X) the function representing a model generating the output y, as follows:

X = (x1, x2, ..., xn),  y = f(X)

Assume that the nominal value xi0 for each input is equal to its expectation value:

xi0 = E[xi]  for i = 1 to n

From this it follows that the nominal scenario is also the mean scenario:

X0 = (x10, x20, ..., xn0) = E[X]

The Taylor series expansion provides a way of expressing deviations in the output from its nominal value, y − y0, in terms of deviations in its inputs from their nominal values, xi − xi0. Successive terms contain higher-order powers of the deviations and higher-order derivatives of the function with respect to each input. Below, the expansion around the nominal scenario, including the first three terms, is shown.
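A standard form of this expansion, reconstructed here to match the surrounding description (the source's exact notation may differ), is

\[
y - y_0 \approx \sum_{i}\frac{\partial f}{\partial x_i}(x_i - x_i^0)
+ \frac{1}{2}\sum_{i}\sum_{j}\frac{\partial^2 f}{\partial x_i \partial x_j}(x_i - x_i^0)(x_j - x_j^0)
+ \frac{1}{6}\sum_{i}\sum_{j}\sum_{k}\frac{\partial^3 f}{\partial x_i \partial x_j \partial x_k}(x_i - x_i^0)(x_j - x_j^0)(x_k - x_k^0)
\]

where y0 = f(X0).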

It should be noted that all derivatives are evaluated at the nominal scenario X0. If the deviations xi − xi0 are relatively small, the higher powers will become very small, and if the function is relatively smooth in the region of interest, the higher derivatives will be small. Under these conditions the Taylor series produces a good approximation even when the higher-order terms are ignored.

First order approximation

In order to simplify the calculations, one usually takes only the first-order term into consideration. To first order, the expected value of y can be approximated by the nominal value, since the expected value of the deviation in y is zero:

E[y − y0] ≈ 0,  E[y] ≈ y0 = f(X0)

One can now obtain the general first-order approximation of the variance in the output, using only the first-order term from the equation for y − y0.
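Written out in standard form (a reconstruction; the source's notation may differ), this first-order variance approximation in terms of the input covariances is

\[
\mathrm{Var}(y) \approx \sum_{i}\sum_{j}\frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\,\mathrm{Cov}(x_i, x_j)
\]

with the partial derivatives evaluated at the nominal scenario X0.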

The above expression can, after some modification and under the assumption of independence between the uncertain inputs, be transformed into the simple Gaussian approximation formula given below:
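In its usual form (again a reconstruction, assuming the independence just mentioned), the Gaussian approximation reads

\[
\mathrm{Var}(y) \approx \sum_{i}\left(\frac{\partial f}{\partial x_i}\right)^{2}\mathrm{Var}(x_i),
\qquad\text{i.e.}\qquad
\sigma_y^{2} \approx \sum_{i}\left(\sigma_{x_i}\,\frac{\partial f}{\partial x_i}\right)^{2}
\]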

As shown in the above equation, the variance of the output y is approximately the sum of the squares of the products of the standard deviation and the sensitivity (partial derivative) of each input xi.
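As a minimal numerical illustration of this formula (the model y = x1·x2 and all numbers are invented for the example, not taken from the source):

    import math

    def f(x1, x2):
        """Toy model: y = x1 * x2."""
        return x1 * x2

    x1_0, x2_0 = 5.0, 2.0        # nominal (mean) values of the inputs
    s1, s2 = 0.5, 0.3            # standard deviations of the inputs

    # Sensitivities (partial derivatives) evaluated at the nominal scenario:
    df_dx1 = x2_0                # d(x1*x2)/dx1 = x2
    df_dx2 = x1_0                # d(x1*x2)/dx2 = x1

    var_y = (s1 * df_dx1) ** 2 + (s2 * df_dx2) ** 2
    print(f"E[y] ~ {f(x1_0, x2_0):.1f}, sd(y) ~ {math.sqrt(var_y):.2f}")   # E[y] ~ 10.0, sd(y) ~ 1.80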

Monte Carlo sampling

When we have a model with several uncertain inputs, in each iteration a value is sampled from the respective distribution of each uncertain input and the model output is calculated. By performing a large number of iterations, a distribution of the model output is produced, representing the total uncertainty in the model output due to the uncertainties in the model inputs.

Latin hypercube sampling

Latin hypercube sampling is a refinement of classical Monte Carlo (or random) sampling, which uses "stratified sampling without replacement".

Two-phase sampling procedures

In situations where it is desirable to keep different uncertainties separate in an analysis, for instance separating stochastic and epistemic uncertainty, "two-phase" sampling procedures are suitable. A two-phase sampling procedure is based on either traditional Monte Carlo sampling or another kind of sampling scheme, e.g. the Latin hypercube procedure described above. The procedure is conceptually relatively simple: the sampling is performed in two "loops", an outer and an inner loop, to which the two different groups of uncertain parameters belong. For each iteration in the outer loop, a specified number of iterations is performed in the inner loop.

Interval arithmetic

The concept of interval arithmetic offers a computationally inexpensive, logically consistent methodology that produces conservative estimates of uncertainty in the final result of an analysis. Interval analysis can be used to propagate uncertainty in input parameters (specified as intervals) through a model. An appealing feature of interval analysis is that it is fairly straightforward, which makes the methodology attractive, for example, in the screening phase of an analysis. Consider two variables X and Y, given as intervals [xl, xu] and [yl, yu] respectively, where xl ≤ xu and yl ≤ yu. The most basic arithmetic operations for intervals are given below:

X + Y = [xl + yl, xu + yu]
X − Y = [xl − yu, xu − yl]
X × Y = [min(xl·yl, xl·yu, xu·yl, xu·yu), max(xl·yl, xl·yu, xu·yl, xu·yu)]
X / Y = [min(xl/yl, xl/yu, xu/yl, xu/yu), max(xl/yl, xl/yu, xu/yl, xu/yu)], provided that 0 ∉ [yl, yu]

Probability bounds analysis

The most attractive feature of this approach is that it provides a way of using the information available on a parameter to construct bounds on the possible probability distributions without having to make any (unjustified) assumptions.
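Before turning to fuzzy arithmetic, the following minimal Python sketch illustrates the two-phase (nested-loop) sampling procedure described above. The toy consequence model, the choice of which input is epistemic and which is aleatory, and all distributions are illustrative assumptions, not taken from the source.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    def consequence(wind_speed, source_term):
        """Toy consequence model: larger source term and lower wind speed give a larger consequence."""
        return source_term / wind_speed

    n_outer, n_inner = 200, 1000
    results = np.empty((n_outer, n_inner))

    for i in range(n_outer):
        # Outer loop: one sample of the epistemic (knowledge-based) parameter, here an uncertain source term.
        source_term = rng.lognormal(mean=np.log(50.0), sigma=0.4)
        # Inner loop: aleatory variability, here the wind speed at the time of release
        # (shifted by 1 m/s to keep the toy model away from zero wind speeds).
        wind_speed = 1.0 + 5.0 * rng.weibull(2.0, size=n_inner)
        results[i, :] = consequence(wind_speed, source_term)

    # Each row is one possible distribution of the consequence: the spread within a row reflects
    # aleatory variability, the spread across rows reflects epistemic uncertainty.
    p95 = np.percentile(results, 95, axis=1)
    print(f"95th percentile of the consequence ranges from {p95.min():.1f} to {p95.max():.1f}")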

Fuzzy arithmetic

Fuzzy arithmetic can be regarded as a generalisation of interval analysis, in that a fuzzy number can be considered to be a nested stack of intervals, each at a different level of presumption, α, between 0 and 1. The range of values is widest at a presumption, or 'possibility', level of zero. Just above α-level zero is the interval that everyone would agree contains the true value, i.e. the most conservative range. Fuzzy arithmetic is an offshoot of fuzzy set theory, and the rules for combining fuzzy numbers in calculations are given within this framework. The arithmetic of fuzzy numbers essentially reduces to interval analysis repeated once for each α-level. The difference is that fuzzy arithmetic generates an entire distribution instead of a simple interval or range. Fuzzy numbers in the sense described above have been used to some extent to represent uncertainty in various risk analysis applications over the past 10-20 years. The main argument for using fuzzy numbers and fuzzy arithmetic over the more classical probabilistic approach in risk analysis is that they are claimed to "make fewer assumptions" than probability theory, principally because they are based on weaker axioms. Obviously, no one can argue against probability theory possibly proving more powerful in situations where all of its axioms are satisfied, but it is claimed that risk analysis is often performed in situations where, for example, access to data is severely limited.

Arguments for and against the different approaches to uncertainty analysis

It is impossible to identify a single approach to uncertainty analysis that will prove to be the most powerful in all situations. The choice of approach is a delicate one and will most certainly depend on factors such as the purpose of the risk assessment, the information at hand, and the nature of the uncertainty, e.g. variability or ambiguity. It will always, to some extent, be a matter of opinion which method is most appropriate in a given situation. Indeed, the differences between the methods are substantial, and the choice of method may significantly influence the final result of the risk assessment. It might at times prove wise to employ more than one methodology for a particular situation, perhaps at different times during the process, e.g. interval analysis in the screening phase and some other method for the more detailed analysis, or to answer different questions or address different problems.

Methods of ranking uncertain parameters

One of the major objectives in performing a complete parameter uncertainty analysis is to rank the parameters with respect to their contributions to the uncertainty in the model prediction, the most obvious reason being that such a ranking makes it possible to allocate research resources efficiently, should a reduction in the calculated uncertainties in the output prove necessary in order to reach an acceptable degree of reliability in the results. Of course, the methods available for this kind of ranking will depend on the type of uncertainty propagation method used.

The probabilistic analytical methods of uncertainty propagation provide the variance of the model prediction as a function of the variances and covariances of the uncertain parameters, and an immediate ranking of the individual parameters with respect to their contributions to the overall uncertainty in the model prediction is thus possible. One fairly simple and straightforward method of ranking uncertain parameters is to calculate the sample correlation coefficient between the model prediction and each of the uncertain parameters, using the sample of output values and the corresponding sample of values for each input. The correlation coefficient provides an estimate of the degree of linear relationship between the sample values of the model output and the input parameter. This is done for every input parameter, providing a measure of how much each input contributes to the output uncertainty. However, where there are significant correlations between the input parameters, the correlation coefficient measure fails to recognise these correlations.

Conclusions

In this paper, the uncertainties involved in the various phases of quantified risk assessment have been identified, methods for estimating these uncertainties have been presented, ways in which the uncertainties can be propagated have been highlighted and, finally, ways of ranking the uncertainties have been presented.
