Abstract
The use of single-valued assessments of company portfolios and projects continues to decline as the industry accepts that strong subsurface uncertainties dictate an ongoing consideration of ranges of outcomes. Exploration has pioneered the use of probabilistic prospect assessments as the norm, in both majors and independents. Production has lagged, in part because of the need to comply with US Securities and Exchange Commission (SEC) reserves-reporting requirements that drive a conservative deterministic approach. Look-backs continue to show the difficulty of achieving a forecast within an uncertainty band as well as the difficulty of establishing what that band should be. Ongoing challenges include identifying relevant static and dynamic uncertainties, efficiently and reliably determining ranges and dependencies for those uncertainties, incorporating production history (brownfield assessments), and coupling subsurface with operational and economic uncertainties. Despite these challenges, none of which are fully resolved, a systematic approach based on probabilistic principles [often including design-of-experiment (DoE) techniques] provides the best auditable and justifiable means of forecasting projects and presenting decision makers with a suitable range of outcomes to consider.

Introduction
Probabilistic subsurface assessments are the norm within the exploration side of the oil and gas industry, in both majors and independents (Rose 2007). However, in many companies, the production side is still in transition from single-valued deterministic assessments, sometimes carried out with ad hoc sensitivity studies, to more-rigorous probabilistic assessments with an auditable trail of assumptions and a statistical underpinning. Reflecting these changes in practices and technology, recently revised SEC rules for reserves reporting (effective 1 January 2010) allow the use of both probabilistic and deterministic methods, in addition to allowing reporting of reserves categories other than proved. This paper presents some of the challenges facing probabilistic assessments and some practical considerations for carrying out the assessments effectively.
Martin Wolff, SPE, is a Senior Reservoir Engineering Advisor for Hess Corporation. He earned a BS degree in electrical engineering and computer science and an MS degree in electrical engineering from the University of Illinois and a PhD degree in petroleum engineering from the University of Texas at Austin. Previously, Wolff worked for Schlumberger, Chevron, Fina, Total, and Newfield. He has served as a Technical Editor and Review Chairperson for SPE Reservoir Evaluation & Engineering and has served on steering committees for several SPE Forums and Advanced Technology Workshops.
Look-Backs: Calibrating Assessments
Look-backs continue to show the difficulty of achieving a forecast within an uncertainty band, along with the difficulty of establishing what that band should be. Demirmen (2007) reviewed reserves estimates in various regions over time and observed that estimates are poor and that uncertainty does not decrease over time. Otis and Schneidermann (1997) describe a comprehensive exploration-prospect-evaluation system that, starting in 1989, included consistent methods of assessing risk and estimating hydrocarbon volumes, including post-drilling feedback to calibrate those assessments. Although detailed look-backs for probabilistic forecasting methodologies have been recommended for some time (Murtha 1997) and are beginning to take place within companies, detailed open publications on actual fields are still rare, possibly because of the newness of the methodologies or because of data sensitivity.

Identifying Subsurface Uncertainties
A systematic process of identifying relevant subsurface uncertainties and then categorizing them can help by breaking down a complex forecast into simple uncontrollable static or dynamic components that can be assessed and calibrated individually (Williams 2006).
Copyright 2010 Society of Petroleum Engineers This is paper SPE 118550. Distinguished Author Series articles are general, descriptive representations that summarize the state of the art in an area of technology by describing recent developments for readers who are not specialists in the topics discussed. Written by individuals recognized as experts in the area, these articles provide key references to more definitive work and present specific details only to illustrate the technology. Purpose: to inform the general readership of recent advances in various areas of petroleum engineering.
Nonsubsurface, controllable, and operational uncertainties also must be considered, but the analysis usually is kept tractable by including them later with decision analysis or additional rounds of uncertainty analysis.

Grouping parameters also can reduce the dimensionality of the problem. When parameters are strongly correlated (or anticorrelated), grouping them is justifiable. In fact, not grouping (i.e., Balkanizing) such parameters could cause them to be dropped by standard screening methods such as Pareto charts. For example, decomposing a set of relative permeability curves into constituent parameters such as saturation endpoints, critical saturations, relative permeability endpoints, and Corey exponents can cause them all to become insignificant individually. Used together, relative permeability often remains a dominant uncertainty.

Assigning Ranges to Uncertainties
Once identified, the range of each uncertainty must be quantified, which may appear straightforward but contains subtle challenges. Breaking down individual uncertainties into components (e.g., measurement, model, or statistical error) and carefully considering portfolio and sample-bias effects can help create reasonable and justifiable ranges. Some uncertainties, especially geological ones, are not handled easily as continuous variables. In many studies, several discrete geological models are constructed to represent the spectrum of possibilities. To integrate these models with continuous parameters and generate outcome distributions, likelihoods must be assigned to each model. Although assigning statistical meaning to a set of discrete models may be a challenge if those models are not based on any underlying statistics, the models do have the advantage of more readily being fully consistent scenarios rather than a combination of independent geological-parameter values that may not make any sense (Bentley and Woodhead 1998). As noted previously, validation with analog data sets and look-backs should be carried out when possible because many studies and publications have shown that people have a tendency to anchor on what they think they know and to underestimate the true uncertainties involved. Therefore, any quantitative data that can help establish and validate uncertainty ranges are highly valuable.

Assigning Distributions to Uncertainties
In addition to ranges, distributions must be specified for each uncertainty. There are advocates for different approaches: naturalists strongly prefer realistic distributions that are often observed in nature (e.g., log normal), while pragmatists prefer distributions that are well behaved (e.g., bounded) and simple to specify (e.g., uniform or triangular). In most cases, specifying ranges has a stronger influence on forecasts than the specific distribution shape, which may have little effect (Wolff 2010). Statistical correlations between uncertainties also should be considered, although these too are often secondary effects.
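The relative influence of range width vs. distribution shape can be checked with a quick Monte Carlo comparison such as the minimal sketch below. It assumes a simple volumetric product with hypothetical input ranges; the variable names, numbers, and distribution choices are illustrative assumptions, not data from the paper. Changing the shape (triangular vs. uniform) typically moves the outcome percentiles far less than widening a single input range does.

```python
# Minimal sketch (hypothetical numbers): effect of distribution shape vs. range
# width on a simple volumetric outcome, OOIP ~ GRV * N/G * phi * So / Bo.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

def ooip(grv, ntg, phi, so, bo=1.2):
    return grv * ntg * phi * so / bo

# Case A: triangular distributions (min, mode, max are assumed example values)
a = ooip(rng.triangular(80, 100, 120, n),      # gross rock volume, 10^6 m3
         rng.triangular(0.5, 0.7, 0.9, n),     # net-to-gross
         rng.triangular(0.18, 0.22, 0.26, n),  # porosity
         rng.triangular(0.65, 0.75, 0.85, n))  # oil saturation

# Case B: same ranges, different shape (uniform)
b = ooip(rng.uniform(80, 120, n), rng.uniform(0.5, 0.9, n),
         rng.uniform(0.18, 0.26, n), rng.uniform(0.65, 0.85, n))

# Case C: triangular again, but with a wider GRV range (range change, not shape)
c = ooip(rng.triangular(60, 100, 140, n), rng.triangular(0.5, 0.7, 0.9, n),
         rng.triangular(0.18, 0.22, 0.26, n), rng.triangular(0.65, 0.75, 0.85, n))

for name, x in [("triangular", a), ("uniform", b), ("wider GRV range", c)]:
    p90, p50, p10 = np.percentile(x, [10, 50, 90])  # petroleum convention: P90 <= P50 <= P10
    print(f"{name:16s} P90={p90:5.1f}  P50={p50:5.1f}  P10={p10:5.1f}")
```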
Uncertainty-to-Forecast Relationships
Once uncertainties have been identified and quantified, relationships must be established between the uncertainties and the forecasts. These relationships sometimes can be established from analytical and empirical equations but also may be derived from models ranging from simple material balance through full 3D reservoir simulation. When complex models are used to define relationships, it often is useful to apply DoE methods to investigate the uncertainty space efficiently. These methods involve modeling defined combinations of uncertainties to fit simple equations that can act as efficient surrogates, or proxies, for the complex models. Monte Carlo methods then can be used to investigate the distribution of forecast outcomes, taking into account correlations between uncertainties (a minimal sketch of this workflow appears below).

DoE methods have been used for many years in the petroleum industry. The earliest SPE reference found was a perforating-gun study by Vogel (1956), the earliest reservoir work was a wet-combustion-drive study by Sawyer et al. (1974), and early references to 3D reservoir models include Chu (1990) and Damsleth et al. (1992). These early papers all highlight the main advantage of DoE over traditional one-variable-at-a-time (OVAT) methods: efficiency. Damsleth et al. (1992) give a 30 to 40% advantage for D-optimal designs compared with OVAT sensitivities. For an extensive bibliography of papers showing pros and cons of different types of DoE and applications of DoE to specific reservoir problems, see the expanded version of this paper (Wolff 2010).

Model Complexity
Given that computational power has increased vastly since the 1950s and 1970s, with ever-more-powerful multicore processors and cluster computing, an argument can be made that computational power should not be regarded as a significant constraint for reservoir studies. However, Williams et al. (2004) observe that gains in computational power are generally used to increase the complexity of the models rather than to reduce model run times. Most would agree with the concept of making things no more complex than needed, but different disciplines have different perceptions of that level of complexity. This problem can be made worse by corporate peer reviews, especially in larger companies, in which excessively complex models are carried forward to ensure buy-in by all stakeholders. Highly complex models also may require complex logic to form reasonable and consistent development scenarios for each run. Finally, the challenge of quality control (QC) of highly complex models cannot be ignored: garbage in, garbage out applies more strongly than ever. Launching directly into tens to hundreds of DoE runs without ensuring that a base-case model makes physical sense and runs reasonably well often leads to many frustrating cycles of debugging and rework. A single model can readily be quality controlled in detail, while manual QC of tens of models becomes increasingly difficult. With hundreds or thousands of models, automatic-QC tools become necessary to complement statistical methods by highlighting anomalies.
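To make the DoE-to-proxy-to-Monte Carlo workflow described above concrete, the following minimal sketch uses a two-level factorial design on three coded uncertainties, fits a linear proxy by least squares, and then samples the proxy instead of the simulator. The run_model function is only a placeholder standing in for an expensive reservoir-simulation run; all names and coefficients are assumptions for illustration.

```python
# Minimal sketch of a DoE -> proxy -> Monte Carlo workflow (illustrative only).
import itertools
import numpy as np

def run_model(x):
    # Placeholder for an expensive simulation; x holds coded levels (-1/+1)
    # for, e.g., permeability multiplier, aquifer strength, and kv/kh.
    perm, aquifer, kvkh = x
    return 100 + 18 * perm + 9 * aquifer - 5 * kvkh + 4 * perm * aquifer

# 1. Design: all combinations of -1/+1 for three uncertainties (8 runs).
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
response = np.array([run_model(x) for x in design])   # e.g., recovery in MMbbl

# 2. Fit a linear proxy  y = b0 + b1*x1 + b2*x2 + b3*x3  by least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)

# 3. Monte Carlo on the cheap proxy instead of the simulator.
rng = np.random.default_rng(0)
samples = rng.uniform(-1, 1, size=(50_000, 3))        # assumed input distributions
forecast = np.column_stack([np.ones(len(samples)), samples]) @ coef
print("P90/P50/P10:", np.round(np.percentile(forecast, [10, 50, 90]), 1))
```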
[Fig. 1: Proxy surfaces of varying complexity obtained with different experimental designs.]
Proxy Equations
Fig. 1 shows proxy surfaces of varying complexity that can be obtained with different designs. A Plackett-Burman (PB) design, for example, is suitable only for linear proxies. A folded Plackett-Burman (FPB) design (an experimental design with double the number of runs of the PB design, formed by adding runs in which the +1 and -1 entries of the design matrix are reversed) can provide interaction terms and lumped second-order terms (all second-order coefficients equal). A D-optimal design can provide full second-order polynomials. More-sophisticated proxies can account for greater response complexities, but at the cost of additional refinement simulation runs. These more-sophisticated proxies may be of particular use in brownfield studies in which a more-quantitative proxy could
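As a rough illustration of these differences in proxy complexity, the sketch below fits both a linear proxy and a full second-order polynomial to the same set of three-level design runs on two coded factors. The response function, coefficients, and evaluation point are invented purely for illustration and are not taken from the paper.

```python
# Minimal sketch: linear vs. full second-order polynomial proxies fitted to the
# same design runs (illustrative response with curvature and an interaction).
import numpy as np

def run_model(x1, x2):
    # Placeholder "true" response with curvature and an interaction term.
    return 50 + 10 * x1 + 6 * x2 + 8 * x1 * x2 - 7 * x1**2 + 3 * x2**2

# Three-level design over two coded factors (-1/0/+1): nine runs.
levels = np.array([-1.0, 0.0, 1.0])
x1, x2 = np.meshgrid(levels, levels)
x1, x2 = x1.ravel(), x2.ravel()
y = run_model(x1, x2)

def fit(terms):
    X = np.column_stack(terms)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef, coef

ones = np.ones_like(x1)
y_lin, _ = fit([ones, x1, x2])                               # linear proxy
y_quad, coef = fit([ones, x1, x2, x1 * x2, x1**2, x2**2])    # full second order

print("max |error|, linear proxy      :", np.max(np.abs(y - y_lin)))
print("max |error|, second-order proxy:", np.max(np.abs(y - y_quad)))  # ~0 here

# The second-order proxy can then estimate responses between design levels,
# e.g. at coded point (0.4, -0.3), without another simulation run.
pt = np.array([1.0, 0.4, -0.3, 0.4 * -0.3, 0.4**2, (-0.3)**2])
print("proxy estimate at (0.4, -0.3):", pt @ coef)
```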
Determining which development options (i.e., unconstrained or realistic developments, including controllable variables) to choose for building the proxy equations and running Monte Carlo simulations also has challenges. One approach is to use unconstrained scenarios that effectively attempt to isolate subsurface uncertainties from the effects of these choices (Williams 2006). Another approach is to use a realistic base-case development scenario if such a scenario already exists, or to make an initial pass through the process to establish one. Studies that use DoE for optimization often include key controllable variables in the proxy equation despite knowing that this may present difficulties, such as more-irregular proxy surfaces requiring higher-level designs. Integrated models consider all uncertainties together (including surface and subsurface), which eliminates picking a development option. These models may be vital for the problem being analyzed; however, they present additional difficulties. Either computational costs will increase or compromises to subsurface physics must be made, such as eliminating reservoir simulation in favor of simplified dimensionless rate-vs.-cumulative-production tables or proxy equations. That reopens the questions: What development options were used to build those proxies? And how valid are those options in other scenarios?

Deterministic Models
Short of using integrated models, there remains the challenge of applying and optimizing different development scenarios across a probabilistic range of forecasts. Normal practice is to select a limited number of deterministic models that capture a range of outcomes, often three (e.g., P10/P50/P90) but sometimes more if testing particular uncertainty combinations is desired. Normal practice is to match probability levels of two outcomes at once (e.g., pick a P90 model that has both P90 OOIP and P90 oil recovery). Some studies attempt to match P90 levels of other outcomes at the same time, such as discounted oil recovery (which ties better to simplified economics because it puts a time value on production), recovery factor, or initial production rate. The more outcome matches that are attempted, the more difficult it is to find a suitable model. Selecting which subsurface uncertainties to vary to create a deterministic model, and how much to vary each of them, is a subjective exercise because there is an infinite number of combinations. Williams (2006) gives guidelines for building such models, including trying to ensure a logical progression of key uncertainties from low to high models. If a proxy is quantitatively sound, it can be used to test particular combinations of uncertainties that differ from those of the DoE before building and running time-consuming simulation models. The proxy also can be used to estimate response behavior for uncertainty levels between the two or three levels (-1/0/+1) typically defined in the DoE. This can be useful for tuning a particular combination to achieve a desired response (see the sketch below), and it allows moderate combinations of uncertainties. Such moderate combinations, rather than the extremes used in many designs, will be perceived as more realistic.
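A minimal sketch of such proxy-based tuning is shown below: a simple grid search over moderate coded levels looks for a combination whose proxy response lands on an assumed target value (e.g., the P90 recovery from the Monte Carlo step). The proxy coefficients, target, and search granularity are all illustrative assumptions; in practice, the proxy would come from the DoE fit and the target from the forecast distribution, and additional constraints (such as a logical low-to-high progression of the key uncertainties) would narrow the choice among the many combinations that can hit the same target.

```python
# Minimal sketch: tune a deterministic-model parameter combination toward a
# target response using a fitted proxy (all numbers are assumed for illustration).
import itertools
import numpy as np

def proxy(x):
    # Assumed linear proxy in three coded uncertainties (-1 .. +1).
    return 100 + 18 * x[0] + 9 * x[1] - 5 * x[2]

target_p90 = 78.0                      # assumed target response (e.g., MMbbl)

# Search moderate coded levels rather than forcing every factor to -1 or +1,
# so the selected combination reads as a plausible scenario.
levels = np.linspace(-1, 1, 21)
best, best_err = None, np.inf
for combo in itertools.product(levels, repeat=3):
    err = abs(proxy(combo) - target_p90)
    if err < best_err:
        best, best_err = combo, err

print("coded levels for P90 model:", np.round(best, 2),
      "proxy response:", round(proxy(best), 1))
```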
Using moderate combinations in this way also will solve the problem of not being able to set all key variables to -1 or +1 levels and still follow a logical progression of values to achieve P90 and P10 outcomes. However, interpolation of uncertainties can sometimes be:
- Challenging (some uncertainties, such as permeability, may not vary linearly compared with others, such as porosity)
- Challenging and time-consuming (e.g., interpolating discrete geological models)
- Impossible [for uncertainties with only a discrete number of physical states, such as many decision variables (e.g., 1.5 wells is not possible)]

Finally, selecting the deterministic models to use is usually a whole-team activity because each discipline may have its own ideas about which uncertainties need to be tested and which combinations are realistic. This selection process achieves buy-in by the entire team before heading into technical and management reviews.

Probabilistic brownfield forecasting has the additional challenge of needing to match dynamic performance data. Although forecasts should become more tightly constrained with actual field data, data quality and the murky issue of what constitutes an acceptable history match must be considered. History-match data can be incorporated into probabilistic forecasts through several methods. The traditional and simplest method is to tighten the individual uncertainty ranges until nearly all outcomes are reasonably history matched. This approach is efficient and straightforward but may eliminate some more-extreme combinations of parameters from consideration. Filter-proxy methods that use quality-of-history-match indicators (Landa and Güyagüler 2003) will accept these more-extreme uncertainty combinations. The filter-proxy method also has the virtue of transparency: explanation and justification of the distribution of the matched models is straightforward, as long as the proxies (especially those of the quality of the history match) are sufficiently accurate. More-complex history-matching approaches such as genetic algorithms, evolutionary strategies, and the ensemble Kalman filter are a very active area of research and commercial activity, but going into any detail on these methods is beyond the scope of this paper.

Conclusion: What Do We Really Know?
In realistic subsurface-forecasting situations, enough uncertainty exists about the basic ranges of parameters that absolute numerical errors of less than 5 to 10% usually are considered relatively minor, although it is difficult to give a single value that applies to all situations. For example, when tuning discrete models to P10/P50/P90 values, a typical practice is to stop tuning when the result is within 5% of the desired value. Spending a lot of time to obtain a more precise result is largely wasted effort, as look-backs have shown consistently. Brownfield forecasts are believed to be more accurate than greenfield forecasts that lack calibration, but look-backs also suggest that it may be misleading to think the result is the correct one just because a great history match was obtained. Carrying forward a reasonable and defensible set of working models that span a range of outcomes makes much more sense than hoping to identify a single true answer. As