

Reasoning and Modeling Systems in Diagnosis and Prognosis


Amit Mathur(a), Kevin F. Cavanaugh(a), Krishna R. Pattipati(a), Peter K. Willett(b), and Thomas R. Galie(c)

(a) Qualtech Systems, Inc., 100 Great Meadow Road, Suite 501, Wethersfield, CT 06109
(b) Department of Electrical & Computer Engineering, University of Connecticut, Storrs, CT 06269
(c) Naval Surface Warfare Center - Carderock Division, Naval Business Center, Philadelphia, PA 19112

ABSTRACT
Diagnosis and prognosis are processes of assessment of a system's health - past, present, and future - based on observed data and available knowledge about the system. Due to the nature of the observed data and the available knowledge, diagnostic and prognostic methods are often a combination of statistical inference and machine learning methods. The development (or selection) of appropriate methods requires appropriate formulation of the learning and inference problems that support the goals of diagnosis and prognosis. An important aspect of the formulation is modeling, which relates the real system to its mathematical abstraction. This paper explores the impact of diagnostic and prognostic goals on modeling and reasoning system requirements, with the purpose of developing a common software framework that can be applied to a large class of systems. In particular, the role of failure-dependency modeling in the overall decision problem is discussed. The applicability of Qualtech Systems' modeling and diagnostic software tools to the presented framework, for both the development and implementation of diagnostics and prognostics, is assessed. Finally, a potential application concept for advancing the reliability of Navy shipboard Condition Based Maintenance (CBM) systems and processes is discussed.

Keywords: Prognostics, diagnostics, dependency modeling, data analysis

1. INTRODUCTION
Diagnosis and prognosis are processes of assessment of a system's health. Diagnosis is an assessment of the current (and past) health of a system based on observed symptoms, and prognosis is an assessment of its future health. While diagnostics, the science dealing with diagnosis, has long existed as a discipline in medicine and in system maintenance and failure analysis, prognostics is a relatively new area of research. Indeed, the word "prognostics" does not yet appear in English-language dictionaries. The term was likely coined by the research community developing diagnosis and prognosis techniques to aid condition-based maintenance. Hess et al. ([3],[5]) have proposed a working definition of prognostics as "the capability to provide early detection and isolation of precursor and/or incipient fault condition to a component or sub-element failure condition, and to have the technology and means to manage and predict the progression of this fault condition to component failure." This definition includes prognosis (i.e., the prediction of the course of a fault or ailment) only as a second stage of the prognostic process. The preceding stage of detection and isolation could in fact be considered a diagnosis process. Even the traditional meaning of prognosis used in the field of medicine implies the recognition of the presence of a disease and its character prior to performing prognosis. However, in the maintenance context, the final step of making decisions about maintenance and mission planning should also be included while developing prognostics, for more convenient traceability to top-level goals. This paper reports the results of investigations performed during the first phase of a recent research and development effort towards an integrated system for diagnosis and prognosis supporting condition-based maintenance of complex, high-assurance systems such as aircraft, ships, and industrial plants.
The paper focuses on prognostics, but also discusses the need to consider both diagnostics and prognostics when developing an integrated health management system. In particular, it recognizes that since the two processes are based on observed and monitored (operational) data, the development of the required methods needs to consider the modeling of the relationships between the system and the observed data as well as the analytical methods for making decisions. The next section addresses problem formulation issues, specifically of prognostics, while relating them to diagnostics. Section 3 describes current modeling approaches and introduces the dependency modeling approach used in QSI's TEAMS software to support integrated diagnostics. Section 4 describes an integrated system for CBM support that provides for user-configurable diagnosis and prognosis. Section 5 concludes with a discussion of a potential back-fit of the proposed integrated system in a CBM environment.

2. PROGNOSTICS AND DIAGNOSTICS


Since both diagnostics and prognostics are concerned with health assessment, it is logical to study them together. However, the decision-making goals of the two are different. Diagnosis results are used for reactive decisions about corrective (repair/replacement) actions; prognosis results are used for proactive decisions about preventive and/or evasive actions (CBM, mission reconfiguration, etc.) with the economic goal of maximizing the service life of replaceable/serviceable components while minimizing operational risk. On the other hand, in many situations diagnosis and prognosis aid each other. Diagnostic techniques can be used to isolate components where incipient faults have occurred based on observed degradation in system performance; these components can then become the focus of prognostic methods to estimate when the incipient faults would progress to critical levels causing system failure. Prognostics could be used to update the failure rates (reliabilities) of the system components, and in the event of a failure these updated reliability values can be used to isolate the failed component(s) via a more efficient troubleshooting sequence.

Diagnostics

Diagnosis is the process of identifying the cause(s) of a problem. The causes of interest may not be root causes: depending on the application or the level of maintenance activity, diagnosis can imply the isolation of a faulty component, a failure mode, or a failure condition. It involves the recognition of one or more symptoms or anomalies (i.e., the recognition that what is observed is not normal), and their association with a ground truth. The cause could be within the system (a failed component) or an external factor that damages the system and prevents it from functioning normally in the future.
The objectives, job function, or capacity of the user of the diagnostic results determine whether the user needs to address the external root cause, the damaged system component(s), or both (i.e., whether the corrective action should focus on eliminating the external root cause, repairing/replacing the damaged components, or both). In many large systems instrumented with built-in sensors and diagnostic tests, the steps of anomaly detection and root-cause isolation are distinct. The failure of one or more tests signifies anomalies or failures, and the processing of these results to isolate the failure source constitutes the isolation step. In some applications, however, failure detection and isolation are not separable steps. For example, diagnostic problems are often formulated as classification problems: associate a vector of features obtained from the system with a class corresponding either to the normal (healthy) state or to one of the failure modes. Such approaches are effective during corrective maintenance if the relationship between the failure modes constituting the classes and the implicated components is obvious or implied.

Prognostics - Formulation

While diagnosis is based on observed data and available knowledge about the system and its environment, prognosis makes use of not only the historical data and available knowledge, but also profiles of future usage and external factors. Both diagnosis and prognosis can be formulated as inference problems that depend on the objectives of diagnosis and prognosis and the nature of the available data and system knowledge. Two characteristics common to all applications are the incomplete or imprecise knowledge about the systems of interest, especially in the failure space, and the uncertainty or randomness of observed data. The development of an appropriate decision model thus requires consideration of these characteristics along with the objectives of the decision problem.
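As a concrete illustration of the classification formulation of diagnosis mentioned above, a minimal nearest-centroid sketch follows. All feature data and class names here are hypothetical, and the nearest-centroid rule merely stands in for whatever classifier a real application would use:

```python
# Diagnosis as classification: map a feature vector to the healthy class
# or to one of the failure modes.  Centroids would normally be learned
# from labeled training data; here they are hypothetical.
import math

# Features are, say, [vibration RMS, bearing temperature (deg F)].
centroids = {
    "healthy":         [1.0, 70.0],
    "bearing_wear":    [3.5, 95.0],
    "rotor_imbalance": [6.0, 75.0],
}

def diagnose(features):
    """Return the class whose centroid is nearest to the observed features."""
    def dist(c):
        return math.sqrt(sum((f - x) ** 2 for f, x in zip(features, c)))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(diagnose([1.2, 72.0]))   # near the healthy centroid
print(diagnose([5.8, 76.0]))   # near the rotor-imbalance centroid
```

In practice the features would be normalized (temperature dominates the raw Euclidean distance here), but the sketch shows how detection and isolation collapse into a single classification step.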
The early research on prognostics has dealt largely with specific applications or case studies (e.g., [10],[14]). This is to be expected, since prognostics as an engineering problem arose from a need to promote condition-based maintenance (CBM) practices for reducing the costs incurred by inefficient schedule-based preventive maintenance [9]. Irrespective of the prognostic approaches considered (empirical or analytical), their implementation requires reference to the application domain: either to obtain data required for training prognostic algorithms, or to model the functional relationships based on the physical laws governing the system. Prognostics is receiving the most attention for systems consisting of mechanical and structural components, where there is an opportunity to change current maintenance practices from scheduled-preventive to condition-based maintenance. A large number of recent industrial research and development efforts in prognostics have been spurred by the military seeking to change its maintenance practices; to a lesser extent, the need for prognostics has also been recognized by other industries relying heavily on mechanical plant equipment such as power turbines, diesel engines, and other rotating machinery. Unlike electronic or electrical systems, mechanical systems typically fail gradually: structural faults in mechanical parts progress slowly to a critical level, and monitoring the growth of these faults provides an opportunity to assess degradation and re-compute remaining component life over a period of time. The typical scenario is a slowly evolving structural failure due to fatigue; the causes of this fatigue are repetitive stresses induced by vibration from rotating machinery (high-cycle fatigue) and stresses due to temperature cycles (low-cycle fatigue). In electronic parts, by contrast, faults tend to be abrupt at the component level.
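The slow fatigue-driven failure progression described above can be sketched with a Paris-law crack-growth calculation. All constants, loads, and crack lengths below are illustrative only and are not tied to any real component:

```python
# Paris-law fatigue crack growth: da/dN = C * (dK)^m, with stress
# intensity range dK = stress_range * sqrt(pi * a).  Forward-Euler
# integration of crack length until an assumed critical length.
import math

C, m = 1e-12, 3.0         # Paris-law coefficients (illustrative)
stress_range = 100.0      # stress range per load cycle, MPa (illustrative)
a = 1e-3                  # initial crack length, m
a_crit = 1e-2             # assumed critical crack length, m

cycles = 0
step = 1000               # integrate in blocks of 1000 cycles
while a < a_crit:
    dK = stress_range * math.sqrt(math.pi * a)
    a += C * dK ** m * step
    cycles += step

print(f"~{cycles} cycles to reach the critical crack length")
```

The point of the sketch is the shape of the problem: the crack grows slowly at first and accelerates, so periodic monitoring of crack (or vibration) indicators leaves a usable window for prognosis before the critical level is reached.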

Due to the complexity of geometry-dependent physical models of mechanical structures, developing failure evolution models for such applications can quickly become computationally prohibitive or intractable, and recourse is taken to empirical (data-dependent) forecasting models. Thus, developing prognostics for mechanical systems has become heavily dependent on the availability of reliable historical data, even where some physical modeling is possible [4]. The selection of techniques likewise depends on the nature of the data. Most efforts are still in their infancy, and therefore results are not easily available in the public domain. Researchers have reported general approaches and the specific methods (neural networks, wavelets, fuzzy methods, etc.) used in the context of specific machinery or components (e.g., [4],[10]-[14]), but results pertaining to success with validation efforts are not readily (or publicly) available. One reason that prevents public reporting of these results could be that the studies depend on possibly proprietary data, either supplied by the customer (system owner) or collected at significant cost. The working definition quoted in Section 1 specifies an objective of prognostics as managing and predicting the progression of fault conditions to component failure. Indeed, most prognostics research addresses the prognosis of individual components or subsystems/modules. This is why the techniques tend to be physics-based, based on reliability modeling of specific components, or based on test data collected from specially instrumented systems. The extension of component-level prognostic conclusions to system-level decisions depends on the relationship of the system components to the system functionality, and on the criticality of component failures to system function (see, for example, [8]). Engel et al. [3] have attempted to formulate prognosis as a general problem of estimating failure probability distributions.
In general, all formulations aim to predict the time to failure. The presentation of the output can vary depending on the application; some of the typical outputs are as follows:

- Time to failure (of what? The system, a component, or a failure mode of the system?)
- Remaining useful life or time to end-of-life (of what? Of a specific component?)
- Probability that failure will occur before the next (scheduled) inspection, maintenance, or overhaul
- Probability that component life will end before the next (scheduled) overhaul or replacement
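Under an assumed parametric lifetime model, the probability-type outputs above reduce to conditional-probability computations. A sketch with a Weibull lifetime distribution (the shape and scale values are illustrative, not drawn from any real component):

```python
# Weibull lifetime model: survival S(t) = exp(-(t/eta)^beta).
# Given survival to age t, compute the probability of failing before the
# next scheduled inspection at t + dt, and the median remaining life.
import math

beta, eta = 2.0, 1000.0   # shape, scale (hours); illustrative values

def survival(t):
    return math.exp(-((t / eta) ** beta))

def p_fail_before(t, dt):
    """P(fail in (t, t+dt] | survived to t)."""
    return 1.0 - survival(t + dt) / survival(t)

def median_remaining_life(t):
    """Solve S(t+r)/S(t) = 0.5 for the remaining life r."""
    return eta * ((t / eta) ** beta + math.log(2.0)) ** (1.0 / beta) - t

print(f"risk over next 100 h: {p_fail_before(800.0, 100.0):.3f}")
print(f"median remaining life at 800 h: {median_remaining_life(800.0):.0f} h")
```

The same conditional-survival arithmetic answers "probability of failure before the next overhaul" for any choice of the inspection horizon dt.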

In many studies, prognostics has been developed with a specific failure of a specific component in mind. Hence, the "of what?" question appended to the items listed above is relevant, as it has an impact on the implementation of prognostics for the entire system. It highlights the importance of integrated hierarchical cause-effect modeling of the system, so that one can relate prognostic results for individual components and failure modes to system-wide conclusions. Defining time-to-failure or time-to-end-of-life requires defining what constitutes failure or end of life. Failure can be defined as the inability to perform a function. Hence, defining failure requires defining the function, and time-to-failure estimation is dependent on how the original function is defined. For example, functional requirements can change with mission type, resulting in different prognoses, and different decisions derived from them, for the same system/component. Depending on the technique applied, a measure of confidence for the prediction also needs to be supplied. Since the techniques are inevitably statistical (probabilistic), confidence measures are probabilistic: 90% confidence implies that the conditional probability of the time to failure lying in the estimated interval is 0.9. (Another approach to handling uncertainty is the use of fuzzy reasoning. However, the representation of uncertainty using fuzzy set theory is very different and must be interpreted accordingly.) Answers to questions such as what constitutes failure or end-of-life may be straightforward in many applications when posed in the context of the system of interest. However, they may not be as straightforward if the observed data and the features derived from them do not have a known quantifiable relationship with the failure event.
For example, degradation may be observed via worsening vibration and may be reliably trended using sophisticated techniques (e.g., [13]), but using these trends to predict the time to failure still depends on knowledge of the critical vibration level at which the system would fail (does the system fail when the vibration level increases by 20 dB from the current level, by 30 dB, and so on?). This is especially a problem if training data is not available (and what if vibration alone is not a reliable predictor of failure?). Other considerations in formulating the prognostics problem include:

What data is available? Is data available only from the instance of interest, or is population data available from the entire class of systems to which the particular instance belongs? If population data is available, then learning is possible (supervised, i.e., with training, or unsupervised). If only the current and past data from the instance of interest are available, then blind methods have to be used.

If the probability distribution of the data is known and parametric, parametric estimation methods can be applied, and estimating the distribution of the lifetime-to-go is an analytical or computational problem. If the distribution of the data is not known to belong to a parametric form, then simulation and bootstrapping approaches may be used.

From which parts of the system is the data obtained? Is it obtained directly from the component of interest, or from some system-level monitoring? Gas turbine engine prognostics is largely being developed using test-cell data measured directly from accelerometers mounted on the engine (e.g., [15]) or using sophisticated measurements made on turbine blades (e.g., [14]). In an engine mounted on an aircraft, observations may be limited to temperature measurements in the gas path and the corresponding RPM and fuel-flow readings, while the remaining measurements may include air-speed, throttle-stick position, etc.

Decision-making: If prognoses of multiple components in the same system are being computed simultaneously, how should they be analyzed together, and what decisions should be made regarding inspection, maintenance, and overhaul of the entire system or its components? Can each component's life-to-go be tracked independently of other components?
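The bootstrapping approach mentioned among the considerations above can be sketched as follows, using a small hypothetical sample of observed failure times when no parametric form is assumed:

```python
# Nonparametric bootstrap of the median time to failure.
# The failure-time sample below is hypothetical.
import random

random.seed(0)
failure_hours = [410, 520, 390, 610, 450, 480, 700, 430, 560, 500]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

# Resample with replacement and recompute the statistic each time.
boot = []
for _ in range(2000):
    resample = [random.choice(failure_hours) for _ in failure_hours]
    boot.append(median(resample))
boot.sort()

# Percentile interval for the median time to failure.
lo, hi = boot[int(0.05 * len(boot))], boot[int(0.95 * len(boot))]
print(f"median ~{median(failure_hours)} h, 90% bootstrap interval [{lo}, {hi}]")
```

The attraction of the method is that it needs only the data itself; the price is that the interval is only as representative as the sample, which is exactly the population-data consideration raised above.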

Relating to the maintenance environment: The maintenance infrastructure for large systems (such as an aircraft fleet) is typically structured as a hierarchical organization for effective management of resources and logistics. The applicability of prognostics, just like that of diagnostics, is thus tied to the maintenance activity in the respective hierarchical layers. For example, the maintenance activity of the defense forces is usually organized into three levels: the Organization level (O-level or, in the Army's case, Unit level) is the lowest-level activity and functions in the mission (system operation) environment; the Intermediate level (I-level) is the next higher level; and the Depot level (D-level) is the highest level and supports off-line functions. The main purpose of prognostics is to anticipate and prevent critical failures and the subsequent need for corrective maintenance. At the O-level, preventive maintenance consists of scheduled inspections, LRU replacement, and on-system (e.g., on-aircraft or flight-line) servicing. The advantage of prognostics would be realized in an O-level maintenance activity if the schedules for preventive maintenance can be updated for individual pieces of equipment and their components based on operational data and future usage. This implies predicting the time to failure of LRUs, as well as the impact of such failures on the system. For the entire system, times-to-failure need to be tracked at the LRU and system levels. The utility of prognostics at the I- and D-levels is lower than at the O-level. The I-level activity is primarily concerned with scheduled maintenance and with repairing/servicing LRUs that can be serviced without having to send parts to the Depot. Here, prognostics can be used to predict parts requirements and thus support lower inventory requirements. The D-level maintenance is concerned with specialized repairs or overhauls of LRUs as well as with inspection, servicing, and repair/replacement of SRUs.
Thus, prognostics has minimal applicability at the D-level. The types of prognostic techniques depend on the level of complexity of the subsystem, assembly, or component. Prognostic techniques at the individual-part level (e.g., SRU) tend to be physics-based or based on reliability model updates. Prognostic techniques at the subsystem or assembly level (e.g., LRU) tend to be less dependent on physical models and more on machine learning methods, since physics-based modeling of systems or assemblies consisting of several interacting components becomes difficult.

3. MODELS
Models that relate the physical system to the data observed from it are an important part of the formulation of the diagnostic/prognostic inference (reasoning) process. As mentioned in the previous section, several approaches are being researched and applied in the development of diagnostics/prognostics and health management. Some of the model paradigms are described below.

Physical models

Physical models are founded in the natural laws governing the system's operation or used to design the system, e.g., structural mechanics (properties of materials - solid, liquid, and gas), statics and dynamics of rigid bodies (e.g., finite-element models), thermodynamics, etc. Physical models are usually developed to explain the normal behavior of a system to facilitate system engineering, not the failure behavior. Indeed, the failure space of systems tends to be larger than the normal functional space. Physics-based failure models need to be specially built, usually one model for each failure mode. Needless to say, the intricate knowledge possessed by domain experts (scientists and engineers) is required, and, hence, such modeling is expensive.

Reliability models

Reliability modeling involves developing reliability distributions of individual components, and evaluating system reliability using reliability block diagrams. The reliability analysis of individual components consists of the specification of failure probability distribution functions based on the statistical analysis of empirical and laboratory data. When a system assembled from such individual components is considered, reliability block diagrams are used to analyze the overall system reliability using probabilistic and graph-theoretic techniques. Judicious assumptions often need to be made pertaining to the probabilistic independence of individual failures, and situations such as sympathetic failures and redundancy/reconfiguration need to be accounted for. Reliability models are helpful in engineering the overall health management system by identifying the parts of the system in need of health monitoring and diagnostics/prognostics. Component reliabilities can be used to update periodic maintenance and inspection schedules, and computed reliabilities of each subassembly, module, etc. can be used to efficiently isolate causes of anomalies or failures.

Machine learning models

Machine learning models are purely data-dependent models and require sufficient amounts of relevant historical training data to be effective. The most prominent techniques in this class are neural-network-based. Neural networks are useful for modeling phenomena that are hard to model using parametric/analytic expressions and equations, but the downsides are that they are hard to validate and, furthermore, do not enhance the basic understanding of the system/process under study. The learning demonstrated by neural networks is often impressive, but care is required when generalizing from them.

Dependency models

Dependency modeling, rooted in artificial-intelligence approaches, captures cause-effect relationships.
At a lower level of detail, causes can be associated with physical components; effects with failures of components, diagnostic tests, or symptoms; and the relations between causes and effects with physical links between components or directions of energy flow. Furthermore, a priori knowledge about occurrences of component failures or failure modes can be specified and used in the analysis for designing appropriate diagnostics/prognostics. In the following paragraphs, the failure dependency modeling used in QSI's integrated tool set is described.

Failure Dependency Modeling in TEAMS for Diagnostic Reasoning

Developing a model of the target system to drive the diagnostic and prognostic reasoning is one of the most important steps in implementing integrated diagnostics and prognostics. A dependency model of the target system is necessary for diagnostic reasoning, as it provides the relationships among component faults and their failure/anomaly effects observable in the rest of the system. It can also support prognostics through the inclusion of degraded performance modes as failure modes and of anomalies as tests (failure effects). QSI's TEAMS-based modeling (also called multi-signal or multi-functional modeling) methodology [1] was developed to capture functional dependencies in a system while retaining the hierarchical and structural relationships required to perform system-level reasoning by fault-isolation modules. The system can be modeled to as low a level of hierarchy as desired, depending on the level to which diagnosis or prognosis is desired. The specification of the diagnostic tests can also be entered into the model to enable the signal-processing library used by the diagnostic engines to invoke the tests using run-time data. The dependency modeling approach used in TEAMS offers several useful features that support the development of integrated diagnostics and prognostics:

1. The TEAMS multi-functional dependency graphs can retain the physical structure and hierarchy of the systems to a large extent; fault dependencies are overlaid onto the structure using signals or functions. Since layering signals over a structural model is more intuitive than building propagation models for abstract events, these graphs are easier to build than conventional dependency graphs. A flexible means for labeling the multiple hierarchy levels makes modeling easier for various system types and maintenance environments.

2. The failure modes modeled can include degraded failure modes, so that prognostics can be handled along with diagnostics.

3. The models are essentially fault models, i.e., they only model the dependencies in the system as relevant to the various system faults or anomalies. Since the modeling is done in a system's fault space, the complete physical functionality of the system in its normal mode of operation need not be modeled. Thus the models are relatively small, and their development and validation require less effort than simulation models.

4. The ability to model sensors just like any other modules allows consideration of faulty sensors causing test failures along with the normal diagnostic processing: sensor problems are not ruled out when sensor data shows anomalies. Sensor validation is further enabled by the ability of the diagnostic routines resulting from TEAMS models to detect inconsistent test results. Furthermore, errors in tests (exceedances, alarms, BIT tests, etc.) can be specified using probabilities of false alarm and missed detection. Tests enabled by virtual sensors can be modeled as easily as real sensor tests.

5. Provisions for special situations are available, e.g., AND nodes to model redundancies and reconfiguration in fault-tolerant systems, switches to model multiple modes of operation (e.g., regimes), and special signals for built-in tests or alarms. In addition, faults that feed back in a dependency loop can be handled by identifying breakable loops in the model for off-line fault-isolation within the loop.

6. As a system design evolves and additional information becomes available (e.g., new, previously unknown/unanticipated failure modes or anomalies), the models can be updated. In conventional dependency models, on the other hand, new dependency paths may have to be built if new failure modes are to be modeled. The multi-functional model data structures support an object-oriented implementation that facilitates reuse and easy maintenance.

Figure 1 illustrates the modeling of a subsystem in TEAMS. The analysis options provided by TEAMS support several capabilities needed for developing the requirements for a health management system for the target system. These include flexible testability analysis, reliability analysis, FMECA (failure modes, effects, and criticality analysis), and optimal/suboptimal diagnostic strategy generation [2]. The testability analysis can be conducted at any level of hierarchy and for any classes of tests (e.g., built-in tests/alarms, sensor exceedances, prognostic tests, manual troubleshooting tests, automatic tests, pilot/operator-observable symptoms/exceedances, etc.) to compute the fault coverage (levels of detection and isolation) of the tests.
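The fault-coverage computation can be illustrated with a toy binary fault-test dependency (D-) matrix. This is a hypothetical example, far simpler than the actual TEAMS multi-signal models: a fault is detectable if some test depends on it, and two faults are isolable only if their test signatures differ.

```python
# Toy dependency matrix: rows = faults, columns = tests;
# D[i][j] = 1 if fault i causes test j to fail.  Hypothetical data.
faults = ["pump", "valve", "seal", "sensor", "controller"]
D = [
    [1, 1, 0],   # pump       -> fails tests 0 and 1
    [1, 0, 0],   # valve      -> fails test 0
    [1, 0, 0],   # seal       -> fails test 0 (same signature as valve)
    [0, 1, 1],   # sensor     -> fails tests 1 and 2
    [0, 0, 0],   # controller -> undetected by any test
]

# Detection coverage: a fault with an all-zero row is never detected.
detected = [f for f, row in zip(faults, D) if any(row)]
undetected = [f for f, row in zip(faults, D) if not any(row)]

# Isolation: faults with identical signatures form an ambiguity group.
ambiguous = [(faults[i], faults[j])
             for i in range(len(D)) for j in range(i + 1, len(D))
             if D[i] == D[j]]

print("detected:", detected)
print("undetected:", undetected)
print("ambiguity groups:", ambiguous)
```

Here the analysis flags the controller as uncovered and the valve/seal pair as non-isolable, which is exactly the kind of conclusion testability analysis feeds back into test-point placement.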

Figure 1. Hierarchical dependency-graph modeling in TEAMS.

4. A FRAMEWORK FOR INTEGRATING DIAGNOSIS AND PROGNOSIS


The concept of integrated diagnostics and prognostics pertains to making the best or most appropriate set of strategies, techniques, and algorithms available at any level of system hierarchy (system, subsystem, LRU, SRU, etc.) and at every stage of a system's deployment (operation/mission, flight-line, off-line intermediate or depot maintenance, etc.) in support of operational efficiency, safety, reliability, and availability. Thus, the concept implies the ability to specify, design, and develop systems engineering tools, run-time tools and reasoners, user interfaces, databases, analysis and optimization algorithms, and a framework necessary for deploying these in the operational and support environment. QSI has formulated and developed an architecture for such a system, called the Integrated Diagnostic and Prognostic System (IDPS) framework, with the goal of making diagnostics and prognostics available to users in a CBM environment. The objective of this system is to provide an environment that can interface with the target system's condition/health monitoring environment (a CBM or HUMS environment), use the monitored data to detect degradation, anomalies, or failures, diagnose the cause(s), assess the health of the system based on prognostics of the faulty or failing components, and determine maintenance requirements. The CBM environment to which the IDPS is back-fit might itself perform some of these functions, such as failure/anomaly detection (e.g., alarm computation) and system-specific rule-based diagnosis. Assuming the results of these capabilities are available to IDPS, the IDPS will utilize them to perform complete diagnosis, prognosis, and maintenance requirements assessment. The main motivation behind designing the IDPS is the need for an integrated environment that interfaces with existing computer-based monitoring and maintenance systems and supports the evolution and growth of diagnostic and prognostic technologies as the needs of the systems change and new research yields increasingly sophisticated methods and techniques (see, for example, [5]). The effort is to provide a consistent methodology for different kinds of systems using different maintenance strategies. The key components of the IDPS framework/architecture are (see Figure 2):

1. System Configuration (Model) Editor
2. Executive
3. Signal Processing (Test) Module
4. Diagnostic Module
5. Prognostic Module
6. Database

These modules are described below.

1. System Configuration (Model) Editor - an environment for editing system models, and diagnostic and prognostic test definitions. The System Configuration Editor is based on QSI's TEAMS environment, which allows the modeling of system structure, component hierarchy, module and component properties including design reliabilities, fault-to-test dependencies, test-point placement, and test definitions (see the previous section for an example of modeling in TEAMS). The editor environment will provide system engineers the opportunity to build/edit or import system information, and to validate and update the multi-functional dependency models and module properties such as failure modes, failure rates, and repair times/costs. A library of reusable tests and modules (components) in the IDPS database can facilitate the development and management of the models.

2. Executive - the run-time engine for retrieving monitored data from the system's CBM database or data acquisition environment, for scheduling/dispatching the data for signal processing and for diagnostic and prognostic analyses, and for communicating with the user via a graphical interface. The Executive controls the following functional components:

- Test Executive, for scheduling failure detection tests in accordance with the test scripts in the model. Its function is to hide the implementation of the test routines from the rest of the system.
- Diagnostic Executive, for invoking inference engines for fault isolation based on the test results from the Test Executive. Inference engines may differ depending on whether the monitored data is real-time or post-mission data for off-line analysis. QSI's diagnostic engines use a common TEAMS dependency model of the system, and can diagnose in the presence of multiple simultaneous failures. The function of the Diagnostic Executive is to hide the implementation of the diagnostic reasoning engine from the rest of the system, thus allowing new techniques to be added seamlessly provided the interface API is conformed with. If diagnosis is adequately performed by the existing CBM environment, this stage of inference may be unnecessary.
- Prognostic Executive, for invoking usage and remaining-life calculations, and for displaying mission planning information to maintenance control. Similar to the Diagnostic Executive, its function is to hide the implementation of the prognostic routines, thus allowing open-systems expandability and scalability.
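The "hide the implementation" role of these executives amounts to programming against an interface, so that reasoners can be swapped without touching the rest of the system. A minimal sketch follows; the class and method names are hypothetical and are not the actual IDPS API:

```python
# An executive hides the reasoner implementation behind a fixed
# interface; any engine conforming to the interface can be plugged in.
# All names here are hypothetical illustrations.
from abc import ABC, abstractmethod

class DiagnosticEngine(ABC):
    @abstractmethod
    def isolate(self, test_results: dict) -> list:
        """Map test pass/fail results to a list of suspect items."""

class RuleBasedEngine(DiagnosticEngine):
    def isolate(self, test_results):
        # Trivial stand-in rule: every failed test implicates one suspect.
        return sorted(t for t, passed in test_results.items() if not passed)

class DiagnosticExecutive:
    """Dispatches to whichever engine is currently plugged in."""
    def __init__(self, engine: DiagnosticEngine):
        self.engine = engine

    def run(self, test_results):
        return self.engine.isolate(test_results)

executive = DiagnosticExecutive(RuleBasedEngine())
print(executive.run({"pump_test": False, "valve_test": True}))
```

A model-based engine (e.g., one reasoning over a dependency matrix) could replace RuleBasedEngine without any change to the executive or its callers, which is the open-systems expandability the architecture aims for.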

[Figure 2 here: block diagram of the IDPS architecture. The Executive coordinates the Test Executive and Signal Processing Library, the Diagnostic Engine (TEAMS-RT, TEAMATE), and the Prognostic Executive with its Prognostic Methods Library. User-facing blocks include the Prognostics User Interface and Interactive Diagnosis. System configuration flows from the System Configuration (TEAMS) editor and a Central DB (models, configuration data) accessed via ODBC. Sensor data reaches the system through Data Acquisition and machine-level open interfaces, and the existing CBM software (e.g., ICAS) and CBM Database are reached through a CBM open interface.]
Figure 2: IDPS architecture.

The Executive is designed to manage and use a shared-memory-based message pool for data exchange, which ensures that data is dispatched to the appropriate data processing, reasoning, and interface modules while maintaining the necessary synchronization between the monitoring and diagnosis/prognosis functions. It will also support a distributed architecture, allowing communication among components over the network. The Executive will control the following functions via agents (except initialization, which is its basic function):

a) Initialization: initialize the shared memory, and launch the failure detection, fault isolation, and prognostic processes along with the data logging and display processes.

b) Measure: interface with the CBM or HUMS system, or with the IDPS database, to obtain condition/health data and write the data to the Executive's shared memory.

c) Test: process raw measurements according to the failure detection test definitions in the system model, using signal processing and statistical testing routines in the Test (Signal Processing) Library.

d) Diagnosis: call a diagnostic reasoning engine to process test results and assign good, faulty, suspected, or unknown health status to each component in the system.

e) Prognosis: process the status of faulty and suspected components to estimate time to failure or remaining useful life.

f) Logging: display the health report and prognostic assessment on screen and log them to the database.

The Executive is the primary module that supports the expandability and scalability of the integrated environment by hiding the implementations of the various data acquisition, signal processing, diagnostic, and prognostic functions from each other.

3. Signal Processing Library: Many failure types require signal processing before the measurements can claim status as features for decision-making. To some extent signal processing is ancillary, although each application is interesting for its specific needs. The real issues are feature selection/reduction, treatment of missing data, and training-set usage. The Signal Processing Library is a dynamically linked library (DLL) of routines that supports the addition of new routines without requiring recompilation/rebuilding of the entire software. This capability is enabled via configuration data that is updated each time a new routine is added to the library. The configuration data format forms part of an open-systems specification; providers of new routines need supply only the necessary configuration data.

4. Prognostic Routines Library: The Prognostics Library is a DLL of prognostic routines with a design similar to that of the Signal Processing Library. It supports the addition of new routines without requiring recompilation/rebuilding of the entire software. This capability will be enabled via configuration data that is updated each time a new routine is added to the library. The configuration data format forms part of an open-systems specification; providers of new routines need supply only the necessary configuration data. One class of prognostic capabilities is expected to include routines for updating the usage of system components based on actual operation hours, operational regimes or mission type, damage modes, etc.; this usage information can be saved in the system configuration data in the database for assessing maintenance requirements, as well as for aiding dynamic test sequencing during interactive diagnosis.

5. Database: local management of system configuration data; component health, usage, and maintenance history data; diagnostic data; and instance-specific parameters for reusable tests and system components.

6. User Interfaces: interaction with maintenance control to perform maintenance planning, with operators for debrief and interactive diagnosis, and with maintainers for maintenance task execution. Human factors considerations need to guide the development of interface components and accessibility requirements. A Web-based design of the servers can support a distributed, multi-platform, three-tier architecture.
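The configuration-data-driven extensibility described for the Signal Processing and Prognostic Routines libraries can be sketched as follows. In this Python sketch a registry resolved via importlib stands in for DLL loading, and the CONFIG format is purely illustrative (the real open-systems specification's format is not reproduced here).

```python
import importlib

# Hypothetical configuration data: each entry names a routine and tells the
# loader where to find it. Adding a routine means appending an entry, not
# recompiling or rebuilding the software.
CONFIG = [
    {"name": "rms", "module": "math", "callable": "hypot"},  # stand-in routine
]

def load_routines(config):
    """Resolve each configured routine to a callable at run time."""
    routines = {}
    for entry in config:
        mod = importlib.import_module(entry["module"])
        routines[entry["name"]] = getattr(mod, entry["callable"])
    return routines

routines = load_routines(CONFIG)
value = routines["rms"](3.0, 4.0)  # the Executive dispatches by configured name
```

A provider of a new signal-processing or prognostic routine would ship only the routine and its configuration entry; the Executive discovers it at the next configuration reload.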


External Interfaces: The external links to the system's CBM database and application software are necessary both for accessing the health and usage data acquired by the CBM environment and for ensuring that the system configuration information is current. The need to maintain concurrency of system information between the CBM environment and the IDPS also requires the CBM environment to allow access to its system hierarchy and sensor information.

Deployment: A one-time installation of the configuration editor, executive, user interfaces, and the data processing and prognostic routine libraries will be required. Since all modules are data-driven (only the configuration data, including models, in the database needs to be updated when necessary), the architecture will support plug-and-play capability. As more sophisticated prognostic algorithms are made available by equipment OEMs and domain experts, the DLLs can simply be added to the appropriate libraries, with corresponding updates to the configuration data.
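As a concrete, deliberately simple illustration of the kind of routine such a library might receive, the sketch below extrapolates a linearly trending degradation feature to a failure threshold. The linear model and the function name remaining_useful_life are our assumptions; fielded prognostic algorithms would also model uncertainty, usage regimes, and damage modes.

```python
def remaining_useful_life(times, feature, threshold):
    """Least-squares linear fit of a degradation feature versus time,
    extrapolated to a failure threshold. Returns hours remaining after
    the last sample (a toy model for illustration only)."""
    n = len(times)
    t_bar = sum(times) / n
    f_bar = sum(feature) / n
    slope = (sum((t - t_bar) * (f - f_bar) for t, f in zip(times, feature))
             / sum((t - t_bar) ** 2 for t in times))
    if slope <= 0:
        return float("inf")  # no degradation trend detected
    intercept = f_bar - slope * t_bar
    t_fail = (threshold - intercept) / slope  # time at which fit hits threshold
    return max(0.0, t_fail - times[-1])

# Vibration amplitude drifting toward an alarm threshold of 10 units.
rul = remaining_useful_life([0, 10, 20, 30], [2.0, 3.0, 4.0, 5.0], 10.0)
```

Packaged with a configuration entry as described above, such a routine could be dropped into the Prognostics Library without rebuilding the IDPS.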

5. CBM OF SHIPBOARD SYSTEMS


In a recent Navy-funded project, the potential integration of the IDPS within a naval CBM architecture was investigated. The CBM system that was considered is called the Integrated Condition Assessment System (ICAS), which was developed by the Naval Sea Systems Command (NAVSEA) and is deployed on a majority of US Navy ships for monitoring the health of shipboard systems. Thus, ICAS serves as a platform for enabling condition-based maintenance practices for naval ships and for integrating sophisticated diagnostics and prognostics. The ICAS environment integrates the onboard sensor network with data acquisition, data collection, analysis, and user interfaces for operators and maintainers. It provides a set of open software interfaces for the addition and specification of new networked sensors in the shipboard configuration managed by it [7]. Another set of interfaces, called the Demand Data Interface [6], is provided for third-party software applications to access sensor data on demand. The information on the specific sensor data accessible from ICAS is provided in the ICAS Configuration Data Set (CDS). The Demand Data Interface is a potential integration point for the IDPS to obtain sensor data for diagnostic/prognostic reasoning. A complete integration of the IDPS with ICAS would also require a software interface to access the CDS data, so that the system models used by the diagnostic and prognostic techniques can be related to the system configuration. In general, the potential integration of the IDPS with ICAS would require a client-server relationship, where ICAS is viewed as a server providing data acquisition, storage, and possibly some analysis services to the IDPS client. This would require ICAS to ensure that such services are exposed via a sufficiently open Application Program Interface (API).
Other considerations include the nature of the data exchanged between the prognostics module and ICAS (real-time data, on-demand data, raw versus processed data), the data processing responsibilities of the prognostics module vis-à-vis ICAS, and the interface to other computerized maintenance management systems (CMMS).
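The client-server relationship described above can be sketched generically. The fetch callable and the sensor tag names below are invented stand-ins for whatever calls and identifiers the ICAS Demand Data Interface actually exposes; this is an illustration of the on-demand pattern, not the ICAS API.

```python
from typing import Callable, Dict, Iterable

class DemandDataClient:
    """Generic on-demand sensor client: the CBM system (server) exposes a
    fetch function keyed by sensor tag; the prognostics module (client)
    pulls current values only when its reasoning needs them."""

    def __init__(self, fetch: Callable[[str], float]):
        self.fetch = fetch  # server-side accessor, injected at configuration

    def snapshot(self, tags: Iterable[str]) -> Dict[str, float]:
        # Pull the current value of each configured sensor tag on demand.
        return {tag: self.fetch(tag) for tag in tags}

# Stand-in server: in a real integration this would call into the CBM API.
fake_server = {"LUBE_OIL_TEMP": 74.5, "SHAFT_RPM": 1180.0}
client = DemandDataClient(fake_server.__getitem__)
data = client.snapshot(["LUBE_OIL_TEMP", "SHAFT_RPM"])
```

Keeping the server accessor behind an injected callable mirrors the paper's requirement that ICAS services be exposed through a sufficiently open API.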

6. CONCLUSIONS
The development of diagnostics, prognostics, and health management for large modern systems requires multi-disciplinary research and the integration of a spectrum of modeling approaches, analysis techniques, and algorithms. Thus, a system-level perspective and an environment that facilitates this integration are both important. This paper has discussed some of the recent approaches used in these areas, and has presented QSI's approach to the problem, along with an environment containing several of the essential features required to effect integrated diagnostics and prognostics.

7. REFERENCES
[1] S. Deb, K.R. Pattipati, V. Raghavan, M. Shakeri, and R. Shrestha, "Multi-signal flow graphs: a novel approach for system testability analysis and fault diagnosis," Proceedings of the 1994 IEEE AUTOTESTCON, Anaheim, CA, pp. 361-373, September 1994.
[2] S. Deb, S. Ghoshal, A. Mathur, R. Shrestha, and K.R. Pattipati, "Multi-signal modeling for diagnosis, FMECA, and reliability," Proceedings of the 1998 IEEE International Conference on Systems, Man, and Cybernetics, San Diego, October 11-14, 1998.
[3] S.J. Engel, B.J. Gilmartin, K. Bongort, and A. Hess, "Prognostics, the real issues involved with predicting life remaining," Proceedings of the 2000 IEEE Aerospace Conference, Big Sky, Montana, March 18-25, 2000.
[4] A.K. Garga, K.T. McClintic, R.L. Campbell, C-C. Yang, M.S. Lebold, T.A. Hay, and C.S. Byington, "Hybrid reasoning for prognostic learning in CBM systems," Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, Montana, March 10-17, 2001.
[5] W. Hardman, A. Hess, and D. Blunt, "A USN development strategy and demonstration results for propulsion and mechanical systems diagnostics, prognostics and health management," Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, Montana, March 10-17, 2001.
[6] ICAS Demand Data Library Specification, Interface Control Document, draft/electronic copy available from Naval Surface Warfare Center, NAVSEA.
[7] ICAS User's Manual, Volume Three: CDS Developer Manual, draft/electronic copy available from Naval Surface Warfare Center, NAVSEA.
[8] G.J. Kacprzynski, M.J. Roemer, A.J. Hess, and K.R. Bladen, "Extending FMECA - health management design optimization for aerospace applications," Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, Montana, March 10-17, 2001.
[9] Open Systems Alliance for Condition-Based Maintenance, http://osacbm.org
[10] M.J. Roemer and G.J. Kacprzynski, "Advanced diagnostics and prognostics for gas turbine engine risk assessment," Proceedings of the 2000 IEEE Aerospace Conference, Big Sky, Montana, March 18-25, 2000.
[11] M.J. Roemer, G.J. Kacprzynski, and R.F. Orsagh, "Assessment of data and knowledge fusion strategies for prognostics and health management," Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, Montana, March 10-17, 2001.
[12] M.J. Roemer, G.J. Kacprzynski, E.O. Nwadiogbu, and G. Bloor, "Development of diagnostic and prognostic technologies for aerospace health management applications," Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, Montana, March 10-17, 2001.
[13] D.C. Swanson, "A general prognostic tracking algorithm for predictive maintenance," Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, Montana, March 10-17, 2001.
[14] P. Tappert, A. von Flotow, and M. Mercadal, "Autonomous PHM with blade-tip sensors: algorithms and seeded fault experience," Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, Montana, March 10-17, 2001.
[15] F. Wen and P. Willett, "Condition monitoring for helicopter data," Proceedings of the 2000 IEEE International Conference on Systems, Man, and Cybernetics, pp. 224-229, Nashville, Tennessee, October 2000.
