
Computer System Validation Overview

Computer systems and software should be validated during all phases of their life cycle:

When setting user requirements and functional specifications. This is called design qualification (DQ) and includes vendor qualification.
During installation. This is called installation qualification (IQ).
Before and during operation. This is called operational qualification (OQ).
During routine use. This is called performance qualification (PQ).

For a specific project, validation activities should follow a validation plan.

The user requirements are set. These describe the analysis problem and include instrument performance requirements for a specific analysis task.

From the user requirements and the type of analytical equipment and computer system, the functions and functional specifications are derived.

The user then selects a standard instrument and appropriate options. A vendor should be selected who develops hardware and software in accordance with a quality assurance system, for example ISO 9001.

If the standard software supplied by the vendor does not cover all of the user's requirements, user-specific software is developed as an add-on macro, either by the user, the vendor, or a third party.

The modules are installed and put together as a system. Correct installation and operation should be verified against the functional specifications defined by the user, a process called installation qualification (IQ) and operational qualification (OQ).

The proper functioning of analytical methods should be verified on the new system. This covers testing of significant method characteristics, for example, limit of detection, limit of quantification, selectivity, and linearity. If the method has not been validated, or if its scope did not cover the new instrument, the method should be newly validated or revalidated.

The performance of the complete system should be validated against the user's requirement specifications. The system combines the instrument hardware, computer hardware and software, and the analytical method; in chromatography, it also includes a column and reference standards for calibration. This validation, usually referred to as system suitability testing, tests the system against documented performance specifications for the specific analytical method. Analytical systems should be tested for suitability prior to and during routine use, practically on a day-to-day basis.

When analyzing samples, the data should be validated. The validation process includes documentation and checks for data plausibility, data integrity, traceability, and security. A complete audit trail that allows the final result to be traced back to the raw data should be in place.
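To make the traceability idea concrete, here is a minimal sketch of how a reported result can carry its own audit trail back to the raw data file. The class, file name, analyst, and fields are hypothetical examples introduced for illustration; they are not taken from the text.

```python
# Minimal sketch of an audit trail linking a reported result to its raw data.
# All names, users, and values are hypothetical.
from datetime import datetime, timezone

class AuditedResult:
    def __init__(self, raw_data_file, value, unit, analyst):
        self.raw_data_file = raw_data_file   # traceability back to the raw data
        self.value = value
        self.unit = unit
        self.trail = [self._entry(analyst, "initial evaluation", value)]

    def _entry(self, user, reason, value):
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "reason": reason,
            "value": value,
        }

    def amend(self, user, reason, new_value):
        """Record every change; earlier values remain in the trail."""
        self.trail.append(self._entry(user, reason, new_value))
        self.value = new_value

result = AuditedResult("RAW_0815.d", 12.3, "mg/L", analyst="A. Miller")
result.amend("A. Miller", "reintegration after baseline correction", 12.1)

for entry in result.trail:
    print(entry)
print("Final result:", result.value, result.unit, "traced to", result.raw_data_file)
```

In a real chromatography data system this record keeping is built into the software; the sketch only illustrates the kind of information the audit trail must preserve so that a final result can be traced back to its raw data.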

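The day-to-day system suitability testing described earlier in this section can be sketched in the same spirit: measured figures of merit from a suitability run are compared against the documented acceptance criteria for the method. The parameters, limits, and values below are invented for illustration and do not come from the original text.

```python
# Minimal sketch of a system suitability check (hypothetical criteria and data).
# Replicate injections of a reference standard are compared against the
# documented performance specifications for the specific analytical method.

def rsd_percent(values):
    """Relative standard deviation, in percent, of replicate results."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 100.0 * (variance ** 0.5) / mean

# Documented performance specifications (assumed example limits).
acceptance_criteria = {
    "peak_area_rsd_max_percent": 2.0,            # precision of replicate injections
    "retention_time_window_minutes": (4.8, 5.2),  # allowed retention time range
}

# Measured values from today's suitability run (example data).
peak_areas = [10450, 10512, 10388, 10475, 10430, 10501]
retention_times = [5.02, 5.01, 5.03, 5.02, 5.04, 5.01]

area_rsd = rsd_percent(peak_areas)
rt_low, rt_high = acceptance_criteria["retention_time_window_minutes"]

checks = {
    "peak area RSD": area_rsd <= acceptance_criteria["peak_area_rsd_max_percent"],
    "retention time window": all(rt_low <= t <= rt_high for t in retention_times),
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
print("System suitable for routine use:", all(checks.values()))
```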
Software product life cycle

Software development often takes several years, and it is impossible to ensure a certain quality standard simply by testing the program at the end of its development process. Quality cannot be designed into the software after the code is written; it must be designed and programmed into the software prior to and during its development phases by following written development standards, including the use of appropriate test plans and test methods.

The product life cycle approach, as illustrated in figure 1, has been widely accepted to validate computerized systems during their entire life. The product life is divided into phases:

setting user requirements and functional specifications
design and implementation, with code generation and inspection
testing of subsystems, then building a system and testing it as a system
installation and qualification of the system before it can be put into routine use
monitoring the performance of the system during its entire use
maintenance and recording of the history of changes

Operational qualification

Correct functioning of software and computer systems should be verified after installation and before routine use. While in the past regulatory agencies did not pay much attention to software and computer systems, this has recently changed. For example, OECD consensus paper number 10 requires acceptance testing, which is part of an operational qualification (OQ).

Operational qualification for software and computer systems is more difficult than for hardware. There are three reasons:

It is more difficult to define specifications for software.
It is more difficult to define test procedures and acceptance criteria.
There are hardly any guidelines available on OQ of software and computer systems.

While equipment hardware performance problems are easily identified, this is not always the case with software. Even though defects may be present from the start, they may only become evident after certain combinations of software modules are executed.

Because of these problems, there is even more uncertainty for software and computer systems than for equipment hardware. The basic questions are:

How much testing is enough?
Should all functions be tested?
How should the tests be performed?
If I have multiple computers with the same configuration, should I repeat all tests for all systems?

Too much testing can become quite expensive, and insufficient testing can be a problem during an audit. For example, I have seen test protocols of 200 and more pages that users of an off-the-shelf commercial computerized chromatographic system developed over several weeks. Each software function, such as switching the integrator on and off, had been verified as part of an OQ procedure. This is not necessary if the tests have been done and documented at the vendor's site.

Because of the uncertainty and special problems with software and computer systems, I have dedicated this chapter to the problem. However, because of the scope of this book, which covers all aspects of validation, I cannot furnish enough background to acquire the basics of software and computer validation. This subject matter, many examples, and standard operating procedures are discussed and documented in a book dedicated to this topic.

The type of testing required for the qualification of software and computer systems depends very much on the type and complexity of the software. We can differentiate between three situations, leading into further discussions in this chapter:

Vendor-supplied software and computer hardware are an integral part of an analysis system, for example, of a computerized spectrographic system where the computer is used for instrument control, data acquisition, and data evaluation. Testing of computer functions can be done by processing reference samples.

Several computer systems are interconnected and may also be interfaced to analytical systems. Examples are client/server systems and laboratory information management systems (LIMS).

Software has been developed in the user's laboratory, either as an add-on to a vendor-supplied software package, for example a macro, or as a standalone software package.

Indeed, in practice, computer systems found in analytical laboratories are combinations of categories 1, 2, and 3. I will discuss the validation requirements for each category separately. If combinations of the categories are used, the validation activities can also be combined. Testing is very different among the categories, but the basic procedure is the same for all three:

Define the functions.
Develop test cases and define expected results and acceptance criteria.
Execute the tests.
Compare the results with the expected results and acceptance criteria.
Document everything.
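As a rough illustration of this common procedure, the sketch below defines a hypothetical function under test, test cases with expected results and acceptance criteria, executes the tests, compares the outcomes, and writes a report that documents everything. The amount_from_area() function, its response factor, and the tolerances are assumptions made for the example, not part of the original text.

```python
# Minimal sketch of the common test procedure: define the function under test,
# define test cases with expected results and acceptance criteria, execute the
# tests, compare, and document everything.
import json

def amount_from_area(peak_area, response_factor=0.5):
    """Example function under test: convert a peak area into an amount."""
    return peak_area * response_factor

# Steps 1 and 2: functions, test cases, expected results, acceptance criteria.
test_cases = [
    {"name": "zero area gives zero amount", "input": 0.0, "expected": 0.0, "tolerance": 1e-9},
    {"name": "known area gives known amount", "input": 200.0, "expected": 100.0, "tolerance": 1e-6},
]

# Steps 3 and 4: execute the tests and compare against the expected results.
records = []
for case in test_cases:
    observed = amount_from_area(case["input"])
    passed = abs(observed - case["expected"]) <= case["tolerance"]
    records.append({**case, "observed": observed, "result": "PASS" if passed else "FAIL"})

# Step 5: document everything, for example as a test report that can be archived.
with open("oq_test_report.json", "w") as report:
    json.dump(records, report, indent=2)

print(json.dumps(records, indent=2))
```

The same skeleton applies to all three categories; only the functions, test cases, and acceptance criteria change with the type and complexity of the software.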
