
ERROR & EXCEPTION HANDLING MECHANISM

ERROR HANDLING:

This section describes how errors will be processed for the EBIE Data Foundation application and what actions may be required to resolve them. Error handling can be broadly classified as:

1. Critical Error handling
2. Non-Critical Error handling

Critical Errors are Informatica-related errors such as missing parameters, missing file sets, and transaction errors. Non-Critical Errors are database-related errors such as deadlock or timeout, no table space, MQ connection failure, DB down, and Informatica Server down.

1. Handling Critical Errors: In case of critical errors, the Operations team will be notified to take further action.
Transaction Errors: All the error data is loaded into the error table, and an auto-generated email is sent to the Operations team.
Missing File Sets / Missing Parameters: The Infa workflow/worklet job will abend.
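The critical-error flow above can be sketched as a pre-load check that abends (raises) when parameters or file sets are missing. This is an illustrative sketch only: the parameter names, the file paths, and the check itself are assumptions, not part of the actual EBIE/Informatica setup.

```python
import os

# Illustrative required-parameter names; the real workflow parameters differ.
REQUIRED_PARAMS = {"SRC_SYS_NAME", "LOAD_DATE"}

def check_critical(params: dict, file_set: list) -> None:
    """Abend (raise) on missing parameters or a missing file set,
    mirroring how the Infa workflow/worklet job abends on critical errors."""
    missing = REQUIRED_PARAMS - params.keys()
    if missing:
        raise RuntimeError(f"Missing parameters: {sorted(missing)} - job abended")
    absent = [f for f in file_set if not os.path.exists(f)]
    if absent:
        raise RuntimeError(f"Missing file set: {absent} - job abended")
```

In the real mechanism the abend is raised by the Informatica workflow itself; this sketch only illustrates the decision being made.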

2. Handling Non-Critical Errors: In case of non-critical errors, the job will be subjected to restartability.

By Lokesh Ceeba



EXCEPTION HANDLING

Exception handling during the loading of the SIM database tables will be handled at the individual ETL mapping level. For all the key tables where data discrepancies are expected, the error handling mechanism described below can be put in place. An Expression transformation will be used to validate the data based on the following criteria:

Natural Key/Primary Key Validation: When the natural key/primary key in a source record is unavailable or Null, route the record to the error table.

Mandatory Columns Validation: Check the Not Null condition for all the mandatory columns mentioned in the source interface; if any of them come in as Null, route the record to the error table.

Date Validation: If an incoming source row arrives with just a Date, convert it to DateTime(6); if it is Null, let it pass as long as the Date column is not defined as Not Null in the SIM table.

Customized Validation: Mark the incoming record as an error/reject record based on scenario-specific expected errors. This covers user-defined error identifications such as RIP checks, boundary limits for a given value, etc. (E.g., consider a field which denotes the code type, and assume there are only three valid codes: A, B, C. When a particular record carries a value other than the expected ones, that record should be marked as an error record. This type of scenario may arise when we try to extract the code from free text using the SUBSTR and INSTR functions.)

Data Duplication: The key constraints set at every target instance of the mapping ensure that true duplicate records arriving in that particular data flow are discarded.

The Expression transformation will be present in almost all the ETL mappings as applicable. It is a reusable transformation (ETL logic) which supports effective error handling. The error records identified will be routed to an error table via a Router/Filter transformation.
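The validation criteria above can be sketched as a single record-level check, in the spirit of the Expression transformation. The column names (PK, CUST_ID, SRC_SYS_NAME, LOAD_DATE, CODE) and the mandatory-column list are illustrative assumptions, not the actual SIM interface definitions.

```python
from datetime import datetime

MANDATORY_COLS = ["CUST_ID", "SRC_SYS_NAME"]  # illustrative mandatory columns
VALID_CODES = {"A", "B", "C"}                  # code-type domain from the example

def validate_record(rec: dict) -> list:
    """Return a list of error tags for one record; an empty list means
    the record passes through to the target."""
    errors = []
    # Natural Key / Primary Key validation: Null key -> error table
    if rec.get("PK") in (None, ""):
        errors.append("NULL_PK")
    # Mandatory columns validation: Not Null check on each mandatory column
    for col in MANDATORY_COLS:
        if rec.get(col) in (None, ""):
            errors.append(f"NULL_{col}")
    # Date validation: a bare date string is widened to a datetime
    d = rec.get("LOAD_DATE")
    if isinstance(d, str) and d:
        rec["LOAD_DATE"] = datetime.strptime(d, "%Y-%m-%d")
    # Customized validation: code type must be one of the expected values
    if rec.get("CODE") is not None and rec["CODE"] not in VALID_CODES:
        errors.append("BAD_CODE")
    return errors
```

A record with any non-empty error list would then be routed to the error table by the Router/Filter step.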
While loading an erroneous record into the error table, an error code denoting the type of error will be inserted along with all the other error-related attributes. This error code is based on the predefined criteria set up in the ETL logic. Hence, it is easy to classify an error, and to find its resolution with minimal analysis, based on the error code for a particular erroneous record.
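The error-code tagging described above can be sketched as a small catalogue lookup when the error-table row is assembled. The specific codes (E001, E002, ...) are invented for illustration; the real codes are whatever the ETL logic predefines.

```python
from datetime import datetime

# Illustrative error-code catalogue; the actual codes live in the ETL logic.
ERROR_CODES = {
    "NULL_PK": "E001",
    "NULL_MANDATORY": "E002",
    "BAD_CODE": "E003",
}

def build_error_row(rec: dict, error_tag: str, wrkf_id: int, etl_prcs: str) -> dict:
    """Assemble one error-table row, keyed by a predefined error code."""
    return {
        "INFA_WRKF_ID": wrkf_id,
        "PBLM_DATE": datetime.now(),
        "ETL_PRCS_NAME": etl_prcs,
        "SRC_INPT_IMG": str(rec),  # full input image of the rejected record
        "PRCS_ERR_MSG": ERROR_CODES.get(error_tag, "E999"),  # E999 = unclassified
    }
```

Classifying on a fixed code rather than free-text messages is what keeps the downstream analysis cheap: resolution can be looked up per code.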


This mechanism includes validation of data such as natural keys, null values, etc. A single table which captures all the error records, along with the table name and stream details, will be maintained and populated in each and every mapping wherever error handling is required. The proposed table structure is as follows:

A Sample Error Table Structure: SRC_SIM_ERROR

    INFA_WRKF_ID     NUMBER(15)
    PBLM_DATE        DATE
    SRC_SYS_NAME     VARCHAR(20)
    SRC_INTRFC       VARCHAR(30)
    ETL_PRCS_NAME    VARCHAR(20)
    SRC_INPT_IMG     VARCHAR(4000)
    PRCS_ERR_MSG     VARCHAR(1000)

An Example for PK Validation:

Source System Interface Source:

    MQ_MSG_TS          PK     Last_Update_Timestamp   Data
    2013-01-15 10:20   NULL   2013-01-08 10:20        ABC
    2013-01-15 10:25   NULL   2013-01-15 10:22        XYZ

If the PK of the data is a composite key of the two middle columns, each row would be a new record, as they are unique. But since both rows have a NULL PK, they should be routed to the error table.
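The routing decision in this example can be sketched directly on the two sample rows: any NULL component of the composite key (PK, Last_Update_Timestamp) sends the row to the error table.

```python
# The two sample rows from the example above.
rows = [
    {"MQ_MSG_TS": "2013-01-15 10:20", "PK": None,
     "Last_Update_Timestamp": "2013-01-08 10:20", "Data": "ABC"},
    {"MQ_MSG_TS": "2013-01-15 10:25", "PK": None,
     "Last_Update_Timestamp": "2013-01-15 10:22", "Data": "XYZ"},
]

def route(rows):
    """Split rows on the composite key (PK, Last_Update_Timestamp):
    a NULL in either component routes the row to the error table."""
    target, error = [], []
    for r in rows:
        if r["PK"] is None or r["Last_Update_Timestamp"] is None:
            error.append(r)
        else:
            target.append(r)
    return target, error
```

With the sample data, both rows land in the error list, matching the conclusion above.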

