Challenge
Description
Informatica offers flexible migration techniques that can be adapted to fit the
existing technology and architecture of various sites, rather than proposing a single
fixed migration strategy. The means to migrate work from development to
production depends largely on the repository environment, which is either:
• Standalone PowerCenter, or
• Distributed PowerCenter
This Best Practice describes several migration strategies, outlining the advantages
and disadvantages of each. It also discusses an XML method provided in
PowerCenter 5.1 to support migration in either a Standalone or a Distributed
environment.
Standalone PowerMart/PowerCenter
Workspace segregation can be achieved by creating separate folders for each work
area. For instance, we might build a single data mart for the finance division within a
FINANCE_DEV folder, where mappings are developed and unit tested.
When unit testing has been completed successfully, the mappings are copied into the
FINANCE_QA folder. This process continues until the mappings are integrated into
the production schedule. At that point, new sessions will be created in the
FINANCE_PROD folder, with the database connections adjusted to point to the
production environment.
A common folder can be used for sharing reusable objects such as shared sources,
target definitions, and reusable transformations. If a common folder is used, there
should be one common folder for each environment (i.e., SHARED_DEV,
SHARED_TEST, SHARED_QA, SHARED_PROD).
Copying the mappings into the next stage enables the user to promote the desired
mapping to test, QA, or production at the lowest level of granularity. If the folder
where the mapping is to be copied does not contain the referenced source/target
tables or transformations, then these objects will automatically be copied along with
the mapping. The advantage of this promotion strategy is that individual mappings
can be promoted as soon as they are ready for production. However, because only
one mapping at a time can be copied, promoting a large number of mappings into
production would be very time consuming. Additional time is required to re-create or
copy all sessions from scratch, especially if pre- or post-session scripts are used.
On the initial move to production, if all mappings are completed, the entire
FINANCE_QA folder could be copied and renamed to FINANCE_PROD. With this
approach, it is not necessary to promote all mappings and sessions individually. After
the initial migration, however, mappings will be promoted on a “case-by-case” basis.
1. If using shortcuts, first follow these substeps; if not using shortcuts, skip to step 2:
• Create four common folders, one for each migration stage (COMMON_DEV,
COMMON_TEST, COMMON_QA, COMMON_PROD).
• Copy the shortcut objects into the COMMON_TEST folder.
2. In the PowerCenter Designer, open the appropriate test folder, and drag and
drop the mapping from the development folder into the test folder.
3. If using shortcuts, follow these substeps; if not using shortcuts, skip to step 4:
However, if any of the objects are active, first delete the old shortcut before linking
the output ports.
4. Create or copy a session in the Server Manager to run the mapping (make sure
the mapping exists in the current repository first).
Often, Production loads run late at night, and most Development and Test loads run
during the day, so this does not pose a problem. However, situations do arise where
performance benchmarking with large volumes or other unusual circumstances can
cause test loads to run overnight, contending with the pre-scheduled Production
runs.
Distributed PowerCenter
With a fully distributed approach, separate repositories provide the same function as
the separate folders in the standalone environment described previously. Each
repository is named much like the corresponding folder in the standalone
environment. For instance, in our Finance example we would have four repositories:
FINANCE_DEV, FINANCE_TEST, FINANCE_QA, and FINANCE_PROD.
The mappings are created in the Development repository, moved into the Test
repository, and then eventually into the Production environment. There are three
main techniques to migrate from Development to Production, each involving some
advantages and disadvantages:
• Repository Copy
• Folder Copy
• Object Copy
Repository Copy
The main advantage to this approach is the ability to copy everything at once from
one environment to another, including source and target tables, transformations,
mappings, and sessions. Another advantage is the ability to automate this process
without having users perform this process. The final advantage is that everything
can be moved without breaking/corrupting any of the objects.
There are, however, three distinct disadvantages to the repository copy method. The
first is that everything is moved at once (also an advantage). The trouble with this is
that everything is moved, ready or not. For example, there may be 50 mappings in
QA but only 40 of them are production-ready. The 10 unready mappings are moved
into production along with the 40 production-ready maps, which leads to the second
disadvantage -- namely that maintenance is required to remove any unwanted or
excess objects. Another disadvantage is the need to adjust server variables,
sequences, parameters/variables, database connections, etc. Everything will need to
be set up correctly on the new server that will now host the repository.
To successfully perform the copy, the user must delete the current repository in the
new location. For example, if a user was copying a repository from DEV to TEST,
then the TEST repository must first be deleted using the Delete option in the
Repository Manager to create room for the new repository. Then the Copy Repository
routine must be run.
The Backup and Restore Repository is another simple method of copying an entire
repository. To perform this function, go to the File menu in the Repository Manager
and select Backup Repository. This will create a .REP file containing all repository
information. To restore the repository simply open the Repository Manager on the
destination server and select Restore Repository from the File menu. Select the
created .REP file to automatically restore the repository in the destination server. To
ensure success, be sure to first delete any matching destination repositories, since
the Restore Repository option does not delete the current repository.
PMREP
Using the PMREP commands is essentially the same as the Backup and Restore
Repository method except that it is run from the command line. The PMREP utilities
can be utilized both from the Informatica Server and from any client machines
connected to the server.
The following is a sample of the command syntax used within a batch file to connect
to and back up a repository. Using the code example below as a model, scripts can be
written and run on a daily basis to perform functions such as connect, backup, and
restore:
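The sketch below is illustrative only; the exact pmrep flags vary by PowerCenter
version and by the database hosting the repository, so check the pmrep documentation
before using it. The repository name, user name, password, and backup file path are
placeholder values.

    rem connect_and_backup.bat -- sample repository backup script (placeholder values)
    rem Connect to the repository (additional database connectivity flags
    rem may be required on older PowerCenter versions)
    pmrep connect -r FINANCE_DEV -n repo_user -x repo_password
    rem Write the connected repository to a backup (.rep) file
    pmrep backup -o c:\backups\finance_dev.rep

A script like this can be scheduled nightly (for example, with the Windows AT
scheduler or cron on UNIX) to keep a current repository backup on hand.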
After following one of the above procedures to migrate into Production, follow these
steps to convert the repository to Production:
1. Disable or delete sessions that schedule mappings that are not ready for Production:
• Disable the sessions in the Server Manager by opening the session properties,
and then clearing the Enable checkbox under the General tab, or
• Delete the sessions in the Server Manager and the mappings in the Designer.
2. In the Server Manager, open the session properties, and from the General tab
make the required changes to the pre- and post-session scripts.
Folder Copy
Copying an entire folder allows you to quickly promote all of the objects in the
Development folder to Test, and so forth. All source and target tables, reusable
transformations, mappings, and sessions are promoted at once. Therefore,
everything in the folder must be ready to migrate forward. If certain mappings are
not ready, then after the folder is copied, developers (or the Repository
Administrator) must manually delete these mappings from the new folder.
2. Drag and drop the folder onto the production repository icon within the
Navigator tree structure. (To copy the entire folder, drag and drop the folder icon
just under the repository level.)
3. Follow the Copy Folder Wizard steps. If a folder with that name already exists,
it must be renamed.
4. Point the folder to the correct shared folder if one is being used:
• In the Server Manager, open the session properties, and from the General tab
make the required changes to the pre- and post-session scripts.
Object Copy
Copying mappings into the next stage within a networked environment has many of
the same advantages and disadvantages as in the standalone environment, but the
process of handling shortcuts is simplified in the networked environment. For
additional information, see the previous description of Object Copy for the
standalone environment.
To copy a mapping from one repository to the next:
1. If using shortcuts, follow these substeps:
• In each of the dedicated repositories, create a common folder with the exact
same name and case.
• Copy the shortcuts into the common folder in Production, making sure each
shortcut keeps the exact same name.
2. Copy the mapping:
• In the Designer, connect to both the QA and Production repositories and open
the appropriate folders in each.
• Drag and drop the mapping from QA into Production.
3. Create or copy a session in the Server Manager to run the mapping (make
sure the mapping exists in the current repository first).
Recommendations
For migrating from Development into Test, Informatica recommends using the
Object Copy method. This method gives you total granular control over the objects
that are being moved. It ensures that the latest development maps can be moved
over manually as they are completed. For recommendations on performing this copy
procedure correctly, see the steps outlined in the Object Copy section.
The XML Object Copy Process works in a manner very similar to the Repository Copy
backup and restore method, as it allows you to copy sources, targets, reusable
transformations, mappings, and sessions. Once the XML file has been created, that
XML file can be changed with a text editor to allow more flexibility. For example, if
you had to copy one session many times, you would export that session to an XML
file. Then, you could edit that file to find everything within the <Session> tag, copy
that text, and paste that text within the XML file. You would then change the name
of the session you just pasted to be unique. When you import that XML file back
into your folder, two sessions will be created. The following demonstrates the
import/export functionality:
3. Sessions can be exported and imported into the Server Manager in the same
way (the corresponding mappings must exist for this to work).
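As a rough illustration of the duplication technique described above (the element and
attribute layout here is schematic, not the exact export format, which varies by
PowerCenter version), an exported session element can be copied and renamed before
re-importing:

    <SESSION NAME="s_load_customer" ...>
        ... session attributes and settings ...
    </SESSION>
    <SESSION NAME="s_load_customer_copy" ...>
        ... same content, with only the NAME changed ...
    </SESSION>

On import, both s_load_customer and s_load_customer_copy (hypothetical names) are
created in the target folder.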
Challenge
Using the PowerCenter product suite most effectively to develop, name, and
document components of the analytic solution. While the most effective use of
PowerCenter depends on the specific situation, this Best Practice addresses some
questions that are commonly raised by project teams. It provides answers in a
number of areas, including Scheduling, Backup Strategies, Server Administration,
and Metadata. Refer to the product guides supplied with PowerCenter for additional
information.
Description
The following pages summarize some of the questions that typically arise during
development and suggest potential resolutions.
Q: How does source format affect performance? (i.e., is it more efficient to source
from a flat file rather than a database?)
In general, a flat file that is located on the server machine loads faster than a
database located on the server machine. Fixed-width files are faster than
delimited files because delimited files require extra parsing. However, if there
is an intent to perform intricate transformations before loading to the target, it
may be advisable to first load the flat file into a relational database, which
allows the PowerCenter mappings to access the data in an optimized fashion
by using filters and custom SQL SELECTs where appropriate.
Q: What are some considerations when designing the mapping? (i.e. what is the
impact of having multiple targets populated by a single map?)
Q: What are some considerations for determining how many objects and
transformations to include in a single mapping?
There are several items to consider when building a mapping. The business
requirement is always the first consideration, regardless of the number of
objects it takes to fulfill the requirement. The most expensive use of the DTM
is passing unnecessary data through the mapping. It is best to use filters as
early as possible in the mapping to remove rows of data that are not needed.
This is the SQL equivalent of the WHERE clause. Using the filter condition in
the Source Qualifier to filter out the rows at the database level is a good way
to increase the performance of the mapping.
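As a simple illustration (the table and column names here are hypothetical), a
condition such as the following placed in the Source Qualifier's Source Filter
property is appended to the generated SELECT as a WHERE clause, so unneeded rows
never leave the database:

    ORDERS.ORDER_STATUS = 'OPEN' AND ORDERS.ORDER_DATE >= TO_DATE('01/01/1999', 'MM/DD/YYYY')

Achieving the same result with a Filter transformation placed later in the mapping
would work, but every row would first have to be read and passed through the DTM.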
Q: What documentation is available for the error codes that appear within the error
log files?
Scheduling Techniques
Using a batch to group logical sessions minimizes the number of objects that
must be managed to successfully load the warehouse. For example, a
hundred individual sessions can be logically grouped into twenty batches. The
Operations group can then work with twenty batches to load the warehouse,
which simplifies the operations tasks associated with loading the targets.
Other batch options, such as nesting batches within batches, can further
reduce the complexity of loading the warehouse. This capability allows for the
creation of very complex and flexible batch streams without the use of a
third-party scheduler.
Q: Assuming a batch failure, does PowerCenter allow restart from the point of
failure?
Yes. When a session or sessions in a batch fail, you can perform recovery to
complete the batch. The steps to take vary depending on the type of batch:
If the batch is sequential, you can recover data from the session that failed
and run the remaining sessions in the batch. If a session within a concurrent
batch fails, but the rest of the sessions complete successfully, you can
recover data from the failed session targets to complete the batch. However,
if all sessions in a concurrent batch fail, you might want to truncate all targets
and run the batch again.
The number of sessions that can run at one time depends on the number of
processors available on the server. The load manager is always running as a
process. As a general rule, a session will be compute-bound, meaning its
throughput is limited by the availability of CPU cycles. Most sessions are
transformation intensive, so the DTM always runs. Also, some sessions
require more I/O, so they use less processor time. Generally, a session needs
about 120 percent of a processor for the DTM, reader, and writer in total.
• One session per processor is about right; you can run more, but all
sessions will slow slightly.
• Remember that other processes may also run on the PowerCenter
server machine; overloading a production machine will slow overall
performance.
Each session creates three processes: the Reader, Writer, and DTM.
At this point, you should have a good idea of what is left for concurrent
sessions. It is important to arrange the production run to maximize use of this
memory. Remember to account for sessions with large memory
requirements; you may be able to run only one large session, or several small
sessions concurrently.
Email Variable   Description
%s               Session name
%e               Session status
%a<filename>     Attaches the named file. The file must be local to the Informatica
                 Server. The following are valid filenames: %a<c:\data\sales.txt>
                 or %a</users/john/data/sales.txt>
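For example, a post-session e-mail text configured along these lines (the log file
path is a placeholder) uses the variables above to report on the run:

    Session %s finished with status %e. The session log is attached: %a<c:\pmserver\logs\s_load_sales.log>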
The PowerCenter Server on UNIX uses rmail to send post-session e-mail. The
repository user who starts the PowerCenter server must have the rmail tool
installed in the path in order to send e-mail.
1. Login to the UNIX system as the PowerCenter user who starts the
PowerCenter Server.
2. Type rmail <fully qualified email address> at the prompt and press Enter.
3. To indicate the end of the message, type a period (.) on a separate line and
press Enter (or type Control-D).
4. You should receive a blank e-mail from the PowerCenter user's e-mail
account. If not, locate the directory where rmail resides and add that
directory to the path.
5. When you have verified that rmail is installed correctly, you are ready to
send post-session e-mail.
Session complete.
Session name: sInstrTest
Total Rows Loaded = 1
Total Rows Rejected = 0
Completed
No errors encountered.
Start Time: Tue Sep 14 12:26:31 1999
Completion Time: Tue Sep 14 12:26:41 1999
Elapsed time: 0:00:10 (h:m:s)
This information, or a subset, can also be sent to any text pager that accepts
e-mail.
Q: Can individual objects within a repository be restored from the back-up or from a
prior version?
Server Administration
Q: What built-in functions does PowerCenter provide to notify someone in the event
that the server goes down, or some other significant event occurs?
The pmprocs utility, which is available for UNIX systems only, shows
the currently executing PowerCenter processes.
Q: What cleanup (if any) should be performed after a UNIX server crash? Or after an
Oracle instance crash?
If the UNIX server crashes, you should first check to see if the
Repository Database is able to come back up successfully. If this is the
case, then you should try to start the PowerCenter server. Use the
pmserver.err log to check if the server has started correctly. You can
also use ps -ef | grep pmserver to see if the server process (the Load
Manager) is running.
Metadata
With PowerCenter, you can enter description information for all repository
objects (sources, targets, transformations, etc.), but the amount of metadata
that you enter should be determined by the business requirements. You can
also drill down to the column level and give descriptions of the columns in a
table if necessary. All information about column size and scale, datatypes,
and primary keys are stored in the repository.
Informatica does not recommend accessing the repository directly, even for
SELECT access. Rather, views have been created to provide access to the
metadata stored in the repository.
Challenge
Accuracy is one of the biggest obstacles blocking the success of many data
warehousing projects. If users discover data inconsistencies, the user community
may lose faith in the entire warehouse’s data. However, it is not unusual to discover
that as many as half the records in a database contain some type of information that
is incomplete, inconsistent, or incorrect. The challenge is therefore to cleanse data
online, at the point of entry into the data warehouse or operational data store (ODS),
to ensure that the warehouse provides consistent and accurate data for business
decision making.
Description
Informatica has several partners in the data cleansing arena. The partners
and respective tools include the following:
DataMentors - Provides tools that are run before the data extraction and
load process to clean source data. Available tools are:
• DMDataFuse™ - a data cleansing and householding system with the
power to accurately standardize and match data.
• DMValiData™ - an effective data analysis system that profiles and
identifies inconsistencies between data and metadata.
• DMUtils - a powerful non-compiled scripting language that operates on
flat ASCII or delimited files. It is primarily used as a query and
reporting tool. It also provides a way to reformat and summarize files.
Integration Examples
The following sections describe how to integrate two of the tools with
PowerCenter.
FirstLogic – ACE
The following graphic illustrates a high level flow diagram of the data
cleansing process.
ACE Processing
There are four ACE transformations available to choose from. The three base
transformations parse, standardize, and append address components using Firstlogic's
ACE Library; the choice among them depends on the input record layout. The fourth
transformation can provide optional components and must be attached to one of the
three base transformations.
TrueName Process
Matching Process
All matching routines are predefined and, if necessary, the configuration files
can be accessed for additional tuning. The five predefined matching scenarios
include: individual, family, household (the only difference between household
and family is that household does not match on last name), firm individual, and
firm. Keep in mind that the matching does not do any data parsing; this must
be accomplished prior to using this transformation. As with ACE and
TrueName, error and status codes are reported.
Trillium
Each record that passes through the Trillium Parser external module is first
parsed and then, optionally, postal geocoded and census geocoded. The level
of geocoding performed is determined by a user-definable initialization
property.
Challenge
Description
• BW uses the pull model. BW must request data from an external source
system, which in this case is PowerCenter, before the source system can send
data to BW. PowerCenter must first use PCISBW to register with BW, using
SAP's Remote Function Call (RFC) protocol.
• External source systems provide transfer structures to BW. Data is moved
and transformed within BW from one or more transfer structures to a
communication structure according to transfer rules. Both transfer structures
and transfer rules must be defined in BW prior to use. Normally this is done
from the BW side. An InfoCube is updated by one communication structure as
defined by the update rules.
• Staging BAPIs (an API published and supported by SAP) are the native
interface for communicating with BW. Three PowerCenter components use this
API. PowerCenter Designer uses the Staging BAPIs to import metadata for the
target transfer structures. PCISBW uses the Staging BAPIs to register with
BW and receive requests to run sessions. PowerCenter Server uses the
Staging BAPIs to perform metadata verification and load data into BW.
• Programs communicating with BW use the SAP standard saprfc.ini file to
communicate with BW. The saprfc.ini file is similar to the tnsnames file in
Oracle or the interface file in Sybase. The PowerCenter Designer reads
metadata from BW and the PowerCenter Server writes data to BW.
The PCISBW server must be installed in the same directory as the PowerCenter
Server. On NT you can have only one PCISBW. Informatica recommends installing
PCISBW client tools in the same directory as the PowerCenter Client. For more
details on installation and configuration refer to the Installation Guide.
The saprfc.ini file is required for PowerCenter and PCISBW to connect to BW. You
need the same saprfc.ini on both the PowerCenter Server and the PowerCenter Client.
To start PCISBW, the command takes the following form:
pmbwserver <DEST_entry_for_R_type> <repo_user> <repo_passwd> <port_for_PowerCenter_Server>
Note: The & sign appended to the start command does not work when you start
PCISBW in a Telnet session.
5. Build mappings
Import the InfoSource into the PowerCenter Warehouse Designer and build a mapping
using the InfoSource as a target.
6. Configure the database connection
Use the DEST entry for the A type in saprfc.ini as the connect string in the
PowerCenter Server Manager.
7. Load data
Create a session in PowerCenter and an InfoPackage in BW. You can only start a
session from BW (via the Scheduler in the Administrator Workbench of BW). Before you
can start a session, you have to enter the session name into BW. To do this, open the
Scheduler dialog box, go to the "Selection 3rd Party" tab and click the "Selection
Refresh" button (the symbol is a recycling sign), which then prompts you for the
session name. To start the session, go to the last tab.
PowerCenter uses two types of entries in the saprfc.ini file to connect to BW: a
Type A entry, which the PowerCenter Designer and Server use to connect to the BW
application server, and a Type R entry, which PCISBW uses to register with BW as an
RFC server program.
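A minimal saprfc.ini sketch is shown below. The destination names, host names,
system number, program ID, and gateway values are placeholders that must match your
BW system and the entries referenced by the Designer, the Server Manager connect
string, and pmbwserver:

    DEST=BWA
    TYPE=A
    ASHOST=bw_app_server_host
    SYSNR=00

    DEST=BWR
    TYPE=R
    PROGID=PID_PCISBW
    GWHOST=bw_gateway_host
    GWSERV=sapgw00

The PROGID value must match the program ID registered for the third-party source
system in BW.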
Do not use Notepad to edit this file. Notepad can corrupt the saprfc.ini file.
Set the RFC_INI environment variable on all Windows NT, Windows 2000, and Windows
95/98 machines to point to the saprfc.ini file. RFC_INI is used to locate the
saprfc.ini file.
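For example, if saprfc.ini is stored under c:\saprfc (an illustrative path), the
variable can be set in a command prompt as shown below; in practice it is usually
defined as a system environment variable so that it persists:

    set RFC_INI=c:\saprfc\saprfc.ini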
Error Messages
PCISBW writes error messages to the screen. In some cases, PCISBW generates a
file with the extension *.trc in the PowerCenter Server directory. Look for error
messages there.
Challenge
Accessing important, but difficult to deal with, legacy data sources residing on
mainframes and AS/400 systems, without having to write complex extract programs.
Description
After the above information has been imported and saved in the datamaps,
PowerCenter uses SQL to access the data (which it sees as relational tables)
at runtime.
1. Perform the Windows install. This includes entering the Windows license
key, updating the configuration file (dbmover.cfg) to add a node entry for
communication between the client and the mainframe or AS/400, adding the
PowerConnect ODBC driver and setting up a client ODBC DSN.
Challenge
Description
MQSeries Architecture
MQSeries architecture has three parts: (1) Queue Manager, (2) Message Queue and
(3) MQSeries Message.
Queue Manager
In order for PowerCenter to extract from a queue, the message data in the queue must
be in COBOL, XML, flat file, or binary format. When extracting from a queue, you need
to use one of two Source Qualifiers: the MQ Source Qualifier (MQ SQ) or an Associated
Source Qualifier (SQ).
• Select Associated Source Qualifier - this is necessary if the file is not binary.
• Set Tracing Level - verbose, normal, etc.
• Set Message Data Size – default 64,000; used for binary.
• Filter Data – set filter conditions to filter messages using message header
ports, control end of file, control incremental extraction, and control syncpoint
queue clean-up.
• Use mapping parameters and variables
Loading to a Queue
There are two types of MQ Targets that can be used in a mapping: Static MQ Targets
and Dynamic MQ Targets. Only one type of MQ Target can be used in a single
mapping.
• Static MQ Targets – Do not load data to the message header fields.
Use the target definition specific to the format of the message data
(i.e., flat file, XML, COBOL). Design the mapping as if it were not using
MQSeries, then make all adjustments in the session when using MQSeries.
• Dynamic – Used for binary targets only and when loading data to a message
header. Note that certain message headers in a MQSeries message require a
predefined set of values assigned by IBM.
After you create mappings in the Designer, you can create and configure sessions in
the Server Manager. You can create a session with an MQSeries mapping using the
Session Wizard in the Server Manager.
Note that there are two pages on the Source Options dialog: XML and MQSeries. You
can alternate between the two pages to set configurations for each.
For Static MQSeries Targets, select File Target type from the list. When the target is
an XML file or XML message data for a target message queue, the target type is
automatically set to XML.
• On the MQSeries page, select the MQ connection to use for the source
message queue, and click OK.
• Be sure to select the MQ checkbox in Target Options for the Associated file
type. Once this is done, click Edit Object Properties and enter:
• The format of the message data in the target queue (e.g., MQSTR).
Appendix Information
Challenge
Description
• Run the setup program and select PowerConnect for PeopleSoft client
from the setup list.
• Client installation wizard points to the PowerCenter Client directory for
the driver installation as a default, with the option to change the
location.
Importing Sources
PowerConnect for PeopleSoft extracts source data from two types of PeopleSoft
objects:
• Records
• Trees
PeopleSoft Records
When you import a PeopleSoft record, the Designer imports both the PeopleSoft
source name and the underlying database table name. The Designer uses the PS
source name as the name of the source definition. The PowerCenter Server uses the
underlying database table name to extract source data.
PeopleSoft Trees
Types of Trees
The Tree Manager enables you to create many kinds of trees for a variety of
purposes, but all trees fall into a few major types. PowerConnect for PeopleSoft
extracts data from the following PeopleSoft tree structure types:
Detail Trees: In the most basic type of tree, the "lowest" level is the level farthest
to the right in the Tree Manager window, and holds detail values. The next level is
made up of tree nodes that group together the detail values, and each subsequent
level defines a higher level grouping of the tree nodes. This kind of tree is called a
detail tree. PowerConnect for PeopleSoft extracts data from loose-level and
strict-level detail trees with static detail ranges.
Winter Trees: PowerConnect for PeopleSoft extracts data from loose-level and
strict-level node-oriented trees. Winter trees contain no detail ranges.
Summary Trees: In a summary tree, the detail values aren't values from a
database field, but tree nodes from an existing detail tree. The tree groups the nodes
from a specific level in the detail tree differently from the higher levels in the detail
tree itself. PowerConnect for PeopleSoft extracts data from loose-level and
strict-level summary trees.
Node Oriented trees: In a node-oriented tree, the tree nodes represent the data
values from the database field. The Departmental Security tree in PeopleSoft HRMS
is a good example of a node-oriented tree.
Query access trees are used to maintain security within the PeopleSoft
implementation. PeopleSoft records are grouped into logical groups, which are
represented as nodes on the tree. In this way, a query written by a logged-in
user within a group can only access the rows that belong to the records assigned
to the groups that user has access to. Query trees have no branches, but child
nodes can exist.
Flattening trees
When you extract data from a PeopleSoft tree, the PowerCenter Server
denormalizes the tree structure. It uses either of the following methods to
denormalize trees.
To access PeopleSoft metadata and data, the PowerCenter Client and Server require a
database username and password. You can either create separate users for
metadata and source extraction or use one for both. Extracting data
from PeopleSoft is a three-step process: import the source definition, create a
mapping, and create and run a session.
1. Import the Source Definition
Before extracting data from a source, you need to import its source definition. You
need a user with read access to the PeopleSoft system in order to access the
PeopleSoft physical and metadata tables via an ODBC connection. To import a
PeopleSoft source definition, create an ODBC data source for each PeopleSoft system
you want to access.
When creating an ODBC data source, configure the data source to connect to the
underlying database for the PeopleSoft system. For example, if the PeopleSoft system
resides on an Oracle database, configure an ODBC data source to connect to that
Oracle database.
• System Catalog Tables store physical attributes of tables and views, which
your database management system uses to optimize performance.
• PeopleTools Tables contain information that you define using PeopleTools.
• Application Data Tables house the actual data your users will enter and
access through PeopleSoft application windows and panels.
Importing Records
You can import records from two tabs in the Import from PeopleSoft dialog box:
• Records tab.
• Panels tab.
Note: PowerConnect for PeopleSoft works with all versions of PeopleSoft systems. In
PeopleSoft 8, Panels are referred to as Pages. PowerConnect for PeopleSoft uses the
Panels tab to import PeopleSoft 8 Pages.
2. Create a Mapping
After you import or create the source definition, you connect it to an ERP Source
Qualifier, which represents the records the PowerCenter Server queries from a
PeopleSoft source. An ERP Source Qualifier is used for all ERP sources, such as SAP
and PeopleSoft. Like the standard Source Qualifier, an ERP Source Qualifier allows
you to use user-defined joins and filters.
When using the default join option between two PeopleSoft tables, the generated query
automatically appends a PS_ prefix to the PeopleSoft table names. However, certain
tables are stored in the database without that prefix, so an override with a
user-defined join is needed to correct this, as illustrated below.
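A rough sketch of such a user-defined join follows; the record and column names are
illustrative only, and the actual names come from your own PeopleSoft system. Here
the application record JOB resolves to the physical table PS_JOB (with the PS_
prefix), while the PeopleTools table PSXLATITEM is stored without the prefix, so the
override must name it as it actually exists in the database:

    PS_JOB.ACTION = PSXLATITEM.FIELDVALUE AND PSXLATITEM.FIELDNAME = 'ACTION'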
Take care when using user-defined primary key-foreign key relationships with trees,
since changes made within Tree Manager may alter such relationships. Changes to the
tree also change how the tables that make up the tree are denormalized, so simply
altering the primary key-foreign key relationship within the Source Analyzer can be
dangerous; it is advisable to re-import the whole tree instead.
3. Create and Run a Session
You need a valid mapping, a registered PowerCenter Server, and a Server Manager
database connection to create a session. When you configure the session, select
PeopleSoft as the source database type and then select a PeopleSoft database
connection as the source database. If the database user is not the owner of the source
tables, enter the table owner name in the session as a source table prefix.
PowerCenter uses SQL to extract data directly from the physical database tables,
performing code page translations when necessary. If you need to extract large
amount of source data, you can partition the sources to improve session
performance.
Note: You cannot partition an ERP Source Qualifier for PeopleSoft when it is
connected to or associated with a PeopleSoft tree.
Challenge
Understanding how to install PowerConnect for SAP R/3, extract data from SAP R/3,
build mappings, and run sessions to load SAP R/3 data into a data warehouse.
Description
PowerConnect for SAP R/3 provides the ability to integrate SAP R/3 data into
data warehouses, analytic applications, and other applications, all without
writing ABAP code. PowerConnect extracts data from transparent tables, pool
tables, cluster tables, hierarchies (uniform and non-uniform), SAP IDocs, and
ABAP function modules.
The database server stores the physical tables in the R/3 system, while the
application server stores the logical tables. A transparent table definition on
the application server is represented by a single physical table on the
database server. Pool and cluster tables are logical definitions on the
application server that do not have a one-to-one relationship with a physical
table on the database server.
Communication Interfaces
Note: If the ABAP programs are installed in the $TMP class, they cannot
be transported from development to production.
Extraction Process
R/3 source definitions can be imported from the logical tables using the RFC
protocol. Extracting data from R/3 is a four-step process: import source
definitions, create a mapping, generate and install the ABAP program, and create
and run a session.
1. Import source definitions.
The Designer connects to the R/3 application server using RFC and calls
a function in the R/3 system to import source definitions.
2. Create a mapping.
When creating a mapping using an R/3 source definition, you must use an
ERP Source Qualifier. In the ERP Source Qualifier, you can customize
properties of the ABAP program that the R/3 server uses to extract source
data. You can also use joins, filters, ABAP program variables, ABAP code
blocks, and SAP functions to customize the ABAP program.
• File mode. Extract data to file. The PowerCenter Server accesses the
file through FTP or NFS mount.
• Stream Mode. Extract data to buffers. The PowerCenter server
accesses the buffers through CPI-C, the SAP protocol for program-to-
program communication.
The R/3 system needs development objects and user profiles established to
communicate with PowerCenter. Preparing R/3 for integration involves the
following tasks:
For PowerCenter
The PowerCenter Server and Client need drivers and connection files to
communicate with SAP R/3. Preparing PowerCenter for integration involves
the following tasks:
The system number and port numbers are provided by the BASIS
administrator.
Configure database connections in the Server Manager to access the SAP R/3
system when running a session.
Challenge
Data warehousing incorporates large volumes of data, making the process of loading
into the warehouse without compromising its functionality increasingly difficult. The
goal is to create a load strategy that will minimize downtime for the warehouse and
allow quick and robust data management.
Description
As time windows shrink and data volumes increase, it is important to understand the
impact of a suitable incremental load strategy. The design should allow data to be
incrementally added to the data warehouse with minimal impact to the overall
system. The following pages describe several possible load strategies.
Considerations
Source Analysis
• Delta Records - Records supplied by the source system include only new or
changed records. In this scenario, all records are generally inserted or
updated into the data warehouse.
• Record Indicator or Flags - Records include columns that specify the
intention of the record to be populated into the warehouse. Records can be
selected based upon this flag to allow for inserts, updates, and deletes.
• Date stamped data - Data is organized by timestamps. Data will be loaded
into the warehouse based upon the last processing date or the effective date
range.
Once the sources are identified, it is necessary to determine which records will be
entered into the warehouse and how. Here are some considerations:
• Compare with the target table. Determine if the record exists in the target
table. If the record does not exist, insert the record as a new row. If it does
exist, determine if the record needs to be updated, inserted as a new record,
or removed (deleted from target or filtered out and not added to the
warehouse). This occurs in cases of delta loads, timestamps, keys or
surrogate keys.
• Record indicators. Record indicators can be beneficial when lookups into the
target are not necessary. Take care to ensure that the record exists for
updates or deletes or the record can be successfully inserted. More design
effort may be needed to manage errors in these situations.
1. Joins of Sources to Targets. Records are directly joined to the target using Source
Qualifier join conditions or using joiner transformations after the source qualifiers
(for heterogeneous sources). When using joiner transformations, take care to ensure
the data volumes are manageable.
2. Lookup on target. Using the Lookup transformation, look up the keys or critical
columns in the target relational database. Keep in mind the caching and indexing
possibilities.
3. Load table log. Generate a log table of records that have already been inserted
into the target system. You can use this table for comparison with lookups or joins,
depending on the need and volume. For example, store keys in a separate table
and compare source records against this log table to determine the load strategy,
as sketched below.
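A minimal sketch of the comparison, assuming a hypothetical source table SRC_ORDERS
keyed on ORDER_ID and a load-log table ETL_LOAD_LOG that records the keys already
loaded:

    SELECT s.*
    FROM   SRC_ORDERS s
    WHERE  NOT EXISTS (SELECT 1 FROM ETL_LOAD_LOG l WHERE l.ORDER_ID = s.ORDER_ID)

The same comparison can be expressed in a mapping as a Lookup transformation against
ETL_LOAD_LOG, with unmatched rows routed to an insert path.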
The simplest method of incremental loads is from flat files or a database in which all
records will be loaded. This particular strategy requires bulk loads into the
warehouse, with no overhead on processing of the sources or sorting the source
records.
Loading Method
Data can be loaded directly from these locations into the data warehouse. There is
no additional overhead produced in moving these sources into the warehouse.
This method involves data that has been stamped using effective dates or
sequences. The incremental load can be determined by dates greater than the
previous load date or data that has an effective key greater than the last key
processed.
Loading Method
With relational sources, the records can be selected based on this effective date so
that only those records past a certain date are loaded into the warehouse. Views can
also be created to perform the selection criteria so that the processing does not
have to be incorporated into the mappings; however, keeping the load strategy in the
ETL component leaves it more flexible, controllable by the ETL developers, and
visible in the metadata.
Non-relational data can be filtered as records are loaded based upon the effective
dates or sequenced keys. A router transformation or a filter can be placed after the
source qualifier to remove old records.
To compare the effective dates, you can use mapping variables to provide the
previous date processed. The alternative is to use a control table to store the date
and update the control table after each load, as sketched below.
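A rough sketch of the control-table alternative, assuming a hypothetical control
table ETL_CONTROL with one row per subject area (Oracle-style SYSDATE shown); the
selection picks up rows newer than the stored date, and the control table is updated
after a successful load:

    -- selection used as the source filter or view definition
    SRC_ORDERS.DATE_ENTERED > (SELECT LAST_LOAD_DATE FROM ETL_CONTROL WHERE SUBJECT_AREA = 'ORDERS')

    -- post-load maintenance of the control table
    UPDATE ETL_CONTROL SET LAST_LOAD_DATE = SYSDATE WHERE SUBJECT_AREA = 'ORDERS'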
For detailed instruction on how to select dates, refer to Best Practice: Variable and
Mapping Parameters.
Data that is uniquely identified by keys can be selected based upon selection criteria.
For example, records that contain key information such as primary keys, alternate
keys etc can be used to determine if they have already been entered into the data
warehouse. If they exist, you can also check to see if you need to update these
records or discard the source record.
Load Method
It may be possible to do a join with the target tables in which new data can be
selected and loaded into the target. It may also be feasible to lookup in the target to
see if the data exists or not.
Loading directly into the target is possible when the data will be bulk loaded. The
mapping will be responsible for error control, recovery and update strategy.
Load into Flat Files and Bulk Load using an External Loader
The data will be loaded into a mirror database to avoid down time of the active data
warehouse. After data has been loaded, the databases are switched, making the
mirror the active database and the active as the mirror.
In the Informatica Designer, with the Mapping Designer open, go to the menu and
select Mappings, then select Parameters and Variables.
Name the variable and, in this case, make the variable a date/time. For the
Aggregation option, select MAX.
In the same screen, state your initial value. This is the date at which the load should
start. The date must follow one of these formats:
• MM/DD/RR
• MM/DD/RR HH24:MI:SS
• MM/DD/YYYY
• MM/DD/YYYY HH24:MI:SS
In the Expression transformation, create a variable port and use the
SETMAXVARIABLE variable function as follows:
SETMAXVARIABLE($$INCREMENT_DATE,CREATE_DATE)
CREATE_DATE is the date for which you would like to store the maximum value.
The variable functions, including SETMAXVARIABLE, can be used in the following
transformation objects:
• Expression
• Filter
• Router
• Update Strategy
The variable constantly holds (per row) the max value between source and variable.
So if one row comes through with 9/1/2001, then the variable gets that value. If all
subsequent rows are LESS than that, then 9/1/2001 is preserved.
After the mapping completes, that is the PERSISTENT value stored in the repository
for the next run of your session. You can view the value of the mapping variable in
the session log file.
The value of using a mapping variable for incremental loading is that it allows the
session to process only the new rows of data. No table is needed to store the
max(date), since the variable takes care of it.
Challenge
Description
5. Facilitate reuse.
7. When DTM bottlenecks are identified and session optimization has not
helped, use tracing levels to identify which transformation is causing the
bottleneck (use the Test Load option in session properties).
• When your source is large, cache lookup table columns for those
lookup tables of 500,000 rows or less. This typically improves
performance by 10-20%.
• The rule of thumb is not to cache any table over 500,000 rows. This is
only true if the standard row byte count is 1,024 or less. If the row
byte count is more than 1,024, then the 500,000-row threshold has to be
adjusted down as the number of bytes increases (i.e., a 2,048-byte row
cuts the threshold roughly in half, to about 250,000 rows).
• Using flat files located on the server machine loads faster than using a
database located on the server machine.
• Fixed-width files are faster to load than delimited files because
delimited files require extra parsing.
• If processing intricate transformations, consider first loading the
source flat file into a relational database, which allows the
PowerCenter mappings to access the data in an optimized fashion by
using filters and custom SQL SELECTs where appropriate.
15. If working with data that is not able to return sorted data (e.g., Web
Logs) consider using the Sorter Advanced External Procedure.
5. Do not reuse mapplets if you only need one or two transformations of the
mapplet while all other calculated ports and transformations are obsolete.
6. Source data for a mapplet can originate from one of two places: sources within
the mapplet, or sources outside the mapplet that feed the mapplet through Input
transformation ports.
7. To pass data out of a mapplet, create mapplet output ports. Each port in
an Output transformation connected to another transformation in the mapplet
becomes a mapplet output port.
Challenge
Using Informatica’s suite of metadata tools effectively in the design of the end-user
analysis application.
Description
The levels of metadata available in the Informatica tool suite are quite extensive.
The amount of metadata that is entered is dependent on the business requirements.
Description information can be entered for all repository objects, sources, targets,
transformations, etc. You also can drill down to the column level and give
descriptions of the columns in a table if necessary. Also, all information about column
size and scale, data types, and primary keys are stored in the repository. The
decision on how much metadata to create is often driven by project timelines. While
it may be beneficial for a developer to enter detailed descriptions of each column,
expression, variable, etc., it will also require a substantial amount of time to do so.
Therefore, this decision should be made on the basis of how much metadata will be
required by the systems that use the metadata.
Informatica offers two recommended ways for accessing the repository metadata.
Metadata Reporter
The need for the Informatica Metadata Reporter arose from the number of clients
requesting custom and complete metadata reports from their repositories. The
Metadata Reporter allows report access to every Informatica object stored in the
repository. The architecture of the Metadata Reporter is web-based, with an Internet
browser front end and a servlet-based reporting engine that runs on a web server.
(Note: The Metadata Reporter will not run directly on Microsoft IIS because IIS does
not directly support servlets.)
The Metadata Reporter is accessible from any computer with a browser that has
access to the web server where the Metadata Reporter is installed, even without the
other Informatica Client tools being installed on that computer. The Metadata
Reporter connects to your Informatica repository using JDBC drivers. Make sure the
proper JDBC drivers are installed for your database platform.
(Note: You can also use the JDBC-ODBC bridge to connect to the repository, e.g.,
jdbc:odbc:<data_source_name>.)
• The Metadata Reporter allows you to go easily from one report to another.
The name of any metadata object that displays on a report links to an
associated report. As you view a report, you can generate reports for objects
on which you need more information.
The Metadata Reporter provides 15 standard reports that can be customized with the
use of parameters and wildcards. The reports are as follows:
• Batch Report
• Executed Session Report
• Executed Session Report by Date
• Invalid Mappings Report
For a detailed description of how to run these reports, consult the Metadata Reporter
Guide included in your PowerCenter Documentation.
The MX architecture was intended primarily for Business Intelligence (BI) vendors
who wanted to create a PowerCenter-based data warehouse and then display the
warehouse metadata through their own products. The result was a set of relational
views that encapsulated the underlying repository tables while exposing the
metadata in several categories that were more suitable for external parties. Today,
Informatica and several key vendors, including Brio, Business Objects, Cognos, and
MicroStrategy, are effectively using the MX views to report and query the Informatica
metadata.
Ability to write (push) metadata into the repository. Because of the limitations
associated with relational views, MX could not be used for writing or updating
metadata in the Informatica repository. As a result, such tasks could only be
accomplished by directly manipulating the repository’s relational tables. The MX2
interfaces provide metadata write capabilities along with the appropriate verification
and validation features to ensure the integrity of the metadata in the repository.
Integration with third-party tools. MX2 offers the object-based interfaces needed
to develop more sophisticated procedural programs that can tightly integrate the
repository with the third-party data warehouse modeling and query/reporting tools.
MX2 provides a set of COM-based programming interfaces on top of the C++ object
model used by the client tools to access and manipulate the underlying repository.
This architecture not only encapsulates the physical repository structure, but also
leverages the existing C++ object model to provide an open, extensible API based
on the standard COM protocol. MX2 can be automatically installed on Windows 95,
98, or Windows NT using the install program provided with its SDK. After the
successful installation of MX2, its interfaces are automatically registered and
available to any software through standard COM programming techniques. The MX2
COM APIs support the PowerCenter XML Import/Export feature and provide a COM
based programming interface in which to import and export repository objects.
Challenge
Choosing a good naming standard for the repository and adhering to it.
Description
Although naming conventions are important for all repository and database objects,
the suggestions in this document focus on the former. Choosing a convention and
sticking with it is the key point - and sometimes the most difficult in determining
naming conventions. It is important to note that having a good naming convention
will help facilitate a smooth migration and improve readability for anyone reviewing
the processes.
FAQs
The following paragraphs present some of the questions that typically arise in
naming repositories and suggest answers:
The following tables illustrate some naming conventions for transformation objects
(e.g., sources, targets, joiners, lookups, etc.) and repository objects (e.g.,
mappings, sessions, etc.).
There are often several instances of the same target, usually because of different
actions. When looking at a session run, there will be several instances, each with
its own successful rows, failed rows, etc. To make observing a session run easier,
targets should be named according to the action being executed on that target, for
example:
• CUSTOMER_DIM_UPD
• CUSTOMER_DIM_INS
• CUSTOMER_DIM_DEL
• CUSTOMER_DIM_REJ
Port Names
Port names should remain the same as in the source unless some other action is
performed on the port. In that case, the port should be prefixed with the
appropriate name.
When you bring a source port into a Lookup or Expression transformation, the port
should be prefixed with "IN_". This helps the user immediately identify the input
ports without having to line up the ports with the input checkbox. It is also a
good idea to prefix generated output ports; this helps trace the port value
throughout the mapping as it may travel through many other transformations. For
variables inside a transformation, use the prefix 'var_' plus a meaningful
name.
Batch Names
Batch names follow basically the same rules as session names. A prefix such as
'b_' should be used, and there should be a suffix indicating whether the batch is
serial or concurrent.
Shared Objects
If you have an object that you want to use in several mappings or across multiple
folders, like an Expression transformation that calculates sales tax, you can place the
object in a shared folder. You can then use the object in other folders by creating a
shortcut to the object. In this case the naming convention is an 'SC_' prefix, for
instance SC_mltCREATION_SESSION or SC_DUAL.
Set up all Open Database Connectivity (ODBC) data source names (DSNs) the same
way on all client machines. PowerCenter uniquely identifies a source by its Database
Data Source (DBDS) and its name. The DBDS is the same name as the ODBC DSN
since the PowerCenter Client talks to all databases through ODBC.
If ODBC DSNs are different across multiple machines, there is a risk of analyzing the
same table under different names. For example, machine 1 has ODBC DSN Name0
that points to database1. TableA is analyzed on machine 1 and is uniquely
identified as Name0.TableA in the repository. Machine 2 has ODBC DSN Name1 that
points to database1. TableA is analyzed on machine 2 and is uniquely
identified as Name1.TableA in the repository. The result is that the repository may
refer to the same object by multiple names, creating confusion for developers,
testers, and potentially end users.
Also, refrain from using environment tokens in the ODBC DSN. For example, do not
call it dev_db01. As you migrate objects from dev, to test, to prod, you are likely to
wind up with source objects called dev_db01 in the production repository. ODBC
database names should clearly describe the database they reference to ensure that
users do not incorrectly point sessions to the wrong databases.
Using a convention like User1_DW allows you to know who the session is logging in
as and to what database. You should know which DW database, based on which
repository environment, you are working in. For example, if you are creating a
session in your QA repository using connection User1_DW, the session will write to
the QA DW database because you are in the QA repository.
Using this convention will allow for easier migration if you choose to use the Copy
Folder method. When you use Copy Folder, session information is also copied. If the
Database Connection information does not already exist in the folder you are copying
to, it is also copied. So, if you use connections with names like Dev_DW in your
development repository, they will eventually wind up in your QA, and even in your
production, repository.
Challenge
Description
When these factors have been considered and a partitioned strategy has been
selected, the iterative process of adding partitions can begin. Continue adding
partitions to the session until the desired performance threshold is met or
degradation in performance is observed.
2. The next step is to set up the partition. The following are selected hints
for session setup; see the Session and Server Guide for further directions on
setting up partitioned sessions.
Assumptions
Challenge
Understanding how parameters, variables, and parameter files work and using them
for maximum efficiency.
Description
Prior to the release of PowerCenter 5.x, the only variables inherent to the product
were defined to specific transformations and to those Server variables that were
global in nature. Transformation variables were defined as variable ports in a
transformation and could only be used in that specific Transformation object (e.g.,
Expression, Aggregator and Rank Transformations). Similarly, global parameters
defined within Server Manager would affect the subdirectories for Source Files,
Target Files, Log Files, etc.
PowerCenter 5.x has made variables and parameters available across the entire
mapping rather than for a specific transformation object. In addition, it provides
built-in parameters for use within Server Manager. Using parameter files, these
values can change from session-run to session-run.
Mapping Variables
You declare mapping variables in PowerCenter Designer using the menu option
Mappings -> Parameters and Variables. After mapping variables are selected,
you use the pop-up window to create a variable by specifying its name, data type,
initial value, aggregation type, precision and scale. This is similar to creating a port
in most transformations.
Variables, by definition, are objects that can change value dynamically. Informatica
provides four functions for changing the values of mapping variables:
• SetVariable
• SetMaxVariable
• SetMinVariable
• SetCountVariable
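As a brief illustration of how these functions are called inside an Expression
transformation (the variable and port names here are invented for the example):

    SETMAXVARIABLE($$Max_Order_Date, ORDER_DATE)  -- keeps the largest ORDER_DATE seen
    SETMINVARIABLE($$Min_Order_Date, ORDER_DATE)  -- keeps the smallest ORDER_DATE seen
    SETVARIABLE($$Last_Run_Status, 'COMPLETE')    -- sets the variable to a specific value
    SETCOUNTVARIABLE($$Rows_Processed)            -- adjusts a row count as rows pass through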
Name
The name of the variable should be descriptive and be preceded by ‘$$’ (so that it is
easily identifiable as a variable). A typical variable name is:
$$Procedure_Start_Date.
Aggregation Type
This entry creates specific functionality for the variable and determines how it stores
data. For example, with an aggregation type of Max, the value stored in the
repository would be the max value across ALL session runs until the value is deleted.
Initial Value
This value is used during the first session run when there is no corresponding and
overriding parameter file. This value is also used if the stored repository value is
deleted. If no initial value is identified, then a data type specific default value is
used.
Variable values are not stored in the repository when the session:
• Fails to complete.
• Is configured for a test load.
• Is a debug session.
• Runs in debug mode and is configured to discard session output.
Order of Evaluation
The start value is the value of the variable at the start of the session. The start value
can be a value defined in the parameter file for the variable, a value saved in the
repository from the previous run of the session, a user-defined initial value for the
variable, or the default value based on the variable data type.
The PowerCenter Server looks for the start value in the following order:
1. Value in the parameter file
2. Value saved in the repository
3. Initial value
4. Default value
Since parameter values do not change over the course of the session run, the value used is based on the value in the parameter file, the declared initial value, or the data type default, in that order. Mapping parameters and variables can be used in the Expression Editor of the following transformations:
• Expression
• Filter
• Router
• Update Strategy
Mapping parameters and variables also can be used within the Source Qualifier in the
SQL query, user-defined join, and source filter sections.
Parameter Files
A parameter file is an ASCII text file divided into sections, each identified by a header in the form [folder_name.session_name]. For example:
[USER1.s_m_subscriberstatus_load]
$$Post_Date_Var=10/04/2001
[USER1.s_test_var1]
$$PMSuccessEmailUser=XXX@informatica.com
;$$Help_User
A parameter file is declared for use by a session either within the session properties, at the outer-most batch in which the session resides, or as a parameter value when using the PMCMD command.
The following parameters and variables can be defined or overridden within the
parameter file:
Parameter & Variable Type                  Parameter & Variable Name   Desired Definition
String Mapping Parameter                   $$State                     MA
Datetime Mapping Variable                  $$Time                      10/1/2000 00:00:00
Source File (Session Parameter)            $InputFile1                 Sales.txt
Database Connection (Session Parameter)    $DBConnection_Target        Sales (database connection)
Session Log File (Session Variable)        $PMSessionLogFile           d:/session
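As an illustration only (the folder and session names here are hypothetical), the entries in the table above could be combined into a single parameter file section such as:
[USER1.s_sales_load]
$$State=MA
$$Time=10/1/2000 00:00:00
$InputFile1=Sales.txt
$DBConnection_Target=Sales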
Variables and parameters can enhance incremental strategies. The following example
uses a mapping variable, an expression transformation object, and a parameter file
for restarting.
Scenario
Company X wants to start with an initial load of all data but wants subsequent process runs to select only new information. The source data has an inherent post date stored in a column named Date_Entered that can be used. The process will run once every twenty-four hours.
Sample Solution
Create a mapping with source and target objects. From the Mappings -> Parameters and Variables menu, create a new mapping variable named $$Post_Date with the following attributes:
• TYPE – Variable
• DATATYPE – Date/Time
• AGGREGATION TYPE – MAX
• INITIAL VALUE – 01/01/1900
Within the Source Qualifier transformation, use the following in the Source Filter attribute: DATE_ENTERED > to_Date('$$Post_Date','MM/DD/YYYY HH24:MI:SS')
Also note that the initial value 01/01/1900 will be expanded by the PowerCenter Server to 01/01/1900 00:00:00, hence the need to convert the parameter to a datetime.
Within an Expression transformation, use a variable port to update the mapping variable as each row passes through:
SETMAXVARIABLE($$Post_Date,DATE_ENTERED)
The function evaluates each value for DATE_ENTERED and updates the variable with
the Max value to be passed forward. For example:
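With illustrative data, the variable value might progress like this as rows pass through the mapping:

DATE_ENTERED    $$Post_Date after SETMAXVARIABLE
01/15/1998      01/15/1998
01/02/1998      01/15/1998
02/03/1998      02/03/1998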
The first time this mapping is run the SQL will select from the source where
Date_Entered is > 01/01/1900 providing an initial load. As data flows through the
mapping, the variable gets updated to the Max Date_Entered it encounters. Upon
successful completion of the session, the variable is updated in the Repository for
use in the next session run. To view the current value for a particular variable
associated with the session, right-click on the session and choose View Persistent
Values.
The following graphic shows that after the initial run, the Max Date_Entered was
02/03/1998. The next time this session is run, based on the variable in the Source
Qualifier Filter, only sources where Date_Entered > 02/03/1998 will be processed.
To reset the persistent value to the initial value declared in the mapping, view the
persistent value from Server Manager (see graphic above) and press Delete Values.
This will delete the stored value from the Repository, causing the Order of Evaluation
to use the Initial Value declared from the mapping.
If a session run is needed for a specific date, use a parameter file. There are two
basic ways to accomplish this:
• Create a generic parameter file, place it on the server, and point all sessions
to that parameter file. A session may (or may not) have a variable, and the
parameter file need not have variables and parameters defined for every
session ‘using’ the parameter file. To override the variable, either change,
uncomment or delete the variable in the parameter file.
• Run PMCMD for that session but declare the specific parameter file within the
PMCMD command.
Parameter files can be declared in Session Properties under the Log & Error Handling
Tab.
In this example, after the initial session is run the parameter file contents may look like:
[Test.s_Incremental]
;$$Post_Date=
By using the semicolon, the variable override is ignored and the Initial Value or
Stored Value is used. If, in the subsequent run, the data processing date needs to be
set to a specific date (for example: 04/21/2001), then a simple Perl Script can
update the parameter file to:
[Test.s_Incremental]
$$Post_Date=04/21/2001
Upon running the sessions, the order of evaluation looks to the parameter file first,
sees a valid variable and value and uses that value for the session run. After
successful completion, run another script to reset the parameter file.
Reusable mappings that can source a common table definition across multiple
databases, regardless of differing environmental definitions (e.g. instances, schemas,
user/logins), are required in a multiple database environment.
Scenario
Company X maintains five Oracle database instances. All instances have a common
table definition for sales orders, but each instance has a unique instance name,
schema and login.
Each sales order table has a different name, but the same definition:
Sample Solution
First, create a mapping parameter named $$Source_Schema_Table to hold the schema and table name. Note that the parameter attributes vary based on the specific environment. Also, the initial value is not required, as this solution will use parameter files.
Open the source qualifier and use the mapping parameter in the SQL Override as
shown in the following graphic.
Using Server Manager, create a session based on this mapping. In the Source Database connection drop-down, place the following parameter: $DBConnection_Source. Point the target to the corresponding target connection and finish.
Now create the parameter file. In this example, there will be five separate parameter
files.
Parmfile1.txt
[Test.s_Incremental_SOURCE_CHANGES]
$$Source_Schema_Table=aardso.orders
$DBConnection_Source= ORC1
Parmfile2.txt
[Test.s_Incremental_SOURCE_CHANGES]
$DBConnection_Source= ORC99
Parmfile3.txt
[Test.s_Incremental_SOURCE_CHANGES]
$$Source_Schema_Table=hitme.order_done
$DBConnection_Source= HALC
Parmfile4.txt
[Test.s_Incremental_SOURCE_CHANGES]
$$Source_Schema_Table=snakepit.orders
$DBConnection_Source= UGLY
Parmfile5.txt
[Test.s_Incremental_SOURCE_CHANGES]
$$Source_Schema_Table= gmer.orders
$DBConnection_Source= GORF
Use PMCMD to run the five sessions in parallel. The syntax for PMCMD for starting
sessions is as follows:
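The syntax is not reproduced in this copy. As a rough sketch only (the exact argument order, the ':pf=' clause, and the trailing flags vary by PMCMD version and should be verified against the PMCMD reference), a PowerCenter 5.x style invocation with a parameter file looks something like:

pmcmd start <user> <password> <host>:<port> Test:s_Incremental_SOURCE_CHANGES:pf=Parmfile1.txt 1 1

Five such commands, each pointing at a different parameter file, would start the five sessions in parallel.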
Alternatively, you could run the sessions in sequence with one parameter file. In this
case, a pre- or post-session script would change the parameter file for the next
session.
Challenge
Description
The first step in using mappings to trap errors is understanding and identifying the
error handling requirement.
Capturing data errors within a mapping and re-routing these errors to an error table allows for easy analysis by the end users and improves performance. For example, suppose it is necessary to identify foreign key constraint errors within a mapping. This can be accomplished by creating a lookup into a dimension table prior to loading the fact table. Referential integrity is assured by including this functionality in a mapping. The database still enforces the foreign key constraints, but erroneous data will not be written to the target table. Also, if constraint errors are captured within the mapping, they can be routed to an error table and analyzed without interrupting the load process.
Data content errors also can be captured in a mapping. Mapping logic can identify
data content errors and attach descriptions to the errors. This approach can be
effective for many types of data content errors, including: date conversion, null
values intended for not null target fields, and incorrect data formats or data types.
In the following example, we want to capture null values before they enter a target field that does not allow nulls.
After we’ve identified the type of error, the next step is to separate the error from
the data flow. Use the Router Transformation to create a stream of data that will be
the error route. Any row containing an error (or errors) will be separated from the
valid data and uniquely identified with a composite key consisting of a MAPPING_ID
and a ROW_ID. The MAPPING_ID refers to the mapping name and the ROW_ID is
generated by a Sequence Generator. The composite key allows developers to trace
rows written to the error tables.
Error tables are important to an error handling strategy because they store the
information useful to error identification and troubleshooting. In this example, the
two error tables are ERR_DESC_TBL and TARGET_NAME_ERR.
The ERR_DESC_TBL table will hold information about the error, such as the mapping
name, the ROW_ID, and a description of the error. This table is designed to hold all
error descriptions for all mappings within the repository for reporting purposes.
The TARGET_NAME_ERR table will be an exact replica of the target table with two
additional columns: ROW_ID and MAPPING_ID. These two columns allow the
TARGET_NAME_ERR and the ERR_DESC_TBL to be linked. The TARGET_NAME_ERR
table provides the user with the entire row that was rejected, enabling the user to
trace the error rows back to the source. These two tables might look like the
following:
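A minimal sketch of the two tables (column names, lengths, and the Oracle-style DDL are assumptions; TARGET_NAME stands in for the actual target table):

CREATE TABLE TARGET_NAME_ERR (
    -- every column of the target table, plus:
    ROW_ID      NUMBER,
    MAPPING_ID  VARCHAR2(80)
);

CREATE TABLE ERR_DESC_TBL (
    MAPPING_ID  VARCHAR2(80),
    ROW_ID      NUMBER,
    ERROR_DESC  VARCHAR2(255),
    LOAD_DATE   DATE
);

ROW_ID and MAPPING_ID form the composite key that links a rejected row in TARGET_NAME_ERR to its error descriptions in ERR_DESC_TBL.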
The error handling functionality must assign a unique description to each error in the rejected row. In this example, any null value intended for a not null target field is assigned a description identifying the field and the error.
After field descriptions are assigned, we need to break the error row into several
rows, with each containing the same content except for a different error description.
You can use the Normalizer transformation to break one row of data into many rows. After a single row of data is separated based on the number of possible errors in it, we need to filter the row down to the columns that are actually in error. For example, one row of data may have as many as three errors, but in this
case, the row actually has only one error so we need to write only one error with its
description to the ERR_DESC_TBL.
When the row is written to the ERR_DESC_TBL, we can link this row to the row in the
TARGET_NAME_ERR table using the ROW_ID and the MAPPING_ID. The following
chart shows how the two error tables can be linked. Focus on the bold selections in
both tables.
TARGET_NAME_ERR
ERR_DESC_TBL
By adding another layer of complexity within the mappings, errors can be flagged as
‘soft’ or ‘hard’. A ‘hard’ error can be defined as one that would fail when being
written to the database, such as a constraint error. A ‘soft’ error can be defined as a
data content error. A record flagged as a hard error is written to the error route,
while a record flagged as a soft error can be written to the target system and the
error tables. This gives business analysts an opportunity to evaluate and correct data
imperfections while still allowing the records to be processed for end-user reporting.
Ultimately, business organizations need to decide if the analysts should fix the data
in the reject table or in the source systems. The advantage of the mapping approach
is that all errors are identified as either data errors or constraint errors and can be
properly addressed. The mapping approach also reports errors based on projects or
categories by identifying the mappings that contain errors. The most important aspect of the mapping approach, however, is its flexibility. Once an error type is identified, the error handling logic can be placed anywhere within a mapping. By
using the mapping approach to capture identified errors, data warehouse operators
can effectively communicate data quality issues to the business users.
Challenge
Understanding the need for an error handling strategy, identifying potential errors,
and determining an optimal plan for error handling.
Description
It is important to recognize the need for an error handling strategy, and then devise an infrastructure to resolve the errors. Although error handling varies from project to project, the typical requirement of an error handling system is to address data quality issues (i.e., dirty data). Implementing an error handling strategy requires a
significant amount of planning and understanding of the load process. You should
prepare a high level data flow design to illustrate the load process and the role that
error handling plays in it.
Error handling is an integral part of any load process and directly affects the process
when it starts and stops. An error handling strategy should be capable of accounting
for unrecoverable errors during the load process and provide crash recovery, stop,
and restart capabilities. Stop and restart processes can be managed through the pre-
and post- session shell scripts for each PowerCenter session.
Although source systems vary widely in functionality and data quality standards, at
some point a record with incorrect data will be introduced into the data warehouse
from a source system. The error handling strategy should reject these rows, provide
a place to put the rejected rows, and set a limit on how many errors can occur
before the load process stops. It also should report on the rows that are rejected by
the load process, and provide a mechanism for reload.
The following are typical error conditions, the action taken, and how the support team is notified:

Condition: Tablespace check and database constraints check before creating target tables.
Action: If the required tablespace is not available, all loads that are part of the system are aborted, and notification is sent to the DBA and Production Support.
Notification: 1) E-mail  2) Page

Condition: Timer to check whether the load has completed by 5:00 AM.
Action: If the load has not completed within the 2-hour window (by 5:00 AM), send a notification to Production Support.
Notification: 1) E-mail  2) Page

Condition: The number of rejected records crosses the error threshold limit, or the Informatica PowerCenter session fails for any other reason.
Action: Load the rejected records to a reject file and send a notification to Production Support.
Notification: 1) E-mail  2) Page

Condition: Match the hash total and the column totals loaded in the target tables against the contents of the .SENT file.
Action: If the hash total and the total number of records do not match, roll back the records loaded in the target and send notification to Production Support.
Notification: 1) E-mail  2) Page
Infrastructure Overview
A better way of identifying and trapping errors is to create error tables and use the mapping to route the rows that contain errors into them.
A Sample Scenario:
The entire process of defining the error handling strategy within a particular mapping
depends on the type of errors that you expect to capture.
The following examples illustrate what is necessary for successful error handling.
<TARGET_TABLE_NAME>_RELOAD
Fields: LKP1, LKP2, LKP3, ASOF_DT, SEQ_ID, MAPPING_NAME

ENTERPRISE_ERR_TBL
Fields: FOLDER_NAME, MAPPING_NAME, SEQ_ID, ERROR_DESC, LOAD_DATE, SOURCE, TARGET, LKP_TBL
Challenge
Description
Once the mappings and sessions contain the proper metadata, it is important to
develop a plan for extracting this metadata. PowerCenter provides several ways to
access the metadata contained within the repository. One way of doing this is
through the generic Crystal Reports that are supplied with PowerCenter. These
reports are accessible through the Repository Manager. (Open the Repository
Manager, and click Reports.) You can choose from the following four reports:
Mapping report (map.rpt). Lists source column and transformation details for each
mapping in each folder or repository.
Source and target dependencies report (S2t_dep.rpt). Shows the source and
target dependencies as well as the transformations performed in each mapping.
In PowerCenter 5.1, you can develop a metadata access strategy using the Metadata
Reporter. The Metadata Reporter allows for customized reporting of all repository
information without direct access to the repository itself. For more information on the
Metadata Reporter, consult Metadata Reporting and Sharing, or the Metadata
Reporter Guide included with the PowerCenter documentation.
A printout of the mapping object flow is also useful for clarifying how objects are
connected. To produce such a printout, arrange the mapping in Designer so the full
mapping appears on the screen, then use Alt+PrtSc to copy the active window to the
clipboard. Use Ctrl+V to paste the copy into a Word document.
Challenge
Efficiently load data into the Enterprise Data Warehouse (EDW) and Data Mart (DM).
This Best Practice describes various loading scenarios, the use of data profiles, an
alternate method for identifying data errors, methods for handling data errors, and
alternatives for addressing the most common types of problems.
Description
When loading data into an EDW or DM, the loading process must validate that the
data conforms to known rules of the business. When the source system data does
not meet these rules, the process needs to handle the exceptions in an appropriate
manner. The business needs to be aware of the consequences of either permitting
invalid data to enter the EDW or rejecting it until it is fixed. Both approaches present
complex issues. The business must decide what is acceptable and prioritize two conflicting goals:
• the need for complete data in the EDW, even if some of it is invalid, and
• the need for accurate data in the EDW, even if some of it is held out until fixed.
In general, there are three methods for handling data errors detected in the loading
process:
• Reject All. This is the simplest to implement since all errors are rejected
from entering the EDW when they are detected. This provides a very reliable
EDW that the users can count on as being correct, although it may not be
complete. Both dimensional and factual data are rejected when any errors are
encountered. Reports indicate what the errors are and how they affect the
completeness of the data.
The development effort required to fix a Reject All scenario is minimal, since
the rejected data can be processed through existing mappings once it has
been fixed. Minimal additional code may need to be written since the data will
only enter the EDW if it is correct, and it would then be loaded into the data
mart using the normal process.
• Reject None. This approach gives users a complete picture of the data
without having to consider data that was not available due to it being rejected
during the load process. The problem is that the data may not be accurate.
Both the EDW and DM may contain incorrect information that can lead to
incorrect decisions.
With Reject None, data integrity is intact, but the data may not support
correct aggregations. Factual data can be allocated to dummy or incorrect
dimension rows, resulting in grand total numbers that are correct, but
incorrect detail numbers. After the data is fixed, reports may change, with
detail information being redistributed along different hierarchies.
The development effort to fix this scenario is significant. After the errors are
corrected, a new loading process needs to correct both the EDW and DM,
which can be a time-consuming effort based on the delay between an error
being detected and fixed. The development strategy may include removing
information from the EDW, restoring backup tapes for each night’s load, and
reprocessing the data. Once the EDW is fixed, these changes need to be
loaded into the DM.
• Reject Critical. This approach rejects only records with errors in key elements, while allowing records whose errors are limited to non-critical attributes to enter the EDW. It requires categorizing the data in two ways: 1) as Key Elements or Attributes, and 2) as Inserts or Updates.
Key elements are required fields that maintain the data integrity of the EDW
and allow for hierarchies to be summarized at different levels in the
organization. Attributes provide additional descriptive information per key
element.
Inserts are important for dimensions because subsequent factual data may
rely on the existence of the dimension data row in order to load properly.
Updates do not affect the data integrity as much because the factual data can
usually be loaded with the existing dimensional data unless the update is to a
Key Element.
Informatica generally recommends using the Reject Critical strategy to maintain the
accuracy of the EDW. By providing the most fine-grained analysis of errors, this
method allows the greatest amount of valid data to enter the EDW on each run of
the ETL process, while at the same time screening out the unverifiable data fields.
However, business management needs to understand that some information may be
held out of the EDW, and also that some of the information in the EDW may be at
least temporarily allocated to the wrong hierarchies.
Using Profiles
Profiles are tables used to track history of dimensional data in the EDW. As
the source systems change, Profile records are created with date stamps that
indicate when the change took place. This allows power users to analyze the EDW
using either current (As-Is) or past (As-Was) views of dimensional data.
Profiles should occur once per change in the source systems. Problems occur when
two fields change in the source system and one of those fields produces an error.
When the second field is fixed, it is difficult for the ETL process to produce a
reflection of data changes since there is now a question whether to update a
previous Profile or create a new one. The first value passes validation, which
produces a new Profile record, while the second value is rejected and is not included
in the new Profile. When this error is fixed, it would be desirable to update the
existing Profile rather than creating a new one, but the logic needed to perform this
UPDATE instead of an INSERT is complicated.
If a third field is changed before the second field is fixed, the correction process
cannot be automated. The following hypothetical example represents three field
values in a source system. The first row on 1/1/2000 shows the original values. On
1/5/2000, Field 1 changes from Closed to Open, and Field 2 changes from Black to
BRed, which is invalid. On 1/10/2000 Field 3 changes from Open 9-5 to Open 24hrs,
but Field 2 is still invalid. On 1/15/2000, Field 2 is finally fixed to Red.
Three methods exist for handling the creation and update of Profiles:
1. The first method produces a new Profile record each time a change is detected
in the source. If a field value was invalid, then the original field value is maintained.
By applying all corrections as new Profiles in this method, we simplify the process by
directly applying all changes to the source system directly to the EDW. Each change -
- regardless if it is a fix to a previous error -- is applied as a new change that creates
a new Profile. This incorrectly shows in the EDW that two changes occurred to the
source information when, in reality, a mistake was entered on the first change and
should be reflected in the first Profile. The second Profile should not have been
created.
2. The second method updates the first Profile created on 1/5/2000 until all
fields are corrected on 1/15/2000, which loses the Profile record for the change to
Field 3.
If we try to apply changes to the existing Profile, as in this method, we run the risk
of losing Profile information. If the third field changes before the second field is fixed,
we show the third field changed at the same time as the first. When the second field
was fixed it would also be added to the existing Profile, which incorrectly reflects the
changes in the source system.
3. The third method creates only two new Profiles, but then causes an update to
the Profile records on 1/15/2000 to fix the Field 2 value in both.
If we try to implement a method that updates old Profiles when errors are fixed, as
in this option, we need to create complex algorithms that handle the process
correctly. It involves being able to determine when an error occurred and examining
all Profiles generated since then and updating them appropriately. And, even if we
create the algorithms to handle these methods, we still have an issue of determining
if a value is a correction or a new value. If an error is never fixed in the source
system, but a new value is entered, we would identify it as a previous error, causing
an automated process to update old Profile records, when in reality a new Profile
record should have been entered.
A method exists to track old errors so that we know when a value was rejected.
Then, when the process encounters a new, correct value it flags it as part of the load
strategy as a potential fix that should be applied to old Profile records. In this way,
the corrected data enters the EDW as a new Profile record, but the process of fixing
old Profile records, and potentially deleting the newly inserted record, is delayed until
the data is examined and an action is decided. Once an action is decided, another
process examines the existing Profile records and corrects them as necessary. This
method only delays the As-Was analysis of the data until the correction method is
determined because the current information is reflected in the new Profile.
Quality indicators can be used to record definitive statements regarding the quality of the data received and stored in the EDW. The indicators can be appended to existing data tables or stored in a separate table linked by the primary key. Quality indicators can be used to:
• show the record and field level quality associated with a given record at the
time of extract
• identify data sources and errors encountered in specific records
• support the resolution of specific record error types via an update and
resubmission process.
Quality indicators may be used to record several types of errors – e.g., fatal errors
(missing primary key value), missing data in a required field, wrong data
type/format, or invalid data value. If a record contains even one error, data quality
(DQ) fields will be appended to the end of the record, one field for every field in the
record. A data quality indicator code is included in the DQ fields corresponding to
the original fields in the record where the errors were encountered. Records
containing a fatal error are stored in a Rejected Record Table and associated to the
original file name and record number. These records cannot be loaded to the EDW
because they lack a primary key field to be used as a unique record identifier in the
EDW.
• A source record does not contain a valid key. This record would be sent to a
reject queue. Metadata will be saved and used to generate a notice to the
sending system indicating that x number of invalid records were received and
could not be processed. However, in the absence of a primary key, no
tracking is possible to determine whether the invalid record has been replaced
or not.
• The source file or record is illegible. The file or record would be sent to a
reject queue. Metadata indicating that x number of invalid records were
received and could not be processed may or may not be available for a
general notice to be sent to the sending system. In this case, due to the
nature of the error, no tracking is possible to determine whether the invalid
record has been replaced or not. If the file or record is illegible, it is likely that individual unique records within the file are not identifiable. While information can be provided to the source system site indicating that there are file or record problems, individual records cannot be tracked to determine whether they have been replaced.
Other error types produce records that can be processed, but the records still contain errors.
When an error is detected during ingest and cleansing, the identified error type is
recorded.
The requirement to validate virtually every data element received from the source
data systems mandates the development, implementation, capture and maintenance
of quality indicators. These are used to indicate the quality of incoming data at an
elemental level. Aggregated and analyzed over time, these indicators provide the
information necessary to identify acute data quality problems, systemic issues,
business process problems and information technology breakdowns.
The quality indicators ("0"-No Error, "1"-Fatal Error, "2"-Missing Data from a Required Field, "3"-Wrong Data Type/Format, "4"-Invalid Data Value, and "5"-Outdated Reference Table in Use) provide a concise indication of the quality of the data within specific fields for every data type. These indicators provide the opportunity
for operations staff, data quality analysts and users to readily identify issues
potentially impacting the quality of the data. At the same time, these indicators
provide the level of detail necessary for acute quality problems to be remedied in a
timely manner.
The need to periodically correct data in the EDW is inevitable. But how often should
these corrections be performed?
The correction process can be as simple as updating field information to reflect actual
values, or as complex as deleting data from the EDW, restoring previous loads from
tape, and then reloading the information correctly. Although we try to avoid
performing a complete database restore and reload from a previous point in time, we
cannot rule this out as a possible solution.
As errors are encountered, they are written to a reject file so that business analysts
can examine reports of the data and the related error messages indicating the
causes of error. The business needs to decide whether analysts should be allowed to
fix data in the reject tables, or whether data fixes will be restricted to source
systems. If errors are fixed in the reject tables, the EDW will not be synchronized
with the source systems. This can present credibility problems when trying to trace data back to the source systems.
When attribute errors are encountered for a new dimensional value, default values
can be assigned to let the new record enter the EDW. Some rules that have been
proposed for handling defaults are as follows:
Reference tables are used to normalize the EDW model to prevent the duplication of
data. When a source value does not translate into a reference table value, we use
the ‘Unknown’ value. (All reference tables contain a value of ‘Unknown’ for this
purpose.)
The business should provide default values for each identified attribute. Fields that
are restricted to a limited domain of values (e.g. On/Off or Yes/No indicators), are
referred to as small value sets. When errors are encountered in translating these
values, we use the value that represents off or ‘No’ as the default. Other values, like
numbers, are handled on a case-by-case basis. In many cases, the data integration
process is set to populate ‘Null’ into these fields, which means “undefined” in the
EDW. After a source system value is corrected and passes validation, it is corrected
in the EDW.
The business also needs to decide how to handle new dimensional values such as
locations. Problems occur when the new key is actually an update to an old key in
the source system. For example, a location number is assigned and the new location
is transferred to the EDW using the normal process; then the location number is
changed due to some source business rule such as: all Warehouses should be in the
5000 range. The process assumes that the change in the primary key is actually a
new warehouse and that the old warehouse was deleted. This type of error causes a
separation of fact data, with some data being attributed to the old primary key and
some to the new. An analyst would be unable to get a complete picture.
Fixing this type of error involves integrating the two records in the EDW, along with the related facts. Integrating the two rows involves combining the Profile information for both records.
The situation is more complicated when the opposite condition occurs (i.e., two
primary keys mapped to the same EDW ID really represent two different IDs). In this
case, it is necessary to restore the source information for both dimensions and facts
from the point in time at which the error was introduced, deleting affected records
from the EDW and reloading from the restore to correct the errors.
If we let the facts enter the EDW and subsequently the DM, we need to create
processes that update the DM after the dimensional data is fixed. This involves
updating the measures in the DM to reflect the changed data. If we reject the facts
when these types of errors are encountered, the fix process becomes simpler. After
the errors are fixed, the affected rows can simply be loaded and applied to the DM.
Fact Errors
If there are no business rules that reject fact records except for relationship errors to
dimensional data, then when we encounter errors that would cause a fact to be
rejected, we save these rows to a reject table for reprocessing the following night.
This nightly reprocessing continues until the data successfully enters the EDW. Initial
and periodic analyses should be performed on the errors to determine why they are
not being loaded. After they are loaded, they are populated into the DM as usual.
Data Stewards
Data Stewards are generally responsible for maintaining reference tables and
translation tables, creating new entities in dimensional data, and designating one
primary data source when multiple sources exist. Reference data and translation
tables enable the EDW to maintain consistent descriptions across multiple source
systems, regardless of how the source system stores the data. New entities in
dimensional data include new locations, products, hierarchies, etc. Multiple source
data occurs when two source systems can contain different data for the same
dimensional entity.
Reference Tables
The EDW uses reference tables to maintain consistent descriptions. Each table
contains a short code value as a primary key and a long description for reporting
purposes. A translation table is associated with each reference table to map the source system values to the EDW reference values.
The translation tables contain one or more rows for each source value and map the
value to a matching row in the reference table. For example, the SOURCE column in
FILE X on System X can contain ‘O’, ‘S’ or ‘W’. The data steward would be
responsible for entering in the Translation table the following values:
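For illustration only (the EDW codes shown here are hypothetical), the entries might look like:

Source Value    EDW Code
O               OFF
S               STR
W               WHS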
These values are used by the data integration process to correctly load the EDW.
Other source systems that maintain a similar field may use a two-letter abbreviation
like ‘OF’, ‘ST’ and ‘WH’. The data steward would make the following entries into the
translation table to maintain consistency across systems:
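Continuing the same hypothetical codes, the entries for the second source system might be:

Source Value    EDW Code
OF              OFF
ST              STR
WH              WHS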
The data stewards are also responsible for maintaining the Reference table that
translates the Codes into descriptions. The ETL process uses the Reference table to
populate the following values into the DM:
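Continuing the illustration, the Reference table might then map the codes to descriptions such as:

Code    Description
OFF     Office
STR     Store
WHS     Warehouse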
Error handling results when the data steward enters incorrect information for these
mappings and needs to correct them after data has been loaded. Correcting the
above example could be complex (e.g., if the data steward entered ST as translating
to OFFICE by mistake). The only way to determine which rows should be changed is
to restore and reload source data from the first time the mistake was entered.
Processes should be built to handle these types of situations, including correction of the EDW and DM.
Dimensional Data
New entities in dimensional data present a more complex issue. New entities in the
EDW may include Locations and Products, at a minimum. Dimensional data uses the
same concept of translation as Reference tables. These translation tables map the
source system value to the EDW value. For location, this is straightforward, but over
time, products may have multiple source system values that map to the same
product in the EDW. (Other similar translation issues may also exist, but Products
serves as a good example for error handling.)
When the dimensional value is left as ‘Pending Verification’ however, facts may be
rejected or allocated to dummy values. This requires the data stewards to review the
status of new values on a daily basis. A potential solution to this issue is to generate
an e-mail each night if there are any translation table entries pending verification.
The data steward then opens a report that lists them.
The situation is more complicated when the opposite condition occurs (i.e., two
products are mapped to the same product, but really represent two different
products). In this case, it is necessary to restore the source information for all loads
since the error was introduced. Affected records from the EDW should be deleted and
then reloaded from the restore to correctly split the data. Facts should be split to
allocate the information correctly and dimensions split to generate correct Profile
information.
Manual Updates
Over time, any system is likely to encounter errors that are not correctable using
source systems. A method needs to be established for manually entering fixed data
and applying it correctly to the EDW, and subsequently to the DM, including
beginning and ending effective dates. These dates are useful for both Profile and
Date Event fixes. Further, a log of these fixes should be maintained to enable
identifying the source of the fixes as manual rather than part of the normal load
process.
Multiple Sources
The data stewards are also involved when multiple sources exist for the same data.
This occurs when two sources contain subsets of the required information. For
example, one system may contain Warehouse and Store information while another
contains Store and Hub information. Because they share Store information, it is
difficult to decide which source contains the correct information.
When this happens, both sources have the ability to update the same row in the
EDW. If both sources are allowed to update the shared information, data accuracy
and Profile problems are likely to occur. If we update the shared information on only
one source system, the two systems then contain different information. If the
changed system is loaded into the EDW, it creates a new Profile indicating the
To avoid this type of situation, the business analysts and developers need to
designate, at a field level, the source that should be considered primary for the field.
Then, only if the field changes on the primary source would it be changed. While this
sounds simple, it requires complex logic when creating Profiles, because multiple
sources can provide information toward the one Profile record created for that day.
One solution to this problem is to develop a system of record for all sources. This
allows developers to pull the information from the system of record, knowing that
there are no conflicts for multiple sources. Another solution is to indicate, at the field
level, a primary source where information can be shared from multiple sources.
Developers can use the field level information to update only the fields that are
marked as primary. However, this requires additional effort by the data stewards to
mark the correct source fields as primary and by the data integration team to
customize the load process.
Challenge
Description
General Suggestions
• To edit any cell in the grid, press <F2>, then move the cursor to the character you want to edit and click OK.
• To move the current field in a transformation Down, first highlight it, then press <Alt><w> and click OK.
• To move the current field in a transformation Up, first highlight it, then
press <Alt><u> and click OK.
• To add a new field or port, first highlight an existing field or port, then
press <Alt><f> to insert the new field/port below it and click OK.
• To validate the Default value, first highlight the port you want to validate, then press <Alt><v> and click OK.
• When adding a new port, just begin typing. You don't need to press
DEL first to remove the ‘NEWFIELD’ text, then click OK when you have
finished.
• When moving about the expression fields via arrow keys:
o Use the SPACE bar to check/uncheck the port type. The box
must be highlighted in order to check/uncheck the port type.
o Press <F2> then <F3> to quickly open the Expression Editor of
an OUT/VAR port. The expression must be highlighted.
• To cancel an edit in the grid, press <Esc> then click OK.
• For all combo/dropdown list boxes, just type the first letter on the list
to select the item you want.
• To copy a selected item in the grid, press <Ctrl><C>.
• To paste a selected item from the Clipboard to the grid, press <Ctrl><V>.
• To delete a selected field or port from the grid, press <Alt><C>.
• To copy a selected row from the grid, press <Alt><O>.
• To paste a selected row from the grid, press <Alt><P> .
Expression Editor
Challenge
Description
Reusable Objects
The first step in creating an inventory of reusable objects is to review the business
requirements and look for any common routines/modules that may appear in more
than one data movement. These common routines are excellent candidates for
reusable objects. In PowerCenter, reusable objects can be single transformations
(lookups, filters, etc.) or even a string of transformations (mapplets).
Common objects are sometimes created just for the sake of creating common
components when in reality, creating and testing the object does not save
development time or future maintenance. For example, if there is a simple
calculation like subtracting a current rate from a budget rate that will be used for two
different mappings, carefully consider whether the effort to create, test, and
document the common object is worthwhile. Often, it is simpler to add the
calculation to both mappings. However, if the calculation were to be performed in a
number of mappings, if it was very difficult, and if all occurrences would be updated
following any change or fix – then this would be an ideal case for a reusable object.
The second criterion for a reusable object concerns the data that will pass through the reusable object. Developers often encounter situations where they perform a certain type of high-level process (e.g., filter, expression, update strategy) in two or more mappings; however, if the data that passes through each differs, a shared object may not be practical.
Document the list of reusable objects that pass these criteria, providing a
high-level description of what each object will accomplish. The detailed design will
occur in a future subtask, but at this point the intent is to identify the number and
functionality of reusable objects that will be built for the project. Keep in mind that it
will be impossible to identify 100 percent of the reusable objects at this point; the
goal here is to create an inventory of as many as possible, and hopefully the most
difficult ones. The remainder will be discovered while building the data integration
processes.
Mappings
The goal here is to create an inventory of the mappings needed for the project. For
this exercise, the challenge is to think in individual components of data movement.
While the business may consider a fact table and its three related dimensions as a
single ‘object’ in the data mart or warehouse, five mappings may be needed to
populate the corresponding star schema with data (i.e., one for each of the
dimension tables and two for the fact table, each from a different source system).
Typically, when creating an inventory of mappings, the focus is on the target tables,
with an assumption that each target table has its own mapping, or sometimes
multiple mappings. While often true, if a single source of data populates multiple
tables, this approach yields multiple mappings. Efficiencies can sometimes be
realized by loading multiple tables from a single source. By simply focusing on the
target tables, however, these efficiencies can be overlooked.
When completed, the spreadsheet can be sorted either by target table or source
table. Sorting by source table can help determine potential mappings that create
multiple targets.
When using a source to populate multiple tables at once for efficiency, be sure to keep restartability/reloadability in mind. The mapping will always load two or more target tables from the source, so there will be no easy way to rerun a single table. In
this example, potentially the Customers table and the Customer_Type tables can be
loaded in the same mapping.
When merging targets into one mapping in this manner, give both targets the same
number. Then, re-sort the spreadsheet by number. For the mappings with multiple
sources or targets, merge the data back into a single row to generate the inventory
of mappings, with each number representing a separate mapping.
At this point, it is often helpful to record some additional information about each
mapping to help with planning and maintenance.
First, give each mapping a name. Apply the naming standards generated in 2.2
DESIGN DEVELOPMENT ARCHITECTURE. These names can then be used to
distinguish mappings from each other and also can be put on the project plan as
individual tasks.
Next, determine for the project a threshold for a High, Medium, or Low number of
target rows. For example, in a warehouse where dimension tables are likely to
number in the thousands and fact tables in the hundred thousands, the following
thresholds might apply:
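As an illustration only (actual thresholds should be set per project), the values might be:

Low    - fewer than 10,000 target rows
Medium - 10,000 to 100,000 target rows
High   - more than 100,000 target rows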
Add any other columns of information that might be useful to capture about each
mapping, such as a high-level description of the mapping functionality, resource
(developer) assigned, initial estimate, actual completion time, or complexity rating.
Challenge
The PowerCenter repository has more than eighty tables, and nearly all use one or
more indexes to speed up queries. Most databases keep and use column distribution
statistics to determine which index to use in order to optimally execute SQL queries.
Database servers do not update these statistics continuously, so they quickly become
outdated in frequently-used repositories, and SQL query optimizers may choose a
less-than-optimal query plan. In large repositories, choosing a sub-optimal query
plan can drastically affect performance. As a result, the repository becomes slower
and slower over time.
Description
For the repository tables, it is helpful to understand that all PowerCenter repository
tables and index names begin with "OPB_" or "REP_". The following information is
useful for generating scripts to update distribution statistics.
Oracle
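The generating query is not reproduced in this copy; a statement along the following lines (assuming the repository tables are owned by the connected user) produces the analyze script:

select 'analyze table ' || table_name || ' compute statistics;'
from user_tables
where table_name like 'OPB_%';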
Save the output to a file. Then, edit the file and remove all the headers (i.e., the lines containing column headings and dashes).
Run this as a SQL script. This updates statistics for the repository tables.
MS SQL Server
select 'update statistics ', name from sysobjects where name like
'OPB_%'
name
update statistics OPB_ANALYZE_DEP
update statistics OPB_ATTR
update statistics OPB_BATCH_OBJECT
.
.
Save the output to a file, then edit the file and remove the header information (i.e.,
the top two lines) and add a 'go' at the end of the file.
Run this as a SQL script. This updates statistics for the repository tables.
Sybase
select 'update statistics ', name from sysobjects where name like
'OPB_%'
name
update statistics OPB_ANALYZE_DEP
update statistics OPB_ATTR
update statistics OPB_BATCH_OBJECT
Save the output to a file, then remove the header information (i.e., the top two
lines), and add a 'go' at the end of the file.
Run this as a SQL script. This updates statistics for the repository tables.
Informix
select 'update statistics low for table ', tabname, ' ;' from systables
where tabname like 'opb_%' or tabname like 'OPB_%';
Save the output to a file, then edit the file and remove the header information (i.e., the top line containing the column heading).
Run this as a SQL script. This updates statistics for the repository tables.
DB2
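The generating query for DB2 is not shown in this copy; one possible form (assuming access to the SYSCAT catalog views; the generated RUNSTATS commands are typically run through the DB2 command line processor) is:

select 'runstats on table ' || rtrim(tabschema) || '.' || rtrim(tabname) || ' and indexes all;'
from syscat.tables
where tabname like 'OPB_%';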
Run this as a SQL script to update statistics for the repository tables.
Challenge
Once the data warehouse has been moved to production, the most important task is
keeping the system running and available for the end users.
Description
The Service Level agreement outlines how the overall data warehouse system
will be maintained. This is a high-level document that discusses the system to
be maintained, the components of the system, and identifies the groups
responsible for monitoring the various components of the system. At a
minimum, it should contain the following information:
Operations Manual
Challenge
Knowing that all data for the current load cycle has loaded correctly is essential for
good data warehouse management. However, the need for load validation varies,
depending on the extent of error checking, data validation, or data cleansing functionality inherent in your mappings.
Description
Methods for validating the load process range from simple to complex. The first step
is to determine what information you need for load validation (e.g., batch names,
session names, session start times, session completion times, successful rows and
failed rows). Then, you must determine the source of this information. All this
information is stored as metadata in the repository, but you must have a means of
extracting this information.
Finally, you must determine how you want this information presented to you. Do you want it stored as a flat file? Do you want it e-mailed to you? Do you want it available in a relational table, so that history can easily be preserved? All of these factors weigh into finding the correct solution for you.
The following paragraphs describe three possible solutions for load validation,
beginning with a fairly simple solution and moving toward the more complex:
1. Post-session e-mail. Post-session e-mail is configured in the session, under the General tab and 'Session Commands'. The following variables can be used within the e-mail text:
- %s Session name
- %e Session status
- %b Session start time
- %c Session completion time
- %i Session elapsed time
- %l Total records loaded
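For example, an e-mail body along these lines (the message wording is illustrative) expands to a short load summary at run time:

Session %s finished with status %e.
Start: %b  End: %c  Elapsed: %i
Rows loaded: %l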
TIP: One practical application of this functionality is the situation in which a key
business user waits for completion of a session to run a report. You can configure e-
mail to this user, notifying him/her that the session was successful and the report
can run.
2. Query the repository. Almost any query can be put together to retrieve data about the load execution from the repository. The MX view REP_SESS_LOG is a great place to start; it is likely to contain all the information you need. The following sample query shows how to extract folder name, session name, session end time, successful rows, and session duration:
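The query itself is not reproduced in this copy; a sketch along these lines (the column names are assumptions and should be verified against the MX views in your PowerCenter version) illustrates the idea:

select subject_area,
       session_name,
       session_timestamp,
       successful_rows,
       session_timestamp - actual_start as session_duration
from rep_sess_log
order by session_timestamp desc;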
3. Use a mapping
Challenge
Description
When moving into production, many companies require the use of a third-party
scheduler that is the company standard.
A third-party scheduler can start and stop an Informatica session or batch using the
PMCMD commands. Because PowerCenter has a scheduler, there are several levels
at which to integrate a third-party scheduler with PowerCenter. The correct level of
integration depends on the complexity of the batch/schedule and level and type of
production support.
In general, there are three levels of integration between a third-party scheduler and
Informatica: Low Level, Medium Level, and High Level.
Low Level
Low level integration refers to a third-party scheduler kicking off only one
Informatica session or a batch. That initial PowerCenter process subsequently kicks
off the rest of the sessions and batches. The PowerCenter scheduler handles all
processes and dependencies after the third-party scheduler has kicked off the initial
batch or session. In this level of integration, nearly all control lies with the
PowerCenter scheduler.
This type of integration is very simple and should only be used as a loophole to fulfill
a corporate mandate on a standard scheduler. A low level of integration is very
simple to implement because the third-party scheduler kicks off only one process.
The third-party scheduler is not adding any functionality that cannot be handled by
the PowerCenter scheduler.
Medium Level
Medium level integration is when a third-party scheduler kicks off many different
batches or sessions, but not all sessions. A third-party scheduler may kick off several
PowerCenter batches and sessions but within those batches, PowerCenter may have
several sessions defined with dependencies. Thus, PowerCenter is controlling the
dependencies within those batches. In this level of integration, the control is shared
between PowerCenter and a third-party scheduler.
This type of integration is more complex than low level integration because there is
much more interaction between the third-party scheduler and PowerCenter.
However, to reduce the total amount of work required to integrate the third-party
scheduler and PowerCenter, many of the PowerCenter sessions may be left in
batches. This reduces the integration chores because the third-party scheduler is
only communicating with a limited number of PowerCenter batches.
Medium level integration requires Production Support personnel to have a fairly good knowledge of PowerCenter; however, in many companies Production Support personnel are knowledgeable only about the company's standard scheduler. Therefore, one significant disadvantage of this level of integration is that if the batch
fails at some point, the Production Support personnel may not be able to determine
the exact breakpoint. They are probably able to determine the general area, but not
necessarily the specific session. Thus, the production support burden is shared
between the Project Development team and the Production Support team.
High Level
High level integration is when a third-party scheduler has full control of scheduling
and kicks off all PowerCenter sessions. Because the PowerCenter sessions are not
part of any batches, the third-party scheduler controls all dependencies among the
sessions.
This type of integration is the most complex to implement because there are many
more interactions between the third-party scheduler and PowerCenter. The third-
party scheduler controls all dependencies between the sessions.
High level integration allows the Production Support personnel to have only limited
knowledge of PowerCenter. Because the Production Support personnel in many
companies are knowledgeable only about the company’s standard scheduler, one of
the main advantages of this level of integration is that if the batch fails at some
point, the Production Support personnel are usually able to determine the exact
breakpoint. Thus, the production support burden lies with the Production Support
team.
Challenge
Description
If the session is waiting on its source file to be FTP’ed from another server, the FTP
process should be scripted so that it creates the indicator file upon successful
completion of the source file FTP. This file can be an empty, or dummy, file. The
mere existence of the dummy file is enough to indicate that the session should
start. The dummy file will be removed immediately after it is located. It is,
therefore, essential that you do not use your flat file source as the indicator file.
Challenge
Description
The following paragraphs describe several of the key tasks involved in managing the
repository:
Two back-up methods are advisable for repository backup: (1) either the PowerCenter Repository Manager or the 'pmrep' command line utility, and (2) the traditional database backup method. The native PowerCenter backup is required; the database backup is not essential, but Informatica recommends using both methods. If database corruption occurs, the native PowerCenter backup provides a clean backup that can be restored to a new database.
Similarly, if folder copies are taking an unusually long time, the OPB_SESSION_LOG
and/or OPB_SESS_TARG_LOG tables may be being transferred. Removing
unnecessary data from these tables will expedite the repository backup process as
well as the folder copy operation. To determine which logs to eliminate, execute the
following select statement to retrieve the sessions with the most entries in
OPB_SESSION_LOG:
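The statement is not reproduced in this copy; a sketch like the following (the SESSION_ID column name is an assumption and should be checked against your repository version) returns the sessions with the most log entries:

select session_id, count(*) as log_entries
from opb_session_log
group by session_id
order by log_entries desc;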
1. Copy the original session, then delete original session. When a session is
copied, the entries in the repository tables do not duplicate. When you delete the
session, the entries in the tables are deleted, eliminating all rows for an individual
session.
2. Log into Repository Manager and expand the sessions in a particular folder.
When you select one of the sessions, all of the session logs will appear on the right-
hand side of the screen. You can manually delete any of these by highlighting a
particular log, then selecting Delete from the Edit menu. Respond 'Yes' when the system prompts with the question "Delete these logs from the Repository?"
pmrep Utility
The pmrep utility has two modes: command line and interactive mode.
• Command line mode lets you execute pmrep commands from the Windows command line. This mode invokes and exits pmrep each time a command is issued. Command line mode is useful for batch files or scripts.
• Interactive mode invokes pmrep and allows you to issue a series of
commands from a pmrep prompt without exiting after each command.
d:\PROGRA~1\INFORM~1\pmrep cleanup
Name>…
The section of the registry that you can import and export contains the following
repository connection information:
• Repository name
• Database username and password (must be in US-ASCII)
• Repository username and password (must be in US-ASCII)
• ODBC data source name (DSN)
The registry does not include the ODBC data source itself. If you import a registry containing a DSN that does not exist on that client system, the connection fails. For each imported DSN, be sure to have the appropriate data source configured under the exact same name as in the imported registry.
Challenge
Description
While there are many types of hardware and many ways to configure a clustered
environment, this example is based on the following hardware and software
characteristics:
One of the Sun 4500s serves as the primary data integration server, while the other server in the cluster is the secondary server. Under normal operations, the PowerCenter Server ‘thinks’ it is physically hosted by the primary server and uses the resources of the primary server, although it is actually installed on its own shared disk.
When the primary server goes down, the Sun high-availability software automatically
starts the PowerCenter Server on the secondary server using the basic auto
start/stop scripts that are used in many UNIX environments to automatically start
the PowerCenter Server whenever a host is rebooted. In addition, the Sun high-
availability software changes the ownership of the disk where the PowerCenter
Server is installed from the primary server to the secondary server. To facilitate this,
a logical IP address can be created specifically for the PowerCenter Server. This
logical IP address is specified in the pmserver.cfg file instead of the physical IP
addresses of the servers. Thus, only one pmserver.cfg file is needed.
Challenge
Description
2. Monitor the server. By running a session and monitoring the server, it should
immediately be apparent if the system is paging memory or if the CPU load is too
high for the number of available processors. If the system is paging, correcting the
system to prevent paging (e.g., increasing the physical memory available on the
machine) can greatly improve performance.
3. Use the performance details. Re-run the session and monitor the performance
details. This time look at the details and watch for the Buffer Input and Outputs for
the sources and targets.
4. Tune the source system and target system based on the performance details.
When the source and target are optimized, re-run the session to determine the
impact of the changes.
5. Only after the server, source, and target have been tuned to their peak
performance should the mapping be analyzed for tuning.
6. After the tuning achieves a desired level of performance, the DTM should be the
slowest portion of the session details. This indicates that the source data is arriving
quickly, the target is inserting the data quickly, and the actual application of the
business rules is the slowest portion. This is the optimum desired performance. Only
minor tuning of the session can be conducted at this point and usually has only a
minor effect.
Challenge
Oracle
Oracle offers many tools for tuning an Oracle instance. Most DBAs are already
familiar with these tools, so we’ve included only a short description of some of the
major ones here.
• V$ Views
• Explain Plan
Explain Plan, SQL Trace, and TKPROF are powerful tools for revealing
bottlenecks and developing a strategy to avoid them.
Explain Plan allows the DBA or developer to determine the execution path of a block of SQL code. The SQL in a source qualifier or in a lookup that is running slowly can be explained in this way to reveal a poor execution path, as in the sketch that follows.
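For instance, a statement taken from a source qualifier can be explained and the plan then retrieved from the plan table roughly as follows (this is only a sketch: it assumes the PLAN_TABLE has been created with Oracle’s utlxplan.sql script, and the table and statement id shown are illustrative):

EXPLAIN PLAN SET STATEMENT_ID = 'SQ_ITEMS' FOR
SELECT ITEM_ID, ITEM_NAME, PRICE FROM ITEMS;

SELECT LPAD(' ', 2 * (LEVEL - 1)) || OPERATION || ' ' || OPTIONS || ' ' || OBJECT_NAME
FROM PLAN_TABLE
START WITH ID = 0 AND STATEMENT_ID = 'SQ_ITEMS'
CONNECT BY PRIOR ID = PARENT_ID AND STATEMENT_ID = 'SQ_ITEMS';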
• SQL Trace
• TKPROF
The output of SQL Trace is provided in a dump file that is difficult to read.
TKPROF formats this dump file into a more understandable report.
‘UTLESTAT’ ends the statistics collection process and generates an output file
called ‘report.txt.’ This report should give the DBA a fairly complete idea
about the level of usage the database experiences and reveal areas that
should be addressed.
Disk I/O
Disk I/O at the database level provides the highest level of performance gain in most
systems. Database files should be separated and identified. Rollback files should be
separated onto their own disks because they have significant disk I/O. Co-locate
tables that are heavily used with tables that are rarely used to help minimize disk
contention. Separate indexes so that when queries run indexes and tables, they are
not fighting for the same resource. Also be sure to implement disk striping; this, or
RAID technology can help immensely in reducing disk contention. While this type of
planning is time consuming, the payoff is well worth the effort in terms of
performance gains.
Memory and processing configuration is done in the init.ora file. Because each
database is different and requires an experienced DBA to analyze and tune it
for optimal performance, a standard set of parameters to optimize
PowerCenter is not practical and will probably never exist.
The settings presented here are those used in a 4-CPU AIX server running
Oracle 7.3.4 set to make use of the parallel query option to facilitate parallel
processing of queries and indexes. We’ve also included the descriptions and
documentation from Oracle for each setting to help DBAs of other (non-
Oracle) systems to determine what the commands do in the Oracle
environment to facilitate setting their native database commands and settings
in a similar fashion.
• HASH_AREA_SIZE = 16777216
o Default value: 2 times the value of SORT_AREA_SIZE
o Range of values: any integer
o This parameter specifies the maximum amount of memory, in
bytes, to be used for the hash join. If this parameter is not set,
its value defaults to twice the value of the SORT_AREA_SIZE
parameter.
o The value of this parameter can be changed without shutting
down the Oracle instance by using the ALTER SESSION
command. (Note: ALTER SESSION refers to the Database
Administration command issued at the svrmgr command
prompt.)
• Optimizer_percent_parallel=33
• parallel_max_servers=40
o Used to enable parallel query.
o Initially not set on Install.
o Maximum number of query servers or parallel recovery
processes for an instance.
• Parallel_min_servers=8
o Used to enable parallel query.
• SORT_AREA_SIZE=8388608
o Default value: Operating system-dependent
o Minimum value: the value equivalent to two database blocks
o This parameter specifies the maximum amount, in bytes, of
Program Global Area (PGA) memory to use for a sort. After the
sort is complete and all that remains to do is to fetch the rows
out, the memory is released down to the size specified by
SORT_AREA_RETAINED_SIZE. After the last row is fetched out,
all memory is freed. The memory is released back to the PGA,
not to the operating system.
o Increasing SORT_AREA_SIZE size improves the efficiency of
large sorts. Multiple allocations never exist; there is only one
memory area of SORT_AREA_SIZE for each user process at any
time.
o The default is usually adequate for most database operations.
However, if very large indexes are created, this parameter may
need to be adjusted. For example, if one process is doing all
database access, as in a full database import, then an increased
value for this parameter may speed the import, particularly the
CREATE INDEX statements.
On an HP/UX server with Oracle as a target (i.e., PMServer and Oracle target on
same box), using an IPC connection can significantly reduce the time it takes to build
a lookup cache. In one case, a fact mapping that was using a lookup to get five
columns (including a foreign key) and about 500,000 rows from a table was taking
19 minutes. Changing the connection type to IPC reduced this to 45 seconds. In
another mapping, the total time decreased from 24 minutes to 8 minutes for ~120-
130 bytes/row, 500,000 row write (array inserts), primary key with unique index in
place. Performance went from about 2Mb/min (280 rows/sec) to about 10Mb/min
(1360 rows/sec).
A normal tcp (network tcp/ip) connection in tnsnames.ora would look like this:
DW.armafix =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL =TCP)
(HOST = armafix)
(PORT = 1526)
)
)
(CONNECT_DATA=(SID=DW)
)
)
An entry for an IPC connection to the same database would look like this:
DWIPC.armafix =
(DESCRIPTION =
(ADDRESS =
(PROTOCOL=ipc)
(KEY=DW)
)
(CONNECT_DATA=(SID=DW))
)
Dropping and reloading indexes during very large loads to a data warehouse
is often recommended but there is seldom any easy way to do this. For
example, writing a SQL statement to drop each index, then writing another
SQL statement to rebuild it can be a very tedious process.
Run the following to generate output to disable the foreign keys in the data warehouse (a typical form of the statement is shown; adjust it for your schema):

SELECT 'ALTER TABLE ' || TABLE_NAME || ' DISABLE CONSTRAINT ' || CONSTRAINT_NAME || ';'
FROM USER_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'R';

Dropping or disabling primary keys will also speed loads. Run the results of similar SQL statements after disabling the foreign key constraints, first for the primary keys:

SELECT 'ALTER TABLE ' || TABLE_NAME || ' DISABLE CONSTRAINT ' || CONSTRAINT_NAME || ';'
FROM USER_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'P';

and then for the unique constraints:

SELECT 'ALTER TABLE ' || TABLE_NAME || ' DISABLE CONSTRAINT ' || CONSTRAINT_NAME || ';'
FROM USER_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'U';

Save the results in a single file and name it something like ‘DISABLE.SQL’.
Re-enable constraints in the reverse order that you disabled them. Re-enable
the unique constraints first, and re-enable primary keys before foreign keys.
TIP: Dropping or disabling foreign keys will often boost loading, but this also
slows queries (such as lookups) and updates. If you do not use lookups or
updates on your target tables you should get a boost by using this SQL
statement to generate scripts. If you use lookups and updates (especially on
large tables), you can exclude the index that will be used for the lookup from
your script. You may want to experiment to determine which method is
faster.
SQL*Loader
• Loader Options
SQL*Loader is a bulk loader utility used for moving data from external files
into the Oracle database. To use the Oracle bulk loader, you need a control
file, which specifies how data should be loaded into the database. SQL*Loader
has several options that can improve data loading performance and are easy
to implement. These options are:
• DIRECT
• PARALLEL
• SKIP_INDEX_MAINTENANCE
• UNRECOVERABLE
LOAD DATA
INFILE <dataFile>
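-- (The remainder of a minimal control file might read as follows; the table name,
--  column names, and delimiter are illustrative only.)
APPEND
INTO TABLE TARGET_TABLE
FIELDS TERMINATED BY ','
(ITEM_ID, ITEM_NAME, PRICE)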
The DIRECT path obtains an exclusive lock on the table being loaded and writes the data blocks directly to the database files, bypassing the SQL INSERT processing used by a conventional load.
The PARALLEL option can be used with the DIRECT option when loading
multiple partitions of the same table. If the partitions are located on separate
disks, the performance time can be reduced to that of loading a single
partition.
The UNRECOVERABLE option in the control file turns off redo log writes during a DIRECT path load. Recoverability should not be an issue, since the data file still exists.
Keep in mind, however, that b-tree indexing is still the Oracle default. If you
don’t specify an index type when creating an index, Oracle will default to b-
tree. Also note that for certain columns, bitmaps will be smaller and faster to
create than a b-tree index on the same column.
The relationship between Fact and Dimension keys is another example of low
cardinality. With a b-tree index on the Fact table, a query processes by
joining all the Dimension tables in a Cartesian product based on the WHERE
clause, then joins back to the Fact table. With a bitmapped index on the Fact
table, a ‘star query’ may be created that accesses the Fact table first followed
by the Dimension table joins, avoiding a Cartesian product of all possible
Dimension attributes. This ‘star query’ access method is only used if the
STAR_TRANSFORMATION_ENABLED parameter is equal to TRUE in the init.ora
file and if there are single column bitmapped indexes on the fact table
foreign keys. Creating bitmap indexes is similar to creating b-tree indexes. To
specify a bitmap index, add the word ‘bitmap’ between ‘create’ and ‘index’. All
other syntax is identical.
• Bitmap indexes: CREATE BITMAP INDEX index_name ON table_name (column_name);
• B-tree indexes: CREATE INDEX index_name ON table_name (column_name);
To enable bitmap indexes, you must set the required items in the instance initialization file; at a minimum, STAR_TRANSFORMATION_ENABLED = TRUE (as noted above). Refer to the Oracle documentation for your release for the complete list of required settings.
Also note that the parallel query option must be installed in order to create
bitmap indexes. If you try to create bitmap indexes without the parallel query
option, a syntax error will appear in your SQL statement; the keyword
‘bitmap’ won't be recognized.
• TIP: To check if the parallel query option is installed, start and log into
SQL*Plus. If the parallel query option is installed, the word ‘parallel’ appears
in the banner text.
Index Statistics
• Table Method
Index statistics are used by Oracle to determine the best method to access
tables and should be updated periodically as normal DBA procedures. The
following will improve query results on Fact and Dimension tables (including
appending and updating records) by updating the table and index statistics
for the data warehouse:
The following SQL statement can be used to generate ‘analyze’ commands for all of the tables in the database (a typical form is shown):
SELECT 'ANALYZE TABLE ' || TABLE_NAME || ' COMPUTE STATISTICS;'
FROM USER_TABLES;
The following SQL statement can be used to generate ‘analyze’ commands for the indexes in the database:
SELECT 'ANALYZE INDEX ' || INDEX_NAME || ' COMPUTE STATISTICS;'
FROM USER_INDEXES;
• Schema Method
Alternatively, statistics for an entire schema can be updated with a single call, for example:
EXECUTE DBMS_UTILITY.ANALYZE_SCHEMA('BDB', 'COMPUTE');
In this example, BDB is the schema for which the statistics should be updated. Note that the DBA must grant the execute privilege on dbms_utility to the database user executing this command.
TIP: These SQL statements can be very resource intensive, especially for
very large tables. For this reason, we recommend running them at off-peak
times when no other process is using the database. If you find the exact
computation of the statistics consumes too much time, it is often acceptable
to estimate the statistics rather than compute them. Use ‘estimate’ instead of
‘compute’ in the above examples.
Parallelism
Hints are used to define parallelism at the SQL statement level. The following example demonstrates how to utilize four processors:
SELECT /*+ PARALLEL(EMP, 4) */ * FROM EMP;
TIP: When using a table alias in the SQL Statement, be sure to use this alias
in the hint. Otherwise, the hint will not be used, and you will not receive an
error message.
In a statement such as
SELECT /*+ PARALLEL(EMP, 4) */ * FROM EMP A
the parallel hint will not be used because the alias “A” is used for table EMP. The correct way is:
SELECT /*+ PARALLEL(A, 4) */ * FROM EMP A;
Parallelism can also be defined at the table and index level. The following example demonstrates how to set a table’s degree of parallelism to four for all eligible SQL statements on this table:
ALTER TABLE EMP PARALLEL (DEGREE 4);
Ensure that Oracle is not contending with other processes for these resources
or you may end up with degraded performance due to resource contention.
Additional Tips
You can execute queries as both pre- and post-session commands. For a UNIX environment, the format of the command is:
sqlplus -s <user>/<password>@<database> @<script file>
For example, to execute the ENABLE.SQL file created earlier (assuming the
data warehouse is on a database named ‘infadb’), you would execute the
following as a post-session command:
sqlplus -s pmuser/pmuser@infadb @
/informatica/powercenter/Scripts/ENABLE.SQL
In some environments, this may be a security issue since both username and
password are hard-coded and unencrypted. To avoid this, use the operating
system’s authentication to log onto the database instance.
In the following example, the Informatica id “pmuser” is used to log onto the Oracle database using operating system authentication. Create the Oracle user with a statement of the following form (this assumes the default OS_AUTHENT_PREFIX of OPS$):
CREATE USER OPS$PMUSER IDENTIFIED EXTERNALLY;
• DRIVING_SITE ‘Hint’
If the source and target are on separate instances, the join issued by the Source Qualifier transformation may, by default, be executed on the target instance.
For example, you want to join two source tables (A and B) together, which
may reduce the number of selected rows. However, Oracle fetches all of the
data from both tables, moves the data across the network to the target
instance, then processes everything on the target instance. If either data
source is large, this causes a great deal of network traffic. To force the Oracle
optimizer to process the join on the source instance, use the ‘Generate SQL’
option in the source qualifier and include the ‘driving_site’ hint in the SQL
statement as:
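A minimal sketch follows; the table names and the SOURCE_DB database link are hypothetical, and the hint names the alias of a table residing on the instance where the join should be processed:

SELECT /*+ DRIVING_SITE(A) */ A.ORDER_ID, B.CUSTOMER_NAME
FROM ORDERS@SOURCE_DB A, CUSTOMERS@SOURCE_DB B
WHERE A.CUST_ID = B.CUST_ID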
SQL Server
Description
Managing random access memory (RAM) buffer cache is a major consideration in any
database server environment. Accessing data in RAM cache is much faster than
accessing the same information from disk. If database I/O (input/output operations
to the physical disk subsystem) can be reduced to the minimal required set of data
and index pages, these pages will stay in RAM longer. Too much unneeded data and
index information flowing into buffer cache quickly pushes out valuable pages. The
primary goal of performance tuning is to reduce I/O so that buffer cache is best
utilized.
• Max async I/O is used to specify the number of simultaneous disk I/O operations that SQL Server can submit to the operating system. Note that this setting is automated in SQL Server 2000.
• SQL Server allows several selectable models for database recovery,
these include:
- Full Recovery
- Bulk-Logged Recovery
- Simple Recovery
Use this option to specify the threshold where SQL Server creates and executes
parallel plans. SQL Server creates and executes a parallel plan for a query only when
the estimated cost to execute a serial plan for the same query is higher than the
value set in cost threshold for parallelism. The cost refers to an estimated elapsed
time in seconds required to execute the serial plan on a specific hardware
configuration. Only set cost threshold for parallelism on symmetric multiprocessors
(SMP).
Use this option to limit the number of processors (a max of 32) to use in parallel plan
execution. The default value is 0, which uses the actual number of available CPUs.
Set this option to 1 to suppress parallel plan generation. Set the value to a number greater than 1 to restrict the maximum number of processors used by a single query execution.
Use this option to specify whether SQL Server should run at a higher scheduling priority than other processes on the same computer. If you set this option to 1, SQL Server runs at a priority base of 13. The default is 0, which is a priority base of 7.
Use this option to reserve physical memory space for SQL Server that is equal to the server memory setting. The server memory setting is configured automatically by SQL Server based on workload and available resources; it varies dynamically between min server memory and max server memory. Setting ‘set working set size’ means the operating system will not attempt to swap out SQL Server pages, even if they could be used more readily by another process when SQL Server is idle.
For SQL Server databases that are stored on multiple disk drives,
performance can be improved by partitioning the data to increase the amount
of disk I/O parallelism.
• Transaction log
• Tempdb
• Database
• Tables
• Non-clustered indexes
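As an illustrative sketch (database, file, and filegroup names are hypothetical), a heavily used table can be placed on its own filegroup, and therefore on its own disk:

ALTER DATABASE SalesDW ADD FILEGROUP FG_FACT
GO
ALTER DATABASE SalesDW
ADD FILE (NAME = FactData1, FILENAME = 'E:\MSSQL\Data\FactData1.ndf', SIZE = 500MB)
TO FILEGROUP FG_FACT
GO
CREATE TABLE dbo.FACT_SALES (SALE_ID int NOT NULL, SALE_AMT money NULL) ON FG_FACT
GO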
Two mechanisms exist inside SQL Server to address the need for bulk movement of
data. The first mechanism is the bcp utility. The second is the BULK INSERT
statement.
• Bcp is a command prompt utility that copies data into or out of SQL Server.
• BULK INSERT is a Transact-SQL statement that can be executed from within
the database environment. Unlike bcp, BULK INSERT can only pull data into
SQL Server. An advantage of using BULK INSERT is that it can copy data into
instances of SQL Server using a Transact-SQL statement, rather than having
to shell out to the command prompt.
TIP: Both of these mechanisms enable you to exercise control over the batch size.
Unless you are working with small volumes of data, it is good to get in the habit of
specifying a batch size for recoverability reasons. If none is specified, SQL Server
commits all rows to be loaded as a single batch. For example, you attempt to load
1,000,000 rows of new data into a table. The server suddenly loses power just as it
finishes processing row number 999,999. When the server recovers, those 999,999
rows will need to be rolled back out of the database before you attempt to reload the
data. By specifying a batch size of 10,000 you could have saved significant recovery
time, because SQL Server would have only had to rollback 9999 rows instead of
999,999.
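As a sketch of the above (table name, file path, and terminators are illustrative), a BULK INSERT with an explicit batch size and the TABLOCK option might look like:

BULK INSERT dbo.FACT_SALES
FROM 'D:\loads\fact_sales.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', BATCHSIZE = 10000, TABLOCK)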
• Remove indexes
• Use Bulk INSERT or bcp
• Parallel load using partitioned data files into partitioned tables
• Run one load stream for each available CPU
• Set Bulk-Logged or Simple Recovery model
• Use TABLOCK option
• Create indexes
• Switch to the appropriate recovery model
• Perform backups
Teradata
Description
Teradata offers several bulk load utilities including FastLoad, MultiLoad, and TPump.
FastLoad is used for loading inserts into an empty table. One of TPump’s advantages
is that it does not lock the table that is being loaded. MultiLoad supports inserts,
updates, deletes, and “upserts” to any table. This best practice will focus on
MultiLoad since PowerCenter 5.x can auto-generate MultiLoad scripts and invoke the
MultiLoad utility per PowerCenter target.
Tuning MultiLoad
There are many aspects to tuning a Teradata database. With PowerCenter 5.x
several aspects of tuning can be controlled by setting MultiLoad parameters to
maximize write throughput. Other areas to analyze when performing a MultiLoad job
include estimating space requirements and monitoring MultiLoad performance.
Note: In PowerCenter 5.1, the Informatica server transfers data via a UNIX named
pipe to MultiLoad, whereas in PowerCenter 5.0, the data is first written to file.
MultiLoad Parameters
Always estimate the final size of your MultiLoad target tables and make sure the
destination has enough space to complete your MultiLoad job. In addition to the
space that may be required by target tables, each MultiLoad job needs permanent
space for:
• Work tables
• Error tables
• Restart Log table
Note: Spool space cannot be used for MultiLoad work tables, error tables, or the
restart log table. Spool space is freed at each restart. By using permanent space for
the MultiLoad tables, data is preserved for restart operations after a system failure.
Work tables, in particular, require a lot of extra permanent space. Also remember to
account for the size of error tables since error tables are generated for each target
table.
Use the following formula to prepare the preliminary space estimate for one target
table, assuming no fallback protection, no journals, and no non-unique secondary
indexes:
2. Use the Teradata RDBMS Query Session utility to monitor the progress of the
MultiLoad job.
3. Check for locks on the MultiLoad target tables and error tables.
4. Check the DBC.Resusage table for problem areas, such as data bus or CPU
capacities at or near 100 percent for one or more processors.
6. Check the size of the error tables. Write operations to the fallback error tables are
performed at normal SQL speed, which is much slower than normal MultiLoad tasks.
7. Verify that the primary index is unique. Non-unique primary indexes can cause
severe MultiLoad performance problems.
Challenge
The following tips have proven useful in performance tuning UNIX-based machines.
While some of these tips will be more helpful than others in a particular environment,
all are worthy of consideration.
Description
Running ps -axu
• Are there any processes waiting for disk access or for paging? If so check the
I/O and memory subsystems.
• What processes are using most of the CPU? This may help you distribute the
workload better.
• What processes are using most of the memory? This may help you distribute
the workload better.
• Does ps show that your system is running many memory-intensive jobs? Look for jobs with a large resident set size (RSS) or a high storage integral.
Use vmstat or sar to check swapping actions. Check the system to ensure that
swapping does not occur at any time during the session processing. By using sar 5
10 or vmstat 1 10, you can get a snapshot of page swapping. If page swapping does
occur at any time, increase memory to prevent swapping. Swapping, on any
database system, causes a major performance decrease and increased I/O. On a
memory-starved and I/O-bound server, this can effectively shut down the
PowerCenter process and any databases running on the server.
Some swapping will normally occur regardless of the tuning settings. This occurs
because some processes use the swap space by their design. To check swap space
availability, use pstat and swap. If the swap space is too small for the intended
applications, it should be increased.
If memory seems to be the bottleneck of the system, try the following remedial steps:
• Reduce the size of the buffer cache, if your system has one, by decreasing
BUFPAGES. The buffer cache is not used in system V.4 and SunOS 4.X
systems. Making the buffer cache smaller will hurt disk I/O performance.
• If you have statically allocated STREAMS buffers, reduce the number of large
(2048- and 4096-byte) buffers. This may reduce network performance, but
netstat-m should give you an idea of how many buffers you really need.
• Reduce the size of your kernel’s tables. This may limit the system’s capacity
(number of files, number of processes, etc.).
• Try running jobs requiring a lot of memory at night. This may not help the
memory problems, but you may not care about them as much.
• Try running jobs requiring a lot of memory in a batch queue. If only one
memory-intensive job is running at a time, your system may perform
satisfactorily.
• Try to limit the time spent running sendmail, which is a memory hog.
• If you don’t see any significant improvement, add more memory.
Use iostat to check I/O load and utilization, as well as CPU load. iostat can be used to monitor the I/O load on the disks on the UNIX server, which permits monitoring the load on specific disks. Take note of how fairly disk activity is distributed among the system disks. If it is not, are the most active disks also the fastest disks?
Run sadp to get a seek histogram of disk activity. Is activity concentrated in one
area of the disk (good), spread evenly across the disk (tolerable), or in two well-
defined peaks at opposite ends (bad)?
• Reorganize your file systems and disks to distribute I/O activity as evenly as
possible.
• Using symbolic links helps to keep the directory structure the same
throughout while still moving the data files that are causing I/O contention.
• Use your fastest disk drive and controller for your root filesystem; this will
almost certainly have the heaviest activity. Alternatively, if single-file
throughput is important, put performance-critical files into one filesystem and
use the fastest drive for that filesystem.
• Put performance-critical files on a filesystem with a large block size: 16KB or
32KB (BSD).
If your system has a disk capacity problem and is constantly running out of disk space, try the following actions:
• Write a find script that detects old core dumps, editor backup and auto-save files, and other trash, and deletes them automatically (see the example after this list). Run the script through cron.
• If you are running BSD UNIX or V.4, use the disk quota system to prevent
individual users from gathering too much storage.
• Use a smaller block size on file systems that are mostly small files (e.g.,
source code files, object modules, and small data files).
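As an example of the find script mentioned above (paths, file patterns, and the age threshold are illustrative):

find /home -type f \( -name core -o -name '*.bak' -o -name '*~' \) -mtime +7 -exec rm -f {} \;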
Use sar -u to check for CPU loading. This provides the %usr (user), %sys (system), %wio (waiting on I/O), and %idle (percent of idle time). A target goal should be %usr + %sys = 80 and %wio = 10, leaving %idle at 10. If %wio is higher, the disk and I/O contention should be investigated to eliminate the I/O bottleneck on the UNIX server. If the system shows a heavy load of %sys together with a high %idle, this is indicative of memory contention and swapping/paging problems. In this case, it is necessary to make memory changes to reduce the load on the system server.
When you run iostat 5 (as above), also observe the CPU idle time. Is the idle time always 0, without letup? It is good for the CPU to be busy, but if it is always busy 100 percent of the time, work must be piling up somewhere. This points to CPU overload.
If collisions and network hardware are not a problem, figure out which system
appears to be slow. Use spray to send a large burst of packets to the slow system.
If the number of dropped packets is large, the remote system most likely cannot
respond to incoming data fast enough. Look to see if there are CPU, memory or disk
I/O problems on the remote system. If not, the system may just not be able to
tolerate heavy network workloads. Try to reorganize the network so that this system
isn’t a file server.
A large number of dropped packets may also indicate data corruption. Run netstat -s on the remote system, then spray the remote system from the local system and run netstat -s again. If the increase in UDP socket full drops (as indicated by netstat) is equal to or greater than the number of dropped packets that spray reports, the remote system is a slow network server. If the increase in socket full drops is less than the number of dropped packets, look for network errors.
Run nfsstat and look at the client RPC data. If the retrans field is more than 5 percent of calls, the network or an NFS server is overloaded. If timeout is high, at least one NFS server is overloaded, the network may be faulty, or one or more servers may have crashed. If badxid is roughly equal to timeout, at least one NFS server is overloaded. If timeout and retrans are high, but badxid is low, some part of the network between the NFS client and server is overloaded and dropping packets.
Try to prevent users from running I/O-intensive programs across the network. The grep utility is a good example of an I/O-intensive program. Instead, have users log into the remote system to do their work.
Reorganize the computers and disks on your network so that as many users as
possible can do as much work as possible on a local system.
If you are short of STREAMS data buffers and are running Sun OS 4.0 or System
V.3 (or earlier), reconfigure the kernel with more buffers.
Avoid raw devices. In general, proprietary file systems from the UNIX vendor are
most efficient and well suited for database work when tuned properly. Be sure to
check the database vendor documentation to determine the best file system for the
specific machine. Typical choices include: s5, The UNIX System V File System; ufs,
The “UNIX File System” derived from Berkeley (BSD); vxfs, The Veritas File System;
and lastly raw devices that, in reality are not a file system at all.
Finally, when tuning UNIX environments, the general rule of thumb is to tune the
server for a major database system. Most database systems provide a special tuning
supplement for each specific version of UNIX. For example, there is a specific IBM
Redbook for Oracle 7.3 running on AIX 4.3. Because PowerCenter processes data in
a similar fashion as SMP databases, by tuning the server to support the database,
you also tune the system for PowerCenter.
Challenge
The following tips have proven useful in performance tuning NT-based machines.
While some are likely to be more helpful than others in any particular environment,
all are worthy of consideration.
Note: Tuning is essentially the same for both NT and 2000 based systems, with
differences for Windows 2000 noted in the last section.
Description
When using the Performance Monitor, look for these performance indicators to
check:
Processor: percent processor time. For SMP environments you need to add one
monitor for each CPU. If the system is “maxed out” (i.e. running at 100 percent for
all CPUs), it may be necessary to add processing power to the server. Unfortunately,
NT scalability is quite limited, especially in comparison with UNIX environments. Also
keep in mind NT’s inability to split processes across multiple CPUs. Thus, one CPU
may be at 100% utilization while the other CPUs are at 0% utilization. There is
currently no solution for optimizing this situation, although Microsoft is working on
the problem.
Physical disks: percent time. This is the best place to tune database performance
within NT environments. By analyzing the disk I/O, the load on the database can be
leveled across multiple disks. High I/O settings indicate possible contention for I/O;
files should be moved to less utilized disk devices to optimize overall performance.
Physical disks: queue length. This setting is used to determine the number of
users sitting idle waiting for access to the same disk device. If this number is greater
than two, moving files to less frequently used disk devices should level the load of
the disk device.
Load reasonableness. Assume that some software will not be well coded, and
some background processes, such as a mail server or web server running on the
same machine, can potentially starve the CPUs on the machine. Off-loading CPU
hogs may be the only recourse.
Device Drivers. The device drivers for some types of hardware are notorious for
wasting CPU clock cycles. Be sure to get the latest drivers from the hardware vendor
to minimize this problem.
I/O Optimization. This is, by far, the best tuning option for database applications
in the NT environment. If necessary, level the load across the disk devices by
moving files. In situations where there are multiple controllers, be sure to level the
load across the controllers too.
Finally, on NT servers, be sure to implement disk striping to split single data files across multiple disk drives and take advantage of RAID (Redundant Arrays of Inexpensive Disks) technology. Also increase the priority of the disk devices on the NT server. NT, by default, sets the disk device priority low. Change the disk priority setting in the Registry at service\lanman\server\parameters and add a key for ThreadPriority of type DWORD with a value of 2.
Windows 2000 provides the following tools (accessible under the Control
Panel/Administration Tools/Performance) for monitoring resource usage on your
computer:
• System Monitor
• Performance Logs and Alerts
These Windows 2000 monitoring tools enable you to analyze usage and detect
bottlenecks at the disk, memory, processor, and network level.
The System Monitor displays a graph which is flexible and configurable. You can
copy counter paths and settings from the System Monitor display to the Clipboard
and paste counter paths from Web pages or other sources into the System Monitor
display. The System Monitor is portable. This is useful in monitoring other systems
that require administration. Typing perfmon.exe at the command prompt causes the
system to start System Monitor, not Performance Monitor.
The Performance Logs and Alerts tool provides two types of performance-related
logs—counter logs and trace logs—and an alerting function. Counter logs record
sampled data about hardware resources and system services based on performance
objects and counters in the same manner as System Monitor. Therefore they can be
viewed in System Monitor. Data in counter logs can be saved as comma-separated or
tab-separated files that are easily viewed with Excel.
Trace logs collect event traces that measure performance statistics associated with
events such as disk and file I/O, page faults, or thread activity.
The alerting function allows you to define a counter value that will trigger actions such as sending a network message, running a program, or starting a log. Alerts are useful when you are not actively watching a counter but want to be notified when its value crosses a defined threshold.
Note: You must have Full Control access to a subkey in the registry in order to create
or modify a log configuration. (The subkey is
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SysmonLog\Log_Queries.)
The predefined log settings under Counter Logs, named System Overview, are configured to create a binary log that, after manual start-up, updates every 15 seconds and logs continuously until it reaches its maximum size. If you start
logging with these settings, data is saved to the Perflogs folder on the root directory
and includes the counters: Memory\ Pages/sec, PhysicalDisk(_Total)\Avg. Disk
Queue Length, and Processor(_Total)\ % Processor Time.
If you want to create your own log setting, right-click one of the log types.
Some other useful counters include Physical Disk: Reads/sec and Writes/sec and
Memory: Available Bytes and Cache Bytes.
Challenge
Description
Analyze mappings for tuning only after you have tuned the system, source and
target for peak performance.
If several mappings use the same data source, consider a single-pass reading.
Consolidate separate mappings into one mapping with either a single Source
Qualifier Transformation or one set of Source Qualifier Transformations as the data
source for the separate data flows.
There are a number of ways to optimize lookup transformations that are set up in a mapping.
When caching is enabled, the PowerCenter Server caches the lookup table and
queries the lookup cache during the session. When this option is not enabled, the
PowerCenter Server queries the lookup table on a row-by-row basis.
NOTE: All the tuning options mentioned in this Best Practice assume that memory
and cache sizing for lookups are sufficient to ensure that caches will not page to
disks. Practices regarding memory and cache sizing for Lookup transformations are
covered in Best Practice: Tuning Sessions for Better Performance.
In general, if the lookup table needs less than 300MB of memory, lookup caching
should be enabled.
A better rule of thumb than memory size is to determine the ‘size’ of the potential lookup cache relative to the number of rows expected to be processed. Consider the following example.
In Mapping X, the source and lookup tables contain the following numbers of records:
ITEMS (source): 5,000 records
MANUFACTURER: 200 records
DIM_ITEMS: 100,000 records
Consider the case where DIM_ITEMS is the lookup table. If the lookup table is
cached, it will result in 105,000 total disk reads to build and execute the lookup. If
the lookup table is not cached, then the disk reads would total 10,000. In this case
the number of records in the lookup table is not small in comparison with the
number of times the lookup will be executed. Thus the lookup should not be cached.
Use the following eight step method to determine if a lookup should be cached:
(LS*NRS*CRS)/(CRS-NRS) = X
Where X is the breakeven point. If the expected number of source records is less than X, it is better not to cache the lookup. If the expected number of source records is more than X, it is better to cache the lookup.
Thus, if the source has less than 66,603 records, the lookup should not be
cached. If it has more than 66,603 records, then the lookup should be
cached.
• Within a specific session run for a mapping, if the same lookup is used
multiple times in a mapping, the PowerCenter Server will re-use the cache for
the multiple instances of the lookup. Using the same lookup multiple times in
the mapping will be more resource intensive with each successive instance. If
multiple cached lookups are from the same table but are expected to return
different columns of data, it may be better to setup the multiple lookups to
bring back the same columns even though not all return ports are used in all
lookups. Bringing back a common set of columns may reduce the number of
disk reads.
• Across sessions of the same mapping, the use of an unnamed persistent
cache allows multiple runs to use an existing cache file stored on the
PowerCenter Server. If the option of creating a persistent cache is set in the
lookup properties, the memory cache created for the lookup during the initial
run is saved to the PowerCenter Server. This can improve performance
because the Server builds the memory cache from cache files instead of the
database. This feature should only be used when the lookup table is not
expected to change between session runs.
• Across different mappings and sessions, the use of a named persistent
cache allows sharing of an existing cache file.
There is an option to use a SQL override in the creation of a lookup cache. Options
can be added to the WHERE clause to reduce the set of records included in the
resulting cache.
NOTE: If you use a SQL override in a lookup, the lookup must be cached.
In the case where a lookup uses more than one lookup condition, set the conditions
with an equal sign first in order to optimize lookup performance.
The PowerCenter Server must query, sort and compare values in the lookup
condition columns. As a result, indexes on the database table should include every
column used in a lookup condition. This can improve performance for both cached
and un-cached lookups.
• In the case of an un-cached lookup, since a SQL statement is created for each row passing into the lookup transformation, performance can be helped by indexing the columns in the lookup condition.
Filtering data as early as possible in the data flow improves the efficiency of a
mapping. Instead of using a Filter Transformation to remove a sizeable number of
rows in the middle or end of a mapping, use a filter on the Source Qualifier or a Filter
Transformation immediately after the source qualifier to improve performance.
Filters or routers should also be used to drop rejected rows from an Update
Strategy transformation if rejected rows do not need to be saved.
Aggregator Transformations often slow performance because they must group data
before processing it.
Use the Sorted Input option in the aggregator. This option requires that data sent
to the aggregator be sorted in the order in which the ports are used in the
aggregator’s group by. The Sorted Input option decreases the use of aggregate
caches. When it is used, the PowerCenter Server assumes all data is sorted by group
and, as a group is passed through an aggregator, calculations can be performed and
information passed on to the next transformation. Without sorted input, the Server must wait for all rows of data before processing aggregate calculations.
Use of Joiner transformations can slow performance because they need additional space in memory at run time to hold intermediate results.
Define the rows from the smaller set of data in the joiner as the Master
rows. The Master rows are cached to memory and the detail records are then
compared to rows in the cache of the Master rows. In order to minimize memory
requirements, the smaller set of data should be cached and thus set as Master.
Use Normal joins whenever possible. Normal joins are faster than outer joins
and the resulting set of data is also smaller.
Use the database to do the join when sourcing data from the same database
schema. Database systems usually can perform the join more quickly than the
Informatica Server, so a SQL override or a join condition should be used when
joining multiple tables from the same database schema.
For the most part, making calls to external procedures slows down a session. If
possible, avoid the use of these Transformations, which include Stored Procedures,
External Procedures and Advanced External Procedures.
2. Copy the mapping and replace half the complex expressions with a constant.
4. Make another copy of the mapping and replace the other half of the complex
expressions with a constant.
This can reduce the number of times a mapping performs the same logic. If the same logic is performed multiple times in a mapping, moving the task upstream may allow the logic to be done just once. For example, a
mapping has five target tables. Each target requires a Social Security Number
lookup. Instead of performing the lookup right before each target, move the lookup
to a position before the data flow splits.
Anytime a function is called it takes resources to process. There are several common
examples where function calls can be reduced or eliminated.
Aggregate function calls can sometimes be reduced. In the case of each aggregate function call, the Informatica Server must search and group the data. For example,
SUM(Column A) + SUM(Column B)
can be rewritten as
SUM(Column A + Column B)
so that only one aggregation is performed.
As another example, an expression that involves nested CONCAT function calls, such as
CONCAT(CONCAT(FIRST_NAME, ‘ ‘), LAST_NAME)
can be rewritten using the || operator instead:
FIRST_NAME || ‘ ‘ || LAST_NAME
Remember that IIF() is a function that returns a value, not just a logical test.
This allows many logical statements to be written in a more compact fashion.
For example, an expression that nests IIFs to test every combination of three flag columns, such as:
IIF(FLG_A = ‘Y’ AND FLG_B = ‘Y’ AND FLG_C = ‘Y’, VAL_A + VAL_B + VAL_C,
IIF(FLG_A = ‘Y’ AND FLG_B = ‘Y’ AND FLG_C = ‘N’, VAL_A + VAL_B, … ))
can be rewritten more compactly as:
IIF(FLG_A = ‘Y’, VAL_A, 0.0) + IIF(FLG_B = ‘Y’, VAL_B, 0.0) + IIF(FLG_C = ‘Y’, VAL_C, 0.0)
The original expression had 8 IIFs, 16 ANDs and 24 comparisons. The optimized expression results in 3 IIFs, 3 comparisons and two additions.
Avoid calculating or testing the same value multiple times. If the same sub-expression is used several times in a transformation, consider making the sub-expression a local variable. The local variable can be used only within the transformation, but calculating the variable only once can speed performance.
The Informatica Server processes numeric operations faster than string operations.
For example, if a lookup is done on a large amount of data on two columns,
EMPLOYEE_NAME and EMPLOYEE_ID, configuring the lookup around EMPLOYEE_ID
improves performance.
When the Informatica Server performs comparisons between CHAR and VARCHAR
columns, it slows each time it finds trailing blank spaces in the row. The Treat CHAR
as CHAR On Read option can be set in the Informatica Server setup so that the
Informatica Server does not trim trailing spaces from the end of CHAR source fields.
When a LOOKUP function is used, the Informatica Server must look up a table in the database. When a DECODE function is used, the lookup values are incorporated into the expression itself, so the Informatica Server does not need to look up a separate table. Thus, when looking up a small set of unchanging values, using DECODE may improve performance.
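For instance, a small, static code set could be translated directly in the expression rather than with a lookup (the codes and descriptions shown are purely illustrative):

DECODE(ITEM_CODE, 'FL', 'Flashlight',
                  'CM', 'Compass',
                  'RG', 'Regulator',
                  'Unknown')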
Challenge
Running sessions is where ‘the pedal hits the metal’. A common misconception is
that this is the area where most tuning should occur. While it is true that various
specific session options can be modified to improve performance, this should not be
the major or only area of focus when implementing performance tuning.
Description
When you have finished optimizing the sources, target database and mappings, you
should review the sessions for performance optimization.
Caches
The greatest area for improvement at the session level usually involves tweaking
memory cache settings. The Aggregator, Joiner, Rank and Lookup Transformations
use caches. Review the memory cache settings for sessions where the mappings
contain any of these transformations.
Because index and data caches are created for each of these transformations, both the index cache and data cache sizes may affect performance, depending on the factors discussed in the following paragraphs.
When the PowerCenter Server creates memory caches, it may also create cache files. Both index and data cache files can be created for the Aggregator, Joiner, Rank, and Lookup transformations in a mapping.
If the PowerCenter Server requires more memory than the configured cache size, it
stores the overflow values in these cache files. Since paging to disk can slow session
performance, try to configure the index and data cache sizes to store the appropriate
amount of data in memory. Refer to Chapter 9: Session Caches in the Informatica
Session and Server Guide for detailed information on determining cache sizes.
The PowerCenter Server writes to the index and data cache files during a session in
the following cases:
When a session is run, the PowerCenter Server writes a message in the session log
indicating the cache file name and the transformation name. When a session
completes, the DTM generally deletes the overflow index and data cache files.
However, index and data files may exist in the cache directory if the session is
configured for either incremental aggregation or to use a persistent lookup cache.
Cache files may also remain if the session does not complete successfully.
If a cache file handles more than 2 gigabytes of data, the PowerCenter Server
creates multiple index and data files. When creating these files, the PowerCenter
Server appends a number to the end of the filename, such as PMAGG*.idx1 and
PMAGG*.idx2. The number of index and data files is limited only by the amount of
disk space available in the cache directory.
• Aggregator Caches
• Joiner Caches
The source with fewer records should be specified as the master source
because only the master source records are read into cache. When a session
is run with a Joiner transformation, the PowerCenter Server reads all the rows
from the master source and builds memory caches based on the master rows.
After the memory caches are built, the PowerCenter Server reads the rows
from the detail source and performs the joins.
Also, the PowerCenter Server automatically aligns all data for joiner caches on
an eight-byte boundary, which helps increase the performance of the join.
• Lookup Caches
You can tweak session properties to increase the number of available memory blocks by adjusting the DTM buffer pool size and the default buffer block size.
To configure these settings, first determine the number of memory blocks the
PowerCenter Server requires to initialize the session. Then you can calculate the
buffer pool size and/or the buffer block size based on the default settings, to create
the required number of session blocks.
If there are XML sources and targets in the mappings, use the number of groups in
the XML source or target in the total calculation for the total number of sources and
targets.
The DTM Buffer Pool Size setting specifies the amount of memory the
PowerCenter Server uses as DTM buffer memory. The PowerCenter Server
uses DTM buffer memory to create the internal data structures and buffer
blocks used to bring data into and out of the Server. When the DTM buffer
memory is increased, the PowerCenter Server creates more buffer blocks,
which can improve performance during momentary slowdowns.
If a session’s performance details show low numbers for your source and
target BufferInput_efficiency and BufferOutput_efficiency counters, increasing
the DTM buffer pool size may improve performance.
Within a session, you may modify the buffer block size by changing it in the Advanced Parameters section. This specifies the size of the memory block used to move data through the pipeline. Each source, each transformation, and each target may have a different row size, which results in different numbers of rows that can fit into one memory block. Row size is determined by the number of ports and their datatypes and precisions.
If there are independent sessions that use separate sources and mappings to
populate different targets, they can be placed in a concurrent batch and run at the
same time.
If there is a complex mapping with multiple sources, you can separate it into several
simpler mappings with separate sources. This enables you to place the sessions for
each of the mappings in a concurrent batch to be run in parallel.
Partitioning Sessions
If large amounts of data are being processed with PowerCenter 5.x, data can be
processed in parallel with a single session by partitioning the source via the source
qualifier. Partitioning allows you to break a single source into multiple sources and to
run each in parallel. The PowerCenter Server will spawn a Read and Write thread for
each partition, thus allowing for simultaneous reading, processing, and writing. Keep
in mind that each partition will compete for the same resources (i.e., memory, disk,
and CPU), so make sure that the hardware and memory are sufficient to support a
parallel session. Also, the DTM buffer pool size is split among all partitions, so it may
need to be increased for optimal performance.
When increasing the commit interval at the session level, you must remember to increase the size of the database rollback segments to accommodate the larger number of rows processed between commits.
You can improve performance by turning off session recovery. The PowerCenter
Server writes recovery information in the OPB_SRVR_RECOVERY table during each
commit. This can decrease performance. The PowerCenter Server setup can be set to
disable session recovery. But be sure to weigh the importance of improved session
performance against the ability to recover an incomplete session when considering
this option.
If a session runs with decimal arithmetic enabled, disabling decimal arithmetic may
improve session performance.
The Decimal datatype is a numeric datatype with a maximum precision of 28. To use
a high-precision Decimal datatype in a session, it must be configured so that the
PowerCenter Server recognizes this datatype by selecting Enable Decimal Arithmetic
in the session property sheet. However, since reading and manipulating this high-precision datatype can slow the PowerCenter Server, session performance may be improved by disabling decimal arithmetic.
To reduce the amount of time spent writing to the session log file, set the tracing
level to Terse. Terse tracing should only be set if the sessions run without problems
and session details are not required. At this tracing level, the PowerCenter Server
does not write error messages or row-level information for reject data. However, if
terse is not an acceptable level of detail, you may want to consider leaving the
tracing level at Normal and focus your efforts on reducing the number of
transformation errors.
Note that the tracing level must be set to Normal in order to use the reject loading
utility.
As an additional debug option (beyond the PowerCenter Debugger), you may set the tracing level to Verbose to see the flow of data between transformations. However, this will significantly affect session performance, so do not use Verbose tracing except when debugging.
The session tracing level overrides any transformation-specific tracing levels within
the mapping. Informatica does not recommend reducing error tracing as a long-term
response to high levels of transformation errors. Because there are only a handful of
reasons why transformation errors occur, it makes sense to fix and prevent any
recurring transformation errors.
Challenge
Because there are many variables involved in identifying and rectifying performance
bottlenecks, an efficient method for determining where bottlenecks exist is crucial to
good data warehouse management.
Description
1. Write
2. Read
3. Mapping
4. Session
5. System
Before you begin, you should establish an approach for identifying performance
bottlenecks. To begin, attempt to isolate the problem by running test sessions. You
should be able to compare the session’s original performance with that of the tuned
session’s performance.
The swap method is very useful for determining the most common bottlenecks. It
involves the following five steps:
Relational Targets
The most common performance bottleneck occurs when the PowerCenter Server writes to a target database. This type of bottleneck can easily be identified by making a copy of the session and configuring it to write to a flat file target instead of the relational target. If the session performance is significantly increased when writing to a flat file, you have a write bottleneck.
If the session targets a flat file, you probably do not have a write bottleneck. You can
optimize session performance by writing to a flat file target local to the PowerCenter
server. If the local flat file is very large, you can optimize the write process by
dividing it among several physical drives.
Read Bottlenecks
Relational Sources
If the session reads from a relational source, you should first use a read test session
with a flat file as the source in the test session. You may also use a database query
to indicate if a read bottleneck exists.
1. Create a mapping and session that writes the source table data to a flat file.
2. Create a test mapping that contains only the flat file source, the source
qualifier, and the target table.
3. Create a session for the test mapping.
If the test session’s performance increases significantly, you have a read bottleneck.
If your session reads from a flat file source, you probably do not have a read
bottleneck. Tuning the Line Sequential Buffer Length to a size large enough to hold
approximately four to eight rows of data at a time (for flat files) may help when
reading flat file sources. Ensure the flat file source is local to the PowerCenter
Server.
Mapping Bottlenecks
If you have eliminated the reading and writing of data as bottlenecks, you may have
a mapping bottleneck. Use the swap method to determine if the bottleneck is in the
mapping. After using the swap method, you can use the session’s performance
details to determine if mapping bottlenecks exist. High Rowsinlookupcache and
Errorrows counters indicate mapping bottlenecks. Follow these steps to identify
mapping bottlenecks:
Multiple lookups can slow the session. You may improve session performance by
locating the largest lookup tables and tuning those lookup expressions.
For further details on eliminating mapping bottlenecks, refer to the Best Practice:
Tuning Mappings for Better Performance
Session Bottlenecks
Session performance details can be used to flag other problem areas in the session
Advanced Options Parameters or in the mapping.
For further details on eliminating session bottlenecks, refer to the Best Practice:
Tuning Sessions for Better Performance.
System Bottlenecks
After tuning the source, target, mapping, and session, you may also consider tuning
the system hosting the PowerCenter Server.
Windows NT/2000
Use system tools such as the Performance tab in the Task Manager or the
Performance Monitor to view CPU usage and total memory usage.
UNIX
On UNIX, use system tools like vmstat and iostat to monitor such items as system
performance and disk swapping actions.
For further information regarding system tuning, refer to the Best Practices:
Performance Tuning UNIX-Based Systems and Performance Tuning NT/2000-Based
Systems.
The following table details the Performance Counters that can be used to flag session
and mapping bottlenecks. Note that these can only be found in the Session
Performance Details file.
Note: The PowerCenter Server generates two sets of performance counters for a
Joiner transformation. The first set of counters refers to the master source. The
Joiner transformation does not generate output row counters associated with the
master source. The second set of counters refers to the detail source.
Challenge
Description
To ensure the use of consistent data source names for the same data sources across
the domain, the Administrator can create a single "official" set of data sources, then
use the Repository Manager to export that connection information to a file. You can
then distribute this file and import the connection information for each client
machine.
Solution
• From Repository Manager, choose Export Registry from the Tools drop down
menu.
• For all subsequent client installs, simply choose Import Registry from the
Tools drop down menu.
The “missing or invalid license key” error occurs when attempting to install
PowerCenter Client tools on NT 4.0 or Windows 2000 with a userid other than
‘Administrator.’
This problem also occurs when the client software tools are installed under the
Administrator account, and subsequently a user with a non-administrator ID
attempts to run the tools. The user who attempts to log in using the normal ‘non-
administrator’ userid will be unable to start the PowerCenter Client tools. Instead,
the software will display the message indicating that the license key is missing or
invalid.
The session log editor is not automatically determined when the PowerCenter
Client tools are installed. A window appears the first time a session log is
viewed from the PowerCenter Server Manager, prompting the user to enter
the full path name of the editor to be used to view the logs. Users often set
this parameter incorrectly and must access the registry to change it.
Solution
Challenge
Configuring the Throttle Reader and File Debugging options, adjusting semaphore
settings in the Unix environment, and configuring server variables.
Description
If problems occur when running sessions, some adjustments at the Server level can
help to alleviate issues or isolate problems.
One technique that often helps resolve "hanging" sessions is to limit the number of reader buffers by setting the Throttle Reader parameter. This is particularly effective if your mapping contains many target tables, or if the session employs constraint-based loading. The parameter closely manages buffer blocks in memory by restricting the number of blocks that the Reader can use.
Note for PowerCenter 5.x and above ONLY: If a session is hanging and it is
partitioned, it is best to remove the partitions before adjusting the throttle reader.
When a session is partitioned, the server makes separate connections to the source
and target for every partition. This will cause the server to manage many buffer
blocks. If the session still hangs, try adjusting the throttle reader.
Solution: To limit the number of reader buffers using Throttle Reader on NT/2000:
• Open the registry key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PowerMart\Parameters\MiscInfo.
• Create a new String value named 'ThrottleReader' with value data of '10' (a scripted alternative is sketched below).
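For administrators who prefer to script the change rather than edit the registry by hand, a minimal Python sketch using the standard winreg module might look like the following. It must be run on the server machine with rights to write under HKEY_LOCAL_MACHINE; the value of '10' is the same starting point suggested above.

# One way to script the ThrottleReader change on Windows instead of using regedit.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\PowerMart\Parameters\MiscInfo"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # String (REG_SZ) value named ThrottleReader with value data '10'
    winreg.SetValueEx(key, "ThrottleReader", 0, winreg.REG_SZ, "10")

print("ThrottleReader set to 10")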
If problems occur when running sessions or if the PowerCenter Server has a stability issue, help technical support to resolve the issue by supplying them with Debug files, which are produced by enabling the following debug options:
• DebugScrubber=4
• DebugWriter=1
• DebugReader=1
• DebugDTM=1
The UNIX version of the PowerCenter Server uses operating system semaphores for
synchronization. You may need to increase these semaphore settings before
installing the server.
Informatica recommends setting the following parameters as high as possible for the
operating system. However, if you set these parameters too high, the machine may
not boot. Refer to the operating system documentation for parameter limits:
For example, you might add the following lines to the Solaris /etc/system file to
configure the UNIX kernel:
set shmsys:shminfo_shmmin = 1
set shmsys:shminfo_shmseg = 10
set semsys:seminfo_semmni = 70
Approach
In the Server Manager, edit the server configuration to set or change the variables. Each registered server has its own set of variables; the list is fixed, not user-extensible. The variables can be referenced in the following locations:
• Server Manager session editor: anywhere in the fields for session log directory, bad file directory, etc.
• Designer: the Aggregator/Rank/Joiner attribute for 'Cache Directory'; the External Procedure attribute for 'Location'
Does every session and mapping have to use these variables (are they mandatory)?
• No. If you remove any variable reference from the session or the widget
attributes then the server does not use that variable.
• The variable is just a convenience; the user can choose to use it or not. The
variable will be expanded only if it is explicitly referenced from another
location. If the session log directory is specified as $PMSessionLogDir, then
the logs are put in that location.
Note that this location may be different on every server. This is in fact a primary
purpose for utilizing variables. But if the session log directory field is changed to
designate a specific location, e.g. ‘/home/john/logs’, then the session logs will
instead be placed in the directory location as designated. (The variable
$PMSessionLogDir will be unused so it does not matter what the value of the variable
is set to).
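A toy sketch of the substitution behavior described above (this is not PowerCenter's actual implementation; the directory value and field contents are made up):

# The variable is substituted only where its name is actually referenced;
# a hard-coded path is used as-is and the variable's value is irrelevant.
server_variables = {"$PMSessionLogDir": "/opt/powercenter/logs"}   # per-server value

def resolve(field_value, variables):
    for name, value in variables.items():
        field_value = field_value.replace(name, value)
    return field_value

print(resolve("$PMSessionLogDir", server_variables))   # -> /opt/powercenter/logs
print(resolve("/home/john/logs", server_variables))    # variable unused -> /home/john/logs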
Challenge
Description
This Best Practice provides general guidance for sizing computing environments. It
also discusses some potential questions and pitfalls that may arise when migrating to
Production.
Certain terms used within this Best Practice are specific to Informatica’s
PowerCenter. Please consult the appropriate PowerCenter manuals for explanation of
these terms where necessary.
Environmental configurations may vary greatly with regard to hardware and software
sizing. In addition to requirements for PowerCenter, other applications may share the
server. Be sure to consider all mandatory server software components, including the
operating system and all of its components, the database engine, front-end engines,
etc. Regardless of whether or not the server is shared, it will be necessary to
research the requirements of these additional software components when estimating
the size of the overall environment.
Technical Information
Before delving into key sizing questions, let us review the PowerCenter engine and
its associated resource needs. Each session:
• Requires 20-30 MB of memory for the main server engine for session coordination.
Note: It may be helpful to refer to the Performance Tuning section in Phase 4 of the Informatica Methodology when determining memory settings. The Performance Tuning section provides additional information on factors that typically affect session performance, and offers general guidance for estimating session resources.
Note: Sorting the input to aggregations will greatly reduce the need for memory.
Disk space is not a factor if the machine is dedicated exclusively to the server
engine. However, if the following conditions exist, disk space will need to be carefully
considered:
Key Questions
The goal of this analysis is to size the machine so that the ETL processes can
complete within the specified load window.
Consider the following questions when estimating the required number of sessions,
the volume of data moved per session, and the caching requirements for the
session’s lookup tables, aggregation, and heterogeneous joins. Use these estimates
along with recommendations in the preceding Technical Information section to
determine the required number of processors, memory, and disk space to achieve
the required performance to meet the load window.
With these additional processing requirements in mind, consider platform size in light
of the following questions:
The following is a testimonial from a customer configuration. Please note that these
performance tests were run on a previous version of PowerCenter, which did not
include the performance and functional enhancements in release 5.1. These results
are offered as one example of throughput; results will vary by installation because each environment has a unique architecture and unique data characteristics.
The performance tests were performed on a 4-processor Sun E4500 with 2GB of
memory. This configuration handled just under 20.5 million rows, and more than 2.8GB
of data, in less than 54 minutes. In this test scenario, 22 sessions ran in parallel,
populating a large product sales table. Four sessions ran after the set of 22,
populating various summarization tables based on the product sales table. All of the
mappings were complex, joining several sources and utilizing several Expression,
Lookup and Aggregation transformations. The source and target database used in
the tests was Oracle. The source and target were both hosted locally on the ETL
Server.
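For comparison with your own runs, the quoted figures translate into rough throughput numbers; the arithmetic below simply restates them per second and per minute.

# Back-of-the-envelope throughput from the figures quoted above; use only as a
# yardstick, since your own numbers will differ.
rows = 20_500_000          # "just under 20.5 million rows"
gigabytes = 2.8            # "more than 2.8GB of data"
minutes = 54               # "in less than 54 minutes"

seconds = minutes * 60
print(f"~{rows / seconds:,.0f} rows/sec")            # roughly 6,300 rows/sec
print(f"~{gigabytes * 1024 / seconds:.2f} MB/sec")   # roughly 0.9 MB/sec sustained
print(f"~{gigabytes * 1024 / minutes:.0f} MB/min")   # roughly 53 MB/min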
Links
The following link may prove helpful when determining the platform size: www.tpc.org. This website contains benchmarking reports that can help you fine-tune your environment and may assist in determining the processing power required.
Challenge
Description
When a network or other problem causes a session whose source contains a million
rows to fail after only half of the rows are committed to the target, one option is to
truncate the target and run the session again from the beginning. But that is not the
only option. Rather than processing the first half of the source again, you can tell the
server to keep data already committed to the target database and process the rest of
the source. This results in accurate and complete target data, as if the session
completed successfully with one run. This technique is called performing recovery.
When you run a session in recovery mode, the server notes the row id of the last row
committed to the target database. The server then reads all sources again, but only
processes from the subsequent row id. For example, if the server commits 1000 rows
before the session fails, when you run the session in recovery mode, the server
reads all source tables, and then passes data to the Data Transformation Manager
(DTM) starting from row 1001.
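The resume-from-last-commit behavior can be pictured with a short conceptual sketch. This is not PowerCenter internals; read_source_rows and run_with_recovery are hypothetical stand-ins for the source reader and the DTM/writer.

# Conceptual sketch of recovery: the source is re-read from the beginning, but
# rows at or before the last committed row id are skipped, so the target ends
# up as if the session had completed in a single run.
def read_source_rows():
    # stand-in for re-reading the full source (2,000 rows)
    for i in range(1, 2001):
        yield {"row_id": i}

def run_with_recovery(rows, load_row, last_committed_row_id):
    for row_id, row in enumerate(rows, start=1):
        if row_id <= last_committed_row_id:
            continue                      # already committed before the failure
        load_row(row)                     # processing effectively resumes at row 1001

loaded = []
run_with_recovery(read_source_rows(), loaded.append, last_committed_row_id=1000)
print(len(loaded), loaded[0]["row_id"])   # -> 1000 rows loaded, starting at row 1001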
When necessary, the server can recover the same session more than once. That is, if
a session fails while running in recovery mode, you can re-run the session in
recovery mode until the session completes successfully. This is called nested
recovery.
The server can recover committed target data if the following three criteria are met:
• All session targets are relational. The server can only perform recovery on
relational tables. If the session has file targets, the server cannot perform
recovery. If a session writing to file targets fails, delete the files, and run the
session again.
• The session is configured for a normal (not bulk) target load. The
server uses database logging to perform recovery. Since bulk loading
bypasses database logging, the server cannot recover sessions configured to
bulk load targets. Although recovering a large session can be more efficient than running the entire session again, this benefit must be weighed against the performance gain of bulk loading. When you configure a session to load in bulk, the server logs a message in the session log stating that recovery is not supported.
In addition, to ensure accurate results from the recovery, the following must be true:
• Source data does not change before performing recovery. This includes
inserting, updating, and deleting source data. Changes in source files or
tables can result in inaccurate data.
• The mapping used in the session does not use a Sequence Generator
or Normalizer. Both the Sequence Generator and the Normalizer
transformations generate source values: the Sequence Generator generates
sequences, and the Normalizer generates primary keys. Therefore, sessions
using these transformations are not guaranteed to return the same values
when performing recovery.
Session Logs
If a session is configured to archive session logs, the server creates a new session
log for the recovery session. If you perform nested recovery, the server creates a
new log for each session run. If the session is not configured to archive session logs,
the server overwrites the existing log when you recover the session.
Reject Files
When performing recovery, the server creates a single reject file. The server
appends rejected rows from the recovery session (or sessions) to the session reject
file. This allows you to correct and load all rejected rows from the completed session.
Example
Session “s_recovery” reads from a Sybase source and writes to a target table in
“production_target”, a Microsoft SQL Server database. This session is configured for
a normal load. The mapping consists of:
The first time the session runs, the server creates a session log named
s_recovery.log. (If the session is configured to save logs by timestamp, the server
appends the date and time to the log file name.) The server also creates a reject file
for the target table named t_lineitem.bad.
The following section of the session log shows the server preparing to load normally
to the production_target database. Since the server cannot find
OPB_SRVR_RECOVERY, it creates the table.
...
CMN_1039 [01/14/99 18:42:44 SQL Server Message 208 : Invalid object name 'OPB_SRVR_RECOVERY'.]
CMN_1040 [01/14/99 18:42:44 DB-Library Error 10007 : General SQL Server error: Check messages from the
SQL Server.]
As the following session log excerpt shows, the server performs six target-based commits before the session fails.
=============================================
Table: T_LINEITEM
Rows Rejected: 0
=============================================
(The log repeats this commit summary for each of the six target-based commits.)
When a session fails, you can truncate the target and run the entire session again.
However, since the server committed more than 60,000 rows to the target, rather
than running the whole session again, you can configure the session to recover the
committed rows.
To run a recovery session, check the Perform Recovery option on the Log Files tab of
the session property sheet.
To archive the existing session log, either increase the number of session logs saved,
or choose Save Session Log By Timestamp option on the Log Files tab.
Start the session, or if necessary, edit the session schedule and reschedule the
session.
When you run the session in recovery mode, the server creates a new session log.
Since the session is configured to save multiple logs, it renames the existing log
s_recovery.log.0, and writes all new session information in s_recovery.log. The
server reopens the existing reject file (t_lineitem.bad) and appends any rejected
rows to that file.
When performing recovery, the server reads the source, and then passes data to the
DTM beginning with the first uncommitted row. In the session log below, the server
notes the session is in recovery mode, and states the row at which it will begin
recovery (i.e., row 60751).
When running the session with the Verbose Data tracing level, the server provides
more detailed information about the session. As seen below, the server sets row
60751 as the row from which to recover. It opens the existing reject file and begins
processing with the next row, 60752.
Note: Setting the tracing level to Verbose Data slows the server's performance and
is not recommended for most production sessions.
If the recovery session fails before completing, you can run the session in recovery
mode again. The server runs the session as it did the earlier recovery sessions,
creating a new session log and appending bad data to the reject file.
You can run the session in recovery mode as many times as necessary to complete
the session's target tables.
When the server completes loading target tables, it performs any configured post-
session stored procedures or commands normally, as if the session completed in a
single run.
After successfully recovering a session, you must edit the session properties to clear
the Perform Recovery option. If necessary, return the session to its normal schedule
and reschedule the session.
Things to Consider
Challenge
Identifying the departments and individuals that are likely to benefit directly from the project implementation.
Understanding these individuals, and their business information requirements, is key to defining and scoping the
project.
Description
The following four steps summarize business case development and lay a good foundation for
proceeding into detailed business requirements for the project.
1. One of the first steps in establishing the business scope is identifying the project beneficiaries and
understanding their business roles and project participation. In many cases, the Project Sponsor can
help to identify the beneficiaries and the various departments they represent. This information can then
be summarized in an organization chart that is useful for ensuring that all project team members
understand the corporate/business organization.
• Activity - Interview project sponsor to identify beneficiaries, define their business roles and
project participation.
• Deliverable - Organization chart of corporate beneficiaries and
participants.
2. The next step in establishing the business scope is to understand the business problem or need that
the project addresses. This information should be clearly defined in a Problem/Needs Statement, using
business terms to describe the problem. For example, the problem may be expressed as "a lack of
information" rather than "a lack of technology" and should detail the business decisions or analysis that
is required to resolve the lack of information. The best way to gather this type of information is by
interviewing the Project Sponsor and/or the project beneficiaries.
3. The next step in creating the project scope is defining the business goals and objectives for the
project and detailing them in a comprehensive Statement of Project Goals and Objectives. This
statement should be a high-level expression of the desired business solution (e.g., what strategic or tactical benefits the business expects to gain from the project) and should avoid any technical considerations at this point. Again, the Project Sponsor and beneficiaries are the best sources for this
type of information. It may be practical to combine information gathering for the needs assessment and
goals definition, using individual interviews or general meetings to elicit the information.
4. The final step is creating a Project Scope and Assumptions statement that clearly defines the
boundaries of the project based on the Statement of Project Goals and Objectives and the associated
project assumptions. This statement should focus on the type of information or analysis that will be
included in the project rather than what will not.
The assumptions statements are optional and may include qualifiers on the scope, such as assumptions
of feasibility, specific roles and responsibilities, or availability of resources or data.
Challenge
Developing a solid business case for the project that includes both the tangible and intangible potential benefits
of the project.
Description
The Business Case should include both qualitative and quantitative assessments of the project.
The Qualitative Assessment portion of the Business Case is based on the Statement of Problem/Need and the
Statement of Project Goals and Objectives (both generated in Subtask 1.1.1) and focuses on discussions with the
project beneficiaries of expected benefits in terms of problem alleviation, cost savings or controls, and increased
efficiencies and opportunities.
The Quantitative Assessment portion of the Business Case provides specific measurable details of the
proposed project, such as the estimated ROI, which may involve the following calculations:
• Cash flow analysis - Projects positive and negative cash flows for the
anticipated life of the project. Typically, ROI measurements use the cash flow
formula to depict results.
• Net present value - Evaluates cash flow according to the long-term value of
current investment. Net present value shows how much capital needs to be
invested currently, at an assumed interest rate, in order to create a stream of
payments over time. For instance, to generate an income stream of $500 per
month over six months at an interest rate of eight percent would require an
investment (a net present value) of $2,311.44; a short calculation after this list reproduces the figure.
• Payback - Determines how much time will pass before an initial capital
investment is recovered.
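The net present value figure above can be reproduced with a few lines of arithmetic. The sketch below assumes the eight percent rate is applied per monthly payment period (the assumption under which $2,311.44 works out) and adds a simple undiscounted payback calculation with a made-up investment amount.

# Present value of the income stream quoted above: $500 per month for six months,
# discounting at 8% per payment period (an explicit assumption).
payment = 500.0
periods = 6
rate = 0.08                                   # per period

present_value = sum(payment / (1 + rate) ** t for t in range(1, periods + 1))
print(f"net present value: ${present_value:,.2f}")   # -> $2,311.44

# Simple (undiscounted) payback: months until cumulative inflows cover an
# initial outlay -- e.g. a made-up $1,500 investment is paid back during month 3.
investment = 1500.0
cumulative, month = 0.0, 0
while cumulative < investment:
    month += 1
    cumulative += payment
print(f"payback in month {month}")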
The following are steps to calculate the quantitative business case or ROI:
Step 2. Analyze Potential Benefits. Discussions with representative managers and users or the Project
Sponsor should reveal the tangible and intangible benefits of the project. The most effective format for
presenting this analysis is often a "before" and "after" format that compares the current situation to the project
expectations.
Step 3. Calculate Net Present Value for all Benefits. Information gathered in this step should help the
customer representatives to understand how the expected benefits will be allocated throughout the organization
over time, using the enterprise deployment map as a guide.
Step 4. Define Overall Costs. Customers need specific cost information in order to assess the dollar impact of
the project. Cost estimates should address the following fundamental cost components:
• Hardware
• Networks
• RDBMS software
• Back-end tools
• Query/reporting tools
• Internal labor
• External labor
• Ongoing support
• Training
Step 5. Calculate Net Present Value for all Costs. Use either actual cost estimates or percentage-of-cost
values (based on cost allocation assumptions) to calculate costs for each cost component, projected over the
timeline of the enterprise deployment map. Actual cost estimates are more accurate than percentage-of-cost
allocations, but much more time-consuming. The percentage-of-cost allocation process may be valuable for initial
ROI snapshots until costs can be more clearly predicted.
Step 6. Assess Risk, Adjust Costs and Benefits Accordingly. Review potential risks to the project and make
corresponding adjustments to the costs and/or benefits. Some of the major risks to consider are:
• Scope creep, which can be mitigated by thorough planning and tight project
scope
• Integration complexity, which can be reduced by standardizing on vendors
with integrated product sets or open architectures
• An inappropriate architectural strategy
• Other miscellaneous risks from management or end users who may withhold
project support; from the entanglements of internal politics; and from
technologies that don't function as promised
Step 7. Determine Overall ROI. When all other portions of the business case are complete, calculate the
project's "bottom line". Determining the overall ROI is simply a matter of subtracting net present value of total
costs from net present value of (total incremental revenue plus cost savings).
For more detail on these steps, refer to the Informatica White Paper: 7 Steps to Calculating Data
Warehousing ROI.
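As a minimal illustration of Step 7, the sketch below applies the formula with made-up yearly benefit and cost figures and an assumed discount rate; only the mechanics are meant to carry over.

# Step 7 in miniature: overall ROI as NPV(benefits) minus NPV(costs), using
# invented yearly figures purely to show the mechanics.
rate = 0.08                                   # assumed discount rate

def npv(cash_flows, rate):
    # cash_flows[0] occurs one period from now
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

benefits = [200_000, 350_000, 400_000]        # incremental revenue plus cost savings per year
costs = [250_000, 120_000, 120_000]           # hardware, software, labor, support per year

roi = npv(benefits, rate) - npv(costs, rate)
print(f"project 'bottom line': ${roi:,.2f}")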
Challenge
Defining and prioritizing business and functional requirements is often accomplished through a combination of
interviews and facilitated meetings (i.e., workshops) between the Project Sponsor and beneficiaries and the
Project Manager and Business Analyst.
Description
The following three steps are key for successfully defining and prioritizing requirements:
Step 1: Discovery
During individual (or small group) interviews with high-level management, there is often a focus and clarity of vision that, for some, may be hindered in large meetings or may not be available from lower-level management. On the other hand, a detailed review of existing reports and current analysis from the company's "information providers" can fill in helpful details.
As part of the initial "discovery" process, Informatica generally recommends several interviews at the Project
Sponsor and/or upper management level and a few with those acquainted with current reporting and analysis
processes. A few peer group forums can also be valuable.
However, this part of the process must be focused and brief or it can become unwieldy as much time can be
expended trying to coordinate calendars between worthy forum participants. Set a time period and target list of
participants with the Project Sponsor, but avoid lengthening the process if some participants aren't available.
The Business Analyst, with the help of the Project Architect, documents the findings of the discovery process. The
resulting Business Requirements Specification includes a matrix linking the specific business requirements to their
functional requirements.
The detailed business requirements and information requirements should be reviewed with the project
beneficiaries and prioritized based on business need and the stated project objectives and scope.
Concurrent with the validation of the business requirements, the Architect begins the Functional Requirements
Specification providing details on the technical requirements for the project.
As general technical feasibility is compared to the prioritization from Step 2, the Project Manager, Business
Analyst, and Architect develop consensus on a project "phasing" approach. Items of secondary priority and those
with poor near-term feasibility are relegated to subsequent phases of the project. Thus, they develop a phased,
or incremental, "roadmap" for the project (Project Roadmap).
This is presented to the Project Sponsor for approval and becomes the first "Increment" or starting point for the
Project Plan.
Challenge
Developing a comprehensive work breakdown structure that clearly depicts all of the tasks and subtasks required to complete the project. Because project time and resource estimates are typically based on the Work
Breakdown Structure (WBS), it is critical to develop a thorough, accurate WBS.
Description
A WBS is a tool for identifying and organizing the tasks that need to be completed in a project. The WBS serves
as a starting point for both the project estimate and the project plan.
One challenge in developing a good WBS is obtaining the correct balance between enough detail, and too much
detail. The WBS shouldn't be a 'grocery list' of every minor detail in the project, but it does need to break the
tasks down to a manageable level of detail. One general guideline is to keep task detail to a duration of at least a
day.
It is also important to remember that the WBS is not necessarily a sequential document. Tasks in the hierarchy
are often completed in parallel. At this stage of project planning, the goal is to list every task that must be
completed; it is not necessary to determine the critical path for completing these tasks. For example, we may
have multiple subtasks under a task (e.g., 4.3.1 through 4.3.7 under task 4.3). So, although subtasks 4.3.1
through 4.3.4 may have sequential requirements that force us to complete them in order, subtasks 4.3.5 through
4.3.7 can - and should - be completed in parallel if they do not have sequential requirements. However, it is
important to remember that a task is not complete until all of its corresponding subtasks are completed -
whether sequentially or in parallel. For example, the BUILD phase is not complete until tasks 4.1 through 4.7 are
complete, but some work can (and should) begin for the DEPLOY phase long before the BUILD phase is
complete.
The Project Plan provides a starting point for further development of the project WBS. This sample is a Microsoft
Project file that has been "pre-loaded" with the Phases, Tasks, and Subtasks that make up the Informatica
Methodology. The Project Manager can use this WBS as a starting point, but should review it carefully to ensure
that it corresponds to the specific development effort, removing any steps that aren't relevant or adding steps as
necessary. Many projects will require the addition of detailed steps to accurately represent the development
effort.
If the Project Manager chooses not to use Microsoft Project, an Excel version of the Work Breakdown Structure is
available. The phases, tasks, and subtasks can be exported from Excel into many other project management
tools, simplifying the effort to develop the WBS.
After the WBS has been loaded into the selected project management tool and refined for the specific project
needs, the Project Manager can begin to estimate the level of effort involved in completing each of the steps.
When the estimate is complete, individual resources can be assigned and scheduled. The end result is the Project
Plan. Refer to Developing and Maintaining the Project Plan for further information about the project plan.
Challenge
Developing the first pass of a project plan that incorporates all of the necessary components but is sufficiently flexible to accept the inevitable changes.
Description
Use the following steps as a guide for developing the initial project plan:
• Break the milestones down into major tasks and activities. The Project Plan should be helpful as a
starting point or for recommending tasks for inclusion.
• Continue the detail breakdown, if possible, to a level at which tasks are of about one to three days'
duration. This level provides satisfactory detail to facilitate estimation and tracking. If the detail tasks
are too broad in scope, estimates are much less likely to be accurate.
• Confer with technical personnel to review the task definitions and effort estimates (or even to help
define them, if applicable).
• Establish the dependencies among tasks, where one task cannot be started until another is completed
(or must start or complete concurrently with another).
• Define the resources based on the role definitions and estimated number of resources needed for each
role.
• Assign resources to each task. If a resource will only be part-time on a task, indicate this in the plan.
At this point, especially when using Microsoft Project, it is advisable to create dependencies (i.e., predecessor
relationships) between tasks assigned to the same resource in order to indicate the sequence of that person's
activities.
The initial definition of tasks and effort and the resulting schedule should be an exercise in pragmatic feasibility
unfettered by concerns about ideal completion dates. In other words, be as realistic as possible in your initial
estimations, even if the resulting schedule is likely to be a hard sell to company management.
This initial schedule becomes a starting point. Expect to review and rework it, perhaps several times. Look for
opportunities for parallel activities, perhaps adding resources, if necessary, to improve the schedule.
Once the Project Sponsor and company managers agree to the initial plan, it becomes the basis for assigning
tasks to individuals on the project team and for setting expectations regarding delivery dates. The planning
activity then shifts to tracking tasks against the schedule and updating the plan based on status and changes to
assumptions.
One approach is to establish a baseline schedule (and budget, if applicable) and then track changes against it.
With Microsoft Project, this involves creating a "Baseline" that remains static as changes are applied to the
schedule. If company and project management do not require tracking against a baseline, simply maintain the
plan through updates without a baseline.
Regular status reporting should include any changes to the schedule, beginning with team members' notification
that dates for task completions are likely to change or have already been exceeded. These status report updates
should trigger a regular plan update so that project management can track the effect on the overall schedule and
budget.
Be sure to evaluate any changes to scope (see 1.2.4 Manage Project and Scope Change Assessment), or
changes in priority or approach, as they arise to determine if they impact the plan. It may be necessary to modify
the plan if changes in scope or priority require rearranging task assignments or delivery sequences, or if they add
new tasks or postpone existing ones.
Challenge
Description
It is important to remember that the quality of a project can be directly correlated to the amount of
review that occurs during its lifecycle.
In addition to the initial project plan review with the Project Sponsor, schedule regular status meetings
with the sponsor and project team to review status, issues, scope changes and schedule updates.
Gather status, issues, and schedule update information from the team one day before the status meeting in order to compile and distribute the Status Report.
The Project Manager should coordinate, if not facilitate, reviews of requirements, plans and deliverables
with company management, including business requirements reviews with business personnel and
technical reviews with project technical personnel.
Set a process in place beforehand to ensure appropriate personnel are invited, any relevant documents
are distributed at least 24 hours in advance, and that reviews focus on questions and issues (rather
than a laborious "reading of the code").
Directly address and evaluate any changes to the planned project activities, priorities, or staffing as
they arise, or are proposed, in terms of their impact on the project plan.
• Use the Scope Change Assessment to record the background problem or requirement and the
recommended resolution that constitutes the potential scope change.
• Review each potential change with the technical team to assess its impact on the project,
evaluating the effect in terms of schedule, budget, staffing requirements, and so forth.
• Present the Scope Change Assessment to the Project Sponsor for acceptance (with formal
sign-off, if applicable). Discuss the assumptions involved in the impact estimate and any
potential risks to the project.
The Project Manager should institute this type of change management process in response to any issue
or request that appears to add or alter expected activities and has the potential to affect the plan. Even
if there is no evident effect on the schedule, it is important to document these changes because they
may affect project direction and it may become necessary, later in the project cycle, to justify these
changes to management.
Issues Management
Any questions, problems, or issues that arise and are not immediately resolved should be tracked to
ensure that someone is accountable for resolving them so that their effect can also be visible.
Use the Issues Tracking template, or something similar, to track issues, their owner, and dates of entry
and resolution as well as the details of the issue and of its solution.
Rather than simply walking away from a project when it seems complete, there should be an explicit
close procedure. For most projects this involves a meeting where the Project Sponsor and/or
department managers acknowledge completion or sign a statement of satisfactory completion.
• Even for relatively short projects, use the Project Close Report to finalize the project with a
final status report detailing:
o What was accomplished
o Any justification for tasks expected but not completed
o Recommendations
• Prepare for the close by considering what the project team has learned about the
environments, procedures, data integration design, data architecture, and other project plans.
• Formulate the recommendations based on issues or problems that need to be addressed.
Succinctly describe each problem or recommendation and if applicable, briefly describe a
recommended approach.
Challenge
Description
• Who needs access to the Repository? What do they need the ability to do?
• Is a central administrator required? What permissions are appropriate for
him/her?
• Is the central administrator responsible for designing and configuring the
repository security? If not, has a security administrator been identified?
• What levels of permissions are appropriate for the developers? Do they need
access to all the folders?
• Who needs to start sessions manually?
• Who is allowed to start and stop the Informatica Server?
• How will PowerCenter security be administered? Will it be the same as the
database security scheme?
• Do we need to restrict access to Global Objects?
The following pages offer some answers to these questions and some suggestions for assigning user groups and access privileges.
Global Objects include Database Connections, FTP Connections and External Loader
Connections. Global Object permissions, in addition to privileges and permissions
assigned using the Repository Manager, affect the ability to perform tasks in the
Server Manager, the Repository Manager, and the command line program, pmcmd.
The Server Manager also offers an enhanced security option that allows you to
specify a default set of privileges that applies restricted access controls for Global
Objects. Only the owner of the Object or a Super User can manage permissions for a
Global Object.
Choosing the Enable Security option activates the following set of default
privileges:
Enabling Enhanced Security does not lock the restricted access settings for Global
Objects. This means that the permissions for Global Objects can be changed after
enabling Enhanced Security.
The following list summarizes some possible privileges that may be granted:
• Session Operator - Can run any sessions or batches, regardless of folder-level permissions.
• Use Designer - Can edit metadata in the Designer.
• Browse Repository - Can browse repository contents through the Repository Manager.
• Create Sessions and Batches - Can create, modify, and delete sessions and batches in Server Manager.
• Administer Repository - Can create and modify folders.
• Administer Server - Can configure connections on the server and ...
The next table suggests a common set of initial groups and the privileges that may
be associated with them:
Users with Administer Repository or Super User privileges may edit folder properties,
which must identify a folder owner and group, and also determine whether the folder
is shareable, meaning that shortcuts can be created pointing to objects within the
folder, thereby enabling object reuse. After a folder is flagged as shareable, this
property cannot be changed. For each folder, privileges are set for the owner, group,
and repository (i.e., any user).
The following list details the three folder-level privileges: Read, Write, and Execute:
• Read - Can read, copy, and create shortcuts to repository objects in the folder. Users without read permission cannot see the folder.
• Write - Can edit metadata in the folder.
• Execute - Can run sessions using mappings in the folder.
Allowing shortcuts enables other folders in the same repository to share objects such
as source/target tables, transformations, and mappings. A recommended practice is
to create only one shareable folder per repository, and to place all reusable objects
within that sharable folder. When other folders create a shortcut from a shareable
folder, that folder inherits the properties of the object, so changes to common logic
or elements can be managed more efficiently.
Note that users with the Session Operator privilege can run sessions or batches regardless of folder-level permissions. A folder owner should be allowed all three folder-level permissions. Members of the folder's group, however, may be granted only Read/Write, or possibly all three levels, depending on the desired level of security. Repository-wide permissions should be restricted to Read only, if granted at all.
You might also wish to add a group specific to each application if there are many
application development tasks being performed within the same repository. For
example, if you have two projects, ABC and XYZ, it may be appropriate to create a
group for ABC developers and another for XYZ developers. This enables you to
assign folder level security to the group and keep the two projects from accidentally
working in folders that belong to the other project team. In this example, you may
assign group level security for all of the ABC folders to the ABC group. In this way,
only members of the ABC group can make changes to those folders.
Informatica recommends creating individual User IDs for all developers and
administrators on the system rather than using a single shared ID. One of the most
important reasons for this is session level locking. When a session is in use by a
developer, it cannot be opened and modified by anyone but that user. Locks thus
prevent repository corruption by preventing simultaneous uncoordinated updates.
Also, if multiple individuals share a common login ID, it is difficult to identify which
developer is making (or has made) changes to an object.