
DATA WAREHOUSING

A data warehouse (DW) is a database used for reporting.

The data is offloaded from the operational systems for reporting. It may pass through an operational data store for additional operations before it is used in the DW for reporting. A data warehouse maintains its functions in three layers: staging, integration, and access. Staging stores raw data for use by developers (analysis and support), the integration layer integrates data and provides a level of abstraction from users, and the access layer gets data out to users.

1. Ralph Kimball's paradigm: the data warehouse is the conglomerate of all data marts within the enterprise, and information is always stored in the dimensional model. Kimball's definition: a data warehouse is a copy of transaction data specifically structured for query and analysis. His approach to data warehouse design is bottom-up: data marts are first created to provide reporting and analytical capabilities for specific business processes.

2. Bill Inmon's paradigm: the data warehouse is one part of the overall business intelligence system. An enterprise has one data warehouse, and data marts source their information from the data warehouse. In the data warehouse, information is stored in third normal form. Inmon's definition: a data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of management's decision-making process.

Subject-Oriented: a data warehouse can be used to analyze a particular subject area. For example, "sales" can be a particular subject.
Integrated: a data warehouse integrates data from multiple data sources. For example, source A and source B may have different ways of identifying a product, but in a data warehouse there will be only a single way of identifying a product.
Time-Variant: historical data is kept in a data warehouse. For example, one can retrieve data from 3 months, 6 months, 12 months, or even older from a data warehouse. This contrasts with a transaction system, where often only the most recent data is kept. For example, a transaction system may hold only the most recent address of a customer, whereas a data warehouse can hold all addresses associated with a customer.
Non-Volatile: once data is in the data warehouse, it will not change, so historical data in a data warehouse should never be altered.

Inmon's approach to data warehouse design is top-down: the data warehouse is designed using a normalized enterprise data model, and "atomic" data, that is, data at the lowest level of detail, is stored in the data warehouse.



ARCHITECTURE

The architecture diagram referenced here is taken from www.databaseanswers.com; we recommend visiting that site for a good understanding of data modeling. The key terms from that architecture are defined below to facilitate a better understanding of the subject.

1. Operational Data Store (ODS): a database designed to integrate data from multiple sources for additional operations on the data. The data is then passed back to operational systems for further operations and to the data warehouse for reporting.

2. ERP: enterprise resource planning integrates internal and external management information across an entire organization, embracing finance/accounting, manufacturing, sales and service, etc. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the organization and to manage the connections to outside stakeholders.

3. CRM: customer relationship management is a widely implemented strategy for managing a company's interactions with customers, clients and sales prospects. It involves using technology to organize, automate, and synchronize business processes, principally sales activities, but also those for marketing, customer service, and technical support. Customer relationship management describes a company-wide business strategy including customer-interface departments as well as other departments.

4. Flat files in data warehousing: flat files do not maintain referential integrity like an RDBMS and are usually separated by delimiters such as commas and pipes. From Informatica 8.6 onward, unstructured data sources such as MS Word documents, e-mail, and PDF files can also be taken as sources.

5. ETL (Extract, Transform, and Load): a process in database usage, and especially in data warehousing, that involves extracting data from outside sources, transforming it to fit operational needs (which can include quality levels), and loading it into the end target (database or data warehouse).

6. Data marts: a data mart (DM) is the access layer of the data warehouse (DW) environment that is used to get data out to the users. The DM is a subset of the DW, usually oriented to a specific business line or team. For the definition of the data warehouse, refer to the introduction above.

7. OLAP: online analytical processing is a methodology for providing end users with access to large amounts of data in an intuitive and rapid manner to assist with deductions based on investigative reasoning. OLAP systems need to support the complex analysis requirements of decision-makers, analyze the data from a number of different perspectives (business dimensions), and support complex analyses against large input (atomic-level) data sets.

8. OLTP: online transaction processing refers to a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing.

9. Data mining: the process of extracting patterns from large data sets by combining methods from statistics and artificial intelligence with database management. Data mining is seen as an increasingly important tool by modern business to transform data into business intelligence, giving an informational advantage.
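The extract/transform/load steps above can be illustrated with a minimal sketch. The file layout, column names, and cleansing rules below are invented for the example, and Python with pandas is used purely for illustration; real ETL is usually done with a dedicated tool such as Informatica.

```python
# Minimal ETL sketch; the data and names are hypothetical, pandas is used for illustration.
import io
import pandas as pd

# Extract: read raw data from a source (a pipe-delimited flat file, inlined here for the sketch).
flat_file = io.StringIO(
    "sale_date|product_id|amount\n"
    "2024-01-05|P1|100\n"
    "2024-01-05|P1|40\n"
    "2024-01-06|P2|\n"          # an incomplete record with a missing amount
)
raw = pd.read_csv(flat_file, sep="|")

# Transform: cleanse and reshape the data to fit the target's needs.
raw["sale_date"] = pd.to_datetime(raw["sale_date"])      # standardize the date format
clean = raw.dropna(subset=["amount"])                    # drop incomplete records
daily = clean.groupby(["sale_date", "product_id"], as_index=False)["amount"].sum()

# Load: write the transformed rows into the end target (database or data warehouse).
# In practice this would be a bulk load, e.g. daily.to_sql("fact_daily_sales", engine, if_exists="append").
print(daily)
```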

Different data warehousing systems have different structures. Some may have an ODS (operational data store), while some may have multiple data marts. Some may have a small number of data sources, while some may have dozens of data sources. In view of this, it is far more reasonable to present the different layers of a data warehouse architecture rather than discussing the specifics of any one system.

In general, all data warehouse systems have the following layers: Data Source Layer, Data Extraction Layer, Staging Area, ETL Layer, Data Storage Layer, Data Logic Layer, Data Presentation Layer, Metadata Layer, and System Operations Layer.


Each component is discussed individually below.

Data Source Layer: this represents the different data sources that feed data into the data warehouse. The data source can be of any format: plain text files, relational databases, other types of databases, and Excel files can all act as data sources. Many different types of data can be a data source: operational data, such as sales, HR, product, inventory, marketing, and systems data; web server logs with user browsing data; internal market research data; and third-party data, such as census data, demographics data, or survey data. All of these data sources together form the Data Source Layer.

Data Extraction Layer: data gets pulled from the data source into the data warehouse system. There is likely some minimal data cleansing, but there is unlikely to be any major data transformation.

Staging Area: this is where data sits prior to being scrubbed and transformed into a data warehouse / data mart. Having one common area makes subsequent data processing and integration easier.

ETL Layer: this is where data gains its "intelligence", as logic is applied to transform the data from a transactional nature to an analytical nature. This layer is also where data cleansing happens.

Data Storage Layer: this is where the transformed and cleansed data sits. Based on scope and functionality, three types of entities can be found here: the data warehouse, data marts, and the operational data store (ODS). In any given system, you may have just one of the three, two of the three, or all three types.

Data Logic Layer: this is where business rules are stored. Business rules stored here do not affect the underlying data transformation rules, but they do affect what the report looks like.

Data Presentation Layer: this refers to the information that reaches the users. It can take the form of a tabular or graphical report in a browser, an emailed report that is generated and sent automatically every day, or an alert that warns users of exceptions, among others.

Metadata Layer: this is where information about the data stored in the data warehouse system is kept. A logical data model is an example of something that belongs in the metadata layer.

System Operations Layer: this layer includes information on how the data warehouse system operates, such as ETL job status, system performance, and user access history.
DW DESIGN

The following are the typical steps involved in the data warehousing project cycle:

Requirement Gathering
Physical Environment Setup
Data Modeling
ETL
OLAP Cube Design
Front End Development
Report Development
Performance Tuning
Query Optimization
Quality Assurance
Rolling out to Production
Production Maintenance
Incremental Enhancements

Each step listed above represents a typical data warehouse design phase and has several sections:

Task Description: this section describes what typically needs to be accomplished during this particular data warehouse design phase.
Time Requirement: a rough estimate of the amount of time this particular data warehouse task takes.
Deliverables: typically, at the end of each data warehouse task, one or more documents are produced that fully describe the steps and results of that particular task. This is especially important for consultants to communicate their results to the clients.
Possible Pitfalls: things to watch out for. Some of them are obvious, some of them not so obvious, but all of them are real.

The Additional Observations section contains my own observations on data warehouse processes not included in any of the design steps.

DATA WAREHOUSE SCHEMA

A schema is a collection of database objects, including tables, views, indexes, and synonyms. You can arrange schema objects in the schema models designed for data warehousing in a variety of ways.

Star Schemas: the star schema (also called star-join schema, data cube, or multi-dimensional schema) is the simplest style of data warehouse schema. A star schema consists of one or more fact tables referencing any number of dimension tables.

The facts that the data warehouse helps analyze are classified along different dimensions. The fact table holds the main data: it includes a large amount of aggregated data, such as price and units sold, and there may be multiple fact tables in a star schema. Dimension tables, which are usually smaller than fact tables, include the attributes that describe the facts; often there is a separate table for each dimension. Dimension tables can be joined to the fact table(s) as needed. Dimension tables have a simple primary key, while fact tables have a set of foreign keys that make up a compound primary key consisting of a combination of the relevant dimension keys.

Advantages: star schemas provide a direct and intuitive mapping between the business entities being analyzed by end users and the schema design; they provide highly optimized performance for typical star queries; and they are widely supported by a large number of business intelligence tools, which may anticipate or even require that the data warehouse schema contain dimension tables.

Snowflake Schemas: the snowflake schema is represented by centralized fact tables which are connected to multiple dimensions. In the snowflake schema, dimensions are normalized into multiple related tables, whereas the star schema's dimensions are denormalized, with each dimension represented by a single table. Snowflake schemas are often better suited to more sophisticated query tools that isolate users from the raw table structures, and to environments having numerous queries with complex criteria.

Advantages: some OLAP multidimensional database modeling tools that use dimensional data marts as data sources are optimized for snowflake schemas.

A snowflake schema can sometimes reflect the way in which users think about data. Users may prefer to generate queries using a star schema in some cases, although this may or may not be reflected in the underlying organization of the database. A multidimensional view is sometimes added to an existing transactional database to aid reporting. In this case, the tables which describe the dimensions will already exist and will typically be normalized. A snowflake schema will therefore be easier to implement. If a dimension is very sparse (i.e. most of the possible values for the dimension have no data) and/or a dimension has a very long list of attributes which may be used in a query, the dimension table may occupy a significant proportion of the database and snowflaking may be appropriate.
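As a concrete illustration of a star schema, the sketch below builds a small fact table with foreign keys into two dimension tables and runs a typical star query (join the dimensions, then aggregate). The table and column names are invented for the example, and pandas stands in for a relational engine.

```python
# Star schema sketch: one fact table referencing two dimension tables (hypothetical data).
import pandas as pd

dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "product_name": ["Widget", "Gadget"],
    "category": ["Hardware", "Hardware"],
})
dim_store = pd.DataFrame({
    "store_key": [10, 20],
    "store_name": ["Downtown", "Airport"],
    "region": ["East", "West"],
})
# Fact rows carry foreign keys into the dimensions plus the numeric measures.
fact_sales = pd.DataFrame({
    "product_key":  [1, 1, 2, 2],
    "store_key":    [10, 20, 10, 20],
    "units_sold":   [5, 3, 7, 2],
    "sales_amount": [50.0, 30.0, 140.0, 40.0],
})

# A typical star query: join the fact to its dimensions, then aggregate by dimension attributes.
star = (fact_sales
        .merge(dim_product, on="product_key")
        .merge(dim_store, on="store_key"))
report = star.groupby(["category", "region"], as_index=False)[["units_sold", "sales_amount"]].sum()
print(report)
```

A snowflake variant of the same design would further normalize dim_product, for example moving category into its own table keyed from the product dimension.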

DIMENSIONAL MODELLING

Dimensional modeling (DM) is the name of a set of techniques and concepts used in data warehouse design. Dimensional modeling always uses the concepts of facts (measures) and dimensions (context).

Facts: facts are typically (but not always) numeric values that can be aggregated, and dimensions are groups of hierarchies and descriptors that define the facts.

Types of facts: there are three types of facts. Additive facts can be summed up through all of the dimensions in the fact table. Semi-additive facts can be summed up for some of the dimensions in the fact table, but not the others. Non-additive facts cannot be summed up for any of the dimensions present in the fact table.

Types of fact tables: there are two types of fact tables. A cumulative fact table describes what has happened over a period of time; for example, it may describe the total sales by product by store by day, and its facts are mostly additive. A snapshot fact table describes the state of things at a particular instant in time, and usually includes more semi-additive and non-additive facts.
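The difference between additive and semi-additive facts can be shown with a short sketch: a deposit amount can be summed across every dimension, while an account balance (a snapshot fact) can be summed across accounts but not across time. The data below is invented for illustration and pandas is used only as a convenient way to show the aggregations.

```python
# Additive vs. semi-additive facts (hypothetical snapshot data).
import pandas as pd

balances = pd.DataFrame({
    "month":    ["2024-01", "2024-01", "2024-02", "2024-02"],
    "account":  ["A", "B", "A", "B"],
    "balance":  [100.0, 200.0, 110.0, 190.0],   # semi-additive: a point-in-time snapshot
    "deposits": [20.0, 50.0, 10.0, 40.0],       # additive: a flow that sums across all dimensions
})

# Valid: sum the semi-additive balance across accounts within a single month.
print(balances.groupby("month")["balance"].sum())        # 300.0 for each month

# Summing balances across months (e.g. 210.0 for account A) has no business meaning;
# a snapshot fact is usually averaged, or the latest value is taken, over time instead.
print(balances.groupby("account")["balance"].mean())

# Fine: the additive deposits measure can be summed across every dimension.
print(balances["deposits"].sum())
```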

Dimension: a dimension is a data element that categorizes each item in a data set into non-overlapping regions. A data warehouse dimension provides the means to "slice and dice" data in a data warehouse. Dimensions provide structured labeling information to otherwise unordered numeric measures.

Types of dimension:

Conformed dimension: a conformed dimension is a dimension that has the same meaning to every fact with which it relates. Conformed dimensions allow facts and measures to be categorized and described in the same way across multiple facts and/or data marts, ensuring consistent reporting across the enterprise. A conformed dimension can exist as a single dimension table that relates to multiple fact tables within the same data warehouse, or as identical dimension tables in separate data marts.

Junk dimension: a junk dimension is a dimension table consisting of attributes that do not belong in the fact table or in any of the existing dimension tables. The designer is faced with the challenge of where to put such attributes; creating a new dimension for each of them would produce a vast number of new dimensions and a fact table with a very large number of foreign keys, so the remaining attributes are instead combined into a single junk dimension. The junk dimension should also contain a single row representing the blanks, whose surrogate key is used in the fact table for every row with a blank comment field.

Degenerate dimension: a dimension key, such as a transaction number, invoice number, ticket number, or bill-of-lading number, that has no attributes and hence does not join to an actual dimension table. Degenerate dimensions are very common when the grain of a fact table represents a single transaction item or line item, because the degenerate dimension represents the unique identifier of the parent. Degenerate dimensions often play an integral role in the fact table's primary key.

Dimensional modeling structure: the dimensional model is built on a star-like schema, with dimensions surrounding the fact table. To build the schema, the following design steps are used: choose the business process, declare the grain, identify the dimensions, and identify the facts.
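A junk dimension can be sketched briefly: a handful of low-cardinality flags that fit in no other dimension are combined into every possible row, each assigned a surrogate key that the fact table references in place of several separate foreign keys. The flag names below are hypothetical.

```python
# Junk dimension sketch: combine leftover low-cardinality flags into one small table.
import itertools
import pandas as pd

payment_types   = ["cash", "card"]
gift_wrap_flags = ["Y", "N"]
return_flags    = ["Y", "N"]

rows = list(itertools.product(payment_types, gift_wrap_flags, return_flags))
dim_junk = pd.DataFrame(rows, columns=["payment_type", "gift_wrap_flag", "return_flag"])
dim_junk.insert(0, "junk_key", range(1, len(dim_junk) + 1))   # surrogate key used by the fact table

print(dim_junk)   # 8 rows replace three extra foreign keys on the fact table
```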

Benefits of dimensional modeling: understandability, query performance, and extensibility.

DATA MODELLING

Data modeling is the process of creating a data model by applying formal data model descriptions using data modeling techniques. In other words, data modeling can be defined as a method used to define and analyze the data requirements needed to support the business processes of an organization.

Conceptual, logical and physical schemas:

Conceptual schema: consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model.

Logical schema: consists of descriptions of tables and columns, object-oriented classes, and XML tags, among other things.

Physical schema: describes the physical means by which data are stored. This is concerned with partitions, CPUs, table spaces, and the like.

Modeling methodologies: data models represent information areas of interest. While there are many ways to create data models, only two standard methodologies, top-down and bottom-up, are used in real-world environments.

Bottom-up models are often the result of a reengineering effort. They usually start with existing data structures: forms, fields on application screens, or reports. These models are usually physical, application-specific, and incomplete from an enterprise perspective. They may not promote data sharing, especially if they are built without reference to other parts of the organization.

Top-down logical data models, on the other hand, are created in an abstract way by getting information from people who know the subject area. A system may not implement all the entities in a logical model, but the model serves as a reference point or template.
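To make the conceptual/logical/physical distinction concrete, the sketch below takes one conceptual assertion ("a customer places orders") and expresses it as a logical model in code; the physical schema would then add storage decisions such as table spaces, partitions, and indexes. The entities and attributes are invented for the example.

```python
# Logical-model sketch for the conceptual assertion "a Customer places Orders".
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Customer:
    customer_id: int          # would become a primary key in the physical schema
    name: str
    city: str

@dataclass
class Order:
    order_id: int
    customer_id: int          # foreign key realising the "places" relationship
    order_date: date
    total_amount: float

# The physical schema adds what the logical model leaves open,
# e.g. partitioning orders by order_date or indexing customer_id.
orders: List[Order] = [Order(1, 42, date(2024, 1, 15), 99.50)]
print(orders[0])
```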

DATA MINING

Data mining is the process of extracting patterns from large data sets by combining methods from statistics and artificial intelligence with database management.

Process:

1. Pre-processing: before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns already present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined in an acceptable timeframe. A common source for data is a data mart or data warehouse. Pre-processing is essential to analyze multivariate data sets before clustering or data mining. The target set is then cleaned; cleaning removes the observations with noise and missing data. The cleaned data are reduced into feature vectors, one vector per observation. A feature vector is a summarized version of the raw data observation. The features selected depend on the objectives; obviously, selecting the "right" features is fundamental to successful data mining. The feature vectors are divided into two sets, the "training set" and the "test set". The training set is used to "train" the data mining algorithm(s), while the test set is used to verify the accuracy of any patterns found.

2. Data mining: data mining commonly involves four classes of tasks. Clustering is the task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data. Classification is the task of generalizing known structure to apply to new data; for example, an email program might attempt to classify an email as legitimate or spam. Common classification algorithms include decision tree learning, nearest neighbor, naive Bayesian classification, neural networks and support vector machines. Regression attempts to find a function which models the data with the least error. Association rule learning searches for relationships between variables; for example, a supermarket might gather data on customer purchasing habits and use association rule learning to determine which products are frequently bought together, using this information for marketing purposes. This is sometimes referred to as market basket analysis.

3. Results validation: the final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by the data mining algorithms are necessarily valid. It is common for the algorithms to find patterns in the training set which are not present in the general data set; this is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learnt patterns are applied to this test set and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish spam from legitimate emails would be trained on a training set of sample emails; once trained, the learnt patterns would be applied to the test set of emails on which it had not been trained, and the accuracy of these patterns can then be measured by how many emails they correctly classify. A number of statistical methods may be used to evaluate the algorithm, such as ROC curves. If the learnt patterns do not meet the desired standards, it is necessary to re-evaluate and change the pre-processing and data mining; if they do meet the desired standards, the final step is to interpret the learnt patterns and turn them into knowledge.
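The pre-processing, training, and validation steps above map directly onto a short sketch. Scikit-learn is used here as an assumption (the text names no particular toolkit), and the synthetic feature vectors stand in for a real spam/legitimate data set.

```python
# Classification sketch: train on a training set, validate on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Pre-processing stand-in: a synthetic target data set already reduced to feature vectors.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Divide the feature vectors into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Data mining step: learn patterns (a decision tree) from the training set only.
model = DecisionTreeClassifier(max_depth=4, random_state=0)
model.fit(X_train, y_train)

# Results validation: apply the learnt patterns to unseen data to detect overfitting.
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy: ", accuracy_score(y_test, model.predict(X_test)))
```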

OLTP VS OLAP

Source of data:
OLTP - Operational data; OLTP systems are the original source of the data.
OLAP - Consolidated data; OLAP data comes from the various OLTP databases.

Purpose of data:
OLTP - To control and run fundamental business tasks.
OLAP - To help with planning, problem solving, and decision support.

What the data reveals:
OLTP - A snapshot of ongoing business processes.
OLAP - Multi-dimensional views of various kinds of business activities.

Inserts and updates:
OLTP - Short and fast inserts and updates initiated by end users.
OLAP - Periodic long-running batch jobs refresh the data.

Processing speed:
OLTP - Typically very fast.
OLAP - Depends on the amount of data involved; batch data refreshes and complex queries may take many hours; query speed can be improved by creating indexes.

Queries:
OLTP - Relatively standardized and simple queries returning relatively few records.
OLAP - Often complex queries involving aggregations.

Space requirements:
OLTP - Can be relatively small if historical data is archived.
OLAP - Larger, due to the existence of aggregation structures and historical data; requires more indexes than OLTP.

Database design:
OLTP - Highly normalized, with many tables.
OLAP - Typically de-normalized, with fewer tables; uses star and/or snowflake schemas.

Backup and recovery:
OLTP - Back up religiously; operational data is critical to run the business, and data loss is likely to entail significant monetary loss and legal liability.
OLAP - Instead of regular backups, some environments may consider simply reloading the OLTP data as a recovery method.
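The contrast in query styles can be illustrated briefly: an OLTP query touches a few rows by key, while an OLAP query aggregates many rows across business dimensions. The data and column names below are invented, and pandas stands in for both kinds of system.

```python
# OLTP-style lookup vs. OLAP-style multi-dimensional aggregation (hypothetical data).
import pandas as pd

sales = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "region":   ["East", "East", "West", "West"],
    "quarter":  ["Q1", "Q2", "Q1", "Q2"],
    "amount":   [100.0, 150.0, 120.0, 80.0],
})

# OLTP: short, standardized query returning a few records by key.
print(sales[sales["order_id"] == 3])

# OLAP: aggregation giving a multi-dimensional view (region x quarter).
print(sales.pivot_table(index="region", columns="quarter", values="amount", aggfunc="sum"))
```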

DATAMART VS DATA WAREHOUSE

Data Mart:
1. A data mart is usually sponsored at the department level and developed with a specific issue or subject in mind; a data mart is a data warehouse with a focused objective.
2. A data mart is used at a business division/department level.
3. A data mart is a subset of data from a data warehouse. Data marts are built for specific user groups.
4. By providing decision makers with only a subset of data from the data warehouse, privacy, performance and clarity objectives can be attained.

Data Warehouse:
1. A data warehouse is a subject-oriented, integrated, time-variant, non-volatile collection of data in support of decision making.
2. A data warehouse is used at an enterprise level.
3. A data warehouse is simply an integrated consolidation of data from a variety of sources that is specially designed to support strategic and tactical decision making.
4. The main objective of a data warehouse is to provide an integrated environment and a coherent picture of the business at a point in time.

STAGING AREA

What is a staging area, and why do we need it in a data warehouse?

If the target and source databases are different and the target table volume is high (it may contain millions of records), then without a staging table we would have to design the Informatica mapping to use a lookup to find out whether each record already exists in the target table. Since the target has huge volumes, building the lookup cache is costly and hurts performance. If we create staging tables in the target database, we can simply do an outer join in the source qualifier to determine insert versus update; this approach gives good performance and avoids a full table scan on the target. We can also create indexes on the staging tables, and since these tables are designed for a specific application, they do not impact any other schemas or users. While processing flat files into the data warehouse we can also perform cleansing. Data cleansing, also known as data scrubbing, is the process of ensuring that a set of data is correct and accurate; during data cleansing, records are checked for accuracy and consistency. Since it is a one-to-one mapping from the ODS to staging, we truncate and reload. We can create indexes in the staging area so that the source qualifier performs at its best, and with a staging area there is no need to rely on Informatica transformations to determine whether a record already exists.

Data cleansing: weeding out unnecessary or unwanted things (characters, spaces, etc.) from incoming data to make it more meaningful and informative.
Data merging: data can be gathered from heterogeneous systems and put together.
Data scrubbing: the process of fixing or eliminating individual pieces of data that are incorrect, incomplete or duplicated before the data is passed to the end user. Data scrubbing is aimed at more than eliminating errors and redundancy; the goal is also to bring consistency to data sets that may have been created with different, incompatible business rules.
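The insert-versus-update decision described above can be sketched with an outer join between the staging table and the target: keys found only in staging are inserts, keys found in both with changed values are updates. The table and column names are hypothetical, and pandas stands in for the database join that a source qualifier would perform.

```python
# Insert/update detection via an outer join between staging and target (hypothetical tables).
import pandas as pd

staging = pd.DataFrame({"customer_id": [1, 2, 3],
                        "city": ["Austin", "Boston", "Chicago"]})
target = pd.DataFrame({"customer_id": [1, 2],
                       "city": ["Austin", "Baltimore"]})

merged = staging.merge(target, on="customer_id", how="left",
                       suffixes=("_stg", "_tgt"), indicator=True)

inserts = merged[merged["_merge"] == "left_only"]                 # new keys -> insert
updates = merged[(merged["_merge"] == "both") &
                 (merged["city_stg"] != merged["city_tgt"])]      # changed rows -> update

print("rows to insert:", inserts["customer_id"].tolist())   # [3]
print("rows to update:", updates["customer_id"].tolist())   # [2]
```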
