
***********************************************************************************************************

Keep data consistency when using Oracle exp/expdp


Note: "consistent=y" has been replaced in (oracle 10g onwards) data pump export by flashback_scn or flashback_time. Export parameter setting of CONSISTENT=Y This setting if allowed to take default enables consistency at the table level and not across tables. If set to YES guarantees read consistency between multiple tables. The setting CONSISTENCY=Y causes SET TRANSACTION READONLY to be executed for your export session SET TRANSACTION READONLY means all subsequent queries are consistent to the point in time when you issued the SET. So all your data remains consistent despite committed changes made by other sessions. CONSISTENT = N is the default setting. When this parameter is used in the exports, one should know that he should have sufficiently large UNDO or ROLLBACK segments else you are bound to get ORA-01555 snapshot too old error. The original Export utility, when setting consistent=y, will create export dump file of database objects from the point in time at the beginning of the Export session. - In 24*7 running database active DML are being performed by any end users while we are performing export of database. Means if we want take consistent image of database in export backup then we should need to specify exp consistent=y command. b. Without setting values for FLASHBACK_SCN or FLASHBACK_TIME, Data Pump Export utility may create an inconsistent export. c. To insure a consistent export with Data Pump export, either set the FLASHBACK_SCN or FLASHBACK_TIME parameter, or restart the database in restrict mode before the export session starts. Ex: expdp system/passwd directory=flsh dumpfile=user001_2.dmp logfile =user001_2.log schemas=usr001 flashback_time=TO_TIMESTAMP (TO_CHAR (SYSDATE, YYYY-MM-DD HH24:MI:SS), YYYY-MM-DD HH24:MI:SS) Note: CONSISTENT=y is unsupported for exports that are performed when you are connected as user SYS or you are using AS SYSDBA, or both.

***********************************************************************************************************

Understanding COMPRESS parameter in export

The default, COMPRESS=y, causes Export to flag table data for consolidation into one initial extent upon import. If extent sizes are large (for example, because of the PCTINCREASE parameter), the allocated space will be larger than the space required to hold the data.

If you specify COMPRESS=n, Export uses the current storage parameters, including the values of initial extent size and next extent size. The values of these parameters may be the values specified in the CREATE TABLE or ALTER TABLE statements, or values modified by the database system. For example, the NEXT extent size may be modified if the table grows and the PCTINCREASE parameter is nonzero.

In short: if we specify COMPRESS=y during export, then when the table is created during import its INITIAL extent will be as large as the sum of all the extents allocated to the table in the original database. If we specify COMPRESS=n during export, the table created during import will use the same INITIAL extent value as in the original database.
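For example (assuming the original exp utility and a placeholder table), the two settings can be compared by exporting twice and then, after import, checking the resulting extents in the dictionary:

exp system/passwd file=usr001_y.dmp tables=usr001.t1 compress=y
exp system/passwd file=usr001_n.dmp tables=usr001.t1 compress=n

SQL> SELECT segment_name, initial_extent, next_extent
     FROM   dba_segments
     WHERE  owner = 'USR001' AND segment_name = 'T1';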

http://oracleadmins.wordpress.com/2008/08/05/understanding-compress-parameter-in-export/
http://tahiti.oracle.com

**************************************************************************
What is the difference between Traditional Export and Data Pump?

o Data Pump operates on a group of files called a dump file set, whereas traditional export operates on a single file.
o Data Pump accesses files on the server (using Oracle directory objects). Traditional export can access files on both client and server (it does not use Oracle directories).
o exp/imp can use the client machine's resources for taking the backup, but Data Pump works only on the server.
o Traditional export (exp/imp) represents database metadata as DDL in the dump file; Data Pump represents it in XML document format.
o Data Pump supports parallel execution; exp/imp is single-stream.
o In traditional export, table extent compression is done with the COMPRESS option, whereas in Data Pump the COMPRESSION parameter compresses the dump file itself.
o Data Pump gives better control over the job than exp/imp, with START, STOP, and RESTART options.
o With Data Pump, export and import can be taken over the network using database links, even without generating a dump file, via the NETWORK_LINK parameter.
o Data Pump does not support sequential media such as tapes, but traditional export does.
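As a sketch of two of these differences (directory, dump file, schema, and database link names are placeholders): the %U substitution variable lets PARALLEL write a multi-file dump file set, and NETWORK_LINK pulls data over a database link without a dump file on the source:

expdp system/passwd directory=flsh dumpfile=usr001_%U.dmp logfile=usr001_par.log schemas=usr001 parallel=4

expdp system/passwd directory=flsh dumpfile=usr001_net.dmp logfile=usr001_net.log schemas=usr001 network_link=src_db_link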

http://www.acehints.com/2012/02/datapump-vs-expimp-difference-or.html

One way to determine the objects that will or can be exported in the different modes is to look at the three DBA views DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS. Each of these views, when queried, gives a list and a short description of the specific object-type paths that the INCLUDE and EXCLUDE parameters can reference, depending on the object you are exporting or importing.
SQL> SELECT object_path, comments FROM table_export_objects where object_path like 'TABLE%';
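A path returned by this query can then be fed to INCLUDE or EXCLUDE. As a sketch (directory, dump file, schema, and table names are placeholders; a parameter file avoids shell-quoting problems with the double quotes):

# usr001_tab.par
directory=flsh
dumpfile=usr001_tab.dmp
schemas=usr001
include=TABLE:"IN ('T1','T2')"

expdp system/passwd parfile=usr001_tab.par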

****************************************************

Online Index Rebuild (Oracle 11g): Behind the Scenes


http://fordba.wordpress.com/2011/04/05/online-ndx-rebuild/
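As a quick illustration (the index name is a placeholder), an online rebuild is issued as:

SQL> ALTER INDEX usr001.t1_bix REBUILD ONLINE;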
Note: We can't rebuild a BITMAP index online in Oracle 9i, but from 10g onwards we can.

**********************************************

How to dump Oracle Data Block?


Often while doing instance tuning or SQL tuning, it is important to know the internal structure of an Oracle data block.

https://blogs.oracle.com/sysdba/entry/how_to_dump_oracle_data_block
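To dump a block you first need its file and block numbers. A typical sequence (the table name and the file/block numbers are placeholders; the dump is written to a trace file in the diagnostic trace directory):

SQL> SELECT dbms_rowid.rowid_relative_fno(rowid) AS file#,
            dbms_rowid.rowid_block_number(rowid) AS block#
     FROM   usr001.t1 WHERE ROWNUM = 1;

SQL> ALTER SYSTEM DUMP DATAFILE 4 BLOCK 128;   -- substitute the file#/block# returned above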

*********************************

On-line Table Reorganization and Redefinition


Tables can be reorganized and redefined (evolved) on-line with the DBMS_REDEFINITION package. The process is similar to an on-line rebuild of an index, in that the original table stays on-line while a new copy of the table is built. However, an index rebuild is a single operation, while table redefinition is a multi-step process. Table redefinition is started by the DBA creating an interim table based on the original table. The interim table can have a different structure than the original table, and will eventually take the original table's place in the database. While the table is being redefined, DML operations on the original table are captured in a materialized view log table (MLOG$_%). These changes are eventually transformed and merged into the interim table. When done, the names of the original and the interim tables are swapped in the data dictionary. At this point all users will be working on the new table and the old table can be dropped.

http://www.orafaq.com/node/4
On-line Table Redefinition can be used for:

o Adding, removing, or renaming columns of a table
o Converting a non-partitioned table to a partitioned table and vice versa
o Switching a heap table to an index-organized table and vice versa
o Modifying storage parameters
o Adding or removing parallel support
o Reorganizing (defragmenting) a table
o Transforming data in a table
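A minimal sketch of the call sequence (schema, table, and interim table names are placeholders; the interim table must already exist with the desired new structure):

SQL> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('USR001', 'T1');                   -- verify the table can be redefined
SQL> EXEC DBMS_REDEFINITION.START_REDEF_TABLE('USR001', 'T1', 'T1_INTERIM');   -- start capturing DML in the MV log
SQL> EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('USR001', 'T1', 'T1_INTERIM');  -- optionally merge changes before the switch
SQL> EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('USR001', 'T1', 'T1_INTERIM');  -- swap the names in the data dictionary

Dependent objects (indexes, grants, triggers, constraints) can be carried over with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS before FINISH_REDEF_TABLE is called.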
