
Arjun Choubey: appsdbac@yahoo.com

Database Migration with Unix-pipe and Export/Import

Table of Contents
A. Introduction
B. Required utilities
   1. What is Unix Pipe?
   2. What is import/export?
   3. How fast is a table imported?
C. Migration Scripts
D. What I did to finish in 5 Hours
E. Contact Information

A. Introduction
Migration is the process of moving a database from one version or platform to
another. Export/Import is the only utility available to migrate a database across
platforms. Migrating a production database across platforms using export/import
requires downtime, and this becomes critical for companies whose production
database runs 24x7 and cannot afford a long production outage.

The aim of this document is to share the technical knowledge of using
export/import with a Unix pipe, which needs minimal downtime. Before finalizing
these scripts I tested all combinations of export/import. The results are based on
my experience and may vary depending upon server capacity. It took a little time
to fine-tune the scripts, but we migrated a 160 GB production database in 5 hours.

B. Required utilities
The migration scripts need the following three utilities:
• Unix pipe
• Oracle export (exp)
• Oracle import (imp)

1. What is Unix Pipe?


In Unix, a pipe is a unidirectional stream-communication abstraction. One
process writes to the "write end" of the pipe, and a second process reads
from the "read end" of the pipe. It is a very useful utility and I have used it in
the scripts.
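
As a small illustration (not part of the migration scripts, and using /tmp/demo_pipe
only as an example name), the following ksh snippet shows one process writing to a
named pipe while a second process reads from it:

#!/bin/ksh
# Create a named pipe (FIFO); mknod may live in /etc or /usr/sbin depending on the platform
PIPE=/tmp/demo_pipe
rm -f $PIPE
mknod $PIPE p

# One process writes to the "write end" of the pipe in the background ...
print "hello through the pipe" > $PIPE &

# ... and a second process reads from the "read end"
cat < $PIPE

rm -f $PIPE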

2. What is import/export?
The Oracle export (EXP) and import (IMP) utilities are used to perform logical
database backup and recovery. They are also used to move Oracle data from
one machine, database or schema to another.
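
For reference, a basic standalone export and import (without a pipe) look like the
commands below; the userid system/manager, the paths and the file names are only
placeholders:

# Full database export to a dump file (userid, paths and file names are placeholders)
exp system/manager full=y file=/u01/exp/full.dmp log=/u01/exp/full_exp.log

# Full database import from the same dump file
imp system/manager full=y file=/u01/exp/full.dmp log=/u01/exp/full_imp.log ignore=y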

3. How fast is a table imported?

If you need to monitor how fast rows are being imported by a running import
job, try the following SQL:

select substr(sql_text, instr(sql_text, 'INTO "'), 30) table_name,
       rows_processed,
       round((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60, 1) minutes,
       trunc(rows_processed /
             ((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60)) rows_per_min
  from sys.v_$sqlarea
 where sql_text like 'INSERT %INTO "%'
   and command_type = 2
   and open_versions > 0;

C. Migration Scripts
Save your export parameter file as exp_ALL.par
Save your import parameter file as imp_ALL.par (example parameter files are sketched below)
Save the following script as remote_exp_imp_all.ksh
Run the script from the Unix prompt as
remote_exp_imp_all.ksh > remote_exp_imp_all.out 2>&1 &
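
The parameter files themselves are not reproduced in this document, so the sketch
below only shows what minimal exp_ALL.par and imp_ALL.par files might contain. The
important point is that FILE= must point to the named pipes created by the script
(exp_pipe_ALL on the remote server, imp_pipe_ALL on the local server); the userid
and the remaining options are placeholders to adapt to your environment.

#!/bin/ksh
# Sketch only: generate example parameter files. The FILE= values must match the
# pipes used in the migration script; userid and the other options are placeholders.

# exp_ALL.par belongs on the REMOTE server (create it there, or copy it across);
# it writes into the remote pipe
cat > /u333/app/upgrade/exp_ALL.par <<EOF
userid=system/manager
full=y
file=/u333/app/upgrade/exp_pipe_ALL
buffer=100000000
statistics=none
log=/u333/app/upgrade/exp_ALL.log
EOF

# imp_ALL.par lives on the LOCAL server and reads from the local pipe
cat > $HOME/upgrade/imp_ALL.par <<EOF
userid=system/manager
full=y
file=$HOME/upgrade/imp_pipe_ALL
buffer=100000000
commit=y
ignore=y
log=$HOME/upgrade/imp_ALL.log
EOF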

--------------------------------Cut the scripts ----------------------------------------------


#!/bin/ksh
#
# Assumption
# REMOTE SERVER is the server on which the existing production database is running
# LOCAL SERVER is the new server on which the database will be imported.
# In this case the remote server is saqhdp01 and the database name is C4PR
# All scripts are executed from the LOCAL SERVER.
# Only the export parameter file is required on the REMOTE SERVER
# exp_ALL.par is the export parameter file
# imp_ALL.par is the import parameter file

set -xv

. $HOME/C4PR.env # C4PR.env sets Database environment

TIME_START=`date +%T`
touch start_ALL

R_SERVER=saqhdp01 # Remote database server


REMOTE_HOME=/u333/app # Location on remote server

L_PIPE=$HOME/upgrade/imp_pipe_ALL # Local Unix Pipe


R_PIPE=$REMOTE_HOME/upgrade/exp_pipe_ALL # Remote Unix Pipe

# Create the local and remote pipes and change the permissions
# Remove the files prior

rsh $R_SERVER "/bin/rm $R_PIPE"


/usr/bin/rm $L_PIPE

rsh $R_SERVER "/etc/mknod $R_PIPE p; chmod 777 $R_PIPE"


/usr/sbin/mknod $L_PIPE p

# Start the import from the pipe in the background
#
imp parfile=$HOME/upgrade/imp_ALL.par > $HOME/upgrade/imp_ALL.out 2>&1 &

# Start the export on the remote server in the background
#
rsh $R_SERVER ". /u333/app/upgrade/C4PR.env; \
exp parfile=/u333/app/upgrade/exp_ALL.par > /u333/app/upgrade/exp_ALL.out 2>&1" > ALL.out 2>&1 &

# cat (concatenate) the export pipe on the remote server into the local import pipe
#
rsh $R_SERVER "cat < $R_PIPE " | cat > $L_PIPE

TIME_END=`date +%T`
touch $HOME/upgrade/end_ALL

# Remove the local and remote pipes

rsh $R_SERVER "rm $R_PIPE"

rm $L_PIPE

mailx -s "Start/End time for piped export/import " appsdbac@yahoo.com <<EOF


Start : $TIME_START
End : $TIME_END
EOF
---------------------- End of scripts -------------------------------------------------------
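
Once the run completes, check the import output file for the standard imp completion
message and for any errors, for example:

# Check how the import finished (imp writes a standard completion message)
tail -5 $HOME/upgrade/imp_ALL.out

# A clean run ends with:
#   Import terminated successfully without warnings.
# Any problems show up as IMP-xxxxx / ORA-xxxxx lines:
grep -E 'IMP-|ORA-' $HOME/upgrade/imp_ALL.out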

D. What I did to finish in 5 Hours


To achieve this, I performed several iterations. At the end of each iteration the
results were analyzed and the problem areas were resolved. There was also support
from the internal Engineering, Operations, DBA and management teams. The following
points were considered in the final iterations.

• Gather the statistics from various iterations


• Create a blank database on RAID-1 file systems. Since import is a write-intensive
  operation, RAID-1 takes at least 30% less time.
• Distribute the data files based on the I/O pattern. STATSPACK results are helpful.
• Validate the following parameters:
  disk_asynch_io = false
  db_writer_processes = (number of CPUs / 2)
  _system_trig_enabled = FALSE
• Use a large buffer size. I used 100000000.
• Create rollback segments of a bigger size (INITIAL 1 GB and NEXT 100 MB).
• Increase the online redo log size; 200 MB in my case.
• Drop the snapshot logs before the export.
• Disable the database triggers before the export.
• Set the database in NOARCHIVELOG mode.
• Using the above script, export/import the database structure only, without
  indexes and constraints.
• Fix the invalid objects and compare the objects.

• With the help of the above script, prepare Unix-pipe scripts to run in parallel
  at the user (schema) level. Divide the users into multiple groups based on their
  timings so that all groups finish at around the same time. In our case the biggest
  table was MTL_MATERIAL_TRANSACTION of the INV user, with 14 million rows, and it
  was taking 3.5 hours, so I divided the groups in such a way that all of them would
  finish in 4 hours. I used 4 sets of Unix-pipe scripts.
• Also write scripts to build the indexes in parallel with the import (a sketch
  appears after this list). I wrote a script that checked the import logs at regular
  intervals (10 minutes) and automatically started rebuilding the indexes on the
  tables that had been imported during the last interval. This way the index rebuild
  completed almost alongside the import; the only remaining indexes were on the
  tables imported in the last 10 minutes. This script greatly reduced the index
  rebuild time.
• Finally compare the objects
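
The index-rebuild helper mentioned in the list above is not included in this
document; the ksh sketch below only illustrates the idea. It assumes a hypothetical
layout with one idx_<TABLE>.sql script per table (holding that table's CREATE INDEX
statements), and the system/manager login is a placeholder.

#!/bin/ksh
# Sketch only: poll the import output and rebuild indexes for tables whose rows
# have already been imported. Assumes one idx_<TABLE>.sql script per table in
# $SQL_DIR (hypothetical layout).

. $HOME/C4PR.env                          # sets ORACLE_HOME, ORACLE_SID, PATH

IMP_LOG=$HOME/upgrade/imp_ALL.out
SQL_DIR=$HOME/upgrade/index_sql
DONE_LIST=$HOME/upgrade/indexed_tables.lst
touch $DONE_LIST

# Stop the loop manually (or add an exit test) once the import has finished
while :
do
    # imp logs completed tables as: . . importing table "EMP"  14 rows imported
    grep 'rows imported' $IMP_LOG |
    sed 's/.*importing table *"\([^"]*\)".*/\1/' |
    while read TABLE
    do
        # Skip tables whose index rebuild was started in an earlier pass
        grep -q "^${TABLE}\$" $DONE_LIST && continue

        if [ -f $SQL_DIR/idx_${TABLE}.sql ]; then
            # system/manager is a placeholder login
            sqlplus -s system/manager @$SQL_DIR/idx_${TABLE}.sql \
                > $SQL_DIR/idx_${TABLE}.log 2>&1 &
            echo $TABLE >> $DONE_LIST
        fi
    done

    sleep 600                             # wait 10 minutes before the next pass
done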

E. Contact Information
Send your questions and feedback to appsdbac@yahoo.com
