Take a logical backup of your main production server
Oracle Data Pump lets you export from a higher database version and import into a lower one. The export provides a VERSION parameter in which you specify the version of the lower target database. For example, if your source is 12c and you want to import the 12c backup into a lower 11g version, check the exact version of the 11g database and put it in the VERSION parameter of the 12c export script.
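Before writing the export parfile, confirm the exact version of the target 11g database. A standard query for this (run on the 11g side):

SQL> select version from v$instance;

Use the value it returns, for example 11.2.0.4.0 as in the parfile below, for the VERSION parameter of the 12c export.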
Source 12c
$ mkdir /u02/EXP_DIR
$ sqlplus / as sysdba
SQL> create directory EXP_DIR as '/u02/EXP_DIR';
Directory created.
SQL> grant read, write on directory EXP_DIR to system;
SQL> grant exp_full_database, imp_full_database to system;
(SYS owns the directory object and needs no grant; the grants to SYSTEM matter only if you run Data Pump as a non-SYS user.)
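To confirm the directory object points to the correct path, you can query the dictionary:

SQL> select directory_name, directory_path from dba_directories where directory_name = 'EXP_DIR';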
NOTE: If both sides run the same database version, there is no need to mention the VERSION parameter during the full export on the production source database.
vi EXPDP_full.par
USERID='/ as sysdba'
DIRECTORY=EXP_DIR
DUMPFILE=EXPDP_FULL.dmp
LOGFILE=EXPDP_FULL.log
FULL=Y
VERSION=11.2.0.4.0
Press ESC, then type :wq to save and exit.
To run this job in the background, use the nohup command:
nohup expdp parfile=EXPDP_full.par &
jobs -l
nohup creates an output file, nohup.out, which you can follow with:
tail -100f nohup.out
Also check the export log file, EXPDP_FULL.log, for more information.
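You can also watch the Data Pump job itself from another SQL*Plus session using the standard dictionary view:

SQL> select job_name, operation, state from dba_datapump_jobs;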
Also take the DDL of the production source database's tablespaces before the import on the target machine:
SET LONG 99999999
SET LINESIZE 300
SELECT tablespace_name, file_id, file_name, bytes/1024/1024 AS size_mb, autoextensible
FROM dba_data_files;

SELECT dbms_metadata.get_ddl('TABLESPACE','&TABLESPACE_NAME') FROM DUAL;
After generating the DDL for all tablespaces, run it on the target database where you want to import the full database, so that every tablespace already exists there.
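If there are many tablespaces, one way to generate the DDL for all of them in a single pass is to spool it into a script. This is only a sketch; the excluded names below are the usual default tablespaces and may differ in your database:

SET LONG 99999999 PAGESIZE 0 FEEDBACK OFF
SPOOL /tmp/create_tablespaces.sql
SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name)
FROM dba_tablespaces
WHERE tablespace_name NOT IN ('SYSTEM','SYSAUX','TEMP','UNDOTBS1');
SPOOL OFF

Run the spooled script on the target before starting the import.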
Now prepare the target machine. Create a database with the same name as the source, either during installation of the Oracle software or afterwards with the DBCA utility. The directory structure must match the production source machine: check the pfile so that the control file, datafile, and redo log file paths are in the same locations as on the source. If the production source database has a number of redo log groups, create the same groups and members on the target machine using the commands below. Then transfer the exported dump file to the target machine, for example with scp as shown next.
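A copy over scp might look like this (the hostname target-host is a placeholder for your environment):

$ scp /u02/EXP_DIR/EXPDP_FULL.dmp oracle@target-host:/u02/EXP_DIR/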
On the production source machine, first check the redo group info:
SELECT group#, thread#, sequence#, bytes/1024/1024 AS size_mb, members, status
FROM v$log;

SELECT group#, member FROM v$logfile;
Target Machine
Based on the output of the queries above, create all the groups and members at the same locations on the target machine:
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/app/oracle/oradata/orcl/redo08.log', '/u01/app/oracle/oradata/orcl/redo09.log') SIZE 100G;
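After adding the groups, you can re-run the v$log query on the target to confirm that the groups, sizes, and member counts match the source:

SQL> select group#, bytes/1024/1024 as size_mb, members, status from v$log;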
Once all the redo log groups on the target match the production machine, you are ready for the import.
NOTE: Before starting the import, make sure the tablespace datafile sizes match on the production and target machines, and that the SGA size of the target machine is the same as in the source database's pfile.
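To verify, run the dba_data_files query shown earlier on both databases, and compare the SGA settings with:

SQL> show parameter sga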
$ mkdir /u02/EXP_DIR
$ sqlplus / as sysdba
SQL> create directory EXP_DIR as '/u02/EXP_DIR';
Directory created.
SQL> grant read, write on directory EXP_DIR to system;
SQL> grant exp_full_database, imp_full_database to system;
Now create a parfile for the import operation:
vi IMPDP_full.par
USERID='/ as sysdba'
LOGFILE=impdp_full.log
DUMPFILE=EXPDP_full.dmp
DIRECTORY=EXP_DIR
FULL=Y
Press ESC, then type :wq to save and exit.
Now run the import into 11g from the 12c source dump. No VERSION parameter is needed here; the dump file was already exported in 11.2.0.4.0 format.
nohup impdp parfile=IMPDP_full.par &
jobs -l
As before, nohup creates an output file, nohup.out, which you can follow with:
tail -100f nohup.out
Also check the import log file, impdp_full.log, for more information.
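A quick way to scan the import log for errors once the job finishes:

$ grep ORA- impdp_full.log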
After the import operation completes, review the log information carefully. The database is now refreshed on the new node, and you can log in and check all the data.
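As a simple sanity check after the refresh, you can compare per-schema object counts between source and target; which schemas matter is of course specific to your system:

SQL> select owner, count(*) from dba_objects group by owner order by owner;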
Please share this blog with your colleagues or friends. Your suggestions and feedback are very helpful for everyone who comes to this site to learn from oracleocpworld.com.
Please comment here with any query related to the above post. You can email me at oracleocpworld@gmail.com.