Wednesday, July 22, 2020

ORACLE DATAPUMP
-----------------------------

DataPump Introduction
----------------------
DataPump: Data Pump is a built-in Oracle utility for taking logical backups. It is a technology for high-speed movement of data and metadata from one database to another.
Data Pump is available in Oracle 10g Release 1 and later and is a faster, more flexible alternative to the traditional exp and imp utilities. Data Pump runs within the
Oracle server processes and can read directly from database files and write directly to files on the server. Data Pump is an extension of traditional EXP/IMP that
provides advantages such as security and speed. Traditional export/import had a real security problem: for example, if we export the SCOTT schema to a .dmp file,
that .dmp file can easily be imported into any other database machine; traditional export/import provides no protection for the .dmp file.

How Datapump Works:
-------------------
DataPump Export: Oracle Data Pump creates a master table in the corresponding schema, and the data is transferred in parallel to the dump file (.dmp).

DataPump Import: Oracle Data Pump works in the reverse order; the master table is created from the dump file, and the original objects are then rebuilt from it.

After the export or import finishes, Oracle automatically drops the master table.
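While a job is running, the master table is visible as a normal table in the job owner's schema, and the job can be monitored from the dictionary. A minimal sketch (the job names returned depend on your own running jobs):

sql> select owner_name, job_name, operation, state from dba_datapump_jobs;
sql> select object_name from dba_objects where object_name like 'SYS_EXPORT%';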

Note: Whenever a Data Pump export is done using the parallel option, the import should also be done with the same option; otherwise it will affect the time taken by the import (see the sketch below).
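A minimal sketch of a matched parallel export and import, assuming illustrative file names and a parallelism of 4 (%U generates a unique number for each dump file):

$ expdp system/***** directory=datapump dumpfile=fullprod_%U.dmp logfile=fullprod.log full=y parallel=4
$ impdp system/***** directory=datapump dumpfile=fullprod_%U.dmp logfile=imp_fullprod.log full=y parallel=4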

A directory object should be created in SQL*Plus on both the export (expdp) side and the import (impdp) side.

An OS-level directory should be created on both the export and import servers as well.

The Data Pump directory is a security feature.

You must create the directory at both the OS level and the DB level.

On the target database, you need to create a directory again in order to use Data Pump.

The directory name can be different on the target server (see the sketch below).

This adds one more layer of security for export/import.
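For example, a minimal sketch of a differently named directory object on the target server (the path, directory name and grantee are assumptions for illustration):

# mkdir -p /u02/dp_imp_dir
sql> create or replace directory imp_dir as '/u02/dp_imp_dir';
sql> grant read,write on directory imp_dir to scott;
$ impdp scott/tiger directory=imp_dir dumpfile=scott_bkp.dmp logfile=imp_scott.log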

The Following Levels of Datapump export/import are possible
-------------------------------------------------------------

1)Database Level
2)Schema Level
3)Table Level
4)Row Level
5)Tablespace Level

   
In this example I am showing a production server (source) and a testing server (target)
----------------------------------------------------------------------------
First, create a directory at the OS level:

# mkdir -p /u02/dp_exp_dir

Second, create a directory at the SQL level:

sql> create directory datapump as '/u02/dp_exp_dir';

Grant permissions on the directory:

sql> grant read,write on directory datapump to scott;   (better to grant to the SYSTEM user)

To view directory information:

sql> select * from dba_directories;

production server
----------------- 


# mkdir -p /u02/dp_exp_dir

Second, create a directory at the SQL level:

sql> create directory datapump as '/u02/dp_exp_dir';

Grant permissions on the directory:

sql> grant read,write on directory datapump to scott;   (better to grant to the SYSTEM user)

To view directory information:

sql> select * from dba_directories;

Test server
-----------

# mkdir -p /u02/dp_imp_dir

sqlplus '/as sysdba'

Second, create a directory at the SQL level:

sql> create directory datapump as '/u02/dp_imp_dir';

sql> create or replace directory datapump as '/u01/imp_dir';   (use create or replace to redefine an existing directory to a new path)

Grant permissions on the directory:

sql> grant read,write on directory datapump to scott;   (better to grant to the SYSTEM user)

To know the options of Data Pump export:

$ expdp help=y

To take the database level export
-----------------------------------

$expdp directory=datapump dumpfile=fullprod.dmp logfile=fullprod.log full=y

To take the schema level export
-----------------------------------

$ expdp directory=datapump dumpfile=scott_bkp.dmp logfile=scott_bkp.log schemas=SCOTT


To take the table level export
-----------------------------------

$ expdp directory=datapump dumpfile=emp_bkp.dmp logfile=emp_bkp.log tables='SCOTT.EMP'   (here SCOTT is the owner and EMP is the table)


To take the row level export
-----------------------------------

$ expdp directory=datapump dumpfile=emprows_bkp.dmp logfile=emprows_bkp.log tables='SCOTT.EMP' query=\"where deptno=10\"


production server
-----------------

$ expdp directory=datapump dumpfile=scott_bkp.dmp logfile=scott_bkp.log schemas='SCOTT'

sqlplus '/as sysdba'

$ cd /u02/dp_exp_dir/

dp_exp_dir]$ ls -lrt

scott_bkp.log
scott_bkp.dmp

$ expdp directory=datapump dumpfile=emp_bkp.dmp logfile=emp_bkp.log tables='SCOTT.EMP'

To Import a full database
----------------------

sqlplus '/as sysdba'

$ impdp directory=datapump dumpfile=fullprod.dmp logfile=imp_fullprod.log full=y

To know the options of Data Pump import:

$ impdp help=y

To Import a schema
----------------

$ impdp directory=datapump dumpfile=scott_bkp.dmp logfile=imp_schema.log remap_schema='SCOTT:SCOTT'   (import into the same schema)

$ impdp directory=datapump dumpfile=scott_bkp.dmp logfile=imp_schema.log remap_schema='SCOTT:ABC'   (import into a different schema, ABC)


To Import a table
--------------
$ impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_schema='SCOTT:SCOTT'

To Import a table to another user
-------------------------------

$ impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_schema='SCOTT:SYSTEM'


To Import a table to another tablespace(only in datapump)
------------------------------------------------------
$ impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_schema='SCOTT:SCOTT' remap_tablespace='MYDATA:MYTBS'

Production server
-----------------
dp_exp_dir]$ ls -lrth

scott_bkp.dmp

$ scp scott_bkp.dmp chaitanya@192.168.1.100:/u02/dp_imp_dir   (copy scott_bkp.dmp from the production server to /u02/dp_imp_dir on the testing server)
password:

Target server
-------------
$ cd /u02/dp_imp_dir
dp_imp_dir]$ ls -lrth
scott_bkp.dmp

$ impdp directory=datapump dumpfile=scott_bkp.dmp logfile=imp_scott.log remap_schema='SCOTT:IMP_TEST'   (here SCOTT is the source owner and IMP_TEST is the target user)

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Oracle - datapumps - Logical Backup
------------------------------------

Note: For a logical backup, the DB must be up and running.

Using a logical backup, we can move data from one server to another.

With the expdp and impdp utilities, we export from the source and import into the target database.

prod -> expdp -> .dmp -> move to target server using scp
-> .dmp -> impdp -> import -> target

Prior to the 10g version, we had exp and imp. From 10g, Oracle introduced Oracle Data Pump with expdp and impdp, which is 15-45 times faster than normal imports.

We also use logical backups for refresh activities.

If developers want to build a module, they need production data on the development server.
So the DBA will refresh data from prod to the development server as per the criteria.

There are three levels of Logical backup.

1. Full Database
2. Schema Level
3. Table Level

Export Parameters - job
------------------------
The expdp utility exports into a binary dump file (.dmp).

To get help on export parameters.
--------------------------------
$expdp help=y

Create a physical location to hold the dump files:
$ mkdir -p /orabackup/prod/dp


Create logical alias for physical directory.
----------------------------------------
sql>create or replace directory dp_dir as '/orabackup/prod/dp';
sql>grant read,write on directory dp_dir to public;


Verify the path/name in dba_directories
------------------------------------------
SQL> select directory_name,directory_path from dba_directories where directory_name like 'DP_DIR';

Two roles are needed for a user to export and import:

datapump_exp_full_database
datapump_imp_full_database

SQL> select role from dba_roles where role like '%DATAPUMP%';

e.g., to allow the SCOTT user to take a backup:
sql>grant datapump_exp_full_database to scott;
sql>grant datapump_imp_full_database to scott;

By default the SYSTEM user has these roles, so the SYSTEM user can be used to take logical backups.
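To verify which users or roles actually hold these grants, a query along these lines can be used (a minimal sketch; the output depends on your database):

sql> select grantee, granted_role from dba_role_privs where granted_role like 'DATAPUMP%';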
---------------------------------------------------------------
Full Database backup
--------------------
$expdp system/sys123 directory=dp_dir dumpfile=expdpfull220720.dmp logfile=expdpfull220720.log full=y compression=ALL job_name=j1

Interactive Method
--------------------
Jobs can be started, continued, stopped, killed, monitored (status) and their parallelism changed interactively.

To stop the dp job
--------------------

use ctrl+c
export> stop_job
(answer yes when prompted)

Verify the job - status
-------------------------

sql>select job_name,state,operation from dba_datapump_jobs;

To continue the dp job:

1. attach
2. continue

$ expdp system/sys123 attach=P1
export> continue_client
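While attached, the job can also be monitored and its parallelism adjusted interactively before resuming; a minimal sketch (the degree of parallelism is illustrative):

export> status
export> parallel=2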


To terminate (kill) the job:

use ctrl+c
export> kill_job
(answer yes when prompted)

Schema Level Backup
--------------------
$expdp system/sys123 directory=dp_dir dumpfile=expdpscott.dmp logfile=expdpscott.log schemas=scott,hr,sh job_name=t1 compression=ALL content=metadata_only

Table Level backup
-------------------
$expdp system/sys123 directory=dp_dir dumpfile=expdpscotttab.dmp logfile=expdpscotttab.log tables=scott.emp,hr.departments job_name=r1 compression=ALL


How to diagnose errors during a logical backup (either expdp or impdp)?
---------------------------------------------------------------------

$ more expdpscotttab.log

The log file will have messages such as:
successfully completed
abnormally terminated
No space left on device (clean up the backup space)
object does not exist
user does not exist
user already exists
objects skipped
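To pull only the error lines out of a Data Pump log, something like the following can be used (the log file name is just the one from the example above):

$ grep -i "ORA-" expdpscotttab.log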


Parameters
----------
Full
Schemas
Directory
tables
dumpfile
logfile
content
compression
job_name
parfile

Using a parfile
-------------
A parfile is a parameter file; the export parameters are enclosed in it.

Create parfile using vi

$vi expdpfull.par
directory=dp_dir
dumpfile=expdpfull.dmp
logfile=expdpfull.log
full=y
job_name=p1
estimate=blocks

Provide read permissions (a parfile only needs to be readable by the oracle user, not executable):
$ chmod 644 expdpfull.par


$expdp system/sys123 parfile=expdpfull.par

Using the ESTIMATE / ESTIMATE_ONLY parameters

  with estimate=blocks|statistics

The size of the dump file is estimated during the export,
either at the block level or using table statistics.
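A minimal sketch of an export with an estimate based on statistics (the file names and schema are illustrative only):

$ expdp system/sys123 directory=dp_dir dumpfile=expdpscott_est.dmp logfile=expdpscott_est.log schemas=scott estimate=statistics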

With ESTIMATE_ONLY
the estimated dump file size is shown, but no export is performed.


$expdp system/sys123 directory=dp_dir full=y estimate_only=yes

Finding usage of each folder in current location
------------------------------------------------
#du -sh *

Error: Severity 1
/u01 threshold limit reached 90%
Action:

/u01 typically holds the inventory, binaries, network files,
and the alert log / trace files --- these can fill the filesystem, so housekeep them.

Error: Severity 2
/oradata - 90% threshold reached?
Action:
Find the MAXBYTES of the datafiles that have AUTOEXTEND on.
Check how close they are to the AUTOEXTEND limit.
Find space to clean up if there are any other files.
Ask the system engineer to add more space.
If another filesystem is available, add more datafiles there.


Number of databases on the server?
/etc/oratab

Number of instances currently running on the server?
ps -ef | grep pmon
or
ps -ef | grep smon

Which instances start on boot?
/etc/oratab (Y/N flag)
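A sample (hypothetical) /etc/oratab entry; the last field is the start-on-boot flag:

ORCL:/u01/app/oracle/product/19.0.0/dbhome_1:Y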


CONTENT
Specifies data to unload.
Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.

To export only the table structure without data,
use the CONTENT parameter with METADATA_ONLY.
The default is ALL (includes data).

eg:
$expdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=expdpscott220720.log schemas=scott,hr  content=metadata_only


Note:
With ESTIMATE_ONLY (finding the size of the dump file),
        there is no need to specify a dump file or log file.
It will not export; it just estimates.
------------------------------------------------------------
Parallel
--------

For a large database, the export can write to (and the import can load from) multiple dump files
for faster performance.
The value of the PARALLEL parameter depends on the number of CPUs.
eg: Number of cores: 2
parallel=2

eg:
$expdp system/sys123 directory=dp_dir dumpfile=expdpfull220720_%U.dmp logfile=expdpfull220720.log schemas=scott,hr,sh parallel=2

%U - generates a unique number for each dump file


COMPRESSION
Reduce the size of a dump file.
Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.


EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".

INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.
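As an illustration only (the schema and file names are assumptions), an export that skips indexes and statistics for the SCOTT schema might look like:

$ expdp system/sys123 directory=dp_dir dumpfile=expdpscott_noidx.dmp logfile=expdpscott_noidx.log schemas=scott exclude=INDEX exclude=STATISTICS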

QUERY: specific rows/records can be unloaded.
Predicate clause used to export a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".

$expdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=expdpscott220720.log QUERY='scott.emp:"WHERE deptno > 10"' 

VERSION
--------
Version of objects to export.
Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.

$expdp system/sys123 directory=dp_dir dumpfile=expdpscott037.dmp logfile=expdpscott0307.log QUERY='scott.emp:"WHERE deptno > 10"' version=12.2

eg: version=12.2 or version=LATEST
This can be used when migrating data from a source to a target of a different version,
e.g. from 12.1.0 to 12.2.0.

-------------------------------------------------------
Imports
----------
So far, we have used the expdp utility to dump into a .dmp file.

Now we dump the source DB into a .dmp file and scp it to the remote server;
using that .dmp file, we import into the target DB.

Note: scp - secure copy (copies files between servers)
eg:
$ scp  filename  root@servername:/path
eg: $ scp expdpscott.dmp oracle@chaitanyahost:/orabackup/dev/dp
Password: oracle123
Using impdp, we import into the target db.

prod -> expdp -> .dmp -> scp
-> .dmp -> impdp -> target db

$impdp help=y

The requirement can be:
full
table
schema
with/without compression
query
version

Having the dump file from the export,
we use it to import into the target db.

full export dump   -> can import full/schema/table/query
schema export dump -> can import schema/table/query
table export dump  -> can import table/query
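For example (the file names here are purely illustrative, reusing the full export taken above), a single table could be pulled out of a full export dump like this:

$ impdp system/sys123 directory=dp_dir dumpfile=expdpfull220720.dmp logfile=impdp_emp_only.log tables=scott.emp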

Before the export, find the count of objects in the schema so you can validate it in both source and target.
Check that the schema exists in both source and target.
Check that the tablespace associated with the schema in the target db is the same as in the source.

eg:
source   target
scott    scott
users    users
Note: In the target db, if the schema user does not exist, it will be created automatically during the import.
But make sure the tablespace exists in the target, the same as in the source, before the import (see the sketch below).
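If the target has to use a different tablespace name, REMAP_TABLESPACE can be added on the import; a minimal sketch (the schema and tablespace names are assumptions):

$ impdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=impdpscott220720.log remap_schema=scott:dev remap_tablespace=users:dev_data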


sql>select object_type,count(*) from all_objects where owner like 'CHAITANYA' group by object_type;

sql> select object_type,count(*) from all_objects where owner like 'ABHIRAM' group by object_type;

Count of Invalids
-----------------
sql> select owner,object_type,count(*) from all_objects where status like 'INVALID' group by object_type,owner;

PL/SQL objects (procedures/packages/functions/triggers) may become invalid after the import.


Execute the following script to validate and recompile invalid objects.
--------------------------------------------------
sql>@?/rdbms/admin/utlrp.sql

sql>select username,default_tablespace from dba_users where username like 'SCOTT';

sql>select tablespace_name from dba_tablespaces where tablespace_name like 'USERS';

----------------------------------------------------------
Schema Level
-------------
$ expdp system/sys123 directory=dp_dir dumpfile=expdpfull220720scott.dmp logfile=expdpfull220720scott.log schemas=scott

$impdp system/sys123 directory=dp_dir dumpfile=expdpfull220720scott.dmp logfile=impdpscott220720.log schemas=scott

Remap Schema
------------
Unload from one schema and load into a different schema
using
remap_schema=scott:dev

$expdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=expdpscott220720.log schemas=scott

$impdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=impdpscott220720.log schemas=scott remap_schema=scott:dev

sql>select username from dba_users where username like 'DEV';

sql>select object_type,count(*) from all_objects where owner like 'DEV' group by object_type;


If we reimport again, what will happen?

Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 5 error(s) at

ORA-39151: Table "DEV"."SALGRADE" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

ORA-31684: Object type USER:"DEV" already exists

Still want to continue the reimport - how?

Using table_exists_action parameter in impdp
--------------------------------------------
$impdp system/sys123 directory=dp_dir dumpfile=expdpfull220720scott.dmp logfile=impdpscott220720.log schemas=scott remap_schema=scott:dev table_exists_action=truncate

Note: TRUNCATE deletes the existing table rows and reuses the space, then loads the rows from the dump.


TABLE_EXISTS_ACTION
Action to take if imported object already exists.
Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.

VERSION
Version of objects to import.
Valid keywords are: [COMPATIBLE], LATEST, or any valid database version


TABLESPACES
Identifies a list of tablespaces to import.


TABLES
Identifies a list of tables to import.
For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_2020.

SCHEMAS
List of schemas to import.

REMAP_SCHEMA
Objects from one schema are loaded into another schema.


QUERY
Predicate clause used to import a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".


NOLOGFILE
Do not write log file [N].

PARALLEL
Change the number of active workers for current job.

PARFILE
Specify parameter file.

FULL
Import everything from source [Y].

HELP
Display help messages [N].

INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.

JOB_NAME
Name of import job to create.

LOGFILE
Log file name [import.log].


EXCLUDE
Exclude specific object types.

DIRECTORY
Directory object to be used for dump, log and SQL files.

DUMPFILE
List of dump files to import from [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ATTACH
Attach to an existing job.
For example, ATTACH=job_name.


CONTENT
Specifies data to load.
Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.



Export     Import
full       full/query/table/content/schema
schema     schema/table/query/content
table      table/query/content




Note: Here I am giving some info on Data Pump; details such as production/testing server names and IP addresses may differ in your environment.




THANK YOU FOR VIEWING MY BLOG FOR MORE UPDATES VISIT MY BLOG REGULARLY  https://chaitanyaoracledba.blogspot.com/
















































































