Wednesday, July 22, 2020

ORACLE DATAPUMP
-----------------------------

DataPump Introduction
----------------------
Data Pump is a built-in Oracle utility for taking logical backups. It is a technology for high-speed movement of data and metadata from one database to another. Available from Oracle 10g Release 1 onward, it is a faster and more flexible alternative to the traditional exp and imp utilities. Data Pump runs within the Oracle server processes and can read directly from database files and write directly to files on the server. It is an extension of traditional EXP/IMP that provides advantages such as security and speed. Traditional export/import had a security problem: for example, if we export the scott schema to a .dmp file (dump file), that scott.dmp file can easily be imported into any other database machine, because traditional export/import provides no security for the .dmp file.

How Datapump Works:
-------------------
DataPump Export: Oracle Data Pump creates a master table in the corresponding schema, and the data is transferred in parallel to the dump file (.dmp).

DataPump Import: Oracle Data Pump works in the reverse order: the master table is created from the dump file, and the original tables are rebuilt from it.

After the export or import finishes, Oracle automatically drops the master table.

Note: Whenever a Data Pump export is done with the parallel option, the import should also be done with the same option; otherwise it will affect the time the import takes.
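As a sketch of that pairing (the degree, user, and file names below are assumptions for illustration, not from this post), the export and import commands should carry the same parallel setting:

```shell
# Sketch, assuming a 4-CPU server and hypothetical file names.
# %U makes Data Pump number one dump file per parallel worker.
DEG=4
EXP_CMD="expdp system directory=datapump dumpfile=full_%U.dmp logfile=full.log full=y parallel=${DEG}"
IMP_CMD="impdp system directory=datapump dumpfile=full_%U.dmp logfile=imp_full.log full=y parallel=${DEG}"
echo "$EXP_CMD"
echo "$IMP_CMD"
```

Keeping the degree in a single variable is simply a way to guarantee the export and the import use the same value.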

A directory must be created at the SQL level for both expdp and impdp.

An OS-level directory must be created for both expdp and impdp as well.

The Datapump directory is a security feature.

You must create the directory at both the OS level and the DB level.

On the target database, you need to create a directory again to use Datapump.

The directory name can be different on the target server.

This adds one more layer of security for export/import.

The Following Levels of Datapump export/import are possible
-------------------------------------------------------------

1)Database Level
2)Schema Level
3)Table Level
4)Row Level
5)Tablespace Level

   
In this example, the production server is the source and the testing server is the target
----------------------------------------------------------------------------
first create directory at OS level

# mkdir -p /u02/dp_exp_dir

second create a directory at sql level

sql> create directory datapump as '/u02/dp_exp_dir';

grant permissions on directory

sql> grant read, write on directory datapump to scott;   (better to grant to the system user)

To view directory information

sql> select * from dba_directories;

production server
----------------- 


# mkdir -p /u02/dp_exp_dir

second create a directory at sql level

sql> create directory datapump as '/u02/dp_exp_dir';

grant permissions on directory

sql> grant read, write on directory datapump to scott;   (better to grant to the system user)

To view directory information

sql>select * from dba_directories;

Test server
-----------

# mkdir -p /u02/dp_imp_dir

sqlplus '/as sysdba'

second create a directory at sql level

sql> create directory datapump as '/u02/dp_imp_dir';

sql> create or replace directory datapump as '/u02/dp_imp_dir';

grant permissions on directory

sql> grant read, write on directory datapump to scott;   (better to grant to the system user)

To see the options of Data Pump export:

$ expdp help=y

To take the database level export
-----------------------------------

$expdp directory=datapump dumpfile=fullprod.dmp logfile=fullprod.log full=y

To take the schema level export
-----------------------------------

$expdp directory=datapump dumpfile=scott_bkp.dmp logfile=scott_bkp.log schemas='scott'


To take the table level export
-----------------------------------

$expdp directory=datapump dumpfile=emp_bkp.dmp logfile=emp_bkp.log tables='SCOTT.EMP'   (here SCOTT is the owner and EMP is the table)


To take the row level export
-----------------------------------

$expdp directory=datapump dumpfile=emprows_bkp.dmp logfile=emprows_bkp.log tables='SCOTT.EMP' query=\"where deptno=10\"


production server
-----------------

$expdp directory=datapump dumpfile=scott_bkp.dmp logfile=scott_bkp.log schemas='SCOTT'

$cd /u02/dp_exp_dir/

dp_exp_dir]$ ls -lrt

scott_bkp.log
scott_bkp.dmp

$expdp directory=datapump dumpfile=emp_bkp.dmp logfile=emp_bkp.log tables='SCOTT.EMP'

To Import a full database
-------------------------

$impdp directory=datapump dumpfile=fullprod.dmp logfile=imp_fullprod.log full=y

To know options of datapump import

$ impdp help=y

To Import a schema
----------------

$impdp directory=datapump dumpfile=scott_bkp.dmp logfile=imp_schema.log remap_schema='SCOTT:SCOTT'

$impdp directory=datapump dumpfile=scott_bkp.dmp logfile=imp_schema.log remap_schema='SCOTT:ABC'


To Import a table
-----------------
$impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_schema='SCOTT:SCOTT'

To Import a table to another user
-------------------------------

$impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_schema='SCOTT:SYSTEM'


To Import a table to another tablespace (only in datapump)
------------------------------------------------------
$impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_schema='SCOTT:SCOTT' remap_tablespace='MYDATA:MYTBS'

Production server
-----------------
dp_exp_dir]$ ls -lrth

scott_bkp.dmp

$scp scott_bkp.dmp chaitanya@192.168.1.100:/u02/dp_imp_dir   (copy scott_bkp.dmp from the production server to the /u02/dp_imp_dir location on the testing server)
password:

Target server
-------------
$cd /u02/dp_imp_dir
dp_imp_dir]$ ls -lrth
scott_bkp.dmp

$impdp directory=datapump dumpfile=scott_bkp.dmp logfile=imp_scott.log remap_schema='SCOTT:IMP_TEST'   (here SCOTT is the owner and IMP_TEST is the target user)

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Oracle - datapumps - Logical Backup
------------------------------------

Note: For a logical backup, the DB must be up and running.

Using a logical backup, we move data from one server to another.

With the expdp and impdp utilities we export from the source and import into the target database.

prod -> expdp -> .dmp -> move to target server using scp
-> .dmp -> impdp -> import -> target

Prior to the 10g version we had exp and imp. From 10g, Oracle introduced Data Pump with expdp and impdp, which is 15-45x faster than the traditional exports/imports.

We also use logical backups for refresh activities.

If developers want to build a module, they need production data on the development server.
So the DBA will refresh data from prod to the development server as per the criteria.

There are three levels of Logical backup.

1. Full Database
2. Schema Level
3. Table Level

Export Parameters - job
------------------------
We export into a binary dump file (.dmp) with the expdp utility.

To get help on export parameters.
--------------------------------
$expdp help=y

Use physical location to hold dump files.
$mkdir -p /orabackup/prod/dp


Create logical alias for physical directory.
----------------------------------------
sql>create or replace directory dp_dir as '/orabackup/prod/dp';
sql>grant read,write on directory dp_dir to public;


Verify the path/name in dba_directories
------------------------------------------
SQL> select directory_name,directory_path from dba_directories where directory_name like 'DP_DIR';

Two roles are needed for a user to export and import:

datapump_exp_full_database
datapump_imp_full_database

SQL> select role from dba_roles where role like '%DATAPUMP%';

eg: If the scott user needs to take a backup:
sql>grant datapump_exp_full_database to scott;
sql>grant datapump_imp_full_database to scott;

The system user has these roles by default, so we can use the system user
to take logical backups.
---------------------------------------------------------------
Full Database backup
--------------------
$expdp system/sys123 directory=dp_dir dumpfile=expdpfull220720.dmp logfile=expdpfull220720.log full=y compression=ALL job_name=j1

Interactive Method -
--------------------
can start/continue/stop/kill jobs, check status, and change the parallelism of jobs

To stop the dp job.
--------------------

use ctrl+c
Export> stop_job
Are you sure you wish to stop this job ([yes]/no): yes

Verify the job - status
-------------------------

sql>select job_name,state,operation from dba_datapump_jobs;

To continue the dp job:

1. attach
2. continue

$ expdp system/sys123 attach=J1
Export> continue_client


To terminate the job - kill:

use ctrl+c
Export> kill_job
Export> yes

Schema Level Backup
--------------------
$expdp system/sys123 directory=dp_dir dumpfile=expdpscott.dmp logfile=expdpscott.log schemas=scott,hr,sh job_name=t1 compression=ALL content=metadata_only

Table Level backup
-------------------
$expdp system/sys123 directory=dp_dir dumpfile=expdpscotttab.dmp logfile=expdpscotttab.log tables=scott.emp,hr.departments job_name=r1 compression=ALL


How to diagnose errors during a logical backup (either expdp or impdp)?
---------------------------------------------------------------------

$ more expdpscotttab.log
The log will contain messages such as:
successfully completed
abnormally terminated
No space left on device (clean up the backup space)
object does not exist
user does not exist
user already exists
objects skipped
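A quick way to scan a log for these messages is to grep for ORA- codes. The sample log below is made up purely for illustration; in practice you would grep the real expdp/impdp log in the dump directory:

```shell
# Hypothetical sample log content, for illustration only.
cat > /tmp/expdp_sample.log <<'EOF'
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39151: Table "DEV"."SALGRADE" exists
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s)
EOF

# Count ORA- error lines in the log
grep -c 'ORA-' /tmp/expdp_sample.log
```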


Parameters
----------
Full
Schemas
Directory
tables
dumpfile
logfile
content
compression
job_name
parfile

Using parfile
-------------
A parfile is a parameter file; we enclose the export parameters in it.

Create parfile using vi

$vi expdpfull.par
directory=dp_dir
dumpfile=expdpfull.dmp
logfile=expdpfull.log
full=y
job_name=p1
estimate=blocks

Make sure the file is readable (a parfile does not need execute permission):
$chmod 644 expdpfull.par


$expdp system/sys123 parfile=expdpfull.par

Using the estimate / estimate_only parameters

  with estimate=blocks|statistics

Data Pump will estimate the size of the dump file during the export,
either at the block level or using statistics.

With estimate_only=yes
it will show the estimated dump file size, but will not export.


$expdp system/sys123 directory=dp_dir full=y estimate_only=yes

Finding usage of each folder in current location
------------------------------------------------
#du -sh *

Error: Severity 1
/u01 threshold limit reached 90%
Action: check what is filling the mount point:

inventory
binaries
network files
alert log file / trace files --- these often cause the mount to fill; housekeep them

Error: Severity 2
/oradata - 90% threshold reached?
Action:
find the maxbytes of the datafiles with autoextend on
check the limit with autoextend on
find space to clean up, if there are any other files
ask the system engineer to add more space
find another filesystem, if available, to add more datafiles
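The threshold check itself can be scripted. The df output below is a made-up sample; on a real server you would pipe `df -hP` straight into the awk filter:

```shell
# Hypothetical df -hP output, for illustration only.
cat > /tmp/df_sample.txt <<'EOF'
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   46G    4G  92% /u01
/dev/sdb1       100G   40G   60G  40% /oradata
EOF

# Print every filesystem at or above the 90% threshold
awk 'NR>1 {gsub("%","",$5); if ($5+0 >= 90) print $6, $5"%"}' /tmp/df_sample.txt
# -> /u01 92%
```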


Number of databases on the server?
/etc/oratab

Number of instances currently running on the server?
ps -ef | grep pmon
or
ps -ef | grep smon

Number of instances that can start on boot?
/etc/oratab (startup flag Y/N)
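These /etc/oratab checks can be scripted as well. The oratab content below is hypothetical; on a real server, point the same commands at /etc/oratab:

```shell
# Hypothetical oratab entries (SID:ORACLE_HOME:startup_flag), for illustration.
cat > /tmp/oratab_sample <<'EOF'
# comment line
proddb:/u01/app/oracle/product/19.0.0/dbhome_1:Y
testdb:/u01/app/oracle/product/19.0.0/dbhome_1:N
EOF

# Number of databases registered on the server
grep -v '^#' /tmp/oratab_sample | grep -c ':'
# -> 2

# SIDs flagged to start on boot (third field = Y)
awk -F: '!/^#/ && $3 == "Y" {print $1}' /tmp/oratab_sample
# -> proddb
```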


CONTENT
Specifies data to unload.
Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.

To export only the table structure, without data,
use the content parameter with metadata_only.
The default is all (includes data).

eg:
$expdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=expdpscott220720.log schemas=scott,hr  content=metadata_only


Note:
estimate_only finds the size of the dump file.
        You do not need to specify the dumpfile and logfile.
It will not export; it just estimates.
------------------------------------------------------------
Parallel
--------

Data Pump can write to / load from multiple dump files, which helps for a large database
and gives faster performance.
The parallel parameter is typically set based on the number of CPUs.
eg: number of cores: 2
parallel=2

eg:
$expdp system/sys123 directory=dp_dir dumpfile=expdpfull220720_%U.dmp logfile=expdpfull220720.log schemas=scott,hr,sh parallel=2

%U - a substitution variable that generates a unique number for each dump file


COMPRESSION
Reduce the size of a dump file.
Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.


EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".

INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.

QUERY: unloads only specific rows/records.
Predicate clause used to export a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".

$expdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=expdpscott220720.log QUERY='scott.emp:"WHERE deptno > 10"' 

VERSION
--------
Version of objects to export.
Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.

$expdp system/sys123 directory=dp_dir dumpfile=expdpscott0307.dmp logfile=expdpscott0307.log QUERY='scott.emp:"WHERE deptno > 10"' version=12.2

eg: version=12.2 or version=LATEST
Can be used for migrating data from a source to a target of a different version,
e.g. from 12.1.0 to 12.2.0.

-------------------------------------------------------
Imports
----------
Till now, using the expdp utility, we dumped into a .dmp file.

Now we dump from the source DB into a .dmp file and scp it to the remote server;
using the source .dmp file we import into the target DB.

Note: scp - secure copy
eg:
$scp filename root@servername:/path
eg: $scp expdpscott.dmp oracle@chaitanyahost:/orabackup/dev/dp
Paswd: oracle123
Using impdp we import into the target db.

prod -> expdp -> .dmp -> scp
-> .dmp -> impdp -> target db

$impdp help=y

The requirement may be:
full
table
schema
with/without compression
query
version

Having the dump file from the export,
we use it to import into the target db.

full ---> full/schema/table/query
Schema -> schema/table/query
table -> table/query

Find the count of objects in the schema before the export, to validate between source and target.
The schemas should exist in both source and target.
The tablespace associated with the schema in the target db should be the same as in the source.

eg:
source    target
scott     scott
users     users
Note: In the target db, if the schema user does not exist, it will be created automatically during the import.
But make sure the tablespace exists in the target, same as in the source, before the import.


sql>select object_type,count(*) from all_objects where owner like 'CHAITANYA' group by object_type;

sql>select object_type,count(*) from all_objects where owner like 'ABHIRAM' group by object_type;

Count of Invalids - PL/SQL objects may become invalid
-----------------
sql>select owner,object_type,count(*) from all_objects where status like 'INVALID' group by object_type,owner;

PL/SQL objects - procedures/packages/functions/triggers - may go invalid.


Execute following script - to validate and compile.
--------------------------------------------------
sql>@?/rdbms/admin/utlrp.sql

sql>select username,default_tablespace from dba_users where username like 'SCOTT';

sql>select tablespace_name from dba_tablespaces where tablespace_name like 'USERS';

----------------------------------------------------------
Schema Level
-------------
$ expdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=expdpscott220720.log schemas=scott

$impdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=impdpscott220720.log schemas=scott

Remap Schema
------------
unload/load into a different schema
using
remap_schema=scott:dev

$expdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=expdpscott220720.log schemas=scott

$impdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=impdpscott220720.log schemas=scott remap_schema=scott:dev

sql>select username from dba_users where username like 'DEV';

sql>select object_type,count(*) from all_objects where owner like 'DEV' group by object_type;


Again reimport , what will happen?

Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 5 error(s) at

ORA-39151: Table "DEV"."SALGRADE" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

ORA-31684: Object type USER:"DEV" already exists

Still want to continue the reimport - how?

Using table_exists_action parameter in impdp
--------------------------------------------
$impdp system/sys123 directory=dp_dir dumpfile=expdpscott220720.dmp logfile=impdpscott220720.log schemas=scott remap_schema=scott:dev table_exists_action=truncate

Note: truncate deletes the table records so that the space can be reused.


TABLE_EXISTS_ACTION
Action to take if imported object already exists.
Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.

VERSION
Version of objects to import.
Valid keywords are: [COMPATIBLE], LATEST, or any valid database version


TABLESPACES
Identifies a list of tablespaces to import.


TABLES
Identifies a list of tables to import.
For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_2020.

SCHEMAS
List of schemas to import.

REMAP_SCHEMA
Objects from one schema are loaded into another schema.


QUERY
Predicate clause used to import a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".


NOLOGFILE
Do not write log file [N].

PARALLEL
Change the number of active workers for current job.

PARFILE
Specify parameter file.

FULL
Import everything from source [Y].

HELP
Display help messages [N].

INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.

JOB_NAME
Name of import job to create.

LOGFILE
Log file name [import.log].


EXCLUDE
Exclude specific object types.

DIRECTORY
Directory object to be used for dump, log and SQL files.

DUMPFILE
List of dump files to import from [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ATTACH
Attach to an existing job.
For example, ATTACH=job_name.


CONTENT
Specifies data to load.
Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.



Export      Import
full        full/query/table/content/schema
schema      schema/table/query/content
table       table/query/content




Note: Here I am giving some info on datapump which may differ from your environment (production, testing, IP addresses, etc.).




THANK YOU FOR VIEWING MY BLOG FOR MORE UPDATES VISIT MY BLOG REGULARLY  https://chaitanyaoracledba.blogspot.com/

USER MANAGEMENT IN ORACLE



Introduction:


In this blog I am going to explain user management in Oracle. A user is basically what you use to connect to a database - whether it is a production, pre-production, development, or testing (UAT) server - and what a user can do depends on their role and designation: manager, admin, programmer, or test user.
All database objects like tables, indexes, views, etc. are created under a user. In Oracle, both users and schemas are important: you can consider the user as the account that connects to the database, and the schema as the set of objects (tables, views, indexes) it owns.




Any user needs at least a minimum privilege to connect to the database; administrative privileges are granted on top of that as required.


User Management - SQL


create user  (creates a user in the database)
alter user   (alters a user in the database)
drop user    (drops a user from the database)




sql>create user chaitanya identified by chaitu123;

To check whether the user already exists / list users / confirm it got created,
use the dictionary views/tables owned by the sys user.

Information from dba_users:

sql>desc dba_users;

sql>select username,password,account_status from dba_users where username like 'CHAITANYA';

sql>select username,password,account_status from dba_users where username in ('CHAITANYA','SYS','SCOTT');

sql>select distinct account_status from dba_users;

Locked
Expired & Locked
Open


Account Lock/Unlock/Password Reset


sql>alter user Chaitanya account lock;

sql>select username,account_status from dba_users where username like 'CHAITANYA';

sql>alter user chaitanya account unlock;

sql>alter user chaitanya identified by chaitu123;


Dropping user


sql>drop user chaitanya;


Creating user :


sql>create user abhiram identified by abhi123;


The sys-owned dictionary tables hold the user details.

Note: dba_users

sql>select username,account_status from dba_users where username in ('CHAITANYA','MANASA','ABHIRAM','SAIBABU','PAVAN','VASANTH','SYS','SYSTEM');

sql>select username,password from dba_users;


Finding number of users in DB


sql>select count(*) from dba_users;


Altering user 


Account lock/unlock


sql>select username,account_status from dba_users where username like 'CHAITANYA';

sql>alter user chaitanya account lock;
sql>alter user chaitanya account unlock;


Password reset / Changing


sql>alter user Manasa identified by manu123;


Drop user - removing user from db


sql>drop user abhiram;
   
    note: to remove a user including dependencies (its objects),
 
      if it is a schema, use cascade.

sql>drop user abhiram cascade;

    In case, scott user
 
    sql>drop user scott cascade;


Roles :

Roles are groupings or collections of privileges that you can use to create different levels of database access. Defining a subset of privileges through a role and granting it to a user allows easier management of privileges.

We can also grant role to a role and to a user.


  • manager
  • dba
  • developer
  • tester
  • privilege - > user
  • privilege -> role - > user
  • privileges - > role - > role - > user


  •         dba_roles
  •         dba_role_privs
  •         role_role_privs
  •         dba_sys_privs



sql>select role from dba_roles;

ROLE
------------------------------
CONNECT
RESOURCE
DBA
SELECT_CATALOG_ROLE
EXECUTE_CATALOG_ROLE
DELETE_CATALOG_ROLE
EXP_FULL_DATABASE
IMP_FULL_DATABASE


Creating role:


sql>create role dev_role;
sql>create role test_role;
sql>select role from dba_roles where role like 'DEV_ROLE';

How to grant role to a user?

sql>grant dev_role to scott;


Connect role :


Connect : a system-defined role with the create session privilege.

    Every new user needs the create session privilege. That is granted with the connect role.

Privilege (create session) ---> user (abhiram)
SQL>grant create session to dev_role;
Privilege (create session) -> role (connect) -> user (abhiram)
privileges -> role (connect) -> role (dev_role) -> user
Granting dev_role to a user then includes the connect role as well.

sql>grant connect to chaitanya;
sql>connect chaitanya/chaitu123;



Resource role : a built-in, system-defined role

    The resource role has create-object privileges:

    create table, create index, create trigger

SQL> select privilege from dba_sys_privs where grantee like 'RESOURCE';
SQL> select privilege from dba_sys_privs where grantee like 'CONNECT';
SQL> select privilege from dba_sys_privs where grantee like 'DBA';
SQL> select privilege from dba_sys_privs where grantee like 'DEV_ROLE';

sql> grant dba to scott;

Now scott has the DBA role, which has all privileges.

sql>grant resource to abhiram;

  role - role ----> user
  privilege ---> role ----> user
  privilege ---> user
  privilege ---> role ---> role ---> user
  create session ---> connect,resource ---> dev_role---> abhiram
  create user,drop user---> dev_role ---> abhiram


DBA Role :

        If a user needs all system privileges to manage the database,
provide the DBA role, which has all system privileges.

sql>grant dba to chaitanya;
sql>revoke dba from chaitanya;


Instead of granting connect, resource, or other roles one by one, we can create a single role and grant everything through it. So we can grant a role to a role.

sql>grant connect,resource to  dev_role;
sql>grant dev_role to manasa;

So now manasa has the create session, create table, trigger, and index privileges via dev_role. So for the next new user, we can directly grant this one single role.

sql>grant dev_role to vasanth;


What roles granted for a role?

sql>select granted_role from role_role_privs where role like 'DEV_ROLE';


What roles granted for a user?

SQL> select grantee,granted_role from dba_role_privs where grantee like 'PAVAN';


Dropping a role:

sql>drop role dev_role;

 Note : dropping a role automatically revokes it from all users and roles it was granted to.


user-----> create/alter/drop
Role ----> create/drop/assign/revoke


Privileges/Permissions


Two Types of Privileges

  1. System Level Privileges
  2. Object Level Privileges

dba_tab_privs - Object Level Privileges---->(select,insert,update,delete)

sql>desc dba_tab_privs

dba_sys_privs - System Level Privileges---->(shutdown,backups,create user,alter user,create role,drop role,create table,alter table)

sql>desc dba_sys_privs


System Privileges -


SQL> select grantee,privilege from dba_sys_privs where grantee like 'CONNECT';

SQL> select grantee,privilege from dba_sys_privs where grantee like 'RESOURCE';


List of system Privileges


sql>select distinct privilege from dba_sys_privs; 

CREATE USER
SELECT ANY TABLE
CREATE SESSION
CREATE TABLESPACE
CREATE PROFILE
ALTER USER
DROP USER

How to grant /provide system privilege to scott ?

sql>grant create user,drop user,alter user to scott;

SQL> select privilege from dba_sys_privs where grantee like 'SCOTT';

To remove/revoke privileges

sql>revoke drop user from scott;

Privileges--->(Create session,create table..)--->Role(Connect,resource)---> role(devop)---> User(ravi)



sql>create user chaitanya identified by chaitu123;
sql>create role devop;

sql>grant connect,resource,create user,drop user to devop;
sql>grant devop to chaitanya;
sql>create user manasa identified by manu123;

Now grant role devop to manasa(developer)
sql>grant devop to manasa;

sql>revoke drop user from devop;
    but this applies to all users who have the devop role.



Object / Table Privileges


sql>desc dba_tab_privs


SQL> select owner,grantee,grantor,privilege from dba_tab_privs where owner like 'SCOTT' and table_name like 'EMP';


sql>grant select,insert,update on scott.emp to hr;


SQL> select grantee,grantor,privilege from dba_tab_privs where owner like 'SCOTT' and table_name like 'EMP';

sql>revoke update  on scott.emp from hr;

sql>connect hr/hr123
sql>select * from scott.emp;


Profiles:

       A profile enforces a set of password security rules and resource usage limits. While creating a user, if no profile is mentioned, the DEFAULT profile is assigned to the user. Limiting resources for a user is done by defining a profile.

sql>desc dba_profiles
       
    system defined---> two profiles

        default
        monitoring_profile

SQL> select resource_name,limit from dba_profiles where profile like 'DEFAULT';


Create profile:

sql>create profile devop limit idle_time 180 failed_login_attempts 3;

Altering profile:

SQL> alter profile devop limit idle_time 380;

Assign a profile to a user:

SQL> alter user scott profile devop;


Verify:


SQL> select username,profile from dba_users where username like 'SCOTT';


Drop profile :


sql>drop profile devop;

If user dependency , then use cascade to drop
sql>drop profile devop cascade;


  • dba_users - username, password, profile, account_status
  • dba_roles - roles
  • dba_role_privs
  • role_role_privs - roles
  • dba_sys_privs - system level privs
  • dba_tab_privs - object level privs
  • dba_profiles - profiles details - default

Here I am giving some tables; create them in your database and assign them to the users and roles above.



Tables

  • agents
  • customers
  • orders



sql>CREATE TABLE AGENTS(AGENT_CODE CHAR(6) NOT NULL PRIMARY KEY,AGENT_NAME CHAR(40),WORKING_AREA CHAR(35),COMMISSION NUMBER(10,2),PHONE_NO CHAR(15),COUNTRY VARCHAR2(25));



sql>CREATE TABLE CUSTOMER(CUST_CODE VARCHAR2(6) NOT NULL PRIMARY KEY,CUST_NAME VARCHAR2(40) NOT NULL,CUST_CITY CHAR(35),WORKING_AREA VARCHAR2(35) NOT NULL,CUST_COUNTRY VARCHAR2(20) NOT NULL,GRADE NUMBER,OPENING_AMT NUMBER(12,2) NOT NULL,RECEIVE_AMT NUMBER(12,2) NOT NULL,PAYMENT_AMT NUMBER(12,2) NOT NULL,OUTSTANDING_AMT NUMBER(12,2) NOT NULL,PHONE_NO VARCHAR2(17) NOT NULL,AGENT_CODE CHAR(6) NOT NULL REFERENCES AGENTS);



sql>CREATE TABLE ORDERS(ORD_NUM NUMBER(6,0) NOT NULL PRIMARY KEY,ORD_AMOUNT NUMBER(12,2) NOT NULL,ADVANCE_AMOUNT NUMBER(12,2) NOT NULL,ORD_DATE DATE NOT NULL,CUST_CODE VARCHAR2(6) NOT NULL REFERENCES CUSTOMER,AGENT_CODE CHAR(6) NOT NULL REFERENCES AGENTS,ORD_DESCRIPTION VARCHAR2(60) NOT NULL);


 Inserting values into the AGENTS table


sql>INSERT INTO AGENTS VALUES ('A003', 'vasanth', 'Bangalore', '0.15', '077-25814763', 'india');
sql>INSERT INTO AGENTS VALUES ('A006', 'saibabu ', 'hyderabad', '0.13', '075-12458969', 'india');
sql>INSERT INTO AGENTS VALUES ('A009', 'pavan', 'rajahmundry', '0.12', '044-25874365', 'india');


 Inserting values into the CUSTOMER table


sql>INSERT INTO CUSTOMER VALUES ('C00012', 'tribhuvan', 'visakhapatnam', 'vizag', 'india', '2', '6000.00', '5000.00', '7000.00', '4000.00', '9898989898', 'A003');
sql>INSERT INTO CUSTOMER VALUES ('C00021', 'vinod', 'rajahmundry', 'rjy', 'india', '2', '3000.00', '5000.00', '2000.00', '6000.00', '7878787878', 'A006');
sql>INSERT INTO CUSTOMER VALUES ('C00027', 'kalyan', 'hyderabad', 'hyd', 'india', '3', '5000.00', '7000.00', '6000.00', '6000.00', '9400940021', 'A009');



 Inserting values into the ORDERS table


sql>INSERT INTO ORDERS VALUES('200102', '1000.00', '600.00', TO_DATE('08/01/2008','MM/DD/YYYY'), 'C00012', 'A003', 'SOD');
sql>INSERT INTO ORDERS VALUES('200110', '3000.00', '500.00', TO_DATE('04/15/2008','MM/DD/YYYY'), 'C00021', 'A006', 'SOD');
sql>INSERT INTO ORDERS VALUES('200107', '4500.00', '900.00', TO_DATE('08/30/2008','MM/DD/YYYY'), 'C00027', 'A009', 'SOD');


Creating user



create user abhiram identified by abhi123;
grant create session to abhiram;

create user manasa identified by manu123;
grant create session to manasa;


Creating Roles & Assigning Roles


create role cust_serv_clerk;
create role cust_serv_mgr;

grant select on agents to cust_serv_clerk;
grant select on orders to cust_serv_clerk;
grant select,insert,update on customer to cust_serv_clerk;

grant insert,update,delete on customer to cust_serv_mgr;
grant insert,update,delete on agents to cust_serv_mgr;
grant insert,update,delete on orders to cust_serv_mgr;

grant cust_serv_clerk to cust_serv_mgr;


Assigning Roles to a Particular User

grant cust_serv_clerk to abhiram;

grant cust_serv_mgr to manasa;


create user chaitanya identified by chaitu123
default tablespace users
quota 10m on users
temporary tablespace temp
quota 5m on system
profile application_user
password expire;

(the application_user profile must be created before running this)


Note: This shows, for practical purposes, how to assign roles, profiles, and resources to a user in a database; it may differ from your environment (production, testing, development, etc.).




THANKS FOR VIEWING MY BLOG FOR MORE UPDATES FOLLOW ME AND SUBSCRIBE ME
 

Oracle Database Cloning Using Rman Utility
-----------------------------------------------------

Prerequisites 
-------------

1)OEL - Oracle Enterprise Linux server

2)Oracle software installed without a database


Activity Flow
------------- 

1)Take a source backup using RMAN

2)Move the pfile, controlfile, and backup pieces to the target server

3)Start the instance in mount stage and restore from the backup pieces

4)Open the database as the source SID

5)Rename the database


Trigger Backup on source
------------------------

RMAN> backup database plus archivelog delete input;
RMAN> restore controlfile to '/tmp/prod_control.ctl';


Move files to Target server
----------------------------
parameter file (pfile)

Edit the pfile: change the SID everywhere except the DB_NAME parameter - keep that as the source.

Create the directories as per the new pfile.


$ rman target / catalog rman_rc/rman_rc@rcat
RMAN> backup database plus archivelog delete input;
RMAN> list backup of database summary;
RMAN> restore controlfile to '/tmp/prod_control.ctl';
RMAN> exit

$ cd $ORACLE_HOME/dbs
$ ls -lrt
  initproddb.ora
 
I want to copy initproddb.ora to the target server using scp.

$ scp initproddb.ora oracle@192.168.0.100:$ORACLE_HOME/dbs    (it will prompt for the password of the remote target server; enter it)

initproddb.ora   (you will see 100% when the copy completes)

Target server
-------------

$ cd $ORACLE_HOME/dbs

$ ls -lrth

initproddb.ora

Open this file in the vi editor and replace:

:%s/proddb/testdb/g

Save and exit the file.

(source db is proddb and target db is testdb; create the directories as per the new file)

(after that open the parameter file initproddb.ora in cat command we need to create directories)
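The same substitution can also be done non-interactively with sed instead of vi; this is a hedged sketch that works on a scratch copy, not the real pfile:

```shell
# Demo of the proddb -> testdb substitution with sed instead of vi.
# We operate on a scratch file so nothing real is touched.
printf 'db_name=proddb\ncontrol_files=/u01/app/oracle/oradata/proddb/control01.ctl\n' > /tmp/pfile_demo.ora
sed -i 's/proddb/testdb/g' /tmp/pfile_demo.ora
cat /tmp/pfile_demo.ora
```

On the real file this is just `sed -i 's/proddb/testdb/g' initproddb.ora`, equivalent to the vi substitution above.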

$ cat initproddb.ora

$ mkdir -p /u01/app/oracle/admin/testdb/adump

$ mkdir -p /u01/app/oracle/oradata/testdb/

$ mkdir -p /u01/app/oracle/fast_recovery_area/testdb/

$ mkdir -p /u01/app/oracle/fast_recovery_area

Copy /tmp/prod_control.ctl to the target server's controlfile locations (move files to target server):

$ cd /tmp
$ ls -lrth

source server
----------------

$ scp prod_control.ctl oracle@192.168.0.100:/u01/app/oracle/oradata/testdb/control01.ctl
$ scp prod_control.ctl oracle@192.168.0.100:/u01/app/oracle/oradata/testdb/control02.ctl

Target Server
-------------

$ ls -lrth

/u01/app/oracle/oradata/testdb/control01.ctl
/u01/app/oracle/oradata/testdb/control02.ctl

Database backup pieces go to the same location as on the source:

$ rman target / catalog rman_rc/rman_rc@rcat

connected to target database: proddb (DBID=674237234)
connected to recovery catalog database

RMAN> list backup of database summary;   (the output shows Key, TY, LV, Device Type, and Tag)

Key --> 2949, TY --> B, LV --> A, Device Type --> DISK, Tag --> TAG202012t105306   (the tag is important: we perform the restore and recovery using this tag)

RMAN> list backup tag 'TAG202012t105306';

List of backup piece name: /u01/app/oracle/fast_recovery_area/proddb/backupset/2020-07-20/TAG202012t105306.bkp



$ scp /u01/app/oracle/fast_recovery_area/proddb/backupset/2020-07-20/TAG202012t105306.bkp oracle@192.168.0.100:/u01/app/oracle/fast_recovery_area/proddb/backupset/2020-07-20/

Target server
--------------

$ mkdir -p /u01/app/oracle/fast_recovery_area/proddb/backupset/2020-07-20   (this directory may not exist on the target server; create it first)

Archivelog backup pieces also go to the same location as on the source:

$ scp /u01/app/oracle/fast_recovery_area/proddb/backupset/2020-07-20/TAG202012t105306.bkp oracle@192.168.0.100:/u01/app/oracle/fast_recovery_area/proddb/backupset/2020-07-20/

Start cloning 
------------------

Export the environment variables and connect to RMAN:

Target Server
-------------

$ env | grep ORA
ORACLE_SID=proddb
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_home-1
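If these variables are not yet set in the shell, they can be exported before invoking RMAN (the SID and ORACLE_HOME values below are the ones used in this walkthrough):

```shell
# Set the environment for the instance to be cloned
# (SID and ORACLE_HOME values are the ones from this walkthrough)
export ORACLE_SID=proddb
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_home-1
export PATH=$ORACLE_HOME/bin:$PATH
echo "$ORACLE_SID"
```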

$ rman target /

connected to target database (not started)

RMAN> startup mount;

Get the last SCN available in the archivelog backup:

RMAN> list backup of archivelog all;

last archivelog sequence --> 6, next SCN --> 960034

Rename the DB redo log files so they can be created in the new location
-----------------------------------------------------------------------

sql> alter database rename file '/u01/app/oracle/oradata/proddb/redo01.log' to '/u01/app/oracle/oradata/testdb/redo01.log';

Target server
-------------

sql> select member from v$logfile;

member
-------

/u01/app/oracle/oradata/proddb/redo03.log
/u01/app/oracle/oradata/proddb/redo02.log
/u01/app/oracle/oradata/proddb/redo01.log

We have to rename each redo log member (redo01, redo02, redo03) from the proddb path to the testdb path, using the same ALTER DATABASE RENAME FILE statement as above.
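The three rename statements can be generated with a small shell loop (paths are the ones from this walkthrough); the emitted SQL is then run in SQL*Plus:

```shell
# Emit one ALTER DATABASE RENAME FILE statement per redo log member,
# moving each from the proddb path to the testdb path.
for n in 1 2 3; do
  echo "alter database rename file '/u01/app/oracle/oradata/proddb/redo0${n}.log' to '/u01/app/oracle/oradata/testdb/redo0${n}.log';"
done
```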

Restore the datafiles to new location
----------------------------------------

RMAN> run {
set newname for datafile 1 to '/u01/app/oracle/oradata/testdb/system01.dbf';
set newname for datafile 2 to '/u01/app/oracle/oradata/testdb/sysaux01.dbf';
set newname for datafile 3 to '/u01/app/oracle/oradata/testdb/undotbs01.dbf';
set newname for datafile 4 to '/u01/app/oracle/oradata/testdb/user01.dbf';
set newname for datafile 5 to '/u01/app/oracle/oradata/testdb/example01.dbf';
restore database from tag 'TAG202012t105306';
switch datafile all;
recover database until scn 960034;
alter database open resetlogs;
}

Renaming Database after cloning
--------------------------------


sql> select name,open_mode from v$database;

name --> proddb, open_mode --> READ WRITE

$ hostname
dctest.chaitanya.com

(we have to rename proddb to testdb; we are on the test server)

Take a controlfile backup to trace with the RESETLOGS option
------------------------------------------------------------

sqlplus '/as sysdba'

sql> alter database backup controlfile to trace as '/tmp/create_ctrlfile.sql';

Database altered.

sql> shut immediate;

sql> exit

Create a pfile for the new SID
------------------------------

$ cd $ORACLE_HOME/dbs
$ ls -ltr

initproddb.ora

$ mv initproddb.ora inittestdb.ora

$ vi inittestdb.ora        (change the one remaining parameter: db_name=proddb to db_name=testdb, then save and exit vi)

$ export ORACLE_SID=testdb

sqlplus '/as sysdba'

Startup the instance in NOMOUNT stage
-------------------------------------
sql> startup nomount;
sql> exit

$ cd /tmp
$ ls -lrt
create_ctrlfile.sql           (copy the contents of create_ctrlfile.sql out for editing)

$ cat create_ctrlfile.sql

Edit the controlfile trace script with the new SID
--------------------------------------------------

In /tmp/create_ctrlfile.sql, change the CREATE CONTROLFILE statement to:

create controlfile set database "testdb" resetlogs archivelog

i.e. remove REUSE, replace REUSE DATABASE with SET DATABASE and the new name, and change NORESETLOGS to RESETLOGS.

When you open the file, the statement looks roughly like this (sizes and full paths elided):

CREATE CONTROLFILE SET DATABASE "testdb" RESETLOGS ARCHIVELOG
    MAXLOGFILES ...
    ...
LOGFILE
  GROUP 1 '/u01/...',
  GROUP 2 '/u01/...',
  GROUP 3 '/u01/...'
DATAFILE
  '/u01/...',
  ...
CHARACTER SET WE8MSWIN1252;
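The two edits can also be applied with sed; this is a hedged sketch on a one-line demo file, since real trace files contain much more than this:

```shell
# Demo: turn the trace's REUSE ... NORESETLOGS header into SET ... RESETLOGS.
# Runs on a scratch file, not the real trace script.
printf 'CREATE CONTROLFILE REUSE DATABASE "PRODDB" NORESETLOGS  ARCHIVELOG\n' > /tmp/ctl_demo.sql
sed -i -e 's/REUSE DATABASE "PRODDB"/SET DATABASE "TESTDB"/' \
       -e 's/NORESETLOGS/RESETLOGS/' /tmp/ctl_demo.sql
cat /tmp/ctl_demo.sql
```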


$ cd $ORACLE_HOME/dbs

$ ls -lrt
inittestdb.ora
$ cat inittestdb.ora

Remove the old controlfiles:

$ rm -rf /u01/app/oracle/oradata/testdb/control01.ctl /u01/app/oracle/oradata/testdb/control02.ctl

Create controlfiles for new instance
------------------------------------

sqlplus '/as sysdba'

sql> select instance_name, status from v$instance;

instance_name --> testdb

status --> STARTED

Run the controlfile creation script we edited:

sql> @/tmp/create_ctrlfile.sql

it will display: Control file created.

sql> alter database open resetlogs;

sql>select name,open_mode from v$database;

name --> testdb

open_mode --> READ WRITE

sql> select name from v$controlfile;

sql> select member from v$logfile;


Cloning is done.

Note: Details will differ in your environment (development, testing, or production), including the Linux directory structure, mount points, and IP addresses, but the process is the same.


THANK YOU FOR VIEWING. FOR MORE UPDATES, VISIT MY BLOG:

http://chaitanyaoracledba.blogspot.com/ 
