Category Archives: ORACLE DBA

RMAN COLD BACKUP AND RESTORE

Simple RMAN script to take cold database backup:

To take an RMAN cold backup, the database must be in MOUNT state, as the script below shows.

mkdir -p /BACKUP/PROD
mkdir -p /BACKUP/PROD/log

cold_backup.sh:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0
export ORACLE_SID=PROD
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
rman target / log=/BACKUP/PROD/log/PROD`date +%d%m%Y`.log <<EOF
sql 'alter system checkpoint';
shutdown immediate;
startup mount;
sql "create pfile=''/BACKUP/PROD/pfile`date +%d%m%Y`.ora'' from spfile";
RUN {
ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/BACKUP/PROD/%U';
ALLOCATE CHANNEL disk2 DEVICE TYPE DISK FORMAT '/BACKUP/PROD/%U';
ALLOCATE CHANNEL disk3 DEVICE TYPE DISK FORMAT '/BACKUP/PROD/%U';
BACKUP AS COMPRESSED BACKUPSET DATABASE;
BACKUP CURRENT CONTROLFILE FORMAT '/BACKUP/PROD/cntrl_%s_%p_%t';
RELEASE CHANNEL disk1;
RELEASE CHANNEL disk2;
RELEASE CHANNEL disk3;
}
EOF
Script execution:
$ chmod +x cold_backup.sh
$ ./cold_backup.sh
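
Once the script completes, it is worth confirming that the backup pieces were actually created. A quick sanity check from the RMAN prompt (standard RMAN commands, shown as a sketch):

$ rman target /
RMAN> LIST BACKUP SUMMARY;
RMAN> LIST BACKUP OF CONTROLFILE;

LIST BACKUP SUMMARY shows one line per backup set with its completion time and compression status.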

Simple RMAN script to RESTORE DATABASE:

Make sure the RMAN backup is accessible (mounted) on the target system and that the necessary directories are configured the same as on the source system.
restore.sh:
run
{
startup pfile='/BACKUP/PROD/pfilexxxxxxx.ora' nomount;
restore controlfile from '/BACKUP/PROD/cntrl_xxxxxxx';
alter database mount;
restore database;
recover database noredo;
alter database open resetlogs;
}
Script execution:
$ rman target /
RMAN> @restore.sh

Note: the datafiles come from a consistent cold backup, so no redo needs to be applied, but because we restored a backup controlfile a RECOVER DATABASE NOREDO is required before OPEN RESETLOGS.

Finally, change the database name using the NID (DBNEWID) utility.
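
The NID step itself is a one-liner once the database is mounted. A sketch, where NEWNAME and the SYS password are placeholders:

$ nid target=sys/password@PROD dbname=NEWNAME

After NID completes, update db_name in the parameter file to the new name and open the database with RESETLOGS.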

DATABASE HANGS

ERROR MESSAGE IN ALERTLOG FILE:

ORACLE Instance PROD - Archival Error
ORA-16038: log 4 sequence# 9961 cannot be archived
ORA-19502: write error on file "", block number  (block size=)
ORA-00312: online log 4 thread 1: '/u01/oradata/data01/PROD/redo04.log'
ORA-16014: log 4 sequence# 9961 not archived, no available destinations

CAUSE: The default archive log destination is 100% full; there is no space to create additional archive logs.

POSSIBLE ISSUES because archiving is stuck:

1. Database Hangs
2. Users not able to connect to database
3. Not able to open the database
4. Archive space related errors in the alert log file (ORA-16038, ORA-16014, ORA-19809)

POSSIBLE SOLUTIONS:

1. Increase the size of the archive destination.
2. Back up the archive logs to a different location.
3. Change the archive log destination to another mount point.
4. Delete archive logs to free space (this should be the last option); in a standby configuration, first make sure those logs have already been applied to the standby.
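
For options 2 and 4, RMAN is safer than moving or deleting files at the OS level, because it keeps the RMAN repository in sync. A sketch using standard RMAN commands (the destination path is a placeholder):

RMAN> BACKUP ARCHIVELOG ALL FORMAT '/new_location/arch_%U' DELETE INPUT;
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE EXPIRED ARCHIVELOG ALL;

DELETE INPUT removes each archive log from the full destination once it has been backed up; CROSSCHECK and DELETE EXPIRED clean up the repository if any files were removed manually.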

DATABASE SIZE (datafiles + tempfiles + redologfiles)

To get the accurate size of the database, we have to combine the sizes of all datafiles, tempfiles and redo log files.

select round((a.data_size+b.temp_size+c.redo_size)/1024/1024/1024,2) "DB_size_in_GB"
from ( select sum(bytes) data_size
from dba_data_files ) a,
( select nvl(sum(bytes),0) temp_size
from dba_temp_files ) b,
( select sum(bytes) redo_size
from sys.v_$log ) c;

TRACE IN ORACLE DATABASE

Current session trace:

NORMAL TRACE:

alter session set tracefile_identifier='XXX';   -- trace identifier
alter session set sql_trace = true;

FULL LEVEL TRACE with WAIT and BIND:

alter session set tracefile_identifier='XXX';   -- trace identifier
alter session set events '10046 trace name context forever, level 12';

Other running session trace:

NORMAL TRACE:

execute dbms_system.set_sql_trace_in_session ($sid,$serial,true);

FULL LEVEL TRACE with WAIT and BIND:

execute dbms_system.set_ev($sid,$serial,10046,12,'');
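
To find the $sid and $serial values of the session you want to trace, query v$session (the username SCOTT is a placeholder):

SQL> select sid, serial# from v$session where username = 'SCOTT';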

TRACE FILE LOCATION:

<DIAGNOSTIC_DEST>/diag/rdbms/<SID>/trace
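
From 11g onward, the exact trace file of the current session can also be read from the database itself:

SQL> select value from v$diag_info where name = 'Default Trace File';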

TKPROF :
The tkprof utility translates a raw trace file into a more human-readable format.

tkprof <tracefile> <outfile> explain=username/password sort=(prsela,exeela,fchela)

explain=user/password    Connect to ORACLE and issue EXPLAIN PLAN.
prsela  elapsed time parsing
exeela  elapsed time executing
fchela  elapsed time fetching

RMAN : Perform Block Recovery

We can perform block recovery with or without an RMAN backup. Here is a demonstration of block media recovery using a hot (user-managed) backup and an RMAN backup.

1)  Create tablespace, user and table in same schema.
2)  Take the backup of datafile.
a) HOT BACKUP.
b) RMAN BACKUP.
3)  Corrupt the data in datafile.
4)  Connect with RMAN and try to use BLOCKRECOVER command.
a) Perform Block Recovery with HOTBACKUP.
b) Perform Block Recovery with RMAN BACKUP.

Here you go…

Create tablespace, user and table in same schema:

SQL> create tablespace TS1 datafile '/d01/oradata/ts1.dbf' size 100m;
SQL> create user U1 identified by U1 default tablespace TS1;
SQL> grant connect,resource to U1;
SQL> conn U1/U1
SQL> create table TEST_CORRUPT (no number);
SQL> insert into TEST_CORRUPT values(1);
SQL> commit;
SQL> SELECT segment_name, a.tablespace_name, b.name FROM dba_segments a, v$datafile b WHERE a.header_file=b.file# AND a.segment_name='TEST_CORRUPT';

SEGMENT_NAME    TABLESPACE_NAME    NAME
------------    ---------------    --------------------
TEST_CORRUPT    TS1                /d01/oradata/ts1.dbf

HOTBACKUP of datafile:

SQL> ALTER TABLESPACE TS1 BEGIN BACKUP;
[oracle@localhost]$ cp /d01/oradata/ts1.dbf /d01/oradata/ts1_bkp.dbf
SQL> ALTER TABLESPACE TS1 END BACKUP;

RMAN BACKUP of datafile 4:

[oracle@localhost]$ rman target /
RMAN> backup datafile 4;

Corrupt the data in datafile with dd command:

Note: the dd command below is destructive. It is shown for learning purposes only and should be used only on test systems.

SQL> select segment_name, header_file, header_block from dba_segments where segment_name = 'TEST_CORRUPT' and owner = 'U1';

SEGMENT_NAME    HEADER_FILE    HEADER_BLOCK
------------    -----------    ------------
TEST_CORRUPT    4              16

[oracle@localhost]$ cd /d01/oradata
[oracle@localhost]$ dd of=ts1.dbf bs=8192 conv=notrunc seek=17 << EOF
> Bingo! Corrupted.
> EOF
0+1 records in
0+1 records out
18 bytes (18 B) copied, 0.000684 seconds, 27.0 kB/s

The command executed successfully; block 17 in the data file '/d01/oradata/ts1.dbf' is now corrupt.

Check the data block corruption:

[oracle@localhost]$ sqlplus / as sysdba
SQL> alter system flush buffer_cache;
SQL> conn u1/u1
SQL> select count(*) from TEST_CORRUPT;
select count(*) from test_corrupt
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 17)
ORA-01110: data file 4: ‘/d01/oradata/ts1.dbf’
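
Besides the ORA-01578 error, corrupt blocks found by RMAN during backup or validation are recorded in v$database_block_corruption. A quick check (VALIDATE DATAFILE is 11g syntax; on 10g use BACKUP VALIDATE DATAFILE 4 instead):

RMAN> VALIDATE DATAFILE 4;
SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;

BLOCKRECOVER can also work directly from this view with BLOCKRECOVER CORRUPTION LIST;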

Perform any one of the methods below. Here I show both methods, with and without an RMAN backup.

Performing Block Recovery without having RMAN BACKUP:

As we have a hot backup of datafile ts1.dbf, catalog the hot backup copy in the RMAN repository:

[oracle@localhost]$ rman target /
RMAN> CATALOG DATAFILECOPY '/d01/oradata/ts1_bkp.dbf';
RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 17;

Performing Block Recovery with RMAN BACKUP:

As we have an RMAN backup of datafile 4, perform the recovery:

[oracle@localhost]$ rman target /
RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 17;

Block media recovery is complete; block 17 should now be recovered:

[oracle@localhost]$ sqlplus U1/U1
SQL> select count(*) from TEST_CORRUPT;

  COUNT(*)
----------
         1

SQL> exit

HAPPY LEARNING !!

Resolve huge archive gap between PRIMARY and STANDBY

A physical standby database stays in sync with the primary by continuously applying archive logs shipped from the primary database.

When the number of logs missing on the standby is huge (say more than 500), the traditional approach is to rebuild the standby database from scratch.

Instead of rebuilding the standby database, as an enhancement from 10g, an incremental backup created with BACKUP INCREMENTAL ... FROM SCN can be used to roll the standby database forward.

Please use below query to find out archive gap on Standby:

SELECT ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM   (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
        WHERE (THREAD#, FIRST_TIME) IN
              (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
       (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
        WHERE (THREAD#, FIRST_TIME) IN
              (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE  ARCH.THREAD# = APPL.THREAD#
ORDER  BY 1;
 
Thread    Last Sequence Received    Last Sequence Applied    Difference
------    ----------------------    ---------------------    ----------
1         8254                      7954                     300
 

Find the SCN on the PRIMARY:

SQL> select current_scn from v$database;
 
CURRENT_SCN
———–  
242671761

Find the SCN on the STANDBY:

 
SQL> select current_scn from v$database;
CURRENT_SCN
———–  
223771173

Clearly you can see there is a difference.

Stop and shutdown the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

Shutdown the standby database

SQL> shut immediate

On the primary, take an incremental backup starting from the standby's current SCN (223771173):

RMAN> run {
allocate channel c1 type disk format '/backup/%U.bkp';
backup incremental from scn 223771173 database;
}

On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/backup/for_standby.ctl';

Database altered.

Copy the standby controlfile to the STANDBY server (into the control_files locations), start the standby instance in NOMOUNT, and then mount it as a standby database:

SQL> startup nomount

SQL> alter database mount standby database;

Connect to RMAN on the STANDBY and catalog the backup files using the commands below:

$ rman target=/

RMAN> catalog start with '/backup';

PERFORM RECOVER:

RMAN> recover database;

Start managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

Check the SCNs on the primary and standby; they should now be close to each other.
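
The apply status can also be watched from the standby using the standard Data Guard views:

SQL> select process, status, sequence# from v$managed_standby;
SQL> select name, value from v$dataguard_stats where name = 'apply lag';

Once the MRP0 process is applying logs and the apply lag shrinks, the standby has caught up.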

If archive gap difference is minimal or less, Please check this https://oracledbamasters.wordpress.com/2013/10/22/resolve-archive-gap-primary-standby/

Resolve archive gap between PRIMARY and STANDBY

A Physical Standby database synchs with Primary by continuous apply of archive logs from a Primary Database.

When the number of logs missing on the standby is small (say 10-15), we can simply ship the missing logs from the primary to the standby using SCP/FTP and then register the log files on the standby to resolve the gap.

Please use below query to find out archive gap on Standby:

SELECT ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM   (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
        WHERE (THREAD#, FIRST_TIME) IN
              (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
       (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
        WHERE (THREAD#, FIRST_TIME) IN
              (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE  ARCH.THREAD# = APPL.THREAD#
ORDER  BY 1;
 
Thread    Last Sequence Received    Last Sequence Applied    Difference
------    ----------------------    ---------------------    ----------
1         9545                      9535                     10
 
SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED='YES';

MAX(SEQUENCE#)
--------------
9535

COPY ARCHIVELOG FILE TO STANDBY FROM PRIMARY:

$ scp log_file_n.arc oracle@standby:/log_location/log_file_n.arc

REGISTER LOGFILE AT STANDBY:

SQL> alter database register logfile '/log_location/log_file_n.arc';

Logfile registered.

Repeat the same process for all the log files which are missing at standby.

If archive gap difference is huge(say more than 500 logs) , Please check this https://oracledbamasters.wordpress.com/2013/10/22/resolve-huge-archive-gap-primary-standby/

SIMPLE DATABASE CREATION

CREATION OF DATABASE IN COMMAND LINE:

1) Set Environment file:

# su - oracle

$pwd

export ORACLE_SID=PROD

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

export PATH=$ORACLE_HOME/bin:$PATH

2) Create Parameter file:

$cd $ORACLE_HOME/dbs

$ cp init.ora initPROD.ora

vi $ORACLE_HOME/dbs/initPROD.ora

db_name = PROD

control_files=/u01/PROD/control1.ctl,/u01/PROD/control2.ctl

undo_management = AUTO

undo_tablespace = UNDOTBS1

diagnostic_dest=/u01/PROD/

:wq!

3) Start the instance:

Edit the file $ORACLE_HOME/sqlplus/admin/glogin.sql and add in line as below.
set sqlprompt "_USER'@'_CONNECT_IDENTIFIER> "

sqlplus / as sysdba

startup nomount

Note: in the NOMOUNT state Oracle reads all initialization parameter values from the pfile (initPROD.ora).

4) Create the database:

vi create.sql

create database PROD

logfile group 1 ('/u01/PROD/redo1.log') size 100M,

group 2 ('/u01/PROD/redo2.log') size 100M,

group 3 ('/u01/PROD/redo3.log') size 100M

datafile '/u01/PROD/system.dbf' size 500M autoextend on next 10M maxsize unlimited extent management local

sysaux datafile '/u01/PROD/sysaux.dbf' size 100M autoextend on next 10M maxsize unlimited

undo tablespace undotbs1 datafile '/u01/PROD/undotbs1.dbf' size 100M

default temporary tablespace temp tempfile '/u01/PROD/temp01.dbf' size 100M;

sqlplus / as sysdba

@create.sql

5) Run catalog and catproc:

After successful creation of the database we need to execute catalog.sql and catproc.sql. These two scripts create the data dictionary views and the built-in PL/SQL packages. pupbld.sql must be executed as the SYSTEM user; this script creates the product user profile tables used by SQL*Plus.

sqlplus / as sysdba

@?/rdbms/admin/catalog.sql

@?/rdbms/admin/catproc.sql

connect system/manager

@?/sqlplus/admin/pupbld.sql

####### Use the dictionary views below to see database information #######

sys@PROD>select * from v$version;

sys@PROD>select * from v$tablespace;

sys@PROD>select name from v$datafile;

sys@PROD>select name from v$tempfile;

sys@PROD>select name from v$controlfile;

sys@PROD>set wrap off

sys@PROD>select * from v$logfile;