Friday, April 9, 2010
Create Multiple Instances on a Single Database
TO CREATE A NEW ORACLE INSTANCE (DB_NAME=bnfd, INSTANCE_NAME=bnfd1)
-----------------------------------
1. Create a new directory named after the global database name (DB_NAME: bnfd) for the parameter file,
then copy an existing pfile into it and edit it with the new instance name and new database name.
Example: create the directory E:\oracle\admin\bnfd and edit the parameter
file after copying it. (This creates the new parameter file.)
2. Create the new password file in the database directory:
E:\oracle\ora92\database> orapwd FILE=pwdbnfd1.ora PASSWORD=sys ENTRIES=5
3. On Windows NT, register the service with the oradim program:
C:\> oradim -new -sid bnfd1 -startmode a -pfile E:\oracle\admin\bnfd\pfile\bnfd.ora
4. Add the global database name and the SID to listener.ora and restart the listener.
5. Add an entry in tnsnames.ora with the new service name (SERVICE_NAME=bnfd1).
6. connect sys/sys@bnfd as sysdba
7. C:\> oradim -delete -sid bnfd1 (to delete a SID)
8. C:\> oradim -edit -sid bnfd1 -startmode a -pfile E:\oracle\admin\bnfd\pfile\bnfd.ora (to edit a SID)
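For steps 4 and 5, the new entries would look roughly like the following sketch. The host name and Oracle home path here are placeholders, not taken from the original notes:

```
# listener.ora -- add under SID_LIST_LISTENER, then restart the listener
(SID_DESC =
  (GLOBAL_DBNAME = bnfd)
  (ORACLE_HOME = E:\oracle\ora92)
  (SID_NAME = bnfd1)
)

# tnsnames.ora -- new alias pointing at the new service
BNFD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = bnfd1))
  )
```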
TO CREATE A PARALLEL SERVER (instances=11,12; DB_NAME: hello1)
-------------------------------------------------------------------
1. Do not change DB_NAME to hello2 (keep it hello1),
but change the instance name to 12.
2. Follow the same steps as for creating a new instance.
3. Keep two entries in tnsnames.ora to connect to the two instances; both
connect to the same database, so if one instance is down the other instance
can be brought up to take its place.
Copy the folder structure exactly as for the other databases.
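As a minimal sketch of step 1 — only the instance-specific parameter differs between the two pfiles, while db_name stays the same:

```
# init11.ora (instance 11)
db_name = hello1
instance_name = 11

# init12.ora (instance 12)
db_name = hello1
instance_name = 12
```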
TO CREATE A DATABASE FOR THE INSTANCES (11 and 12) (DB_NAME: hello1)
Either supply the UNDO tablespace or comment out the undo parameters.
---------------------------------------------------------------------------
1. Comment out all the undo-related parameters in the init.ora file for instance 12.
2. Start instance 12 in nomount mode using the pfile:
startup nomount pfile="C:\oracle\admin\catalgdb\scripts\init.ora";
CREATE DATABASE targetdb
MAXINSTANCES 1
MAXLOGHISTORY 1
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 100
DATAFILE 'C:\oracle\oradata\targetdb\system01.dbf' SIZE 250M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE 'C:\oracle\oradata\targetdb\temp01.dbf' SIZE 40M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
UNDO TABLESPACE "UNDOTBS1" DATAFILE 'C:\oracle\oradata\targetdb\undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('C:\oracle\oradata\targetdb\redo01.log') SIZE 102400K,
GROUP 2 ('C:\oracle\oradata\targetdb\redo02.log') SIZE 102400K,
GROUP 3 ('C:\oracle\oradata\targetdb\redo03.log') SIZE 102400K;
spool off
exit;
Database created.
4.alter database mount;
5.alter database archivelog;
6.alter database open;
Run catalog.sql from e:\oracle\ora92\rdbms\admin.
Run catproc.sql from e:\oracle\ora92\rdbms\admin.
Go to the e:\oracle\ora92\sqlplus\admin directory, connect as SYSTEM and run @PUPBLD.SQL from the SQL prompt.
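In SQL*Plus the post-creation scripts above can be run as follows (the SYSTEM password shown is a placeholder):

```
SQL> connect sys/sys as sysdba
SQL> @e:\oracle\ora92\rdbms\admin\catalog.sql
SQL> @e:\oracle\ora92\rdbms\admin\catproc.sql
SQL> connect system/manager
SQL> @e:\oracle\ora92\sqlplus\admin\pupbld.sql
```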
Open the database and create the temp and undo tablespaces.
Make them at least 600 MB each.
The undo tablespace should be named undo01.
Edit the init.ora file and uncomment all the "UNDO" parameters.
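The tablespace creation described above might look like the following sketch; the datafile locations are assumptions following the path conventions used elsewhere in these notes:

```sql
CREATE UNDO TABLESPACE undo01
  DATAFILE 'E:\oracle\oradata\hello1\undo01.dbf' SIZE 600M;

CREATE TEMPORARY TABLESPACE temp01
  TEMPFILE 'E:\oracle\oradata\hello1\temp01.dbf' SIZE 600M;

-- then uncomment the undo parameters in init.ora, e.g.:
-- undo_management = AUTO
-- undo_tablespace = undo01
```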
It will be possible to start 11/12 in nomount mode.
But if we mount the database hello1 using instance 11
and then try to mount the database using instance 12,
we will get an error:
ORA-01102: cannot mount database in EXCLUSIVE mode
(Either of the instances, 11 or 12, can be used to open the database.)
THE STEPS ARE THEORETICALLY CORRECT, BUT IN PRACTICE THERE SEEMS TO BE A PROBLEM.
(To solve this we have to mount and open the database in shared mode,
which is done with the statement
SQL> ALTER DATABASE MOUNT SHARED; (parallel server mode)
This ensures that we can connect using either SID 11 or 12,
but both will open the same database, hello1.)
STEPS TO PREPARE AN AUXILIARY INSTANCE FOR TSPITR
-----------------------------------------------------
1. Set the following parameters in the auxiliary instance's pfile:
db_file_name_convert=('e:\oracle\oradata\hello1','e:\oracle\oradata\hello2')
log_file_name_convert=('e:\oracle\oradata\hello1','e:\oracle\oradata\hello2')
lock_name_space=hello2
db_name=hello1
instance_name=12
2.Start the auxiliary instance in nomount mode.
TSPITR - Tablespace Point-In-Time Recovery
----------------------------------------------------------
Before TSPITR can proceed, ensure that a database backup has been taken, the target
database is open, and the auxiliary database is in nomount mode, and follow the rules
of TSPITR. Note that a backup should be taken immediately after TSPITR completes;
otherwise the next TSPITR will give an error.
Shut down the auxiliary instance immediately after performing TSPITR.
RMAN> connect target backup_admin/backup_admin@hello1
connected to target database: hello1 (DBID=3960930000)
RMAN> connect catalog rman_bnfd/rman_bnfd@kodiak
connected to recovery catalog database
RMAN> connect auxiliary sys/sys@hello2
connected to auxiliary database: hello1 (not mounted)
RMAN> RECOVER tablespace users UNTIL TIME "to_date('14-10-2004 15:30:00','dd-mm-yyyy hh24:mi:ss')"
2> ;
Starting recover at 14-OCT-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=20 devtype=DISK
printing stored script: Memory Script
{
# set the until clause
set until time "to_date('14-10-2004 15:30:00','dd-mm-yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile to clone_cf;
# replicate the controlfile
replicate clone controlfile from clone_cf;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log for tspitr to a resent until time
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
# resync catalog after controlfile restore
resync catalog;
}
executing script: Memory Script
executing command: SET until clause
Starting restore at 14-OCT-04
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=12 devtype=DISK
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring controlfile
output filename=E:\ORACLE\ORADATA\hello2\CONTROL01.CTL
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=C:\BACKUP\CONTROLFILE\%ORACLE_SID_CTRL_DATE_C-3960930000-20041014-05.BAK tag=null params=NULL
channel ORA_AUX_DISK_1: restore complete
Finished restore at 14-OCT-04
replicating controlfile
input filename=E:\ORACLE\ORADATA\hello2\CONTROL01.CTL
output filename=E:\ORACLE\ORADATA\hello2\CONTROL02.CTL
output filename=E:\ORACLE\ORADATA\hello2\CONTROL03.CTL
sql statement: alter database mount clone database
sql statement: alter system archive log current
sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
starting full resync of recovery catalog
full resync complete
printing stored script: Memory Script
{
# generated tablespace point-in-time recovery script
# set the until clause
set until time "to_date('14-10-2004 15:30:00','dd-mm-yyyy hh24:mi:ss')";
plsql <<<-- tspitr_2
declare
sqlstatement varchar2(512);
offline_not_needed exception;
pragma exception_init(offline_not_needed, -01539);
begin
sqlstatement := 'alter tablespace '|| 'USERS' ||' offline for recover';
krmicd.writeMsg(6162, sqlstatement);
krmicd.execSql(sqlstatement);
exception
when offline_not_needed then
null;
end; >>>;
# set a destination filename for restore
set newname for datafile 1 to
"E:\ORACLE\ORADATA\hello2\SYSTEM01.DBF";
# set a destination filename for restore
set newname for datafile 2 to
"E:\ORACLE\ORADATA\hello2\UNDOTBS01.DBF";
# set a destination filename for restore
set newname for datafile 9 to
"E:\ORACLE\ORADATA\hello1\USERS01.DBF";
# restore the tablespaces in the recovery set plus the auxilliary tablespaces
restore clone datafile 1, 2, 9;
switch clone datafile all;
#online the datafiles restored or flipped
sql clone "alter database datafile 1 online";
#online the datafiles restored or flipped
sql clone "alter database datafile 2 online";
#online the datafiles restored or flipped
sql clone "alter database datafile 9 online";
# make the controlfile point at the restored datafiles, then recover them
recover clone database tablespace "USERS", "SYSTEM", "UNDOTBS1";
alter clone database open resetlogs;
# PLUG HERE the creation of a temporary tablespace if export fails due to lack
# of temporary space.
# For example in Unix these two lines would do that:
#sql clone "create tablespace aux_tspitr_tmp
# datafile ''/tmp/aux_tspitr_tmp.dbf'' size 500K";
}
executing script: Memory Script
executing command: SET until clause
sql statement: alter tablespace USERS offline for recover
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 14-OCT-04
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to E:\ORACLE\ORADATA\hello2\SYSTEM01.DBF
restoring datafile 00002 to E:\ORACLE\ORADATA\hello2\UNDOTBS01.DBF
restoring datafile 00009 to E:\ORACLE\ORADATA\hello1\USERS01.DBF
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=C:\BACKUP\DATABASETEST\TEST_0AG2HB1G_1_1.BAK tag=TAG20041014T151351 params=NULL
channel ORA_AUX_DISK_1: restore complete
Finished restore at 14-OCT-04
datafile 9 switched to datafile copy
input datafilecopy recid=4 stamp=539537696 filename=E:\ORACLE\ORADATA\hello1\USERS01.DBF
sql statement: alter database datafile 1 online
sql statement: alter database datafile 2 online
sql statement: alter database datafile 9 online
Starting recover at 14-OCT-04
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 8 is already on disk as file E:\ORACLE\ORA92\RDBMS\ARC00008.001
channel ORA_AUX_DISK_1: starting archive log restore to default destination
channel ORA_AUX_DISK_1: restoring archive log
archive log thread=1 sequence=7
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=C:\BACKUP\ARCHIVELOG\%ORACLE_SID_DATA_DATA.BAK tag=TAG20041014T151534 params=NULL
channel ORA_AUX_DISK_1: restore complete
archive log filename=E:\ORACLE\ORA92\RDBMS\ARC00007.001 thread=1 sequence=7
archive log filename=E:\ORACLE\ORA92\RDBMS\ARC00008.001 thread=1 sequence=8
media recovery complete
Finished recover at 14-OCT-04
database opened
printing stored script: Memory Script
{
# export the tablespaces in the recovery set
host 'exp userid=\"sys/sys@hello2 as sysdba\" point_in_time_recover=y tablespaces=USERS file=tspitr_a.dmp';
# shutdown clone before import
shutdown clone immediate
# import the tablespaces in the recovery set
host 'imp userid=\"backup_admin/backup_admin@hello1 as sysdba\" point_in_time_recover=y file=tspitr_a.dmp';
# online/offline the tablespace imported
sql "alter tablespace USERS online";
sql "alter tablespace USERS offline";
# enable autobackups in case user does open resetlogs from RMAN after TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
# resync catalog after tspitr finished
resync catalog;
}
executing script: Memory Script
Export: Release 9.2.0.1.0 - Production on Thu Oct 14 15:35:22 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export Tablespace Point-in-time Recovery objects...
For tablespace USERS ...
. exporting cluster definitions
. exporting table definitions
. exporting referential integrity constraints
. exporting triggers
. end point-in-time recovery
Export terminated successfully without warnings.
host command complete
database closed
database dismounted
Oracle instance shut down
Import: Release 9.2.0.1.0 - Production on Thu Oct 14 15:35:41 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
About to import Tablespace Point-in-time Recovery objects...
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Import terminated successfully without warnings.
host command complete
sql statement: alter tablespace USERS online
sql statement: alter tablespace USERS offline
sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;
starting full resync of recovery catalog
full resync complete
Finished recover at 14-OCT-04
ANOTHER EXAMPLE OF TSPITR RECOVERY
-----------------------------------
RMAN> connect target backup_admin/backup_admin@hello1
connected to target database: hello1 (DBID=3960930000)
RMAN> connect catalog rman_bnfd/rman_bnfd@kodiak
connected to recovery catalog database
RMAN> connect auxiliary sys/sys@hello2
connected to auxiliary database: hello1 (not mounted)
RMAN> RECOVER tablespace testbnfd UNTIL TIME "to_date('14-10-2004 16:20:00','dd-mm-yyyy hh24:mi:ss')"
2> ;
Starting recover at 14-OCT-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=8 devtype=DISK
printing stored script: Memory Script
{
# set the until clause
set until time "to_date('14-10-2004 16:20:00','dd-mm-yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile to clone_cf;
# replicate the controlfile
replicate clone controlfile from clone_cf;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log for tspitr to a resent until time
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
# resync catalog after controlfile restore
resync catalog;
}
executing script: Memory Script
executing command: SET until clause
Starting restore at 14-OCT-04
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=12 devtype=DISK
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring controlfile
output filename=E:\ORACLE\ORADATA\hello2\CONTROL01.CTL
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=C:\BACKUP\CONTROLFILE\%ORACLE_SID_CTRL_DATE_C-3960930000-20041014-0B.BAK tag=null params=NULL
channel ORA_AUX_DISK_1: restore complete
Finished restore at 14-OCT-04
replicating controlfile
input filename=E:\ORACLE\ORADATA\hello2\CONTROL01.CTL
output filename=E:\ORACLE\ORADATA\hello2\CONTROL02.CTL
output filename=E:\ORACLE\ORADATA\hello2\CONTROL03.CTL
sql statement: alter database mount clone database
sql statement: alter system archive log current
sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
starting full resync of recovery catalog
full resync complete
printing stored script: Memory Script
{
# generated tablespace point-in-time recovery script
# set the until clause
set until time "to_date('14-10-2004 16:20:00','dd-mm-yyyy hh24:mi:ss')";
plsql <<<-- tspitr_2
declare
sqlstatement varchar2(512);
offline_not_needed exception;
pragma exception_init(offline_not_needed, -01539);
begin
sqlstatement := 'alter tablespace '|| 'TESTbnfd' ||' offline for recover';
krmicd.writeMsg(6162, sqlstatement);
krmicd.execSql(sqlstatement);
exception
when offline_not_needed then
null;
end; >>>;
# set a destination filename for restore
set newname for datafile 1 to
"E:\ORACLE\ORADATA\hello2\SYSTEM01.DBF";
# set a destination filename for restore
set newname for datafile 2 to
"E:\ORACLE\ORADATA\hello2\UNDOTBS01.DBF";
# set a destination filename for restore
set newname for datafile 11 to
"E:\ORACLE\ORADATA\hello1\TESTbnfd.ORA";
# restore the tablespaces in the recovery set plus the auxilliary tablespaces
restore clone datafile 1, 2, 11;
switch clone datafile all;
#online the datafiles restored or flipped
sql clone "alter database datafile 1 online";
#online the datafiles restored or flipped
sql clone "alter database datafile 2 online";
#online the datafiles restored or flipped
sql clone "alter database datafile 11 online";
# make the controlfile point at the restored datafiles, then recover them
recover clone database tablespace "TESTbnfd", "SYSTEM", "UNDOTBS1";
alter clone database open resetlogs;
# PLUG HERE the creation of a temporary tablespace if export fails due to lack
# of temporary space.
# For example in Unix these two lines would do that:
#sql clone "create tablespace aux_tspitr_tmp
# datafile ''/tmp/aux_tspitr_tmp.dbf'' size 500K";
}
executing script: Memory Script
executing command: SET until clause
sql statement: alter tablespace TESTbnfd offline for recover
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 14-OCT-04
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to E:\ORACLE\ORADATA\hello2\SYSTEM01.DBF
restoring datafile 00002 to E:\ORACLE\ORADATA\hello2\UNDOTBS01.DBF
restoring datafile 00011 to E:\ORACLE\ORADATA\hello1\TESTbnfd.ORA
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=C:\BACKUP\DATABASETEST\TEST_0JG2HEPM_1_1.BAK tag=TAG20041014T161758 params=NULL
channel ORA_AUX_DISK_1: restore complete
Finished restore at 14-OCT-04
datafile 11 switched to datafile copy
input datafilecopy recid=6 stamp=539547280 filename=E:\ORACLE\ORADATA\hello1\TESTbnfd.ORA
sql statement: alter database datafile 1 online
sql statement: alter database datafile 2 online
sql statement: alter database datafile 11 online
Starting recover at 14-OCT-04
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 14 is already on disk as file E:\ORACLE\ORA92\RDBMS\ARC00014.001
channel ORA_AUX_DISK_1: starting archive log restore to default destination
channel ORA_AUX_DISK_1: restoring archive log
archive log thread=1 sequence=13
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=C:\BACKUP\DATABASETEST\TEST_0NG2HEST_1_1.BAK tag=TAG20041014T161938 params=NULL
channel ORA_AUX_DISK_1: restore complete
archive log filename=E:\ORACLE\ORA92\RDBMS\ARC00013.001 thread=1 sequence=13
archive log filename=E:\ORACLE\ORA92\RDBMS\ARC00014.001 thread=1 sequence=14
media recovery complete
Finished recover at 14-OCT-04
database opened
printing stored script: Memory Script
{
# export the tablespaces in the recovery set
host 'exp userid=\"sys/sys@hello2 as sysdba\" point_in_time_recover=y tablespaces=TESTbnfd file=tspitr_a.dmp';
# shutdown clone before import
shutdown clone immediate
# import the tablespaces in the recovery set
host 'imp userid=\"backup_admin/backup_admin@hello1 as sysdba\" point_in_time_recover=y file=tspitr_a.dmp';
# online/offline the tablespace imported
sql "alter tablespace TESTbnfd online";
sql "alter tablespace TESTbnfd offline";
# enable autobackups in case user does open resetlogs from RMAN after TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
# resync catalog after tspitr finished
resync catalog;
}
executing script: Memory Script
Export: Release 9.2.0.1.0 - Production on Thu Oct 14 18:15:12 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export Tablespace Point-in-time Recovery objects...
For tablespace TESTbnfd ...
. exporting cluster definitions
. exporting table definitions
. . exporting table TEST1
. . exporting table TEST2
. exporting referential integrity constraints
. exporting triggers
. end point-in-time recovery
Export terminated successfully without warnings.
host command complete
database closed
database dismounted
Oracle instance shut down
Import: Release 9.2.0.1.0 - Production on Thu Oct 14 18:15:36 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
About to import Tablespace Point-in-time Recovery objects...
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing bnfd's objects into bnfd
. . importing table "TEST1"
. . importing table "TEST2"
. importing SYS's objects into SYS
Import terminated successfully without warnings.
host command complete
sql statement: alter tablespace TESTbnfd online
sql statement: alter tablespace TESTbnfd offline
sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;
starting full resync of recovery catalog
full resync complete
Finished recover at 14-OCT-04
RMAN>
AFTER THE TABLESPACE POINT-IN-TIME RECOVERY IS OVER, THE TABLESPACE SHOULD BE BROUGHT ONLINE FOR
FURTHER TRANSACTIONS.
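Bringing the recovered tablespace back online is a single statement on the target database:

```sql
SQL> ALTER TABLESPACE users ONLINE;
```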
CREATING A CLONE DATABASE (the hello1 database is duplicated to hello3). BEFORE CLONING THE
DATABASE, MAKE SURE THE UNTIL SCN IS SET TO THE SCN OF THE LAST ARCHIVED LOG. THIS INFORMATION
CAN BE OBTAINED WITH THE SQL STATEMENT SELECT * FROM V$DATABASE;
Connect to the auxiliary database before starting the database cloning.
----------------------------
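To pick a suitable UNTIL SCN, the current checkpoint SCN and the SCN ranges covered by the archived logs can be checked on the target first; for example:

```sql
SQL> SELECT checkpoint_change# FROM v$database;
SQL> SELECT sequence#, first_change#, next_change# FROM v$archived_log;
```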
RMAN> RUN
2> {
3> set until scn 2338189;
4> duplicate target database to hello3;
5> }
executing command: SET until clause
Starting Duplicate Db at 04-DEC-04
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=12 devtype=DISK
printing stored script: Memory Script
{
set until scn 2338189;
set newname for datafile 1 to
"E:\ORACLE\ORADATA\hello3\SYSTEM01.DBF";
set newname for datafile 2 to
"E:\ORACLE\ORADATA\hello3\UNDO01";
set newname for datafile 3 to
"E:\ORACLE\ORADATA\hello3\TEMP01";
set newname for datafile 4 to
"E:\ORACLE\ORADATA\hello3\bnfdTEST";
restore
check readonly
clone database
;
}
executing script: Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 04-DEC-04
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to E:\ORACLE\ORADATA\hello3\SYSTEM01.DBF
restoring datafile 00002 to E:\ORACLE\ORADATA\hello3\UNDO01
restoring datafile 00003 to E:\ORACLE\ORADATA\hello3\TEMP01
restoring datafile 00004 to E:\ORACLE\ORADATA\hello3\bnfdTEST
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=C:\BACKUP\DATABASETEST\TEST_37G6ON27_1_1.BAK tag=TAG20041203T232702
params=NULL
channel ORA_AUX_DISK_1: restore complete
Finished restore at 04-DEC-04
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "hello3" RESETLOGS ARCHIVELOG
MAXLOGFILES 40
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 16
MAXLOGHISTORY 1815
LOGFILE
GROUP 1 ( 'E:\ORACLE\ORADATA\hello3\REDO01.LOG' ) SIZE 5242880 REUSE,
GROUP 2 ( 'E:\ORACLE\ORADATA\hello3\REDO02.LOG' ) SIZE 5242880 REUSE,
GROUP 3 ( 'E:\ORACLE\ORADATA\hello3\REDO03.LOG' ) SIZE 5242880 REUSE
DATAFILE
'E:\ORACLE\ORADATA\hello3\SYSTEM01.DBF'
CHARACTER SET WE8ISO8859P1
printing stored script: Memory Script
{
switch clone datafile all;
}
executing script: Memory Script
datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=543977044 filename=E:\ORACLE\ORADATA\hello3\UNDO01
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=543977044 filename=E:\ORACLE\ORADATA\hello3\TEMP01
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=543977044 filename=E:\ORACLE\ORADATA\hello3\bnfdTEST
printing stored script: Memory Script
{
set until scn 2338189;
recover
clone database
delete archivelog
;
}
executing script: Memory Script
executing command: SET until clause
Starting recover at 04-DEC-04
using channel ORA_AUX_DISK_1
starting media recovery
archive log thread 1 sequence 4 is already on disk as file E:\ORACLE\ORA92\RDBMS\ARC00004.001
channel ORA_AUX_DISK_1: starting archive log restore to default destination
channel ORA_AUX_DISK_1: restoring archive log
archive log thread=1 sequence=3
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=C:\BACKUP\ARCHIVELOG\%ORACLE_SID_DATA_DATA_U%.BAK tag=TAG20041203T232756 params=NULL
channel ORA_AUX_DISK_1: restore complete
archive log filename=E:\ORACLE\ORA92\DATABASE\hello3ARC00003.001 thread=1 sequence=3
channel clone_default: deleting archive log(s)
archive log filename=E:\ORACLE\ORA92\DATABASE\hello3ARC00003.001 recid=1 stamp=543977050
archive log filename=E:\ORACLE\ORA92\RDBMS\ARC00004.001 thread=1 sequence=4
media recovery complete
Finished recover at 04-DEC-04
printing stored script: Memory Script
{
shutdown clone;
startup clone nomount ;
}
executing script: Memory Script
database closed
database dismounted
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 185670316 bytes
Fixed Size 453292 bytes
Variable Size 159383552 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "hello3" RESETLOGS ARCHIVELOG
MAXLOGFILES 40
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 16
MAXLOGHISTORY 1815
LOGFILE
GROUP 1 ( 'E:\ORACLE\ORADATA\hello3\REDO01.LOG' ) SIZE 5242880 REUSE,
GROUP 2 ( 'E:\ORACLE\ORADATA\hello3\REDO02.LOG' ) SIZE 5242880 REUSE,
GROUP 3 ( 'E:\ORACLE\ORADATA\hello3\REDO03.LOG' ) SIZE 5242880 REUSE
DATAFILE
'E:\ORACLE\ORADATA\hello3\SYSTEM01.DBF'
CHARACTER SET WE8ISO8859P1
printing stored script: Memory Script
{
catalog clone datafilecopy "E:\ORACLE\ORADATA\hello3\UNDO01";
catalog clone datafilecopy "E:\ORACLE\ORADATA\hello3\TEMP01";
catalog clone datafilecopy "E:\ORACLE\ORADATA\hello3\bnfdTEST";
switch clone datafile all;
}
executing script: Memory Script
cataloged datafile copy
datafile copy filename=E:\ORACLE\ORADATA\hello3\UNDO01 recid=1 stamp=543977210
cataloged datafile copy
datafile copy filename=E:\ORACLE\ORADATA\hello3\TEMP01 recid=2 stamp=543977210
cataloged datafile copy
datafile copy filename=E:\ORACLE\ORADATA\hello3\bnfdTEST recid=3 stamp=543977211
datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=543977210 filename=E:\ORACLE\ORADATA\hello3\UNDO01
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=543977210 filename=E:\ORACLE\ORADATA\hello3\TEMP01
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=543977211 filename=E:\ORACLE\ORADATA\hello3\bnfdTEST
printing stored script: Memory Script
{
Alter clone database open resetlogs;
}
executing script: Memory Script
database opened
Finished Duplicate Db at 04-DEC-04
Posted by Bis at 9:18 PM
opatch for an RDBMS ORACLE_HOME
Connect as the oracle user and set the environment for the instance.
1) Download the patch from updates.oracle.com to $ORACLE_HOME/patch
and unzip it.
While unzipping, note the directory it creates;
cd to that directory and copy the directory path.
2) Put perl on the PATH:
export PATH=/opt/perl/bin:$PATH
3) Apply the patch with this command:
opatch apply -invPtrLoc
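Put together, the steps above look roughly like this sketch. The patch number, zip file name, ORACLE_HOME path, and the oraInst.loc path passed to -invPtrLoc are all placeholders for your environment:

```
export ORACLE_HOME=/u01/app/oracle/product/9.2.0
export ORACLE_SID=hello1
cd $ORACLE_HOME/patch
unzip p1234567_92010.zip        # note the directory this creates
cd 1234567
export PATH=/opt/perl/bin:$ORACLE_HOME/OPatch:$PATH
opatch apply -invPtrLoc $ORACLE_HOME/oraInst.loc
```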
Optimizer Statistics & Index Usage!!!
Optimizer Statistics -
To Verify -
SQL> SELECT TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
FROM DBA_TABLES
WHERE TABLE_NAME IN ('SO_LINES_ALL','SO_HEADERS_ALL','SO_LAST_ALL');
TABLE_NAME NUM_ROWS BLOCKS AVG_ROW_LEN LAST_ANALYZED
------------ -------- ------- ----------- -------------
SO_HEADERS_ALL 1632264 209014 149 07/29/2009 00:59:51
SO_LINES_ALL 10493845 1922196 263 07/29/2009 01:16:09
SO_LAST_ALL
Note - SO_LAST_ALL has no statistics.
Verifying Index Statistics
To verify that index statistics are available and assist you in determining which are the best indexes to use in an application, execute the following statement against the dictionary DBA_INDEXES view:
SQL> SELECT INDEX_NAME "NAME", NUM_ROWS, DISTINCT_KEYS "DISTINCT",
LEAF_BLOCKS, CLUSTERING_FACTOR "CF", BLEVEL "LEVEL",
AVG_LEAF_BLOCKS_PER_KEY "ALFBPKEY"
FROM DBA_INDEXES
WHERE owner = 'SH'
ORDER BY INDEX_NAME;
NAME NUM_ROWS DISTINCT LEAF_BLOCKS CF LEVEL ALFBPKEY
------------------------------ ---------- ---------- ----------- ---------- ---------- ----------
CUSTOMERS_PK 50000 50000 454 4405 2 1
PRODUCTS_PK 10000 10000 90 1552 1 1
PRODUCTS_PROD_CAT_IX 10000 4 99 4422 1 24
PRODUCTS_PROD_SUBCAT_IX 10000 37 170 6148 2 4
SALES_PROD_BIX 6287 909 1480 6287 1 1
SALES_PROMO_BIX 4727 459 570 4727 1 1
6 rows selected.
Optimizer Index Determination Criteria
The optimizer uses the following criteria when determining which index to use:
Number of rows in the index (cardinality)
Number of distinct keys. These define the selectivity of the index.
Level or height of the index. This indicates how deeply the data 'probe' must search in order to find the data.
Number of leaf blocks in the index. This is the number of I/Os needed to find the desired rows of data.
Clustering factor (CF). This is the collocation amount of the index block relative to data blocks. The higher the CF, the less likely the optimizer is to select this index.
Average leaf blocks per key (ALFBPKEY). Average number of leaf blocks in which each distinct value in the index appears, rounded to the nearest integer. For indexes that enforce UNIQUE and PRIMARY KEY constraints, this value is always one.
Determining if You Have Chosen the Right Index
Use the following notes to assist you in deciding whether you have chosen an appropriate index for a table, data, and query:
DISTINCT
Consider index ap_invoices_n3, where the number of distinct keys is two. The resulting selectivity based on index ap_invoices_n3 is poor, and the optimizer is not likely to use it: using this index would fetch 50% of the data in the table. In this case, a full table scan is cheaper than using index ap_invoices_n3.
Index Cost Tie
The optimizer breaks ties alphabetically: if it determines that the selectivity, cost, and cardinality of two finalist indexes are the same, it uses the two indexes' names as the deciding factor, choosing the index whose name comes first in alphanumeric order.
Verifying Column Statistics
To verify that column statistics are available, execute the following statement against the dictionary's DBA_TAB_COL_STATISTICS view:
SQL> SELECT COLUMN_NAME, NUM_DISTINCT, NUM_NULLS, NUM_BUCKETS, DENSITY
FROM DBA_TAB_COL_STATISTICS
WHERE TABLE_NAME = 'PA_EXPENDITURE_ITEMS_ALL'
ORDER BY COLUMN_NAME;
This returns the following data:
COLUMN_NAME NUM_DISTINCT NUM_NULLS NUM_BUCKETS DENSITY
------------------------------ ------------ ---------- ----------- ----------
BURDEN_COST 4300 71957 1 .000232558
BURDEN_COST_RATE 675 7376401 1 .001481481
CONVERTED_FLAG 1 16793903 1 1
COST_BURDEN_DISTRIBUTED_FLAG 2 15796 1 .5
COST_DISTRIBUTED_FLAG 2 0 1 .5
COST_IND_COMPILED_SET_ID 87 6153143 1 .011494253
EXPENDITURE_ID 1171831 0 1 8.5337E-07
TASK_ID 8648 0 1 .000115634
TRANSFERRED_FROM_EXP_ITEM_ID 1233787 15568891 1 8.1051E-07
Verifying column statistics are important for the following conditions:
Join conditions
When the WHERE clause includes a column(s) with a bind variable; for example:
column x = :variable_y
In these cases, the stored column statistics can be used to get a representative cardinality estimation for the given expression.
Consider the data returned in the previous example.
NUM_DISTINCT Column Statistic
Low
The number of distinct values for the column CONVERTED_FLAG is one; that is, the column holds only a single value. If the WHERE clause contains a bind variable on this column, say CONVERTED_FLAG = :variable_y, such a low NUM_DISTINCT leads to poor selectivity, and CONVERTED_FLAG is a poor candidate for an index.
Column COST_BURDEN_DISTRIBUTED_FLAG: NUM_DISTINCT = 2. Likewise, this is low. COST_BURDEN_DISTRIBUTED_FLAG is not a good candidate for an index unless there is much skew or there are a lot of nulls. If there is data skew of, say, 90%, then 90% of the data has one particular value and 10% of the data has another value. If the query only needs to access the 10%, then a histogram is needed on that column in order for the optimizer to recognize the skew and use an index on this column.
High
NUM_DISTINCT is more than 1 million for column EXPENDITURE_ID. If there is a bind variable on column EXPENDITURE_ID, then this leads to good selectivity (each value matches only a tiny fraction of the rows, as the very low DENSITY shows). In other words, EXPENDITURE_ID is a good candidate for an index.
NUM_NULL Column Statistic
NUM_NULLS indicates the number of rows in which the column is null.
Low
For example, if a column has few nulls, such as the COST_DISTRIBUTED_FLAG column (NUM_NULLS = 0), and this column is indexed, then the index covers nearly every row and the resulting data set is large.
High
If there are many nulls on a particular column, such as the CONVERTED_FLAG column, and this column is indexed, then the resulting data set is small (nulls are not stored in a single-column B-tree index). This makes CONVERTED_FLAG the more appropriate column to index of the two.
DENSITY Column Statistic
This indicates the density of the values of that column, calculated as 1/NUM_DISTINCT (when no histogram is present; note NUM_BUCKETS = 1 for every column above).
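Since DENSITY here is just 1/NUM_DISTINCT, the values in the listing above can be reproduced with a line of awk; this is plain arithmetic, no database needed:

```shell
# Recompute DENSITY = 1 / NUM_DISTINCT for two columns from the listing
awk 'BEGIN {
    printf "BURDEN_COST    %.9f\n", 1 / 4300      # matches .000232558
    printf "EXPENDITURE_ID %.4E\n",  1 / 1171831  # matches 8.5337E-07
}'
```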
Column Statistics and Join Methods
Column statistics are useful to help determine the most efficient join method, which, in turn, is also based on the number of rows returned.
Posted by Bis at 8:28 PM | 1 comments
Wednesday, December 16, 2009
How to determine which Unix shell you are using
> env | grep SHELL
-or-
> echo $SHELL
-or-
> ps -f ....Will provide a full listing of processes associated
with the current terminal, one of which will be the
shell process.
-or-
> setenv ....On a C shell this will return the current
environment, while other shells will return an error.
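A more direct variant of the `ps -f` approach, shown here as a sketch: ask ps for the command name of the current shell process itself ($$ expands to the shell's own PID). Unlike $SHELL, which reports your login shell, this reports the shell you are actually running in:

```shell
# Print the name of the shell process we are running in right now
ps -p $$ -o comm=
```

For example, if your login shell is ksh but you have started a csh subshell, echo $SHELL still says ksh while this prints csh.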
Please Note:
The following examples use 'ORACLE_HOME' as the variable name.
BOURNE SHELL(sh):
-----------------
To set environment variables within a Bourne Shell (sh), the variable must be
initialized locally, then exported globally:
> ORACLE_HOME=/u01/app/oracle/product/8.1.7
...defines ORACLE_HOME locally to the shell
> export ORACLE_HOME
...makes it globally available to other processes started from
this shell
To have a variable set automatically when you log into the Bourne Shell of your
Unix server:
Add the above lines (minus the '>' prompt) to the hidden '.profile'
file in your $HOME directory.
If you make changes to your '.profile' and want those changes propagated to
your current running environment (without having to log out, then back in):
> cd $HOME
> . .profile
To unset environment variables within a Bourne Shell (sh):
> unset ORACLE_HOME
To check what an environment variable is set to:
> env | grep ORACLE_HOME
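The whole Bourne-shell lifecycle above (set locally, export, check, unset) can be exercised in one short script; ORACLE_HOME and its path are just the example values from this post:

```shell
#!/bin/sh
ORACLE_HOME=/u01/app/oracle/product/8.1.7   # defined locally to the shell
export ORACLE_HOME                          # now visible to child processes
env | grep ORACLE_HOME                      # check the exported value
unset ORACLE_HOME                           # remove it again
env | grep ORACLE_HOME || echo "ORACLE_HOME is unset"
```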
KORN SHELL(ksh):
----------------
To set environment variables within a Korn Shell (ksh), you can use the Bourne
syntax shown above, or use the streamlined Korn Shell syntax:
> export ORACLE_HOME=/u01/app/oracle/product/8.1.7
To have a variable set automatically when you log into the Korn Shell of your
Unix server:
Add the above lines (minus the '>' prompt) to the hidden '.profile'
file in your $HOME directory.
If you make changes to your '.profile' and want those changes propagated to
your current running environment (without having to log out, then back in):
> cd $HOME
> . .profile
To unset environment variables within a Korn Shell (ksh), use the same syntax
as you would in a Bourne Shell (sh):
> unset ORACLE_HOME
To check what an environment variable is set to:
> env | grep ORACLE_HOME
C SHELL(csh):
-------------
To set environment variables within a C Shell (csh):
> setenv ORACLE_HOME /u01/app/oracle/product/11.1.0
To unset environment variables within a C Shell (csh):
> unsetenv ORACLE_HOME
Posted by Bis at 1:29 PM | 1 comments
Wednesday, April 1, 2009
DBMS_STATS
partname VARCHAR2 DEFAULT NULL,
estimate_percent NUMBER DEFAULT NULL,
block_sample BOOLEAN DEFAULT FALSE,
method_opt VARCHAR2 DEFAULT 'FOR ALL COLUMNS SIZE 1',
degree NUMBER DEFAULT NULL,
granularity VARCHAR2 DEFAULT 'DEFAULT',
cascade BOOLEAN DEFAULT FALSE,
stattab VARCHAR2 DEFAULT NULL,
statid VARCHAR2 DEFAULT NULL,
statown VARCHAR2 DEFAULT NULL,
no_invalidate BOOLEAN DEFAULT FALSE
EXEC DBMS_STATS.gather_schema_stats (ownname => 'SCOTT', cascade =>true,estimate_percent => dbms_stats.auto_sample_size);
Posted by Bis at 3:33 PM | 2 comments | Labels: DBMS_stats
Tuesday, March 31, 2009
CODE OPTIMISATION TIPS
1. Avoid using functions on indexed columns.
Example: where to_char(emp_no) = '121212'
The index on emp_no will be suppressed.
Solution: where emp_no = to_number('121212')
2. Comparison of wrong data types suppresses the index internally.
Example: where char_data = 123
The index on char_data will be suppressed, as Oracle re-writes the query.
Solution: avoid such comparisons; where char_data = to_char(123)
3. Oracle does not make use of an index on a column if NOT IN is present.
Example: select * from emp where deptno not in (select deptno from dept where deptstatus = 'A')
Solution: use NOT EXISTS instead of NOT IN.
Example: select * from emp where not exists (select 'X' from dept where deptstatus = 'A' and dept.deptno = emp.deptno)
4. With HAVING, the index on brand will not be used.
Example: select brand, sum(price) from products group by brand having brand = 'Arrow';
Solution: replace HAVING with a WHERE clause wherever possible.
Example: select brand, sum(price) from products where brand = 'Arrow' group by brand;
5. Use of nested statements slows down query execution.
Example: select count(*) from products where itemcode in (select itemcode from clothing where material = 'Cotton');
Solution: avoid nested selects.
Example: select count(*) from products P, clothing C where P.barcode = C.barcode and C.material = 'Cotton';
6. Use of NVL() suppresses index usage.
Example: where NVL(this_value, 99) = other_value
Solution: try to re-write NVLs.
Example: where (this_value is null and other_value = 99) OR (this_value is not null and this_value = other_value)
7. Composite indexes are not used if the leading indexed column is not present in the WHERE clause.
Example: select count(*) from products where price <>
Solution: the leading column of the index must be present in the WHERE clause in order to hit the index.
Example: select count(*) from products where brand > '0' and price <>
Posted by Bis at 9:27 AM | 0 comments | Labels: Tuning
Sunday, March 29, 2009
rdbms patch
$ /usr/bin/perl $ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc /home/orart02
OPatch Version: 10.2.0.3.0
OPatch succeeded.
Posted by Bis at 3:25 PM | 0 comments | Labels: Patching
Basic checks for a new Database
When you are given a 'USER NAME' and 'PASSWORD' for a new database in a new environment, the basic things you need to check are below.
Go to $ORACLE_HOME/dbs and look at the parameter file (PFILE or SPFILE).
$ uname -a
$ echo $ORACLE_HOME
$ echo $ORACLE_SID
$ ps -ef | grep pmon --- shows all the running databases
user02 319658 1 0 Feb 24 - 3:37 ora_pmon_sun
user02 663636 1 0 Feb 28 - 3:52 ora_pmon_MOON
user02 2265204 1224816 0 12:56:55 pts/0 0:00 grep pmon
$ env --- to see all the environment variables
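The checks above can be bundled into one small script; it only reads the environment and process list, so it is safe to run anywhere (the SID and paths it prints depend entirely on the host):

```shell
#!/bin/sh
# Basic first-look checks on a new database host
uname -a
echo "ORACLE_HOME=${ORACLE_HOME:-<not set>}"
echo "ORACLE_SID=${ORACLE_SID:-<not set>}"
# one ora_pmon_<SID> line per running instance; [p] keeps grep from matching itself
ps -ef | grep '[p]mon' || echo "no pmon processes found"
```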
$sqlplus "/as sysdba"
SQL> select name from v$database;
NAME
---------
RT02
SQL> select status from v$instance;
STATUS
------------
OPEN
SQL> show parameter control
NAME TYPE VALUE
control_file_record_keep_time integer 7
control_files string /apps/user02/oracle/admin/cntrl_rt02.dbf, /apps/user02/oracle/admin/cnrtl_rt02.dbf, /apps/user02/oracle/admin/cntrl_rt03.dbf
SQL>show parameter dump;
NAME TYPE VALUE
background_core_dump string partial
background_dump_dest string /apps/user02/oracle/admin/bdump
core_dump_dest string /apps/user02/oracle/admin/cdump
max_dump_file_size string UNLIMITED
shadow_core_dump string partial
user_dump_dest string /apps/user02/oracle/admin/udump
SQL> select * from v$log;
SQL> select * from v$logfile;
SQL> show parameter undo
NAME TYPE VALUE
undo_management string AUTO
undo_retention integer 3600
undo_tablespace string undo_ts
SQL> select file_name,sum(bytes)/1024/1024 from dba_data_files where tablespace_name='UNDO_TS' group by file_name;
SQL> col FILE_NAME for a60
SQL>set lines 200
SQL> /
SQL> select FILE_NAME, TABLESPACE_NAME,(bytes)/1024/1024 from dba_data_files;
select tablespace_name,sum(bytes)/1024/1024 from dba_free_space where tablespace_name='SYSTEM' group by tablespace_name;
SQL> select AUTOEXTENSIBLE,FILE_NAME,TABLESPACE_NAME from dba_data_files where tablespace_name='&TSNAME';
Enter value for tsname: RETEK_DATA
=========
Examples -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp01.dbf' SIZE 8000M AUTOEXTEND OFF;
Added a new file -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp02.dbf' SIZE 8000M AUTOEXTEND OFF;
Reused an existing tempfile which had come from a clone -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data01/temp02.dbf' SIZE 16000M REUSE AUTOEXTEND OFF;
Posted by
Bis
at
2:51 PM
0
comments
Labels: Fundamentals
Thursday, March 26, 2009
Data guard Creation
1) Ensure the Primary database is in ARCHIVELOG mode:
SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Current log sequence 9
SQL> alter database close;
Database altered.
SQL> alter database archivelog;
Database altered.
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Modify the Primary database init.ora so that log_archive_start=true and restart
the instance. Verify that database is in archive log mode and that automatic
archiving is enabled.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Next log sequence to archive 9
Current log sequence 9
2) Create a backup of the Primary database:
You can use an existing backup of the Primary database as long as you have the
archive logs that have been generated since that backup. You may also take a
hot backup as long as you have all archive logs through the end of the backup
of the last tablespace. To create a cold backup do the following:
SQL>
SQL> select name from v$datafile;
NAME
----------------------------------------------------------------------
/export/home/oracle/temp/oracle/data/sys.dbf
SQL>
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Create a backup of all datafiles and online redo logs using an OS command or
utility. A backup of the online redo logs is necessary to facilitate switchover.
Once complete startup the instance:
SQL> startup
ORACLE instance started.
Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
Database mounted.
Database opened.
3) Connect to the primary database and create the standby control file:
SQL> alter database create standby controlfile as
'/export/home/oracle/temp/oracle/data/backup/standby.ctl';
Database altered.
4) Copy files to the Standby host:
Copy the backup datafiles, standby controlfile, all available archived redo logs,
and online redo logs from the primary site to the standby site. Copying of the
online redo logs is necessary to facilitate switchover.
If the standby is on a separate site with the same directory structure as the
primary database then you can use the same path names for the standby files as
the primary files. In this way, you do not have to rename the primary datafiles
in the standby control file. If the standby is on the same site as the primary
database, or the standby database is on a separate site with a different
directory structure, then you must rename the primary datafiles in the standby
control file after copying them to the standby site. This can be done using
the db_file_name_convert and log_file_name_convert parameters, or manually
using ALTER DATABASE RENAME FILE statements.
5) Set the initialization parameters for the primary database:
It is suggested that you maintain two init.ora files on both the primary and the
standby. This makes it easier to facilitate role reversal during switchover
operations.
Primary init.ora on Primary host:
log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
remote_archive_enable=true
Create the standby initialization parameter file and set the initialization
parameters for the standby database. Depending on your configuration, you may
need to set filename conversion parameters.
Standby init.ora on Primary host:
log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_state_1=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
standby_archive_dest=/export/home/oracle/temp/oracle/arch
standby_file_management=auto
fal_server=DGD01_hasunclu2
fal_client=DGD01_hasunclu1
remote_archive_enable=true
NOTE: In the above example db_file_name_convert and log_file_name_convert are
not needed as the directory structure on the two hosts are the same. If the
directory structure is not the same then setting of these parameters is
recommended. Please reference notes 47325.1 and 47343.1 for further
information.
Copy the two init.ora files from the Primary host to the Standby host. You must
modify the Primary init.ora on the Standby host to have log_archive_dest_2 use
the alias that points to the Primary host (ie DGD01_hasunclu1). You must
modify the Standby init.ora on the standby host to have fal_server and
fal_client use the aliases when standby is running on the Primary host
(ie fal_server=DGD01_hasunclu1 and fal_client=DGD01_hasunclu2).
6) Configure networking components:
On the Primary host create a net service name that the Primary database can
use to connect to the Standby database. On the Primary host create a net
service name that Standby, when running on the Primary host, can use to
connect to the Primary, when it is running on the Standby host. Example from
Primary’s host tnsnames.ora:
DGD01_hasunclu1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu1)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)
DGD01_hasunclu2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)
The above two net service names must exist in the Standby hosts tnsnames.ora
also.
You must also configure a listener on the standby database. If you plan to
manage this standby database using the Data Guard broker, you must configure
the listener to use the TCP/IP protocol and statically register the standby
database service using its SID.
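As a sketch of that static registration, the standby host's listener.ora could contain an entry like the following; the SID and host name follow the tnsnames.ora entries above, while the ORACLE_HOME path is only an assumed example:

```
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DGD01)
      (ORACLE_HOME = /u01/app/oracle/product/9.0.1)
    )
  )
```

After editing listener.ora, stop and start the listener (lsnrctl stop; lsnrctl start) so the static entry takes effect, as described in step 10.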
7) Start the standby instance and mount the standby database.
oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as sysdba"
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Mar 14 18:00:57 2002
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile=?/dbs/initDGD.ora
ORACLE instance started.
Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
SQL> alter database mount standby database;
Database altered.
SQL>
8) Create standby redo log files, if necessary:
Standby redo logs are necessary for the higher protection levels such as
Guaranteed, Instant, and Rapid. In these protection modes LGWR from the
Primary host writes transactions directly to the standby redo logs.
This enables no data loss solutions and reduces the amount of data loss
in the event of failure. Standby redo logs are not necessary if you are using
the delayed protection mode.
If you configure standby redo on the standby then you should also configure
standby redo logs on the primary database. Even though the standby redo logs
are not used when the database is running in the primary role, configuring
the standby redo logs on the primary database is recommended in preparation
for an eventual switchover operation.
Standby redo logs must be archived before the data can be applied to the
standby database. The standby archival operation occurs automatically, even if
the standby database is not in ARCHIVELOG mode. However, the archiver process
must be started on the standby database. Note that the use of the archiver
process (ARCn) is a requirement for selection of a standby redo log.
You must have the same number of standby redo logs on the standby as you have
online redo logs on production. They must also be exactly the same size.
The following syntax is used to create standby redo logs:
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;
Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;
Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;
Database altered.
SQL>
9) Manually change the names of the primary datafiles and redo logs in the
standby control file for all files not automatically renamed using
DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT as noted in step 5. Datafile
names can be changed on the standby at a mounted state with the following
syntax:
SQL> alter database rename file
2 '/export/home/oracle/temp/oracle/data/sys.dbf'
3 to
4 '/export/home/oracle/temp/oracle/data2/sys.dbf';
10) Stop and restart the listeners:
Stop and restart the listener on the primary database, and start the listener
on the standby database, so that the changes made to listener.ora take effect.
11) Activate parameter changes:
Manually enable initialization parameter changes on the primary database
so that it can initiate archiving to the standby site.
At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed
using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to
these parameters by either bouncing the instance or activating via alter system.
For example:
SQL> alter system set log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60';
System altered.
12) Verify that automatic archiving is occurring:
On the Primary database, switch a log and verify that it has been shipped
properly using the v$archive_dest view.
SQL> alter system switch logfile;
System altered.
SQL> select status, error from v$archive_dest where dest_id=2;
STATUS ERROR
--------- -------------------------------------------------------
VALID
SQL>
13) Optionally place Standby database in managed recovery:
SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> exit
Posted by
Bis
at
1:48 PM
0
comments
Labels: Data Guard