BNFD - Your DBA NoteBook On The Net!!!

Tuesday, March 31, 2009

CODE OPTIMISATION TIPS

1. Avoid using functions on indexed columns

Example:

where to_char(emp_no) = '121212'
The index on emp_no will be suppressed.

Solution: where emp_no = to_number('121212')

2. Comparing the wrong data types suppresses the index internally.

where char_data = 123
The index on char_data will be suppressed because Oracle rewrites the query with an implicit conversion.
Solution:
Avoid such implicit conversions by matching the data types explicitly:
where char_data = to_char(123)

3. Oracle does not make use of an index on a column if NOT IN is present.
Example:
select * from emp where deptno not in (select deptno from dept where deptstatus = 'A')
Solution:
Use NOT EXISTS instead of NOT IN.
Example: select * from emp where not exists (select 'X' from dept where deptstatus = 'A' and dept.deptno = emp.deptno);

4. Filtering a non-aggregate column in the HAVING clause prevents the index on brand from being used, because rows are filtered only after grouping.
Example:
select brand, sum(price) from products group by brand having brand = 'Arrow';
Solution:
Replace HAVING with a WHERE clause wherever possible.
Example:
select brand, sum(price) from products where brand = 'Arrow' group by brand;

5. Use of nested subqueries can slow down query execution.
Example:
select count(*) from products where itemcode in (select itemcode from clothing where material = 'Cotton');
Solution:
Avoid nested selects; use a join instead.
Example:
select count(*) from products P, clothing C
where P.itemcode = C.itemcode and C.material = 'Cotton';

6. Use of NVL() suppresses index usage.
Example:
where NVL(this_value, 99) = other_value
Try to rewrite NVLs.
Example:
where (this_value is null and other_value = 99) OR
(this_value is not null and this_value = other_value)

7. Composite indexes are not used if the leading indexed column is not present in the WHERE clause.
Example (assume a composite index on brand and price, and a filter on price alone, e.g. price < 100):
select count(*) from products where price < 100;

Solution: The leading column of the index must be present in the WHERE clause in order to hit the index.

select count(*) from products where brand > '0' and price < 100;
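
For any of the rewrites above, it is worth confirming that the execution plan actually changed. A minimal check with EXPLAIN PLAN (the table and literal are reused from tip 1):

SQL> explain plan for select * from emp where emp_no = to_number('121212');

SQL> select * from table(dbms_xplan.display);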

Sunday, March 29, 2009

rdbms patch

Check which rdbms patches are applied in your database -

$ /usr/bin/perl $ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInst.loc

$ opatch version
Invoking OPatch 10.2.0.3.0
OPatch Version: 10.2.0.3.0
OPatch succeeded.

$ which opatch
/apps/rt02/oracle/product/10201/OPatch/opatch

$ which perl
/usr/bin/perl

$ /usr/bin/perl /apps/rt02/oracle/product/10201/OPatch/opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInst.loc
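
On 10gR2 you can also query the patch history recorded inside the database itself (the view is populated by the patch scripts, so it may be empty on a freshly created database):

SQL> select action_time, action, version, comments from dba_registry_history;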

Basic checks for a new Database

When you are given a user name and password for a new database in a new environment, the basic things you need to check are below.

Go to $ORACLE_HOME/dbs and check the parameter file (PFILE/SPFILE).

$ uname -a

$ echo $ORACLE_HOME

$ echo $ORACLE_SID

$ ps -ef | grep pmon --- shows all the running databases
user02 319658 1 0 Feb 24 - 3:37 ora_pmon_sun
user02 663636 1 0 Feb 28 - 3:52 ora_pmon_MOON
user02 2265204 1224816 0 12:56:55 pts/0 0:00 grep pmon

$ env --- to see all the environment variables

$ sqlplus "/ as sysdba"

SQL> select name from v$database;
NAME
---------
RT02

SQL> select status from v$instance;
STATUS
------------
OPEN

SQL> show parameter control

NAME                           TYPE     VALUE
control_file_record_keep_time  integer  7
control_files                  string   /apps/user02/oracle/admin/cntrl_rt02.dbf, /apps/user02/oracle/admin/cnrtl_rt02.dbf, /apps/user02/oracle/admin/cntrl_rt03.dbf

SQL> show parameter dump

NAME                  TYPE    VALUE
background_core_dump  string  partial
background_dump_dest  string  /apps/user02/oracle/admin/bdump
core_dump_dest        string  /apps/user02/oracle/admin/cdump
max_dump_file_size    string  UNLIMITED
shadow_core_dump      string  partial
user_dump_dest        string  /apps/user02/oracle/admin/udump

SQL> select * from v$log;

SQL> select * from v$logfile;

SQL> show parameter undo

NAME             TYPE     VALUE
undo_management  string   AUTO
undo_retention   integer  3600
undo_tablespace  string   undo_ts

SQL> select file_name,sum(bytes)/1024/1024 from dba_data_files where tablespace_name='UNDO_TS' group by file_name;

SQL> col FILE_NAME for a60
SQL> set lines 200
SQL> select FILE_NAME, TABLESPACE_NAME, bytes/1024/1024 from dba_data_files;
SQL> select tablespace_name, sum(bytes)/1024/1024 from dba_free_space where tablespace_name='SYSTEM' group by tablespace_name;

SQL> select AUTOEXTENSIBLE,FILE_NAME,TABLESPACE_NAME from dba_data_files where tablespace_name='&TSNAME';
Enter value for tsname: RETEK_DATA
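
Before adding tempfiles (see the examples below), check what is already there:

SQL> select file_name, bytes/1024/1024 as mb, autoextensible from dba_temp_files;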
=========
Examples -

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp01.dbf' SIZE 8000M AUTOEXTEND OFF;

Added a new file -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp02.dbf' SIZE 8000M AUTOEXTEND OFF;

Reused an existing tempfile which came from a clone -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data01/temp02.dbf' SIZE 16000M REUSE AUTOEXTEND OFF;

Thursday, March 26, 2009

Data guard Creation


1) Ensure the Primary database is in ARCHIVELOG mode:


SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Current log sequence 9
SQL> alter database close; 

Database altered.

SQL> alter database archivelog; 

Database altered.

SQL> shutdown immediate 
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

Modify the Primary database init.ora so that log_archive_start=true and restart 
the instance. Verify that database is in archive log mode and that automatic 
archiving is enabled.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Next log sequence to archive 9
Current log sequence 9

2) Create a backup of the Primary database:

You can use an existing backup of the Primary database as long as you have the 
archive logs that have been generated since that backup. You may also take a 
hot backup as long as you have all archive logs through the end of the backup 
of the last tablespace. To create a cold backup do the following: 

SQL> 
SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------
/export/home/oracle/temp/oracle/data/sys.dbf

SQL>
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Create a backup of all datafiles and online redo logs using an OS command or 
utility. A backup of the online redo logs is necessary to facilitate switchover.
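
For example, a plain OS copy (the backup directory comes from step 3 below; the *.log pattern for the online redo logs is an assumption, adjust it to your file layout):

$ cp /export/home/oracle/temp/oracle/data/*.dbf /export/home/oracle/temp/oracle/data/backup/
$ cp /export/home/oracle/temp/oracle/data/*.log /export/home/oracle/temp/oracle/data/backup/   # online redo logs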

Once complete, start up the instance:

SQL> startup
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
Database mounted.
Database opened.

3) Connect to the primary database and create the standby control file:

SQL> alter database create standby controlfile as 
'/export/home/oracle/temp/oracle/data/backup/standby.ctl';

Database altered.

4) Copy files to the Standby host:

Copy the backup datafiles, standby controlfile, all available archived redo logs,
and online redo logs from the primary site to the standby site. Copying of the
online redo logs is necessary to facilitate switchover.

If the standby is on a separate site with the same directory structure as the 
primary database then you can use the same path names for the standby files as 
the primary files. In this way, you do not have to rename the primary datafiles 
in the standby control file. If the standby is on the same site as the primary 
database, or the standby database is on a separate site with a different 
directory structure, then you must rename the primary datafiles in the standby 
control file after copying them to the standby site. This can be done using 
the db_file_name_convert and log_file_name_convert parameters or manually 
using ALTER DATABASE RENAME FILE statements.
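
A minimal sketch of the copy, assuming the directory structures match and using the hostnames from this example:

$ scp /export/home/oracle/temp/oracle/data/backup/* oracle@hasunclu2:/export/home/oracle/temp/oracle/data/

Place the standby controlfile wherever the control_files parameter of the standby instance points.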

5) Set the initialization parameters for the primary database:

It is suggested that you maintain two init.ora files on both the primary and the 
standby. This allows you to facilitate role reversal during switchover 
operations more easily.

Primary init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
remote_archive_enable=true

Create the standby initialization parameter file and set the initialization 
parameters for the standby database. Depending on your configuration, you may 
need to set filename conversion parameters.

Standby init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_state_1=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
standby_archive_dest=/export/home/oracle/temp/oracle/arch
standby_file_management=auto
fal_server=DGD01_hasunclu2
fal_client=DGD01_hasunclu1
remote_archive_enable=true

NOTE: In the above example db_file_name_convert and log_file_name_convert are 
not needed as the directory structure on the two hosts is the same. If the 
directory structure is not the same then setting these parameters is 
recommended. Please reference notes 47325.1 and 47343.1 for further 
information.
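
If the directory structures did differ, the standby init.ora might carry something like the following (the data/data2 paths are illustrative, taken from the rename example in step 9):

db_file_name_convert=('/export/home/oracle/temp/oracle/data','/export/home/oracle/temp/oracle/data2')
log_file_name_convert=('/export/home/oracle/temp/oracle/data','/export/home/oracle/temp/oracle/data2')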

Copy the two init.ora files from the Primary host to the Standby host. You must 
modify the Primary init.ora on the Standby host to have log_archive_dest_2 use 
the alias that points to the Primary host (i.e. DGD01_hasunclu1). You must 
modify the Standby init.ora on the Standby host to have fal_server and 
fal_client use the appropriate aliases when the standby is running on the 
Primary host (i.e. fal_server=DGD01_hasunclu1 and fal_client=DGD01_hasunclu2).

6) Configure networking components:

On the Primary host create a net service name that the Primary database can 
use to connect to the Standby database. On the Primary host create a net 
service name that Standby, when running on the Primary host, can use to 
connect to the Primary, when it is running on the Standby host. Example from 
the Primary host's tnsnames.ora:

DGD01_hasunclu1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = DGD01)
      (SERVER = DEDICATED)
    )
  )

DGD01_hasunclu2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = DGD01)
      (SERVER = DEDICATED)
    )
  )

The above two net service names must exist in the Standby host's tnsnames.ora 
as well.

You must also configure a listener on the standby database. If you plan to 
manage this standby database using the Data Guard broker, you must configure 
the listener to use the TCP/IP protocol and statically register the standby 
database service using its SID.


7) Start the standby instance and mount the standby database.

oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as sysdba" 

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Mar 14 18:00:57 2002

(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=?/dbs/initDGD.ora
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
SQL> alter database mount standby database;

Database altered.

SQL>

8) Create standby redo log files, if necessary:

Standby redo logs are necessary for the higher protection levels such as 
Guaranteed, Instant, and Rapid. In these protection modes LGWR from the 
Primary host writes transactions directly to the standby redo logs. 
This enables no data loss solutions and reduces the amount of data loss 
in the event of failure. Standby redo logs are not necessary if you are using 
the delayed protection mode. 

If you configure standby redo on the standby then you should also configure 
standby redo logs on the primary database. Even though the standby redo logs
are not used when the database is running in the primary role, configuring 
the standby redo logs on the primary database is recommended in preparation 
for an eventual switchover operation. 

Standby redo logs must be archived before the data can be applied to the 
standby database. The standby archival operation occurs automatically, even if 
the standby database is not in ARCHIVELOG mode. However, the archiver process 
must be started on the standby database. Note that the use of the archiver 
process (ARCn) is a requirement for selection of a standby redo log.

You must have the same number of standby redo logs on the standby as you have 
online redo logs on production. They must also be exactly the same size.
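
Check the number and size of the online redo logs on the primary first, so the standby redo logs can be created to match:

SQL> select group#, bytes/1024/1024 as mb from v$log;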

The following syntax is used to create standby redo logs:

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;

Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;

Database altered.

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;

Database altered.

SQL>

9) Manually change the names of the primary datafiles and redo logs in the 
standby control file for all files not automatically renamed using 
DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT as noted in step 5. Datafile 
names can be changed on the standby in the mounted state with the following 
syntax:

SQL> alter database rename file
2 '/export/home/oracle/temp/oracle/data/sys.dbf'
3 to
4 '/export/home/oracle/temp/oracle/data2/sys.dbf';

10) Stop and restart the listeners:

Stop and restart the listener on the primary database, and start the listener 
on the standby database, so that changes made to the listener.ora can be 
implemented.

11) Activate parameter changes:

Manually enable initialization parameter changes on the primary database 
so that it can initiate archiving to the standby site.

At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed 
using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to 
these parameters by either bouncing the instance or activating via alter system. 
For example:

SQL> alter system set log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60';

System altered.

12) Verify that automatic archiving is occurring:

On the Primary database switch a logfile and verify that it has been shipped 
properly using the v$archive_dest view.

SQL> alter system switch logfile;

System altered.

SQL> select status, error from v$archive_dest where dest_id=2;

STATUS ERROR
--------- -------------------------------------------------------
VALID

SQL>
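
You can also compare the highest archived sequence on the primary and the standby; the two numbers should track each other:

SQL> select max(sequence#) from v$archived_log;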

13) Optionally place Standby database in managed recovery:

SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> exit
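
To confirm that managed recovery is running on the standby, look for the MRP process:

SQL> select process, status, sequence# from v$managed_standby;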

Reference: Metalink document Note:180031.1

Wednesday, March 25, 2009

Default USERS

SYS
The SYS user owns all base tables and user-accessible views of the data dictionary (Oracle configuration information). No Oracle user should ever alter (update, delete, or insert) any rows or schema objects contained in the SYS schema, because such activity can compromise data integrity. The security administrator must keep strict control of this central account.
SYSTEM
The SYSTEM user is used to create additional tables and views that display administrative information, and internal tables and views used by various Oracle options and tools.
SYSMAN
The SYSMAN user represents the Enterprise Manager super admin account. This EM admin can create and modify other EM admin accounts as well as administer the database instance itself.
DBSNMP
The DBSNMP user is used by EM to monitor the database. EM uses this account to access performance stats about the database. The DBSNMP credentials are sometimes referred to as the monitoring credentials.
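
A quick way to check the state of these default accounts:

SQL> select username, account_status from dba_users where username in ('SYS','SYSTEM','SYSMAN','DBSNMP');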

REDO LOG File Resize

You can't resize a redo log file; you have to create new ones and drop the old ones.


Before you begin, ensure your database is not experiencing heavy traffic, or do it at the time of least traffic, probably at night [with peace at your side].

 

1. Have a look at your current log groups/members:

SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

 


2. Make your last redo log group the current one [very important] by using manual log switches until you get the desired result.

alter system switch logfile;

select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 INACTIVE
2 INACTIVE
3 INACTIVE
4 CURRENT

 


3. After making the last group current, start dropping the groups one by one, beginning with group number 1. Do it for all except the last group, which has recently been made current. If you get any recovery-related error (ORA-00312/ORA-01624), carry out a fresh manual checkpoint and proceed again to drop the group.

alter database drop logfile group 1;
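
The manual checkpoint mentioned above is simply:

alter system checkpoint;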


4. Recreate your new log groups with the desired sizes and locations. Ensure you do not create anything below the minimum size [10g: minimum 4MB, default 50MB, OMF 100MB]. Choose a suitable log size, as it may impact your database performance. Create as many groups as desired:

alter database add logfile group 1 (
'path/member1.log', 
'path/member2.log') size 500M reuse;

 

 

5. Carry out a manual log switch and see that a newly created group becomes current. Carry out manual checkpoints and log switches until your last old log (the only one left) becomes inactive, then drop it. Verify the new logs. If you are using RMAN, note that RMAN may search for the dropped logfiles during your next backup.
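
To verify the new logs, rerun the query from step 1 with the status column added:

SELECT a.group#, a.member, b.bytes, b.status FROM v$logfile a, v$log b WHERE a.group# = b.group#;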


 
Do I need to restart the database after these operations?

No need to restart the database.