BNFD - Your DBA NoteBook On The Net!!!

Wednesday, December 16, 2009

How to determine which Unix shell you are using


> env | grep SHELL

-or-

> echo $SHELL

-or-

> ps -f
...will provide a full listing of processes associated with the
current terminal, one of which will be the shell process.

-or-

> setenv
...on a C shell this will return the current environment, while
other shells will return an error.


Please Note:
The following examples use 'ORACLE_HOME' as the variable name.


BOURNE SHELL(sh):
-----------------

To set environment variables within a Bourne Shell (sh), the variable must be
initialized locally, then exported globally:

> ORACLE_HOME=/u01/app/oracle/product/8.1.7
...defines ORACLE_HOME locally to the shell

> export ORACLE_HOME
...makes it globally available to other processes started from
this shell

To have a variable set automatically when you log into the Bourne Shell of your
Unix server:

Add the above lines (minus the '>' prompt) to the hidden '.profile'
file in your $HOME directory.

If you make changes to your '.profile' and want those changes propagated to
your current running environment (without having to log out, then back in):

> cd $HOME
> . .profile

To unset environment variables within a Bourne Shell (sh):

> unset ORACLE_HOME

To check what an environment variable is set to:

> env | grep ORACLE_HOME


KORN SHELL(ksh):
----------------

To set environment variables within a Korn Shell (ksh), you can use the Bourne
syntax shown above, or use the streamlined Korn Shell syntax:

> export ORACLE_HOME=/u01/app/oracle/product/8.1.7

To have a variable set automatically when you log into the Korn Shell of your
Unix server:

Add the above lines (minus the '>' prompt) to the hidden '.profile'
file in your $HOME directory.

If you make changes to your '.profile' and want those changes propagated to
your current running environment (without having to log out, then back in):

> cd $HOME
> . .profile

To unset environment variables within a Korn Shell (ksh), use the same syntax
as you would in a Bourne Shell (sh):

> unset ORACLE_HOME

To check what an environment variable is set to:

> env | grep ORACLE_HOME


C SHELL(csh):
-------------

To set environment variables within a C Shell (csh):

> setenv ORACLE_HOME /u01/app/oracle/product/11.1.0
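
To unset environment variables within a C Shell (csh), the csh built-in is:

> unsetenv ORACLE_HOME

To have a variable set automatically when you log into the C Shell, add the
setenv line to the hidden '.cshrc' (or '.login') file in your $HOME directory.
To propagate changes to the current session without logging out:

> cd $HOME
> source .cshrc

To check what an environment variable is set to:

> env | grep ORACLE_HOME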

Wednesday, April 1, 2009

DBMS_STATS

Optimizer statistics are a collection of data that describe the database and the objects in the database in more detail. These statistics are used by the query optimizer to choose the best execution plan for each SQL statement.

The optimizer statistics are stored in the data dictionary.

Because the objects in a database can be constantly changing, statistics must be regularly updated so that they accurately describe these database objects. Statistics are maintained automatically by Oracle or you can maintain the optimizer statistics manually using the DBMS_STATS package.

Automatic Statistics Gathering:

Oracle gathers statistics on all database objects automatically and maintains those statistics in a regularly-scheduled maintenance job. Automated statistics collection eliminates many of the manual tasks associated with managing the query optimizer, and significantly reduces the chances of getting poor execution plans because of missing or stale statistics.

Optimizer statistics are automatically gathered with the job GATHER_STATS_JOB. This job gathers statistics on all objects in the database which have:

Missing statistics

Stale statistics

This job is created automatically at database creation time and is managed by the Scheduler. The Scheduler runs this job when the maintenance window is opened. By default, the maintenance window opens every night from 10 P.M. to 6 A.M. and all day on weekends.
It calls the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure.

Automatic statistics gathering is enabled by default when a database is created, or when a database is upgraded from an earlier database release. You can verify that the job exists by viewing the DBA_SCHEDULER_JOBS view:

SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';

SELECT owner, job_name, enabled FROM dba_scheduler_jobs;

OWNER                          JOB_NAME                         ENABLED
------------------------------ -------------------------------- -------
SYS                            PURGE_LOG                        TRUE
SYS                            GATHER_STATS_JOB                 TRUE
SYS                            TEST_FULL_JOB_DEFINITION         TRUE
SYS                            TEST_PROG_SCHED_JOB_DEFINITION   TRUE
SYS                            TEST_PROG_JOB_DEFINITION         TRUE
SYS                            TEST_SCHED_JOB_DEFINITION        TRUE




Manual Statistics:

Because the automatic statistics gathering runs during an overnight batch window, the statistics on tables which are significantly modified during the day may become stale. There are typically two types of such objects:

Volatile tables that are being deleted or truncated and rebuilt during the course of the day.

Objects which are the target of large bulk loads which add 10% or more to the object's total size.

Statistics are gathered using the DBMS_STATS package. This PL/SQL package is also used to modify, view, export, import, and delete statistics.

The DBMS_STATS package can gather statistics on tables and indexes, as well as on individual columns and partitions of tables.

Procedures in DBMS_STATS Package:

GATHER_INDEX_STATS === Index statistics
GATHER_TABLE_STATS === Table, column, and index statistics
GATHER_SCHEMA_STATS === Statistics for all objects in a schema
GATHER_DICTIONARY_STATS === Statistics for all dictionary objects, for all system schemas, including SYS and SYSTEM and other optional schemas 
GATHER_DATABASE_STATS === Statistics for all objects in a database


When to Gather Statistics:

1. For an application in which tables are being incrementally modified

2. For tables which are being substantially modified in batch operations, such as with bulk loads

3. For partitioned tables, there are often cases in which only a single partition is modified. In those cases, statistics can be gathered on just those partitions rather than on the entire table, as in the sketch below.
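
A minimal sketch of that third case (the schema, table, and partition names here are illustrative):

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'SALES', partname => 'SALES_Q1', granularity => 'PARTITION', cascade => TRUE);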

System Statistics

System statistics describe the system's hardware characteristics, such as I/O and CPU performance and utilization, to the query optimizer. When choosing an execution plan, the optimizer estimates the I/O and CPU resources required for each query.
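
For example, workload system statistics can be captured over a window of activity (the 60-minute interval is illustrative):

EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 60);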

Gather, View, Modify or Delete optimizer statistics for database objects:

Subprocedures:
GATHER_DATABASE_STATS
    (estimate_percent,block_sample,method_opt,degree, granularity,cascade,stattab, 
     statid, options,statown,gather_sys,no_invalidate,gather_temp,gather_fixed,stattype);
GATHER_INDEX_STATS 
    (ownname,indname,partname,estimate_percent,stattab,statid,
     statown,degree,granularity,no_invalidate,stattype);
GATHER_SCHEMA_STATS
    (ownname,estimate_percent,block_sample,method_opt,degree,granularity,cascade,
     stattab,statid,options,statown,no_invalidate,gather_temp,gather_fixed);
GENERATE_STATS
    (ownname,objname,organized);
GATHER_SYSTEM_STATS 
    (gathering_mode,interval,stattab,statid,statown);
GATHER_TABLE_STATS
    (ownname,tabname,partname,estimate_percent,block_sample,method_opt,
     degree,granularity,cascade,stattab,statid,statown,no_invalidate,stattype);
Defaults for the GATHER_TABLE_STATS parameters:
    partname         VARCHAR2 DEFAULT NULL,
    estimate_percent NUMBER   DEFAULT NULL,
    block_sample     BOOLEAN  DEFAULT FALSE,
    method_opt       VARCHAR2 DEFAULT 'FOR ALL COLUMNS SIZE 1',
    degree           NUMBER   DEFAULT NULL,
    granularity      VARCHAR2 DEFAULT 'DEFAULT',
    cascade          BOOLEAN  DEFAULT FALSE,
    stattab          VARCHAR2 DEFAULT NULL,
    statid           VARCHAR2 DEFAULT NULL,
    statown          VARCHAR2 DEFAULT NULL,
    no_invalidate    BOOLEAN  DEFAULT FALSE
PREPARE_COLUMN_VALUES
    (srec,values);
SET_COLUMN_STATS 
SET_INDEX_STATS 
SET_SYSTEM_STATS 
SET_TABLE_STATS 
CONVERT_RAW_VALUE 
GET_COLUMN_STATS 
GET_INDEX_STATS 
GET_SYSTEM_STATS 
GET_TABLE_STATS 
DELETE_COLUMN_STATS
    (ownname,tabname,colname,partname,stattab,statid,
     cascade_parts,statown,no_invalidate,force);
DELETE_DATABASE_STATS
    (stattab,statid,statown,no_invalidate,stattype,force);
DELETE_INDEX_STATS
    (ownname,indname,partname,stattab,statid,cascade_parts,statown,
     no_invalidate,stattype,force);
DELETE_SCHEMA_STATS
    (ownname,stattab,statid,statown,no_invalidate,
     stattype,force);
DELETE_SYSTEM_STATS
    (stattab,statid,statown); 
DELETE_TABLE_STATS 
    (ownname,tabname,partname,stattab,statid,cascade_parts,cascade_columns,    
     cascade_indexes,statown,no_invalidate,stattype,force);
CREATE_STAT_TABLE 
DROP_STAT_TABLE 
EXPORT_COLUMN_STATS 
EXPORT_INDEX_STATS 
EXPORT_SYSTEM_STATS 
EXPORT_TABLE_STATS 
EXPORT_SCHEMA_STATS 
EXPORT_DATABASE_STATS 
IMPORT_COLUMN_STATS 
IMPORT_INDEX_STATS 
IMPORT_SYSTEM_STATS 
IMPORT_TABLE_STATS 
IMPORT_SCHEMA_STATS 
IMPORT_DATABASE_STATS 
FLUSH_SCHEMA_MONITORING_INFO 
FLUSH_DATABASE_MONITORING_INFO 
ALTER_SCHEMA_TABLE_MONITORING 
ALTER_DATABASE_TABLE_MONITORING

Example:
EXEC DBMS_STATS.gather_schema_stats('SCOTT', DBMS_STATS.AUTO_SAMPLE_SIZE);

EXEC DBMS_STATS.gather_schema_stats(ownname => 'SCOTT', cascade => TRUE, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);

EXEC DBMS_STATS.delete_schema_stats('SCOTT');
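
The export/import procedures pair with a user statistics table; a short sketch (the STATS_BACKUP table name is hypothetical):

EXEC DBMS_STATS.create_stat_table('SCOTT','STATS_BACKUP');
EXEC DBMS_STATS.export_schema_stats(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
EXEC DBMS_STATS.import_schema_stats(ownname => 'SCOTT', stattab => 'STATS_BACKUP');

This makes it easy to snapshot statistics before a fresh gather so they can be restored if execution plans regress.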


Tuesday, March 31, 2009

CODE OPTIMISATION TIPS

1. Avoid using functions on indexed columns.

Example:

where to_char(emp_no) = '121212'

The index on emp_no will be suppressed.

Solution: where emp_no = to_number('121212')

2. Comparison of wrong data types suppresses the index internally.

Example:

where char_data = 123

The index on char_data will be suppressed as Oracle rewrites the query.

Solution:
Avoid such errors:

where char_data = to_char(123)

3. Oracle does not make use of an index on a column if NOT IN is present.

Example:
select * from emp where deptno not in (select deptno from dept where deptstatus = 'A');

Solution:
Use NOT EXISTS instead of NOT IN.

Example:
select * from emp where not exists (select 'X' from dept where deptstatus = 'A' and dept.deptno = emp.deptno);

4. When HAVING is used as a row filter, the index on brand will not be used.

Example:
select brand, sum(price) from products group by brand having brand = 'Arrow';

Solution:
Replace HAVING with a WHERE clause wherever possible.

Example:
select brand, sum(price) from products where brand = 'Arrow' group by brand;

5. Use of nested statements slows down query execution.

Example:
select count(*) from products where itemcode in (select itemcode from clothing where material = 'Cotton');

Solution:
Avoid nested selects; use a join.

Example:
select count(*) from products P, clothing C
where P.itemcode = C.itemcode and C.material = 'Cotton';

6. Use of NVL() suppresses index usage.

Example:
where NVL(this_value, 99) = other_value

Solution:
Try to rewrite the NVLs.

Example:
where (this_value is null and other_value = 99) OR
(this_value is not null and this_value = other_value)

7. Composite indexes are not used if the leading indexed column is not present in the WHERE clause.

Example (assuming a composite index on (brand, price); :p stands in for a price value):

select count(*) from products where price < :p;

Solution: The leading column of the index must be present in the WHERE clause in order to hit the index.

select count(*) from products where brand > '0' and price < :p;

Sunday, March 29, 2009

RDBMS patch

Check which RDBMS patches are applied in your database:


$ /usr/bin/perl $ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInst.loc

$ opatch version
Invoking OPatch 10.2.0.3.0
OPatch Version: 10.2.0.3.0
OPatch succeeded.

$ which opatch

/apps/rt02/oracle/product/10201/OPatch/opatch


$ which perl

/usr/bin/perl


$ /usr/bin/perl /apps/rt02/oracle/product/10201/OPatch/opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInst.loc

Basic checks for a new Database

When you are given a user name and password for a new database in a new environment, the basic things to check are listed below.

Go to $ORACLE_HOME/dbs and check the parameter file (PFILE or SPFILE).

uname -a

$ echo $ORACLE_HOME

$ ps -ef | grep pmon --- to check databases

$ echo $ORACLE_SID --- to check the current SID

$ ps -ef | grep pmon --- it will show you all the databases:
user02 319658 1 0 Feb 24 - 3:37 ora_pmon_sun
user02 663636 1 0 Feb 28 - 3:52 ora_pmon_MOON
user02 2265204 1224816 0 12:56:55 pts/0 0:00 grep pmon


$ env --- to see all the environment variables

$ sqlplus "/ as sysdba"

SQL> select name from v$database;
NAME
---------
RT02

SQL> select status from v$instance;
STATUS
------------
OPEN

SQL> show parameter control

NAME                           TYPE     VALUE
control_file_record_keep_time  integer  7
control_files                  string   /apps/user02/oracle/admin/cntrl_rt02.dbf, /apps/user02/oracle/admin/cnrtl_rt02.dbf, /apps/user02/oracle/admin/cntrl_rt03.dbf

SQL> show parameter dump

NAME                   TYPE    VALUE
background_core_dump   string  partial
background_dump_dest   string  /apps/user02/oracle/admin/bdump
core_dump_dest         string  /apps/user02/oracle/admin/cdump
max_dump_file_size     string  UNLIMITED
shadow_core_dump       string  partial
user_dump_dest         string  /apps/user02/oracle/admin/udump

SQL> select * from v$log;

SQL> select * from v$logfile;

SQL> show parameter undo
NAME TYPE VALUE
undo_management string AUTO
undo_retention integer 3600
undo_tablespace string undo_ts

SQL> select file_name,sum(bytes)/1024/1024 from dba_data_files where tablespace_name='UNDO_TS' group by file_name;

SQL> col file_name for a60
SQL> set lines 200
SQL> /
SQL> select file_name, tablespace_name, bytes/1024/1024 from dba_data_files;
SQL> select tablespace_name, sum(bytes)/1024/1024 from dba_free_space where tablespace_name='SYSTEM' group by tablespace_name;

SQL> select AUTOEXTENSIBLE,FILE_NAME,TABLESPACE_NAME from dba_data_files where tablespace_name='&TSNAME';
Enter value for tsname: RETEK_DATA
=========
Examples -

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp01.dbf' SIZE 8000M AUTOEXTEND OFF;

Added a new file -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp02.dbf' SIZE 8000M AUTOEXTEND OFF;

Reused an existing tempfile which had come from a clone -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data01/temp02.dbf' SIZE 16000M REUSE AUTOEXTEND OFF;
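
To verify the tempfiles afterwards:

SQL> select file_name, bytes/1024/1024 MB from dba_temp_files;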

Thursday, March 26, 2009

Data guard Creation


1) Ensure the Primary database is in ARCHIVELOG mode:


SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Current log sequence 9
SQL> alter database close; 

Database altered.

SQL> alter database archivelog; 

Database altered.

SQL> shutdown immediate 
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

Modify the Primary database init.ora so that log_archive_start=true and restart 
the instance. Verify that database is in archive log mode and that automatic 
archiving is enabled.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Next log sequence to archive 9
Current log sequence 9

2) Create a backup of the Primary database:

You can use an existing backup of the Primary database as long as you have the 
archive logs that have been generated since that backup. You may also take a 
hot backup as long as you have all archive logs through the end of the backup 
of the last tablespace. To create a cold backup do the following: 

SQL> 
SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------
/export/home/oracle/temp/oracle/data/sys.dbf

SQL>
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Create a backup of all datafiles and online redo logs using an OS command or 
utility. A backup of the online redo logs is necessary to facilitate switchover.

Once complete startup the instance:

SQL> startup
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
Database mounted.
Database opened.

3) Connect to the primary database and create the standby control file:

SQL> alter database create standby controlfile as 
'/export/home/oracle/temp/oracle/data/backup/standby.ctl';

Database altered.

4) Copy files to the Standby host:

Copy the backup datafiles, standby controlfile, all available archived redo logs,
and online redo logs from the primary site to the standby site. Copying of the
online redo logs is necessary to facilitate switchover.

If the standby is on a separate site with the same directory structure as the 
primary database, then you can use the same path names for the standby files as 
the primary files. In this way, you do not have to rename the primary datafiles 
in the standby control file. If the standby is on the same site as the primary 
database, or the standby database is on a separate site with a different 
directory structure, then you must rename the primary datafiles in the standby 
control file after copying them to the standby site. This can be done using 
the db_file_name_convert and log_file_name_convert parameters (see the sketch 
below) or manually using alter database rename file statements.
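
A minimal sketch for the standby init.ora (both paths are hypothetical):

db_file_name_convert=('/u01/oradata/primary','/u01/oradata/standby')
log_file_name_convert=('/u01/oradata/primary','/u01/oradata/standby')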

5) Set the initialization parameters for the primary database:

It is suggested that you maintain two init.ora’s on both the primary and the 
standby. This allows you to facilitate role reversal during switchover 
operations more easily. 

Primary init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
remote_archive_enable=true

Create the standby initialization parameter file and set the initialization 
parameters for the standby database. Depending on your configuration, you may 
need to set filename conversion parameters.

Standby init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_state_1=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
standby_archive_dest=/export/home/oracle/temp/oracle/arch
standby_file_management=auto
fal_server=DGD01_hasunclu2
fal_client=DGD01_hasunclu1
remote_archive_enable=true

NOTE: In the above example db_file_name_convert and log_file_name_convert are 
not needed as the directory structure on the two hosts are the same. If the 
directory structure is not the same then setting of these parameters is 
recommended. Please reference notes 47325.1 and 47343.1 for further 
information.

Copy the two init.ora’s from the Primary host to the Standby host. You must 
modify the Primary init.ora on the Standby host to have log_archive_dest_2 use 
the alias that points to the Primary host (ie DGD01_hasunclu1). You must 
modify the Standby init.ora on the standby host to have fal_server and 
fal_client use the aliases when standby is running on the Primary host 
(ie fal_server=DGD01_hasunclu1 and fal_client=DGD01_hasunclu2).

6) Configure networking components:

On the Primary host create a net service name that the Primary database can 
use to connect to the Standby database. On the Primary host create a net 
service name that Standby, when running on the Primary host, can use to 
connect to the Primary, when it is running on the Standby host. Example from 
Primary’s host tnsnames.ora:

DGD01_hasunclu1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu1)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)
DGD01_hasunclu2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)

The above two net service names must exist in the Standby hosts tnsnames.ora 
also.

You must also configure a listener on the standby database. If you plan to 
manage this standby database using the Data Guard broker, you must configure 
the listener to use the TCP/IP protocol and statically register the standby 
database service using its SID.
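
A minimal static registration sketch for the standby host's listener.ora (the ORACLE_HOME path is hypothetical; the host and SID follow the example above):

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DGD01)
      (ORACLE_HOME = /export/home/oracle/product/9.0.1)
    )
  )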


7) Start the standby instance and mount the standby database.

oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as sysdba" 

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Mar 14 18:00:57 2002

(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=?/dbs/initDGD.ora
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
SQL> alter database mount standby database;

Database altered.

SQL>

8) Create standby redo log files, if necessary:

Standby redo logs are necessary for the higher protection levels such as 
Guaranteed, Instant, and Rapid. In these protection modes LGWR from the 
Primary host writes transactions directly to the standby redo logs. 
This enables no data loss solutions and reduces the amount of data loss 
in the event of failure. Standby redo logs are not necessary if you are using 
the delayed protection mode. 

If you configure standby redo on the standby then you should also configure 
standby redo logs on the primary database. Even though the standby redo logs
are not used when the database is running in the primary role, configuring 
the standby redo logs on the primary database is recommended in preparation 
for an eventual switchover operation. 

Standby redo logs must be archived before the data can be applied to the 
standby database. The standby archival operation occurs automatically, even if 
the standby database is not in ARCHIVELOG mode. However, the archiver process 
must be started on the standby database. Note that the use of the archiver 
process (ARCn) is a requirement for selection of a standby redo log.

You must have the same number of standby redo logs on the standby as you have 
online redo logs on production. They must also be exactly the same size.
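
To check the number and size of the online redo logs on the primary, so the standby redo logs can match:

SQL> select group#, thread#, bytes/1024/1024 as mb from v$log;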

The following syntax is used to create standby redo logs:

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;

Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;

Database altered.

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;

Database altered.

SQL>

9) Manually change the names of the primary datafiles and redo logs in the 
standby control file for all files not automatically renamed using 
DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT as noted in step 5. Datafile 
names can be changed on the standby at a mounted state with the following 
syntax:

SQL> alter database rename file
2 '/export/home/oracle/temp/oracle/data/sys.dbf'
3 to
4 '/export/home/oracle/temp/oracle/data2/sys.dbf';

10) Stop and restart the listeners:

Restart the listener on the primary database, and start the listener on the 
standby database, so that changes made to the listener.ora can be implemented.

11) Activate parameter changes:

Manually enable initialization parameter changes on the primary database 
so that it can initiate archiving to the standby site.

At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed 
using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to 
these parameters by either bouncing the instance or activating via alter system. 
For example:

SQL> alter system set log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60';

System altered.

12) Verify that automatic archiving is occurring:

On the Primary database switch a log and verify that it has been shipped 
properly using the v$archive_dest view.

SQL> alter system switch logfile;

System altered.

SQL> select status, error from v$archive_dest where dest_id=2;

STATUS ERROR
--------- -------------------------------------------------------
VALID

SQL>

13) Optionally place Standby database in managed recovery:

SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> exit
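
To confirm that the managed recovery process is running and applying redo on the standby:

SQL> select process, status, sequence# from v$managed_standby;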

reference metalink document id Note:180031.1

Wednesday, March 25, 2009

Default USERS

SYS
The SYS user owns all base tables and user-accessible views of the data dictionary (Oracle configuration information). No Oracle user should ever alter (update, delete, or insert) any rows or schema objects contained in the SYS schema, because such activity can compromise data integrity. The security administrator must keep strict control of this central account.
SYSTEM
The SYSTEM user is used to create additional tables and views that display administrative information, and internal tables and views used by various Oracle options and tools.
SYSMAN
The SYSMAN user represents the Enterprise Manager super admin account. This EM admin can create and modify other EM admin accounts as well as admin the database instance itself.
DBSNMP
The DBSNMP user is used by EM to monitor the database. EM uses this account to access performance statistics about the database. The DBSNMP credentials are sometimes referred to as the monitoring credentials.

REDO LOG File Resize

You can't resize a logfile; you have to create new ones and drop the old ones.


Before you begin, ensure your database is not experiencing heavy traffic, or do this at the time of least traffic (probably at night).

 

1. Have a look at your current log groups/members:

SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

 


2. Make your last redo log group the CURRENT group [very important] by using manual log switches until you get the desired result.

alter system switch logfile;

select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 INACTIVE
2 INACTIVE
3 INACTIVE
4 CURRENT

 


3. After making the last group current, start dropping the groups one by one, beginning from group number 1. Do it for all except the last group, which was recently made current. If you get a recovery-related error (ORA-00312/ORA-01624, log needed for crash recovery), carry out a fresh manual checkpoint and proceed again to drop the group.

alter database drop logfile group 1;
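
The manual checkpoint, if needed, is simply:

alter system checkpoint;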


4. Recreate your new log groups with the desired sizes and locations. Ensure you do not create anything below the minimum size [10g: minimum 4MB, default 50MB, OMF 100MB]. Choose a correct log size, as it may impact your database performance. Create as many groups as desired:

alter database add logfile group 1 (
'path/member1.log',
'path/member2.log') size 500M reuse;

 

 

5. Carry out a manual log switch and see that the newly created group becomes current. Carry out manual checkpoints and log switches until your last old log (the only one left) becomes inactive, then drop it. Verify the new logs. If you are using RMAN, it may search for the dropped logfiles during your next backup.


 
Do I need to restart my database after these operations?

No need to restart the database.