How to determine which Unix shell you are using:
> env | grep SHELL
-or-
> echo $SHELL
-or-
> ps -f ....Will provide a full listing of processes associated
with the current terminal, one of which will be the
shell process.
-or-
> setenv ....On a C shell this will return the current
environment, while other shells will return an error.
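In addition to the commands above, the current shell (as opposed to the login shell) can be read directly from the process table. A small sketch using standard ps options:

```shell
# $$ is the PID of the current shell; -o comm= prints just the command name.
current_shell=$(ps -p $$ -o comm=)
echo "Current shell: $current_shell"

# $SHELL, by contrast, reports only the login shell from /etc/passwd,
# which may differ if you have started a subshell of another type.
echo "Login shell:   $SHELL"
```

This distinction matters when a script is run under a different shell than the one you logged in with.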
Please Note:
The following examples use 'ORACLE_HOME' as the variable name.
BOURNE SHELL(sh):
-----------------
To set environment variables within a Bourne Shell (sh), the variable must be
initialized locally, then exported globally:
> ORACLE_HOME=/u01/app/oracle/product/8.1.7
...defines ORACLE_HOME locally to the shell
> export ORACLE_HOME
...makes it globally available to other processes started from
this shell
To have a variable set automatically when you log into the Bourne Shell of your
Unix server:
Add the above lines (minus the '>' prompt) to the hidden '.profile'
file in your $HOME directory.
If you make changes to your '.profile' and want those changes propagated to
your current running environment (without having to log out, then back in):
> cd $HOME
> . .profile
To unset environment variables within a Bourne Shell (sh):
> unset ORACLE_HOME
To check what an environment variable is set to:
> env | grep ORACLE_HOME
KORN SHELL(ksh):
----------------
To set environment variables within a Korn Shell (ksh), you can use the Bourne
syntax shown above, or use the streamlined Korn Shell syntax:
> export ORACLE_HOME=/u01/app/oracle/product/8.1.7
To have a variable set automatically when you log into the Korn Shell of your
Unix server:
Add the above lines (minus the '>' prompt) to the hidden '.profile'
file in your $HOME directory.
If you make changes to your '.profile' and want those changes propagated to
your current running environment (without having to log out, then back in):
> cd $HOME
> . .profile
To unset environment variables within a Korn Shell (ksh), use the same syntax
as you would in a Bourne Shell (sh):
> unset ORACLE_HOME
To check what an environment variable is set to:
> env | grep ORACLE_HOME
C SHELL(csh):
-------------
To set environment variables within a C Shell (csh):
> setenv ORACLE_HOME /u01/app/oracle/product/11.1.0
To have a variable set automatically when you log into the C Shell of your
Unix server:
Add the above line (minus the '>' prompt) to the hidden '.cshrc'
file in your $HOME directory.
To unset environment variables within a C Shell (csh):
> unsetenv ORACLE_HOME
To check what an environment variable is set to:
> env | grep ORACLE_HOME
Wednesday, December 16, 2009
How to determine which Unix shell you are using
Posted by Bis at 1:29 PM 1 comments
Wednesday, April 1, 2009
DBMS_STATS
partname VARCHAR2 DEFAULT NULL,
estimate_percent NUMBER DEFAULT NULL,
block_sample BOOLEAN DEFAULT FALSE,
method_opt VARCHAR2 DEFAULT 'FOR ALL COLUMNS SIZE 1',
degree NUMBER DEFAULT NULL,
granularity VARCHAR2 DEFAULT 'DEFAULT',
cascade BOOLEAN DEFAULT FALSE,
stattab VARCHAR2 DEFAULT NULL,
statid VARCHAR2 DEFAULT NULL,
statown VARCHAR2 DEFAULT NULL,
no_invalidate BOOLEAN DEFAULT FALSE
EXEC DBMS_STATS.gather_schema_stats (ownname => 'SCOTT', cascade =>true,estimate_percent => dbms_stats.auto_sample_size);
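For a single table, the analogous call is GATHER_TABLE_STATS. A sketch (the SCOTT/EMP names are placeholders; the parameters correspond to the signature above):

```sql
-- Gather statistics for one table, cascading to its indexes
EXEC DBMS_STATS.GATHER_TABLE_STATS(
       ownname          => 'SCOTT',
       tabname          => 'EMP',
       estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
       cascade          => TRUE);
```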
Posted by Bis at 3:33 PM 2 comments
Labels: DBMS_stats
Tuesday, March 31, 2009
CODE OPTIMISATION TIPS
1. Avoid using functions on indexed columns.
Example: where to_char(emp_no) = '121212'
The index on emp_no will be suppressed.
Solution: where emp_no = to_number('121212')
2. Comparison of wrong data types suppresses the index internally.
Example: where char_data = 123
The index on char_data will be suppressed as Oracle re-writes the query.
Solution: avoid such errors -
where char_data = to_char(123)
3. Oracle does not make use of an index on a column if NOT IN is present.
Example: select * from emp where deptno not in (select deptno from dept where deptstatus = 'A')
Solution: use NOT EXISTS instead of NOT IN.
Example: select * from emp where not exists (select 'X' from dept where deptstatus = 'A' and dept.deptno = emp.deptno)
4. When HAVING is used as a filter, the index on brand will not be used.
Example: select brand, sum(price) from products group by brand having brand = 'Arrow';
Solution: replace HAVING with a WHERE clause wherever possible.
Example: select brand, sum(price) from products where brand = 'Arrow' group by brand;
5. Use of nested statements slows down the query execution.
Example: select count(*) from products where itemcode in (select itemcode from clothing where material = 'Cotton');
Solution: avoid nested selects - use a join.
Example: select count(*) from products P, clothing C
where P.barcode = C.barcode and C.material = 'Cotton';
6. Use of NVL() suppresses index usage.
Example: where NVL(this_value, 99) = other_value
Solution: try to re-write NVLs.
Example: where (this_value is null and other_value = 99) OR
(this_value is not null and this_value = other_value)
7. Composite indexes are not used if the leading indexed column is not present in the WHERE clause.
Solution: the leading column of the index must be present in the WHERE clause in order to hit the index.
Example (leading column present): select count(*) from products where brand > '0' and price <>
Example (leading column missing): select count(*) from products where price <>
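To confirm that a rewrite actually restores index use, compare the execution plans before and after. A sketch using DBMS_XPLAN with the emp example above:

```sql
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE emp_no = TO_NUMBER('121212');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

An INDEX RANGE SCAN (rather than a FULL TABLE SCAN) on emp in the plan output confirms the index is being used.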
Posted by Bis at 9:27 AM 0 comments
Labels: Tuning
Sunday, March 29, 2009
rdbms patch
$ /usr/bin/perl $ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc /home/orart02
OPatch Version: 10.2.0.3.0
OPatch succeeded.
Posted by Bis at 3:25 PM 0 comments
Labels: Patching
Basic checks for a new Database
When you are given a 'USER NAME' and 'PASSWORD' for a new database in a new environment, the basic things to check are below:
Go to $ORACLE_HOME/dbs and check for the parameter file (PFILE or SPFILE).
$ uname -a
$ echo $ORACLE_HOME
$ echo $ORACLE_SID
$ ps -ef | grep pmon ---It will show you all the running databases
user02 319658 1 0 Feb 24 - 3:37 ora_pmon_sun
user02 663636 1 0 Feb 28 - 3:52 ora_pmon_MOON
user02 2265204 1224816 0 12:56:55 pts/0 0:00 grep pmon
$ env ---To see all the environment variables
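The ORACLE_SID of each running instance can be recovered from the pmon process names with a little awk. A sketch run against sample ps output like that shown above (the `ora_pmon_<SID>` naming convention is standard for single-instance Oracle):

```shell
# Sample lines as produced by: ps -ef | grep pmon
ps_output='user02   319658       1   0   Feb 24      -  3:37 ora_pmon_sun
user02   663636       1   0   Feb 28      -  3:52 ora_pmon_MOON'

# Strip the ora_pmon_ prefix from the last field to get each ORACLE_SID
echo "$ps_output" | awk '$NF ~ /^ora_pmon_/ {sub(/^ora_pmon_/, "", $NF); print $NF}'
# prints: sun and MOON, one per line
```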
$sqlplus "/as sysdba"
SQL> select name from v$database;
NAME
---------
RT02
SQL> select status from v$instance;
STATUS
------------
OPEN
SQL> show parameter control
NAME TYPE VALUE
control_file_record_keep_time integer 7
control_files string /apps/user02/oracle/admin/cntrl_rt02.dbf, /apps/user02/oracle/admin/cnrtl_rt02.dbf, /apps/user02/oracle/admin/cntrl_rt03.dbf
SQL>show parameter dump;
NAME TYPE VALUE
background_core_dump string partial
background_dump_dest string /apps/user02/oracle/admin/bdump
core_dump_dest string /apps/user02/oracle/admin/cdump
max_dump_file_size string UNLIMITED
shadow_core_dump string partial
user_dump_dest string /apps/user02/oracle/admin/udump
SQL> select * from v$log;
SQL> select * from v$logfile;
SQL> show parameter undo
NAME TYPE VALUE
undo_management string AUTO
undo_retention integer 3600
undo_tablespace string undo_ts
SQL> select file_name,sum(bytes)/1024/1024 from dba_data_files where tablespace_name='UNDO_TS' group by file_name;
SQL> col FILE_NAME for a60
SQL>set lines 200
SQL> /
SQL> select FILE_NAME, TABLESPACE_NAME,(bytes)/1024/1024 from dba_data_files;
select tablespace_name,sum(bytes)/1024/1024 from dba_free_space where tablespace_name='SYSTEM' group by tablespace_name;
SQL> select AUTOEXTENSIBLE,FILE_NAME,TABLESPACE_NAME from dba_data_files where tablespace_name='&TSNAME';
Enter value for tsname: RETEK_DATA
=========
Examples -
SQL>ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp01.dbf' SIZE 8000M AUTOEXTEND OFF
Added a new file -
SQL>ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp02.dbf' SIZE 8000M AUTOEXTEND OFF
Reused an existing tempfile which had come from a clone -
SQL>ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data01/temp02.dbf' SIZE 16000M REUSE AUTOEXTEND OFF
Posted by Bis at 2:51 PM 0 comments
Labels: Fundamentals
Thursday, March 26, 2009
Data guard Creation
1) Ensure the Primary database is in ARCHIVELOG mode:
SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Current log sequence 9
SQL> alter database close;
Database altered.
SQL> alter database archivelog;
Database altered.
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Modify the Primary database init.ora so that log_archive_start=true and restart
the instance. Verify that database is in archive log mode and that automatic
archiving is enabled.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Next log sequence to archive 9
Current log sequence 9
2) Create a backup of the Primary database:
You can use an existing backup of the Primary database as long as you have the
archive logs that have been generated since that backup. You may also take a
hot backup as long as you have all archive logs through the end of the backup
of the last tablespace. To create a cold backup do the following:
SQL>
SQL> select name from v$datafile;
NAME
----------------------------------------------------------------------
/export/home/oracle/temp/oracle/data/sys.dbf
SQL>
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Create a backup of all datafiles and online redo logs using an OS command or
utility. A backup of the online redo logs is necessary to facilitate switchover.
Once complete startup the instance:
SQL> startup
ORACLE instance started.
Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
Database mounted.
Database opened.
3) Connect to the primary database and create the standby control file:
SQL> alter database create standby controlfile as
'/export/home/oracle/temp/oracle/data/backup/standby.ctl';
Database altered.
4) Copy files to the Standby host:
Copy the backup datafiles, standby controlfile, all available archived redo logs,
and online redo logs from the primary site to the standby site. Copying of the
online redo logs is necessary to facilitate switchover.
If the standby is on a separate site with the same directory structure as the
primary database then you can use the same path names for the standby files as
the primary files. In this way, you do not have to rename the primary datafiles
in the standby control file. If the standby is on the same site as the primary
database, or the standby database is on a separate site with a different
directory structure, then you must rename the primary datafiles in the standby
control file after copying them to the standby site. This can be done using
the db_file_name_convert and log_file_name_convert parameters or manually
using alter database rename file statements.
5) Set the initialization parameters for the primary database:
It is suggested that you maintain two init.ora’s on both the primary and the
standby. This allows you to facilitate role reversal during switchover
operations more easily.
Primary init.ora on Primary host:
log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
remote_archive_enable=true
Create the standby initialization parameter file and set the initialization
parameters for the standby database. Depending on your configuration, you may
need to set filename conversion parameters.
Standby init.ora on Primary host:
log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_state_1=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
standby_archive_dest=/export/home/oracle/temp/oracle/arch
standby_file_management=auto
fal_server=DGD01_hasunclu2
fal_client=DGD01_hasunclu1
remote_archive_enable=true
NOTE: In the above example db_file_name_convert and log_file_name_convert are
not needed as the directory structure on the two hosts are the same. If the
directory structure is not the same then setting of these parameters is
recommended. Please reference notes 47325.1 and 47343.1 for further
information.
Copy the two init.ora’s from the Primary host to the Standby host. You must
modify the Primary init.ora on the Standby host to have log_archive_dest_2 use
the alias that points to the Primary host (ie DGD01_hasunclu1). You must
modify the Standby init.ora on the standby host to have fal_server and
fal_client use the aliases when standby is running on the Primary host
(ie fal_server=DGD01_hasunclu1 and fal_client=DGD01_hasunclu2).
6) Configure networking components:
On the Primary host create a net service name that the Primary database can
use to connect to the Standby database. On the Primary host create a net
service name that Standby, when running on the Primary host, can use to
connect to the Primary, when it is running on the Standby host. Example from
Primary’s host tnsnames.ora:
DGD01_hasunclu1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu1)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)
DGD01_hasunclu2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)
The above two net service names must exist in the Standby hosts tnsnames.ora
also.
You must also configure a listener on the standby database. If you plan to
manage this standby database using the Data Guard broker, you must configure
the listener to use the TCP/IP protocol and statically register the standby
database service using its SID.
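A static registration entry in the standby host's listener.ora might look like the following. This is a sketch: the DGD01 SID follows the examples in this post, but the ORACLE_HOME path is a placeholder to be replaced with your own:

```
# Static registration for the DGD01 standby (ORACLE_HOME path is a placeholder)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DGD01)
      (ORACLE_HOME = /export/home/oracle/product/9.0.1)
    )
  )
```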
7) Start the standby instance and mount the standby database.
oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as sysdba"
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Mar 14 18:00:57 2002
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile=?/dbs/initDGD.ora
ORACLE instance started.
Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
SQL> alter database mount standby database;
Database altered.
SQL>
8) Create standby redo log files, if necessary:
Standby redo logs are necessary for the higher protection levels such as
Guaranteed, Instant, and Rapid. In these protection modes LGWR from the
Primary host writes transactions directly to the standby redo logs.
This enables no data loss solutions and reduces the amount of data loss
in the event of failure. Standby redo logs are not necessary if you are using
the delayed protection mode.
If you configure standby redo on the standby then you should also configure
standby redo logs on the primary database. Even though the standby redo logs
are not used when the database is running in the primary role, configuring
the standby redo logs on the primary database is recommended in preparation
for an eventual switchover operation.
Standby redo logs must be archived before the data can be applied to the
standby database. The standby archival operation occurs automatically, even if
the standby database is not in ARCHIVELOG mode. However, the archiver process
must be started on the standby database. Note that the use of the archiver
process (ARCn) is a requirement for selection of a standby redo log.
You must have the same number of standby redo logs on the standby as you have
online redo logs on production. They must also be exactly the same size.
The following syntax is used to create standby redo logs:
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;
Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;
Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;
Database altered.
SQL>
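The standby redo logs just created can be verified from the standby instance; v$standby_log is the standard view for this:

```sql
SELECT group#, bytes, status FROM v$standby_log;
```

The sizes shown should exactly match the online redo logs on the primary, per the sizing rule above.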
9) Manually change the names of the primary datafiles and redo logs in the
standby control file for all files not automatically renamed using
DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT as noted in step 5. Datafile
names can be changed on the standby at a mounted state with the following
syntax:
SQL> alter database rename file
2 '/export/home/oracle/temp/oracle/data/sys.dbf'
3 to
4 '/export/home/oracle/temp/oracle/data2/sys.dbf';
10) Stop and restart the listeners:
Restart the listener on the primary database, and start the listener on the
standby database, so that the changes made to listener.ora take effect.
11) Activate parameter changes:
Manually enable initialization parameter changes on the primary database
so that it can initiate archiving to the standby site.
At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed
using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to
these parameters by either bouncing the instance or activating via alter system.
For example:
SQL> alter system set log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60';
System altered.
12) Verify that automatic archiving is occurring:
On the Primary database switch a log and verify that it has been shipped
properly using the v$archive_dest view.
SQL> alter system switch logfile;
System altered.
SQL> select status, error from v$archive_dest where dest_id=2;
STATUS ERROR
--------- -------------------------------------------------------
VALID
SQL>
13) Optionally place Standby database in managed recovery:
SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> exit
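Once managed recovery is running, its progress can be monitored from the standby with the v$managed_standby view (the MRP0 process should show a status such as APPLYING_LOG or WAIT_FOR_LOG):

```sql
SELECT process, status, sequence# FROM v$managed_standby;
```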
Posted by Bis at 1:48 PM 0 comments
Labels: Data Guard
Wednesday, March 25, 2009
Default USERS
Posted by Bis at 3:47 PM 0 comments
Labels: default Users
REDO LOG File Resize
You can't resize a logfile; you have to create new ones and drop the old ones.
Before you begin, ensure your database is not experiencing heavy traffic, or do
it at the time of least traffic, probably at night [with peace at your alongside].
1. Have a look at your current log groups/members:
SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;
2. Make your last redolog group the CURRENT one [very important] by using manual
log switches until you get the desired result:
alter system switch logfile;
select group#, status from v$log;
GROUP# STATUS
---------- ----------------
1 INACTIVE
2 INACTIVE
3 INACTIVE
4 CURRENT
3. After making the last group current, start dropping the groups one by one,
beginning from group number 1. Do it for all except the last group, which has
been recently made current. If you get any recovery related 312/624 error, then
carry out a fresh manual checkpoint and proceed again to drop the group.
alter database drop logfile group 1;
4. Recreate your new log groups with the desired sizes & locations. Ensure you
do not create anything below the minimum size [10g: minimum 4MB, default 50MB,
OMF 100MB]. Choose a correct log size, as it may impact your database
performance. Create as many groups as desired:
alter database add logfile group 1 (
'path/member1.log',
'path/member2.log') size 500M reuse;
5. Carry out a manual log switch. See that a newly created group becomes
CURRENT. Carry out manual checkpoints & log switches until your last old log
(the only one left) becomes INACTIVE, then drop it. Verify the new logs.
If you are using RMAN, then RMAN may search for the dropped logfiles during
your next backup.
Do I need to restart my database after these operations? No - there is no need
to restart the database.
Posted by Bis at 3:44 PM 0 comments
Labels: redo log file resize