BNFD - Your DBA NoteBook On The Net!!!

Monday, October 27, 2008

Hot Backup in Oracle

When datafiles are put into hot backup mode, Oracle is still writing to them.
Oracle slightly changes the way it performs updates to those datafiles: the first time a block is changed, the whole datafile block, rather than just the changed bytes, is copied into the online redo logs.
Hot backups, by their very nature, take an inconsistent ("fuzzy") copy of the datafiles. Since the instance is in ARCHIVELOG mode, all changes to the datafiles are available upon restore to correct any discrepancies created as the copy passes over any given datafile. This is why Oracle records redo information in whole blocks during hot backup: to help recover any inconsistent blocks in the datafiles upon restore.
Also, during hot backup mode Oracle does not update the SCN in the header of the datafile; it stays frozen at the value it had when backup mode began.
Online redo log files should never be backed up in hot backup mode.

STEPS:
1. Get a listing of tablespaces and datafiles
(SELECT tablespace_name, file_name FROM sys.dba_data_files;).

2. For each tablespace:
a) Put the tablespace in hot backup mode
(ALTER TABLESPACE $TABLESPACENAME BEGIN BACKUP;).
b) Copy, archive, or snapshot each datafile in this tablespace.
c) End hot backup mode for this tablespace
(ALTER TABLESPACE $TABLESPACENAME END BACKUP;).

3. Back up the Oracle controlfile
(ALTER DATABASE BACKUP CONTROLFILE TO TRACE;).

4. Confirm all tablespaces have returned to normal mode
(SELECT file#, status, change#, time FROM v$backup WHERE status != 'NOT ACTIVE';).

5. ALTER SYSTEM SWITCH LOGFILE; (this forces a checkpoint, which updates the SCN in the datafile headers).

6. Perform an archive log switch (ALTER SYSTEM ARCHIVE LOG CURRENT;).

7. Back up the archived redo logs.
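The per-tablespace loop in step 2 can be sketched as a small shell helper that just generates the BEGIN/END BACKUP SQL for each tablespace name it is given. This is an illustrative sketch, not a production script: the datafile copy step is only a placeholder comment, and the tablespace names are examples.

```shell
#!/bin/sh
# Sketch: emit the hot-backup SQL for each tablespace name argument.
# A real script would run cp/tar/snapshot where the placeholder comment is.
gen_hotbackup() {
  for ts in "$@"; do
    echo "ALTER TABLESPACE $ts BEGIN BACKUP;"
    echo "-- copy the datafiles of $ts here (cp, tar, or storage snapshot)"
    echo "ALTER TABLESPACE $ts END BACKUP;"
  done
}

# Example: feed the generated SQL to sqlplus, or just inspect it.
gen_hotbackup USERS SYSTEM
```

In practice the tablespace list would come from the dba_data_files query in step 1.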

Wednesday, October 22, 2008

Create Database in oracle

After you have installed the binaries using the Oracle Universal Installer, you need to create the database. You can do that in two ways:
Using the Database Configuration Assistant (DBCA)
With the SQL “create database” statement


Using “DBCA” :

The Database Configuration Assistant (DBCA) is an Oracle-supplied tool that enables you to create an Oracle database, configure database options for an existing Oracle database, delete an Oracle database, or manage database templates.
It can be launched from the command prompt (Start -> Run -> cmd -> dbca).

Using SQL “create database” statement:


Step 1: Decide on Your Instance Identifier (SID)
There can be more than one Oracle instance on a single machine. In order to distinguish these instances, Oracle uses a SID (System Identifier), which is a string.
The SID can be set through the ORACLE_SID environment variable.
Example:

>set ORACLE_SID=OraProdEurope
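On Unix/Linux the equivalent is an environment variable export; note that, unlike the Windows `set` command, the shell form must have no spaces around the equals sign:

```shell
# Unix/Linux equivalent of the Windows "set" above.
# There must be no spaces around "=" in the shell form.
ORACLE_SID=OraProdEurope
export ORACLE_SID
```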

Step 2: Establish the Database Administrator Authentication Method

Step 3: Create the initialization parameter file
When an Oracle instance starts up, it requires either an initialization parameter file (init.ora) or an SPFILE. SPFILEs have binary content and must be created from init.ora files. Therefore, the init.ora file (which is an ordinary text file) is created first.
Create the init.ora file and put it in the path below.
Unix Path is: $ORACLE_HOME/dbs
Windows path is: %ORACLE_HOME%\database
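For reference, a minimal init.ora sketch; db_name is the essential parameter here, and everything else falls back to defaults (the name is a placeholder matching the example below):

```
# minimal init.ora sketch -- only db_name is set; all else defaults
db_name=mynewdb
```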

Step 4: Connect to the Instance
$ SQLPLUS /nolog
SQL>CONNECT SYS/password AS SYSDBA
Connected to an idle instance.

SQL*Plus tells us that we're connected to an idle instance. That means that it is not yet started. So, let's start the instance. We have to start the instance without mounting (nomount) as there is no database we could mount at the moment.

Step 5: Start the Instance.
SQL> STARTUP NOMOUNT

This created the SGA (System Global Area) and the background processes

Step 6: Issue the CREATE DATABASE Statement
CREATE DATABASE mynewdb
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
LOGFILE GROUP 1 ('/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
GROUP 2 ('/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
GROUP 3 ('/oracle/oradata/mynewdb/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE tempts1
DATAFILE '/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE
UNDO TABLESPACE undotbs
DATAFILE '/oracle/oradata/mynewdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;

If something goes wrong during creation, Oracle writes an error to the alert.log. The alert log is normally found in the directory specified by background_dump_dest. If this parameter was not specified (as is the case in our minimal init.ora), the alert.log is written to %ORACLE_HOME%/RDBMS/trace.
If an ORA-01031: insufficient privileges is returned, the most likely cause is that the current user is not in the dba group (on Unix) or the ORA_DBA group (on Windows).
If the init.ora file is not at its default location and has not been pointed to with the PFILE attribute, an ORA-01078: failure in processing system parameters error and an LRM-00109: could not open parameter file '/appl/oracle/product/9.2.0.2/dbs/initadpdb.ora' error are issued.
The CREATE DATABASE command also executes a file whose name is determined by the (hidden) init parameter _init_sql_file (which seems to default to sql.bsq).
After the creation of the database, it can be mounted and opened for use.

Step 7: Create Additional Tablespaces

CONNECT SYS/password AS SYSDBA
-- create a user tablespace to be assigned as the default tablespace for users
CREATE TABLESPACE users LOGGING
DATAFILE '/oracle/oradata/mynewdb/users01.dbf'
SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
DATAFILE '/oracle/oradata/mynewdb/indx01.dbf'
SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
EXIT

Step 8: Run Scripts to Build Data Dictionary Views
CONNECT SYS/password AS SYSDBA
(In order to complete the db creation, the following scripts must be run as sys )

@/oracle/rdbms/admin/catalog.sql
@/oracle/rdbms/admin/catproc.sql
EXIT

CATALOG.SQL
Creates the views of the data dictionary tables, the dynamic performance views, and public synonyms for many of the views. Grants PUBLIC access to the synonyms.
CATPROC.SQL
Runs all scripts required for or used with PL/SQL.

The user SYSTEM might also want to run /sqlplus/admin/pupbld.sql.
pupbld.sql creates the product user profile tables, which allow SQL*Plus access to be restricted for individual users.
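As an illustration, a hedged sketch of how the product user profile table created by pupbld.sql can be used; the SCOTT user and the DELETE restriction here are hypothetical examples:

```sql
-- Hypothetical example: stop user SCOTT from issuing DELETE in SQL*Plus.
-- PRODUCT_USER_PROFILE is the table built by pupbld.sql.
INSERT INTO system.product_user_profile (product, userid, attribute, char_value)
VALUES ('SQL*Plus', 'SCOTT', 'DELETE', 'DISABLED');
COMMIT;
```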

Monday, October 20, 2008

USER Profiles

Profiles in Oracle
Profiles are a means to limit resources a user can use.
Before profiles can be assigned, they must be created with create profile. Then, they can be assigned to users with alter user ... profile.
For Example:

CREATE PROFILE
SQL> CREATE PROFILE developer LIMIT
PASSWORD_LIFE_TIME 60
PASSWORD_GRACE_TIME 10
PASSWORD_REUSE_TIME 0
PASSWORD_REUSE_MAX 0
FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 2
CPU_PER_CALL 3000
PRIVATE_SGA 500K
LOGICAL_READS_PER_CALL 1000;

ALTER Profile:
SQL>ALTER PROFILE developer LIMIT FAILED_LOGIN_ATTEMPTS 3;

Drop a profile (not assigned to any user):
SQL> DROP PROFILE developer;

Drop a profile even if it is assigned to users:
SQL> DROP PROFILE developer CASCADE;

Assign a profile to a user:
SQL> ALTER USER rama PROFILE developer;


Limitable resources

RESOURCE_LIMIT=TRUE is required for resource limiting portions of the profile. Password limiting functionality is not affected by this parameter.

SQL> SELECT name, value FROM gv$parameter WHERE name = 'resource_limit';
SQL> ALTER SYSTEM SET resource_limit=TRUE SCOPE=BOTH;

Kernel limits
- Maximum concurrent sessions for a user (sessions_per_user)
- CPU time limit per session (cpu_per_session)
- CPU time limit per call (cpu_per_call); a call being parse, execute, or fetch
- Maximum connect time (connect_time); the session will be dropped by Oracle after the specified time
- Maximum idle time (idle_time); the session will be dropped by Oracle after the specified time of doing nothing. Long-running operations are not idle!
- Maximum blocks read per session (logical_reads_per_session)
- Maximum blocks read per call (logical_reads_per_call)
- Maximum amount of SGA (private_sga)
- Composite limit on total resource cost (composite_limit)
In order to enforce kernel limits, resource_limit must be set to true.

Password limits
- Maximum failed login attempts (failed_login_attempts)
- Maximum time a password is valid (password_life_time)
- Minimum number of different passwords before a password can be reused (password_reuse_max)
- Minimum number of days before a password can be reused (password_reuse_time)
- Number of days an account is locked after failing to log in (password_lock_time)
- Password grace time (password_grace_time)
- Verify function for passwords (password_verify_function)


If a session exceeds one of these limits, Oracle will terminate the session. If there is a logoff trigger, it won't be executed.

History of passwords
In order to track password related profile limits, Oracle stores the history of passwords for a user in user_history$.
Profiles already created and their settings can be queried through dba_profiles
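For example, the settings of the DEVELOPER profile created above can be pulled back with a query like this:

```sql
-- List the limits of the DEVELOPER profile.
SELECT resource_name, resource_type, limit
  FROM dba_profiles
 WHERE profile = 'DEVELOPER'
 ORDER BY resource_type, resource_name;
```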

Monday, September 8, 2008

3 important Oracle Net files (tnsnames.ora,sqlnet.ora & listner.ora)

tnsnames.ora
A tnsnames.ora file maps net service names to connect descriptors.
In simple words, tnsnames.ora file provides the ability to reference oracle databases by a simple alias.
It can be found in the ORACLE_HOME/network/admin or ORACLE_HOME/net80/admin directory on the client. This file will also be present on the server if client style connections are used on the server itself.
The net service name thus becomes a (most likely shorter and more readable) alias for the somewhat cumbersome connect descriptor.

In the following example, the text after the equal sign is the connect descriptor, while the text before the equal sign (dbname1, aliasname1, aliasname2) is the net service name with two additional aliases.

$ORACLE_HOME/network/admin/tnsnames.ora
---------------------------------------
dbname1, aliasname1, aliasname2 =
(description =
(address_list =
(address =
(protocol = tcp)(host = yourHost.domain)(port = 1521)))
(connect_data =(sid = yourSID)))

sqlnet.ora
The sqlnet.ora file contains client side network configuration parameters.

The sqlnet.ora file enables you to:
Specify the client domain to append to unqualified names
Prioritize naming methods
Enable logging and tracing features
Route connections through specific processes
Configure parameters for external naming
Configure Oracle Advanced Security
Use protocol-specific parameters to restrict access to the database
It can be found in the ORACLE_HOME/network/admin or ORACLE_HOME/net80/admin directory on the client.

Here is an example of an sqlnet.ora file :
$ORACLE_HOME/network/admin/sqlnet.ora
-------------------------------------
automatic_ipc = ON # Set to OFF for PC's
trace_level_client = OFF # Set to 16 if tracing is required
sqlnet.expire_time = 0 # Idle time in minutes
sqlnet.authentication_services = (ALL)
names.directory_lookup = (TNSNAMES,ONAMES)
names.default_domain = world
name.default_zone = world

listener.ora

The listener.ora file contains the listener configuration:
The listener's name
The protocol addresses on which it accepts connection requests
The services it listens for
Control parameters
By default, the listener.ora file is located in $ORACLE_HOME/network/admin.

A Sample Listener.ora file
$ORACLE_HOME/network/admin/listener.ora
---------------------------------------
LISTENER = # Listener name is LISTENER
(address_list =
(address=
(protocol=ipc)
(key=yourSID)
)
(address=(protocol = tcp)(host = yourHost.domain)(port = 1521)
)
)

Friday, August 8, 2008

Performance Questions

How do I create the PLUSTRACE role for use with AUTOTRACE?
How do I interpret the output from Explain Plan?

Friday, August 1, 2008

RDBMS Patch Lab -

Oracle RDBMS patching is one of the DBA's regular tasks. Here I would like to discuss, very broadly, how we do RDBMS patching. The database and the listener must be down while applying the patch.

We apply RDBMS patches using the OPatch utility. Please check that you are using the correct OPatch version for the particular RDBMS patch you are applying.

Here are the steps -

Connect as the oracle user.

Set the environment for the instance.

1) Download the patch from Oracle Support to $ORACLE_HOME/patch and unzip it.
While unzipping, note the directory it creates.
cd to that directory and copy the directory path.
Example: 3852108

2)
export PATH=$ORACLE_HOME/OPatch:$PATH
export PATH=/opt/perl/bin:$PATH

3)
cd $ORACLE_HOME/OPatch

4) Apply the patch with one of these commands:
opatch apply -invPtrLoc $ORACLE_HOME/oraInventory/oraInst.loc /3852108

opatch apply -no_inventory /3852108

Verify afterwards with:
opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInventory/oraInst.loc
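The environment setup in steps 2-3 can be sketched as below; the ORACLE_HOME value is a placeholder for the home being patched.

```shell
# Sketch: put the home's OPatch directory first on PATH so the intended
# opatch version is found. ORACLE_HOME here is a placeholder path.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/10.2.0/db_1}
PATH="$ORACLE_HOME/OPatch:$PATH"
export ORACLE_HOME PATH
```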

Thursday, June 26, 2008

Apps Auditing

Oracle Applications auditing is possible at three levels:

User level
Responsibility level
Form level

All user sign-ons, responsibility selections, and form accesses are logged to:
APPLSYS.FND_LOGINS
APPLSYS.FND_LOGIN_RESPONSIBILITIES
APPLSYS.FND_LOGIN_RESP_FORMS

Unsuccessful Logins: unsuccessful password attempts are automatically recorded in APPLSYS.FND_UNSUCCESSFUL_LOGINS and ICX.ICX_FAILURES.

Concurrent Requests: all concurrent requests are recorded in the FND_CONCURRENT_REQUESTS table. No configuration is required.

Based upon the level of Sign-On Auditing selected, the system captures all or some of the following information:
• The users who sign on to the system
• The sign-in and sign-out times
• The responsibilities they choose
• The forms they use
• The duration of using the form or responsibility
• The terminals used for sign-on
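For illustration, a sketch of a sign-on report against FND_LOGINS. The column names (start_time, end_time) are assumptions from memory and should be verified against your EBS release:

```sql
-- Hypothetical sketch: sign-ons in the last day, joined to FND_USER.
-- Verify the FND_LOGINS column names against your release before using.
SELECT u.user_name, l.start_time, l.end_time
  FROM applsys.fnd_logins l, applsys.fnd_user u
 WHERE l.user_id = u.user_id
   AND l.start_time > SYSDATE - 1;
```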

Tuesday, May 6, 2008

The Basics of Oracle Performance Tuning

Performance is a word associated with every concept, thing, workforce, and technology that drives a business.

The Oracle database is no exception. You will find thousands of articles where complicated things are presented simply, and many other articles where simple things get twisted by the author's imagination.

To me, performance is something in good enough form to attain the desired result.

Oracle performance, often termed high performance, depends on the very basic components it is made of. We can divide Oracle performance into the following categories:

1. Instance/Memory Tuning

2. Network Tuning

3. SQL/ Application Tuning

4. Database Tuning

We will limit the scope of this discussion to a broad perspective, for understanding purposes. I will give more detailed explanations, using a case study, in my next post.

New Topics in This Blog

On Readers Demand - Introducing Performane Tuning in a NutShell!!

Automatic Storage Management (ASM) in Oracle 10g

Oracle ASM (Automatic Storage Management) is a new feature of Oracle 10g to streamline storage management and provisioning. ASM is a powerful and easy-to-use volume manager and filesystem for Oracle 10g and 10g RAC.

ASM eliminates the need for the DBA to directly manage the thousands of Oracle database files that could be present in a modern Oracle instance. ASM does this through ASM disk groups, which are logical units comprised of disks and the files that reside on them. Using ASM, the management of thousands of Oracle files is reduced to managing a small number of disk groups.

To turn on ASM you must create an ASM instance before you start your database instances. The ASM functionality is controlled by the ASM instance. This is not a full database instance, just the memory structures, and as such it is very small and lightweight.

The main components of ASM are disk groups, each of which comprises several physical disks that are controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names of the files are controlled by ASM, but user-friendly aliases and directory structures can be defined for ease of reference. ASM is the logical extension of the power of Oracle-managed files (OMF).

The level of redundancy and the granularity of the striping can be controlled using templates. Default templates are provided for each file type stored by ASM, but additional templates can be defined as needed. Failure groups are defined within a disk group to support the required level of redundancy. For two-way mirroring, you would expect a disk group to contain two failure groups, so individual files are written to two locations.

V$ Views (all these views are accessible from the ASM instance)
The ASM configuration can be viewed using the V$ASM_% views, which often contain different information depending on whether they are queried from the ASM instance or a dependent database instance.
V$ASM_DISKGROUP - Performs disk discovery and lists disk groups
V$ASM_DISKGROUP_STAT - Lists disk groups (without performing discovery)
V$ASM_DISK - Performs disk discovery and lists disks + usage metrics
V$ASM_DISK_STAT - Lists disks + usage metrics (without performing discovery)
V$ASM_FILE - Lists ASM files
V$ASM_ALIAS - Lists ASM aliases
V$ASM_CLIENT - Lists the database instances connected to ASM
V$ASM_OPERATION - Reports long-running ASM operations, such as a rebalance
V$ASM_TEMPLATE - Displays a row for each template present in each disk group mounted by the ASM instance
In summary, ASM provides the following functionality:
1. Manages groups of disks, called disk groups.
2. Manages disk redundancy within a disk group.
3. Provides near-optimal I/O balancing without any manual tuning.
4. Enables management of database objects without specifying mount points and filenames.
5. Supports large files.



Initialization Parameters and ASM Instance Creation

The initialization parameters that are of specific interest for an ASM instance are:

INSTANCE_TYPE
Set to ASM to define the instance as an ASM instance; the default is RDBMS. This is the only required parameter to identify an ASM instance; the remainder can be left at their defaults.
DB_UNIQUE_NAME
Defines the service provider name for which this ASM instance manages disk groups. +ASM is the default value, and should not be modified unless multiple ASM instances are on the same node
ASM_POWER_LIMIT
Controls rebalance operation speed. Values range from 1 to 11, with 11 being the fastest. If omitted, this value defaults to 1. The number of slaves is derived from the parallelization level specified in a manual rebalance command (POWER), or by the ASM_POWER_LIMIT parameter
ASM_DISKSTRING
An operating system dependent value; used by ASM to limit the set of disks considered for discovery
ASM_DISKGROUPS
Describes the list of ASM disk group names to be mounted by an ASM instance at startup, or whenever the ALTER DISKGROUP ALL MOUNT command is used
LARGE_POOL_SIZE
The LARGE POOL size. This must be set to a minimum of 8MB, but Oracle recommends setting this to 16MB
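Pulling the parameters above together, a fuller pfile for an ASM instance might look like this sketch; only INSTANCE_TYPE is required, and the disk string and disk group name are placeholders:

```
# init+ASM.ora sketch -- only INSTANCE_TYPE is required;
# the other values are illustrative placeholders
INSTANCE_TYPE=ASM
ASM_DISKSTRING='/devices/disk*'
ASM_DISKGROUPS=disk_group_1
LARGE_POOL_SIZE=16M
```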

Incorrect usage of parameters in ASM or RDBMS instances result in ORA-15021 errors.

To create an ASM instance first create a file called init+ASM.ora in the /tmp directory containing the following information.

INSTANCE_TYPE=ASM

Next, using SQL*Plus, connect to the idle instance.

export ORACLE_SID=+ASM

sqlplus / as sysdba

Create an spfile using the contents of the init+ASM.ora file.

SQL> CREATE SPFILE FROM PFILE='/tmp/init+ASM.ora';

File created.

Finally, start the instance with the NOMOUNT option.

SQL> startup nomount

ASM instance started

Total System Global Area 125829120 bytes
Fixed Size 1301456 bytes
Variable Size 124527664 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes

The ASM instance is now ready to use for creating and mounting disk groups.

To shutdown the ASM instance issue the following command.

SQL> shutdown

ASM instance shutdown

Once an ASM instance is present disk groups can be used for the following parameters in database instances

(INSTANCE_TYPE=RDBMS) to allow ASM file creation:
DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_n
DB_RECOVERY_FILE_DEST
CONTROL_FILES
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST
STANDBY_ARCHIVE_DEST

Startup and Shutdown of ASM Instances

ASM instances are started and stopped in a similar way to normal database instances.

The options for the STARTUP command are:
FORCE - Performs a SHUTDOWN ABORT before restarting the ASM instance.
MOUNT - Starts the ASM instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter.
NOMOUNT - Starts the ASM instance without mounting any disk groups.
OPEN - This is not a valid option for an ASM instance.

The options for the SHUTDOWN command are:
NORMAL - The ASM instance waits for all connected database instances and SQL sessions to exit, then shuts down.
IMMEDIATE - The ASM instance waits for any SQL transactions to complete then shuts down. It doesn't wait for sessions to exit.
TRANSACTIONAL - Same as IMMEDIATE.
ABORT - The ASM instance shuts down instantly.

Administering ASM :

CREATE DISKGROUP statement - for creating Disk groups

This statement allows you to specify the level of redundancy:
A. NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups.
B. HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups.
C. EXTERNAL REDUNDANCY - No mirroring, for disks that are already protected using hardware mirroring or RAID.
In addition, failure groups and preferred names for disks can be defined. If the NAME clause is omitted, the disks are given a system-generated name like "disk_group_1_0001". The FORCE option can be used to move a disk from another disk group into this one.

Create Disk Group Command Example:
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK
'/devices/diska1' NAME diska1,
'/devices/diska2' NAME diska2
FAILGROUP failure_group_2 DISK
'/devices/diskb1' NAME diskb1,
'/devices/diskb2' NAME diskb2;

Disk groups can be deleted using the DROP DISKGROUP statement.
DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;

Disks can be added or removed from disk groups using the ALTER DISKGROUP statement. Remember that the wildcard "*" can be used to reference disks so long as the resulting string does not match a disk already used by an existing disk group.

Add disks.
ALTER DISKGROUP disk_group_1 ADD DISK
'/devices/disk*3',
'/devices/disk*4';

Drop a disk
ALTER DISKGROUP disk_group_1 DROP DISK disk2;

Disks can be resized using the RESIZE clause of the ALTER DISKGROUP statement. The statement can be used to resize individual disks, all disks in a failure group or all disks in the disk group. If the SIZE clause is omitted the disks are resized to the size of the disk returned by the OS.
Resize a specific disk
ALTER DISKGROUP disk_group_1
RESIZE DISK diska1 SIZE 100G;
Resize all disks in a failure group.
ALTER DISKGROUP disk_group_1
RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 100G;
Resize all disks in a disk group
ALTER DISKGROUP disk_group_1
RESIZE ALL SIZE 100G;

The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops to be undone. It will not revert drops that have completed, or disk drops associated with the dropping of a disk group.

ALTER DISKGROUP disk_group_1 UNDROP DISKS;
Disk groups can be rebalanced manually using the REBALANCE clause of the ALTER DISKGROUP statement. If the POWER clause is omitted the ASM_POWER_LIMIT parameter value is used. Rebalancing is only needed when the speed of the automatic rebalancing is not appropriate.
ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;
Disk groups are mounted at ASM instance startup and unmounted at ASM instance shutdown. Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP statement as seen below.
ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;
ALTER DISKGROUP disk_group_1 MOUNT;

Template
Templates are named groups of attributes that can be applied to the files within a disk group. The following example show how templates can be created, altered and dropped.
Create a new template
ALTER DISKGROUP disk_group_1 ADD TEMPLATE my_template ATTRIBUTES (MIRROR FINE);
Modify template
ALTER DISKGROUP disk_group_1 ALTER TEMPLATE my_template ATTRIBUTES (COARSE);
Drop template
ALTER DISKGROUP disk_group_1 DROP TEMPLATE my_template;

Available attributes include:
UNPROTECTED - No mirroring or striping regardless of the redundancy setting.
MIRROR - Two-way mirroring for normal redundancy and three-way mirroring for high redundancy. This attribute cannot be set for external redundancy.
COARSE - Specifies lower granularity for striping. This attribute cannot be set for external redundancy.
FINE - Specifies higher granularity for striping. This attribute cannot be set for external redundancy.

Directories
A directory hierarchy can be defined using the ALTER DISKGROUP statement to support ASM file aliasing.
The following examples show how ASM directories can be created, modified, and deleted.

Create a directory
ALTER DISKGROUP disk_group_1 ADD DIRECTORY '+disk_group_1/my_dir';

Rename a directory
ALTER DISKGROUP disk_group_1 RENAME DIRECTORY '+disk_group_1/my_dir' TO '+disk_group_1/my_dir_2';

Delete a directory and all its contents
ALTER DISKGROUP disk_group_1 DROP DIRECTORY '+disk_group_1/my_dir_2' FORCE;

Aliases
Aliases allow you to reference ASM files using user-friendly names, rather than the fully qualified ASM filenames.

Create an alias using the fully qualified filename
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
FOR '+disk_group_1/mydb/datafile/my_ts.342.3';

Create an alias using the numeric form filename
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
FOR '+disk_group_1.342.3';

Rename an alias
ALTER DISKGROUP disk_group_1 RENAME ALIAS '+disk_group_1/my_dir/my_file.dbf'
TO '+disk_group_1/my_dir/my_file2.dbf';

Delete an alias
ALTER DISKGROUP disk_group_1 DELETE ALIAS '+disk_group_1/my_dir/my_file.dbf';
Attempting to drop a system alias results in an error.

Files
Files are not deleted automatically if they are created using aliases, as they are not Oracle Managed Files (OMF), or if a recovery is done to a point-in-time before the file was created. For these circumstances it is necessary to manually delete the files, as shown below.

Drop file using an alias
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/my_dir/my_file.dbf';

Drop file using a numeric form filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1.342.3';

Drop file using a fully qualified filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/mydb/datafile/my_ts.342.3';

Checking Metadata

The internal consistency of disk group metadata can be checked in a number of ways using the CHECK clause of the ALTER DISKGROUP statement.

Check metadata for a specific file.
ALTER DISKGROUP disk_group_1 CHECK FILE '+disk_group_1/my_dir/my_file.dbf';

Check metadata for a specific failure group in the disk group.
ALTER DISKGROUP disk_group_1 CHECK FAILGROUP failure_group_1;

Check metadata for a specific disk in the disk group.
ALTER DISKGROUP disk_group_1 CHECK DISK diska1;

Check metadata for all disks in the disk group.
ALTER DISKGROUP disk_group_1 CHECK ALL;

ASM Filenames

There are several ways to reference an ASM file. Some forms are used during creation and some for referencing existing ASM files. The forms for file creation are incomplete, relying on ASM to create the fully qualified name, which can be retrieved from the supporting views. The forms of ASM filenames are summarised below.

Filename Type - Format
Fully Qualified ASM Filename - +dgroup/dbname/file_type/file_type_tag.file.incarnation
Numeric ASM Filename - +dgroup.file.incarnation
Alias ASM Filename - +dgroup/directory/filename
Alias ASM Filename with Template - +dgroup(template)/alias
Incomplete ASM Filename - +dgroup
Incomplete ASM Filename with Template - +dgroup(template)

SQL and ASM
ASM filenames can be used in place of conventional filenames for most Oracle file types, including controlfiles, datafiles, logfiles etc.
For example, the following command creates a new tablespace with a datafile in the disk_group_1 disk group.

CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;
Migrating to ASM Using RMAN

The following method shows how a primary database can be migrated to ASM from a disk based backup:
Disable change tracking (only available in Enterprise Edition) if it is currently being used.
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
Shutdown the database.

SQL> SHUTDOWN IMMEDIATE

Modify the parameter file of the target database as follows:
Set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters to the relevant ASM disk groups.
Remove the CONTROL_FILES parameter from the spfile so the control files will be moved to the DB_CREATE_* destination and the spfile gets updated automatically. If you are using a pfile the CONTROL_FILES parameter must be set to the appropriate ASM files or aliases.

Start the database in nomount mode.
RMAN> STARTUP NOMOUNT

Restore the controlfile into the new location from the old location.
RMAN> RESTORE CONTROLFILE FROM 'old_control_file_name';

Mount the database.
RMAN> ALTER DATABASE MOUNT;

Copy the database into the ASM disk group.
RMAN> BACKUP AS COPY DATABASE FORMAT '+disk_group';

Switch all datafile to the new ASM location.
RMAN> SWITCH DATABASE TO COPY;

Open the database.
RMAN> ALTER DATABASE OPEN;

Create new redo logs in ASM and delete the old ones.
Enable change tracking if it was being used.
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

Wednesday, April 23, 2008

Invalid Objects !!! Its Impacts

Sometimes packages become invalid. Have you observed what happens to them, and what happens to the procedures or programs accessing these packages?

We will try to find it in this session -

Monday, April 21, 2008

Advanced Topics!!!

Coming Soon!!!!
In
bignfatdba.blogspot.com

Checkout this blog for advanced Topics.

Resource Role is a must for Apps

If we revoke resource role then while patching it will throw errors.

Example:

CREATE OR REPLACE PUBLIC SYNONYM qp_list_headers_all_b FOR QP.qp_list_headers_b
AD Worker error: The following ORACLE error:
ORA-01031: insufficient privileges

Saturday, April 12, 2008

Unix - Oracle Common DBA Handy Commands for General Admin Workz

Find out which server you are on:
oracle@myServer::/offline [BNFD] $ uname -a
HP-UX myServer B.11.00 A 9000/800 1126424671 two-user license

du - estimate file space usage
df -kvi - displays the free disk space in the / filesystem
1. To check the system activity history on UNIX: /var/adm/syslog/syslog.log
2. Ask the sysadmin to get the PerfView statistical analysis for that data
unix Find/Replace
:1,$ s/sun/mars/g
:1,$ s/mars/sun/g
sed 's/sun/mars/g' pasand.ctl > mars.ctl
sed 's/mars/sun/g' mars_ora_3770.trc > sun.ctl
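The same sun/mars substitution can be applied to a pipeline instead of a file:

```shell
# sed substitution on a pipeline; prints "the mars rises"
echo "the sun rises" | sed 's/sun/mars/g'
```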
The scp command for copying to a different server:
scp *.dbf oracle@bnfd.usa.com:/stage/oracle/mars/
Be at the source server, in the source directory, then:
scp <filename with path> <username>@<destination server>:<path of the dir and file>
Example:
scp us_exp_140705.dmp oracle@bnfd.usa.com:/stage/oracle/mars/
Find and remove:
Find files that are more than 30 days old and remove them from udump:
find . -atime +30 -name '*' -exec rm -f {} \;
find /d1/app/oracle/orasrv/admin/BNFD/arch -atime +3 -name '*' -exec rm -f {} \;
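The age-based cleanup can be tried safely in a scratch directory first. This sketch assumes GNU coreutils (the `touch -a -d "40 days ago"` form) and back-dates a file's access time so that `-atime +30` matches it:

```shell
# Demo of the age-based cleanup in a scratch directory (GNU coreutils assumed).
dir=$(mktemp -d)
touch "$dir/new.trc"
touch -a -d "40 days ago" "$dir/old.trc"   # back-date the access time
find "$dir" -atime +30 -type f -exec rm -f {} \;
ls "$dir"   # old.trc is gone, new.trc remains
```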
Common vi editor commands:
ctrl + d = 1 page down
ctrl + u = 1 page up
To delete everything above the cursor (d1G):
d + 1 + shift + g
To delete everything below the cursor (dG):
d + shift + g

Miscellaneous commands:
gzip -d arch135.arc.gz -- for unzipping the files
spool c:\d760.csv -- to spool query output to a CSV file (in SQL*Plus)

General DBA Responsibilities!!!

Q: What was your responsibility in the previous company?
Ans: I was responsible for managing space -- i.e. monitoring the growth of the datafiles and resolving space-related issues.
-> Troubleshooting issues related to backup failures.
-> Creating policies for taking backups.
-> Recovering databases when needed and preparing standby servers.
-> Taking logical and physical backups, adhering strictly to the policies.
-> Creating databases on demand.
-> Allocating and managing system privileges.
-> Planning and implementing backup policies & strategies.
-> Minimizing database downtime and, in certain situations, contacting Oracle Corp. for technical support.
DBA Responsibilities:
User Management:
Oracle Account Management Policy contains the following parts:
• Account Locking
• Password Aging and Expiration
• Password History
• Password Complexity Verification

Tuesday, March 18, 2008

When Does DBWR Write?

When Does Database Writer Write?
The DBWn background process writes to the datafiles whenever one of the following events occurs:

• A user's Server Process has searched too long for a free buffer when reading a buffer into the Buffer Cache.
• The number of modified and committed, but unwritten, buffers in the Database Buffer Cache is too large.
• A database Checkpoint event occurs.
• The instance is shut down using any method other than a shutdown abort.
• A tablespace is placed into backup mode.
• A tablespace is taken offline to make it unavailable or changed to READ ONLY.
• A segment is dropped.

When Does LGWR Write?

When Does Log Writer Write?

The LGWR background process writes to the current redo log group whenever one of the following database events occurs:

• Every three seconds.
• A user commits a transaction.
• The Redo Log Buffer is one-third full.
• The Redo Log Buffer contains 1MB worth of redo information.
• Before the DBWn process writes, whenever a database checkpoint occurs.

Oracle Physical Files


Controlfiles: Locations of other physical files, database name, database block size, database character set, and recovery information. These files are required to open the database.

Datafile: All application data and internal metadata.

Redo log files: Record of all changes made to the database; used for recovery.

Parameter (PFILE or SPFILE): Configuration parameters for the SGA, optional Oracle features,and background processes.

Archived logfiles: Copy of the contents of previous online redo logs, used for recovery.

Password file (optional): stores the names of users who have been granted the SYSDBA and SYSOPER privileges.

Control Files

Control files are critical components of the database because they store important information
that is not available anywhere else. This information includes the following:

1.The name of the database
2.The names, locations, and sizes of the datafiles and redo log files
3.Information used to recover the database in the case of a disk failure or user error

The control files are created when the database is created, in the locations specified in the control_files parameter in the parameter file. Because loss of the control files negatively impacts the ability to recover the database, most production databases multiplex their control files to multiple locations. Oracle uses the CKPT background process to automatically update each of these files as needed, keeping the contents of all copies of the control file synchronized. You can use the dynamic performance view V$CONTROLFILE to display the names and locations of all the database's control files.
A sample query of V$CONTROLFILE on a Unix system is shown here:

SQL> select name from v$controlfile;

NAME
------------------------------------
/orcl/MARS/control/control01.ctl
/orcl/MARS/control/control02.ctl
/orcl/MARS/control/control03.ctl


Among the things the control files keep track of are the names, locations, and sizes of the database datafiles. Datafiles are closely related to another database structure called a tablespace, described next.


Datafiles
Datafiles are the physical files that actually store the data that has been inserted into each table in the database. The size of the datafiles is directly related to the amount of table data that they store. Datafiles are the physical structure behind another database storage area called a tablespace. A tablespace is a logical storage area within the database. Tablespaces group logically related segments.

Common tablespaces are: SYSTEM, SYSAUX, TEMP, TOOLS, USERS, UNDOTBS1


SQL> select tablespace_name from dba_tablespaces order by tablespace_name;
TABLESPACE_NAME
------------------------------
APPL_IDX
APPL_TAB
EXAMPLE
SYSAUX
SYSTEM
TEMP
UNDOTBS1

7 rows selected.

or you can try following sql command for more info.

SQL> select tablespace_name, file_name from dba_data_files order by tablespace_name;


Redo Log Files
Whenever a user performs a transaction in the database, the information needed to reproduce this transaction in the event of a database failure is automatically recorded in the Redo Log Buffer. The contents of the Redo Log Buffer are ultimately written to the redo logs by the LGWR background process.

Because of the important role that redo logs play in Oracle's recovery mechanism, they are usually multiplexed, or copied. This means that each redo log is kept as one or more identical copies, in case one copy becomes corrupt or is lost due to a hardware failure. Collectively, these sets of redo logs are referred to as redo log groups, and each multiplexed file within a group is called a redo log group member. Oracle automatically writes to all members of a redo log group to keep the files in sync. Each redo log group must be composed of one or more members, and each database must have a minimum
of two redo log groups because redo logs are used in a circular fashion.


SQL> select * from v$logfile ;

SQL> select * from v$log ;

5 mandatory Oracle Background Processes

System Monitor SMON: Performs instance recovery following an instance crash, coalesces
free space in the database, and manages space used for sorting

Process Monitor PMON: Cleans up failed user database connections

Database Writer DBWn* : Writes modified database blocks from the SGA’s Database Buffer Cache to the datafiles on disk

Log Writer LGWR : Writes transaction recovery information from the SGA’s Redo Log Buffer
to the online Redo Log files on disk

Checkpoint CKPT : Updates the database files following a Checkpoint Event


On Unix systems, you can view these background processes from the operating system using
the ps command, as shown here:

$ ps -ef | grep MARS

oracle 3969 1 0 10:02 ? 00:00:05 ora_pmon_MARS
oracle 3971 1 0 10:02 ? 00:00:00 ora_mman_MARS
oracle 3973 1 0 10:02 ? 00:00:07 ora_dbw0_MARS
oracle 3975 1 0 10:02 ? 00:00:07 ora_lgwr_MARS
oracle 3977 1 0 10:02 ? 00:00:10 ora_ckpt_MARS
oracle 3979 1 0 10:02 ? 00:00:20 ora_smon_MARS
oracle 3981 1 0 10:02 ? 00:00:00 ora_reco_MARS
oracle 3983 1 0 10:02 ? 00:00:09 ora_cjq0_MARS
oracle 3985 1 0 10:02 ? 00:00:00 ora_d000_MARS
oracle 3987 1 0 10:02 ? 00:00:00 ora_s000_MARS
oracle 4052 1 0 10:02 ? 00:00:00 ora_qmnc_MARS
oracle 4054 1 0 10:02 ? 00:00:29 ora_mmon_MARS
oracle 4057 1 0 10:02 ? 00:00:08 ora_mmnl_MARS
oracle 4059 1 0 10:02 ? 00:01:04 ora_j000_MARS
oracle 27544 1 0 20:29 ? 00:00:00 ora_q000_MARS

This output shows that 15 background processes are running on the Unix server for the MARS
database.

Parameters in PFILE/SPFILE

Many initialization parameters are used to specify the size of the SGA and its components. Any parameters not specified in the PFILE or SPFILE take on their default values. The following is an example of the contents of a typical Unix Oracle 10g PFILE that contains both basic and advanced parameters:

db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_name=MARS
background_dump_dest=/u01/app/oracle/admin/MARS/bdump
core_dump_dest=/oracle/admin/MARS/cdump
user_dump_dest=/oracle/admin/MARS/udump
control_files=(/vitaldata/c1/MARS/control01.ctl, /vitaldata/c2/MARS/control02.ctl,
/vitaldata/c3/MARS/control03.ctl)
db_recovery_file_dest=/oracle/flash_recovery_area/
db_recovery_file_dest_size=2147483648
job_queue_processes=10
compatible=10.1.0.2.0
sga_target=500M
sga_max_size=800M
processes=250
remote_login_passwordfile=EXCLUSIVE
pga_aggregate_target=25165824
sort_area_size=65536
undo_management=AUTO
undo_tablespace=UNDOTBS1


SQL> select * from v$sga;

NAME VALUE
-------------------- ----------
Fixed Size 787988
Variable Size 145750508
Database Buffers 25165824
Redo Buffers 262144


SQL> select component,current_size from v$sga_dynamic_components;

PFILE & SPFILE

Monday, March 17, 2008

Data dictionary view VS Dynamic view

Dictionary Views vs. Dynamic Performance Views:

1. The DBA_ views usually have plural names (for example, DBA_DATA_FILES); the names of the V$ views are generally singular (for example, V$DATAFILE).

2. The DBA_ views are available only when the database is open and running; some V$ views are available even when the database is not fully open and running.

3. The data contained in the DBA_ views is generally uppercase; the data contained in the V$ views is usually lowercase.

4. The data contained in the DBA_ views is static and is not cleared when the database is shut down; the V$ views contain dynamic statistical data that is lost each time the database is shut down.

Wednesday, March 5, 2008

Managing the Recycle Bin

what happens when the dropped objects take up all of that space?

The answer is simple: that situation does not even arise. When a tablespace is completely filled up with recycle bin data such that the datafiles have to extend to make room for more data, the tablespace is said to be under "space pressure." In that scenario, objects are automatically purged from the recycle bin in a first-in-first-out manner. The dependent objects (such as indexes) are removed before a table is removed.


Similarly, space pressure can occur with user quotas as defined for a particular tablespace. The tablespace may have enough free space, but the user may be running out of his or her allotted portion of it. In such situations, Oracle automatically purges objects belonging to that user in that tablespace.

If you want to purge the specific table named EMP from the recycle bin after its drop, you could issue

PURGE TABLE EMP;

or using its recycle bin name:

PURGE TABLE "BIN$09LuipandafgMALLLLLghg==$0";

This command will remove the table EMP and all dependent objects such as indexes, constraints, and so on from the recycle bin, saving some space. If, however, you want to permanently drop an index from the recycle bin, you can do so using:

purge index inx_emp_01;


which will remove the index only, leaving the copy of the table in the recycle bin.

Sometimes it might be useful to purge at a higher level. For instance, you may want to purge all the objects in recycle bin in a tablespace MYTBSP. You would issue:

PURGE TABLESPACE MYTBSP;


You may want to purge only the recycle bin for a particular user in that tablespace. This approach could come in handy in data warehouse-type environments where users create and drop many transient tables. You could modify the command above to limit the purge to a specific user only:

PURGE TABLESPACE MYTBSP USER DIP;


A user such as DIP would clear his own recycle bin with

PURGE RECYCLEBIN;


You as a DBA can purge all the objects in any tablespace using

PURGE DBA_RECYCLEBIN;


As you can see, the recycle bin can be managed in a variety of different ways to meet your specific needs.

Flashback Table

The Flashback Table feature in Oracle Database 10g allows you to revive an accidentally dropped table.

First, let's see the tables in the present schema.


SQL> select * from tab;

TNAME TABTYPE CLUSTERID
------------------------ ------- ----------
RECYCLETEST TABLE


Now, we accidentally drop the table:


SQL> drop table recycletest;

Table dropped.


Let's check the status of the table now.


SQL> select * from tab;

TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
BIN$04LhcpndanfgMAAAAAANPw==$0 TABLE


The table RECYCLETEST is gone but note the presence of the new table
BIN$04LhcpndanfgMAAAAAANPw==$0.

Here's what happened: The dropped table RECYCLETEST, instead of completely disappearing, was renamed to a system-defined name. It stays in the same tablespace, with the same structure as that of the original table. If there are indexes or triggers defined on the table, they are renamed too, using the same naming convention used by the table. Any dependent sources such as procedures are invalidated; the triggers and indexes of the original table are instead placed on the renamed table BIN$04LhcpndanfgMAAAAAANPw==$0, preserving the complete object structure of the dropped table.

The table and its associated objects are placed in a logical container known as the "recycle bin," which is similar to the one in your PC. However, the objects are not moved from the tablespace they were in earlier; they still occupy the space there. The recycle bin is merely a logical structure that catalogs the dropped objects.

Use the following command from the SQL*Plus prompt to see its content:


SQL> show recyclebin

ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
---------------- ------------------------------ ------------ ------------------
RECYCLETEST BIN$04LhcpndanfgMAAAAAANPw==$0 TABLE 2004-02-16:21:13:31


This shows the original name of the table, RECYCLETEST, as well as the new name in the recycle bin, which has the same name as the new table we saw created after the drop. (Note: the exact name may differ by platform.) To reinstate the table, all you have to do is use the FLASHBACK TABLE command:


SQL> FLASHBACK TABLE RECYCLETEST TO BEFORE DROP;

FLASHBACK COMPLETE.

SQL> SELECT * FROM TAB;

TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
RECYCLETEST TABLE


Voila! The table is reinstated effortlessly. If you check the recycle bin now, it will be empty.

Remember, placing tables in the recycle bin does not free up space in the original tablespace. To free the space, you need to purge the bin using:


PURGE RECYCLEBIN;


But what if you want to drop the table completely, without needing a flashback feature? In that case, you can drop it permanently using:

DROP TABLE RECYCLETEST PURGE;


This command will not rename the table to the recycle bin name; rather, it will be deleted permanently, as it would have been pre-10g.

Other Uses of Flashback Tables

Flashback Drop Table is not limited to reversing the drop of the table. Similar to flashback queries, you can also use it to reinstate the table to a different point in time, replacing the entire table with its "past" version. For example, the following statement reinstates the table to a System Change Number (SCN) 2202666520.

FLASHBACK TABLE RECYCLETEST TO SCN 2202666520;


This feature uses Oracle Data Pump technology to create a different table, uses flashback to populate the table with the versions of the data at that SCN, and then replaces the original table with the new table. To find out how far you can flashback the table, you could use the versioning feature of Oracle Database 10g.

Wednesday, February 27, 2008

Recovery Catalog

Configure the Recovery Catalog:

Let's assume that 'MARS' is the catalog database and 'EARTH' is the target database to be backed up.


1. First connect to the catalog database as SYSDBA:

CONNECT sys/sys123@mars AS SYSDBA

2. Create a tablespace to hold the repository:


CREATE TABLESPACE "RMAN"
DATAFILE 'C:\ORCL\RMAN\RMAN01.DBF' SIZE 6208K REUSE
AUTOEXTEND ON NEXT 64K MAXSIZE 32767M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

3. Create rman schema owner


CREATE USER rman IDENTIFIED BY rman123
TEMPORARY TABLESPACE temp
DEFAULT TABLESPACE rman
QUOTA UNLIMITED ON rman;

GRANT connect, resource, recovery_catalog_owner TO rman;


4. Then create the recovery catalog:

C:>rman catalog=rman/rman123@mars

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to recovery catalog database
recovery catalog is not installed

RMAN> create catalog tablespace "RMAN";

recovery catalog created

RMAN> exit

Recovery Manager complete.

C:>
5.Register Database


Each database to be backed up by RMAN must be registered:

C:>rman catalog=rman/rman123@mars target=sys/password@earth

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database:earth (DBID=1371963417)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>

Tuesday, February 26, 2008

TNS:12541 ERROR

TNS-12541 error: no listener



Solution:



1. Check the target DB's listener.ora; check HOST and PORT and ensure that both are correct.



2. Bounce the listener with lsnrctl (STOP and then START it).



3. Now tnsping the target database again.



EXAMPLE:

C:\orcl>tnsping earth


TNS Ping Utility for 32-bit Windows: Version 9.2.0.1.0 - Production on 26-FEB-2008 16:28:11
Copyright (c) 1997 Oracle Corporation. All rights reserved.

Used parameter files:
C:\orcl\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.16.12.35)(PORT = 1522))) (CONNECT_DATA = (SID = EARTH) (SERVER = DEDICATED) (SERVICE_NAME = EARTH)))

TNS-12541: TNS:no listener

C:\myorcl>lsnrctl

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 26-FEB-2008 16:24:19
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Welcome to LSNRCTL, type "help" for information.


LSNRCTL> stop



Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
The command completed successfully



LSNRCTL> start



Starting tnslsnr: please wait...

TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
System parameter file is C:\myorcl\network\admin\listener.ora
Log messages written to C:\myorcl\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1ipc)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.16.12.35)(PORT=1522)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                26-FEB-2008 16:24:33
Uptime                    0 days 0 hr. 0 min. 2 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\myorcl\network\admin\listener.ora
Listener Log File         C:\myorcl\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1ipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.16.12.35)(PORT=1522)))
Services Summary...
Service "EARTH" has 1 instance(s).
  Instance "EARTH", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully



C:\orcl>tnsping earth

TNS Ping Utility for 32-bit Windows: Version 9.2.0.1.0 - Production on 26-FEB-2008 16:35:18
Copyright (c) 1997 Oracle Corporation. All rights reserved.

Used parameter files:
C:\orcl\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.16.12.35)(PORT = 1522))) (CONNECT_DATA = (SID = EARTH) (SERVER = DEDICATED) (SERVICE_NAME = EARTH)))

OK (20 msec)

Sunday, February 24, 2008

Database House Keeping Activities

Thursday, February 14, 2008

Priority Checking - For database Hanging

Assumption: the database is running on a Unix platform.

1. Check for locks using standard scripts:
   a) Check if numerous locks exist in the database (you can use some scripts - I will try to put them in this blog soon).
   b) Look for TX locks; kill runaway processes and long-running CM sessions.
   c) If SS-enqueue / ST-enqueue waits appear, dig further into which process is creating them.
   d) If Oracle background processes are involved, try to find which one.
2. Check the services running on that node, and consider moving some of them to other nodes that are less burdened.

3. Do a top to see which process (Oracle or user) is taking the most CPU:
$ top -h
or
$ glance
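When neither top nor glance is available, a plain ps can approximate the same check; this sketch assumes a ps that supports -eo output formatting (Linux, or HP-UX with UNIX95 set):

```shell
# List the five processes currently using the most CPU,
# sorted by the pcpu column in descending order
ps -eo pcpu,pid,comm | sort -rn | head -5
```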

4. Open a SR with oracle and upload the following files:
1. alertlog
2.Hang Analyze (means at node level)
3.CRS oswatcher files
4. Systemstate Dumps
5.Processstack
6.Errorstack at level 3 / or at level 1
7.Hang analyze Global
8.Processstack for the background processes for oracle background processes.
9. AWR report (execute $ORACLE_HOME/rdbms/admin/awrrpt.sql). In 10g it is AWR; in 9i it was Statspack.

I will discuss more on each with individual sessions in coming days.

Wednesday, February 13, 2008

Advantage of RAC

Oracle Real Application Clusters allows multiple instances to access a single database. Instances running on multiple server nodes access a common set of datafiles comprising a single database.

Advantages:

1. High availability—Provide continuous uptime for database applications
2. On-demand scalability—Expand capacity by simply adding servers to your cluster
3. Lower computing costs—Use low-cost commodity hardware and reduce cost of downtime
4. High performance—Runs faster than the fastest mainframe
5. Grid computing—Oracle RAC is the foundation for grid computing

RAID

RAID is an acronym for Redundant Arrays of Inexpensive Disks. These provide a mechanism for load balancing and securing your data across multiple disks.

There are three terms related to RAID:

1. Striping: the process of breaking data into pieces and distributing them across the multiple disks that support a logical volume. It yields better performance.

2. Mirroring: the process of writing the same data simultaneously to another member of the same volume. Mirroring protects data by keeping an exact copy on the other members of the volume.

3. Parity: error-checking information computed across the striped data, which allows the contents of a failed disk to be reconstructed.

Types of RAID

1. Host-based RAID: RAID can be software based. It imposes a small overhead, as it consumes memory, I/O bandwidth, and CPU on the host where it is implemented.

2. Embedded controller-based RAID: RAID implemented by hardware, in the form of microcode in a dedicated disk controller module that connects to the host.

3. Bridge-based RAID: RAID implemented using controllers that are external to the host.

RAID Levels:
Initially RAID was a simple method of logically joining two or more disks. RAID levels usually range from 0 to 7.

RAID 0: Plain striping

RAID 1: plain mirroring

RAID 0+1 / 1+0 : combination of 0 and 1- striping and then mirroring or vice versa

RAID 2: Early implementation of striping with parity

RAID 3 : Striping with bit/byte level parity,dedicated parity disk

RAID 4: Striping with block level parity

RAID 5: Striping with block level parity, distributed parity across the number of disks in volume

RAID 6: Striping with block level multidimensional parity

RAID 7: Same as RAID 3 but with better asynchronous capability for reads and writes.

RAID S: EMC's implementation of RAID 3/5.

Auto RAID: HP's automatic RAID technology.

Tuesday, February 12, 2008

Pros and Cons of Indexing


Pros


  • It's easy to use/implement and provides immediate value.
  • It can be used to speed up existing applications without changing any of their logic or queries; many orders of magnitude of query improvement may be observed.
  • It can be used to precompute complex values without using a trigger.
  • It can be created as either a B*Tree or a bitmap index.
  • An index can be built on an arithmetic expression or an expression containing PL/SQL, package functions, C callouts, or SQL built-in functions.
  • The optimizer can estimate selectivity more accurately if the expressions are materialized in a function-based index. A range scan can be used for queries with an expression in the WHERE clause that has an index built on that expression.
  • Provides efficient linguistic collation using an NLS sort index.
  • Indexes can be created on object columns and REF columns by using methods defined for the object.
Cons

  • You cannot direct-path load a table with a function-based index if that function was user-written and requires the SQL engine. That means you cannot direct-path load into a table indexed using my_soundex(x), but you could if it had indexed upper(x).
  • It will affect the performance of inserts and updates. (Remember, you insert a row once; you query it thousands of times.)
  • If the function you are indexing is a user-written function and it returns a string, you may have to expose a view for the end users to use.


Monday, February 11, 2008

Difference Between Export and Import

Ten Best Things About Indexing

Today I want to share what I know about indexing, for beginners.

1. An index is a performance-tuning method that allows faster retrieval of records.

2. It's easy and provides immediate value, because it's just a CREATE INDEX statement.

3. It can be used to speed up existing applications without changing any of their logic or queries.

4. It can be used to supply additional functionality to applications at very little cost.

5. An index creates an entry for each value that appears in the indexed columns. By default, Oracle creates B-tree indexes.

6. Because I/Os are expensive operations on a computer system, table scans are very expensive, and reducing the amount of data read from disk reduces system overhead. An index improves performance by recording exactly where the data is on disk, avoiding costly table scans and thus reducing I/O overhead.

7. Indexes are transparent to the end-user application.

8. Oracle automatically maintains indexes during DML operations on the table, and the optimizer chooses a suitable index during a SQL SELECT; there are no additional commands you need to keep the index in sync with the table.

9. An index can allow the SQL optimizer to avoid large chunks of data blocks and narrow down the candidate blocks very quickly, taking a query from minutes to less than a second.

10. An index is one of the best features of an RDBMS and will solve many of your performance and scalability issues.

Table & Index Partition

Partition:
Decompose a table or index into smaller, more manageable pieces, called partitions.

Each partition of a table or index must have the same logical attributes, such as column names, datatypes, and constraints, but each partition can have separate physical attributes such as pctfree, pctused, and tablespaces.

Partitioning is useful for many different types of applications, particularly applications that manage large volumes of data. OLTP systems often benefit from improvements in manageability and availability, while data warehousing systems benefit from performance and manageability.
Partition Key

Each row in a partitioned table is unambiguously assigned to a single partition. The partition key is a set of one or more columns that determines the partition for each row. Oracle9i automatically directs insert, update, and delete operations to the appropriate partition through the use of the partition key.
A partition key:
Consists of an ordered list of 1 to 16 columns
Cannot contain a LEVEL, ROWID, or MLSLABEL pseudocolumn or a column of type ROWID
Can contain columns that are NULLable

Partitioning Methods
There are mainly four types of partitioning:

1.Range Partitioning
2.List Partitioning
3.Hash Partitioning
4.Composite Partitioning

You can get the partition related information form the views:
dba_tables
dba_tab_partitions

Task: Partition the Scott.emp table using SAL (range partitioning method ):

SQL> SELECT partitioned FROM user_tables WHERE table_name = 'EMP';
PAR
----
NO

SQL> CREATE TABLE emp_range
(EMPNO NUMBER(4) NOT NULL,
ENAME vARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2))
PARTITION BY RANGE (SAL)
(PARTITION emp_1000 VALUES LESS THAN('1000'),
PARTITION emp_2000 VALUES LESS THAN('2000'),
PARTITION emp_6000 VALUES LESS THAN('6000'));

SQL> insert into emp_range (select * from emp) ;
14 rows created;

SQL> drop table emp;
Table dropped.

SQL> alter table emp_range rename to emp ;
Table altered.

SQL> SELECT partitioned FROM user_tables WHERE table_name = 'EMP';
PAR
----
YES

SQL> SELECT * FROM emp;

SQl> SELECT * FROM emp PARTITION(emp_1000);

SQL> SELECT partition_name FROM dba_tab_partitions WHERE table_name = 'EMP';
PARTITION_NAME
------------------------------
EMP_1000
EMP_2000
EMP_6000

SQL> ALTER TABLE "SCOTT"."EMP" DROP PARTITION "EMP_6000";

SQL> ALTER TABLE "SCOTT"."EMP" ADD PARTITION "EMP_MAX" VALUES LESS THAN (MAXVALUE) TABLESPACE "SYSTEM" ;

SQL> SELECT partition_name FROM dba_tab_partitions WHERE table_name = 'EMP';
PARTITION_NAME
------------------------------
EMP_1000
EMP_2000
EMP_MAX

SQL> ALTER TABLE "DIP"."FOOD_RANGE" RENAME PARTITION "FOOD_4" TO "FOOD_3";



Range partition example:
CREATE TABLE sales_range
 (salesman_id NUMBER(5),
  salesman_name VARCHAR2(30),
  sales_amount NUMBER(10),
  sales_date DATE)
PARTITION BY RANGE(sales_date)
 (PARTITION sales_jan2000 VALUES LESS THAN (TO_DATE('02/01/2000','DD/MM/YYYY')),
  PARTITION sales_feb2000 VALUES LESS THAN (TO_DATE('03/01/2000','DD/MM/YYYY')),
  PARTITION sales_mar2000 VALUES LESS THAN (TO_DATE('04/01/2000','DD/MM/YYYY')),
  PARTITION sales_apr2000 VALUES LESS THAN (TO_DATE('05/01/2000','DD/MM/YYYY')));

List partition example:
CREATE TABLE sales_list
 (salesman_id NUMBER(5),
  salesman_name VARCHAR2(30),
  sales_state VARCHAR2(20),
  sales_amount NUMBER(10),
  sales_date DATE)
PARTITION BY LIST(sales_state)
 (PARTITION sales_west VALUES ('California', 'Hawaii'),
  PARTITION sales_east VALUES ('New York', 'Virginia', 'Florida'),
  PARTITION sales_central VALUES ('Texas', 'Illinois'),
  PARTITION sales_other VALUES (DEFAULT));

Hash partition example:
CREATE TABLE sales_hash
 (salesman_id NUMBER(5),
  salesman_name VARCHAR2(30),
  sales_amount NUMBER(10),
  week_no NUMBER(2))
PARTITION BY HASH(salesman_id)
PARTITIONS 4 STORE IN (data1, data2, data3, data4);

Composite partitioning range-hash example:
CREATE TABLE sales_composite
 (salesman_id NUMBER(5),
  salesman_name VARCHAR2(30),
  sales_amount NUMBER(10),
  sales_date DATE)
PARTITION BY RANGE(sales_date)
SUBPARTITION BY HASH(salesman_id)
SUBPARTITION TEMPLATE
 (SUBPARTITION sp1 TABLESPACE data1,
  SUBPARTITION sp2 TABLESPACE data2,
  SUBPARTITION sp3 TABLESPACE data3,
  SUBPARTITION sp4 TABLESPACE data4)
 (PARTITION sales_jan2000 VALUES LESS THAN (TO_DATE('02/01/2000','DD/MM/YYYY')),
  PARTITION sales_feb2000 VALUES LESS THAN (TO_DATE('03/01/2000','DD/MM/YYYY')),
  PARTITION sales_mar2000 VALUES LESS THAN (TO_DATE('04/01/2000','DD/MM/YYYY')),
  PARTITION sales_apr2000 VALUES LESS THAN (TO_DATE('05/01/2000','DD/MM/YYYY')),
  PARTITION sales_may2000 VALUES LESS THAN (TO_DATE('06/01/2000','DD/MM/YYYY')));

Composite partitioning range-list example:
CREATE TABLE bimonthly_regional_sales
 (deptno NUMBER,
  item_no VARCHAR2(20),
  txn_date DATE,
  txn_amount NUMBER,
  state VARCHAR2(2))
PARTITION BY RANGE (txn_date)
SUBPARTITION BY LIST (state)
SUBPARTITION TEMPLATE
 (SUBPARTITION east VALUES ('NY', 'VA', 'FL') TABLESPACE ts1,
  SUBPARTITION west VALUES ('CA', 'OR', 'HI') TABLESPACE ts2,
  SUBPARTITION central VALUES ('IL', 'TX', 'MO') TABLESPACE ts3)
 (PARTITION janfeb_2000 VALUES LESS THAN (TO_DATE('1-MAR-2000','DD-MON-YYYY')),
  PARTITION marapr_2000 VALUES LESS THAN (TO_DATE('1-MAY-2000','DD-MON-YYYY')),
  PARTITION mayjun_2000 VALUES LESS THAN (TO_DATE('1-JUL-2000','DD-MON-YYYY')));

Partitioned Indexes

Just like partitioned tables, partitioned indexes improve manageability, availability, performance, and scalability. They can either be partitioned independently of the table (global indexes) or automatically follow the table's partitioning method (local indexes).

Example of a Local Index Creation
CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL;

Example of a Global Index Creation
CREATE INDEX employees_global_idx ON employees(employee_id);

Example of a Global Partitioned Index Creation
CREATE INDEX employees_global_part_idx ON employees(employee_id)
GLOBAL PARTITION BY RANGE(employee_id)
(PARTITION p1 VALUES LESS THAN(5000),
PARTITION p2 VALUES LESS THAN(MAXVALUE));

How to partition a non-partitioned table

You can partition a non-partitioned table in three different ways:

A. Export/import method
B. Insert with a sub-query method
C. Partition exchange method

Any of these three methods will create a partitioned table from an existing non-partitioned table. In the examples below, EMP is a non-partitioned table.

A. Export/import method

1) Export the non-partitioned EMP table:
exp username/password tables=emp file=emp_exp.dmp log=emp_exp.log

2) Drop the table:
drop table emp;

3) Recreate EMP with partitions (the partition key, here qty, must be one of the table's columns):
create table EMP (empno number(4), name varchar2(15), qty number(5))
partition by range (qty)
(partition p1 values less than (501),
 partition p2 values less than (maxvalue));

4) Import the table with ignore=y:
imp username/password file=emp_exp.dmp ignore=y
The ignore=y causes the import to skip the table creation and load all rows into the partitioned table.

B. Insert with a subquery method

1) Create a partitioned table:
create table EMP_PART (empno number(4), name varchar2(15), qty number(5))
partition by range (qty)
(partition p1 values less than (501),
 partition p2 values less than (maxvalue));

2) Insert into the partitioned table with a subquery from the non-partitioned EMP table:
insert into EMP_PART select * from EMP;

3) If you want the partitioned table to have the same name as the original table, then drop the original table and rename the new table:
drop table EMP; alter table EMP_PART rename to EMP;

C. Partition Exchange method
ALTER TABLE ... EXCHANGE PARTITION converts a partition (or subpartition) into a non-partitioned table, and a non-partitioned table into a partition (or subpartition) of a partitioned table, by exchanging their data and index segments.

1) Create a non-partitioned staging table (CREATE TABLE dummy_t AS SELECT ...) holding the rows that belong in one partition.
2) ALTER TABLE the partitioned table to EXCHANGE that partition WITH TABLE dummy_t.

Example
-------
SQL> CREATE TABLE p_emp
  2  (sal NUMBER(7,2))
  3  PARTITION BY RANGE(sal)
  4  (partition emp_p1 VALUES LESS THAN (2000),
  5   partition emp_p2 VALUES LESS THAN (4000));
Table created.

SQL> SELECT * FROM emp;
    EMPNO ENAME      JOB         MGR HIREDATE        SAL
--------- ---------- --------- ----- --------- ---------
     7369 SMITH      CLERK      7902 17-DEC-80       800
     7499 ALLEN      SALESMAN   7698 20-FEB-81      1600
     7521 WARD       SALESMAN   7698 22-FEB-81      1250
     7566 JONES      MANAGER    7839 02-APR-81      2975
     7654 MARTIN     SALESMAN   7698 28-SEP-81      1250
     7698 BLAKE      MANAGER    7839 01-MAY-81      2850
     7782 CLARK      MANAGER    7839 09-JUN-81      2450
     7788 SCOTT      ANALYST    7566 19-APR-87      3000
     7839 KING       PRESIDENT       17-NOV-81      5000
     7844 TURNER     SALESMAN   7698 08-SEP-81      1500
     7876 ADAMS      CLERK      7788 23-MAY-87      1100
     7900 JAMES      CLERK      7698 03-DEC-81       950
     7902 FORD       ANALYST    7566 03-DEC-81      3000
     7934 MILLER     CLERK      7782 23-JAN-82      1300

14 rows selected.

SQL> CREATE TABLE dummy_y as SELECT sal FROM emp WHERE sal < 2000;

Table created.

SQL> CREATE TABLE dummy_z as SELECT sal FROM emp WHERE sal BETWEEN 2000 AND 3999;

Table created.

SQL> alter table p_emp exchange partition emp_p1 with table dummy_y;
Table altered.

SQL> alter table p_emp exchange partition emp_p2 with table dummy_z;
Table altered.
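
A quick sanity check after the exchanges (using the tables created above): the rows originally loaded into dummy_y and dummy_z should now be visible through the corresponding partitions.

```sql
-- Rows from dummy_y should now live in partition emp_p1
SELECT COUNT(*) FROM p_emp PARTITION (emp_p1);

-- Rows from dummy_z should now live in partition emp_p2
SELECT COUNT(*) FROM p_emp PARTITION (emp_p2);
```

Since EXCHANGE PARTITION only swaps segments, the staging tables now hold whatever the partitions previously contained (here, nothing).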

Saturday, February 9, 2008

What Backup policies your Company Need?

A brief overview of backup policy metrics and how you can take advantage of the latest tools and technologies for safeguarding your data.

Ten Best Recovery Scenarios I have ever faced!!!!

Today I am sharing the most difficult and challenging assignments I have experienced in my past jobs - the data recovery scenarios.

You know what, these were some of the best times of my career, and I would like to tell you about them today.

When the most important and valuable data of an organization needs to be recovered from your backup sets, even experienced DBAs can find it difficult to deliver what the company wants.

Friday, February 8, 2008

RMAN

Recovery Manager is Oracle’s utility to manage the backup, and more importantly the recovery, of the database. It eliminates operational complexity while providing superior performance and availability of the database. Recovery Manager debuted with Oracle8 to provide DBAs an integrated backup and recovery solution. Recovery Manager determines the most efficient method of executing the requested backup, restore, or recovery operation and then executes these operations in concert with the Oracle database server. Recovery Manager and the server automatically identify modifications to the structure of the database and dynamically adjust the required operation to adapt to the changes.

RMAN Configuration commands :::

RMAN> connect target sys/justchill

connected to target database: MARS (DBID=617151761)
using target database controlfile instead of recovery catalog

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;

new RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;

new RMAN configuration parameters:
CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'c:\orcl\rman\ora_cf%F';
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'c:\orcl\rman\ora_df%t_s%s_s%p';
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'c:\orcl\rman\snp123.ora' ;

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'c:\orcl\rman\ora_cf%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'c:\orcl\rman\ora_df%t_s%s_s%p';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORCL\RMAN\SNP123.ORA';


RMAN BACKUP Commands :------------

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE; ////Incremental Backups
RMAN> BACKUP DATABASE PLUS ARCHIVELOG; ////Backup the Database
RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL; /// Testing Backups
RMAN> report schema; ///list the tablespaces and datafiles of the target database
RMAN> BACKUP TABLESPACE EXAMPLE; /// backup particular tablespace
RMAN> BACKUP DATAFILE 5; /// backup particular datafile
RMAN> BACKUP DATABASE NOT BACKED UP SINCE TIME 'SYSDATE-1'; /// skip files already backed up within the last day

Example:
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

Starting backup at 08-FEB-08
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=15 devtype=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORCL\P123\VITALDATA\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORCL\P123\VITALDATA\UNDOTBS01.DBF
input datafile fno=00004 name=C:\ORCL\P123\VITALDATA\EXAMPLE01.DBF
input datafile fno=00009 name=C:\ORCL\P123\VITALDATA\XDB01.DBF
input datafile fno=00005 name=C:\ORCL\P123\VITALDATA\INDX01.DBF
input datafile fno=00008 name=C:\ORCL\P123\VITALDATA\USERS01.DBF
input datafile fno=00003 name=C:\ORCL\P123\VITALDATA\DRSYS01.DBF
input datafile fno=00006 name=C:\ORCL\P123\VITALDATA\ODM01.DBF
input datafile fno=00007 name=C:\ORCL\P123\VITALDATA\TOOLS01.DBF
input datafile fno=00010 name=C:\ORCL\P123\VITALDATA\DATA.DBF
input datafile fno=00011 name=C:\ORCL\P123\VITALDATA\TS_STH.DBF
channel ORA_DISK_1: starting piece 1 at 08-FEB-08
channel ORA_DISK_1: finished piece 1 at 08-FEB-08
piece handle=C:\ORCL\RMAN\ORA_DF646157625_S2_S1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
Finished backup at 08-FEB-08
Starting Control File and SPFILE Autobackup at 08-FEB-08
piece handle=C:\ORCL\RMAN\ORA_CFC-617151761-20080208-00 comment=NONE
Finished Control File and SPFILE Autobackup at 08-FEB-08

RMAN Administrative Commands:-------

RMAN> REPORT NEED BACKUP; ///files need to be backed up based on the configured retention policy

RMAN> REPORT OBSOLETE; /// list backups that are obsolete under the retention policy

RMAN> DELETE OBSOLETE; //// delete backups that are obsolete under the retention policy

RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL; //to check for corruption or verify that database files are in the correct location

RMAN> REPORT UNRECOVERABLE; //report any files that have not been backed up since the last NOLOGGING operation

RMAN> DELETE ARCHIVELOG UNTIL TIME 'SYSDATE-1'; //delete the archived logs that are older than one day

RMAN> CROSSCHECK BACKUP OF DATABASE;

RMAN> LIST BACKUP OF DATAFILE 4; /// view the backups for datafile 4

RMAN> LIST BACKUP OF CONTROLFILE; /// view the backups for control file

RMAN Complete Recovery :------
When you perform complete recovery, you recover the backups to the current SCN. You can either recover the whole database at once or recover individual tablespaces or datafiles. Because you do not have to open the database with the RESETLOGS option after complete recovery (as you do after incomplete recovery), you have the option of recovering some datafiles at one time and the remaining datafiles later.

1.Recovering Datafiles :::
If you know which datafile is corrupted, use the commands below.
For example, suppose DATAFILE 5 (C:\orcl\p123\vitaldata\EXAMPLE01.DBF) is corrupted:

RMAN> CONNECT TARGET SYS/justchill
RMAN> SQL 'alter tablespace example offline immediate';
RMAN> RESTORE DATAFILE 5;
RMAN> RECOVER DATAFILE 5;
RMAN> SQL 'alter tablespace example online';

2. Recovering Database :::
The database must be in MOUNT mode to restore and recover with RMAN.

RMAN> CONNECT TARGET SYS/justchill
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

3. Recovering Tablespaces :::

RMAN> CONNECT TARGET SYS/justchill
RMAN> RESTORE TABLESPACE EXAMPLE;
RMAN> RECOVER TABLESPACE EXAMPLE;
RMAN> SQL 'ALTER TABLESPACE EXAMPLE ONLINE';


RMAN InComplete Recovery :------
Incomplete recovery uses a backup to produce a noncurrent version of the database. In other words, you do not apply all of the redo records generated after the most recent backup. You usually perform incomplete recovery of the whole database in the following situations:

1. Media failure destroys some or all of the online redo logs.

2. A user error causes data loss, for example, a user inadvertently drops a table.

3. You cannot perform complete recovery because an archived redo log is missing.

4. You lose your current control file and must use a backup control file to open the database.

To perform incomplete media recovery, you must restore all datafiles from backups created prior to the time to which you want to recover and then open the database with the RESETLOGS option when recovery completes. The RESETLOGS operation creates a new incarnation of the database, in other words, a database with a new stream of log sequence numbers starting with log sequence 1.

The following commands are issued within RMAN, using the database ID (DBID) of the target database. You will receive an error message at the end of the RECOVER step due to the loss of the online redo logs.

RMAN> SET DBID 1003121815;
RMAN> CONNECT TARGET SYS/ORACLE@ORCL.WORLD;
RMAN> STARTUP NOMOUNT;
RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'C:\ORCl\RMAN\ora_cf%F';
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;


Since the online logs were lost, complete recovery is not possible. Open the database
with resetlogs to continue.

RMAN> ALTER DATABASE OPEN RESETLOGS;
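
For the user-error case (for example, an inadvertently dropped table), you can instead stop recovery at a point in time just before the mistake. A minimal sketch, assuming the database is mounted and noting that the timestamp below is purely illustrative:

```sql
RUN {
  -- hypothetical timestamp: just before the user error occurred
  SET UNTIL TIME "TO_DATE('08-FEB-08 14:30:00','DD-MON-RR HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;
```

As with any incomplete recovery, the database must be opened with RESETLOGS afterwards, so take a fresh backup as soon as possible.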


To monitor RMAN jobs, run the @monitorjob script at the SQL prompt.

Thursday, February 7, 2008

Tablespaces

A tablespace is a logical storage unit within an Oracle database. It is logical because a tablespace is not visible in the file system of the machine on which the database resides. A tablespace, in turn, consists of one or more datafiles, which are physically located in the file system of the server. Note that a datafile belongs to exactly one tablespace.

Each table, index and so on that is stored in an Oracle database belongs to a tablespace. The tablespace builds the bridge between the Oracle database and the file system in which the table's or index's data is stored.

There are three types of tablespaces in Oracle:
A.Permanent tablespaces
B.Undo tablespaces
C.Temporary tablespaces

Creating a tablespace
A tablespace is created with the create tablespace sql command.

----Permanent tablespace
create tablespace ts_something logging datafile '/dbf1/ts_sth.dbf' size 32m
autoextend on next 32m maxsize 2048m extent management local;

create tablespace data datafile '/home/oracle/databases/ora10/data.dbf' size 10M
autoextend on maxsize 200M extent management local uniform size 64K;

----Temporary tablespace
create temporary tablespace temp_mtr tempfile '/dbf1/mtr_temp01.dbf' size 32m
autoextend on next 32m maxsize 2048m extent management local;
--- Undo tablespace
create undo tablespace ts_undo datafile '/dbf/undo.dbf' size 100M;

Dropping a tablespace
Dropping a tablespace is a structural change to the database that requires a backup.
The extent sizes of the objects in the tablespace are subject to the tablespace's minimum extent size.
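
A minimal sketch of the command, reusing the ts_something tablespace created above; the INCLUDING CONTENTS AND DATAFILES clause drops the objects in the tablespace and removes the underlying datafiles from disk as well:

```sql
-- Drop the tablespace, its segments, and its datafiles in one statement
DROP TABLESPACE ts_something INCLUDING CONTENTS AND DATAFILES;
```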

Quotas on tablespaces
Users can have (space) quotas on tablespaces. This is a means to limit how much space a user uses on a tablespace. This quota can be set using alter user quota.
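
For example, assuming a user named scott and the ts_something tablespace from above (both names are just placeholders):

```sql
-- Limit scott to 100 MB of space in this tablespace
ALTER USER scott QUOTA 100M ON ts_something;

-- Or remove the limit entirely
ALTER USER scott QUOTA UNLIMITED ON ts_something;
```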

Tablespace groups
This is a feature that is available with Oracle 10g.

Assigning a tablespace to a tablespace group:

SQL> alter tablespace ts_user tablespace group ts_grp_user;


Renaming Tablespaces
This is a feature that is available with Oracle 10g and can be useful in transportable tablespace scenarios.

SQL> alter tablespace ts_user rename to ts_user_01;

The system and sysaux tablespaces cannot be renamed, though. Read-only tablespaces, however, can be renamed.
After renaming a tablespace, the control file should immediately be backed up.

The system tablespace

Every Oracle database contains a tablespace named SYSTEM that Oracle creates automatically when the database is created. The SYSTEM tablespace always contains the data dictionary tables for the entire database.
A small database might need only the SYSTEM tablespace; however, it is recommended that you create at least one additional tablespace to store user data separate from data dictionary information. This allows you more flexibility in various database administration operations and can reduce contention among dictionary objects and schema objects for the same datafiles.

Note: The SYSTEM tablespace must always be kept online

The sysaux tablespace
The sysaux tablespace is new with Oracle 10g. It is used to store database components that were stored in the system tablespace in prior releases of the database. Also, the tablespaces that were needed for RMAN's recovery catalog, for Ultra Search, for Data Mining, for XDB and for OLAP are merged into sysaux with 10g. Additionally, it is the place where the Automatic Workload Repository stores its information.

Occupants
An occupant is a set of (application) tables within the sysaux tablespace. The occupants can be viewed in the v$sysaux_occupants view.
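
For example, to see which occupants use the most space in sysaux:

```sql
SELECT occupant_name, schema_name, space_usage_kbytes
  FROM v$sysaux_occupants
 ORDER BY space_usage_kbytes DESC;
```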

Temporary tablespace
A temporary tablespace is a tablespace that can only be used for sort segments. No permanent objects can reside in a temporary tablespace. Sort segments are used when a segment is shared by multiple sort operations. One sort segment exists in every instance that performs a sort operation in a given tablespace.
Temporary tablespaces provide performance improvements when you have multiple sorts that are too large to fit into memory. The sort segment of a given temporary tablespace is created at the time of the first sort operation. The sort segment grows by allocating extents until the segment size is equal to or greater than the total storage demands of all of the active sorts running on that instance

Transportable tablespaces
Transportable tablespaces can quickly be moved across different platforms (with Oracle 10g), which allows databases to be migrated at almost the speed of a file transfer.
For example, data providers can deploy their data to customers that run Oracle on different platforms more easily. Also, data marts often run on different platforms than the data warehouse. With transportable tablespaces, the data can be copied from the data warehouse to the data mart more efficiently.

Transportable tablespaces require the transportable tablespace option of the export utility. To move a tablespace to a platform with a different endian format, its datafiles must additionally be converted with RMAN:

RMAN> convert tablespace ts_data, ts_idx to platform 'Microsoft Windows NT' format '/temp/%U';

Bigfile tablespaces
This is a feature of Oracle 10g. A bigfile tablespace contains only one datafile (or tempfile), which can be as large as 2^32 (about four billion) blocks, that is, 32 TB with an 8K block size.
create bigfile tablespace beeeg_ts datafile '/o1/dat/beeeg.dbf' size 2T;
Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment-space management (which is the default setting since Oracle 9i). There are two exceptions: locally managed undo and temporary tablespaces can be bigfile tablespaces, even though their segments are manually managed.
The system and sysaux tablespaces cannot be created as bigfile tablespaces.
Bigfile tablespaces should be used with automatic storage management (ASM), or other logical volume managers that support dynamically extensible logical volumes, striping and RAID.

Smallfile tablespaces
A smallfile tablespace is a traditional tablespace that can contain up to 1022 datafiles.

Default tablespaces
An Oracle database can be configured with a default tablespace and a default temporary tablespace. These are used for users who are not explicitly configured with a default tablespace or default temporary tablespace.
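
They can be set database-wide, for example (the users and temp tablespaces are assumed to exist):

```sql
ALTER DATABASE DEFAULT TABLESPACE users;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
```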

Displaying tablespaces
The dba_tablespaces view displays all tablespaces along with their relevant attributes.
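
A typical query against this view:

```sql
SELECT tablespace_name, status, contents, extent_management
  FROM dba_tablespaces;
```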

Space management
Oracle maintains extents for a tablespace. There are two different methods for Oracle to keep track of free and used (occupied) extents:
A. Locally managed tablespaces
B. Dictionary managed tablespaces

Locally managed tablespaces
In a locally managed tablespace, a 'bitmap' is stored within the tablespace itself. Each bit within this bitmap determines whether a corresponding extent in the tablespace is free or used. The extent sizes are either uniform or autoallocated. Local extent management results in the following benefits:

1. Improved concurrency and speed of space operations, because space allocations and deallocations predominantly modify locally managed resources (bitmaps stored in file headers) rather than requiring centrally managed resources such as enqueues

2. Improved performance, because recursive operations that are sometimes required during dictionary-managed space allocation are eliminated

3. Readable standby databases are allowed, because locally managed temporary tablespaces (used, for example, for sorts) do not generate any undo or redo

4. Simplified space allocation: when the AUTOALLOCATE clause is specified, the appropriate extent size is automatically selected

5. Reduced user reliance on the data dictionary, because the necessary information is stored in file headers and bitmap blocks

EXTENT MANAGEMENT LOCAL AUTOALLOCATE
EXTENT MANAGEMENT LOCAL UNIFORM
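
Both variants in context (the tablespace names, file paths and sizes below are only placeholders):

```sql
-- Oracle chooses the extent sizes automatically
CREATE TABLESPACE ts_auto DATAFILE '/u02/oracle/data/ts_auto01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- Every extent in the tablespace has the same size
CREATE TABLESPACE ts_uniform DATAFILE '/u02/oracle/data/ts_uniform01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```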

Segment Space Management in Locally Managed Tablespaces

A.MANUAL

Specifying MANUAL tells Oracle that you want to use free lists for managing free space within segments. Free lists are lists of data blocks that have space available for inserting rows.

MANUAL is the default.

B. AUTO

This keyword tells Oracle that you want to use bitmaps to manage the free space within segments. A bitmap, in this case, is a map that describes the status of each data block within a segment with respect to the amount of space in the block available for inserting rows. As more or less space becomes available in a data block, its new state is reflected in the bitmap. Bitmaps allow Oracle to manage free space more automatically, and thus, this form of space management is called automatic segment-space management.

CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

Dictionary managed tablespaces
For dictionary-managed tablespaces, Oracle updates the appropriate tables in the data dictionary whenever an extent is allocated or freed for reuse. Extents are allocated according to the storage parameters INITIAL, NEXT and PCTINCREASE. The information about used and free extents is stored in the dictionary.

CREATE TABLESPACE tbsb DATAFILE '/u02/oracle/data/tbsa01.dbf' SIZE 50M EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE ( INITIAL 50K NEXT 50K MINEXTENTS 2 MAXEXTENTS 50 PCTINCREASE 0);

State of tablespaces
A tablespace is either online (ready to be accessed) or offline (not accessible).
A tablespace becomes offline either
when the DBA explicitly takes the tablespace offline, or
when an error occurs while Oracle accesses a datafile within the tablespace.
Offline tablespaces cannot be transported to other databases.
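
Taking a tablespace offline and back online, sketched with the ts_something tablespace created earlier:

```sql
ALTER TABLESPACE ts_something OFFLINE NORMAL;
ALTER TABLESPACE ts_something ONLINE;
```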

Read Only Tablespaces

The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large, static portions of a database. Oracle never updates the files of a read-only tablespace, and therefore the files can reside on read-only media, such as CD ROMs or WORM drives.

Whenever you create a new tablespace, it is always created as read-write. The READ ONLY option of the ALTER TABLESPACE command allows you to change the tablespace to read-only, making all of its associated datafiles read-only as well. You can then use the READ WRITE option to make a read-only tablespace writeable again.
Read-only tablespaces cannot be modified. Therefore, they do not need repeated backup. Also, should you need to recover your database, you do not need to recover any read-only tablespaces, because they could not have been modified.
You can drop items, such as tables and indexes, from a read-only tablespace, just as you can drop items from an offline tablespace. However, you cannot create or alter objects in a read-only tablespace.
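
Switching a tablespace between the two modes, again sketched with the ts_something tablespace from above:

```sql
-- No further changes are possible; the datafiles no longer need repeated backup
ALTER TABLESPACE ts_something READ ONLY;

-- Make it writable again
ALTER TABLESPACE ts_something READ WRITE;
```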