BNFD - Your DBA NoteBook On The Net!!!

Thursday, January 31, 2008

Practice Session 1

C:\orcl>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 - Production on Fri Feb 1 02:42:58 2008
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn sys/justchill as sysdba
Connected.
SQL> select name from v$database ;

NAME
---------
DIPS

SQL> select status from v$instance ;

STATUS
------------
MOUNTED

SQL> alter database open ;

Database altered.

SQL> select status from v$instance ;

STATUS
------------
OPEN

SQL> shut immediate ;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup nomount
ORACLE instance started.

Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes

SQL> select name from v$controlfile ;
no rows selected

SQL> select * from v$log ;
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> alter database mount ;
Database altered.

SQL> select status from v$instance ;
STATUS
------------
MOUNTED

SQL> select * from v$logfile ;
GROUP# STATUS  TYPE    MEMBER
------ ------- ------- --------------------------------
     3         ONLINE  C:\ORCL\Z123DATA\Z123\REDO03.LOG
     2         ONLINE  C:\ORCL\Z123DATA\Z123\REDO02.LOG
     1         ONLINE  C:\ORCL\Z123DATA\Z123\REDO01.LOG

SQL> alter database open resetlogs ;
*
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery

SQL> recover database ;
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required

SQL> alter database open ;
Database altered.

SQL> select status from v$instance ;
STATUS
------------
OPEN

Sunday, January 27, 2008

Network Configuration

A client is any application that needs to connect to the Oracle database to send or retrieve data. An Oracle client application can reside on any machine provided it has Oracle client software installed.

Oracle Net is a software component that resides on the client and the Oracle database server. It is responsible for establishing and maintaining the connection between the client application and the server, as well as exchanging messages between them, using industry standard protocols.

For the client application and a database to communicate, the client application must specify location details for the database it wants to connect to and the database must provide some sort of identification, or address.

Oracle Net Listener (Server side) Configuration

On the database server side, the Oracle Net listener, commonly known as the listener, is a process that listens for client connection requests. It is configured in a file named listener.ora, with a protocol address that identifies the database. This address defines the protocol the listener is listening on and any other protocol specific information.
For example, the listener could be configured to listen at the following protocol address:

(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=my-server) (PORT=1521)))
This example shows a TCP/IP protocol address that specifies the host machine of the listener and a port number.

Client Configuration

When a client configured with the same protocol address sends a connection request, the listener on the server machine brokers the request and forwards it to the Oracle database.

The client uses a connect descriptor to specify the database it wants to connect to. This connect descriptor contains a protocol address and a database service name. A database can have multiple services defined, so a specific one must be specified for the connection. In the case of the preconfigured database that you installed, there is only one service, the name of which defaults to the global database name.
The following example shows a connect descriptor that enables clients to connect to a database service called mydb.us.acme.com:
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=my-server)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=mydb.us.acme.com)))

Connection Requests

Users initiate a connection request by providing a connect string. A connect string includes a username and password, along with a connect identifier. This connect identifier can be the connect descriptor itself, or a name that resolves to the connect descriptor. One of the most common connect identifiers is a net service name, a simple name for a service. The following example shows a connect string that uses net service name mydb as the connect identifier.

CONNECT scott/tiger@mydb

While the connect identifier is relatively simple as shown, it can be long and inconvenient to use if your environment configuration is more complex. With long connect identifiers, you can use a mapping method that maps the net service name to the connect descriptor. This mapping information is stored in one or more repositories of information that are accessed with naming methods.

Naming methods:

Local Naming
The local naming method stores connect descriptors, identified by their net service name, in a local (on the client) configuration file named tnsnames.ora.

Directory Naming
The directory naming method stores connect identifiers in a centralized LDAP-compliant directory server to access a database service.

Easy Connect Naming
The easy connect naming method enables clients to connect to an Oracle database server by using a TCP/IP connect string consisting of a host name and optional port and service name:

CONNECT username/password@host[:port][/service_name]

For example: CONNECT scott/tiger@my-server:1521/mydb

The easy connect naming method requires no configuration.

External Naming
The external naming method stores net service names in a supported non-Oracle naming service. These supported third-party services include:

1.Network Information Service (NIS) External Naming
2.Distributed Computing Environment (DCE) Cell Directory Services (CDS)

Tools for Network Configuration
Oracle enables you to manage your network configuration with the following tools:
A. Oracle Net Configuration Assistant
B. Enterprise Manager
C. Oracle Net Manager

A. Oracle Net Configuration Assistant
The Oracle Universal Installer launches Oracle Net Configuration Assistant after the database is installed. Oracle Net Configuration Assistant enables you to configure the listening protocol address and service information for an Oracle database.
During a typical database install (as opposed to client install), Oracle Net Configuration Assistant automatically configures a listener with a name of LISTENER that has a TCP/IP listening protocol address for the Oracle database that you install. If you do a custom install, Oracle Net Configuration Assistant prompts you to configure a listener name and protocol address of your choice.
Oracle Net Configuration Assistant is primarily a post-installation tool. Thereafter, Oracle provides other means for network configuration: Oracle Enterprise Manager and Oracle Net Manager.

B. Enterprise Manager
Enterprise Manager enables you to manage your server-side network configuration with two pages: the Listener page and the Net Services Administration page.
The Listener page displays the listener status and enables you to shut it down. You can navigate to this page from the Home page by clicking the Listener link.

C. Oracle Net Manager
You can access Oracle Net Manager from the command line or, on Windows platforms, through the Start menu.
From the command line (UNIX, Linux, or Windows), run netmgr.
On Windows choose:
Start > Programs > Oracle - home-name > Configuration and Migration Tools > Net Manager

Starting and Shutting Down the Listener

The listener runs on your database server machine and brokers incoming client requests. With Enterprise Manager, you can view the status of the listener, which is set to start automatically whenever your server machine is restarted.

listener status : lsnrctl status

listener start : lsnrctl start

listener stop : lsnrctl stop

Configuring Oracle Networking on Client Machines
Client machines need to be configured before they can connect to an Oracle database. To configure the client machine, you must first install Oracle client software, which includes Oracle Net software.
This example shows you how to configure a Windows client with local naming, which involves adding a connect descriptor entry to the local tnsnames.ora file. The connect descriptor specifies the host name, protocol address, and service name of the database to connect to.
On Windows, you can use the Oracle Net Manager program to create a new net service entry in your tnsnames.ora file as follows:

1.Start Oracle Net Manager by clicking Start->Programs->Oracle->Configuration and Migration Tools->Net Manager

2.Highlight Local Naming and click the plus (+) on the left hand side of the page. The Net Service Name Wizard appears.

3.Enter a net service name to identify it. The name you choose will appear under the Service Naming folder. Click Next.

4.Select the protocol to use to connect to your database. For example, you can accept the default TCP/IP (Internet Protocol). The database listener must be configured to use the same network protocol. Click Next.

5.Enter the host name of the database machine, such as coconuts.island.com.

6.Enter either the database service name or the database SID. In addition, select a Connection Type, either Shared or Dedicated. Shared connections require your database to be running in shared mode. If you choose Dedicated, the database dedicates a single server process to serve your connection. If you are unsure or would like to use the default database connection type, select Database Default. Click Next.

7.The last screen of the wizard enables you to test the connection using the information you entered. To do so, click Test. To log in to the database, you can use the SYS user and the password you specified when you configured your database. The wizard tells you whether the connection test succeeded or not.

8.Click Finish. The new service appears under the Service Naming folder. You can view or modify the configuration for the highlighted service under Service Identification and Address Configuration.
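Apart from the wizard's Test button, you can also check from a command prompt that the new net service name resolves and that the listener answers, using the tnsping utility. Here mydb stands for whatever net service name you created above:

C:\> tnsping mydb

If name resolution or the listener address is wrong, tnsping reports the corresponding TNS error instead of an OK response.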

Where are the SQL*Net configuration files located?

The SQL*Net configuration files (like SQLNET.ORA and TNSNAMES.ORA) can be found in one of the following locations (SQL*Net searches for them in this order):
Directory pointed to by the TNS_ADMIN parameter ($TNS_ADMIN on Unix)
/etc (Unix only)
/var/opt/oracle (Unix only)
$ORACLE_HOME/network/admin (or sometimes net8/admin directory)
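For example, on Unix you can point SQL*Net at a non-default configuration directory by setting TNS_ADMIN before starting the client (the path shown is illustrative):

$ TNS_ADMIN=/u01/app/oracle/network/admin
$ export TNS_ADMIN
$ sqlplus scott/tiger@mydb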

ORA-12154 ERROR !!!

This error message occurs when a user attempts to log on to an Oracle database. The tnsnames.ora file was not found or has a syntax error. The Oracle communications software (SQL*Net) did not recognize the database connect descriptor as being valid. For GUI utilities, this is the name entered in the third field of the ORACLE logon box.

ORA-12154 TNS:Could not resolve service name

Cause: The service name specified is not defined in the TNSNAMES.ORA file.

Action: Make the following checks and correct the error:
- Verify that a TNSNAMES.ORA file exists and is in the proper
place and accessible. See the operating system specific manual
for details on the required name and location.
- Check to see that the service name exists in one of the
TNSNAMES.ORA files and add it if necessary.
- Make sure there are no syntax errors anywhere in the file.
Particularly look for unmatched parentheses or stray characters.
Any error in a TNSNAMES.ORA file makes it unusable. If possible
regenerate the configuration files using the Oracle Network
Manager.

sample configuration files

$ORACLE_HOME/network/admin/sqlnet.ora
-------------------------------------
automatic_ipc = ON # Set to OFF for PC's
trace_level_client = OFF # Set to 16 if tracing is required
sqlnet.expire_time = 0 # Idle time in minutes
sqlnet.authentication_services = (ALL)
names.directory_lookup = (TNSNAMES,ONAMES)
names.default_domain = world
names.default_zone = world


$ORACLE_HOME/network/admin/tnsnames.ora
---------------------------------------
dbname1, aliasname1, aliasname2 =
(description =
(address_list =
(address =
(protocol = tcp)
(host = yourHost.domain)
(port = 1521)
)
)
(connect_data =
(sid = yourSID)
)
)

$ORACLE_HOME/network/admin/listener.ora
---------------------------------------
LISTENER = # Listener name is LISTENER
(address_list =
(address=
(protocol=ipc)
(key=yourSID)
)
(address=
(protocol = tcp)
(host = yourHost.domain)
(port = 1521)
)
)
STARTUP_WAIT_TIME_LISTENER = 0
CONNECT_TIMEOUT_LISTENER = 10
TRACE_LEVEL_LISTENER = ON
TRACE_FILE_LISTENER = $ORACLE_HOME/network/trace/listener.trc
SID_LIST_LISTENER =
(SID_LIST=
(SID_DESC=
(SID_NAME=yourSID)
(ORACLE_HOME=YOUR_ORACLE_HOME)
)
)

How does one check for physical corruption?

There are various methods to check for / understand physical corruption:
(a) Run the DBV utility against the datafiles; its output describes any corruption found.
DBV scans the files at the OS level. While reading the blocks it checks for fractured blocks, checksum failures, and other consistency failures. It reports corrupted blocks and also classifies the type of object each block belongs to (table/index/etc.).
LIMITATIONS WITH DBV:
o DBV cannot detect a rowcount mismatch between a table and its index.
o DBV cannot detect complex corruptions, especially corruptions below the cache layer of a data block.
o Some corrupt blocks may no longer belong to any object because the object was dropped. DBV will still report such a block as corrupted, but a query against dba_extents returns no rows for it. A block like this does not affect normal database operations, since Oracle is never going to read it, although RMAN backups will still report it as corrupted.
o Logical corruptions in undo / redo records cannot be identified.
o Some types of corruption cannot be detected by an OS-level read at all.
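For illustration, a typical DBV run and the dba_extents lookup mentioned above could look like the following; the file name, file number, and block number here are hypothetical:

C:\> dbv FILE=C:\ORCL\Z123DATA\Z123\SYSTEM01.DBF BLOCKSIZE=8192

SQL> SELECT owner, segment_name, segment_type
  2  FROM dba_extents
  3  WHERE file_id = 4
  4  AND 123 BETWEEN block_id AND block_id + blocks - 1;

If the query returns no rows, the reported block does not belong to any segment.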

(b) RMAN to validate the database files.
The following command will validate all the datafiles in the database.

RMAN> BACKUP VALIDATE DATABASE;

- If you want to validate a particular datafile, use the following command; this will validate file# 1:
RMAN> BACKUP VALIDATE DATAFILE 1;

LIMITATIONS WITH RMAN:
o All the limitations that apply to DBV are applicable to RMAN VALIDATE also.
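One convenience with RMAN: corrupt blocks found by BACKUP VALIDATE are recorded in the v$database_block_corruption view, which can then be queried from SQL*Plus:

SQL> SELECT file#, block#, blocks, corruption_type
  2  FROM v$database_block_corruption;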

(c) Export Utility / SELECT:
To check for corruption, export the affected segment / schema / tablespace, or perform a full export.
If a particular table is suspect, either perform a full table scan on it (or) a CTAS (CREATE TABLE ... AS SELECT).
If any object being exported is corrupt, the export fails with a corruption error.
If there is no space in the filesystem to hold an export, write it to /dev/null.
LIMITATIONS WITH EXPORT:
o It fails immediately after encountering the first corruption, so the complete picture is missed.
o Export performs a full table scan, so indexes are not checked. Corruption in an index, or a table/index rowcount/rowid mismatch, will not be caught by export.
o A full database export reads the data dictionary, but its successful completion does not prove that the data dictionary is free of corruption.
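For example, a full-database export discarded to /dev/null, used purely as a corruption scan (the username/password here is illustrative):

$ exp system/manager FULL=y FILE=/dev/null LOG=fullscan.log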

(d) ANALYZE command:
ANALYZE is often referred to as the utility that performs the most thorough checks.
It is the easiest way to check for corruption involving indexes in any form.

To check a table for corruption, do the following:
ANALYZE TABLE table_name VALIDATE STRUCTURE;

To check an index for corruption, do the following:
ANALYZE INDEX index_name VALIDATE STRUCTURE;

To check a table together with its index(es), performing the cross-reference checks, do the following:
ANALYZE TABLE table_name VALIDATE STRUCTURE CASCADE;

For partitioned tables, use the ANALYZE command with the INTO INVALID_ROWS option (the INVALID_ROWS table is created by the utlvalid.sql script), similar to the following:
ANALYZE TABLE table_name PARTITION (partition_name) VALIDATE STRUCTURE CASCADE INTO invalid_rows;

LIMITATIONS WITH ANALYZE:
o Not strictly a limitation, but a common problem: analyzing huge segments is difficult because every block of the segment must be scanned. The ONLINE option can be used, but the performance impact may still be unacceptable for busy segments.
o Like export, it fails when it encounters the first corruption and will not proceed after that.

Saturday, January 26, 2008

FAQ about Backup & recovery

What is the difference between restoring and recovering?
Restoring involves copying backup files from secondary storage (backup media) to disk. This can be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can roll-forward until a specific point-in-time (before the disaster occurred), or roll-forward until the last transaction recorded in the log files.

What is the difference between online and offline backups?

A hot (or on-line) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, one can only do on-line backups when the database is in ARCHIVELOG mode.
A cold (or off-line) backup is a backup performed while the database is off-line and unavailable to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG mode.
It is easier to restore from off-line backups, as no recovery (from archived logs) is required to make the database consistent. Nevertheless, on-line backups are less disruptive and don't require database downtime.
Point-in-time recovery (whether you do on-line or off-line backups) is only available when the database is in ARCHIVELOG mode.
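As a sketch, an incomplete (point-in-time) recovery after restoring the datafile backups could look like the following; the timestamp is illustrative:

SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL TIME '2008-01-26:12:55:00';
SQL> ALTER DATABASE OPEN RESETLOGS;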

What strategies are available for backing up an Oracle database?
The following methods are valid for backing-up an Oracle database:
Export/Import - Exports are "logical" database backups in that they extract logical definitions and data from the database to a file.
Cold or Off-line Backups - Shut the database down and backup up ALL data, log, and control files.
Hot or On-line Backups - If the database is available and in ARCHIVELOG mode, set the tablespaces into backup mode and backup their files. Also remember to backup the control files and archived redo log files.
RMAN Backups - While the database is off-line or on-line, use the "rman" utility to backup the database.

Regardless of strategy, take backups of all required software libraries, parameter files, password files, etc. If your database is in ARCHIVELOG mode, you also need to back up the archived log files.


If someone lost an archived/online REDO LOG file, how does one restore the DB?
The following INIT.ORA/SPFILE parameter can be used if your current redo logs are corrupted or blown away. It may also be handy if you do database recovery and one of the archived log files is missing and cannot be restored.

_allow_resetlogs_corruption = true

This should allow you to open the database. However, after using this parameter your database will be inconsistent (some committed transactions may be lost or partially applied).
Steps:
--Do a "SHUTDOWN NORMAL" of the database
--Set the above parameter
--Do a "STARTUP MOUNT" and "ALTER DATABASE OPEN RESETLOGS;"
--If the database asks for recovery, use an UNTIL CANCEL type recovery and apply all available archived and on-line redo logs, then issue CANCEL and reissue the "ALTER DATABASE OPEN RESETLOGS;" command.
--Wait a couple of minutes for Oracle to sort itself out
--Do a "SHUTDOWN NORMAL"
--Remove the above parameter!
--Do a database "STARTUP" and check your ALERT.LOG file for errors.
--Extract the data and rebuild the entire database

How does one put a database into ARCHIVELOG mode?

The main reason for running in archivelog mode is that one can provide 24-hour availability and guarantee complete data recoverability. It is also necessary to enable ARCHIVELOG mode before one can start to use on-line database backups.
Issue the following commands to put a database into ARCHIVELOG mode:

SQL> select log_mode from v$database;
LOG_MODE
------------
NOARCHIVELOG

SQL> CONNECT sys AS SYSDBA
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ARCHIVE LOG START;
SQL> ALTER DATABASE OPEN;

Alternatively, add the above commands into your database's startup command script, and bounce the database.
The following parameters need to be set in INIT.ORA for databases in ARCHIVELOG mode:

log_archive_start = TRUE
log_archive_dest_1 = 'LOCATION=/arch_dir_name'
log_archive_dest_state_1 = ENABLE
log_archive_format = %d_%t_%s.arc

NOTE 1: Remember to take a baseline database backup right after enabling archivelog mode. Without it one would not be able to recover. Also, implement an archivelog backup to prevent the archive log directory from filling-up.
NOTE 2: ARCHIVELOG mode was introduced with Oracle V6, and is essential for database point-in-time recovery. Archiving can be used in combination with on-line and off-line database backups.
NOTE 3: You may want to set the following INIT.ORA parameters when enabling ARCHIVELOG mode: log_archive_start=TRUE, log_archive_dest=..., and log_archive_format=...
NOTE 4: You can change the archive log destination of a database on-line with the ARCHIVE LOG START TO 'directory'; statement. This statement is often used to switch archiving between a set of directories.
NOTE 5: When running Oracle Real Application Clusters (RAC), you need to shut down all nodes before changing the database to ARCHIVELOG mode.


Alternatively, if you want to enable archive log mode on a single-node instance or a Real Application Clusters (RAC) database, do the following:
ALTER SYSTEM SET log_archive_start=TRUE SCOPE=spfile;
If you are getting the error "ORA-32001: write to SPFILE requested but no SPFILE specified at startup",

then you have to do below:
create spfile from pfile;
shut immediate;
startup;

and then you can issue below commands:
ALTER SYSTEM SET log_archive_start=TRUE SCOPE=spfile;
ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/archive/' SCOPE=spfile;
ALTER SYSTEM SET log_archive_format='arch_%t_%s.arc' SCOPE=spfile;

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ARCHIVE LOG START;
ALTER DATABASE OPEN;

BASICS OF DATABASE BACKUP AND RECOVERY

A backup is a representative copy of data. This copy can include important parts of a database such as the control file, redo logs, and datafiles. A backup protects data from application error and acts as a safeguard against unexpected data loss, by providing a way to restore original data. Backups are divided into physical backups and logical backups. Physical backups are copies of physical database files. The phrase "backup and recovery" usually refers to the transfer of copied files from one location to another, along with the various operations performed on these files.

In contrast, logical backups contain data that is exported using SQL commands and stored in a binary file; they are used to supplement physical backups. Restoring a physical backup means reconstructing it and making it available to the Oracle server. To recover a restored backup, data is updated using redo records from the transaction log; Oracle records both committed and uncommitted changes in its redo log buffers. The transaction log records changes made to the database after the backup was taken.

Oracle performs crash recovery and instance recovery automatically after an instance failure. In the case of media failure, a database administrator (DBA) must initiate a recovery operation. Recovering a backup involves two distinct operations: rolling the backup forward to a more recent time by applying redo data, and rolling back all changes made in uncommitted transactions to their original state. In general, recovery refers to the various operations involved in restoring, rolling forward, and rolling back a backup. Backup and recovery refers to the various strategies and operations involved in protecting the database against data loss and reconstructing the database should a loss occur.

BACKUP AND RECOVERY OPERATIONS:
A backup is a snapshot of a datafile, tablespace, or database at a certain time. If periodic backups of the database have been made and data is lost, users can apply the stored redo information to their latest backup to make the database current again. Oracle enables users to restore an older backup and apply only some redo data, thereby recovering the database to an earlier point in time. This type of recovery is called incomplete media recovery. If the backup was consistent, then users are not required to apply any redo data at all.

A simple example of media recovery illustrates the concept. Suppose a user makes a backup of the database at noon. Starting at noon, one change to the database is made every minute. At 1 p.m. one of the disk drives fails, causing the loss of all data on that disk. Fortunately, Oracle records all changes in the redo log. The user can then restore the noon backup onto a functional disk drive and use redo data to recover the database to 1 p.m., reconstructing the lost changes.

ELEMENTS OF A BACKUP AND RECOVERY STRATEGY

Although backup and recovery operations can be intricate and vary from one business to another, the basic principles follow these four simple steps:

1.Multiplex the online redo logs
2.Run the database in ARCHIVELOG mode and archive redo logs to multiple locations
3.Maintain multiple concurrent backups of the control file
4.Take frequent backups of physical datafiles and store them in a safe place, making multiple copies if possible
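As a sketch, steps 1 and 3 above can be implemented as follows; the file names are illustrative:

Step 1 - add a second member to an online redo log group:
SQL> ALTER DATABASE ADD LOGFILE MEMBER 'D:\ORCL\REDO\REDO01B.LOG' TO GROUP 1;

Step 3 - list multiple control file copies in the parameter file (copy the control file to the new location while the database is shut down):
control_files = ('C:\ORCL\CONTROL01.CTL', 'D:\ORCL\CONTROL02.CTL')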

As long as users have backups of the database and archived redo logs in safe storage, the original database can be recreated.

KEY DATA STRUCTURES FOR BACKUP AND RECOVERY

Before users begin to think seriously about a backup and recovery strategy, the physical data structures relevant for backup and recovery operations must be identified. This section discusses the following physical data structures:

Datafiles
Control Files
Online Redo Log Files
Archived Redo Log Files
Automatic Managed Undo

ARCHIVED REDO LOG FILES
Archived log files are redo logs that Oracle has filled with redo entries, rendered inactive, and copied to one or more log archive destinations. Oracle can be run in either of two modes:
ARCHIVELOG - Oracle archives the filled online redo log files before reusing them in the cycle.
NOARCHIVELOG - Oracle does not archive the filled online redo log files before reusing them in the cycle.
Running the database in ARCHIVELOG mode has the following benefits:
The database can be completely recovered from both instance and media failure.
The user can perform online backups, i.e., back up tablespaces while the database is open and available for use.
Archived redo logs can be transmitted and applied to the standby database
Oracle supports multiplexed archive logs to avoid any possible single point of failure on the archive logs.
The user has more recovery options, such as the ability to perform tablespace-point-in-time recovery (TSPITR)
Running the database in NOARCHIVELOG mode has the following consequences:
The user can only back up the database while it is completely closed after a clean shutdown.
Typically, the only media recovery option is to restore the whole database, which causes the loss of all transactions issued since the last backup.

Automatic Managed Undo
Every Oracle database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. Oracle refers to these records collectively as undo. Historically, Oracle has used rollback segments to store undo. Space management for these rollback segments has proven to be quite complex. Oracle now offers another method of storing undo that eliminates the complexities of managing rollback segment space, and allows DBAs to exert control over how long undo is retained before being overwritten. This method uses an undo tablespace. Undo records are used to:
Roll back transactions when a ROLLBACK statement is issued
Recover the database
Provide read consistency
When a rollback statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.
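As an illustration, automatic undo management is configured with an undo tablespace and parameters similar to the following; the names, path, and sizes are hypothetical, and undo_management only takes effect after a restart:

SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE 'C:\ORCL\UNDOTBS02.DBF' SIZE 200M;
SQL> ALTER SYSTEM SET undo_management = AUTO SCOPE=SPFILE;
SQL> ALTER SYSTEM SET undo_tablespace = undotbs2 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET undo_retention = 900;

Here undo_retention is the number of seconds Oracle attempts to keep committed undo before allowing it to be overwritten.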

Understanding Basic Backup
A backup strategy provides a safeguard against data loss. Answering the following questions can help database administrators develop a strong backup strategy:
What types of failures can occur?
What information should be backed up?
Which backup method should be used?
Should backups be made online or offline?
How often should backups be made?
How can dangerous backup techniques be avoided?

WHAT TYPES OF FAILURES CAN OCCUR?
Data loss can occur for various reasons. Here are some of the most common types of failures that can lead to data loss.

A statement failure is a logical failure in the handling of a statement in an Oracle program. For example, a user issues a statement that is not a valid SQL construction. When statement failure occurs, Oracle automatically undoes any effects of the statement and returns control to the user.
A process failure is a failure in a user process accessing Oracle, i.e., an abnormal disconnection or process termination. The failed user process cannot continue work, although Oracle and other user processes can. If the user process fails while modifying the database, Oracle background processes undo the effects of uncommitted transactions.

An instance failure is a problem that prevents an Oracle instance, i.e., the SGA and background processes, from continuing to function. Instance failure can result from a hardware problem such as a power outage, or a software problem such as an operating system crash. When an instance fails, Oracle does not write the data in the buffers of the SGA to the datafiles.

A user or application error is a user mistake that results in the loss of data. For example, a user can accidentally delete data from a payroll table. Such user errors can require the database or object to be recovered to a point in time before the error occurred. To allow recovery from user error and accommodate other unique recovery requirements, Oracle provides Flashback Technology.

A media failure is a physical problem that arises when Oracle tries to write or read a file that is required to operate the database. A common example is a disk head crash that causes the loss of all data on a disk drive. Disk failure can affect a variety of files, including datafiles, redo log files, and control files. Because the database instance cannot continue to function properly, it cannot write the data in the database buffers of the SGA to the datafiles.

Configuration Files

Configuration files may consist of the spfile or init.ora, the password file, tnsnames.ora, and sqlnet.ora. Since these files do not change often, they require a less frequent backup schedule. If you lose a configuration file, it can easily be recreated manually. However, when restore time is at a premium, it is faster to restore a backup of the configuration file than to manually create a file with a specific format.

WHICH BACKUP METHOD SHOULD BE USED?
Oracle provides users a choice of several basic methods for making backups. The methods include:
Recovery Manager (RMAN) - A component that establishes a connection with a server process and automates the movement of data for backup and recovery operations.
Oracle Enterprise Manager - A GUI interface that invokes Recovery Manager.
Oracle Data Pump - The utility makes logical backups by writing data from an Oracle database to operating system files in a proprietary format. This data can later be imported into a database.
User Managed - The database is backed up manually by executing commands specific to the user's operating system.


UNDERSTANDING BASIC RECOVERY STRATEGY

Basic recovery involves two parts: restoring a physical backup and then updating it with the changes made to the database since the last backup. The most important aspect of recovery is making sure all data files are consistent with respect to the same point in time. Oracle has integrity checks that prevent the user from opening the database until all data files are consistent with one another. When preparing a recovery strategy, it is critical to understand the answers to these questions:
How does recovery work?
What are the types of recovery?
Which recovery method should be used?

HOW DOES RECOVERY WORK?
In every type of recovery, Oracle sequentially applies redo data to data blocks. Oracle uses information in the control file and datafile headers to ascertain whether recovery is necessary. Recovery has two parts: rolling forward and rolling back. When Oracle rolls forward, it applies redo records to the corresponding data blocks. Oracle systematically goes through the redo log to determine which changes it needs to apply to which blocks, and then changes the blocks. For example, if a user adds a row to a table, but the server crashes before it can save the change to disk, Oracle can use the redo record for this transaction to update the data block to reflect the new row.
Once Oracle has completed the rolling forward stage, the Oracle database can be opened. The rollback phase begins after the database is open. The rollback information is stored in transaction tables. Oracle searches through the table for uncommitted transactions, undoing any that it finds. For example, if the user never committed the SQL statement that added the row, then Oracle will discover this fact in a transaction table and undo the change.

WHAT ARE THE TYPES OF RECOVERY?
There are three basic types of recovery: instance recovery, crash recovery, and media recovery. Oracle performs the first two types of recovery automatically; only media recovery requires the user to issue commands.

An instance recovery, which is only possible in an Oracle Real Application Clusters configuration, occurs in an open database when one instance discovers that another instance has crashed. A surviving instance automatically uses the redo log to recover the committed data in the database buffers that was lost when the instance failed. Oracle also undoes any transactions that were in progress on the failed instance when it crashed, and then clears any locks held by the crashed instance after recovery is complete.

A crash recovery occurs when either a single-instance database crashes or all instances of a multi-instance database crash. In crash recovery, an instance must first open the database and then execute recovery operations. In general, the first instance to open the database after a crash or SHUTDOWN ABORT automatically performs crash recovery.

Unlike crash and instance recovery, a media recovery is executed on the user's command, usually in response to media failure. In media recovery, online or archived redo logs can be used to make a restored backup current or to update it to a specific point in time. Media recovery can restore the whole database, a tablespace or a datafile and recover them to a specified time. Whenever redo logs are used or a database is recovered to some non-current time, media recovery is being performed.
A restored backup can always be used to perform the recovery. The principal division in media recovery is between complete and incomplete recovery. Complete recovery involves using redo data combined with a backup of a database, tablespace, or datafile to update it to the most current point in time. It is called complete because Oracle applies all of the redo changes to the backup. Typically, media recovery is performed after a media failure damages datafiles or the control file.

Recovery Options
If the user does not completely recover the database to the most current time, Oracle must be instructed how far to recover. The user can perform:

1. Tablespace point-in-time recovery (TSPITR), which enables users to recover one or more tablespaces to a point in time that is different from the rest of the database.

2. Time-based recovery, also called point-in-time recovery (PITR), which recovers the data up to a specified point in time.

3. Cancel-based recovery, which recovers until the CANCEL command is issued.

4. Change-based recovery or log sequence recovery. If O/S commands are used, change-based recovery recovers up to a specified SCN in the redo record.

5. Flashback recovery, which recovers from human error.
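As a sketch, these options correspond to variations of the SQL*Plus RECOVER command (the time and SCN values below are illustrative):

```sql
SQL> RECOVER DATABASE UNTIL TIME '2008-01-31:02:00:00';  -- time-based (PITR)
SQL> RECOVER DATABASE UNTIL CANCEL;                      -- cancel-based
SQL> RECOVER DATABASE UNTIL CHANGE 309121;               -- change-based (SCN)
SQL> ALTER DATABASE OPEN RESETLOGS;                      -- required after any incomplete recovery
```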

WHICH RECOVERY METHOD SHOULD BE USED?

Users have a choice between two basic methods for recovering physical files. They can:

1. Use Recovery Manager to automate recovery.
2. Execute SQL commands.

Recovering with Recovery Manager
The basic RMAN commands are RESTORE and RECOVER. RMAN can be used to restore datafiles from backup sets or image copies, either to their current location or to a new location. If any archived redo logs are required to complete the recovery operation, RMAN automatically restores and applies them. In a recovery catalog, RMAN keeps a record containing all the essential information concerning every backup ever taken. If a recovery catalog is not used, RMAN uses the control file for the necessary information. The RMAN RECOVER command can be used to perform complete media recovery and apply incremental backups, and to perform incomplete media recovery.
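A minimal RMAN session for complete recovery might look like the following sketch (it assumes usable backups exist and the database is mounted):

```sql
RMAN> CONNECT TARGET /

RMAN> RESTORE DATABASE;     -- restores datafiles from backup sets or image copies
RMAN> RECOVER DATABASE;     -- applies incremental backups and archived/online redo
RMAN> ALTER DATABASE OPEN;  -- complete recovery, so no RESETLOGS is needed
```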

Recovering with SQL*Plus
Administrators can use the SQL*Plus utility at the command line to restore files and perform media recovery on them. Users can recover a database, tablespace, or datafile. Before performing recovery, users need to:
1. Determine which files to recover. The view V$RECOVER_FILE can often be used.
2. Restore backups of files permanently damaged by media failure. If no backup exists, recovery can still be performed if the user has the necessary redo log files and the control file contains the name of the damaged file.
3. If a file cannot be restored to its original location, relocate the restored file and inform the control file of the new location.
4. Restore the necessary archived redo log files.
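Sketched in SQL*Plus, recovering a single damaged datafile while the rest of the database stays open might look like this (the file name is illustrative):

```sql
SQL> SELECT file#, online_status, error FROM v$recover_file;

SQL> ALTER DATABASE DATAFILE 'C:\ORCL\Z123DATA\Z123\USERS01.DBF' OFFLINE;
-- restore the damaged file from a backup at the operating system level, then:
SQL> RECOVER DATAFILE 'C:\ORCL\Z123DATA\Z123\USERS01.DBF';
SQL> ALTER DATABASE DATAFILE 'C:\ORCL\Z123DATA\Z123\USERS01.DBF' ONLINE;
```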

Oracle Data Guard

Oracle Data Guard is the management, monitoring, and automation software infrastructure that creates, maintains, and monitors one or more standby databases to protect enterprise data from failures, disasters, errors, and corruptions.
Data Guard maintains these standby databases as synchronized copies of the production database. These standby databases can be located at remote disaster recovery sites thousands of miles away from the production data center, or they may be located in the same city, same campus, or even in the same building.
If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus minimizing the downtime associated with the outage and preventing any data loss.
Data Guard 11g builds upon an already unique set of capabilities and redefines what users can expect from a disaster recovery solution. It can address both High Availability and Disaster Recovery requirements, and is the ideal complement to Oracle Real Application Clusters (Oracle RAC). Data Guard has the requisite knowledge of the Oracle database to reliably protect a standby database from corruptions that attempt to propagate from a primary database. It enables all standby databases, both physical and logical, to be used for productive purposes while in the standby role. Data Guard delivers:
Reliability - optimum data protection and availability. You always know the state of your standby database, and it can assume the primary role very quickly (in seconds).
Lower cost and complexity - Data Guard's mature capabilities and rich management interface are included features of Oracle Enterprise Edition.
Maximum return on investment - All standby databases can be utilized for productive purposes while in the standby role. Idle resources are eliminated.
Data Guard can be used in combination with other Oracle High Availability (HA) solutions such as Real Application Clusters (RAC), Oracle Flashback, Oracle Recovery Manager (RMAN), and new database options for Oracle Database 11g that include Oracle Active Data Guard and Oracle Advanced Compression, to provide a high level of data protection, data availability, and resource utilization that is unprecedented in the industry.

Data Guard Benefits
1. Disaster recovery and high availability: Data Guard provides an efficient and comprehensive disaster recovery and high availability solution. Automatic failover and easy-to-manage switchover capabilities allow quick role reversals between primary and standby databases, minimizing the downtime of the primary database for planned and unplanned outages.
2. Complete data protection: A standby database also provides an effective safeguard against data corruptions and user errors. Storage-level physical corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors that cause the primary database to be permanently damaged can be resolved. Finally, the redo data is validated when it is received at the standby database and again when it is applied.
3. Efficient utilization of system resources: A physical standby database can be used for backups and read-only reporting, thereby reducing the primary database workload and saving valuable CPU and I/O cycles. A physical standby database can also be easily converted back and forth between being a physical standby database and an open read/write database, without compromising data protection. A logical standby database enables read-write access to a synchronized standby database, adding local tables to the standby database that can also be updated, and/or creating additional indexes to optimize read performance.
4. Flexibility in data protection to balance availability against performance requirements: Oracle Data Guard offers the maximum protection, maximum availability, and maximum performance modes to help enterprises balance data availability against system performance requirements.
5. Protection from communication failures: If network connectivity is lost between the primary and one or more standby databases, redo data cannot be sent from the primary to those standby databases. Once connectivity is re-established, the missing redo data is automatically detected by Data Guard and the necessary archive logs are automatically transmitted to the standby databases. The standby databases are resynchronized with the primary database, with no manual intervention by the administrator.
6. Centralized and simple management: Data Guard Broker automates the management and monitoring tasks across the multiple databases in a Data Guard configuration. Administrators may use either Oracle Enterprise Manager or the Broker's own specialized command-line interface (DGMGRL) to take advantage of this integrated management framework.
7. Integrated with the Oracle database: Data Guard is available as an integrated feature of the Oracle Database (Enterprise Edition) at no extra cost.
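As a sketch of the DGMGRL interface, checking the configuration and performing a switchover might look like this (the standby database name is illustrative):

```sql
DGMGRL> CONNECT sys/justchill
Connected.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO 'dips_stby';  -- 'dips_stby' is a hypothetical standby name
```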

Locally managed tablespaces

Locally managed tablespaces have some part of the tablespace set aside for a bitmap.
* AUTOALLOCATE specifies that the tablespace is system managed. Users cannot specify an extent size.
* UNIFORM specifies that the tablespace is managed with uniform extents of SIZE bytes. The default SIZE is 1 megabyte.
NOTE: If you do not specify either AUTOALLOCATE or UNIFORM with the LOCAL parameter, then AUTOALLOCATE is the default.

Locally Managed Tablespaces
---------------------------
A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile. Each bit in the bitmap corresponds to a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables (like sys.uet$, sys.fet$) in the data dictionary (except for special cases such as tablespace quota information).
Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents. The sizes of extents that are managed locally can be determined automatically by the system. Alternatively, all extents can have the same size in a locally managed tablespace. A tablespace that manages its extents locally can have either uniform extent sizes or variable extent sizes that are determined automatically by the system. When you create the tablespace, the UNIFORM or AUTOALLOCATE (system managed) option specifies the type of allocation.
For system managed extents, Oracle determines the optimal size of extents, with a minimum extent size of 64K. This is the default for permanent tablespaces.
For uniform extents, you can specify an extent size or use the default size, which is 1 MB. Temporary tablespaces that manage their extents locally can only use this type of allocation.
In locally managed tablespaces, Oracle uses the object's storage parameters INITIAL, NEXT, PCTINCREASE, and MINEXTENTS in conjunction with the extent size specified for the tablespace to determine the object's first extent. For example, in a uniform locally managed tablespace with 2M extents, if you specify (in CREATE TABLE, etc.) an INITIAL value of 1M, NEXT 2M, and MINEXTENTS 4, then Oracle initially creates four 2M extents - to provide for 1M+2M+2M+2M=7M of initial size of the object.
One cannot create a locally managed SYSTEM tablespace in 8i. This is possible in 9.2.0.x, where the SYSTEM tablespace is created by DBCA as locally managed by default. With a locally managed SYSTEM tablespace, the rest of the tablespaces in such a database have to be locally managed as well.
Locally managed temporary tablespaces cannot be of type "permanent".
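For example, a locally managed temporary tablespace might be created like this sketch (the name and sizes are illustrative):

```sql
SQL> CREATE TEMPORARY TABLESPACE temp_local
  2  TEMPFILE 'temp_local.dbf' SIZE 10M
  3  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

Tablespace created.
```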
Examples (1)
------------
REM AUTOALLOCATE specifies that the tablespace is system managed.
REM Users cannot specify an extent size.

SQL> CREATE TABLESPACE local_uniform DATAFILE
  2  'local_u.dbf' SIZE 1M reuse
  3  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

Tablespace created.

SQL> CREATE TABLESPACE local_auto DATAFILE
  2  'local_auto.dbf' SIZE 1M reuse
  3  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Tablespace created.
SQL> select TABLESPACE_NAME, INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS,
  2  MIN_EXTLEN, EXTENT_MANAGEMENT, ALLOCATION_TYPE, PLUGGED_IN
  3  from dba_tablespaces
  4  where tablespace_name like 'LOCAL%';

TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS   MAX_EXTENTS MIN_EXTLEN EXTENT_MAN ALLOCATION PLUGG
--------------- -------------- ----------- ----------- ------------- ---------- ---------- ---------- -----
LOCAL_AUTO              65,536                       1 2,147,483,645     65,536 LOCAL      SYSTEM     NO
LOCAL_UNIFORM          131,072     131,072           1 2,147,483,645    131,072 LOCAL      UNIFORM    NO
Now let us create 2 tables in the local tablespaces:
SQL> CREATE TABLE local_table1 (
  2  DEPTNO NUMBER(3) NOT NULL,
  3  DNAME VARCHAR2(14),
  4  LOC VARCHAR2(13))
  5  storage (initial 10k next 10k)
  6  tablespace local_uniform;

Table created.

SQL> CREATE TABLE local_table2 (
  2  DEPTNO NUMBER(3) NOT NULL,
  3  DNAME VARCHAR2(14),
  4  LOC VARCHAR2(13))
  5  storage (initial 10k next 10k)
  6  tablespace local_auto;

Table created.
Look at the INITIAL and NEXT extents in the USER_TABLES:
SQL> select table_name, tablespace_name, INITIAL_EXTENT, NEXT_EXTENT
  2  from user_tables where table_name like 'LOCAL%';

TABLE_NAME                     TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT
------------------------------ --------------- -------------- -----------
LOCAL_TABLE1                   LOCAL_UNIFORM           10,240     131,072
LOCAL_TABLE2                   LOCAL_AUTO              10,240
Check the actual size of each allocated extent:
SQL> select SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, EXTENT_ID, BYTES, BLOCKS
  2  from user_extents where segment_name like 'LOCAL%';

SEGMENT_NAME    SEGMENT_TYPE    TABLESPACE_NAME EXTENT_ID        BYTES BLOCKS
--------------- --------------- --------------- --------- ------------ ------
LOCAL_TABLE1    TABLE           LOCAL_UNIFORM           0      131,072     64
LOCAL_TABLE2    TABLE           LOCAL_AUTO              0       65,536     32
Let us allocate another extent for each table:
SQL> alter table local_table1 allocate extent;

Table altered.

SQL> alter table local_table2 allocate extent;

Table altered.

SQL> select SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, EXTENT_ID, BYTES, BLOCKS
  2  from user_extents where segment_name like 'LOCAL%';

SEGMENT_NAME    SEGMENT_TYPE    TABLESPACE_NAME EXTENT_ID        BYTES BLOCKS
--------------- --------------- --------------- --------- ------------ ------
LOCAL_TABLE1    TABLE           LOCAL_UNIFORM           0      131,072     64
LOCAL_TABLE1    TABLE           LOCAL_UNIFORM           1      131,072     64
LOCAL_TABLE2    TABLE           LOCAL_AUTO              0       65,536     32
LOCAL_TABLE2    TABLE           LOCAL_AUTO              1       65,536     32
The db_block_size in this example is 2048 (2K).
Thus, the extent sizes for tables created in tablespaces LOCAL_UNIFORM and LOCAL_AUTO are 128K (131,072 bytes) and 64K (65,536 bytes), respectively.
In the next Example, note the effect of object's INITIAL, NEXT, PCTINCREASE, and MINEXTENTS storage parameters when creating tables.
Example (2)
-----------
CREATE TABLESPACE lmt_1 DATAFILE 'lmt_1.dbf' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;
Tablespace created.
create table lmt_test1 (a number)
STORAGE (initial 1m next 2m minextents 4) tablespace lmt_1;
Table created.
select EXTENT_ID, BYTES from dba_extents where SEGMENT_NAME='LMT_TEST1';
 EXTENT_ID      BYTES
---------- ----------
         0    2097152
         1    2097152
         2    2097152
         3    2097152
create table lmt_test2 (a number)
STORAGE (INITIAL 1m next 2m minextents 4 pctincrease 50) tablespace lmt_1;
Table created.
select EXTENT_ID, BYTES from dba_extents where SEGMENT_NAME='LMT_TEST2';
 EXTENT_ID      BYTES
---------- ----------
         0    2097152
         1    2097152
         2    2097152
         3    2097152
         4    2097152
         5    2097152
create table lmt_test3 (a number)
STORAGE (INITIAL 1m next 2m minextents 4 pctincrease 90) tablespace lmt_1;
Table created.
select EXTENT_ID, BYTES from dba_extents where SEGMENT_NAME='LMT_TEST3';
 EXTENT_ID      BYTES
---------- ----------
         0    2097152
         1    2097152
         2    2097152
         3    2097152
         4    2097152
         5    2097152
         6    2097152
         7    2097152
As we can see, the table's storage parameters are taken into consideration when initially calculating the size of each table (in terms of the number of allocated tablespace extents). After that, each allocated extent will be the tablespace's extent size, or multiple extents if a table extent is allocated via "alter table allocate extent (size XXX)".

Advantages of Locally Managed Tablespaces
-----------------------------------------
1. Better space management due to:
   - uniform extent sizes;
   - reduced data dictionary access.
2. Reduced tablespace fragmentation.
3. Better management of temporary space.

ORA-1555 error

The ORA-1555 error means that a consistent get on a particular database block has failed.

When a transaction or query begins, the current SCN is recorded. That SCN serves as the snapshot SCN for the query or transaction. This term is derived from the requirement that the transaction or query must see a consistent snapshot of the database at that time.
Every block used to select rows for the query or transaction must reflect the state of the database at the snapshot SCN. This applies to the selection of rows to be updated or deleted, as much as it does to the selection of rows for a query. If a block has to be changed, then those changes will be applied to the current version of that block. However, the selection of the rows to be changed must be based on a version of the block consistent with the snapshot SCN. The temporary reconstruction of a version of the block consistent with the snapshot SCN is called a consistent get.
This error can be the result of there being insufficient rollback segments. A query may not be able to create the snapshot because the rollback data is not available. This can happen when there are many transactions that are modifying data, and performing commits and rollbacks. Rollback data is overwritten when the rollback segments are too small for the size and number of changes that are being performed.
To correct this problem, make more and larger rollback segments available, so that rollback data for completed transactions is kept longer.

This error can be the result of programs not closing cursors after repeated FETCH and UPDATE statements. To correct this problem, make sure that you are closing cursors when you no longer require them.

This error can occur if a FETCH is executed after a COMMIT is issued. The rollback records created since the last CLOSE of your cursor can fill the rollback segments, at which point earlier records begin to be overwritten and the open cursor can no longer build a read-consistent view.
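The fetch-across-commit pattern behind this case can be sketched in PL/SQL (the table and column names are hypothetical):

```sql
DECLARE
  CURSOR c IS SELECT empno FROM emp;  -- hypothetical table
BEGIN
  FOR r IN c LOOP
    UPDATE emp SET sal = sal * 1.1 WHERE empno = r.empno;
    COMMIT;  -- committing inside the fetch loop releases undo that the
             -- still-open cursor may need for consistent gets,
             -- risking ORA-1555 on a later fetch
  END LOOP;
END;
/
```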

Tuesday, January 22, 2008

Two system privileges: SYSDBA & SYSOPER

SYSDBA and SYSOPER
Administrative privileges that are required for an administrator to perform basic database operations are granted through two special system privileges, SYSDBA and SYSOPER. You must have one of these privileges granted to you, depending upon the level of authorization you require.
The following are the operations that are authorized by the SYSDBA and SYSOPER system privileges:
SYSDBA authorizes the following operations:

  1. STARTUP and SHUTDOWN operations
  2. ALTER DATABASE: open, mount, back up, or change character set
  3. CREATE DATABASE
  4. CREATE SPFILE
  5. ARCHIVELOG and RECOVERY
  6. Includes the RESTRICTED SESSION privilege
Effectively, this system privilege allows a user to connect as user SYS.

SYSOPER authorizes the following operations:
  1. STARTUP and SHUTDOWN operations
  2. CREATE SPFILE
  3. ALTER DATABASE OPEN/MOUNT/BACKUP
  4. ARCHIVELOG and RECOVERY
  5. Includes the RESTRICTED SESSION privilege
This privilege allows a user to perform basic operational tasks, but without the ability to look at user data.
The manner in which you are authorized to use these privileges depends upon the method of authentication that you choose to use.
When you connect with SYSDBA or SYSOPER privileges, you connect with a default schema, not with the schema that is generally associated with your username. For SYSDBA this schema is SYS; for SYSOPER the schema is PUBLIC.
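This is easy to verify from SQL*Plus (the username and password are illustrative):

```sql
SQL> CONNECT scott/tiger AS SYSDBA
Connected.
SQL> SHOW USER
USER is "SYS"
```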

SYS and SYSTEM DBA role

SYS
When any database is created, the user SYS is automatically created and granted the DBA role.

All of the base tables and views for the database's data dictionary are stored in the schema SYS. These base tables and views are critical for the operation of Oracle. To maintain the integrity of the data dictionary, tables in the SYS schema are manipulated only by Oracle. They should never be modified by any user or database administrator, and no one should create any tables in the schema of user SYS. (However, you can change the storage parameters of the data dictionary settings if necessary.)

Ensure that most database users are never able to connect using the SYS account.

SYSTEM
When a database is created, the user SYSTEM is also automatically created and granted the DBA role.

The SYSTEM username is used to create additional tables and views that display administrative information, and internal tables and views used by various Oracle options and tools. Never create in the SYSTEM schema tables of interest to individual users.

The DBA Role
A predefined role, named DBA, is automatically created with every Oracle database. This role contains most database system privileges. Therefore, it is very powerful and should be granted only to fully functional database administrators.

The DBA role does not include the SYSDBA or SYSOPER system privileges. These are special administrative privileges that allow an administrator to perform basic database administration tasks, such as creating the database and performing instance startup and shutdown.