When you are given the user name and password of a new database in a new environment, the basic things to check are listed below.

Go to $ORACLE_HOME/dbs and look at the parameter file (PFILE or SPFILE).

$ uname -a
$ echo $ORACLE_HOME
$ echo $ORACLE_SID
$ ps -ef | grep pmon          --- lists all running database instances (one PMON process per instance)
user02  319658        1  0   Feb 24      -  3:37 ora_pmon_sun
user02  663636        1  0   Feb 28      -  3:52 ora_pmon_MOON
user02 2265204  1224816  0 12:56:55  pts/0  0:00 grep pmon
$ env                         --- to see all the environment variables
$sqlplus "/as sysdba"
SQL> select name from v$database;
NAME
---------
RT02
SQL> select status from v$instance;
STATUS
------------
OPEN
SQL> show parameter control
NAME                             TYPE        VALUE
-------------------------------- ----------- ------------------------------
control_file_record_keep_time    integer     7
control_files                    string      /apps/user02/oracle/admin/cntrl_rt02.dbf, /apps/user02/oracle/admin/cnrtl_rt02.dbf, /apps/user02/oracle/admin/cntrl_rt03.dbf
SQL> show parameter dump

NAME                      TYPE        VALUE
------------------------- ----------- ------------------------------
background_core_dump      string      partial
background_dump_dest      string      /apps/user02/oracle/admin/bdump
core_dump_dest            string      /apps/user02/oracle/admin/cdump
max_dump_file_size        string      UNLIMITED
shadow_core_dump          string      partial
user_dump_dest            string      /apps/user02/oracle/admin/udump
SQL> select * from v$log;
SQL> select * from v$logfile;
SQL> show parameter undo
NAME TYPE VALUE
undo_management string AUTO
undo_retention integer 3600
undo_tablespace string undo_ts
SQL> select file_name,sum(bytes)/1024/1024 from dba_data_files where tablespace_name='UNDO_TS' group by file_name;
SQL> col FILE_NAME for a60
SQL>set lines 200
SQL> /
SQL> select FILE_NAME, TABLESPACE_NAME,(bytes)/1024/1024 from dba_data_files;
select tablespace_name,sum(bytes)/1024/1024 from dba_free_space where tablespace_name='SYSTEM' group by tablespace_name;
SQL> select AUTOEXTENSIBLE,FILE_NAME,TABLESPACE_NAME from dba_data_files where tablespace_name='&TSNAME';
Enter value for tsname: RETEK_DATA
=========
Examples -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp01.dbf' SIZE 8000M AUTOEXTEND OFF;
Added a new file -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data03/temp02.dbf' SIZE 8000M AUTOEXTEND OFF;
Reused an existing tempfile which had come from a clone -
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/apps/rt02/oradata/data01/temp02.dbf' SIZE 16000M REUSE AUTOEXTEND OFF;
Sunday, March 29, 2009
Basic checks for a new Database
Posted by Bis at 2:51 PM | 0 comments | Labels: Fundamentals
Tuesday, March 18, 2008
When Does DBWR Write?
When Does Database Writer Write?
The DBWn background process writes to the datafiles whenever one of the following events occurs:
A user’s Server Process has searched too long for a free buffer when reading a buffer into
the Buffer Cache.
The number of modified and committed, but unwritten, buffers in the Database Buffer
Cache is too large.
At a database Checkpoint event. See Chapters 10 and 11 for information on checkpoints.
The instance is shut down using any method other than a shutdown abort.
A tablespace is placed into backup mode.
A tablespace is taken offline to make it unavailable or changed to READ ONLY.
A segment is dropped.
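As a quick illustrative check (an addition to the original post, not part of it), you can estimate how many dirty (modified but not yet written) buffers are currently in the Buffer Cache, and see the DBWR-related statistics, from the dynamic views:

SQL> select count(*) from v$bh where dirty = 'Y';
SQL> select name, value from v$sysstat where name like 'DBWR%';

The exact thresholds that trigger DBWn are internal, so these queries only show the symptoms (dirty buffer counts and write activity), not the trigger points themselves.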
Posted by Bis at 10:44 AM | 1 comments | Labels: Fundamentals
When Does LGWR Write?
When Does Log Writer Write?
The LGWR background process writes to the current redo log group whenever one of the
following database events occurs:
Every three seconds.
A user commits a transaction.
The Redo Log Buffer is one-third full.
The Redo Log Buffer contains 1MB worth of redo information.
Before the DBWn process whenever a database checkpoint occurs. See Chapter 10 for more
information on checkpoints.
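To watch LGWR at work (an illustrative sketch added here, not from the original text), compare the redo statistics before and after forcing a log switch:

SQL> select name, value from v$sysstat where name in ('redo entries','redo writes');
SQL> alter system switch logfile;
SQL> select name, value from v$sysstat where name in ('redo entries','redo writes');

The 'redo writes' statistic counts the number of times LGWR has written the Redo Log Buffer to the online redo logs.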
Posted by Bis at 10:43 AM | 1 comments | Labels: Fundamentals
Oracle Physical Files
Control files: Locations of other physical files, the database name, database block size, database character set, and recovery information. These files are required to open the database.
Datafiles: All application data and internal metadata.
Redo log files: Record of all changes made to the database; used for recovery.
Parameter file (PFILE or SPFILE): Configuration parameters for the SGA, optional Oracle features, and background processes.
Archived log files: Copies of the contents of previous online redo logs, used for recovery.
Password file (optional): Used to store the names of users who have been granted the SYSDBA and SYSOPER privileges.
Control Files
Control files are critical components of the database because they store important information
that is not available anywhere else. This information includes the following:
1.The name of the database
2.The names, locations, and sizes of the datafiles and redo log files
3.Information used to recover the database in the case of a disk failure or user error
The control files are created when the database is created in the locations specified in the
control_files parameter in the parameter file. Because loss of the control files negatively
impacts the ability to recover the database, most production databases multiplex their control files to multiple locations. Oracle uses the CKPT background process to automatically update each of these files as needed, keeping the contents of all copies of the control file synchronized. You can use the dynamic performance view V$CONTROLFILE to display the names and locations of all the database's control files.
A sample query of V$CONTROLFILE on a Unix system is shown here:
SQL> select name from v$controlfile;
NAME
------------------------------------
/orcl/MARS/control/control01.ctl
/orcl/MARS/control/control02.ctl
/orcl/MARS/control/control03.ctl
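If you want to add another multiplexed copy, a common approach (sketched here; the fourth path is a hypothetical example) is to change the CONTROL_FILES parameter in the SPFILE, shut down cleanly, copy an existing control file to the new location at the OS level, and start up again:

SQL> alter system set control_files=
  2  '/orcl/MARS/control/control01.ctl',
  3  '/orcl/MARS/control/control02.ctl',
  4  '/orcl/MARS/control/control03.ctl',
  5  '/u02/MARS/control/control04.ctl' scope=spfile;
SQL> shutdown immediate
$ cp /orcl/MARS/control/control01.ctl /u02/MARS/control/control04.ctl
SQL> startup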
One thing that the control files keep track of is the names, locations, and sizes of the database datafiles. Datafiles, and their relationship to another database structure called a tablespace, are covered next.
Datafiles
Datafiles are the physical files that actually store the data that has been inserted into each table in the database. The size of the datafiles is directly related to the amount of table data that they store. Datafiles are the physical structure behind another database storage area called a tablespace. A tablespace is a logical storage area within the database. Tablespaces group logically related segments.
Common tablespaces are: SYSTEM, SYSAUX, TEMP, TOOLS, USER, UNDOTBS1
SQL> select tablespace_name from dba_tablespaces order by tablespace_name;
TABLESPACE_NAME
------------------------------
APPL_IDX
APPL_TAB
EXAMPLE
SYSAUX
SYSTEM
TEMP
UNDOTBS1
7 rows selected.
Or you can try the following SQL command for more information:
SQL> select tablespace_name, file_name from dba_data_files order by tablespace_name;
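For example (a sketch with hypothetical names and paths, added to illustrate the relationship), a new tablespace is created with its first datafile and later grown by adding another one:

SQL> create tablespace appl_hist datafile '/orcl/MARS/data/appl_hist01.dbf' size 500M;
SQL> alter tablespace appl_hist add datafile '/orcl/MARS/data/appl_hist02.dbf' size 500M;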
Redo Log Files
Whenever a user performs a transaction in the database, the information needed to reproduce
this transaction in the event of a database failure is automatically recorded in the Redo Log
Buffer. The contents of the Redo Log Buffer are ultimately written to the redo logs by the
LGWR background process. Because of the important role that redo logs play in Oracle's recovery mechanism, they are usually multiplexed, or copied. This means that each redo log group contains one or more identical copies of its redo log file, in case one copy becomes corrupt or is lost due to a hardware failure. Collectively, these sets of redo logs are referred to as redo log groups, and each multiplexed file within a group is called a redo log group member. Oracle automatically writes to all members of a redo log group to keep the files in sync. Each redo log group must be composed of one or more members, and each database must have a minimum
of two redo log groups because redo logs are used in a circular fashion.
SQL> select * from v$logfile ;
SQL> select * from v$log ;
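To add another multiplexed redo log group, for example (an illustrative sketch; the file names and size are hypothetical):

SQL> alter database add logfile group 4
  2  ('/orcl/MARS/redo/redo04a.log', '/orcl/MARS/redo/redo04b.log') size 100M;

Both members of group 4 receive exactly the same redo, which is what keeps the group usable if one member is lost.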
Posted by Bis at 9:54 AM | 1 comments | Labels: Fundamentals
5 mandatory Oracle Background Processes
System Monitor SMON: Performs instance recovery following an instance crash, coalesces
free space in the database, and manages space used for sorting
Process Monitor PMON: Cleans up failed user database connections
Database Writer DBWn : Writes modified database blocks from the SGA’s Database Buffer Cache to the datafiles on disk
Log Writer LGWR : Writes transaction recovery information from the SGA’s Redo Log Buffer
to the online Redo Log files on disk
Checkpoint CKPT : Updates the database files following a Checkpoint Event
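Before looking at the OS level, you can list the background processes from inside the database with V$BGPROCESS (an added example; only processes with a non-null PADDR are actually started):

SQL> select name, description from v$bgprocess where paddr <> '00' order by name;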
On Unix systems, you can view these background processes from the operating system using
the ps command, as shown here:
$ ps -ef | grep MARS
oracle 3969 1 0 10:02 ? 00:00:05 ora_pmon_MARS
oracle 3971 1 0 10:02 ? 00:00:00 ora_mman_MARS
oracle 3973 1 0 10:02 ? 00:00:07 ora_dbw0_MARS
oracle 3975 1 0 10:02 ? 00:00:07 ora_lgwr_MARS
oracle 3977 1 0 10:02 ? 00:00:10 ora_ckpt_MARS
oracle 3979 1 0 10:02 ? 00:00:20 ora_smon_MARS
oracle 3981 1 0 10:02 ? 00:00:00 ora_reco_MARS
oracle 3983 1 0 10:02 ? 00:00:09 ora_cjq0_MARS
oracle 3985 1 0 10:02 ? 00:00:00 ora_d000_MARS
oracle 3987 1 0 10:02 ? 00:00:00 ora_s000_MARS
oracle 4052 1 0 10:02 ? 00:00:00 ora_qmnc_MARS
oracle 4054 1 0 10:02 ? 00:00:29 ora_mmon_MARS
oracle 4057 1 0 10:02 ? 00:00:08 ora_mmnl_MARS
oracle 4059 1 0 10:02 ? 00:01:04 ora_j000_MARS
oracle 27544 1 0 20:29 ? 00:00:00 ora_q000_MARS
This output shows that 15 background processes are running on the Unix server for the MARS
database.
Posted by Bis at 9:35 AM | 0 comments | Labels: Fundamentals
Parameters in PFILE/SPFILE
Many initialization parameters are used to specify the size of the SGA and its components. Any parameters not specified in the PFILE or SPFILE take on their default values. The following is an example of the contents of a typical Unix Oracle 10g PFILE that contains both basic and advanced parameters:
db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_name=MARS
background_dump_dest=/u01/app/oracle/admin/MARS/bdump
core_dump_dest=/oracle/admin/MARS/cdump
user_dump_dest=/oracle/admin/MARS/udump
control_files=(/vitaldata/c1/MARS/control01.ctl, /vitaldata/c2/MARS/control02.ctl,
/vitaldata/c3/MARS/control03.ctl)
db_recovery_file_dest=/oracle/flash_recovery_area/
db_recovery_file_dest_size=2147483648
job_queue_processes=10
compatible=10.1.0.2.0
sga_target=500M
sga_max_size=800M
processes=250
remote_login_passwordfile=EXCLUSIVE
pga_aggregate_target=25165824
sort_area_size=65536
undo_management=AUTO
undo_tablespace=UNDOTBS1
SQL> select * from V$SGA;
NAME VALUE
-------------------- ----------
Fixed Size 787988
Variable Size 145750508
Database Buffers 25165824
Redo Buffers 262144
SQL> select component,current_size from v$sga_dynamic_components;
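To check whether the instance was started with an SPFILE, change a dynamic parameter, or back the SPFILE up as a PFILE, something like the following can be used (the new SGA_TARGET value and the PFILE path are only illustrative):

SQL> show parameter spfile
SQL> alter system set sga_target=600M scope=both;
SQL> create pfile='/tmp/initMARS.ora' from spfile;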
Posted by Bis at 9:26 AM | 0 comments | Labels: Fundamentals
Monday, March 17, 2008
Data Dictionary Views vs. Dynamic Performance Views
Dictionary Views vs. Dynamic Performance Views:
1. The DBA_ views usually have plural names (for example, DBA_DATA_FILES); the names of the V$ views are generally singular (for example, V$DATAFILE).
2. The DBA_ views are available only when the database is open and running; some V$ views are available even when the database is not fully open and running.
3. The data contained in the DBA_ views is generally uppercase; the data contained in the V$ views is usually lowercase.
4. The data contained in the DBA_ views is static and is not cleared when the database is shut down; the V$ views contain dynamic statistical data that is lost each time the database is shut down.
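As a quick illustration of the naming and availability difference (an added example), the same datafile information can be pulled from both sides; the DBA_ view needs an open database, while the V$ view works in MOUNT state as well:

SQL> select file_name from dba_data_files;
SQL> select name from v$datafile;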
Posted by Bis at 11:56 AM | 5 comments | Labels: Fundamentals
Thursday, February 7, 2008
About RESETLOGS Option
The RESETLOGS option is always required after incomplete media recovery or recovery using a backup control file. Resetting the redo log does the following:
1. Archives the current online redo logs (if they are accessible) and then erases the contents of the online redo logs and resets the log sequence number to 1. For example, if the current online redo logs are sequence 1000 and 1001 when you open RESETLOGS, then the database archives logs 1000 and 1001 and then resets the online logs to sequence 1 and 2.
2. Creates the online redo log files if they do not currently exist.
3. Reinitializes the control file metadata about online redo logs and redo threads.
4. Updates all current datafiles and online redo logs and all subsequent archived redo logs with a new RESETLOGS SCN and time stamp.
Because the database will not apply an archived log to a datafile unless the RESETLOGS SCN and time stamps match, the RESETLOGS prevents you from corrupting datafiles with archived logs that are not from direct parent incarnations of the current incarnation.
In prior releases, it was recommended that you back up the database immediately after the RESETLOGS. Because you can now easily recover a pre-RESETLOGS backup like any other backup, making a new database backup is optional. In order to perform recovery through resetlogs you must have all archived logs generated since the last backup and at least one control file (current, backup, or created).
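A typical incomplete-recovery sequence and a check of the resulting incarnation look like this (an illustrative sketch; your incarnation numbers and SCNs will differ):

SQL> recover database until cancel;
SQL> alter database open resetlogs;
SQL> select incarnation#, resetlogs_change#, status from v$database_incarnation;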
Posted by Bis at 2:19 PM | 0 comments | Labels: Fundamentals
Friday, February 1, 2008
Change the Database Name
In some situations we need to change the database name, for example when we clone from a production server to a development server and rename the database at the development site.
Let me give you an example: my company's production database is BNFD. At the development site I have restored a copy of the database and it is still named BNFD, but we want to rename it; in the walkthrough below the new name is CAT.
Theoretical steps:
1. Check the database name.
2. Take a trace of the control file.
3. Edit the trace and change the database name (this becomes the create-controlfile script).
4. Change the db_name and instance_name in the init.ora file.
5. Move or remove the existing control file copies from the multiplexed locations.
6. At NOMOUNT state, create the control file.
7. Open the database and check the database name to confirm.
SQL> select name from v$database;
NAME
---------
BNFD
SQL> show parameters dump_dest ;
NAME TYPE VALUE
--------- -------- -----------
background_dump_dest string C:\orcl\admin\BNFD\bdump
core_dump_dest string C:\orcl\admin\BNFD\cdump
user_dump_dest string C:\orcl\admin\BNFD\udump
Step 1: Take a Backup trace of Control file
SQL> alter database backup controlfile to trace;
This will create a trace of the control file in UDUMP.
The instance needs to be shut down. SYSDBA privileges will be needed.
SQL> shutdown immediate ;
STEP 2: Modify (and optionally rename) the created trace file:
1. Find the line reading # Set #2. RESETLOGS case.
2. Remove all lines above this line.
3. Change the line containing the database name from
   CREATE CONTROLFILE REUSE DATABASE "ORA9" RESETLOGS NOARCHIVELOG
   to
   CREATE CONTROLFILE SET DATABASE "CAT" RESETLOGS NOARCHIVELOG
   Note: in my case the database is running in NOARCHIVELOG mode; the corresponding line reads ARCHIVELOG otherwise.
4. Remove the line reading RECOVER DATABASE USING BACKUP CONTROLFILE.
5. Remove the lines starting with #.
It then looks something like this:
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "CAT" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 454
LOGFILE
GROUP 1 '/home/oracle/databases/cat/redo1.ora' SIZE 100M,
GROUP 2 '/home/oracle/databases/cat/redo2.ora' SIZE 100M,
GROUP 3 '/home/oracle/databases/cat/redo3.ora' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
'/home/oracle/databases/cat/system.dbf',
'/home/oracle/databases/cat/undo.dbf',
'/home/oracle/databases/cat/data.dbf'
CHARACTER SET WE8ISO8859P1
;
ALTER DATABASE OPEN RESETLOGS;
ALTER TABLESPACE TEMP ADD TEMPFILE '/home/oracle/OraHome1/databases/ora9/temp.dbf'
SIZE 104857600 REUSE AUTOEXTEND OFF;
In my case I renamed the file to newcontrol123.sql.
STEP 3: Move all the previous control files to another location
Move the control files away so that they can be re-created:
$ mv ctl_1.ora ctl_1.ora.moved
$ mv ctl_2.ora ctl_2.ora.moved
$ mv ctl_3.ora ctl_3.ora.moved
STEP 4: Change the database name in the init.ora file
The database name must be entered (changed) in the initSID.ora:
initcat.ora
db_name = CAT
Step 5: Connect as SYSDBA and run the new controlfile script
sqlplus "/ as sysdba"
SQL> @/tmp/newcontrol
SQL> startup open resetlogs ;
SQL> select name from v$database;
NAME
---------
CAT
Posted by Bis at 12:09 AM | 1 comments | Labels: Fundamentals
Thursday, January 31, 2008
Practice Session 1
C:\orcl>sqlplus /nolog
SQL*Plus: Release 9.2.0.1.0 - Production on Fri Feb 1 02:42:58 2008
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SQL> conn sys/justchill as sysdba
Connected.
SQL> select name from v$database ;
NAME
---------
DIPS
SQL> select status from v$instance ;
STATUS
------------
MOUNTED
SQL> alter database open ;
Database altered.
SQL> select status from v$instance ;
STATUS
------------
OPEN
SQL> shut immediate ;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes
SQL> select name from v$controlfile ;
no rows selected
SQL> select * from v$log ;
ERROR at line 1:
ORA-01507: database not mounted
SQL> alter database mount ;
Database altered.
SQL> select status from v$instance ;
STATUS
------------
MOUNTED
SQL> select * from v$logfile ;
GROUP# STATUS  TYPE    MEMBER
------ ------- ------- --------------------------------
     3         ONLINE  C:\ORCL\Z123DATA\Z123\REDO03.LOG
     2         ONLINE  C:\ORCL\Z123DATA\Z123\REDO02.LOG
     1         ONLINE  C:\ORCL\Z123DATA\Z123\REDO01.LOG
SQL> alter database open resetlogs ;
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery
SQL> recover database ;
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required
SQL> alter database open ;
Database altered.
SQL> select status from v$instance ;
STATUS
------------
OPEN
Posted by Bis at 11:30 PM | 2 comments | Labels: Fundamentals
Saturday, January 26, 2008
How does one put a database into ARCHIVELOG mode?
The main reason for running in archivelog mode is that one can provide 24-hour availability and guarantee complete data recoverability. It is also necessary to enable ARCHIVELOG mode before one can start to use on-line database backups.
Issue the following commands to put a database into ARCHIVELOG mode:
SQL> select log_mode from v$database;
LOG_MODE
------------
NOARCHIVELOG
SQL> CONNECT sys AS SYSDBA
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ARCHIVE LOG START;
SQL> ALTER DATABASE OPEN;
Alternatively, add the above commands into your database's startup command script, and bounce the database.
The following parameters need to be set in INIT.ORA for databases in ARCHIVELOG mode:
log_archive_start = TRUE
log_archive_dest_1 = 'LOCATION=/arch_dir_name'
log_archive_dest_state_1 = ENABLE
log_archive_format = %d_%t_%s.arc
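After the restart you can verify the new mode from SQL*Plus (added here as a quick check; the archive destination shown will be whatever you configured):

SQL> archive log list
SQL> select log_mode from v$database;

The LOG_MODE column should now show ARCHIVELOG.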
NOTE 1: Remember to take a baseline database backup right after enabling ARCHIVELOG mode. Without it one would not be able to recover. Also, implement an archive log backup to prevent the archive log directory from filling up.
NOTE 2: ARCHIVELOG mode was introduced with Oracle V6, and is essential for database point-in-time recovery. Archiving can be used in combination with on-line and off-line database backups.
NOTE 3: You may want to set the following INIT.ORA parameters when enabling ARCHIVELOG mode: log_archive_start=TRUE, log_archive_dest=..., and log_archive_format=...
NOTE 4: You can change the archive log destination of a database on-line with the ARCHIVE LOG START TO 'directory'; statement. This statement is often used to switch archiving between a set of directories.
NOTE 5: When running Oracle Real Application Clusters (RAC), you need to shut down all nodes before changing the database to ARCHIVELOG mode.
Posted by Bis at 7:51 PM | 1 comments | Labels: Fundamentals
ORA-1555 error
The ORA-1555 error means that a consistent get on a particular database block has failed.
When a transaction or query begins, the current SCN is recorded. That SCN serves as the snapshot SCN for the query or transaction. This term is derived from the requirement that the transaction or query must see a consistent snapshot of the database at that time.
Every block used to select rows for the query or transaction must reflect the state of the database at the snapshot SCN. This applies to the selection of rows to be updated or deleted, as much as it does to the selection of rows for a query. If a block has to be changed, then those changes will be applied to the current version of that block. However, the selection of the rows to be changed must be based on a version of the block consistent with the snapshot SCN. The temporary reconstruction of a version of the block consistent with the snapshot SCN is called a consistent get.
This error can be the result of there being insufficient rollback segments. A query may not be able to create the snapshot because the rollback data is not available. This can happen when there are many transactions that are modifying data, and performing commits and rollbacks. Rollback data is overwritten when the rollback segments are too small for the size and number of changes that are being performed.
To correct this problem, make more and larger rollback segments available. Your rollback data for completed transactions will then be kept longer.
This error can be the result of programs not closing cursors after repeated FETCH and UPDATE statements. To correct this problem, make sure that you are closing cursors when you no longer require them.
This error can occur if a FETCH is executed after a COMMIT is issued. The number of rollback records created since the last CLOSE of your cursor will fill the rollback segments and you will begin overwriting earlier records.
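The discussion above is in terms of manually managed rollback segments. Under automatic undo management (UNDO_MANAGEMENT=AUTO), the equivalent fix is to size the undo tablespace and UNDO_RETENTION generously; a hedged sketch of the usual checks follows (the retention value is only an example):

SQL> show parameter undo
SQL> select max(maxquerylen) longest_query_secs, sum(ssolderrcnt) ora1555_count from v$undostat;
SQL> alter system set undo_retention=3600;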
Posted by Bis at 3:40 PM | 0 comments | Labels: Fundamentals
Wednesday, November 14, 2007
Overview of Common Schema Objects
A schema is a collection of Database objects. A schema is owned by a database user and has the same name as that user. Schema objects are the logical structures that directly refer to the database's data. Schema objects include structures like tables, views, and indexes. (There is no relationship between a tablespace and a schema. Objects in the same schema can be in different tablespaces, and a tablespace can hold objects from different schemas.)
Some of the most common schema objects are defined in the following section.
Tables
Tables are the basic unit of data storage in an Oracle database. Database tables hold all user-accessible data. Each table has columns and rows. The columns of a table define the different types of information that the table will contain, and each instance of that data is stored as a row.
Indexes
Indexes are optional structures associated with tables. Indexes can be created to increase the performance of data retrieval. Just as the index in a book helps you quickly locate specific information, an Oracle index provides an access path to table data.
When processing a request, Oracle can use some or all of the available indexes to locate the requested rows efficiently. Indexes are useful when applications frequently query a table for a range of rows (for example, all employees with a salary greater than 1000 dollars) or a specific row. Indexes are created on one or more columns of a table. After it is created, an index is automatically maintained and used by Oracle. Changes to table data (such as adding new rows, updating rows, or deleting rows) are automatically incorporated into all relevant indexes with complete transparency to the users.
Views
Views are customized presentations of data in one or more tables or other views. A view can also be considered a stored query. Views do not actually contain data. Rather, they derive their data from the tables on which they are based, referred to as the base tables of the views.
Like tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on a view actually affect the base tables of the view.
Views provide an additional level of table security by restricting access to a predetermined set of rows and columns of a table. They also hide data complexity and store complex queries.
Clusters
Clusters are groups of one or more tables physically stored together because they share common columns and are often used together. Because related rows are physically stored together, disk access time improves.
Like indexes, clusters do not affect application design. Whether a table is part of a cluster is transparent to users and to applications. Data stored in a clustered table is accessed by SQL in the same way as data stored in a nonclustered table.
Synonyms
A synonym is an alias for any table, view, materialized view, sequence, procedure, function, package, type, Java class schema object, user-defined object type, or another synonym. Because a synonym is simply an alias, it requires no storage other than its definition in the data dictionary.
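A minimal sketch tying these object types together (all names here are hypothetical examples):

SQL> create table emp (empno number primary key, ename varchar2(30), sal number);
SQL> create index emp_sal_idx on emp (sal);
SQL> create view high_paid_emp as select empno, ename from emp where sal > 1000;
SQL> create synonym emp_syn for emp;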
Posted by Bis at 1:05 PM | 0 comments | Labels: Fundamentals
Friday, October 26, 2007
Oracle Architecture
User Process: a user process connects with a server process to establish a session.
Server Process: serves the requests of the user process.
Oracle can be said to be composed of 2 major components :
1. Instance =SGA ( System Global Area) + Background Processes
2. Database - All the Files ( C - D - R) - This is the physical Database
C = Control File , D= Datafile , R= Redo log file
SGA (System Global Area)
------------------------------------------------------------
Major Components :
SGA = Database Buffer Cache + Shared Pool + Redo Buffer
Shared Pool = Library Cache + Data Dictionary Cache
Shared Pool
- Library Cache: holds the compiled version of the statement (the parse tree) and the execution plan, which tells how to run the statement, as determined by the optimizer.
- Data Dictionary Cache: used during the parse phase to resolve object names and validate access privileges. Holds table and column definitions, user names, passwords, and privileges.

Database Buffer Cache: holds the most recently used data. The Least Recently Used (LRU) algorithm is used to age out blocks of data in the Buffer Cache. Each buffer in the buffer cache is equal to the size of the specified data block.
- DB_BLOCK_BUFFERS sets the number of buffers available.
- DB_BLOCK_SIZE sets the size of the data block.

Redo Log Buffer: registers changes made to the database via the instance. Its size is defined by LOG_BUFFER in the parameter file. It stores records of changes: the block that changed, the new value, and the location of the change. The buffer is reused after the redo entries are recorded in the redo log files.
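To see how memory is actually divided among these SGA components on a running instance, you can query V$SGASTAT (an added example; the output varies from instance to instance):

SQL> select pool, round(sum(bytes)/1024/1024) mb from v$sgastat group by pool;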
PGA (Program Global Area): used by one process only; it contains session information, the sort area, cursor state, and stack space.

DBWR (Database Writer): writes changed data from the buffer cache to the data files when one of the following conditions is met:
- Dirty buffers reach a threshold value
- No free buffers are available
- A timeout occurs
- A DBWR checkpoint takes place

LGWR (Log Writer): writes the changes registered in the redo log buffer to the redo log files when one of the following conditions is met:
- The redo log buffer is 1/3 full
- A timeout occurs (about every 3 seconds)
- Before DBWR writes modified blocks to the data files
- A transaction commits

SMON (System Monitor): checks for consistency and initiates instance recovery.

PMON (Process Monitor): cleans up resources after a process fails; watches over the server processes.

CKPT (Checkpoint Process): updates the database status information after changes are permanently written to the data files from the buffer cache.
Database Files
-Data files
-Redo log files
-Control files
-Parameter file
-Password file
-Archived redo log files
Query Processing
- Parse: checks the syntax of the statement.
- Execute: executes the statement.
- Fetch: returns data to the user.

DML statements require only the Parse and Execute phases:
- The server process reads the data and rollback blocks from the data files.
- The blocks that are read are placed in the Buffer Cache.
- The server process locks the data and the objects.
- The server process records the changes to the rollback ("old") and data ("new") blocks in the redo log buffer.
- The server process then writes the rollback block ("old") and updates the data block ("new") in the database buffer cache. Both are marked as "dirty buffers".
Posted by Bis at 11:35 AM | 7 comments | Labels: Fundamentals
Friday, October 19, 2007
What Happens when you login to Oracle Database:
Ans: The Oracle database decouples the user process from the server process. When a user connects to an Oracle database, say with SQL*Plus, Oracle creates a new process to service this user's program.
This new process is called the Oracle server process, and it differs from the user's process (sqlplus, sqlplus.exe, TOAD.exe, or whatever else it may be). The server process interacts with memory structures such as the System Global Area (SGA), reads from the data files if the data is not found in the data block buffers in the SGA, and so on. Under no circumstances is the user's process (sqlplus) allowed to interact directly with the Oracle database. Because there are two processes (the user process and the server process) working in tandem to get the work done, this is sometimes known as the two-task architecture. If a user process does something potentially disruptive, such as violating the memory management in the host machine, the Oracle database itself is not affected and the damage is limited to the user's process.
Note: that the above applies to Oracle connections in a dedicated server environment. In a multi-threaded server environment, this model is a little different in the sense that a single server process can service more than one user process. It's still two-task, but instead of a 1:1 relation between the server and user processes, it's 1:many.
What are Host based Connections and What are Network-based Connections?
Ans: Host-based connection:
Suppose the user logs into the database using SQL*Plus.
oracle@myserver::/scripts [practicedb] $ sqlplus bnfdba
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Oct 19 17:44:56 2007
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
USER is "BNFDBA"
Instance
----------
PracticeDB
PracticeDB > select sid from v$mystat where rownum=1;
SID
----------
3917
PracticeDB > select spid from v$session s, v$process p
  2  where s.sid = (select sid from v$mystat where rownum < 2)
  3  and p.addr = s.paddr;

SPID
------------
10442

PracticeDB > !ps -eaf | grep 10442
bnfdba 17151 17149  1 18:32:58 pts/7  0:00 grep 10442
bnfdba 10442 10212  0 18:27:07 ?      0:00 oraclepracticedb (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
This, of course, assumes that no other SQL*Plus sessions have been running on the server.
The first one you have seen before: it is the process for the SQL*Plus session. The second one, process ID 10442, is the server process that Oracle creates for the user. Note the parent process ID of that process; it is 10212, which is the process ID of the SQL*Plus session.
The process name is "oraclepracticedb (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))", which tells you several things. First, the presence of the clause
LOCAL=YES indicates that this process started due to another process that is running locally, on the same server as the database itself. It also shows PROTOCOL=beq, which means that the connection was made through a bequeath connection. You can also find the information about the server process from the dynamic views.
select spid
from v$session s, v$process p
where s.sid = (select sid from v$mystat where rownum < 2)
and p.addr = s.paddr;

Network-based connection:
This time the user connects through the listener, using a TNS alias defined in tnsnames.ora, for example:

practicedb =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRACTICEDB)
    )
  )

oracle@myserver::/scripts [practicedb] $ sqlplus bnfdba@practicedb.bnfd.com
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Oct 19 17:44:56 2007
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
USER is "BNFDBA"
Instance
----------
PracticeDB
PracticeDB > select sid from v$mystat where rownum=1;
SID
----------
3939
After this, if you search for this session, it shows:
PracticeDB > @vusersid
Enter value for sid: 3939
Press return to continue
Fri Oct 19 page 1
User Session Statistics
CLIENT     SERVER                          OS        ORACLE
PID        PID      SID    SERIAL          USERNAME  USERNAME   MODULE
---------- -------- ------ --------------- --------- ---------- ----------
26917      27292    3939   34941           oracle    BNFDBA     SQL*Plus
This, of course, assumes that no other SQL*Plus sessions have been running on the server.
Note the client process ID (26917). Now search for that process ID on the server:
PracticeDB > !ps -eaf | grep 26917
oracle   383 26917  1 17:47:22 pts/7  0:00 grep 26917
oracle 26917 29489  1 17:44:55 pts/7  0:00 sqlplus bnfdba@practicedb.bnfd.com   <--- this is the client PID; its parent is 29489

PracticeDB > !ps -eaf | grep 27292
oracle 27292     1  0 17:45:03 ?      0:00 oraclepracticedb (LOCAL=NO)          <--- this is the server PID 27292
oracle   286 26917  1 17:47:11 pts/7  0:00 grep 27292

The parent process of the server process is "1". But why isn't it 29489?
Ans: To understand the answer, you have to understand how different Oracle server processes are created. In the first case, when the user did not use a TNS connect string, the connection was routed directly to the database without going to the listener first. The database created a server process and then handed control of the process to the process owner, a practice known as bequeathing, hence the term bequeath process, which showed up in the process name. In the second case, when the user was still on the same server but connected through the listener, the listener created the process for the user, which is known as forking. Similarly, if the user process were running on a different machine (such as a laptop), the connection would have to be made to the listener and the listener would have created the process. The process was created by a remote server, hence the process name contains the clause LOCAL=NO. Even if the SQL*Plus session was running on the same server, the fact that it was a non-bequeath connection made it a non-LOCAL one.
Note: Depending on the OS, you may not see the parent ID of the server process the same way you see the SQL*Plus session in bequeath connections. In some cases, even though the connection is bequeath, the parent ID will show as "1". Therefore, don't rely on the parent ID to determine what type of server process it is; use the process name instead.
Posted by Bis at 10:08 AM | 0 comments | Labels: Fundamentals