Wednesday, February 29, 2012

Creating SQL Profile

You can use the code below to create the SQL profile:
DECLARE
  my_task_name       VARCHAR2(30);
  my_sqltext         CLOB;
  my_sqlprofile_name VARCHAR2(30);
BEGIN
  my_sqltext := 'select * from emp where emp_id=1555';
  my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_text => my_sqltext, user_name => 'SCOTT', scope => 'COMPREHENSIVE', time_limit => 60, task_name => 'my_sql_tuning_task', description => 'Test Task to tune a query');
  -- Alternatively, you can create the task from a sql_id taken from the shared pool;
  -- use v$sqlarea or v$sql to find the sql_id of a previously executed query:
  -- my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '8wvyt72dd4zur', user_name => 'SCOTT', scope => 'COMPREHENSIVE', time_limit => 60, task_name => 'my_sql_tuning_task', description => 'Test Task to tune a query');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'my_sql_tuning_task');
  my_sqlprofile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'my_sql_tuning_task', name => 'my_sql_profile');
END;
/

You can use the statement below to check the details of the tuning task:
select DBMS_SQLTUNE.report_tuning_task(task_name =>'my_sql_tuning_task') from dual;

You can use below statement to drop tuning task:
execute dbms_sqltune.drop_tuning_task(task_name =>'my_sql_tuning_task');

To drop sql profile:
execute DBMS_SQLTUNE.drop_SQL_PROFILE (name => 'my_sql_profile');
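Once the profile has been accepted, you can verify that it exists and is enabled by querying the DBA_SQL_PROFILES view (a quick sanity check; extra columns trimmed for readability):

```sql
-- Confirm the profile was created and is in ENABLED status
select name, status, sql_text
from   dba_sql_profiles
where  name = 'my_sql_profile';
```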

Friday, December 16, 2011

Oracle RAC Interview Questions

Q-1 : What is the split-brain scenario?
A-1 : In Oracle RAC, split-brain is the scenario in which one or more nodes update the database files without coordinating with the other nodes. In that scenario there is a high possibility of compromising database integrity and introducing corruption into the database.

Q-2: What is the role of voting disk/file in RAC?
A-2: In Oracle RAC, the voting disk/file is used to determine the state of each node in the cluster. Each node must write a heartbeat to the voting disk at a predetermined interval (e.g., every second), so the other nodes in the cluster know that the node is alive. If a node cannot register its heartbeat on the voting disk within the stipulated time frame, it is fenced out of the cluster to avoid the split-brain scenario, which might introduce corruption into the database. The Oracle Cluster Synchronization Service Daemon (OCSSD) is responsible for maintaining synchronization of the cluster using the voting disk.
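On a running cluster you can list the voting disks with crsctl (a quick check; the exact output format varies by version and storage type):

```
$ crsctl query css votedisk
```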

...Post under construction!!...

Monday, February 7, 2011

How to restore local OLR in Oracle 11gR2 RAC?

When you see the following errors in your OHASD log file, pointing at a missing OLR file under $CRS_HOME/cdata/:

2009-10-16 15:02:43.664: [ default][3046311632] OHASD Daemon Starting. Command string :restart
2009-10-16 15:02:43.668: [ default][3046311632] Initializing OLR
2009-10-16 15:02:43.672: [ OCROSD][3046311632]utopen:6m':failed in stat OCR file/disk /u01/app/11.2.0/grid/cdata/server1.olr, errno=2, os err string=No such file or directory
2009-10-16 15:02:43.672: [ OCROSD][3046311632]utopen:7:failed to open any OCR file/disk, errno=2, os err string=No such file or directory
2009-10-16 15:02:43.673: [ OCRRAW][3046311632]proprinit: Could not open raw device
2009-10-16 15:02:43.673: [ OCRAPI][3046311632]a_init:16!: Backend init unsuccessful : [26]
2009-10-16 15:02:43.673: [ CRSOCR][3046311632] OCR context init failure. Error: PROCL-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]
2009-10-16 15:02:43.673: [ default][3046311632] OLR initalization failured, rc=26
2009-10-16 15:02:43.674: [ default][3046311632]Created alert : (:OHAS00106:) : Failed to initialize Oracle Local Registry
2009-10-16 15:02:43.674: [ default][3046311632][PANIC] OHASD exiting; Could not init OLR
2009-10-16 15:02:43.674: [ default][3046311632] Done.

cd /oracle_crs/product/11.2.0/crs_1/cdata

touch lkcme25070.olr

cd /oracle_crs/product/11.2.0/crs_1/bin

./ocrconfig -local -restore /oracle_crs/product/11.2.0/crs_1/cdata/lkcme25070/backup_20101130_154551.olr
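To avoid reaching this point without a usable copy, the OLR can also be backed up manually and the available backups listed with ocrconfig (a sketch, assuming 11gR2; run as root from the Grid home):

```
# Take a manual backup of the local OLR
./ocrconfig -local -manualbackup

# List the available OLR backups
./ocrconfig -local -showbackup
```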

Monday, July 19, 2010

Shell script to get the information from multiple Oracle databases

In the following example, I will demonstrate how to get information from many Oracle databases quickly and easily using a Unix/Linux shell script.

Example 1: You want to get the Oracle database version for many (100s of) databases in a quick and efficient manner using a shell script, assuming that you have a common user id with the same password in all the databases, e.g. scott/tiger.
You will need: i. a DB list, which will be the input to your shell script; ii. a SQL file containing the Oracle query; iii. the shell script; and iv. a log file, which is the output of the script's execution.
Input file 1: db_list.txt, which will contain the list of databases, i.e.
$cat db_list.txt

Input file 2: db_version.sql: which will contain SQL query i.e.
$cat db_version.sql
set feedback off
set line 200
set pagesize 0
set echo off
set heading off

select d.global_name, v.version from global_name d, product_component_version v where v.product like 'Oracle Database%';

File 3: Shell - Main Korn shell script, i.e.
#!/bin/ksh
DB_INPUT=db_list.txt
LOG=Db_version_info_`date +"%m%d%y%H%M%S"`.log
echo "Log File Name->"${PWD}/${LOG} > $LOG
while read line
do
  #echo $line >> $LOG
  sqlplus -s gems_read/gems_passw0rd@$line <<"EOC" >> ./$LOG
@db_version.sql
exit
EOC
  echo $?
done < $DB_INPUT

Now let's execute the shell script; the -x option runs the script in debug mode.
$>ksh -x
This will generate the output shown in File 4:

File 4: O/P or log file: gives you the list of databases with their Oracle versions when you execute the shell script.
$>cat Db_version_info_071910175552.log
Log File Name->/balvant/exp/Db_version_info_071910175552.log

Monday, June 28, 2010

Why should online redo logs never be backed up?

1. In “NOARCHIVELOG” mode, you can take only a closed backup, which is a consistent, whole-database backup and upon restore does not require recovery; hence there is no need for redo log backups.
2. In “ARCHIVELOG” mode, the ARCn process(es) archive the redo logs, so their contents are already protected by backups of the archived logs.
3. Multiplexing the redo log files (multiple log members per redo group) on separate disks/spindles protects against a single point of failure (SPOF).
4. The main reason not to back up redo logs is to avoid the scenario during restore where the existing online logs are overwritten by the backed-up redo logs, which will create issues during complete recovery of your database.
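In ARCHIVELOG mode, the usual way to protect the redo stream is therefore to back up the archived logs rather than the online logs, e.g. with RMAN (a sketch; note that RMAN never includes online redo logs in a backup):

```
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```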

Tuesday, June 22, 2010

Oracle datafile size

What is the limitation for oracle datafile size?

It depends on 2 factors:
i. OS and ii. Database block size (DB_BLOCK_SIZE) parameter.
On a 32-bit OS, you can typically create datafiles only up to 2 GB to 4 GB.
The following is the impact of the DB_BLOCK_SIZE parameter on the datafile size limit:
For a smallfile tablespace, a single datafile can hold up to 2^22 (about 4 million) blocks, which means:
with DB_BLOCK_SIZE=4k, the max file size = 4k * 4M blocks = 16 GB
with DB_BLOCK_SIZE=8k, the max file size = 8k * 4M blocks = 32 GB
with DB_BLOCK_SIZE=16k, the max file size = 16k * 4M blocks = 64 GB, and so on.

For a bigfile tablespace (a 10g feature), a single datafile can hold up to 2^32 (about 4 billion) blocks, which means:
with DB_BLOCK_SIZE=4k, the max file size = 4k * 4G blocks = 16 TB
with DB_BLOCK_SIZE=8k, the max file size = 8k * 4G blocks = 32 TB
with DB_BLOCK_SIZE=16k, the max file size = 16k * 4G blocks = 64 TB, and so on.
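The arithmetic above can be sketched in a few lines of shell (a rough check, using 1 GB = 2^30 and 1 TB = 2^40 bytes):

```shell
#!/bin/sh
# Max datafile size for a given DB_BLOCK_SIZE (bytes):
#   smallfile tablespace: 2^22 blocks per datafile
#   bigfile  tablespace:  2^32 blocks per datafile
block_size=8192                        # 8k block size
small_max=$(( block_size * 4194304 ))  # 2^22 blocks
big_max=$(( block_size * 4294967296 )) # 2^32 blocks
echo "smallfile max: $(( small_max / 1073741824 )) GB"   # 32 GB
echo "bigfile max:   $(( big_max / 1099511627776 )) TB"  # 32 TB
```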
Other limits can be found in the following Oracle document: