Wednesday, December 31, 2008

Install asmlib on RHEL5

[root@prod-oranode-210 oracleasm]# rpm -Uvh oracleasmlib-2.0.3-1.el5.x86_64.rpm
Preparing... ########################################### [100%]
package oracleasmlib-2.0.3-1.el5 is already installed
[root@prod-oranode-210 oracleasm]# rpm -Uvh oracleasm-support-2.1.2-1.el5.x86_64.rpm
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [100%]
[root@prod-oranode-210 oracleasm]# rpm -Uvh oracleasm-2.6.18-92.1.17.el5-2.0.5-1.el5.x86_64.rpm
Preparing... ########################################### [100%]
1:oracleasm-2.6.18-92.1.1########################################### [100%]
[root@prod-oranode-210 oracleasm]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]


dd if=/dev/zero of=/dev/asm_oradate_211.1 bs=8192 count=5000
dd if=/dev/zero of=/dev/asm_oradate_211.2 bs=8192 count=5000
dd if=/dev/zero of=/dev/asm_oradate_211.3 bs=8192 count=5000
dd if=/dev/zero of=/dev/asm_oradate_211.4 bs=8192 count=5000
dd if=/dev/zero of=/dev/asm_oradate_211.5 bs=8192 count=5000
dd if=/dev/zero of=/dev/asm_oradate_211.6 bs=8192 count=5000
dd if=/dev/zero of=/dev/asm_oradate_211.7 bs=8192 count=5000


/usr/sbin/asmtool -C -l /dev/oracleasm -n ASM_DISK211_01 -s /dev/emcpowera -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n ASM_DISK211_02 -s /dev/emcpowerb -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n ASM_DISK211_03 -s /dev/emcpowerc -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n ASM_DISK211_04 -s /dev/emcpowerd -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n ASM_DISK211_05 -s /dev/emcpowere -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n ASM_DISK211_06 -s /dev/emcpowerf -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n ASM_DISK211_07 -s /dev/emcpowerg -a force=yes
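# To verify the stamped disks afterwards (a quick check, not part of the original steps),
# the oracleasm init script can rescan and list them:

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
/etc/init.d/oracleasm querydisk ASM_DISK211_01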

Monday, December 8, 2008

Oracle Configuration Manager Install

Downloaded the platform-specific OCM binaries from Metalink.
Unzipped to $ORACLE_HOME/ccr (as recommended by Oracle):
unzip -d $ORACLE_HOME ocm-Production-AIX-ppc.zip

Install OCM:
execute the following:
$ORACLE_HOME/ccr/bin/setupCCR -s
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your MetaLink
Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: xxx@cmcmarkets.com
Provide your MetaLink password to receive security updates via your MetaLink
account.
Password (optional):
** Installing base package **
Deploying core - Version 10.3.0.1.0
** Registering installation with Oracle Configuration Manager server(s) **
Deploying engines - Version 10.2.7.1.0
Deploying metricdata - Version 10.2.4.0.3
Deploying scripts - Version 10.3.0.1.0
** Getting package updates from ContentServer **
** Starting the Oracle Configuration Manager Scheduler **
Oracle Configuration Manager - Release: 10.3.0.1.0 - Production
Copyright (c) 2005, 2008, Oracle. All rights reserved.
------------------------------------------------------------------
Starting Oracle Configuration Manager...
Waiting for status from Oracle Configuration Manager....
Start Date 27-Aug-2008 07:42:54
Last Collection Time -
Next Collection Time 28-Aug-2008 07:41:00
Collection Frequency Daily at 07:41
Collection Status scheduled collection running
Log Directory /scratch/jdoe/test/ccr/hosts/myhost/log
Registered At 27-Aug-2008 07:41:57
Automatic Update On
Collector Mode Connected
Oracle Configuration Manager successfully started.
Oracle Configuration Manager has been configured in connected mode. If the
target ORACLE_HOME is running a database, please refer to the
"Post-installation Database Configuration" section of the OCM Installation
and Administration Guide
(http://www.oracle.com/technology/documentation/ocm.html) to complete the
installation.
View configuration data reports and access valuable configuration best
practices by going to MetaLink.
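To check on the collector later (assuming the default ccr layout used above), emCCR can report the scheduler status and run an ad-hoc collection:

$ORACLE_HOME/ccr/bin/emCCR status
$ORACLE_HOME/ccr/bin/emCCR collect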

Friday, November 7, 2008

Audit Vault

# Install the Audit Vault Server
# This installs the Oracle 10.2.0.3 binaries, a database, Database Vault and Audit Vault

./runInstaller

File not found: %fineName% - IGNORE

# EM http://server:1158/em
# DV http://server:1158/dva
# AV http://server:5700/av

############################################

# Add the agent configuration on the server
avca add_agent -agentname avagent -agenthost server.domain
# creates user/password on the server for the agent to connect to.

# Install the Audit Vault Agent

./runInstaller

# Failed :- Oracle Audit Vault Configuration Assistant

Check what failed in configToolFailedCommands

# manually run the configuration on the agent ORACLE_HOME

avca -s initialize_agent -agentname avagent -agentusr user/password -agentport 7016 -av server:port:DB -rmiport 3121 -jmsport 3300

avctl show_agent_status -agentname avagent

############################################
# Create a DBAUD collector

# On the source database create the collector user

cd $ORACLE_HOME/av/scripts/streams/source
sqlplus dvmanager/password
create user srcuser identified by password;
@zarsspriv.sql srcuser setup
connect srcuser/password
SELECT * FROM SESSION_PRIVS;
SELECT * FROM SESSION_ROLES;

# on the source agent verify the source database permissions

avorcldb verify -src server:port:service -colltype ALL

ERROR: parameter GLOBAL_NAMES = false is not set to required value true
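# The GLOBAL_NAMES error above can be cleared on the source database before re-running
# the verify (a sketch, assuming an spfile is in use):

show parameter global_names
alter system set global_names=true scope=both;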

# on the AV server add the Oracle Database Source

avorcldb add_source -src server:port:service -desc 'av' -agentname avagent

# on the AV server add the DBAUD collector

avorcldb add_collector -srcname AV.CMC.LOCAL -agentname avagent -colltype DBAUD

# On the agent setup the source

avorcldb setup -verbose -srcname AV.CMC.LOCAL

###########################################

# Check the server
avctl show_av_status

# Stop AV
avctl stop_av

# Start the agent
# From the agent
avctl start_oc4j

# From the server
avctl start_agent -agentname avagent

Friday, October 31, 2008

Working on an Oracle Cost Reduction presentation

http://docs.google.com/EmbedSlideshow?docid=dcvphs7f_0hqt4mwhj

Thursday, October 30, 2008

Granting SYSDBA adds an entry to the password file

SQL> grant sysdba to robert;

Grant succeeded.

SQL> connect robert as sysdba
Enter password:
Connected.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

[t-dev-oranode-50.cmc.local:dv1]$ diff orapwapex.bak orapwapex
< Z[\]ORACLE Remote Password fileNTERNA232A4CC5194A53EESYSDCB748A5BC5390F2
---
> Z[\]ORACLE Remote Password fileNTERNA232A4CC5194A53EESYSDCB748A5BC5390F2ROBERT6D938DC70DAA0F72
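The same change can be seen without diffing the binary file by querying the password file view:

select * from v$pwfile_users;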

Wednesday, October 29, 2008

Install Apex on 10.2.0.3

Install 10.2.0.1 binaries
Install Companion CD
Install 10.2.0.3 patchset
Create database using dbca

# Download Apex zip file and install full Apex dev environment

@apexins sysaux sysaux temp /i/

# change admin password

@apxchpwd

@apex_epg_config /opt/oracle/product/10.2.0/apex

alter user anonymous account unlock;

SELECT DBMS_XDB.GETHTTPPORT FROM DUAL;

EXEC DBMS_XDB.SETHTTPPORT(8080);


When Oracle Application Express installs it creates three new database accounts:

* FLOWS_030100 - The account that owns the Oracle Application Express schema and metadata.
* FLOWS_FILES - The account that owns the Oracle Application Express uploaded files.
* APEX_PUBLIC_USER - The minimally privileged account used for Oracle Application Express configuration with Oracle HTTP Server and mod_plsql.
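A quick sanity check after the install is to confirm those accounts exist (ANONYMOUS was unlocked earlier for the embedded PL/SQL gateway):

select username, account_status
from dba_users
where username in ('FLOWS_030100','FLOWS_FILES','APEX_PUBLIC_USER','ANONYMOUS');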

http://host:port/apex/apex_admin admin/password1

Create Workspace and map to schema

http://host:port/apex system/password admin/password1

Tuesday, October 28, 2008

REDO is generated on global temporary tables.

In this test the redo generated for the global temporary table was roughly 4% of that generated for the permanent table.
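The table definitions were not captured here; a setup along these lines (assumed), with autotrace enabled, reproduces the comparison:

create table perm_table as select * from dba_source where 1=0;

create global temporary table temp_table on commit preserve rows
as select * from dba_source where 1=0;

set autotrace on statistics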

SQL> insert into perm_table select * from dba_source where rownum < 100000;

99999 rows created.


Statistics
----------------------------------------------------------
1822 recursive calls
27577 db block gets
9419 consistent gets
2 physical reads
20110532 redo size
832 bytes sent via SQL*Net to client
765 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
99999 rows processed

SQL> insert into temp_table select * from dba_source where rownum < 100000;

99999 rows created.


Statistics
----------------------------------------------------------
27 recursive calls
16980 db block gets
6319 consistent gets
0 physical reads
814600 redo size
832 bytes sent via SQL*Net to client
765 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
99999 rows processed

ORA-01666: controlfile is for a standby database

# PROBLEM :-
# unable to mount an Oracle 9i standby database

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01666: controlfile is for a standby database

# SOLUTION :-

SQL> alter database mount standby database;

Database altered.

ORA-09968: scumnt: unable to lock file

# PROBLEM :-
# unable to mount Oracle 9i standby database using the same ORACLE_HOME as the primary

alter database mount
Mon Oct 27 18:53:03 2008
scumnt: failed to lock /opt/oracle/product/9.2.0.4/dbs/lkDB1 exclusive
Mon Oct 27 18:53:03 2008
ORA-09968: scumnt: unable to lock file
Linux Error: 11: Resource temporarily unavailable
Additional information: 32061

# SOLUTION :-
# add the following to init.ora and restart
lock_name_space=db2

Thursday, October 23, 2008

Database Vault Example

# Using Database Vault to stop SYS from selecting from a table

1. login to the Database Vault console
2. create a new Realm, and add table robert.test as a Realm Secured Object
3. check who has privileges on the object

SQL> connect robert/password
Connected.
SQL> grant select on test to robert2;

Grant succeeded.

SQL> connect robert2/password
Connected.
SQL> select * from robert.test;

ID
----------
1

SQL> connect sys as sysdba
Enter password:
Connected.
SQL> select * from robert.test;
select * from robert.test
*
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> connect dvowner
Enter password:
Connected.
SQL> select * from robert.test;

ID
----------
1

Wednesday, October 22, 2008

Debug CRS

# turn on debug
sudo crsctl debug log res "ora.p01dwh.pl01dwh2.inst:5"
sudo crsctl debug log crs CRSRES:5
sudo crsctl debug log crs CRSAPP:5
sudo crsctl debug log crs CRSEVT:5

# run test
srvctl start instance -d p01dwh -i pl01dwh2

# turn off debug

sudo crsctl debug log res "ora.p01dwh.pl01dwh2.inst:0"
sudo crsctl debug log crs CRSRES:0
sudo crsctl debug log crs CRSAPP:0
sudo crsctl debug log crs CRSEVT:0
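# The extra trace output ends up under the CRS home (10.2 layout; paths are indicative):

less $ORA_CRS_HOME/log/`hostname -s`/crsd/crsd.log
less $ORA_CRS_HOME/log/`hostname -s`/racg/ora.p01dwh.pl01dwh2.inst.log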

Tuesday, October 21, 2008

Database Vault

# Download database vault for the database ORACLE_HOME version.

unset LANG
./runInstaller

# The installer installs Label Security, asks which database to install into, and registers itself with the OEM dbconsole.

http://t-dev-oranode-50.cmc.local:1158/dva

SELECT * FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

select username from dba_users where username = 'DVSYS';

select name from REALM_T$;


# INSTALL DATABASE VAULT (run by runInstaller)
/opt/oracle/product/10.2.0/dv/bin/dvca -action option -oh /opt/oracle/product/10.2.0/dv -s_path /tmp -logfile /opt/oracle/product/10.2.0/dv/cfgtoollogs/dvca_install.log -owner_account dvowner -owner_passwd password -jdbc_str jdbc:oracle:oci:@dv1 -sys_passwd password -lockout -nodecrypt -silent

# ENABLE DATABASE VAULT :-
dvca -action enable -service dv -sys_passwd password -owner_account dvowner -owner_passwd dvowner#1 -logfile dvenable.log -nodecrypt

emctl stop dbconsole

sqlplus "sys / as sysoper"
shutdown immediate;
exit

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dv_on
cd $ORACLE_HOME/bin
relink oracle

sqlplus "sys / as sysoper"
startup
exit

# DISABLE DATABASE VAULT

emctl stop dbconsole
sqlplus "SYS / AS SYSOPER"
shutdown immediate;
exit

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dv_off
cd $ORACLE_HOME/bin
relink oracle
sqlplus "SYS / AS SYSOPER"
startup
exit

CONNECT SYS / AS SYSDBA
ALTER TRIGGER DVSYS.DV_BEFORE_DDL_TRG DISABLE;
ALTER TRIGGER DVSYS.DV_AFTER_DDL_TRG DISABLE;

dvca -action disable
-service service_name
-instance Oracle_instance_name
-dbname database_name
-sys_passwd SYS_password
-owner_account DV_owner_account_name
-owner_passwd DV_owner_account_password
[-logfile ./dvca.log]
[-nodecrypt]
[-racnode node]

  • -silent is the option to run in command line mode.

  • -nodecrypt is the option to read plaintext passwords.

  • -lockout is the flag to use to disable SYSDBA operating system authentication.

# Recreate password file (note the nosysdba=n)
orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=mysyspassword force=y nosysdba=n

Wednesday, October 15, 2008

Logon Trigger to trace a specific user

CREATE OR REPLACE TRIGGER trace_trigger AFTER LOGON ON DATABASE
WHEN (user='APEX_ADMINISTRATOR')
BEGIN
execute immediate 'alter session set sql_trace=true';
END;
/
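The resulting trace files are written to user_dump_dest on the server; to find the directory:

select value from v$parameter where name = 'user_dump_dest';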

Check which blocks are used in a table

select distinct DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)
from ETSDYN.ETPPRICE;

Tuesday, October 14, 2008

OWB reports startup errors after database server move

UPDATE OWBRT_SYS.OWBRTPS
SET value = '<new value>'
WHERE key = '<key>';

UPDATE OWB10G_RUNTIME.WB_RT_SERVICE_NODES
SET host = '<new host>'
, port = <new port>
, service_name = '<new service name>'
, server_side_home = '<new server side home>'
WHERE node_id = 1;

UPDATE OWB10G_RUNTIME.WB_RT_STORE_PARAMETERS
SET parameter_value = '<new service name>'
WHERE parameter_name = 'Service';

UPDATE OWB10G_RUNTIME.WB_RT_STORE_PARAMETERS
SET parameter_value = '<new host>'
WHERE parameter_name = 'Host';

UPDATE OWB10G_RUNTIME.WB_RT_STORE_PARAMETERS
SET parameter_value = '<new port>'
WHERE parameter_name = 'Port';

OWB reports ORA-01882 when connecting to Oracle 9i databases

vi $ORACLE_HOME/owb/bin/unix/run_service.sh

# add -Duser.timezone="+00:00" as follows

$JAVAPATH/bin/java -Xmx768M -Duser.timezone="+00:00" -Djava.awt.headless=true -DORACLE_HOME="${RTHOME}" -DOCM_HOME="${RTHOME}" -DOCM_ORACLE_HOME="${RTHOME}" $CLASSPATH_LAUNCHER oracle.wh.runtime.platform.service.Service ${STARTUP_TYPE} ${NODEID} ${RTUSER} ${HOST} ${PORT} ${SERVICE}

# Restart the OWB services using stop_service.sql/start_service.sql.

Memory leak on RAC ONS process due to port conflict

After CRS is upgraded to 10.2.0.4, the ONS ports are changed in ons.config but not in the OCR.

Check if the ports match in OCR and ons.config

ocrdump -stdout -keyname DATABASE.ONS_HOSTS
onsctl ping
more $ORACLE_HOME/opmn/conf/ons.config
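# If the ports really do differ, the OCR side can be corrected with racgons from the
# CRS home (a sketch; hostname:port values are placeholders, and the exact syntax should
# be checked for your version):

$ORA_CRS_HOME/bin/racgons remove_config hostname:6200
$ORA_CRS_HOME/bin/racgons add_config hostname:6200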

Monday, October 13, 2008

catalog all archivelogs in a directory

rman
connect target /
catalog start with '+ASM_ORADATA12/P01CFD/ARCHIVELOG/';

DBMS_REPAIR.SEGMENT_FIX_STATUS

If you suspect issues with space allocation in ASSM tablespaces :-

exec dbms_repair.segment_fix_status('ETSDYN','ETPPRICE');

DBMS_SPACE.SPACE_USAGE

# For ASSM tablespaces, check space usage :-

create or replace procedure show_space (a_seg_owner IN VARCHAR2, a_seg_name IN VARCHAR2, a_seg_type IN VARCHAR2) is
unformatted_blocks NUMBER;
unformatted_bytes NUMBER;
fs1_blocks NUMBER;
fs1_bytes NUMBER;
fs2_blocks NUMBER;
fs2_bytes NUMBER;
fs3_blocks NUMBER;
fs3_bytes NUMBER;
fs4_blocks NUMBER;
fs4_bytes NUMBER;
full_blocks NUMBER;
full_bytes NUMBER;
BEGIN
DBMS_SPACE.SPACE_USAGE(
segment_owner => a_seg_owner,
segment_name => a_seg_name,
segment_type => a_seg_type,
unformatted_blocks => unformatted_blocks,
unformatted_bytes => unformatted_bytes,
fs1_blocks => fs1_blocks,
fs1_bytes => fs1_bytes,
fs2_blocks => fs2_blocks,
fs2_bytes => fs2_bytes,
fs3_blocks => fs3_blocks,
fs3_bytes => fs3_bytes,
fs4_blocks => fs4_blocks,
fs4_bytes => fs4_bytes,
full_blocks => full_blocks,
full_bytes => full_bytes,
partition_name=>NULL);

dbms_output.put_line('Space usage report for ' || a_seg_type || ': ' ||
a_seg_owner || '.' || a_seg_name);
dbms_output.put_line('***************************************');
dbms_output.put_line('Unformatted : '||unformatted_blocks||' blocks, '||
unformatted_bytes/1024/1024||' MB');
dbms_output.put_line(' 0% < Free Space < 25% : '||fs1_blocks||' blocks, '||
fs1_bytes/1024/1024||' MB');
dbms_output.put_line('25% < Free Space < 50% : '||fs2_blocks||' blocks, '||
fs2_bytes/1024/1024||' MB');
dbms_output.put_line('50% < Free Space < 75% : '||fs3_blocks||' blocks, '||
fs3_bytes/1024/1024||' MB');
dbms_output.put_line('75% < Free Space < 100% : '||fs4_blocks||' blocks, '||
fs4_bytes/1024/1024||' MB');
dbms_output.put_line('Full blocks : '||full_blocks||' blocks, '||
full_bytes/1024/1024||' MB');
END;
/

SQL> set serveroutput on
SQL> exec show_space ('ETSDYN','ETPPRICE','TABLE');
Space usage report for TABLE: ETSDYN.ETPPRICE
***************************************
Unformatted : 0 blocks, 0 MB
0% < Free Space < 25% : 16 blocks, .125 MB
25% < Free Space < 50% : 3 blocks, .0234375 MB
50% < Free Space < 75% : 4 blocks, .03125 MB
75% < Free Space < 100% : 2004570 blocks, 15660.703125 MB
Full blocks : 0 blocks, 0 MB

PL/SQL procedure successfully completed.

Friday, October 10, 2008

OEM error oracle.sysman.emSDK.emd.comm.OperationException: could not write to targets.xml.bak file

# When changing configuration in OEM :-

oracle.sysman.emSDK.emd.comm.OperationException: could not write to targets.xml.bak file

# In emagent.trc many filesystem errors

emagent.trc.4:2008-10-10 01:11:41,951 Thread-1114 ERROR util.files: ERROR: nmeufos_new: failed in lfiopn on file: /opt/oracle/product/10.2.0/agent10g/sysman/emd/agntstmp.txt.error = 24 (Too many open files)
emagent.trc.4:2008-10-10 01:11:41,953 Thread-1114 ERROR util.files: ERROR: nmeufos_new: failed in lfiopn on file: /opt/oracle/product/10.2.0/agent10g/sysman/emd/agntstmp.txt.error = 24 (Too many open files)

# Caused by filesystem problems. Shut down all Oracle processes, umount the filesystem, and run an fsck.



Using agentca to recreate the OEM agent targets.xml

emctl start agent
Oracle Enterprise Manager 10g Release 4 Grid Control 10.2.0.4.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Starting agent ...... failed.
Failed to initialize the Target Manager.

# Found that the targets.xml file was empty and recreated it using :-
# Single node :-
agentca -d -f

# Cluster :-
agentca -c PDCRS03 -d -f

Thursday, October 9, 2008

OEM cannot monitor second RAC instance - ORA-12519

Failed to connect to database instance: ORA-12519: TNS:no appropriate service handler found (DBD ERROR: OCIServerAttach).

Modify the SID definition for the second instance in the "Database Configuration Page":

FROM :-

racsid2

TO :-

racsid2)(INSTANCE_ROLE=ANY

emctl clearstate agent
emctl upload agent
emctl status agent

See the following Metalink Note :-
Note.335046.1 Problem Database Monitoring ORA-12519 Returned When Monitoring Secondary Instance Of Rac Database

Wednesday, October 8, 2008

Debugging OEM issues with EMDIAG

+ Download the EMDiagkit from the Note 421053.1 EMDiagkit Download and Master Index

Set the ORACLE_HOME environment for OMS on the OEM console server.
mkdir $ORACLE_HOME/emdiag
cp emdiag20080902.zip $ORACLE_HOME/emdiag
cd $ORACLE_HOME/emdiag
unzip emdiag20080902.zip
cd bin
chmod u+x repvfy
./repvfy -usr SYSMAN -pwd password install


+ Install it as per instructions in Note 421499.1 How to Install - Deinstall
+ Set the environment as per Note 421586.1 Environment Variables
+ Provide the output of the test below:

./repvfy -details

Tuesday, October 7, 2008

Using emcli to connect to OEM

export JAVA_HOME=/opt/oracle/product/10.2.0/oms10g/jdk
export PATH=$PATH:$JAVA_HOME/bin

cd /opt/oracle/product/10.2.0/oms10g/sysman/jlib
java -jar emclikit.jar client -install_dir=/home/oradba/scripts

emcli setup -url="https://test-oemnode-10:1159/em" -username oradba -dir /home/oracle/scripts -trustall -novalidate

emcli get_targets
emcli get_jobs

Monday, October 6, 2008

Check which Oracle instance you are connected to

select host_name, instance_name from v$instance;

Get oracle client IP address from AUD$

select distinct
terminal||'^'||
substr(substr(comment$text,instr(comment$text,'HOST=')+5,100),1,instr(substr(comment$text,instr(comment$text,'HOST=')+5,100),')')-1)
from sys.aud$;

Kill all Oracle sessions

set pagesize 0
spool kill_all.sql
select 'alter system kill session '''||sess.sid||', '||sess.serial#||''';' from v$session sess where username = 'PL_SCHEMA';
spool off
spool kill_all.log
@kill_all.sql
spool off


ps -ef |grep oraclepl01cdy1 | grep NO | grep -v grep | awk '{print $2}' | xargs kill -9

Friday, October 3, 2008

RMAN restore datafile

rman
connect target /
catalog backuppiece '/bie_oraexp/rman_P01DIS_20081002_702_1';
run{
allocate channel disk1 device type disk;
set newname for datafile 17 to '/sl02dis/oraarch1/SL02DIS/datafile/o1_mf_disdwh_d_4g9yqvjx_.dbf';
restore datafile 17;
}

Thursday, October 2, 2008

Enable Block Change Tracking

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+ASM_ORADATA41';

select * from v$block_change_tracking;

Oracle export with query (where clause)

export NLS_LANG=american_america.WE8ISO8859P1
exp user/password file=SUBSCRIPTIONGROUPS.dmp tables=SUBSCRIPTIONGROUPS query=\"where system_profile_id \= 1770\"

Tuesday, September 30, 2008

OEM job FAILED INIT

# OEM job fails with the following error :-
RemoteOperationException: Failed to establish input-streaming thread

# stop the agent
emctl stop agent

# increase the ThreadPoolModel
cp $ORACLE_HOME/sysman/config/emd.properties $ORACLE_HOME/sysman/config/emd.properties.
vi $ORACLE_HOME/sysman/config/emd.properties

ThreadPoolModel=MEDIUM

# Start the agent
emctl start agent

Redirect an OEM agent to another console

# stop the agent
emctl stop agent

# Change REPOSITORY_URL and emdWalletSrcUrl in emd.properties

# single instance

cp $ORACLE_HOME/sysman/config/emd.properties $ORACLE_HOME/sysman/config/emd.properties.20080930

vi $ORACLE_HOME/sysman/config/emd.properties
:%s/prod-oemnode-10/qa-oemnode-10/g

rm -r $ORACLE_HOME/sysman/emd/state/*
rm -r $ORACLE_HOME/sysman/emd/collection/*
rm -r $ORACLE_HOME/sysman/emd/upload/*
rm $ORACLE_HOME/sysman/emd/lastupld.xml
rm $ORACLE_HOME/sysman/emd/agntstmp.txt
rm $ORACLE_HOME/sysman/emd/blackouts.xml
rm $ORACLE_HOME/sysman/emd/protocol.ini

# RAC

cp $ORACLE_HOME/*/sysman/config/emd.properties $ORACLE_HOME/*/sysman/config/emd.properties.20080930

vi $ORACLE_HOME/*/sysman/config/emd.properties
:%s/prod-oemnode-10/qa-oemnode-10/g

# Delete old config files

rm -r $ORACLE_HOME/*/sysman/emd/state/*
rm -r $ORACLE_HOME/*/sysman/emd/collection/*
rm -r $ORACLE_HOME/*/sysman/emd/upload/*
rm $ORACLE_HOME/*/sysman/emd/lastupld.xml
rm $ORACLE_HOME/*/sysman/emd/agntstmp.txt
rm $ORACLE_HOME/*/sysman/emd/blackouts.xml
rm $ORACLE_HOME/*/sysman/emd/protocol.ini

# start the agent
emctl start agent

# clear the agent
emctl clearstate agent

# register with the console (prompts for password)
emctl secure agent

# upload the configuration to the console
emctl upload

Thursday, September 25, 2008

Delete all files in a directory

[prod-oranode-50:pd01bie1]$ rm *
-bash: /usr/bin/rm: The parameter or environment lists are too long.

for FILE in *; do rm "$FILE"; done

Find all files with a specified string

find . -exec grep "cmcmarkets" '{}' \; -print

Oracle between is inclusive ...

SQL> select * from test where id between 1 and 3;

ID
----------
1
2
3

Tuesday, September 23, 2008

ora_rowscn does not seem to work ?

(Most likely because the insert below was never committed: ORA_ROWSCN reflects the last committed change, and it is tracked at block level unless the table was created with ROWDEPENDENCIES.)

SQL> select scn_to_timestamp(max(ORA_ROWSCN)) from TABLE1;

SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN))
---------------------------------------------------------------------------
22-SEP-08 01.13.30.000000000 PM

SQL> insert into table1 values(8);

1 row created.

SQL> select scn_to_timestamp(max(ORA_ROWSCN)) from TABLE1;

SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN))
---------------------------------------------------------------------------
22-SEP-08 01.13.30.000000000 PM

SQL> alter system switch logfile;

System altered.

SQL> select scn_to_timestamp(max(ORA_ROWSCN)) from TABLE1;

SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN))
---------------------------------------------------------------------------
22-SEP-08 01.13.30.000000000 PM

SQL> select sysdate from dual;

SYSDATE
--------------------
23 SEP 2008 17:05:07

Disable streams processes

# stop apply

BEGIN
FOR cur_rec IN (SELECT apply_name FROM dba_apply) LOOP
DBMS_APPLY_ADM.STOP_APPLY(
apply_name => cur_rec.apply_name);
END LOOP;
END;
/

# stop capture

BEGIN
FOR cur_rec IN (SELECT capture_name FROM dba_capture) LOOP
DBMS_CAPTURE_ADM.STOP_CAPTURE(
capture_name => cur_rec.capture_name);
END LOOP;
END;
/

# stop propagate
BEGIN
DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE(
queue_name => 'strmadm.queue_name',
destination => 'dest');
END;
/

SELECT propagation_name
FROM dba_propagation;

exec dbms_propagation_adm.stop_propagation('XXX');

Monday, September 22, 2008

Logical Standby Management

# Override Guard protection
ALTER SESSION DISABLE GUARD;


# ALTER DATABASE GUARD (example below)
ALL - no users can modify any data
STANDBY - users cannot modify tables maintained by SQL Apply
NONE - normal security, no extra protection
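A minimal example of switching the guard and checking the current setting:

ALTER DATABASE GUARD STANDBY;
SELECT GUARD_STATUS FROM V$DATABASE;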


# Change the commit order (faster performance, no read consistency)

exec dbms_logstdby.apply_set('PRESERVE_COMMIT_ORDER', 'FALSE');

# Skip DML on a table

ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP ('DML', 'ROBERT', 'TABLE1', null);
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

EXECUTE DBMS_LOGSTDBY.UNSKIP ('DML', 'ROBERT', 'TABLE1');
# Skip a transaction

SELECT XIDUSN, XIDSLT, XIDSQN FROM DBA_LOGSTDBY_EVENTS WHERE EVENT_TIME = (SELECT MAX(EVENT_TIME) FROM DBA_LOGSTDBY_EVENTS);
exec dbms_logstdby.skip_transaction(XIDUSN => 1, XIDSLT => 10, XIDSQN => 1000);

# Instantiate a table using a database link

EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE (SCHEMA_NAME => 'ROBERT', TABLE_NAME => 'TABLE2', DBLINK => 'A_LINK');

# Tables for Diagnostics

SELECT FILE_NAME, SEQUENCE# AS SEQ#, FIRST_CHANGE# AS FCHANGE#, NEXT_CHANGE# AS NCHANGE#, TIMESTAMP, DICT_BEGIN AS BEG, DICT_END AS END,
THREAD# AS THR# FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS ORDER BY EVENT_TIMESTAMP, COMMIT_SCN;

SELECT APPLIED_SCN, LATEST_SCN, MINING_SCN, RESTART_SCN FROM V$LOGSTDBY_PROGRESS;

SELECT SID, SERIAL#, LOGSTDBY_ID AS LID, SPID, TYPE, HIGH_SCN FROM V$LOGSTDBY_PROCESS;

SELECT TYPE, STATUS_CODE, STATUS FROM V$LOGSTDBY_PROCESS;

SELECT * FROM V$LOGSTDBY_STATE;

SELECT NAME, VALUE, TIME_COMPUTED FROM V$LOGSTDBY_STATS;

v$LOGSTDBY

# What SQL is being applied?

SELECT SAS.SERVER_ID, SS.OWNER, SS.OBJECT_NAME, SS.STATISTIC_NAME, SS.VALUE
FROM V$SEGMENT_STATISTICS SS
, V$LOCK L
, V$STREAMS_APPLY_SERVER SAS
WHERE SAS.SERVER_ID = &SLAVE_ID
AND L.SID = SAS.SID
AND L.TYPE = 'TM'
AND SS.OBJ# = L.ID1;

Friday, September 19, 2008

Moving a datafile to another ASM group

# set tablespace read only
sqlplus
connect / as sysdba
alter tablespace bi_mifid_stg_lob read only;

# datafile offline
alter database datafile '+ASM_ORADATA52/p01bie/datafile/bi_mifid_stg_lob.267.663234691' offline;

# copy datafile
RMAN
connect target /
copy datafile '+ASM_ORADATA52/p01bie/datafile/bi_mifid_stg_lob.267.663234691' to '+ASM_ORADATA51';

# rename datafile
switch datafile '+ASM_ORADATA52/p01bie/datafile/bi_mifid_stg_lob.267.663234691' to copy;
# or
alter database rename datafile
'+ASM_ORADATA52/p01bie/datafile/bi_mifid_stg_lob.267.663234691'
to
'+ASM_ORADATA51/p01bie/datafile/bi_mifid_stg_lob.....';

# recover datafile
recover datafile '+ASM_ORADATA51/p01bie/datafile/bi_mifid_stg_lob.....';

# datafile online
sqlplus
connect / as sysdba
alter database datafile '+ASM_ORADATA51/p01bie/datafile/bi_mifid_stg_lob.....' online;

# tablespace read write
alter tablespace bi_mifid_stg_lob read write;

Thursday, September 18, 2008

Prepare devices for ASM on AIX

[dr-oranode-00:su01cfd1]$ ls -l /dev/hdisk*
brw------- 1 root system 18, 1 18 Aug 2007 /dev/hdisk14

mknod /dev/asm_disk501 c 18 1

chown oracle:dba /dev/asm*
chmod 660 /dev/asm*

dd if=/dev/zero of=/dev/asm_disk501 bs=8192 count=25000

Compressed tables and indexes

# create a compressed table
create table orderaudit_compress_pf0_32k compress pctfree 0 tablespace orderaudit_32k nologging as select * from bobj_login.orderaudit
where rownum < 1000000;

# create a compressed index
CREATE INDEX IDX_AUDIT_IDVER_CMP_32K ON orderaudit_nocompress_32k
(UNIQUEID, OBJECTVERSION)
COMPRESS 2
LOGGING
TABLESPACE orderaudit_32k
PARALLEL ( DEGREE 7 INSTANCES 1 );
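# Whether the segments actually ended up compressed can be confirmed from the dictionary:

select table_name, compression from dba_tables where table_name like 'ORDERAUDIT%';

select index_name, compression, prefix_length from dba_indexes
where index_name = 'IDX_AUDIT_IDVER_CMP_32K';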

DBMS_STATS

# find the latest partition
select partition_name from user_tab_partitions where table_name = 'ORDERS';

# create a stats backup table
exec dbms_stats.create_stat_table('BOBJ_LOGIN','STATS_BACKUP');

# export partition stats
exec dbms_stats.export_table_stats('BOBJ_LOGIN','ORDERS','ORDERS_WE210908','STATS_BACKUP',NULL,TRUE);

# collect new partition stats
begin
dbms_stats.gather_table_stats(ownname=> 'bobj_login'
, tabname=> 'ORDERS'
, estimate_percent => 10
, partname=>'ORDERS_WE210908'
,granularity =>'PARTITION'
,cascade =>TRUE
,degree=>4
,no_invalidate =>false);
end;
/

# gather system stats

dbms_stats.create_stat_table(ownname => 'system', stattab => 'system_stats', tblspace => 'datafix_dat');

dbms_stats.gather_system_stats(gathering_mode => 'START', stattab => 'system_stats', statid => 'systats_25062007', statown => 'system');

dbms_stats.gather_system_stats(gathering_mode => 'STOP',stattab => 'system_stats', statid => 'systats_25062007' ,statown => 'system');

# export system stats

dbms_stats.export_system_stats(stattab => 'systats_apn_5_1_2006', statid => 'systats_before_new_cpus', statown => 'datafix');

# delete system stats

dbms_stats.delete_system_stats;

# import system stats

dbms_stats.import_system_stats(stattab => 'system_stats', statid => 'systats_25062007', statown => 'system');

# export schema stats

dbms_stats.export_schema_stats(ownname => '<schema>', stattab => 'systats_apn_5_1_2006', statid => 'systats_before_new_cpus', statown => 'datafix');

# gather schema stats

DBMS_STATS.GATHER_SCHEMA_STATS
(
OwnName => 'bobj_login',
Estimate_Percent => 2,
Block_sample => FALSE,
Method_Opt => 'FOR ALL COLUMNS SIZE 100',
Degree => 4,
Cascade => TRUE,
GRANULARITY => 'ALL'
);

Wednesday, September 17, 2008

ASM rebalance power.

To increase the power of a rebalance operation on the fly, run the following (this has been done for ASM_ORADATA52):

ALTER DISKGROUP ASM_ORADATA52 REBALANCE POWER 8 NOWAIT;

You can monitor the rebalance by querying v$asm_operation in SQL*Plus (connected to the ASM instance):

select * from v$asm_operation;
GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES
-----------
2 REBAL RUN 8 8 1320 170597 393
430

Tuesday, September 16, 2008

ipcs ipcrm

[oracle@dev-statsdb-1 bdump]$ ipcs

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x0001ffb8 0 root 666 76 4
0x00025990 32769 root 666 8308 4
0x00027cb9 65538 root 666 132256 1
0x00027cba 98307 root 666 132256 1
0x00027cbb 131076 root 666 132256 1
0x8142536c 163845 oracle 640 3156213760 0

[oracle@dev-statsdb-1 bdump]$ ipcrm -m 163845

AWR snaps and reports

# take a snapshot

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

# Run a report

@$ORACLE_HOME/rdbms/admin/awrrpt.sql
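# The snapshot interval and retention can also be adjusted; the values below are just an
# example (hourly snapshots kept for 7 days):

exec DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(interval => 60, retention => 10080);

select snap_interval, retention from dba_hist_wr_control;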

Thursday, September 11, 2008

If the db recovery file dest fills up.

************************************************************************
You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************

rman
connect target /
crosscheck archivelog all;
delete expired archivelog all;

delete archivelog all completed before 'sysdate -10';

delete noprompt force archivelog all completed before "sysdate-3" backed up 2 times to device type 'SBT_TAPE';

ORA-01111: name for data file 22 is unknown - rename to correct file

# Physical Standby. Datafile added to primary causes recovery to fail on Standby

SQL> recover standby database;
ORA-00283: recovery session canceled due to errors
ORA-01111: name for data file 22 is unknown - rename to correct file
ORA-01110: data file 22: '/opt/oracle/product/10.2.0/db_1/dbs/UNNAMED00022'
ORA-01157: cannot identify/lock data file 22 - see DBWR trace file
ORA-01111: name for data file 22 is unknown - rename to correct file
ORA-01110: data file 22: '/opt/oracle/product/10.2.0/db_1/dbs/UNNAMED00022'

# Fix
alter database create datafile '/opt/oracle/product/10.2.0/db_1/dbs/UNNAMED00022' as '+ASM_ORADATA52/pl01dis1/datafile/disdwh_dat_nassm.dbf';

# To avoid a repeat
alter system set db_create_file_dest='+ASM_ORADATA52';
alter system set standby_file_management=auto;

Wednesday, September 10, 2008

Using cpio to extract from a file

cat 10g_wf_aix.cpio | cpio -idcmv

Rebuilding Indexes, Index Partitions, Index Subpartitions

# INDEX
select distinct 'alter index bi_mifid_staging.'||segment_name||' rebuild tablespace bi_mifid_stg_data online;'
from dba_segments
where owner = 'BI_MIFID_STAGING'
and segment_type = 'INDEX'
and tablespace_name = 'BI_MIFID_STG_IND';

# INDEX PARTITION
select distinct 'alter index bi_mifid_staging.'||segment_name||' rebuild partition '||partition_name||' tablespace bi_mifid_stg_data online;'
from dba_segments
where owner = 'BI_MIFID_STAGING'
and segment_type = 'INDEX PARTITION'
and tablespace_name = 'BI_MIFID_STG_IND';

# INDEX SUBPARTITIONS
select distinct 'alter index bi_mifid_staging.'||index_name||' rebuild subpartition '||subpartition_name||' tablespace bi_mifid_stg_data online;'
from dba_ind_subpartitions
where index_owner = 'BI_MIFID_STAGING'
and tablespace_name = 'BI_MIFID_STG_IND';

Tuesday, September 9, 2008

Check for Oracle block corruption

# To check for Oracle block corruption you can :-

# 1. use DBV
dbv file=/db1/system01.dbf blocksize=8192

#2. use rman
run {
allocate channel disk1 device type disk ;
allocate channel disk2 device type disk ;
backup validate check logical database;
}
select count(*) from v$database_block_corruption;


RMAN> blockrecover corruption list;

Saturday, September 6, 2008

Firewall trigger ?

Unfortunately this did not work because ora_client_ip_address is null :(

CREATE OR REPLACE TRIGGER firewall_trig
AFTER LOGON
ON DATABASE
BEGIN
if NVL(ora_client_ip_address, 'N/A') like '111.111.111%' then
raise_application_error(-20000,'Your IP address is not allowed to login');
end if;
END firewall_trig;
/

Alter user trigger

create or replace trigger alter_trigger
before alter
on database
declare
begin
if (ora_dict_obj_type = 'USER') then
raise_application_error(-20010,'you are not allowed to alter users');
end if;
end;
/

dbms_application_info

dbms_application_info.set_module (module_name=>'test_package', action_name=>'starting batch');
dbms_application_info.set_action
(action_name => 'loading table x');
dbms_application_info.set_client_info ('inserting row ' || rowcount);

select module, action, client_info from v$session;

Also use
dbms_application_info.set_session_longops to populate v$session_longops.
You can set sofar and totalwork, and Oracle calculates the time remaining.
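A rough sketch of set_session_longops inside a batch loop (the operation name and totals are made up for illustration); progress then shows up in v$session_longops:

DECLARE
l_rindex BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
l_slno BINARY_INTEGER;
l_total NUMBER := 100;
BEGIN
FOR i IN 1 .. l_total LOOP
-- do a unit of work here, then report progress
dbms_application_info.set_session_longops(
rindex => l_rindex,
slno => l_slno,
op_name => 'batch load',
target_desc => 'example table',
sofar => i,
totalwork => l_total,
units => 'rows');
END LOOP;
END;
/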

UTL_MAIL example

DECLARE
CRLF CHAR(2) := CHR(10) || CHR(13);

BEGIN
UTL_MAIL.SEND(
sender => 'test@test.com'
,recipients => 'test@test.com'
,cc => NULL
,bcc => NULL
,subject => 'Test Email'
,message => 'Test line1' || CRLF || CRLF
|| 'Test line2'
,mime_type => 'text/plain; charset=us-ascii'
,priority => 1
);
END;
/

RESUMABLE_TIMEOUT

Temp tablespace filling up and you can't work out who is doing it?

Set RESUMABLE_TIMEOUT. The session will suspend when it runs out of space, and you can decide whether to kill it or add space. Note that while the temp tablespace is full other sessions may suspend as well, so think carefully about whether you would rather have consumers of abnormally large amounts of space terminate or risk holding up other sessions.
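A sketch: set a two-hour timeout (the value is arbitrary), then watch dba_resumable for suspended statements:

alter system set resumable_timeout=7200;

select user_id, session_id, status, name, error_msg
from dba_resumable;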

Thursday, September 4, 2008

Archive to db_recovery_file_dest

alter system set db_recovery_file_dest_size=400G;
alter system set db_recovery_file_dest='+ASM_ORADATA42';
alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pd01dis1';

Create the Oracle Password file

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=mypassword entries=5

Moving redo logs to new ASM disk groups

alter system set db_create_online_log_dest_1='+ASM_ORADATA43';
alter system set db_create_online_log_dest_2='+ASM_ORADATA43';

select group#, status from v$log;

ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE ADD LOGFILE GROUP 1 SIZE 600M;

ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE ADD LOGFILE GROUP 2 SIZE 600M;

ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE ADD LOGFILE GROUP 3 SIZE 600M;

SR Dataguard Diag for Standby

-- NAME: DG_phy_stby_diag.sql   
-- ------------------------------------------------------------------------
-- AUTHOR:
-- Michael Smith - Oracle Support Services - DataServer Group
-- Copyright 2002, Oracle Corporation
-- ------------------------------------------------------------------------
-- PURPOSE:
-- This script is to be used to assist in collecting information to help
-- troubleshoot Data Guard issues.
-- ------------------------------------------------------------------------
-- DISCLAIMER:
-- This script is provided for educational purposes only. It is NOT
-- supported by Oracle World Wide Technical Support.
-- The script has been tested and appears to work as intended.
-- You should always run new scripts on a test instance initially.
-- ------------------------------------------------------------------------
-- Script output is as follows:

set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hhmi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool dgdiag_phystby_&&dbname&&timestamp&&suffix
set lines 200
set pagesize 35
set trim on
set trims on
alter session set nls_date_format = 'MON-DD-YYYY HH24:MI:SS';
set feedback on
select to_char(sysdate) time from dual;

set echo on

--
-- ARCHIVER can be (STOPPED | STARTED | FAILED) FAILED means that the archiver failed
-- to archive a -- log last time, but will try again within 5 minutes. LOG_SWITCH_WAIT
-- The ARCHIVE LOG/CLEAR LOG/CHECKPOINT event log switching is waiting for. Note that
-- if ALTER SYSTEM SWITCH LOGFILE is hung, but there is room in the current online
-- redo log, then value is NULL

column host_name format a20 tru
column version format a9 tru
select instance_name,host_name,version,archiver,log_switch_wait from v$instance;

-- The following select will give us the generic information about how this standby is
-- set up. The database_role should be standby as that is what this script is intended
-- to be run on. If protection_level is different than protection_mode then for some
-- reason the mode listed in protection_mode experienced a need to downgrade. Once the
-- error condition has been corrected the protection_level should match the protection_mode
-- after the next log switch.

column ROLE format a7 tru
select name,database_role,log_mode,controlfile_type,protection_mode,protection_level
from v$database;

-- Force logging is not mandatory but is recommended. Supplemental logging should be enabled
-- on the standby if a logical standby is in the configuration. During normal
-- operations it is acceptable for SWITCHOVER_STATUS to be SESSIONS ACTIVE or NOT ALLOWED.

column force_logging format a13 tru
column remote_archive format a14 tru
column dataguard_broker format a16 tru
select force_logging,remote_archive,supplemental_log_data_pk,supplemental_log_data_ui,
switchover_status,dataguard_broker from v$database;

-- This query produces a list of all archive destinations and shows if they are enabled,
-- what process is servicing that destination, if the destination is local or remote,
-- and if remote what the current mount ID is. For a physical standby we should have at
-- least one remote destination that points the primary set but it should be deferred.

COLUMN destination FORMAT A35 WRAP
column process format a7
column archiver format a8
column ID format 99

select dest_id "ID",destination,status,target,
archiver,schedule,process,mountid
from v$archive_dest;

-- If the protection mode of the standby is set to anything higher than max performance
-- then we need to make sure the remote destination that points to the primary is set
-- with the correct options else we will have issues during switchover.

select dest_id,process,transmit_mode,async_blocks,
net_timeout,delay_mins,reopen_secs,register,binding
from v$archive_dest;

-- The following select will show any errors that occurred the last time an attempt to
-- archive to the destination was attempted. If ERROR is blank and status is VALID then
-- the archive completed correctly.

column error format a55 tru
select dest_id,status,error from v$archive_dest;

-- Determine if any error conditions have been reached by querying the v$dataguard_status
-- view (view only available in 9.2.0 and above):

column message format a80
select message, timestamp
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;

-- The following query is ran to get the status of the SRL's on the standby. If the
-- primary is archiving with the LGWR process and SRL's are present (in the correct
-- number and size) then we should see a group# active.

select group#,sequence#,bytes,used,archived,status from v$standby_log;

-- The above SRL's should match in number and in size with the ORL's returned below:

select group#,thread#,sequence#,bytes,archived,status from v$log;

-- Query v$managed_standby to see the status of processes involved in the
-- configuration.

select process,status,client_process,sequence#,block#,active_agents,known_agents
from v$managed_standby;

-- Verify that the last sequence# received and the last sequence# applied to standby
-- database.

select max(al.sequence#) "Last Seq Received", max(lh.sequence#) "Last Seq Applied"
from v$archived_log al, v$log_history lh;

-- The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next
-- gap that is currently blocking redo apply from continuing. After resolving the
-- identified gap and starting redo apply, query the V$ARCHIVE_GAP fixed view again
-- on the physical standby database to determine the next gap sequence, if there is
-- one.

select * from v$archive_gap;

-- Non-default init parameters.

set numwidth 5
column name format a30 tru
column value format a50 wra
select name, value
from v$parameter
where isdefault = 'FALSE';

spool off

###################################################################

-- NAME: dg_table_diag.sql (Run on PRIMARY and on LOGICAL STANDBY)
-- ------------------------------------------------------------------------
-- Copyright 2004, Oracle Corporation
-- LAST UPDATED: 2/20/04
--
-- Usage: @dg_table_diag schema table
-- ------------------------------------------------------------------------
-- PURPOSE:
-- This script is to be used to assist in collecting information to help
-- troubleshoot Data Guard issues with an emphasis on Logical Standby.
-- ------------------------------------------------------------------------
-- DISCLAIMER:
-- This script is provided for educational purposes only. It is NOT
-- supported by Oracle World Wide Technical Support.
-- The script has been tested and appears to work as intended.
-- You should always run new scripts on a test instance initially.
-- ------------------------------------------------------------------------
-- Script output is as follows:

set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hhmi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool dg_table_diag_&&dbname&&timestamp&&suffix
set linesize 79
set pagesize 180
set trim on
set trims on
alter session set nls_date_format = 'MON-DD-YYYY HH24:MI:SS';
set numwidth 15
set long 90000
set feedback on

-- Update the version when you change this file, so we know what file was
-- used to produce what output.

set echo on
--
-- Diagnostic version: 040618
--
set echo off
select to_char(sysdate) time from dual;


-- Show the database name and role, then display details about the table.
select name,database_role,supplemental_log_data_pk pk,
supplemental_log_data_ui ui
from v$database;

-- Exit on error
Whenever sqlerror exit


-- Get the input
variable inowner varchar2(30);
variable intable varchar2(30);
variable objnum number;

exec :inowner := UPPER('&1'); :intable := UPPER('&2')

-- Continue on error
Whenever sqlerror continue

-- Show the table
select DBMS_METADATA.GET_DDL('TABLE', :intable, :inowner) create_table
from dual;

-- Show the indexes on the table.
select DBMS_METADATA.GET_DEPENDENT_DDL('INDEX', :intable, :inowner)
create_indexes
from dual;

-- Show the triggers on the table.
select DBMS_METADATA.GET_DEPENDENT_DDL('TRIGGER', :intable, :inowner)
create_triggers
from dual;

-- Exit on error
Whenever sqlerror exit

-- Get the object number from the log miner dictionary
begin
select slo.obj# into :objnum
from system.logmnr_obj$ slo, system.logmnr_user$ slu,
system.logmnr_uid$ sud, system.logstdby$parameters slp
where slo.owner# = slu.user#
and slo.logmnr_uid = sud.logmnr_uid
and slu.logmnr_uid = sud.logmnr_uid
and to_char(sud.session#) = slp.value
and slp.name = 'LMNR_SID'
and (slo.type# = 2 or slo.type# = 6)
and slu.name = :inowner
and slo.name = :intable;
end;
/

-- Show the various column numbers. (Needed if no dictionary)
set numwidth 8
select c.name, c.col#, c.segcol#, c.null$, c.obj#
from system.logmnr_col$ c,
system.logmnr_uid$ sud, system.logstdby$parameters slp
where c.obj# = :objnum
and c.logmnr_uid = sud.logmnr_uid
and to_char(sud.session#) = slp.value
and slp.name = 'LMNR_SID'
order by col#;

-- Show the statement we died on.
select commit_scn, current_scn, xidusn, xidslt, xidsqn, event, status
from dba_logstdby_events
where commit_scn = (select max(commit_scn) from dba_logstdby_events);


set echo on
--
-- Log Miner: Changes to table on primary.
-- Data gathered from logs registered on standby.
--
set echo off

-- This assumes a standby with logs in it. If you have no logs
-- This does no harm
declare
final_scn number;
mineopt number;
instver number;
dict number;

CURSOR logfiles IS SELECT file_name FROM dba_logstdby_log
WHERE first_change# <= final_scn;
begin
-- Get the instance base version#
select to_number(substr(version, 1, 2)) into instver from v$instance;

-- Compute the last scn to be mined.
select greatest(
(select applied_scn from dba_logstdby_progress),
(select max(commit_scn) from dba_logstdby_events))
into final_scn
from sys.dual;

-- If there is a dictionary use the local catalog.
select count(*) into dict from dba_logstdby_log
where dict_begin = 'YES';

-- Register the necessary files
FOR log IN logfiles LOOP
dbms_logmnr.add_logfile(log.file_name);
END LOOP;

-- Start Log Miner (without a dictionary)
-- In 10.1 dbms_logmnr.NO_ROWID_IN_STMT = 2048
mineopt := dbms_logmnr.PRINT_PRETTY_SQL;
if instver > 9 then mineopt := mineopt + 2048; end if;
if dict > 0
then
mineopt := mineopt + dbms_logmnr.DICT_FROM_REDO_LOGS;
mineopt := mineopt + dbms_logmnr.DDL_DICT_TRACKING;
end if;
dbms_logmnr.start_logmnr(
endScn => final_scn,
options => mineopt);
end;
/

-- Continue on error
Whenever sqlerror continue

-- Verify we've added the right logs
set numwidth 6
select thread_id, thread_sqn, low_time, high_time, dictionary_begin
from v$logmnr_logs order by thread_id, thread_sqn;

-- Look at redo for a given table.
column scn format 999999999999999
select 'P' P, SCN, XIDUSN, XIDSLT, XIDSQN, ROLLBACK,
THREAD#, RBASQN, RBABLK, RBABYTE,
SQL_REDO
from v$logmnr_contents
where DATA_OBJ# = :objnum;

-- Exit on error
Whenever sqlerror exit

-- Finish by cleaning up this log miner session.
exec dbms_logmnr.end_logmnr;


set echo on
--
-- Log Miner: Changes to table on local system
-- Data gathered only if ARCHIVELOG mode is on
--
set echo off

-- Mine local logs, since the time we registered our first from primary.
-- The files may not be on disk, so we trap for that.
declare
CURSOR logfiles IS SELECT name FROM v$archived_log
WHERE dest_id = 1
AND completion_time > (select min(timestamp) from dba_logstdby_log)
ORDER BY first_time DESC;
instver NUMBER;
mineopt NUMBER;
begin
BEGIN
FOR log IN logfiles LOOP
dbms_logmnr.add_logfile(log.name);
END LOOP;

EXCEPTION
WHEN OTHERS THEN mineopt := 0;
END;
end;
/

declare
instver NUMBER;
mineopt NUMBER;
begin
-- Get the instance base version#
select to_number(substr(version, 1, 2)) into instver from v$instance;

-- Start Log Miner (without a dictionary)
-- In 10.1 dbms_logmnr.NO_ROWID_IN_STMT = 2048
mineopt := dbms_logmnr.PRINT_PRETTY_SQL +
dbms_logmnr.DICT_FROM_ONLINE_CATALOG;
if instver > 9 then mineopt := mineopt + 2048; end if;

dbms_logmnr.start_logmnr(options => mineopt);
end;
/


-- Continue on error
Whenever sqlerror continue

-- Verify we've added the right logs
set numwidth 6
select thread_id, thread_sqn, low_time, high_time, dictionary_begin
from v$logmnr_logs order by thread_id, thread_sqn;

-- Look at redo for a given table or standby tables.
column scn format 999999999999999
select 'S' S, SCN, XIDUSN, XIDSLT, XIDSQN, ROLLBACK,
THREAD#, RBASQN, RBABLK, RBABYTE,
SQL_REDO
from v$logmnr_contents
where seg_name = 'LOGSTDBY$PARAMETERS'
or seg_name = 'LOGSTDBY$HISTORY'
or seg_name = 'LOGSTDBY$SKIP'
or seg_name = 'LOGSTDBY$SKIP_TRANSACTION'
or (seg_owner = :inowner and seg_name = :intable);

-- Finish by cleaning up this log miner session.
exec dbms_logmnr.end_logmnr;

spool off

exit

Dataguard SR diagnostics script for PRIMARY

-- NAME: dg_prim_diag.sql  (Run on PRIMARY with a LOGICAL or PHYSICAL STANDBY)
-- ------------------------------------------------------------------------
-- Copyright 2002, Oracle Corporation
-- LAST UPDATED: 2/23/04
--
-- Usage: @dg_prim_diag
-- ------------------------------------------------------------------------
-- PURPOSE:
-- This script is to be used to assist in collecting information to help
-- troubleshoot Data Guard issues with an emphasis on Logical Standby.
-- ------------------------------------------------------------------------
-- DISCLAIMER:
-- This script is provided for educational purposes only. It is NOT
-- supported by Oracle World Wide Technical Support.
-- The script has been tested and appears to work as intended.
-- You should always run new scripts on a test instance initially.
-- ------------------------------------------------------------------------
-- Script output is as follows:

set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hhmi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool dg_prim_diag_&&dbname&&timestamp&&suffix
set linesize 79
set pagesize 35
set trim on
set trims on
alter session set nls_date_format = 'MON-DD-YYYY HH24:MI:SS';
set feedback on
select to_char(sysdate) time from dual;

set echo on

-- In the following the database_role should be primary as that is what
-- this script is intended to be run on. If protection_level is different
-- than protection_mode then for some reason the mode listed in
-- protection_mode experienced a need to downgrade. Once the error
-- condition has been corrected the protection_level should match the
-- protection_mode after the next log switch.

column role format a7 tru
column name format a10 wrap

select name,database_role role,log_mode,
protection_mode,protection_level
from v$database;

-- ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the
-- archiver failed to archive a log last time, but will try again within 5
-- minutes. LOG_SWITCH_WAIT The ARCHIVE LOG/CLEAR LOG/CHECKPOINT event log
-- switching is waiting for. Note that if ALTER SYSTEM SWITCH LOGFILE is
-- hung, but there is room in the current online redo log, then value is
-- NULL

column host_name format a20 tru
column version format a9 tru

select instance_name,host_name,version,archiver,log_switch_wait
from v$instance;

-- The following query gives us information about catpatch.
-- This way we can tell if the procedure doesn't match the image.

select version, modified, status from dba_registry
where comp_id = 'CATPROC';

-- Force logging is not mandatory but is recommended. Supplemental
-- logging must be enabled if the standby associated with this primary is
-- a logical standby. During normal operations it is acceptable for
-- SWITCHOVER_STATUS to be SESSIONS ACTIVE or TO STANDBY.

column force_logging format a13 tru
column remote_archive format a14 tru
column dataguard_broker format a16 tru

select force_logging,remote_archive,
supplemental_log_data_pk,supplemental_log_data_ui,
switchover_status,dataguard_broker
from v$database;

-- This query produces a list of all archive destinations. It shows if
-- they are enabled, what process is servicing that destination, if the
-- destination is local or remote, and if remote what the current mount ID
-- is.

column destination format a35 wrap
column process format a7
column archiver format a8
column ID format 99
column mid format 99

select dest_id "ID",destination,status,target,
schedule,process,mountid mid
from v$archive_dest order by dest_id;

-- This select will give further detail on the destinations as to what
-- options have been set. Register indicates whether or not the archived
-- redo log is registered in the remote destination control file.

set numwidth 8
column ID format 99

select dest_id "ID",archiver,transmit_mode,affirm,async_blocks async,
net_timeout net_time,delay_mins delay,reopen_secs reopen,
register,binding
from v$archive_dest order by dest_id;

-- The following select will show any errors that occurred the last time
-- an attempt to archive to the destination was attempted. If ERROR is
-- blank and status is VALID then the archive completed correctly.

column error format a55 wrap

select dest_id,status,error from v$archive_dest;

-- The query below will determine if any error conditions have been
-- reached by querying the v$dataguard_status view (view only available in
-- 9.2.0 and above):

column message format a80

select message, timestamp
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;

-- The following query will determine the current sequence number
-- and the last sequence archived. If you are remotely archiving
-- using the LGWR process then the archived sequence should be one
-- higher than the current sequence. If remotely archiving using the
-- ARCH process then the archived sequence should be equal to the
-- current sequence. The applied sequence information is updated at
-- log switch time.

select ads.dest_id,max(sequence#) "Current Sequence",
max(log_sequence) "Last Archived"
from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
where ad.dest_id=al.dest_id
and al.dest_id=ads.dest_id
group by ads.dest_id;

-- The following select will attempt to gather as much information as
-- possible from the standby. SRLs are not supported with Logical Standby
-- until Version 10.1.

set numwidth 8
column ID format 99
column "SRLs" format 99
column Active format 99

select dest_id id,database_mode db_mode,recovery_mode,
protection_mode,standby_logfile_count "SRLs",
standby_logfile_active ACTIVE,
archived_seq#
from v$archive_dest_status;

-- Query v$managed_standby to see the status of processes involved in
-- the shipping redo on this system. Does not include processes needed to
-- apply redo.

select process,status,client_process,sequence#
from v$managed_standby;

-- The following query is run on the primary to see if SRL's have been
-- created in preparation for switchover.

select group#,sequence#,bytes from v$standby_log;

-- The above SRL's should match in number and in size with the ORL's
-- returned below:

select group#,thread#,sequence#,bytes,archived,status from v$log;

-- Non-default init parameters.

set numwidth 5
column name format a30 tru
column value format a48 wra
select name, value
from v$parameter
where isdefault = 'FALSE';

spool off

Wednesday, September 3, 2008

oradebug to suspend and resume another user's session

# You can suspend and resume another user's session using the oradebug command in sqlplus.

select spid, pid from v$process where addr = (select paddr from v$session where sid = 117);

SPID PID
------------ ----------
729304 32

SQL> oradebug setospid 729304
Oracle pid: 32, Unix process pid: 729304, image: oracle@p-test-oranode-30 (TNS V1-V3)
SQL> oradebug suspend
Statement processed.
SQL> oradebug resume
Statement processed.

Tuesday, September 2, 2008

Taking ASM disk groups offline.

# first shutdown the databases on the relevant disk groups.

# on ASM
ALTER DISKGROUP ASM_ORADATA31 DISMOUNT;
ALTER DISKGROUP ASM_ORADATA32 DISMOUNT;

SQL> select name, state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
ASM_ORADATA11 MOUNTED
ASM_ORADATA12 MOUNTED
ASM_ORADATA13 MOUNTED
ASM_ORADATA32 DISMOUNTED
ASM_ORADATA31 DISMOUNTED
ASM_ORADATA41 MOUNTED
ASM_ORADATA42 MOUNTED

ALTER DISKGROUP ASM_ORADATA31 MOUNT;
ALTER DISKGROUP ASM_ORADATA32 MOUNT;

Working with RMAN stored scripts

LIST SCRIPT NAMES;

PRINT SCRIPT full_backup;

PRINT GLOBAL SCRIPT full_backup TO FILE 'my_script_file.txt';

CREATE SCRIPT full_backup { BACKUP DATABASE PLUS ARCHIVELOG; DELETE OBSOLETE; }

CREATE GLOBAL SCRIPT global_full_backup COMMENT 'use only with ARCHIVELOG mode databases' { BACKUP DATABASE PLUS ARCHIVELOG; DELETE OBSOLETE; }

CREATE SCRIPT full_backup FROM FILE 'my_script_file.txt';

RUN { EXECUTE SCRIPT full_backup; }
RUN { EXECUTE GLOBAL SCRIPT global_full_backup; }

replace script delete_sl02sbt_archives
{ delete noprompt force archivelog LIKE '%sl02sbt%' completed before "sysdate-2"; }

IP address exclusion rules in sqlnet.ora

$ORACLE_HOME/network/admin/sqlnet.ora

tcp.validnode_checking = YES
tcp.invited_nodes = {list of IP addresses}
tcp.excluded_nodes = {list of IP addresses}

You cannot specify a range, wildcard, partial IP, or subnet mask.
You must put all invited nodes on one line; likewise for excluded nodes.
You should always include localhost as an invited node.
TCP.INVITED_NODES takes precedence over TCP.EXCLUDED_NODES.
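
A minimal sketch of what the entries might look like (the addresses below are made up for illustration):

tcp.validnode_checking = YES
tcp.invited_nodes = (localhost, 192.168.10.21, 192.168.10.22)
tcp.excluded_nodes = (192.168.10.99)

Restart the listener after editing sqlnet.ora so the new rules are picked up.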

Monday, September 1, 2008

Trigger events

DML_event_clause

DELETE

Specify DELETE if you want the database to fire the trigger whenever a DELETE statement removes a row from the table or removes an element from a nested table.


INSERT

Specify INSERT if you want the database to fire the trigger whenever an INSERT statement adds a row to a table or adds an element to a nested table.


UPDATE

Specify UPDATE if you want the database to fire the trigger whenever an UPDATE statement changes a value in one of the columns specified after OF. If you omit OF, then the database fires the trigger whenever an UPDATE statement changes a value in any column of the table or nested table.

For an UPDATE trigger, you can specify object type, varray, and REF columns after OF to indicate that the trigger should be fired whenever an UPDATE statement changes a value in one of the columns. However, you cannot change the values of these columns in the body of the trigger itself.
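
A minimal sketch of a DML trigger using these clauses (emp and its sal column are hypothetical names, not from the original notes):

create or replace trigger emp_sal_update_trg
after update of sal on emp
for each row
begin
  -- :old and :new are only available in row-level (for each row) triggers
  dbms_output.put_line('sal changed from '||:old.sal||' to '||:new.sal);
end;
/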

ddl_event

ALTER

Specify ALTER to fire the trigger whenever an ALTER statement modifies a database object in the data dictionary.

ANALYZE

Specify ANALYZE to fire the trigger whenever the database collects or deletes statistics or validates the structure of a database object.


ASSOCIATE STATISTICS

Specify ASSOCIATE STATISTICS to fire the trigger whenever the database associates a statistics type with a database object.


AUDIT

Specify AUDIT to fire the trigger whenever the database tracks the occurrence of a SQL statement or tracks operations on a schema object.


COMMENT

Specify COMMENT to fire the trigger whenever a comment on a database object is added to the data dictionary.


CREATE

Specify CREATE to fire the trigger whenever a CREATE statement adds a new database object to the data dictionary.


DISASSOCIATE STATISTICS

Specify DISASSOCIATE STATISTICS to fire the trigger whenever the database disassociates a statistics type from a database object.


DROP

Specify DROP to fire the trigger whenever a DROP statement removes a database object from the data dictionary.


GRANT

Specify GRANT to fire the trigger whenever a user grants system privileges or roles or object privileges to another user or to a role.


NOAUDIT

Specify NOAUDIT to fire the trigger whenever a NOAUDIT statement instructs the database to stop tracking a SQL statement or operations on a schema object.


RENAME

Specify RENAME to fire the trigger whenever a RENAME statement changes the name of a database object.


REVOKE

Specify REVOKE to fire the trigger whenever a REVOKE statement removes system privileges or roles or object privileges from a user or role.


TRUNCATE

Specify TRUNCATE to fire the trigger whenever a TRUNCATE statement removes the rows from a table or cluster and resets its storage characteristics.


DDL

Specify DDL to fire the trigger whenever any of the preceding DDL statements is issued.
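
A minimal sketch of a schema-level DDL trigger covering the events above (ddl_audit_log is a hypothetical table with event_type, object_name and event_date columns):

create or replace trigger ddl_audit_trg
after ddl on schema
begin
  -- ora_sysevent and ora_dict_obj_name are built-in event attribute functions
  insert into ddl_audit_log (event_type, object_name, event_date)
  values (ora_sysevent, ora_dict_obj_name, sysdate);
end;
/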

database_event

SERVERERROR

Specify SERVERERROR to fire the trigger whenever a server error message is logged.

The following errors do not cause a SERVERERROR trigger to fire:

  • ORA-01403: no data found

  • ORA-01422: exact fetch returns more than requested number of rows

  • ORA-01423: error encountered while checking for extra rows in exact fetch

  • ORA-01034: ORACLE not available

  • ORA-04030: out of process memory when trying to allocate string bytes (string, string)


LOGON

Specify LOGON to fire the trigger whenever a client application logs onto the database.


LOGOFF

Specify LOGOFF to fire the trigger whenever a client application logs off the database.


STARTUP

Specify STARTUP to fire the trigger whenever the database is opened.


SHUTDOWN

Specify SHUTDOWN to fire the trigger whenever an instance of the database is shut down.


SUSPEND

Specify SUSPEND to fire the trigger whenever a server error causes a transaction to be suspended.

DB_ROLE_CHANGE

In a Data Guard configuration, specify DB_ROLE_CHANGE to fire the trigger whenever a role change occurs from standby to primary or from primary to standby.
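
A minimal sketch of a database event trigger for the role change event; the notification here is just dbms_output as a placeholder, in practice you would start services or send an alert:

create or replace trigger db_role_change_trg
after db_role_change on database
begin
  -- fires after a switchover or failover completes
  dbms_output.put_line('Database role change detected at '||to_char(sysdate,'DD-MON-YYYY HH24:MI:SS'));
end;
/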


Saturday, August 30, 2008

Copy an archivelog file from ASM to disk using RMAN

backup as copy archivelog from logseq=78 until logseq=78 format '/tmp/%t%r%s%p';

# or

backup as copy archivelog sequence between 22804 and 22804 thread 2 format '/tmp/22804.arc';

CATALOG ARCHIVELOG '/disk1/arch_logs/archive1_731.dbf',
'/tmp/archive1_78.dbf';

RAC node shutdown / startup

# shutdown services on a RAC node
sudo -u oracle /opt/oracle/product/10.2.0/crs/bin/srvctl stop instance -d p01cfd -i pl01cfd2 -o abort
sudo -u oracle /opt/oracle/product/10.2.0/crs/bin/srvctl stop asm -n t-prod-oranode-11
sudo -u oracle /opt/oracle/product/10.2.0/crs/bin/srvctl stop nodeapps -n t-prod-oranode-11


# startup services on a RAC node
sudo -u oracle /opt/oracle/product/10.2.0/crs/bin/srvctl start nodeapps -n t-prod-oranode-11
sudo -u oracle /opt/oracle/product/10.2.0/crs/bin/srvctl start asm -n t-prod-oranode-11
sudo -u oracle /opt/oracle/product/10.2.0/crs/bin/srvctl start instance -d p01cfd -i pl01cfd2

Thursday, August 28, 2008

What do filesystemio_options and disk_asynch_io Do?

From Metalink note 432854.1

What do filesystemio_options and disk_asynch_io Do?
disk_asynch_io is a kind of master switch, which turns on or off Async I/O to database files on any type of storage, whether it's raw device or filesystem. The filesystemio_options parameter gives finer control over I/O to database files on filesystems. It allows you to turn off async I/O to filesystem files but keep async I/O to raw devices if the "master" switch disk_asynch_io is set to true.

Instance initialization parameter filesystemio_options has four options:
1. "asynch" : means buffered I/O + Async I/O
2. "directIO" : means Direct I/O only
3. "setall" : means Direct I/O + Async I/O
4. "none" : disables Async I/O and Direct I/O

You should always use at least Direct I/O with OCFS/OCFS2. In fact, there is no choice: the database automatically adds that mode whenever it sees that a file is on an OCFS volume, or on an OCFS2 volume mounted with the datavolume mount option.
If you want async I/O with OCFS/OCFS2, use setall.
If you want async I/O with ASM/ASMLib, set filesystemio_options=setall and disk_asynch_io=true. ASM's I/O is transparent to the RDBMS, but these two parameters still affect the RDBMS's behaviour; see Note 413389.1.
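
A short sketch of checking and changing these parameters; both are static, so changes go into the spfile and only take effect after a restart (test on a non-production instance first):

-- check current values
show parameter filesystemio_options
show parameter disk_asynch_io

-- example change
alter system set filesystemio_options = 'SETALL' scope = spfile;
alter system set disk_asynch_io = true scope = spfile;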

Wednesday, August 27, 2008

Oracle 10g proxy users

If you don't have the password for an account but need to connect to test or extract DDL, Oracle 10g allows you to connect as a proxy instead of resetting the password. It also lets you keep a single schema while allowing many named users to connect to it, with no need for grants, roles, or synonyms.

This also means that one named user can be a proxy for many different application schemas depending on the connect string.

# allow oradba to connect to bi_staging as a proxy
alter user bi_staging grant connect through oradba;

# connect to the bi_staging account using the oradba password
sqlplus oradba[bi_staging]/oradba_password

# check the environment
show user
select user from dual;
select sys_context('USERENV','PROXY_USER') from dual;

Using dbms_metadata.get_ddl

In Oracle 10g sys.link$ no longer contains a clear text password, so if you need to recreate a database link you must use the encrypted password, e.g.

SET LONG 10000
SELECT dbms_metadata.get_ddl('DB_LINK', 'MY_LINK')FROM dual;

CREATE DATABASE LINK "MY_LINK"
CONNECT TO "TEST_ACCOUNT" IDENTIFIED BY VALUES '05E6F2E494A9AA49D4E0CA5F42300237FE'
USING 'TEST_DB';

Thursday, August 21, 2008

NLS_LANG diagnostics

select parameter, value from v$nls_parameters where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

echo $NLS_LANG

# run csscan to check for issues when export/import between databases with different characterset

CSSCAN SYSTEM/MANAGER FULL=y TOCHAR=AL32UTF8 ARRAY=102400 PROCESS=3

Wednesday, August 20, 2008

Transportable tablespace

# on the source database

ALTER TABLESPACE test READ ONLY;
select file_name from dba_data_files where tablespace_name = 'TEST';

exp
userid=\"sys/change_on_install@olddb as sysdba\"
transport_tablespace=y
tablespaces=test
triggers=y
constraints=y
grants=y
file=test.dmp

copy test_01.dbf to the filesystem for the destination database

ALTER TABLESPACE test READ WRITE;

# on the destination database

imp
userid=\"sys/change_on_install@newdb as sysdba\"
transport_tablespace=y
datafiles='test_01.dbf'
file=test.dmp

ALTER TABLESPACE test READ WRITE;
select file_name from dba_data_files where tablespace_name = 'TEST';

External table to select from Oracle listener log

create directory LISTENER_LOG_DIR
as '/u01/app/oracle/10.1/db1/network/log'
/

create table listener_log
(
log_date date,
connect_string varchar2(300),
protocol_info varchar2(300),
action varchar2(15),
service_name varchar2(15),
return_code number(10)
)
organization external (
type oracle_loader
default directory LISTENER_LOG_DIR
access parameters
(
records delimited by newline
nobadfile
nologfile
nodiscardfile
fields terminated by "*" lrtrim
missing field values are null
(
log_date char(30) date_format
date mask "DD-MON-YYYY HH24:MI:SS",
connect_string,
protocol_info,
action,
service_name,
return_code
)
)
location ('listener_prolin01.log')
)
reject limit unlimited
/

External table to select from Oracle alert log

create or replace directory bdump
as '/u01/app/oracle/admin/ts01/bdump';

create table alert_log ( text varchar2(400) )
organization external (
type oracle_loader
default directory BDUMP
access parameters (
records delimited by newline
nobadfile
nodiscardfile
nologfile
)
location('alert_ts01.log')
)
reject limit unlimited
/

Tuesday, August 19, 2008

External table

create or replace directory test_dir as '/bie_oraexp';
grant read,write on directory test_dir to robert;

CREATE TABLE ext_test (id)
ORGANIZATION EXTERNAL
(TYPE oracle_datapump DEFAULT DIRECTORY test_dir LOCATION ('test_table.csv'))
as select * from table1;

UTL_FILE test

create or replace directory test_dir as '/bie_oraexp';
grant read,write on directory test_dir to robert;

create or replace procedure robert.utl_file_test_write (firstline in varchar2)
is
output_file utl_file.file_type;
begin
output_file := utl_file.fopen ('TEST_DIR','test_file.txt', 'W');
utl_file.put_line (output_file, firstline);
utl_file.fclose(output_file);
end;
/

Monday, August 18, 2008

Drop all Oracle users created in the last year

select 'drop user '||username||' cascade;' from dba_users where created > sysdate -365;

Unusable indexes can't be skipped if they are a primary key.

create table test(id integer primary key, text varchar2(10));
create index text_index on test(text);
insert into test values (1,'test1');
alter table test move tablespace users;

SQL> show parameter skip_unusable_indexes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
skip_unusable_indexes                boolean     TRUE

SQL> select table_name, index_name, status from user_indexes;

TABLE_NAME INDEX_NAME STATUS
------------------------------ ------------------------------ --------
TEST SYS_C005223 UNUSABLE
TEST TEXT_INDEX UNUSABLE

SQL> select * from test where id = 1;

ID TEXT
---------- ----------
1 test1

SQL> update test set text = 'test2';

1 row updated.

SQL> update test set id = '2';
update test set id = '2'
*
ERROR at line 1:
ORA-01502: index 'ROBERT.SYS_C005223' or partition of such index is in unusable
state
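
To clear the error, rebuild the unusable indexes (a sketch using the index names from the demo above, run as the table owner):

alter index sys_c005223 rebuild;
alter index text_index rebuild;

select table_name, index_name, status from user_indexes;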

Saturday, August 16, 2008

RMAN examples

# cold backup
rman
connect target /
connect catalog rcowner/password@rman
configure controlfile autobackup on;
startup mount;
run {
allocate channel disk1 device type disk format '/backup/rman_%d_%T_%s_%p';
backup as compressed backupset database;
}

# compressed backup
connect target /
RUN {
ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/backup/rman_%d_%T_%s_%p';
BACKUP as compressed backupset database plus archivelog;
}

# point-in-time restore
# restore controlfile
RMAN
CONNECT TARGET /
connect catalog rcowner/password@rman
set dbid 3298136637
RUN
{
sql 'alter session set NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"';
SET UNTIL TIME '2008-06-03 13:30:00';
ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/backup/rman_%d_%T_%s_%p';
RESTORE CONTROLFILE;
}

# restore database
# exit rman and login again (to avoid ORA-01460)
RMAN
CONNECT TARGET /
connect catalog rcowner/password@rman
RUN
{
sql 'alter session set NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"';
SET UNTIL TIME '2008-06-03 13:30:00';
ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/backup/rman_%d_%T_%s_%p';
ALTER DATABASE MOUNT;
RESTORE DATABASE;
}

# recover database
RUN
{
sql 'alter session set NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"';
SET UNTIL TIME '2008-06-03 14:00:00';
RECOVER DATABASE;
}

# open database
sqlplus '/ as sysdba'
ALTER DATABASE OPEN RESETLOGS;


# backup archivelogs for a specific day
backup archivelog from time 'SYSDATE-38' until time 'SYSDATE-37' filesperset 50 format '/arch1_%d_%u/';

# backup recovery area
BACKUP RECOVERY AREA;

# Incremental backups

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/oracle/admin/test/bct.dbf';

SELECT * FROM V$BLOCK_CHANGE_TRACKING;

BACKUP DEVICE TYPE SBT INCREMENTAL FROM SCN 750923 DATABASE;
BACKUP INCREMENTAL FROM SCN 750923 DATABASE;
BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750983 DATABASE FORMAT '/tmp/incr_standby_%U';

BACKUP AS COMPRESSED INCREMENTAL LEVEL 1 TAG = WEEKLY DATABASE PLUS ARCHIVELOG;

# rolling incremental backups

RUN {
RECOVER COPY OF DATABASE WITH TAG 'incr_backup' UNTIL TIME 'SYSDATE - 7';
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
}

# Rename datafile

select 'set newname for datafile '||file_id||' to '''||file_name||''';' from dba_data_files;

Check for scheduled and running Discoverer jobs

# all jobs

SELECT F.USER_NAME||' '||R.BR_WORKBOOK_NAME||' '||R.BR_JOB_ID
FROM DISCOVR_APPS.EUL5_BATCH_REPORTS R, APPS.FND_USER F, DBA_JOBS J
WHERE F.USER_ID = SUBSTR(R.BR_CREATED_BY,2,4)
AND J.JOB = R.BR_JOB_ID
ORDER BY F.USER_NAME;

# running jobs

SELECT /*+ ordered */ J.SID, R.BR_JOB_ID, F.USER_NAME, R.BR_WORKBOOK_NAME
FROM DISCOVR_APPS.EUL5_BATCH_REPORTS R, APPS.FND_USER F, DBA_JOBS_RUNNING J
WHERE F.USER_ID = SUBSTR(R.BR_CREATED_BY,2,4)
AND J.JOB = R.BR_JOB_ID;

Profile Options changes to enable diagnostics

Setting the profile options "FND: Diagnostics" and "Utilities: Diagnostics" to "Yes", and "Hide Diagnostics menu entry" to "No", gives the form user the ability to trace.
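
A hedged sketch to check the current site-level values of these options from the apps schema (joins assumed from the standard FND profile tables; adjust if your release differs):

select tl.user_profile_option_name, v.profile_option_value
from fnd_profile_options o,
     fnd_profile_options_tl tl,
     fnd_profile_option_values v
where o.profile_option_name = tl.profile_option_name
and tl.language = 'US'
and o.profile_option_id = v.profile_option_id
and v.level_id = 10001   -- site level
and tl.user_profile_option_name in
    ('FND: Diagnostics','Utilities: Diagnostics','Hide Diagnostics menu entry');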

Find the forms session holding a lock

SELECT to_char(start_time,'HH:MI'),SUBSTR(d.user_name,1,30) "User Name"
, a.pid
, b.sid
, b.serial#
FROM v$process a, v$session b, fnd_logins c, fnd_user d
WHERE a.pid = c.pid
AND d.user_id = c.user_id
AND a.addr = b.paddr
AND c.end_time IS NULL
and to_char(c.start_time,'DD-MON-YYYY') = '11-MAR-2007'
and b.sid = 28
order by start_time;

Shared Pool Usage

select 100-round(a.bytes/b.sm*100,2) pctused from
(select bytes from v$sgastat where name='free memory' AND pool='shared pool') a,
(select sum(bytes) sm from v$sgastat where pool = 'shared pool') b;

UNDO tablespace usage

# UNDO tablespace usage

select a.process, a.program, a.module, a.machine, b.USED_UREC, c.sql_text
from v$sql c, v$session a, v$transaction b
where b.addr = a.taddr
and a.sql_address = c.address
and a.sql_hash_value = c.hash_value
order by b.USED_UREC;

SELECT s.sid , s.username , t.used_ublk
FROM v$transaction t
, v$session s
WHERE 1 = 1
AND t.ses_addr = s.saddr;

column username format a15;
column segment_name format a15;
SELECT s.sid, s.username, t.used_ublk,
       round((t.used_ublk*8)/1024) size_in_mb_8kb_block_size,
       round((t.used_ublk*16)/1024) size_in_mb_16kb_block_size
FROM v$transaction t
, v$session s
WHERE 1 = 1
AND t.ses_addr = s.saddr;

SELECT distinct rpad(s.sid,3) "SID",S.USERNAME,
E.SEGMENT_NAME,
T.START_TIME "Start",
rpad(T.STATUS,9) "Status",
round((t.used_ublk*8)/1024) "Size(MB)"
--T.USED_UBLK||' Blocks and '||T.USED_UREC||' Records' "Rollback Usage"
FROM DBA_DATA_FILES DF,
DBA_EXTENTS E,
V$SESSION S,
V$TRANSACTION T
WHERE DF.TABLESPACE_NAME = E.TABLESPACE_NAME AND
DF.FILE_ID = UBAFIL AND
S.SADDR = T.SES_ADDR AND
T.UBABLK BETWEEN E.BLOCK_ID AND E.BLOCK_ID+E.BLOCKS AND
E.SEGMENT_TYPE in ('ROLLBACK','TYPE2 UNDO');

TEMP tablespace usage

#TEMP tablespace usage
column tablespace format a12
column username format a12
break on username nodup skip 1
select se.username
,se.sid
,su.extents
,su.blocks * to_number(rtrim(p.value)) as Space
,tablespace
,segtype
from v$sort_usage su
,v$parameter p
,v$session se
where p.name = 'db_block_size'
and su.session_addr = se.saddr
order by space , se.username, se.sid;

Coalesce qmn queues

# coalesce qmn queues

Select 'alter table applsys.aq$_'||queue_table||'_I coalesce;'
From dba_queues
Where owner = 'APPLSYS'
Union
Select 'alter table applsys.aq$_'||queue_table||'_T coalesce;'
From dba_queues
Where owner = 'APPLSYS'
Union
Select 'alter table applsys.aq$_'||queue_table||'_H coalesce;'
From dba_queues
Where owner = 'APPLSYS';

Check if a patch has been applied to Oracle Apps

select patch_level from fnd_product_installations where patch_level like '%FND%';
select patch_name from ad_applied_patches where patch_name in('5137963','4968815','4898130');
select bug_number from ad_bugs where bug_number in('5137963','4968815','4898130');

Test URLS for Oracle Apps 11i

# Test URLS after a clone or upgrade
http://server:8062/dev60cgi/f60cgi
http://server:8062/OA_HTML/US/ICXINDEX.htm
http://server:8062/pls/gfiinfd1/fnd_web.ping
http://server:8062/servlets/IsItWorking
http://server:8062/servlets/oracle.apps.icx.common.ConnectionTest
http://server:8062/OA_HTML/jsp/fnd/aoljtest.jsp
http://server:8062/servlets/oracle.apps.bne.framework.BneCaboTestServlet
http://server:8062/oa_servlets/BNETEST

# Oracle Applications Manager :-
http://server:8050/servlets/weboam/oam/oamLogin

# Diagnostics
http://server:8060/OA_HTML/jtfqalgn.htm

To speed up large patchsets or merge patches with adpatch

# To run a large merge patch in minimum time without multiple compiles etc., turn off the unnecessary options. After the patch has completed, you will need to run these steps manually using adadmin.

adpatch options=nogenform,nocompiledb,noautoconfig,nocompilejsp

#To run adpatch in silent mode, first edit adalldefaults.txt then run patches as follows :-

adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt patchtop=/TARGZ2TEMP/test/FPKRUP1/5610862_JA workers=15 options=nogenform,nocompiledb,noautoconfig,nocompilejsp logfile=u5610862_JA.log

Use FNDCPASS to change passwords

# change the apps password
FNDCPASS apps/apps 0 Y system/manager SYSTEM APPLSYS newappspassword

# change non-apps passwords
select 'FNDCPASS apps/apps 0 Y system/manager ORACLE '||oracle_username||' '||'newpassword;' from fnd_oracle_userid
where oracle_username not in ('APPS','APPS_MRC','APPLSYS','APPLSYSPUB','CTXSYS','EDWREP','PORTAL30_SSO','PORTAL30');

# change a user password
FNDCPASS apps/apps 0 Y system/manager USER sysadmin newpassword

java.lang.NoClassDefFoundError: oracle/apps/jtf/cache/CacheManager

java.lang.NoClassDefFoundError: oracle/apps/jtf/cache/CacheManager
1. Edit $APPL_TOP/admin/adjborg2.txt and comment out the line which has the reference to cache.jar or cache.zip.
2. Run adadmin Regenerate JAR Files.

Kill all unix processes matching a string

ps -ef |grep FNDLIBR | grep -v grep | awk '{print $2}' | xargs kill -9

WFRESGEN

WFRESGEN apps/apps 0 Y DATABASE /d01/app/applmgr/product/r11i/fnd/11.0.28/resource/US/wfcfg_ANZINTE.msg

Check for datafile corruption with dbverify

dbv FILE=/ghrqa2-ds1/ora10/oradata/ghrqa2/applsysd11.dbf BLOCKSIZE=8192

Manually compile a form using f45gen

f45gen userid=apps/apps module=ZGLX_ACCINQUIRY_US.fmb

Use ar25run to test a report

ar25run userid=apps/apps batch=yes destype=file desname=test.out desformat=HPW.prt \
report=/db11/applmgr/product/10.7_sgtst/mlapr/1.0.0/srw/MLFA0018.rdf \
P_BOOK='ML SGD FA BOOK' \
P_PERIOD1='JAN-2003' \
P_PERIOD2='FEB-2003' \
P_REPORT_TYPE='COST' \
P_ENTITY='2770' \
P_GL_ACCOUNT='1311600094' \
P_FR_COSTCTR='785087' \
P_TO_COSTCTR='785Z36'

Run a script across multiple databases

for dbservername in `cat /tmp/ora_server.lst`
do
echo `date '+%b %d %Y %T'` Checking $dbservername for something for Robert
$ORACLE_HOME/bin/sqlplus -s username/password@$dbservername @scanner.sql >> /tmp/robert_$dbservername
done

dbms_jobs

# check the status of jobs
select broken, failures from dba_jobs;

# execute a job
execute dbms_job.run (1);

# set jobs to broken so they don't run again
select 'execute dbms_job.broken('||job||',TRUE,'''||sysdate||''');' from dba_jobs where log_user = 'APPS' and broken = 'N';

Load a new workflow

WFLOAD apps/apps 0 Y UPLOAD FGIFA_ACCGEN_WF.wft

Find Unsuccessful Logins in the last 24 hours

select fu.user_name||' '||to_char(ful.attempt_time,'DD-MON-YYYY:HH24:MI')||' '||ful.login_name||' '||ful.terminal_id
from fnd_user fu, FND_UNSUCCESSFUL_LOGINS ful
where ful.user_id = fu.user_id (+)
and ful.attempt_time > sysdate - 1;

Use rcp to copy a directory

rcp -rp applmgr@server1:/db02/applmgr/product/10.7_hkpro ./

Use RSYNC to synchronise two directories

# First setup ssh keys to allow login with no password

rsync -avz -e ssh --delete xxx@server1:/xxx/ /xxx/
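
The one-time key setup referred to above might look like this (a sketch; use ssh-copy-id where it exists, otherwise append the public key to ~/.ssh/authorized_keys on server1 by hand):

ssh-keygen -t rsa
ssh-copy-id xxx@server1
ssh xxx@server1 date    # should now connect without prompting for a password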

Send a test email from Unix

echo "This is a test" | /usr/ucb/mail -s "Test Email" test@test.com

Get explain plan for select statement

explain plan for select * from gl_balances order by period_name;
select plan_table_output from table(dbms_xplan.display('plan_table'));

or

set autotrace on statistics;
select x from y;

or

alter session set timed_statistics=true;
alter session set max_dump_file_size=unlimited;
exec sys.dbms_support.start_trace(waits=>true, binds=>true);
-- run the code to be traced here
exec sys.dbms_support.stop_trace;

# turn on trace for a running session

select 'exec sys.dbms_system.set_ev('||sid||','||serial#||',10046,12,'''');' from v$session where sid in ('75');

begin
sys.dbms_system.set_sql_trace_in_session(165,31125,TRUE );
end;

tkprof sambprd_ora_29857.trc gps.tkp explain=username/password

ORADEBUG SETOSPID 21237
ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 4

Identify concurrent requests using the most system resource.

select fcp.USER_CONCURRENT_PROGRAM_NAME||'^'||count(*)||'^'||trunc(sum(actual_completion_date - actual_start_date)*24)
from fnd_concurrent_programs_tl fcp,
fnd_concurrent_requests fcr
where fcp.concurrent_program_id = fcr.concurrent_program_id
and fcp.language = 'US'
group by fcp.USER_CONCURRENT_PROGRAM_NAME
order by sum(actual_completion_date - actual_start_date)*24 desc;

Identify concurrent requests using a specific printer.

select
to_char(fcr.actual_completion_date,'dd-mon-yyyy:hh24:mi')||' '||
fcr.request_id||' '||
fu.user_name||' '||
fcp.user_concurrent_program_name
from fnd_concurrent_requests fcr, fnd_concurrent_programs_tl fcp, fnd_user fu
where fcr.concurrent_program_id = fcp.concurrent_program_id
and fcr.requested_by = fu.user_id
and fcp.language = 'US'
and fcr.number_of_copies > 0
and upper('hkfinp09') like upper(fcr.printer)
and sysdate - fcr.actual_completion_date < 30
order by fcr.actual_completion_date;

Create a public database link

create public database link xxx connect to user identified by password using 'xxx';

Adpatch hotpatch

Apply a small patch without the need to enable maintenance mode.

adpatch options=hotpatch

tar and gzip via a unix pipe to save space

# tar and gzip via a unix pipe
mknod mypipe p
gzip -c < mypipe > 9.2.0.tar.gz &
tar cvf mypipe 9.2.0

or

tar cf - gfiprd1appl | gzip -1 -c > gfiprd1appl.tar.gz

# gunzip and untar in one command via a unix pipe
gunzip < 9.2.0.tar.gz | tar xvf -

or

gzip -dc 920.tar.gz | tar -xvf -

Friday, August 15, 2008

ASM diagnostics statements.

select group_number, disk_number, total_mb/1024 total_gb, free_mb/1024 free_gb, name from v$asm_disk;

select group_number, name, total_mb/1024 total_gb, free_mb/1024 free_gb
from v$asm_diskgroup;

select type, sum(bytes) from v$asm_file group by type;

Rollback a patch with opatch

PATH=$PATH:$ORACLE_HOME/OPatch
opatch lsinventory
opatch rollback -id 6646853 -all_nodes
opatch lsinventory

create an spfile on ASM from a pfile in the OS

create spfile='+DISK/spfile' from pfile='/private/init.ora';

Reclaim space by purging the DBA Recycle Bin

SQL> select count(*), sum(space) from dba_recyclebin;

COUNT(*) SUM(SPACE)
---------- ----------
187 85504

purge dba_recyclebin;


# Flashback a dropped table.

select type, object_name, original_name from recyclebin;


TYPE    OBJECT_NAME                      ORIGINAL_NAME
------- -------------------------------- --------------------------------
TABLE   BIN$X/qPdSN0kATgQNwKilpFfQ==$0   X
TABLE   BIN$YRYsJpVJzxngQNwKilpz2Q==$0   ADDRESS
INDEX   BIN$YRYsJpVIzxngQNwKilpz2Q==$0   IDX_ADDR_DELETED
INDEX   BIN$YRYsJpVHzxngQNwKilpz2Q==$0   IDX_ADDR_CITYSTATE
INDEX   BIN$YRYsJpVGzxngQNwKilpz2Q==$0   IDX_ADDR_ADDRESS
INDEX   BIN$YRYsJpVFzxngQNwKilpz2Q==$0   IDX_ADDR_ADDRESSID


flashback table address to before drop;