Wednesday, September 23, 2009

Enable Oracle auditing BEFORE you need it.

With increasing application complexity, data privacy laws, hosting, outsourcing, and the use of electronic records in litigation, the need for database and application audit reports is growing.

  • If a client sued your company, could you give a full report listing changes to their account data ?
  • If a code change lost your company millions of dollars, could you identify who ran it ?
  • If you found client credit card numbers on the internet, could you identify the leak ?

All transactions that run in an Oracle database will leave evidence in the table blocks, undo tablespace, redo logs, control files, or in memory, but a lot of that evidence is transient, and searching through many GB of files and memory dumps with a hex editor is expensive and time consuming. With a few simple commands you can start collecting audit information for future use to clearly identify who, what, where, and when. In addition to tracking changes, auditing has side benefits for system management, like identifying unused tables, unused indexes, inactive accounts, failed transactions etc.

Oracle databases allow you to audit everything from a failed user login down to selecting a specific table attribute or row, but remember that auditing uses CPU and IO, so it will slow down the transactions you audit and increase disk usage. There is a trade-off between audit granularity and performance, so if DML execution time is critical on your system you should choose an audit method with minimum impact, only audit specific data like financial payments, or only audit DDL. Because audit can write a lot of data to tables or logs, you need to define a policy for retention, purging, and archival. Remember that any audit records on a compromised system may have been modified to cover up the breach, so extra care should be taken to secure audit records on the OS, in the database, or preferably on a secure remote server.
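As a rough sketch of a retention policy for the database audit trail (assuming AUDIT_TRAIL=DB on 11g, where SYS.AUD$ has the NTIMESTAMP$ column; the archive table name and 90-day retention period are illustrative, not a recommendation):

```sql
-- Archive audit records older than 90 days to a local table, then purge them.
-- Ideally the archive would then be copied to a secure remote server.
create table audit_archive as
  select * from sys.aud$
  where ntimestamp$ < systimestamp - interval '90' day;

delete from sys.aud$
  where ntimestamp$ < systimestamp - interval '90' day;
commit;
```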

As a rough guide, if you increase security on a system, you should increase auditing to confirm that security. If you are worried about access to sensitive data, then along with database security and auditing you should try to limit the number of access routes to that data, the number of copies of that data kept on disk, and also implement a data scrubbing process to execute when refreshing DEV and TEST environments from PROD.
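A data scrubbing step for non-production refreshes can be as simple as overwriting sensitive columns after the clone. A minimal sketch, using the hypothetical payroll schema from the examples later in this post:

```sql
-- Run against DEV/TEST immediately after refreshing from PROD,
-- before opening the environment to developers and testers.
update payroll.employee
   set salary = 0,
       notes  = 'SCRUBBED';
commit;
```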

  • Low security databases :- e.g. development and test databases, read-only historical databases, databases with transient data (e.g. not backed up). At an absolute minimum, all databases should audit logons. This will produce minimal records of who logged in, with no impact on DML transaction times.
  • Medium security databases :- The majority of databases should fit in this category - e.g. stock exchange trading databases, telecoms billing databases, finance systems. These databases should audit SESSION, DDL, and DCL to identify changes in table structure, code, and privileges. This will produce minimal records with no impact on DML transaction times. If additional audit is needed, then enabling supplemental logging will capture all DDL and DML transactions with minimal additional server load so they can be retrieved by log mining if needed.
  • High security databases :- This includes databases with sensitive information like HR salary and bonus information, credit card details, password stores, audit stores etc. These databases should audit SESSION, DDL, and DCL, and data owners should identify what level of DML audit or even select audit is required, which tables should be audited, and what form of auditing is easiest to manage. Enabling supplemental logging will capture all DML and DDL transactions with minimal additional server load so they can be retrieved by log mining if needed. If the audit records need to be accessed on a regular basis, then an auditing solution which leaves records in tables would be a better solution than Logminer.


Below are some basic methods that Oracle provides for database auditing, with examples of how to access the audit information.

AUDIT_SYS_OPERATIONS parameter - Audit connections by sys and sysdba to .aud files.
When you enable the parameter audit_sys_operations, the database will write a trace file of the session actions to the audit file destination (audit_file_dest). This parameter should be enabled on ALL production databases.
Enable It
alter system set audit_sys_operations=TRUE scope=spfile;
-- then restart
shutdown immediate;
startup

Verify it
show parameter audit_sys_operations;

What does it give me ?
Check the .aud file. You will find a log of transactions run by the database session including SESSION, DML, DDL, Select.

More information on this parameter can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams.htm#REFRN10005

AUDIT_TRAIL parameter - Audit SESSION, DCL, DDL, DML, Select statements to table AUD$ or OS files.
Setting up Oracle Audit using the audit_trail parameter will cause the database to write audit records to table SYS.AUD$ or OS files. The DBA can then choose which audit options to enable to capture information about SESSION, DDL, DCL, DML, and select statements. This is the easiest and most common method of auditing an Oracle database, and this parameter should be set on ALL production databases. Usually there will be far more DML statements than DDL and DCL, so many companies choose to only audit SESSION, DDL, and DCL.

AUDIT_TRAIL settings

Parameter Value Meaning
DB :- Enables database auditing and directs all audit records to the database audit trail (SYS.AUD$), except for records that are always written to the operating system audit trail
DB_EXTENDED :- Does all actions of AUDIT_TRAIL=DB and also populates the SQL bind and SQL text columns of the SYS.AUD$ table
XML :- Enables database auditing and directs all audit records in XML format to an operating system file
XML_EXTENDED :- Does all actions of AUDIT_TRAIL=XML and also populates the SQL bind and SQL text columns
OS :- (recommended) Enables database auditing and directs all audit records to an operating system file

Enable it

alter system set audit_trail=DB, EXTENDED scope=spfile; -- 11g syntax; on 10g use audit_trail=DB_EXTENDED
shutdown immediate;
startup;

# Audit logon, DDL and DCL (the following is only a basic list of audit options - see documentation for more detail)

audit create session by access; -- this collects login details including OSUSER, HOST etc, but unfortunately not program.
audit audit system by access;
audit grant any privilege by access;
audit grant any object privilege by access;
audit grant any role by access;
audit create user by access;
audit create any table by access;
audit create public database link by access;
audit create any procedure by access;
audit alter user by access;
audit alter any table by access;
audit alter any procedure by access;
audit alter database by access;
audit alter system by access;
audit alter profile by access;
audit drop user by access;
audit drop any procedure by access;
audit drop any table by access;
audit drop profile by access;

# If you choose to audit DML on a schema then it may generate a lot of data. Enable DML audit selectively.

audit select table, insert table, update table, delete table by payroll by access;

Verify it
show parameter audit_trail;

select * from dba_stmt_audit_opts
union
select * from dba_priv_audit_opts;

What does it give me ?

Select from the AUD$, DBA_AUDIT_TRAIL, DBA_AUDIT_SESSION tables/views to find records relating to audited SESSION, DDL, DML, DCL, Select statements.


-- Basic audit output listing user actions
select os_username,username,timestamp,action_name,returncode from dba_audit_session;

-- Audit report including the transaction statement for user PAYROLL.

select os_username,username,timestamp,action_name,sql_text
from dba_audit_trail where username = 'PAYROLL' order by timestamp;

# Remember that any information that identifies client information such as module and osuser could be faked from the client, so should be separately verified.


More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/network.111/b28531/auditing.htm#BCGIDBFI

AUDIT VAULT - copy audit records to a remote server for protection and analysis.
Oracle Audit Vault is an Oracle product for storing and managing audit settings and audit records, which must be purchased in addition to your database license. An agent installed on the database server collects audit records from AUD$, FGA_LOG$, or OS files and loads them into a warehouse secured by Database Vault. Audit Vault can also extract audit settings for viewing, modification, or copying between databases. It comes with pre-built fact and dimension tables, load scripts, and pre-built reports for viewing logons, transactions etc. You can use the Audit Vault console to configure auditing rules.

More information can be found here :- http://download.oracle.com/docs/cd/E13850_01/doc.102/e13842/avusr_overview.htm#CJAJIDID

AWR / STATSPACK - record information from dynamic tables for later analysis.
AWR is part of the Database Diagnostic Pack, which is an extra-cost option on top of your database license. AWR is intended to replace STATSPACK, but Statspack is still available in Oracle 11g. Both tools read from database dynamic views and write to WRH$ and STATS$ tables for later use in performance analysis reports, but these tables can also be used for checking transaction history. Because these tools were written for performance analysis, they only log transactions with CPU and IO over pre-set thresholds, so the WRH$ and STATS$ tables are only useful for checking large transactions; small transactions may not be recorded.
Enable it
Take a snapshot to save dynamic view information to permanent tables.
-- AWR
exec dbms_workload_repository.create_snapshot;
--Statspack
exec statspack.snap
Verify it
--AWR
select snap_id, first_time from sys.wrh$_log;

--Statspack
select snap_id, snap_time from perfstat.stats$snapshot;

What does it give me ?
Select from the WRH$ and STATS$ tables to find large DDL, DML, Select transactions that started prior to the time the snap was taken.

--AWR report listing sql statements for table "employee"
select sql_text from wrh$_sqltext where sql_text like '%employee%';
--Statspack report listing sql statements for table "employee"
select sql_text from perfstat.STATS$SQLTEXT where sql_text like '%employee%';
More information on AWR can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/autostat.htm

COMMENTS in STATEMENTS and COMMITS

To more easily trace the source of statements in V$SQL or DBA_2PC_PENDING developers can add statement and commit comments.

Example of a commit comment

--run an insert to a remote database with a commit comment
insert into payroll.employee_interface@prod values (1,'JOHN');
commit comment 'EMPLOYEE_INTERFACE_LOAD_DB_LINK';
What does it give me ?
select local_tran_id, tran_comment from dba_2pc_pending;
Example of a statement comment
insert /* EMPLOYEE_LOAD_PKG.LOAD_EMPLOYEE */ into payroll.employee select * from payroll.employee_interface;
What does it give me ?
select sql_text from v$sql where sql_text like '%EMPLOYEE_LOAD_PKG%';
DATABASE VAULT - an extra layer of security to stop and/or audit access to specific data and transactions.

If you have the Database Vault option enabled you can use it to audit on DDL and DML. For example if a new Realm was created to protect the PAYROLL schema, and set to audit on Success and Failure, then it would record all actions on that schema. Database Vault is an extra cost option on top of your database license.

Enable it

-- Create a new PAYROLL realm
exec dvsys.dbms_macadm.create_realm('PAYROLL','PAYROLL','Y',3);
-- Add the payroll schema to the realm
exec dvsys.dbms_macadm.add_auth_to_realm('PAYROLL','PAYROLL',1);
-- Secure all objects in the payroll schema.
exec dvsys.dbms_macadm.add_object_to_realm('PAYROLL','PAYROLL','%','%');
Verify it
select id#, name from dvsys.realm_t$ where name = 'PAYROLL';
select grantee from dvsys.realm_auth$ where realm_id# in (
select id# from dvsys.realm_t$ where name = 'PAYROLL');
select owner, object_name from dvsys.realm_object$ where realm_id# in (
select id# from dvsys.realm_t$ where name = 'PAYROLL');
What does it give me ?

The ability to select DDL, DML, SELECT audit information from tables owned by DVSYS, or view security reports in the Database Vault console.

select timestamp, username, userhost, action_name, returncode from dvsys.audit_trail$ where username = 'PAYROLL';

-- Or you can use the Database Vault console to view security violations

More information on Oracle Database Vault can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b31222/dvintro.htm#DVADM001

DBMS_APPLICATION_INFO
You can call DBMS_APPLICATION_INFO.SET_MODULE and DBMS_APPLICATION_INFO.SET_ACTION from the application code to populate V$SESSION.MODULE and V$SESSION.ACTION. Well-written applications should use DBMS_APPLICATION_INFO to provide more information about running programs.
Enable it
exec dbms_application_info.set_module('EMPLOYEE_INTERFACE','IMPORT_EMPLOYEE_PKG');
exec dbms_application_info.set_action('VERIFY_EMPLOYEE_PKG');

What does it give me ?
The ability to more easily track the actions of a program via V$SESSION, and associated trace files.

select module, action from v$session where module = 'EMPLOYEE_INTERFACE';

DBMS_CRYPTO - collect checksums to be certain
Audit records found on a compromised system are always suspect, so to be absolutely certain of what has changed one option is to record row and object checksums and keep them in a remote secure location. Very few databases require this level of audit.
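A minimal sketch of row checksumming with DBMS_CRYPTO (requires execute privilege on DBMS_CRYPTO; the concatenation format and the payroll.employee table are illustrative - in practice you would spool or replicate the results to a secure remote location):

```sql
-- SHA-1 checksum per row; compare against a previously stored copy to detect changes.
select employee_id,
       dbms_crypto.hash(
         utl_raw.cast_to_raw(employee_id || '|' || employee_name || '|' || salary),
         dbms_crypto.hash_sh1) as row_checksum
from payroll.employee;
```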

More information in this article by Paul M Wright :- http://www.oracleforensics.com/wordpress/index.php/2007/07/25/forensic-checksumming-on-all-versions-of-supported-oracle-databases/

DBMS_ERRLOG - Error Logging into tables
Error logging tables make use of the DBMS_ERRLOG package and the "log errors into" clause.
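A minimal sketch, assuming the payroll.employee and payroll.employee_interface tables used elsewhere in this post:

```sql
-- Create the error logging table (named ERR$_EMPLOYEE by default).
exec dbms_errlog.create_error_log(dml_table_name => 'EMPLOYEE', err_log_table_owner => 'PAYROLL');

-- Failed rows are recorded in the error table instead of aborting the whole load.
insert into payroll.employee
  select * from payroll.employee_interface
  log errors into payroll.err$_employee ('nightly load') reject limit unlimited;

-- Review the failed rows, the Oracle error, and the tag for each.
select ora_err_number$, ora_err_mesg$, ora_err_tag$ from payroll.err$_employee;
```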
More information can be found here :- http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9014.htm#BGBEIACB

DBMS_FGA - Audit specific statements to table FGA_LOG$
Fine Grained Audit allows you to audit very specific changes to reduce audit records and limit the impact on performance - e.g. only audit a select statement when it includes the SALARY attribute, or only audit changes to bank account details.
Enable it
-- Audit select of attribute PAYROLL.EMPLOYEE.SALARY
exec dbms_fga.add_policy (object_schema=>'PAYROLL',
object_name=>'EMPLOYEE',
policy_name=>'EMPLOYEE_SALARY',
audit_column => 'SALARY',
statement_types => 'SELECT');

Verify it
select * from dba_audit_policies;
What does it give me ?
The ability to audit very specific actions to minimise audit records - e.g only selects that include a specific attribute.
select timestamp,db_user,object_schema,object_name,scn,sql_text
from dba_fga_audit_trail
where db_user = 'PAYROLL';
More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/network.111/b28531/auditing.htm#DBSEG525

DBMS_FLASHBACK / FLASHBACK QUERY - select old data from UNDO tablespace
Flashback query selects from the UNDO tablespace. Undo records are aged out of the undo tablespace, so to retain them longer you may want to increase undo retention, increase the size of the undo tablespace, or use UNDO RETENTION GUARANTEE when you create your undo tablespace. Because UNDO records are generated by normal database transactions anyway, Flashback Query adds no extra workload, but since undo records are aged out this is not a good solution for checking what happened last month. For long term storage of UNDO records see FLASHBACK ARCHIVE below.

-- Using DBMS_FLASHBACK to select a previous state from the employee table
exec dbms_flashback.enable_at_time(to_timestamp('2009-07-08 22:37:24', 'YYYY-MM-DD HH24:MI:SS'));
select salary from payroll.employee where employee_id = 1;
exec dbms_flashback.disable;

More information can be found in the Oracle Database PL/SQL Packages and Types Reference :- http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_flashb.htm#BGBHHJHI

-- Using the "as of timestamp" clause to select a previous state from the employee table
select salary
from payroll.employee as of timestamp to_timestamp('2009-07-08 22:37:24', 'YYYY-MM-DD HH24:MI:SS')
where employee_id = 1;
More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/consist.htm#CNCPT421

DBMS_LOGMNR / SUPPLEMENTAL LOGGING - Log transaction statements to REDO logs.
Supplemental logging adds the transaction statement into the redo logs. Logminer can then extract and store those transactions from the archive logs. Supplemental logging only adds a small performance impact, and you only need to mine the data you want, when you want it, so this is a low impact option for auditing all DDL and DML, but it does not audit selects.
Enable it
shutdown immediate;
startup mount exclusive;
alter database add supplemental log data;
shutdown immediate;
startup

Verify it
select supplemental_log_data_min from v$database;
What does it give me?
Logminer records the redo and undo for all transactions in the database (DDL, DML, DCL, triggers, audit etc), so a huge amount of data is collected, including all DDL and DML on the tables with supplemental logging - but unfortunately not select statements.

-- Mine a specific log
select name, first_time from v$archived_log;
exec dbms_logmnr.add_logfile(logfilename => '/ora01/flash_recovery_area/AUDIT1/archivelog/2009_07_08/o1_mf_1_108_55b65m23_.arc', OPTIONS => DBMS_LOGMNR.NEW);
exec dbms_logmnr.start_logmnr(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- Generate a report on the transactions that ran against table PAYROLL.EMPLOYEE.
select scn, operation, sql_redo, sql_undo from V$LOGMNR_CONTENTS where table_name = 'EMPLOYEE' and username = 'PAYROLL';
More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/logminer.htm#i1021068

CHANGE DATA CAPTURE - Log table changes to a CDC table.
Change Data Capture uses Logminer to log data changes to a change table.
Enable it.
-- Create the change table.
exec dbms_logmnr_cdc_publish.create_change_table ('payroll','payroll_ct','SYNC_SET','payroll','employee','employee_id NUMBER,salary NUMBER','BOTH', 'Y','N','N','Y','N','Y','N','','N');
-- Create a logminer subscription handle
variable subhandle NUMBER;
execute dbms_logmnr_cdc_subscribe.get_subscription_handle (CHANGE_SET => 'SYNC_SET', DESCRIPTION => 'Changes to employee table', SUBSCRIPTION_HANDLE => :subhandle);
-- Create a logminer subscription
execute dbms_logmnr_cdc_subscribe.subscribe (subscription_handle => :subhandle, source_schema => 'payroll', source_table => 'employee', column_list => 'employee_id, salary');
-- Activate the subscription
execute dbms_logmnr_cdc_subscribe.activate_subscription (SUBSCRIPTION_HANDLE => :subhandle);
Verify it.
select count(*) from payroll_ct;
What does it give me ?

-- Select from the change table to list DML data changes.

variable viewname varchar2(40)
execute dbms_logmnr_cdc_subscribe.extend_window (subscription_handle => :subhandle);
execute dbms_logmnr_cdc_subscribe.prepare_subscriber_view (SUBSCRIPTION_HANDLE => :subhandle,
SOURCE_SCHEMA => 'payroll',
SOURCE_TABLE => 'employee',
VIEW_NAME => :viewname);
print viewname
column myview new_value subscribed_view noprint
select :viewname myview FROM dual;
select * FROM &subscribed_view;

More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/cdc.htm#insertedID0

DBMS_MVIEW
Materialised Views can be used to copy tables to a remote location for comparison with the primary dataset. Materialised View Logs can also be used to identify rows that have changed since the last refresh.
Enable it
grant create materialized view to payroll_audit;
-- Create the mview log
connect payroll
create materialized view log on employee;
grant select on payroll.employee to payroll_audit;
grant select on MLOG$_EMPLOYEE to payroll_audit;
-- Create the mview
connect payroll_audit
create materialized view payroll_audit.employee_mview refresh fast as select * from payroll.employee;
exec dbms_mview.refresh('EMPLOYEE_MVIEW','F');
Verify it
select * from employee_mview;
What does it give me ?
Mviews can be used to copy user data and audit records to a secure database at regular intervals, and you can select mview log entries for transactions since the last mview refresh.
select employee_id, dmltype$$ from payroll.MLOG$_EMPLOYEE;

More information can be found here :- http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_mview.htm#CEGHCECB

DBMS_SESSION
You can call DBMS_SESSION.SET_IDENTIFIER from the application to populate V$SESSION.CLIENT_IDENTIFIER with additional information. This is useful for keeping track of session information and trace files.
Enable it.
exec dbms_session.set_identifier('EMPLOYEE_BATCH');
What does it give me ?
The ability to more easily track user sessions in the database.
select sid from v$session where client_identifier = 'EMPLOYEE_BATCH';
DBMS_STREAMS_ADM - capture, propagate, transform, and apply DDL and DML changes to another table or another database.
Steams combines Logminer and AQ to provide a method to replicate data to another location. This can be used to copy data to a secure schema or remote database for later comparison with the primary data.

An example can be found here :- http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14229/capappdemo.htm

It is also possible with Streams to send archivelogs to a remote server and do the Logmining there to avoid anyone intercepting the logs, mined data, or AQ transactions before they are replicated.

Enable it.
-- Example code to create a Streams Capture Rule.
begin
dbms_streams_adm.ADD_TABLE_RULES(
table_name => 'payroll.employee',
streams_type => 'capture',
streams_name => 'payroll_employee_stream',
queue_name => 'strmadmin.employee_queue',
include_dml => true,
include_ddl => true,
inclusion_rule => true);
end;
/

-- Example code to create a Streams Propagate Rule.
begin
dbms_streams_adm.ADD_TABLE_PROPAGATION_RULES(
table_name => 'payroll.employee',
streams_name => 'payroll_employee_propagate',
source_queue_name => 'strmadmin.employee_queue',
destination_queue_name => 'strmadmin.employee_queue@AUDIT',
include_dml => true,
include_ddl => true,
source_database => 'PROD',
inclusion_rule => true);
end;
/
-- Example code to create a Streams Apply Rule.
begin
dbms_streams_adm.ADD_TABLE_RULES(
table_name => 'payroll.employee',
streams_type => 'apply',
streams_name => 'payroll_employee_apply',
queue_name => 'strmadmin.employee_queue',
include_dml => true,
include_ddl => true,
source_database => 'PROD',
inclusion_rule => true);
end;
/
Verify it.
update employee@prod set salary = 100 where employee_id=1;
select salary from employee@AUDIT where employee_id=1;
What does it give me ?

DML and DDL replicated to an audit log table, or a secure remote database. It is also possible to add rules to transform data into log formats or skip certain transactions.

DBMS_WORKLOAD_CAPTURE - records all transactions over a period to a file for playback.
DBMS_WORKLOAD_CAPTURE captures database activity to a file for playback. The file format is binary, so it is not easily used for audit purposes, but it is possible to grep specific statements out of the file. This tool records all DML, DDL, and Select statements, so it can create very large files very quickly and is really only suitable for short-term auditing.

Enable it
create or replace directory replay as '/ora01/replay';
-- Start capture to file
begin
dbms_workload_capture.start_capture (name => 'test_capture',
dir => 'REPLAY',
duration => NULL);
end;
/
-- End capture to file
begin
dbms_workload_capture.finish_capture;
end;
/
What does it give me ?
Files containing all DDL, DML, Select transactions run during the capture period. The output files can be played back, or searched for specific table names etc.

More information on Oracle Workload Capture can be found here :- http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_workload_capture.htm#CHDEDGGH

FLASHBACK DATABASE - return the database to a previous state.

Flashback database relies on additional Flashback Logs being written to the db_recovery_file_dest. This is an extra set of logs for the database to write, so it will increase total IO and CPU usage on the database. Flashback logs can take up a lot of space, so usually DBAs will need to set a target retention period to limit the logs kept on disk, meaning that flashback database is only useful to roll back changes that happened recently - e.g. in the last 24 hours.
Enable it

shutdown immediate;
startup mount exclusive;
alter database flashback on;
shutdown immediate;
startup;
Verify it
show parameter db_recovery_file_dest;
show parameter db_recovery_file_dest_size;
select flashback_on from v$database;
select * from v$flashback_database_log;
What does it give me ?

The ability to flashback the whole database to a previous point in time to compare previous data values.

More information on Flashback Database can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/backrec.htm#CNCPT1440

FLASHBACK ARCHIVE - store UNDO records in a permanent tablespace for long-term storage.

Flashback Archive / Oracle Total Recall extends the use of flashback query by saving the undo records in a tablespace so they are retained for a longer period. As it saves data from the UNDO tablespace, there should not be a performance impact on transactions. Oracle Total Recall is an extra cost option in Oracle 11g.

Enable it
create flashback archive default payroll_fla tablespace payroll_data quota 1G retention 1 year;
alter table payroll.employee flashback archive payroll_fla;
What does it give me ?

The same result as flashback query, but a much longer retention period.

More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm#ADFNS01011

PROFILES - Log extra information to USER$ and USER_HISTORY$
User profiles can be useful to record additional information about login failures, password changes, and stop reuse of passwords.
Enable it
create profile payroll_profile limit failed_login_attempts 10;
alter profile payroll_profile limit password_reuse_max 10;
alter user payroll profile payroll_profile;
Verify it
select * from dba_profiles where profile = 'PAYROLL_PROFILE';
select profile from dba_users where username = 'PAYROLL';
What does it give me ?

# Password history
select u.name, uh.password, uh.password_date from user_history$ uh, user$ u where u.user#=uh.user# and u.name = 'PAYROLL';
# Users who have had failed logins since the last successful login.
select name, ltime, lcount from user$ where lcount >= 1;
More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/network.111/b28531/authentication.htm#DBSEG33236

RECYCLEBIN - store dropped tables for emergency restore.
The Oracle Recyclebin feature allows DBAs to see what tables have been dropped (until the recyclebin has been purged). Recyclebin functionality only impacts dropping tables, so won't cause a performance impact to DML transactions. The DBA can purge the recyclebin at regular intervals, and the objects in the recyclebin can be purged automatically by the database if it needs the space.
Enable it
alter system set recyclebin=on;
Verify it
show parameter recyclebin;
What does it give me ?

The ability to select from dropped tables, and restore them if needed.
select object_name, original_name from user_recyclebin;
More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tables011.htm#ADMIN11681

TRACE FILES
  • It is possible to trace every database session by setting init.ora parameter sql_trace=TRUE, but the volume of trace files generated will probably be too big to search.
  • Oracle recommend that you set parameter trace_enabled=TRUE so that system errors are logged to trace files.
  • If you know the SID, or PID you can enable tracing to record what a session is running. You can also automate this via a trigger.

In 10g trace files will be written to user_dump_dest. In 11g trace files will be written to $ORACLE_BASE/diag/rdbms/DB_NAME/$ORACLE_SID/trace. Because so much information is logged, tracing is usually only enabled for a specific session for a short period of time, and DBAs need to decide what level of information to trace (e.g waits, bind variables etc)
Enable it.
Session tracing can be enabled via multiple methods including ALTER SESSION, ALTER SYSTEM, DBMS_SESSION, DBMS_SUPPORT, DBMS_SYSTEM, DBMS_MONITOR.

-- ALTER SESSION
alter session set tracefile_identifier = 'robert_trace'; -- so that the trace files from this session are easily identified
alter session set events '10046 trace name context forever, level 8';
-- ALTER SYSTEM
alter system set sql_trace=true;
-- DBMS_SESSION
exec dbms_session.set_sql_trace(sql_trace => TRUE);
-- DBMS_SUPPORT
exec dbms_support.start_trace(waits=>TRUE, binds=>FALSE);
-- DBMS_SYSTEM
exec sys.dbms_system.set_ev(72,21237,10046,12,'');
-- ORADEBUG
select username, spid from v$process;
ORADEBUG SETOSPID 21237
ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12
-- DBMS_MONITOR
exec dbms_monitor.session_trace_enable(session_id=>75, serial_num=>21237);
What does it give me ?
Any DDL, DCL, DML, Select transaction run after session tracing is enabled will be logged to the trace file.

More information can be found here :- http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_monitor.htm#i1003993

TRIGGERS - custom triggers and plsql to log audit to custom tables.

Triggers allow very specific auditing based on custom code, so they can record exactly the information you want, but because the trigger fires during the transaction, the transaction must wait for the audit code to complete, impacting performance. Triggers can also call packages like UTL_FILE to write OS logfiles, UTL_MAIL to email DBAs, or UTL_HTTP to post messages to central logging websites. As this audit method uses custom code it will require development and testing effort, but it can be good for event-driven, targeted auditing.

LOGON triggers
Logon triggers can be useful for recording specific information not available in AUD$ - e.g. audit session to AUD$ does not record the program used.
Audit user logon details to a table
Enable it.

create table payroll_audit.audit_users
(
username VARCHAR2(30),
osuser VARCHAR2(30),
sid NUMBER ,
host VARCHAR2(30),
ip_address VARCHAR2(30),
program VARCHAR2(48),
logon_time DATE
) ;

create or replace trigger sys.logon_audit_trigger
after logon on database
begin
insert
into payroll_audit.audit_users VALUES
(
sys_context('USERENV','SESSION_USER'),
sys_context('USERENV','OS_USER') ,
sys_context('USERENV','SID') ,
sys_context('USERENV','HOST') ,
sys_context('USERENV','IP_ADDRESS') ,
sys_context('USERENV','MODULE') ,
sysdate
);
end;
/
What does it give me ?

Select from the audit table to find out who logged into what schema.
select logon_time,username,program from audit_users
order by logon_time;
Trace actions by a particular user.
Enable it.
create or replace trigger payroll_trace_trigger
after logon on database
when (user='PAYROLL')
begin
execute immediate 'alter session set sql_trace=true';
end;
/
What does it give me ?

Check the trace file to see all DML, DDL, Select transactions run by the session.
Write custom login messages to the alert log.
Enable it.
create or replace trigger alert_trigger
after logon on database
when (user='PAYROLL')
begin
dbms_system.ksdwrt(2, 'ORA-20000 Login by user PAYROLL');
end;
/
What does it give me ?

Check the alert log to see when there were logins by user PAYROLL.

DDL Triggers

You can use DDL triggers and the Oracle system event attribute functions (ora_sql_txt, ora_dict_obj_name, etc.) to capture the current statement.
Enable it.
create table ddl_audit
(audit_date date,
username varchar2(30),
instance_number integer,
database_name varchar2(128),
object_type varchar2(19),
object_owner varchar2(30),
object_name varchar2(128),
sql_text varchar2(2000));

create or replace trigger BEFORE_DDL_TRG before ddl on database
declare
l_sql_text ora_name_list_t;
l_count NUMBER;
l_puser VARCHAR2(30) := NULL;
l_sql varchar2(2000);
begin
l_count := ora_sql_txt(l_sql_text);
l_puser := sys_context('USERENV', 'SESSION_USER');
for i in 1..l_count
loop
l_sql := l_sql||l_sql_text(i);
end loop;

insert into ddl_audit (audit_date, username, instance_number, database_name, object_type, object_owner, object_name, sql_text)
values (sysdate, l_puser,ora_instance_num,ora_database_name,ora_dict_obj_type,ora_dict_obj_owner,ora_dict_obj_name,l_sql);

exception
-- never let a failed audit insert block the DDL itself
when others then
null;
end;
/
show errors;
What does it give me ?
Select from the table to get a list of DDL statements executed in the database.
select * from ddl_audit;
DML Triggers

A well designed application will have table attributes recording who changed a row and when. Although these attributes can be populated by the application, it is better to populate them with a trigger so that ad-hoc updates and data fixes are also recorded.

Enable it.
create table payroll.employee(
EMPLOYEE_ID NUMBER(38),
EMPLOYEE_NAME VARCHAR2(30),
SALARY NUMBER(38),
NOTES VARCHAR2(100),
CREATED_BY VARCHAR2(30), -- audit attribute populated by trigger
CREATED_DATE DATE, -- audit attribute populated by trigger
LAST_MODIFIED_BY VARCHAR2(30), -- audit attribute populated by trigger
LAST_MODIFIED_DATE DATE); -- audit attribute populated by trigger

alter table payroll.employee add constraint employee_pk primary key (employee_id);
In addition it is possible to use triggers to record row history in a logging table.
create table payroll_audit.employee_audit(
DML CHAR(1),
EMPLOYEE_ID NUMBER(38),
EMPLOYEE_NAME VARCHAR2(30),
SALARY NUMBER(38),
NOTES VARCHAR2(100),
CREATED_BY VARCHAR2(30),
CREATED_DATE DATE,
LAST_MODIFIED_BY VARCHAR2(30),
LAST_MODIFIED_DATE DATE);

create or replace trigger payroll.employee_audit_trg
before insert or update or delete on payroll.employee
for each row
declare
dml char(1);
begin
if inserting
then

dml := 'I';

-- update the audit details for the record

:new.created_by := sys_context('USERENV','SESSION_USER');
:new.created_date := sysdate;
:new.last_modified_by:=sys_context('USERENV','SESSION_USER');
:new.last_modified_date := sysdate;

-- add a record to the audit table

insert into payroll_audit.employee_audit (dml, employee_id, employee_name, salary, notes, created_by, created_date, last_modified_by, last_modified_date) values (dml, :new.employee_id, :new.employee_name, :new.salary, :new.notes, :new.created_by, :new.created_date, :new.last_modified_by, :new.last_modified_date);
elsif updating
then
dml := 'U';

-- update the audit details for the record

:new.last_modified_by:=sys_context('USERENV','SESSION_USER');
:new.last_modified_date := sysdate;

-- add a record to the audit table
insert into payroll_audit.employee_audit (dml, employee_id, employee_name, salary, notes, created_by, created_date, last_modified_by, last_modified_date) values (dml, :new.employee_id, :new.employee_name, :new.salary, :new.notes, :new.created_by, :new.created_date, :new.last_modified_by, :new.last_modified_date);
else
dml := 'D';

-- add a record to the audit table
insert into payroll_audit.employee_audit (dml, employee_id, employee_name, salary, notes, created_by, created_date, last_modified_by, last_modified_date) values (dml, :old.employee_id, :old.employee_name, :old.salary, :old.notes, :old.created_by, :old.created_date, sys_context('USERENV','SESSION_USER'), sysdate);
end if;
end;
/
show errors;
What does it give me ?

You can easily check which records have been modified, by whom, and see the record history.
select employee_id, last_modified_by, last_modified_date from payroll.employee where employee_id = 1;

select dml, employee_id, salary, last_modified_by, last_modified_date from payroll_audit.employee_audit where employee_id = 1;
More information can be found here :- http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/triggers.htm#CNCPT1677

Other places to find data that may be useful for Audit purposes.


Backups, exports, export logs, development systems, standby databases

Previous backups, export files, export logs, development clones, and physical and logical standby databases can contain an earlier dataset which can be used to identify which rows or objects have been modified.
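For example, assuming a database link (here called audit_clone, a name invented for this sketch) to a point-in-time clone or logical standby, a simple MINUS comparison can show what has changed since the copy was taken:

```sql
-- Rows present now but not in the older copy (inserted or updated since)
select * from payroll.employee
minus
select * from payroll.employee@audit_clone;

-- Rows present in the older copy but not now (deleted or updated since)
select * from payroll.employee@audit_clone
minus
select * from payroll.employee;
```

This only tells you what differs, not who made the change, so it works best combined with the audit trails described above.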

DATABASE ALERT LOG
The alert log contains information about failed transactions, jobs, checkpoints, create tablespace, and alter system statements, which can be useful in an audit.
Datafiles, Redo logs, Archive logs.

It is possible to analyze Redo logs, Datafiles, and Undo segments to identify changes and objects which have been dropped, or rows which have been deleted. Links to articles by David Litchfield below.

http://www.databasesecurity.com/dbsec/dissecting-the-redo-logs.pdf
http://www.databasesecurity.com/dbsec/Locating-Dropped-Objects.pdf
http://www.databasesecurity.com/oracle-forensics.htm
http://www.databasesecurity.com/dbsec/oracle-forensics-scns.pdf

DBA_DML_LOCKS, DBA_DDL_LOCKS
Check who has locks on a table or who is executing a package. This can be a good indicator of current activity on the database.
select name, session_id, mode_held from dba_dml_locks where owner = 'PAYROLL';
select owner, name, mode_held from dba_ddl_locks where owner = 'PAYROLL';

DBA_JOBS, DBA_SCHEDULER_JOBS, DBA_SCHEDULER_JOB_RUN_DETAILS, DBA_SCHEDULER_JOB_LOG
A job scheduled via DBMS_JOB will be listed in the DBA_JOBS table.

select job, log_user, what, last_date, this_date, next_date, failures, broken from dba_jobs;
A job scheduled via DBMS_SCHEDULER will be listed in the DBA_SCHEDULER_JOBS table, with execution history listed in DBA_SCHEDULER_JOB_RUN_DETAILS.
select owner, job_name, last_start_date, next_run_date, failure_count from dba_scheduler_jobs;
select owner, job_name, log_date from dba_scheduler_job_run_details;
DBA_OBJECTS, OBJ$ tables - Check when objects are created or last modified
The DBA_OBJECTS view is based on the SYS.OBJ$ table, which contains the data dictionary listing of all user objects, when they were created, and when they were last modified. If something has changed, this is the first place to look.
select name, ctime, mtime, stime, spare6 from obj$ where ctime > sysdate -1 or mtime > sysdate -1;
DBA_SEQUENCES
Comparing the last_number from dba_sequences to the associated table attribute can help identify whether anyone has removed the most recent record. Take into account that sequence values are cached in memory and are not guaranteed to be consecutive, so use this information with caution.
select sequence_owner, sequence_name, last_number from dba_sequences;
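For example, assuming EMPLOYEE_ID is populated from a sequence called PAYROLL.EMPLOYEE_SEQ (a name assumed for this sketch), a last_number well above the current maximum key may indicate deleted rows, allowing for the sequence cache:

```sql
-- Highest key currently in the table
select max(employee_id) from payroll.employee;

-- Highest number the sequence has allocated (plus any cached values)
select last_number from dba_sequences
where sequence_owner = 'PAYROLL'
and sequence_name = 'EMPLOYEE_SEQ';
```

A gap of more than the sequence CACHE setting between the two values is worth investigating.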
DBA_TABLES, DBA_INDEXES, DBA_TAB_HISTOGRAMS, INDEX_STATS
If you have previously collected 100% statistics on tables and indexes using DBMS_STATS, then the recorded values can be used to identify what has changed by selecting from DBA_TABLES, DBA_INDEXES, DBA_TAB_HISTOGRAMS.
select count(*) from payroll.employee;
select table_name, num_rows from dba_tables where owner = 'PAYROLL';
select index_name, num_rows from dba_indexes where owner = 'PAYROLL';
select * from dba_tab_histograms where owner = 'PAYROLL';
DBA_TAB_PRIVS
Check who granted privileges to a user.
select grantor, grantee, privilege, owner, table_name from dba_tab_privs;
SQLNET LISTENER LOG
The SQL*Net listener log lists connection requests for the database, including IP address, username, OS user, and program. It is usually in $ORACLE_HOME/network/log.

USER$

This table allows an auditor to identify users that have been created, or have changed their password in the last 24 hours.
select name, ctime, ptime from user$ where ctime > sysdate -1 or ptime > sysdate -1;

V$ACTIVE_SESSION_HISTORY, V$SQLAREA
These views contain a good overview of recent activity on the database.
select last_active_time, parsing_user_id, sql_text from v$sql order by last_active_time;
V$OBJECT_USAGE
If an index is being monitored and it has been used, then it may be possible to detect it in v$object_usage.
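Monitoring is enabled per index, and V$OBJECT_USAGE only shows indexes owned by the current user, so query it connected as the index owner. A sketch, assuming the EMPLOYEE_PK index created earlier:

```sql
-- Start recording whether the optimizer uses this index
alter index payroll.employee_pk monitoring usage;

-- Later, connected as PAYROLL: USED = 'YES' means it has been used
select index_name, monitoring, used, start_monitoring
from v$object_usage
where index_name = 'EMPLOYEE_PK';
```

For audit purposes this is a coarse signal: it records that an index was used at least once since monitoring started, not who used it or when.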

More information here :- http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/indexes.htm#sthref2563

V$SESSION, V$PROCESS
These views can be used to identify who is currently logged on to the database. More detail can be added by calling DBMS_APPLICATION_INFO.
select username, osuser, machine, program, module, action from v$session where username = 'PAYROLL';
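For example, an application can tag its own sessions so the MODULE and ACTION columns above carry meaningful values. The module and action names here are illustrative only:

```sql
begin
-- Tag this session; visible in v$session.module / v$session.action
dbms_application_info.set_module(
module_name => 'PAYROLL_UI',
action_name => 'UPDATE_SALARY');
end;
/
```

This costs almost nothing at runtime and makes both live monitoring and after-the-fact trace analysis far easier.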
CONCLUSION

Every company, application, and database will have different audit requirements, and there are decisions to be made to balance audit granularity vs performance and disk usage, but the decision to collect audit data needs to be made before you need that data. I hope that the audit examples given in this document will help in making those decisions.