The section on using LogMiner could very well have been placed in Chapter 8, on backup and recovery, as the primary function of LogMiner is to undo a bad transaction or to redo transactions that were lost to a point-in-time recovery. However, the LogMiner utility is yet another useful tool in your HA toolbelt. LogMiner does just what its name implies: it mines the data in the online redo logs and archived redo logs for information about the changes that have been recorded in the database.
There is a bit of setup and configuration involved, but none of it has to be in place before the moment you need it: if you need to look at archivelogs from last week, you can set up LogMiner today and review those old archivelogs. However, merely reacting to situations is not the call of the HADBA. We are, in a single buzz-wordy kind of way, proactive. (Someone please explain the difference between active and proactive. Actually, that's okay. We embrace our inner buzz word.) In fact, as HADBAs, we are ultra-proactive, so let's take a close look at LogMiner usage.
You set up LogMiner to work from the SQL interface, using the packages DBMS_LOGMNR_D and DBMS_LOGMNR. Both packages are required only if you will be configuring LogMiner to review the archivelogs of a noncurrent database; otherwise, DBMS_LOGMNR alone is all you need.
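If you want to confirm that both packages are installed and valid before you begin (a quick sanity check, assuming you have access to DBA_OBJECTS), a query such as this will do:

SQL> SELECT object_name, status
       FROM dba_objects
      WHERE object_name IN ('DBMS_LOGMNR', 'DBMS_LOGMNR_D')
        AND object_type = 'PACKAGE';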
HA Workshop: Configuring and Using LogMiner from the SQL Interface
Workshop Notes
This workshop will step you through the review of a delete transaction within an archivelog. Note that supplemental logging must be enabled at the database level prior to generating the delete in order to get full transaction discovery.
Step 1. Turn on Supplemental Logging for the database.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
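If the change took effect, the second query should return YES (output will look something like the following, though your column formatting may differ):

SUPPLEME
--------
YES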
Step 2. Switch the logfile, then generate a delete.
connect / as sysdba
alter system switch logfile;
connect ws_app/ws_app
delete from woodscrew;
commit;
connect / as sysdba
alter system switch logfile;
select name from v$archived_log;
Step 3. Add the archivelog we just generated to the LogMiner list.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/u02/oradata/flash_recovery_area/ORCL/o1_mf_1_161_032xckmg_.arc', -
OPTIONS => DBMS_LOGMNR.NEW);
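If the transaction you are after spans more than one archivelog, keep adding files to the same LogMiner list with the ADDFILE option. A quick sketch (this second filename is hypothetical):

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/u02/oradata/flash_recovery_area/ORCL/o1_mf_1_162_032xdnpq_.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);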
Step 4. Specify the online catalog for LogMiner to use. This option is available only if the source database is open.
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
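START_LOGMNR can also narrow what ends up in V$LOGMNR_CONTENTS. If you know roughly when the damage was done, you can bound the session by time using the STARTTIME and ENDTIME parameters; a sketch (the timestamps here are purely illustrative):

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTTIME => TO_DATE('2004-03-15 09:00:00', 'YYYY-MM-DD HH24:MI:SS'), -
ENDTIME => TO_DATE('2004-03-15 17:00:00', 'YYYY-MM-DD HH24:MI:SS'), -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);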
Step 5. Query V$LOGMNR_CONTENTS for information on the delete.
SQL> select username, sql_redo, sql_undo
       from v$logmnr_contents
      where username = 'WS_APP'
        and operation = 'DELETE';
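The SQL_UNDO column contains the statement you would run to put each deleted row back. Once you have extracted what you need, end the session so LogMiner releases its resources:

EXECUTE DBMS_LOGMNR.END_LOGMNR;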
With an eye for availability, it is best to see LogMiner as a companion for data extraction and data rebuilding operations that do not require a database outage. By reviewing SQL redo and SQL undo based on time or user (or both), you can overcome user errors that occurred outside the review window of the Flashback Technologies (see Chapter 9). In addition, if you are forced to do a FLASHBACK DATABASE and transactions are lost, the archivelogs generated after the flashback point-in-time can be mined for the lost transactions, and those transactions can be executed again. If you do need to mine the archivelogs of a database that is not in its current state, you will need to generate the LogMiner dictionary into a flat file instead of using the online catalog. This requires an additional step using the DBMS_LOGMNR_D package.
EXECUTE DBMS_LOGMNR_D.BUILD( -
'dictionary.ora', -
'/u02/oradata/', -
DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
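Note that in Oracle Database 10g, building the dictionary into a flat file requires that the UTL_FILE_DIR initialization parameter include the target directory ('/u02/oradata/' in this example). When you later start LogMiner against the archivelogs of the noncurrent database, point it at the flat file instead of the online catalog:

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/u02/oradata/dictionary.ora');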