Auditing Past Transactions With Oracle LogMiner

王朝oracle · author unknown  2008-05-31

by Mike Hordila

Have you ever had to answer questions like, "This data does not look right. Could we find out who changed it, and when? How did they change it? What was there before the change? How do we fix it back?" If so, this article will give you a quick start in the right direction. It really is possible to find the answers to these questions, and it is not that difficult.

I have used these principles on all types of systems in production today, on Unix (AIX, HP-UX, Sun Solaris, Linux) and Windows (NT, 2000) servers, on Oracle 8.x, 8i and 9i. The techniques I am going to cover require some knowledge of Oracle and some experimentation. However, samples are provided that should help you get started with Oracle LogMiner.

Most DBAs would not want to enable Oracle auditing, as there is a visible impact on space consumption and especially on performance (some authors report a 10 to 20 percent performance loss for significant auditing). However, transaction information is recorded in the redo logs (online and archived) and, starting with version 8.1.5, Oracle supports log mining.
LogMiner can be run on the redo-log-producing (source) database or on a separate analyzing (mining) database.

Some restrictions of LogMiner:

- It is only available in Oracle version 8.1 or later.
- It can only analyze redo log files (online or archived) from version 8.0 or later databases; Oracle7 uses a different redo log format, so that version cannot be log mined.
- Both databases must run on the same hardware platform.
- Both databases must use the same database character set.
- Both databases must use the same database block size.
- The dictionary file can only be created in a directory included in the UTL_FILE_DIR parameter in the INIT.ORA file.
- LogMiner does not support file access across database links, so dictionary files and redo logs must be moved to the machine hosting the analyzing instance.

LogMiner 8i does not support operations on:

- LONG and LOB data types
- non-scalar data types
- simple and nested abstract data types (ADTs)
- collections (nested tables and VARRAYs)
- object REFs
- index-organized tables (IOTs)
- clustered tables/indexes
- chained rows
- direct-path inserts, even though such operations are logged

LogMiner 9.2.x can be used with LONG and LOB columns, but still cannot be used with:

- simple and nested abstract data types (ADTs)
- collections (nested tables and VARRAYs)
- object REFs
- index-organized tables (IOTs)

Oracle9i Log Miner New Features

Enhancements to LogMiner for Oracle9i-generated log files include:

- a new LogMiner Viewer GUI in addition to the command-line interface
- translating DML associated with index clusters
- grouping DML statements into completed transactions, returned in commit SCN order
- mining for changes by value
- support for chained and migrated rows on redo produced by 9i
- support for direct-path inserts
- using an online dictionary
- extracting the data dictionary into the redo log files to seamlessly integrate DDL changes
- DDL statement tracking on redo produced by 9i
- the ability to skip log corruptions
- the ability to display only committed transactions
- the ability to generate SQL_REDO and SQL_UNDO with primary key information to help the DBA undo changes

Preparing The Log Miner

Log Miner consists of the Log Miner package (dbms_logmnr), with three procedures, and the Dictionary package (dbms_logmnr_d). These are normally built by catproc, which executes the following scripts:

$ORACLE_HOME/rdbms/admin/dbmslogmnrd.sql
$ORACLE_HOME/rdbms/admin/dbmslogmnr.sql
$ORACLE_HOME/rdbms/admin/prvtlogmnr.plb

and since 8.1.6:

$ORACLE_HOME/rdbms/admin/dbmslmd.sql
$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/prvtlm.plb

A few views are also created:

V$LOGMNR_CONTENTS - the contents of the redo log files being analyzed; used by the DBA for auditing
V$LOGMNR_DICTIONARY - the dictionary file in use
V$LOGMNR_LOG - which redo log files are being analyzed
V$LOGMNR_PARAMETERS - current parameter settings for LogMiner

Also, depending on the specific version, a few more LogMiner-related objects are created. For example, the view v$logmnr_interesting_cols is created by $ORACLE_HOME/rdbms/admin/dbmslmd.sql and is for internal use by LogMiner. For Oracle 8.0.x, this system is not created, so the DBA has to run manually one of the dictionary scripts (dbmslogmnrd.sql or dbmslmd.sql), or all of the scripts. Simply ignore the errors that refer to creating objects other than the dictionary package.
Even later, some errors may be generated while running the package, for objects like SUBPARTCOL$, TABSUBPART$, INDSUBPART$, TABCOMPART$ and INDCOMPART$. Ignore these errors as well.

The Dictionary File

The dictionary file is produced in order to translate object ID numbers into object names. It is not required, but it is recommended. Without it, the equivalent SQL statements will use Oracle internal object IDs for object names and present column values as hex data. For example, instead of the SQL statement:

INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);

LogMiner will display:

insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'), hextoraw('c306'));
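Without a dictionary, the hex literals in such statements can still be decoded by hand, since they are Oracle's raw internal representations. A minimal sketch using the built-in UTL_RAW package, with the values from the example above:

```sql
-- Decode the character value from the dictionary-less output.
-- '4a6f686e20446f65' is simply the ASCII bytes of the original string.
SELECT UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('4a6f686e20446f65')) AS name
FROM dual;
-- NAME
-- --------
-- John Doe
```

The second literal, 'c306', is Oracle's internal NUMBER encoding of 50000 (an exponent byte followed by base-100 mantissa digits); decoding numbers by hand means working through that internal format.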

The contents of a dictionary file look like:

CREATE_TABLE DICTIONARY_TABLE (
DB_NAME VARCHAR2(9), DB_ID NUMBER(20), DB_CREATED VARCHAR2(20),
DB_DICT_CREATED VARCHAR2(20), DB_DICT_SCN NUMBER(22), DB_THREAD_MAP NUMBER(22),
DB_RESETLOGS_CHANGE# NUMBER(22), DB_RESETLOGS_TIME VARCHAR2(20),
DB_VERSION_TIME VARCHAR2(20), DB_REDO_TYPE_ID VARCHAR2(8),
DB_REDO_RELEASE VARCHAR2(60), DB_CHARACTER_SET VARCHAR2(30),
DB_VERSION VARCHAR2(64), DB_STATUS VARCHAR2(64),
DB_DICT_MAXOBJECTS NUMBER(22), DB_DICT_OBJECTCOUNT NUMBER(22));

INSERT_INTO DICTIONARY_TABLE VALUES ('MHD1',41190674,'12/24/2002 23:36:15',
'03/16/2003 12:30:24',,,1,'12/24/2002 16:05:38','12/24/2002 23:36:15',
'REDODATA','8.0.5.0.0','WE8ISO8859P1','8.0.5.1.0','Production',2788,2697);
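Applying the global underscore-to-space replacement described below turns the file into ordinary SQL. As a sketch, the statements above become (remaining columns omitted for brevity):

```sql
-- CREATE_TABLE becomes CREATE TABLE, INSERT_INTO becomes INSERT INTO.
CREATE TABLE DICTIONARY_TABLE (
  DB_NAME VARCHAR2(9),
  DB_ID   NUMBER(20));

INSERT INTO DICTIONARY_TABLE VALUES ('MHD1', 41190674);
```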

The dictionary file can be converted into a SQL script by globally replacing the underscores in the keywords with spaces: CREATE_TABLE becomes CREATE TABLE; CREATE_INDEX becomes CREATE INDEX; INSERT_INTO becomes INSERT INTO; and so on (there are more details in the header comments of the dictionary file itself). Also, see Oracle Note 77638.1 on how to build a package and a LogMiner "Place Holder Columns" file.

Running The Log Miner

We have used four scripts to demonstrate the concepts in this paper. They are in the file MHSYS-logminer.sql, and the logs are in MHSYS-logminer.log. First, on the source database, we create some transactions like:

INSERT INTO table1 ( rec_id, emp_last_name, emp_first_name, salary )
VALUES ( 03, 'LASTTHREE', 'FIRSTTHREE', 10000.10 );

Then we update one row:

UPDATE table1 SET salary = 20000.10 WHERE rec_id = 03;

Then we build the dictionary file:

execute dbms_logmnr_d.build(dictionary_filename => 'dictionary.920.ora', -
dictionary_location => 'C:\TEMP');
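As noted in the restrictions above, the build only works if the target directory is listed in the UTL_FILE_DIR initialization parameter; a quick sanity check before building:

```sql
-- Show the directories dbms_logmnr_d.build is allowed to write to.
-- An empty or non-matching value means the build will fail.
SELECT value FROM v$parameter WHERE name = 'utl_file_dir';
```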

Now, we copy the dictionary file and the online and archived redo log files from the period of time that interests us to the analyzing database machine. The analyzing database does not have to be only mounted; it can be open, in which case I normally just copy V$LOGMNR_CONTENTS into a regular table. Then, on the analyzing database, we load the redo logs:

execute dbms_logmnr.add_logfile(logfilename => 'C:\TEMP\redo01.log', -
options => dbms_logmnr.new);
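Additional redo logs from the same period can be appended to the list with the ADDFILE option; a sketch (the second file name is just an example):

```sql
-- Append another redo log to the list started with dbms_logmnr.new.
execute dbms_logmnr.add_logfile(logfilename => 'C:\TEMP\redo02.log', -
options => dbms_logmnr.addfile);
```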

(Here, if you have the wrong redo logs, you can get some errors, like "archived log does not contain any redo". Most of them can be ignored.) Then we start LogMiner:

execute dbms_logmnr.start_logmnr(dictfilename => 'C:\TEMP\dictionary.920.ora', -
starttime => to_date('18-MAR-2003 00:00:00', 'DD-MON-YYYY HH24:MI:SS'), -
endtime => to_date('18-MAR-2003 23:59:59', 'DD-MON-YYYY HH24:MI:SS'));
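The analysis window can also be bounded by SCNs instead of timestamps, using the STARTSCN and ENDSCN parameters; a sketch with illustrative SCN values:

```sql
-- Same start call, but delimiting the window by SCN range.
execute dbms_logmnr.start_logmnr(dictfilename => 'C:\TEMP\dictionary.920.ora', -
startscn => 181000, -
endscn => 182000);
```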

(Here, if you have the wrong redo logs, you can get some errors, like "archived log out of range".) Now, since the database is open, I can make my table copy. This is useful for thorough analysis, as V$LOGMNR_CONTENTS is very slow and can contain millions of rows on a busy production system.

create table SYSTEM.LOGMINER_CONTENTS_920 tablespace TOOLS
as select * from v$logmnr_contents;

Then we can finish:

execute dbms_logmnr.add_logfile(logfilename => 'C:\TEMP\redo01.log', -
options => dbms_logmnr.removefile);

execute dbms_logmnr.end_logmnr;

And now we can use our own table. We can create a few indexes if we really need to work a lot with it.

select count(*) from SYSTEM.LOGMINER_CONTENTS_920;

  COUNT(*)
----------
     11037
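If the copied table will be queried repeatedly, an index on the usual audit predicates pays off; a sketch (the index name and column choice are just an example):

```sql
-- Speeds up "who changed this segment" queries against the copy.
create index SYSTEM.LOGMINER_CONTENTS_920_I1
on SYSTEM.LOGMINER_CONTENTS_920 (username, seg_owner, seg_name)
tablespace TOOLS;
```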

select to_char(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp,
       scn, log_id, username, seg_owner, seg_name, seg_type, operation,
       sql_redo
from SYSTEM.LOGMINER_CONTENTS_920
where username = 'TESTX'
  and seg_owner = 'TESTX'
  and seg_name = 'TABLE1';

TIMESTAMP            SCN        LOG_ID     USERNAME
-------------------- ---------- ---------- -----------------
SQL_REDO
------------------------------------------------------------
18-MAR-2003 20:47:10 181209     10         TESTX
CREATE TABLE table1 (
  rec_id VARCHAR2(12) NOT NULL,
  emp_last_name VARCHAR2(30),
  emp_first_name VARCHAR2(30),
  salary NUMBER(8,2) )
TABLESPACE tools;

18-MAR-2003 20:47:16 181293     10         TESTX
insert into "TESTX"."TABLE1"("REC_ID","EMP_LAST_NAME","EMP_FIRST_NAME", ……..

18-MAR-2003 20:47:30 181409     11         TESTX
update "TESTX"."TABLE1" set "SALARY" = '20000,1' where "SALARY" = '1000 ……..

Some Frequent Questions Before Oracle9i

How do you know, in versions earlier than 9i, that multiple statements belong to the same transaction? You can check USERNAME (or SESSION_INFO) and XIDUSN (the rollback segment number), and you can see the first and last entries for the transaction. XIDSQN identifies the transaction sequence number. XIDSLOT can also be used to order the transaction components: operation START, with sql_redo "set transaction read write", and operation COMMIT, with sql_redo "commit".

How do you know, in versions earlier than 9i, that a table was dropped? A DROP TABLE will generate DELETE operations on COL$, OBJ$ and TAB$:

select seg_name, operation, scn, count(*) from v$logmnr_contents
where operation != 'INTERNAL'
group by seg_name, operation, scn
order by scn;

SEG_NAME        OPERATION                        SCN        COUNT(*)
--------------- -------------------------------- ---------- ----------
COL$            DELETE                           5012065    3
OBJ$            DELETE                           5012065    1
TAB$            DELETE                           5012065    1
                START                            5012065    1
SEG$            UPDATE                           5012065    1

How do you know, in versions earlier than 9i, how to repopulate a table that had chained rows? DML on chained rows is included in v$logmnr_contents.sql_redo and v$logmnr_contents.sql_undo. The SQL redo/undo columns are NULL for INSERT and UPDATE, and contain 'Unsupported' for DELETE. Other columns (including data_blk#, data_obj# and row_id) can be used to identify chained rows, but we cannot determine the SQL redo/undo statement. So, note that v$logmnr_contents.sql_redo CANNOT be used to completely repopulate a table that ever contained chained rows.

Log Miner Procedures Summary

- Many people make the dictionary file creation part of the daily backup procedures.
- Do not run the log analysis on the production database, as it takes a lot of I/O and PGA.
- Accessing V$LOGMNR_CONTENTS is very slow; a full scan can take 10 to 20 hours, physically reading the redo log files and using a lot of PGA (not SGA).
- The analyzing database only needs to be mounted.
- V$LOGMNR_CONTENTS can contain millions of rows on a busy production system.
- Some people recommend using a standby in mount or read-only state to analyze the redo from the primary. This reads the redo files directly, so it will be very slow, as described above. I would rather recommend using an open database, even on a workstation, and copying the view V$LOGMNR_CONTENTS to a regular table. If you need to, you can build indexes on it. This also avoids a number of reported problems and crashes caused by running directly against V$LOGMNR_CONTENTS.

There is not a lot of literature on Oracle LogMiner, and some of it can be confusing, but things are not that complicated. You can avoid a lot of aggravation just by being well organized. For a list of frequently asked questions and tips on running my packages, visit www.hordila.com/mhwork.htm.

--

Mike Hordila is a DBA OCP v.7, 8, 8i, 9i, and has his own Oracle consulting company, DBActions Inc., www.dbactions.com, in Toronto, Ontario. He specializes in tuning, automation, security, and very large databases. Mike has published articles in Oracle Magazine Online, Oracle Internals and DBAzine.com. Updated versions of his work are available on www.hordila.com. He is also a technical editor with Hungry Minds (formerly IDG Books).

 
 
 