How to Collect and View Statistics in Oracle
I. What Are Statistics?
Statistics in an Oracle database are a set of data stored in the data dictionary that describe the database's objects from multiple dimensions.
Oracle statistics mainly fall into the following six categories:
(1) Table statistics.
(2) Column statistics.
(3) Index statistics.
(4) System statistics.
(5) Data dictionary statistics.
(6) Internal object statistics.
II. How Oracle Collects and Views Statistics
Oracle generally collects statistics in one of two ways:
(1) The ANALYZE command.
(2) The DBMS_STATS package.
Of the six categories above, table statistics, index statistics, column statistics, and data dictionary statistics can be collected with either ANALYZE or the DBMS_STATS package, whereas system statistics and internal object statistics can only be collected with the DBMS_STATS package.
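For reference, the DBMS_STATS procedures covering the categories that ANALYZE cannot handle are shown below. This is a minimal sketch, assuming an account with the necessary privileges (e.g. SYSDBA) and default parameters:
SQL>exec dbms_stats.gather_system_stats;          --system statistics (NOWORKLOAD mode by default)
SQL>exec dbms_stats.gather_dictionary_stats;      --data dictionary statistics
SQL>exec dbms_stats.gather_fixed_objects_stats;   --internal (fixed X$) object statistics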
1. Collecting Statistics with the ANALYZE Command
The ANALYZE command has been used to collect table, index, and column statistics since Oracle 7. Starting with Oracle 10g, Oracle automatically collects statistics on an index when it is created. Statistics collected with ANALYZE do not wipe out the results of earlier ANALYZE runs.
Create a test table and an index on it:
SQL>create table t1 as select * from dba_objects;
SQL>create index idx_t1 on t1(object_id);
(1) Delete the index statistics (collected automatically when the index was created):
SQL>analyze index idx_t1 delete statistics;
(2) Collect table statistics in estimate mode with a 15% sample:
SQL>analyze table t1 estimate statistics sample 15 percent for table;
(3) Collect table statistics in compute mode:
SQL>analyze table t1 compute statistics for table;
(4) Collect column statistics in compute mode:
SQL>analyze table t1 compute statistics for columns object_name,object_id;
(5) Collect table and column statistics together in compute mode:
SQL>analyze table t1 compute statistics for table for columns object_name,object_id;
(6) Collect index statistics in compute mode:
SQL>analyze index idx_t1 compute statistics;
(7) Delete the statistics on the table, its indexes, and all of its columns:
SQL>analyze table t1 delete statistics;
(8) Collect statistics on the table, its columns, and its indexes together in compute mode:
SQL>analyze table t1 compute statistics;
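To confirm what ANALYZE has written into the data dictionary, the results can be checked directly; a minimal sketch against the T1 and IDX_T1 objects created above:
SQL>select num_rows,blocks,avg_row_len,last_analyzed from user_tables where table_name='T1';
SQL>select blevel,leaf_blocks,distinct_keys,clustering_factor from user_indexes where index_name='IDX_T1';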
2. Collecting Statistics with the DBMS_STATS Package
The DBMS_STATS package has been widely used to collect statistics since Oracle 8.1.5, and it is also the method Oracle officially recommends. As far as collecting the statistics the CBO needs is concerned, DBMS_STATS can simply be thought of as an enhanced version of the ANALYZE command.
The four most commonly used DBMS_STATS procedures are:
(1) dbms_stats.gather_table_stats: collects statistics on a target table, its columns, and its indexes.
(2) dbms_stats.gather_index_stats: collects statistics on a specified index.
(3) dbms_stats.gather_schema_stats: collects statistics on all objects in a schema.
(4) dbms_stats.gather_database_stats: collects statistics on all objects in the database.
Typical usage of the DBMS_STATS package:
(1) Collect table statistics in estimate mode with a 15% sample:
SQL>exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'T1',estimate_percent=>15,method_opt=>'FOR TABLE',cascade=>FALSE);
Note: specifying FOR TABLE in method_opt does not work in every Oracle version.
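Where FOR TABLE is rejected, a commonly used alternative is to let DBMS_STATS choose the sample size and histogram strategy itself; a minimal sketch whose parameter values are illustrative rather than a recommendation:
SQL>exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'T1',estimate_percent=>dbms_stats.auto_sample_size,method_opt=>'FOR ALL COLUMNS SIZE AUTO',cascade=>TRUE);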
(2) Collect table statistics in compute mode:
SQL>exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'T1',estimate_percent=>100,method_opt=>'FOR TABLE',cascade=>FALSE);
or
SQL>exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'T1',estimate_percent=>NULL,method_opt=>'FOR TABLE',cascade=>FALSE);
(3) Collect column statistics in compute mode:
SQL>exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'T1',estimate_percent=>100,method_opt=>'FOR COLUMNS SIZE 1 OBJECT_NAME OBJECT_ID',cascade=>FALSE);
Note: the statement above collects statistics on the columns object_name and object_id, and it also collects statistics on the table itself.
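The resulting column-level statistics can be checked from the dictionary; a minimal sketch for the two columns gathered above:
SQL>select column_name,num_distinct,density,num_nulls,histogram from user_tab_col_statistics where table_name='T1' and column_name in ('OBJECT_NAME','OBJECT_ID');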
(4) Collect index statistics in compute mode:
SQL>exec dbms_stats.gather_index_stats(ownname=>'SCOTT',indname=>'IDX_T1',estimate_percent=>100);
(5) Delete the statistics on the table, its indexes, and all of its columns:
SQL>exec dbms_stats.delete_table_stats(ownname=>'SCOTT',tabname=>'T1');
(6) Collect statistics on the table, its columns, and its indexes together in compute mode:
SQL>exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'T1',estimate_percent=>100,cascade=>TRUE);
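The gather_schema_stats and gather_database_stats procedures listed earlier are not covered by examples (1) through (6); a minimal sketch with illustrative parameter values:
SQL>exec dbms_stats.gather_schema_stats(ownname=>'SCOTT',estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE);
SQL>exec dbms_stats.gather_database_stats(estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE);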
3. Differences Between ANALYZE and DBMS_STATS
(1) The ANALYZE command cannot correctly collect statistics on partitioned tables, whereas the DBMS_STATS package can.
(2) The ANALYZE command cannot collect statistics in parallel, whereas the DBMS_STATS package can:
SQL>exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'T1',estimate_percent=>100, cascade=>FALSE,degree=>4);
(3) The DBMS_STATS package can only collect statistics relevant to the CBO. Additional information unrelated to the CBO, such as the number of migrated/chained rows (chain_cnt) or validation of table and index structure, is beyond DBMS_STATS; the ANALYZE command, however, can analyze and collect this additional information. For example:
SQL>analyze table XXX list chained rows into YYY;   --analyzes and counts migrated/chained rows
SQL>analyze index XXX validate structure;           --validates the index structure
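Note that LIST CHAINED ROWS needs a target table and VALIDATE STRUCTURE writes its result into the INDEX_STATS view; a minimal sketch, assuming the standard utlchain.sql script under $ORACLE_HOME/rdbms/admin and the T1/IDX_T1 objects created earlier:
SQL>@?/rdbms/admin/utlchain.sql                       --creates the CHAINED_ROWS table
SQL>analyze table t1 list chained rows into chained_rows;
SQL>select count(*) from chained_rows where table_name='T1';
SQL>analyze index idx_t1 validate structure;
SQL>select name,height,lf_rows,del_lf_rows from index_stats;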
4. Viewing Statistics
Oracle statistics are stored in data dictionary tables, and an object's statistics can be queried with a script.
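Before turning to the full script, a single dictionary view already answers the most common questions (row count, staleness, last analyzed time); a minimal sketch, reusing the SCOTT.T1 table from the earlier examples:
SQL>select num_rows,blocks,avg_row_len,sample_size,stale_stats,last_analyzed from dba_tab_statistics where owner='SCOTT' and table_name='T1';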
The sosi.sh script below shows table, index, and column statistics:
set echo off
set scan on
set lines 150
set pages 66
set verify off
set feedback off
set termout off
column uservar new_value Table_Owner noprint
select user uservar from dual;
set termout on
column TABLE_NAME heading "Tables owned by &Table_Owner" format a30
select table_name from dba_tables where owner=upper('&Table_Owner') order by 1
/
undefine table_name
undefine owner
prompt
accept owner prompt 'Please enter Name of Table Owner (Null = &Table_Owner): '
accept table_name prompt 'Please enter Table Name to show Statistics for: '
column TABLE_NAME heading "Table|Name" format a15
column PARTITION_NAME heading "Partition|Name" format a15
column SUBPARTITION_NAME heading "SubPartition|Name" format a15
column NUM_ROWS heading "Number|of Rows" format 9,999,999,990
column BLOCKS heading "Blocks" format 999,990
column EMPTY_BLOCKS heading "Empty|Blocks" format 999,999,990
column AVG_SPACE heading "Average|Space" format 9,990
column CHAIN_CNT heading "Chain|Count" format 999,990
column AVG_ROW_LEN heading "Average|Row Len" format 990
column COLUMN_NAME heading "Column|Name" format a25
column NULLABLE heading "Null|able" format a4
column NUM_DISTINCT heading "Distinct|Values" format 999,999,990
column NUM_NULLS heading "Number|Nulls" format 9,999,990
column NUM_BUCKETS heading "Number|Buckets" format 990
column DENSITY heading "Density" format 990
column INDEX_NAME heading "Index|Name" format a15
column UNIQUENESS heading "Unique" format a9
column BLEV heading "B|Tree|Level" format 90
column LEAF_BLOCKS heading "Leaf|Blks" format 990
column DISTINCT_KEYS heading "Distinct|Keys" format 9,999,999,990
column AVG_LEAF_BLOCKS_PER_KEY heading "Average|Leaf Blocks|Per Key" format 99,990
column AVG_DATA_BLOCKS_PER_KEY heading "Average|Data Blocks|Per Key" format 99,990
column CLUSTERING_FACTOR heading "Cluster|Factor" format 999,999,990
column COLUMN_POSITION heading "Col|Pos" format 990
column col heading "Column|Details" format a24
column COLUMN_LENGTH heading "Col|Len" format 9,990
column GLOBAL_STATS heading "Global|Stats" format a6
column USER_STATS heading "User|Stats" format a6
column SAMPLE_SIZE heading "Sample|Size" format 9,999,999,990
column to_char(t.last_analyzed,'MM-DD-YYYY') heading "Date|MM-DD-YYYY" format a10
prompt
prompt ***********
prompt Table Level
prompt ***********
prompt
select
TABLE_NAME,
NUM_ROWS,
BLOCKS,
EMPTY_BLOCKS,
AVG_SPACE,
CHAIN_CNT,
AVG_ROW_LEN,
GLOBAL_STATS,
USER_STATS,
SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from dba_tables t
where
owner = upper(nvl('&&Owner',user))
and table_name = upper('&&Table_name')
/
select
COLUMN_NAME,
decode(t.DATA_TYPE,
'NUMBER',t.DATA_TYPE||'('||
decode(t.DATA_PRECISION,
null,t.DATA_LENGTH||')',
t.DATA_PRECISION||','||t.DATA_SCALE||')'),
'DATE',t.DATA_TYPE,
'LONG',t.DATA_TYPE,
'LONG RAW',t.DATA_TYPE,
'ROWID',t.DATA_TYPE,
'MLSLABEL',t.DATA_TYPE,
t.DATA_TYPE||'('||t.DATA_LENGTH||')') ||' '||
decode(t.nullable,
'N','NOT NULL',
'n','NOT NULL',
NULL) col,
NUM_DISTINCT,
DENSITY,
NUM_BUCKETS,
NUM_NULLS,
GLOBAL_STATS,
USER_STATS,
SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from dba_tab_columns t
where
table_name = upper('&Table_name')
and owner = upper(nvl('&Owner',user))
/
select
INDEX_NAME,
UNIQUENESS,
BLEVEL BLev,
LEAF_BLOCKS,
DISTINCT_KEYS,
NUM_ROWS,
AVG_LEAF_BLOCKS_PER_KEY,
AVG_DATA_BLOCKS_PER_KEY,
CLUSTERING_FACTOR,
GLOBAL_STATS,
USER_STATS,
SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from
dba_indexes t
where
table_name = upper('&Table_name')
and table_owner = upper(nvl('&Owner',user))
/
break on index_name
select
i.INDEX_NAME,
i.COLUMN_NAME,
i.COLUMN_POSITION,
decode(t.DATA_TYPE,
'NUMBER',t.DATA_TYPE||'('||
decode(t.DATA_PRECISION,
null,t.DATA_LENGTH||')',
t.DATA_PRECISION||','||t.DATA_SCALE||')'),
'DATE',t.DATA_TYPE,
'LONG',t.DATA_TYPE,
'LONG RAW',t.DATA_TYPE,
'ROWID',t.DATA_TYPE,
'MLSLABEL',t.DATA_TYPE,
t.DATA_TYPE||'('||t.DATA_LENGTH||')') ||' '||
decode(t.nullable,
'N','NOT NULL',
'n','NOT NULL',
NULL) col
from
dba_ind_columns i,
dba_tab_columns t
where
i.table_name = upper('&Table_name')
and owner = upper(nvl('&Owner',user))
and i.table_name = t.table_name
and i.column_name = t.column_name
order by index_name,column_position
/
prompt
prompt ***************
prompt Partition Level
prompt ***************
select
PARTITION_NAME,
NUM_ROWS,
BLOCKS,
EMPTY_BLOCKS,
AVG_SPACE,
CHAIN_CNT,
AVG_ROW_LEN,
GLOBAL_STATS,
USER_STATS,
SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from
dba_tab_partitions t
where
table_owner = upper(nvl('&&Owner',user))
and table_name = upper('&&Table_name')
order by partition_position
/
break on partition_name
select
PARTITION_NAME,
COLUMN_NAME,
NUM_DISTINCT,
DENSITY,
NUM_BUCKETS,
NUM_NULLS,
GLOBAL_STATS,
USER_STATS,
SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from
dba_PART_COL_STATISTICS t
where
table_name = upper('&Table_name')
and owner = upper(nvl('&Owner',user))
/
break on partition_name
select
t.INDEX_NAME,
t.PARTITION_NAME,
t.BLEVEL BLev,
t.LEAF_BLOCKS,
t.DISTINCT_KEYS,
t.NUM_ROWS,
t.AVG_LEAF_BLOCKS_PER_KEY,
t.AVG_DATA_BLOCKS_PER_KEY,
t.CLUSTERING_FACTOR,
t.GLOBAL_STATS,
t.USER_STATS,
t.SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from
dba_ind_partitions t,
dba_indexes i
where
i.table_name = upper('&Table_name')
and i.table_owner = upper(nvl('&Owner',user))
and i.owner = t.index_owner
and i.index_name=t.index_name
/
prompt
prompt ***************
prompt SubPartition Level
prompt ***************
select
PARTITION_NAME,
SUBPARTITION_NAME,
NUM_ROWS,
BLOCKS,
EMPTY_BLOCKS,
AVG_SPACE,
CHAIN_CNT,
AVG_ROW_LEN,
GLOBAL_STATS,
USER_STATS,
SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from
dba_tab_subpartitions t
where
table_owner = upper(nvl('&&Owner',user))
and table_name = upper('&&Table_name')
order by SUBPARTITION_POSITION
/
break on partition_name
select
p.PARTITION_NAME,
t.SUBPARTITION_NAME,
t.COLUMN_NAME,
t.NUM_DISTINCT,
t.DENSITY,
t.NUM_BUCKETS,
t.NUM_NULLS,
t.GLOBAL_STATS,
t.USER_STATS,
t.SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from
dba_SUBPART_COL_STATISTICS t,
dba_tab_subpartitions p
where
t.table_name = upper('&Table_name')
and t.owner = upper(nvl('&Owner',user))
and t.subpartition_name = p.subpartition_name
and t.owner = p.table_owner
and t.table_name=p.table_name
/
break on partition_name
select
t.INDEX_NAME,
t.PARTITION_NAME,
t.SUBPARTITION_NAME,
t.BLEVEL BLev,
t.LEAF_BLOCKS,
t.DISTINCT_KEYS,
t.NUM_ROWS,
t.AVG_LEAF_BLOCKS_PER_KEY,
t.AVG_DATA_BLOCKS_PER_KEY,
t.CLUSTERING_FACTOR,
t.GLOBAL_STATS,
t.USER_STATS,
t.SAMPLE_SIZE,
to_char(t.last_analyzed,'MM-DD-YYYY')
from
dba_ind_subpartitions t,
dba_indexes i
where
i.table_name = upper('&Table_name')
and i.table_owner = upper(nvl('&Owner',user))
and i.owner = t.index_owner
and i.index_name=t.index_name
/
clear breaks
set echo on

Appendix: Viewing a Table's Statistics Collection History
SELECT b.OWNER,
b.OBJECT_NAME TABLE_NAME,
TO_CHAR(a.ANALYZETIME, 'YYYY-MM-DD HH24:MI:SS') LAST_ANALYZETIME,
TO_CHAR(a.SAVTIME, 'YYYY-MM-DD HH24:MI:SS') CURR_ANALYZETIME,
a.ROWCNT
FROM SYS.WRI$_OPTSTAT_TAB_HISTORY a, DBA_OBJECTS b
WHERE a.OBJ# = b.OBJECT_ID
AND b.OBJECT_NAME ='ZB_WHOLE_ORDERS_KAFKA_DISPATCH' and b.OWNER='ZJOPEN'
ORDER BY a.OBJ#, a.SAVTIME;
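Because this history is retained (31 days by default), statistics can also be rolled back to an earlier point in time with DBMS_STATS; a minimal sketch, where the owner, table name, and timestamp are illustrative assumptions:
SQL>select dbms_stats.get_stats_history_retention from dual;   --history retention in days
SQL>exec dbms_stats.restore_table_stats(ownname=>'SCOTT',tabname=>'T1',as_of_timestamp=>sysdate-1);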
Summary
This concludes this article on collecting and viewing Oracle statistics.