Statistics Gathering Frequently Asked Questions (Doc ID 1501712.1)
-- Gather statistics for the whole database, letting Oracle decide what needs gathering:
begin
  dbms_stats.gather_database_stats(
    options => 'GATHER AUTO');
end;
/
-- Re-gather only stale statistics database-wide, with automatic settings:
begin
  dbms_stats.gather_database_stats(
    cascade          => DBMS_STATS.AUTO_CASCADE,
    gather_sys       => TRUE,
    estimate_percent => null,
    degree           => DBMS_STATS.AUTO_DEGREE,
    no_invalidate    => DBMS_STATS.AUTO_INVALIDATE,
    granularity      => 'AUTO',
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    options          => 'GATHER STALE');
end;
/
-- The same, limited to a single schema:
begin
  dbms_stats.gather_schema_stats(
    ownname          => '"SCOTT"',
    cascade          => DBMS_STATS.AUTO_CASCADE,
    estimate_percent => null,
    degree           => DBMS_STATS.AUTO_DEGREE,
    no_invalidate    => DBMS_STATS.AUTO_INVALIDATE,
    granularity      => 'AUTO',
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    options          => 'GATHER STALE');
end;
/
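The GATHER STALE option above re-gathers only objects Oracle has flagged as stale (roughly, more than 10% of rows changed since the last gather). Which objects are currently flagged can be checked first; a small sketch (DBA_TAB_STATISTICS and FLUSH_DATABASE_MONITORING_INFO are standard dictionary objects; flushing makes the staleness flag current):

```sql
-- Push in-memory DML monitoring counts to the dictionary so STALE_STATS is current:
exec dbms_stats.flush_database_monitoring_info;

-- List tables currently considered stale:
select owner, table_name, stale_stats
  from dba_tab_statistics
 where stale_stats = 'YES';
```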
-- Gather statistics on the X$ fixed objects:
begin
  dbms_stats.gather_fixed_objects_stats();
end;
/
-- Gather statistics on the dictionary objects:
begin
  dbms_stats.gather_dictionary_stats(
    options => 'GATHER AUTO');
end;
/
-- Restore database statistics as they were at a point in the past:
begin
  dbms_stats.restore_database_stats(
    as_of_timestamp => to_timestamp_tz('2021-07-22 15:45:00 +8:00',
                                       'YYYY-MM-DD HH24:MI:SS TZH:TZM'));
end;
/
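Before restoring, it is worth checking how far back statistics history actually goes; a quick sketch (both functions are standard DBMS_STATS calls; the retention defaults to 31 days unless it has been changed):

```sql
-- Oldest timestamp to which statistics can be restored:
select dbms_stats.get_stats_history_availability from dual;

-- Current retention period, in days (default 31):
select dbms_stats.get_stats_history_retention from dual;
```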
-- Lock the statistics of an entire schema:
begin
  dbms_stats.lock_schema_stats( ownname => '"SCOTT"' );
end;
/
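Locked statistics can be released again with the matching unlock calls (UNLOCK_SCHEMA_STATS and UNLOCK_TABLE_STATS are the standard counterparts):

```sql
-- Unlock the whole schema again when re-gathering is wanted:
exec dbms_stats.unlock_schema_stats(ownname => 'SCOTT');

-- Or unlock a single table:
exec dbms_stats.unlock_table_stats(ownname => 'SCOTT', tabname => 'WORK_LIST');
```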
-- Delete all database statistics, even on tables whose statistics are locked:
begin
  dbms_stats.delete_database_stats( force => TRUE );
end;
/
-- Query the current default value of a DBMS_STATS parameter:
select dbms_stats.get_param(pname => 'no_invalidate') from dual;

-- Gather dictionary and fixed-object statistics, locking a volatile X$ table first:
exec dbms_stats.gather_dictionary_stats;
exec dbms_stats.lock_table_stats(null, 'X$KCCLH');
exec dbms_stats.gather_fixed_objects_stats;
Parameter notes:
- ownname: owner of the table to be analyzed.
- tabname: name of the table to be analyzed.
- partname: partition name; only meaningful for partitioned tables and indexes.
- estimate_percent: percentage of rows to sample, in the range [0.000001, 100]; null means analyze every row (no sampling). The constant DBMS_STATS.AUTO_SAMPLE_SIZE is the default and lets Oracle choose the best sample size.
- block_sample: whether to use block sampling instead of row sampling.
- method_opt: controls how histograms are gathered (default FOR ALL COLUMNS SIZE AUTO). Possible values:
- FOR ALL COLUMNS: gather histograms on all columns.
- FOR ALL INDEXED COLUMNS: gather histograms on all indexed columns.
- FOR ALL HIDDEN COLUMNS: gather histograms on hidden columns.
- FOR COLUMNS <list> SIZE <N> | REPEAT | AUTO | SKEWONLY: gather histograms on the listed columns. N is in the range [1, 254]; REPEAT keeps the bucket counts from the previous gathering; AUTO lets Oracle choose N; SKEWONLY builds histograms only on columns whose data is skewed, and can take considerable time because it examines the value distribution of every column.
- degree: degree of parallelism; defaults to null.
- granularity: granularity of statistics to collect; only pertinent if the table is partitioned.
- cascade: whether to also gather statistics for the table's indexes; defaults to FALSE in 9i (later versions default to DBMS_STATS.AUTO_CASCADE).
- stattab: a user table in which to store the statistics; statid distinguishes multiple sets of statistics kept in the same stattab; statown is the owner of the stattab table. If these three parameters are not given, the statistics are written straight to the data dictionary.
- no_invalidate: does not invalidate the dependent cursors if set to TRUE; invalidates them immediately if set to FALSE.
- force: gather statistics even if the table's statistics are locked.
Example:
execute dbms_stats.gather_table_stats(
  ownname          => 'owner',
  tabname          => 'table_name',
  estimate_percent => null,
  method_opt       => 'for all indexed columns',
  cascade          => true);
How to manage statistics with dbms_stats?
-- Create a table to keep a history of statistics
SQL> exec dbms_stats.create_stat_table(ownname => 'scott', stattab => 'stat_t');
-- Export the statistics of the whole schema
SQL> exec dbms_stats.export_schema_stats(ownname => 'scott', stattab => 'stat_t');
-- Gather statistics for the schema
SQL> exec dbms_stats.gather_schema_stats(ownname => 'scott', options => 'GATHER AUTO', estimate_percent => dbms_stats.auto_sample_size, method_opt => 'for all indexed columns', degree => 6);
-- Gather statistics for a table
SQL> exec dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'work_list', estimate_percent => 10, method_opt => 'for all indexed columns');
-- Gather statistics for an index
SQL> exec dbms_stats.gather_index_stats(ownname => 'crm2', indname => 'IDX_ADM_PERMISSION_PID_MID', estimate_percent => 10, degree => 4);
-- If the execution plan has gone wrong after gathering, delete the table's statistics
SQL> exec dbms_stats.delete_table_stats(ownname => 'scott', tabname => 'work_list');
-- Import the table's historical statistics
SQL> exec dbms_stats.import_table_stats(ownname => 'scott', tabname => 'work_list', stattab => 'stat_t');
-- If most execution plans go wrong after gathering, import the statistics of the whole schema back
SQL> exec dbms_stats.import_schema_stats(ownname => 'scott', stattab => 'stat_t');
-- Import the statistics of an index
SQL> exec dbms_stats.import_index_stats(ownname => 'crm2', indname => 'IDX_ADM_PERMISSION_PID_MID', stattab => 'stat_t');
-- Check that the import succeeded
SQL> select table_name, num_rows, a.blocks, a.last_analyzed from all_tables a where a.table_name = 'WORK_LIST';
More one-line examples:
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'ERPDB', cascade => true, method_opt => 'FOR ALL COLUMNS SIZE AUTO');
exec DBMS_STATS.GATHER_DATABASE_STATS(cascade => true, method_opt => 'FOR ALL COLUMNS SIZE AUTO');
exec DBMS_STATS.GATHER_TABLE_STATS(ownname => 'ZHYU', tabname => 'TEST1', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, cascade => TRUE, method_opt => 'FOR ALL COLUMNS SIZE AUTO');
exec dbms_stats.gather_schema_stats('jjflsw');
Best Practices for Managing Optimizer Statistics for Oracle Communications Order and Service Management (OSM) (Doc ID 1662447.1)
What are the Default Parameters when Gathering Table Statistics on 9i and 10g? (Doc ID 406475.1)
How to Change Default Parameters for Gathering Statistics in Oracle 11g or Later (Doc ID 1493227.1)
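The 11g mechanism the last note refers to can be sketched like this (GET_PREFS and SET_GLOBAL_PREFS are standard DBMS_STATS calls; the preference names are the documented ones):

```sql
-- Show the current global default for METHOD_OPT:
select dbms_stats.get_prefs('METHOD_OPT') from dual;

-- Reset the global default ESTIMATE_PERCENT to the automatic sample size:
exec dbms_stats.set_global_prefs('ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');
```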
Note that the defaults for statistics gathering on different versions of Oracle are not necessarily the same, for example:
ESTIMATE_PERCENT defaults:
  9i  : 100%
  10g : DBMS_STATS.AUTO_SAMPLE_SIZE (using a very small estimate percentage)
  11g : DBMS_STATS.AUTO_SAMPLE_SIZE (using a larger estimate percentage, typically 100%)
METHOD_OPT defaults:
  9i          : "FOR ALL COLUMNS SIZE 1" - effectively no detailed column statistics.
  10g and 11g : "FOR ALL COLUMNS SIZE AUTO" - DBMS_STATS decides which columns to add a histogram to, where it believes one may help to produce a better plan.
Note that on 11g, although AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT tends to amount to a 100% sample, it is still an automatic sample, so the engine may decide to use a different sample size for particular tables and columns. Column statistics can therefore still be gathered with a small sample and produce a histogram that misses key values. When ESTIMATE_PERCENT is set to a specific numeric value, that value is used for both the table and its columns.
Additionally, even though a 100% sample is collected, the gathering process is fast because 11g computes the statistics with a new hashing algorithm rather than by sorting (in 9i and 10g the slow part was typically the sorting). In 10g, support experience showed that the default ESTIMATE_PERCENT sample size was extremely small and often produced poor statistics, so it is not recommended.
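Where a particular table is known to be sensitive to sampling, the sample size can be pinned at table level instead of globally (SET_TABLE_PREFS is a standard 11g DBMS_STATS call; the owner and table names below are placeholders):

```sql
-- Force a 100% sample for this one table only; later gathers pick this preference up automatically:
exec dbms_stats.set_table_prefs('SCOTT', 'WORK_LIST', 'ESTIMATE_PERCENT', '100');
```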