Greenplum automatic statistics collection - a case of a broadcast motion caused by inaccurate statistics

Introduction:

Tags

PostgreSQL, Greenplum, statistics, automatic statistics collection, broadcast motion, execution plan


Background

How good an execution plan is depends heavily on the database's SQL optimizer. Greenplum has two optimizers: the legacy query optimizer and ORCA.

Both are cost-based (CBO) optimizers and both depend on statistics; if the statistics are inaccurate, the plans they generate may be inaccurate as well.

For example, we had a query that would not finish no matter how long it ran.

Looking at the execution plan, one node used a broadcast motion, i.e. the table was broadcast to all segments.

->  Broadcast Motion 512:512  (slice1; segments: 512)  (cost=0.00..6.13 rows=1 width=16)  
  ->  Append-only Columnar Scan on xxxx  (cost=0.00..1.00 rows=1 width=16)  

When the join column is not the distribution key, Greenplum chooses between redistribution and broadcast based on table size (small tables are broadcast; large tables use redistribution / multi-stage joins).

《HybridDB PostgreSQL "Sort, Group, DISTINCT aggregation, JOIN" - the black technology and principles that do not fear data skew - multi-stage aggregation》
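
As a minimal sketch of how that choice shows up in a plan (the tables t_fact and t_dim below and their distribution keys are hypothetical, not the tables from this case):

-- two hypothetical tables, both distributed by id
create table t_fact (id int, uid int, val numeric) distributed by (id);
create table t_dim  (id int, uid int, name text)   distributed by (id);

-- joining on uid, which is not the distribution key of either side,
-- forces the planner to move data: either redistribute both sides on uid,
-- or broadcast the side it estimates to be small to every segment
explain select *
from t_fact f
join t_dim  d on f.uid = d.uid;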

The SQL whose plan went off the rails, however, triggered a broadcast motion on a large table, which is abnormal.

select count(*) from xxxx;  

The count returned roughly 3.1 billion rows.

Querying pg_class, however, the table is recorded as having 0 rows in 1 block:

select * from pg_class where relname='xxxx';  
  
relpages       | 1  
reltuples      | 0  
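
Since every segment keeps its own pg_class, the per-segment estimates can also be inspected with gp_dist_random (the same construct that appears in the ANALYZE VERBOSE log below); the queries here are a sketch, not an official recipe:

-- per-segment catalog entries for the table
select gp_segment_id, relname, relpages, reltuples
from gp_dist_random('pg_class')
where relname = 'xxxx';

-- summed across segments: the figures ANALYZE itself starts from
select sum(relpages) as relpages, sum(reltuples) as reltuples
from gp_dist_random('pg_class')
where relname = 'xxxx';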

Running ANALYZE to collect statistics brings the execution plan back to normal.

(For very large tables, Greenplum builds a temporary sample table and analyzes the sample when collecting statistics.)

digoal=> analyze verbose xxxxxxxx;  
  
INFO:  Executing SQL: select sum(gp_statistics_estimate_reltuples_relpages_oid(c.oid))::float4[] from gp_dist_random('pg_class') c where c.oid=112293  
INFO:  ANALYZE estimated reltuples=3091824896.000000, relpages=47509120.000000 for table xxxxxxxx  
INFO:  ANALYZE building sample table of size 173762 on table xxxxxxxx because it has too many rows.  
INFO:  Executing SQL: create table pg_temp.pg_analyze_112293_59 as (  select Ta.xx,....Ta.xxx  from public.xxxxxxxx as Ta where random() < 0.00005620053343591280281543731689453125 limit 173762  ) distributed randomly  
INFO:  Created sample table pg_temp.pg_analyze_112293_59 with nrows=173762  
INFO:  ANALYZE computing statistics on attribute xx  
INFO:  Executing SQL: select count(*)::float4 from pg_temp_440803.pg_analyze_112293_59 as Ta where Ta.xx is null  
INFO:  nullfrac = 0.178474  
INFO:  Executing SQL: select avg(pg_column_size(Ta.xx))::float4 from pg_temp_440803.pg_analyze_112293_59 as Ta where Ta.xx is not null  
INFO:  avgwidth = 21.418087  
INFO:  Executing SQL: select count(*)::float4 from (select Ta.xx from pg_temp_440803.pg_analyze_112293_59 as Ta group by Ta.xx) as Tb  
INFO:  count(ndistinct()) gives 142751.000000 values.  
INFO:  Executing SQL: select count(v)::float4 from (select Ta.xx as v, count(Ta.xx) as f from pg_temp_440803.pg_analyze_112293_59 as Ta group by Ta.xx) as foo where f > 1  
INFO:  ndistinct = -1.000000  
..........  

With statistics collected, the plan recovered: the broadcast motion is gone and the query returns in seconds.
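
On the hypothetical t_fact / t_dim pair from the earlier sketch, the corresponding before/after check would be (the expectation in the comments describes the general behaviour above, not output captured from this case):

-- collect statistics on both sides of the join
analyze t_fact;
analyze t_dim;

-- with realistic row counts the planner should now prefer redistributing
-- on uid (or broadcasting only a genuinely small side) rather than
-- broadcasting a multi-billion-row table
explain select *
from t_fact f
join t_dim  d on f.uid = d.uid;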

Letting Greenplum collect statistics automatically

For DML executed inside or outside of functions, the server tracks how many rows of a table are affected, and we can configure when statistics are collected automatically:

none: do not collect statistics automatically

on_no_stats: collect when the table has no statistics yet

on_change: collect automatically once the number of inserted/updated rows exceeds the threshold set by the gp_autostats_on_change_threshold parameter (2147483647 rows by default, which effectively disables this mode until it is lowered).

Automatic Statistics Collection  
  
Greenplum Database can be set to automatically run ANALYZE on a table that either has no statistics or has  
changed significantly when certain operations are performed on the table. For partitioned tables, automatic  
statistics collection is only triggered when the operation is run directly on a leaf table, and then only the leaf  
table is analyzed.  
  
Automatic statistics collection has three modes:  
• none disables automatic statistics collection.  
• on_no_stats triggers an analyze operation for a table with no existing statistics when any of the  
commands CREATE TABLE AS SELECT, INSERT, or COPY are executed on the table.  
• on_change triggers an analyze operation when any of the commands CREATE TABLE AS SELECT,  
UPDATE, DELETE, INSERT, or COPY are executed on the table and the number of rows affected exceeds  
the threshold defined by the gp_autostats_on_change_threshold configuration parameter.  
  
The automatic statistics collection mode is set separately for commands that occur within a procedural  
language function and commands that execute outside of a function:  
• The gp_autostats_mode configuration parameter controls automatic statistics collection behavior  
outside of functions and is set to on_no_stats by default.  
• The gp_autostats_mode_in_functions parameter controls the behavior when table operations are  
performed within a procedural language function and is set to none by default.  
  
With the on_change mode, ANALYZE is triggered only if the number of rows affected exceeds the threshold  
defined by the gp_autostats_on_change_threshold configuration parameter. The default value for this  
parameter is a very high value, 2147483647, which effectively disables automatic statistics collection;  
you must set the threshold to a lower number to enable it. The on_change mode could trigger large,  
unexpected analyze operations that could disrupt the system, so it is not recommended to set it globally. It  
could be useful in a session, for example to automatically analyze a table following a load.  
  
To disable automatic statistics collection outside of functions, set the gp_autostats_mode parameter to  
none:  
  
gpconfigure -c gp_autostats_mode -v none  
  
To enable automatic statistics collection in functions for tables that have no statistics, change  
gp_autostats_mode_in_functions to on_no_stats:  
  
gpconfigure -c gp_autostats_mode_in_functions -v on_no_stats  
  
Set the log_autostats system configuration parameter to on if you want to log automatic statistics  
collection operations.  
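
Following the documentation above, a minimal session-level sketch (the threshold value and the staging_xxxx table are illustrative assumptions) for automatically analyzing a table right after a bulk load:

-- enable on_change only for this session and lower the threshold,
-- so the load below triggers an automatic ANALYZE once it affects
-- more rows than the threshold
set gp_autostats_mode = on_change;
set gp_autostats_on_change_threshold = 1000000;
set log_autostats = on;   -- log the automatic collection, per the docs above

insert into xxxx select * from staging_xxxx;   -- hypothetical bulk load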

To get accurate execution plans, it is recommended that you either schedule ANALYZE yourself to collect statistics, or enable automatic collection.
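
If you schedule ANALYZE yourself, one sketch (the reltuples = 0 heuristic is an assumption mirroring the symptom seen above, not an official rule) is to look for tables the master still believes are empty and analyze them explicitly:

-- heuristic: user tables the master still estimates at 0 rows
-- (as xxxx was above) are candidates for an explicit ANALYZE
select n.nspname, c.relname
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
where c.relkind = 'r'
  and c.reltuples = 0
  and n.nspname not in ('pg_catalog', 'information_schema', 'gp_toolkit');

-- then analyze the affected tables
analyze verbose xxxx;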
