[Feature] - Support column level statistics #514
Comments
Did anyone find a use case for the column statistics feature? I tried to apply it to the unique ID field of a big table, and after several minutes it computed a totally wrong number of unique values. It also did not speed up simple SQL queries at all. I agree that it looks appealing to automate these statistics with dbt, but would it be useful in real life, given that it can slow down project building significantly?
@roslovets I believe the main reason is indeed a potential performance gain, according to the new Cost-Based Optimizer for Athena. I haven't seen hands-on test results yet, though.
Here is another article where AWS shows improved performance.
Thank you for the links, folks. According to their fancy examples, we should be able to really save time on downstream models and tests even if it takes up to several minutes to compute statistics for one table. But I still cannot get why it did not help in my case. Maybe you could run tests on your big tables as well?
The reason is that neither Athena, Glue, nor any other engine like Spark manages the data. Data warehouses like Redshift, Snowflake, and BigQuery manage their data, so they can keep stats they know are reliable and use them to optimize things like join planning.

The stats are still useful. Knowing the magnitude of the tables involved in a join can make an enormous difference in performance. If you put a ten-billion-row table on the right side of a join without table stats, there's a good chance the query will run out of memory. With stats, it will run just fine. It's not about the exact numbers, but about how tables compare to one another.

I don't know why the distinct count in your table stats differs from the actual numbers, but I assume it's because the collector uses approximations. Because of the above, the exact count is not the goal, and it would be wasteful to compute one. The focus is on the magnitude, not on the exact count. Data lake query engines can also leverage other types of statistics to optimize some queries.
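The point about magnitude can be sketched as a toy rule (this is not Athena's actual planner, just an illustration of how relative row counts from table stats are enough to pick the build side of a hash join):

```python
# Toy illustration: a cost-based optimizer only needs the relative
# magnitude of the inputs to decide which side of a hash join to
# build in memory and which side to stream.

def choose_build_side(left_rows: int, right_rows: int) -> str:
    """Build the hash table on the smaller input; stream the larger one."""
    return "left" if left_rows <= right_rows else "right"

# Hypothetical stats: a 10-billion-row fact table joined to a small dimension.
fact_rows = 10_000_000_000
dim_rows = 50_000

# With stats available, the small table is always chosen for the in-memory
# side, regardless of how the user happened to order the join.
print(choose_build_side(fact_rows, dim_rows))  # -> "right" (the dimension)
print(choose_build_side(dim_rows, fact_rows))  # -> "left" (still the dimension)
```

Without stats, an engine has to guess or follow the query's written join order, which is where the out-of-memory scenario above comes from.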
Glue collects approximate counts of distinct values, you are right. It's now clearly declared in the Glue UI. AWS shows up to 90% performance improvement, so careful statistics collection for some models can potentially optimize the run time of the whole pipeline. I'm going to do some tests with large and incremental tables to confirm this.
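To see why an approximate distinct count can land noticeably off the true number, here is a sketch of a K-Minimum-Values (KMV) estimator, one common single-pass technique for approximating cardinality. This is illustrative only: Glue does not document which algorithm it uses.

```python
# KMV sketch: if hashes are uniform in [0, 1), the k-th smallest distinct
# hash value implies the overall density, and hence the cardinality.
import hashlib
import heapq

def kmv_estimate(values, k=256):
    """Estimate the number of distinct values from the k smallest hashes."""
    def h(v):
        # Map each value to a pseudo-uniform float in [0, 1).
        digest = hashlib.md5(str(v).encode()).hexdigest()
        return int(digest, 16) / 16**32

    # For brevity we hash everything up front; a real implementation keeps
    # only the k smallest distinct hashes, using bounded memory.
    smallest = heapq.nsmallest(k, {h(v) for v in values})
    if len(smallest) < k:
        return len(smallest)  # fewer than k distinct values seen: exact
    return int((k - 1) / smallest[-1])  # hash density implies cardinality

true_distinct = 10_000
estimate = kmv_estimate(range(true_distinct), k=256)
# `estimate` lands near 10,000 but is rarely exact, which is why catalog
# stats can disagree with a COUNT(DISTINCT ...) run against the table.
```

The typical relative error shrinks as `k` grows, so collectors trade memory for accuracy; exactness is deliberately not the goal.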
@roslovets could you share your results here? I would like to understand if it's worth doing some work on statistics collection.
@nicor88 It's difficult for me to share exact results, but I've observed a CTAS that selects from multiple tables take 8 minutes without column statistics and 30 seconds with column statistics on all dependent tables. Starting a statistics task run on a terabyte-sized table with 150 columns takes about 45 minutes on 25 DPUs at 100% sample size, so it could be beneficial to provide sync/async capabilities. HTH
https://aws.amazon.com/about-aws/whats-new/2023/11/aws-glue-data-catalog-generating-column-level-statistics/
Add additional configurations that allow the user to add column level statistics to the table.
Minimal config to make it work:
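The original config snippet did not survive extraction. As a sketch of what such a model-level option could look like, all property names below (`generate_column_statistics`, `column_statistics_columns`) are hypothetical proposals, not an implemented dbt-athena API:

```sql
-- Hypothetical dbt model config; property names are illustrative only.
{{ config(
    materialized='table',
    generate_column_statistics=true,
    column_statistics_columns=['id', 'created_at'],
) }}

select id, created_at, payload
from {{ source('raw', 'events') }}
```

Restricting statistics to a few join/filter columns, rather than all 150, would also address the long task-run times reported above.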
Open questions
- Are all table types supported? Seems only supported by Hive tables, not Iceberg.

Notes
- Currently not available in all regions