Change Request
Resolution: Unresolved
Minor
None
7.0.13, 7.4.0beta2
None
Chunk interval settings should be revisited.
Medium-to-large instances quite often suffer from deadlocks caused by the chunk sizes relative to the available memory. In addition, compression does not work until a chunk is closed for writes: with a 50 GB chunk, the data will not be compressed for 7 days, and to compress it without issues, shared_buffers must be larger than that size to accommodate such chunks from all TimescaleDB-managed tables.
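The mismatch described above can be inspected directly in PostgreSQL. A minimal sketch, assuming the standard Zabbix schema where `trends` is a hypertable; `chunks_detailed_size()` is the TimescaleDB function that reports per-chunk sizes, and the result can be compared manually against `shared_buffers`:

```sql
-- Size of each chunk of the trends hypertable, newest first
SELECT chunk_name,
       pg_size_pretty(total_bytes) AS chunk_size
FROM chunks_detailed_size('trends')
ORDER BY chunk_name DESC;

-- Configured shared_buffers, for comparison
SHOW shared_buffers;
```

If the most recent (still open) chunk is larger than `shared_buffers`, the instance is in the problematic state described in this report.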
Suggestions:
- make the TimescaleDB configuration related to table chunks/compression deployable from the frontend, since this is an online modification once the TimescaleDB extension is already loaded
- add TimescaleDB chunk statistics showing the previous chunk size together with the shared_buffers size, and display a warning in the frontend if all the partitions do not fit into shared_buffers
- reduce the default chunk size for trends/trends_uint to 1 day
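The last suggestion can be applied today with TimescaleDB's `set_chunk_time_interval()`. A sketch, assuming the Zabbix schema where the partitioning column `clock` is an integer epoch timestamp, so the interval is given in seconds rather than as an `INTERVAL`; note the change affects only chunks created after the call, existing chunks keep their size:

```sql
-- Switch trends hypertables to 1-day chunks (86400 seconds,
-- since Zabbix partitions on the integer "clock" column)
SELECT set_chunk_time_interval('trends', 86400);
SELECT set_chunk_time_interval('trends_uint', 86400);
```

Making this callable from the frontend, as suggested above, would avoid the need for manual DBA intervention.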
https://www.timescale.com/blog/timescale-cloud-tips-testing-your-chunk-size/
https://www.timescale.com/forum/t/choosing-the-right-chunk-time-interval-value-for-timescaledb-hypertables/116
https://docs.timescale.com/use-timescale/latest/hypertables/about-hypertables/#best-practices-for-time-partitioning
https://www.timescale.com/learn/postgresql-performance-tuning-key-parameters