Type: Incident report
Resolution: Unresolved
Priority: Trivial
Affects Version/s: 7.0.25
Component/s: Server (S)
Steps to reproduce:
- Use TimescaleDB with a large history database.
- Create a widget graph with the "last" aggregation function and a 1h interval.
- Watch the resource usage of the PostgreSQL database.
Result:
A single one of these graphs overloads the database, even on a host with 8 GB of RAM and tens of gigabytes of storage. The generated DB query looks something like:
SELECT h.itemid, h.value, h.clock, h.ns, s.tick
FROM history_uint h
JOIN (
    SELECT h2.itemid, h2.clock, MAX(h2.ns) AS ns, s2.tick
    FROM history_uint h2
    JOIN (
        SELECT itemid,
               CAST(clock AS BIGINT) - MOD(CAST(clock AS BIGINT), 86400) AS tick,
               MAX(clock) AS clock
        FROM history_uint
        WHERE itemid = 99999
          AND clock BETWEEN 1746975594 AND 1778511593
        GROUP BY itemid, CAST(clock AS BIGINT) - MOD(CAST(clock AS BIGINT), 86400)
    ) s2 ON h2.itemid = s2.itemid AND h2.clock = s2.clock
    WHERE h2.clock BETWEEN 1746975594 AND 1778511593
    GROUP BY h2.itemid, h2.clock, s2.tick
) s ON h.itemid = s.itemid AND h.clock = s.clock AND h.ns = s.ns
WHERE h.clock BETWEEN 1746975594 AND 1778511593
This query took 3.5 minutes on a 20 GB database on a host with 8 cores and 32 GB of RAM.
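The innermost subquery assigns each row to an 86400-second (one-day) bucket via clock - MOD(clock, 86400) and keeps the newest clock per bucket, which implements the "last" aggregation. A minimal Python sketch of that arithmetic (an illustration only, not Zabbix code; the names tick and last_per_bucket are made up here):

```python
BUCKET = 86400  # seconds per bucket, matching the SQL above

def tick(clock: int) -> int:
    """Start of the 86400-second bucket containing this Unix timestamp."""
    return clock - clock % BUCKET

def last_per_bucket(clocks):
    """Map each bucket start to the newest clock falling into it,
    mirroring the MAX(clock) ... GROUP BY tick step of the query."""
    latest = {}
    for c in clocks:
        b = tick(c)
        if b not in latest or c > latest[b]:
            latest[b] = c
    return latest

# Three samples; the first two land in the same one-day bucket.
samples = [1746975594, 1746975600, 1747062000]
print(last_per_bucket(samples))
```

The outer two joins then fetch the full row (value, ns) for each of those per-bucket latest clocks, using MAX(ns) to break ties when several rows share a clock value.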
Expected:
Plain PostgreSQL without TimescaleDB handles this much better, and the graphs load successfully.