Type: Change Request
Resolution: Unresolved
Priority: Major
Fix Version/s: None
Affects Version/s: 1.8.2
Environment: Linux, PostgreSQL
We have hundreds of monitored servers with thousands of checks in total. The size of the items table and the various history tables in the Zabbix database is a major scalability problem for us: even though the database runs on a very fast RAID array with 10+ disks, a PostgreSQL autovacuum of the items table makes the server almost unusable.
Long term, the amount of data we can store in those tables will determine whether we can continue to use Zabbix at all. Has any thought been given to a more scalable storage mechanism? Some ideas:
- partition the tables in SQL (we would be OK with switching to MySQL if needed, but official support for table partitioning would help)
- store the high-volume history data in BerkeleyDB, Cassandra, Hadoop, or another more scalable storage backend
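The partitioning idea above can be sketched concretely. Zabbix history tables key their rows on `itemid` and a Unix-epoch `clock` column, so monthly range partitions on `clock` are a natural fit for PostgreSQL 10+ declarative partitioning (this ticket predates that feature, so treat it as an illustration, not an official scheme). The helper below is hypothetical and the column list is simplified from the real schema; it only generates the DDL strings, which a DBA would review and run via psql.

```python
import calendar
from datetime import date, datetime, timezone


def epoch(d: date) -> int:
    """Unix epoch seconds for UTC midnight of a date (matches Zabbix 'clock')."""
    return int(datetime(d.year, d.month, d.day, tzinfo=timezone.utc).timestamp())


def next_month(d: date) -> date:
    """First day of the month after d."""
    return date(d.year + (d.month == 12), d.month % 12 + 1, 1)


def partition_ddl(table: str, start: date, months: int) -> list[str]:
    """Generate DDL for a range-partitioned history table plus monthly partitions.

    Column list is a simplified stand-in for the real Zabbix history schema.
    Each partition covers [first-of-month, first-of-next-month) in epoch seconds.
    """
    stmts = [
        f"CREATE TABLE {table} ("
        "itemid bigint NOT NULL, clock integer NOT NULL, value numeric NOT NULL"
        ") PARTITION BY RANGE (clock);"
    ]
    lo = date(start.year, start.month, 1)
    for _ in range(months):
        hi = next_month(lo)
        stmts.append(
            f"CREATE TABLE {table}_p{lo:%Y_%m} PARTITION OF {table} "
            f"FOR VALUES FROM ({epoch(lo)}) TO ({epoch(hi)});"
        )
        lo = hi
    return stmts
```

With this layout, dropping an old partition replaces a long-running DELETE plus the autovacuum pass that follows it, which is exactly the pain point described above.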
causes:
- ZBXNEXT-4417 Real time value, trends, events export module (Z4) (Closed)
- ZBXNEXT-4868 Support of TimescaleDB (Closed)

depends on:
- ZBXNEXT-3661 Support of RestAPI or JSON RPC as back-end transport for history database (Reopened)

is duplicated by:
- ZBXNEXT-2640 Add ability to send numeric history to amqp queue for distribution to other databases (Reopened)
- ZBXNEXT-2810 Send data to Fluentd (Closed)
- ZBXNEXT-1836 Support of loadable modules for alternative storage of historical data (Closed)

part of:
- ZBXNEXT-4417 Real time value, trends, events export module (Z4) (Closed)