We have hundreds of monitored servers with thousands of checks in total. The size of the items and various history tables in the Zabbix database is a major scalability problem for us: even though the database runs on a very fast RAID array with 10+ disks, a PostgreSQL autovacuum of the items table makes the server almost unusable.
Long-term, the amount of data we can store in those tables will determine whether we can continue to use Zabbix at all. Has any thought been given to a more scalable storage mechanism? Some ideas:
- partition the history tables in SQL (we'd be OK with switching to MySQL if needed, but some official support for table partitioning would help)
- support storing the bulk history data in BerkeleyDB, Cassandra, Hadoop, or some other more scalable storage backend
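To illustrate the first idea: a minimal sketch of what monthly range partitioning of a history table could look like, assuming PostgreSQL's declarative partitioning (available in PostgreSQL 10+) and the standard Zabbix schema where `clock` is an epoch timestamp. This is not official Zabbix functionality; the helper below just generates the partition DDL one would run (or schedule) per month, given a parent table created with `PARTITION BY RANGE (clock)`:

```python
from datetime import datetime, timezone

def month_partition_ddl(parent, year, month):
    """Generate DDL for one monthly partition of a Zabbix history table.

    Assumes the parent table was declared as, e.g.:
        CREATE TABLE history (...) PARTITION BY RANGE (clock);
    where `clock` is the Unix epoch timestamp Zabbix stores with each value.
    The table name pattern (parent_yYYYYmMM) is our own convention.
    """
    # Partition bounds: first second of this month up to (not including)
    # the first second of the next month, in UTC epoch seconds.
    start = int(datetime(year, month, 1, tzinfo=timezone.utc).timestamp())
    next_year, next_month = (year + 1, 1) if month == 12 else (year, month + 1)
    end = int(datetime(next_year, next_month, 1, tzinfo=timezone.utc).timestamp())
    name = f"{parent}_y{year}m{month:02d}"
    return (
        f"CREATE TABLE {name} PARTITION OF {parent} "
        f"FOR VALUES FROM ({start}) TO ({end});"
    )

print(month_partition_ddl("history", 2024, 1))
# → CREATE TABLE history_y2024m01 PARTITION OF history
#   FOR VALUES FROM (1704067200) TO (1706745600);
```

The payoff for our autovacuum problem is that old data is removed by dropping a whole partition (`DROP TABLE history_y2024m01;`) instead of bulk `DELETE`s, so autovacuum never has to reclaim millions of dead rows from one giant table.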