[ZBX-6353] Zabbix server does not process data by DBSyncers Created: 2013 Mar 06 Updated: 2017 May 30 Resolved: 2014 Oct 25 |
|
Status: | Closed |
Project: | ZABBIX BUGS AND ISSUES |
Component/s: | Proxy (P), Server (S) |
Affects Version/s: | 2.0.6rc1 |
Fix Version/s: | 2.2.8rc1, 2.4.2rc1, 2.5.0 |
Type: | Incident report | Priority: | Blocker |
Reporter: | Alexey Pustovalov | Assignee: | Unassigned |
Resolution: | Fixed | Votes: | 2 |
Labels: | cache, dbsyncer, performance | ||
Remaining Estimate: | Not Specified | ||
Time Spent: | Not Specified | ||
Original Estimate: | Not Specified | ||
Environment: |
Zabbix 2.0.5, MySQL 5.5.29 |
Issue Links: |
|
Description |
If an item receives a lot of data:

SELECT itemid, num FROM trends WHERE clock>=unix_timestamp('2012-03-05 11:00:00') AND num > 120 ORDER BY num DESC LIMIT 100;

itemid | num
------ | ------
11 | 116473
11 | 54078
11 | 10069

then the Zabbix server cannot process all of this data with the DBSyncers, and the trappers cannot insert new data into the history cache. In this case the normal rate (3.5-4K NVPS) drops to 200-300 NVPS. This happens because of the history cache implementation: all new values must be processed in clock order, so if one (or several) items have a lot of data per second, the data of a single item cannot be split between several DBSyncers. One syncer will hold the cache LOCK the whole time, while the other syncers will report that the cache is empty. |
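The starvation described above can be sketched as follows (a minimal illustration only, not Zabbix source; `take_batch`, the `value` struct, and the claimed-item list are invented for this sketch): because values are taken in clock order and all values of one item must go to a single syncer, a hot item leaves the remaining syncers with a seemingly empty cache.

```c
#include <stddef.h>

/* One cached value: which item it belongs to and its timestamp. */
struct value { int itemid; int clock; };

/* Sketch of one syncer taking a batch: scan values in clock order,
 * claim the first item not already claimed by another syncer, and take
 * only that item's values (a single item cannot be split between
 * syncers). Returns the number of values taken. */
static int take_batch(const struct value *cache, int n,
                      const int *claimed, int nclaimed,
                      int *out_itemid)
{
    int i, j, taken = 0;

    *out_itemid = -1;
    for (i = 0; i < n; i++)
    {
        int is_claimed = 0;

        for (j = 0; j < nclaimed; j++)
            if (claimed[j] == cache[i].itemid)
                is_claimed = 1;
        if (is_claimed)
            continue;
        if (-1 == *out_itemid)
            *out_itemid = cache[i].itemid;  /* claim the first free item */
        if (cache[i].itemid == *out_itemid)
            taken++;
    }
    return taken;
}
```

With nine values of item 11 and one value of item 12 in the cache, the first syncer takes all nine values of item 11, the second gets a single value, and the third gets none, matching the "other syncers will say that cache is empty" behaviour from the report.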
Comments |
Comment by Oleksii Zagorskyi [ 2014 Feb 05 ] |
|
Comment by richlv [ 2014 Mar 05 ] |
according to sasha, one solution could be to take fragments from the cache instead of parsing all of it (in case 1k different items are not collected soon enough)
Comment by Andris Zeila [ 2014 Oct 14 ] |
wiper: We decided to stick with this approach (iterating through at most ZBX_SYNC_MAX * 10 slots when taking items from the history cache). It showed a nice performance increase on history cache synchronization (a few times faster at least, though the exact value is hard to measure). The history cache is also locked for less time, so adding values to it will be faster too. Another optimization approach (using a hashset to keep the item ids being processed) resulted in a minimal improvement (less than 1%), so it is left for a possible history cache redesign in the future.
asaveljevs: r50053 suggests what is deemed to be an improvement: if we are iterating through slots and see that we are taking less than 10% of the values we see, then stop trying. This showed slightly better performance than the original solution.
wiper: Thanks. It's been reviewed and tested. |
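The fix this comment describes can be sketched roughly as follows (a hypothetical illustration, not the actual r50053 code; `scan_slots` and the `takeable` array are invented for this sketch): scan at most ZBX_SYNC_MAX * 10 slots, and additionally stop early once less than 10% of the values seen so far could be taken.

```c
#define ZBX_SYNC_MAX 1000  /* values taken per sync cycle (assumed) */

/* Sketch of the bounded scan with the 10% early-stop heuristic.
 * takeable[slot] is 1 when the value in that slot can be taken by this
 * syncer (its item is not being processed elsewhere), 0 otherwise.
 * Returns the number of values taken. */
static int scan_slots(const int *takeable, int nslots)
{
    int slot, seen = 0, taken = 0;
    int max_slots = nslots < ZBX_SYNC_MAX * 10 ? nslots : ZBX_SYNC_MAX * 10;

    for (slot = 0; slot < max_slots && taken < ZBX_SYNC_MAX; slot++)
    {
        seen++;
        taken += takeable[slot];

        /* early stop: we are taking less than 10% of the values we see */
        if (seen >= 10 && 10 * taken < seen)
            break;
    }
    return taken;
}
```

The slot bound keeps one syncer from walking (and locking) the entire cache, and the early stop gives up quickly when most values belong to items already claimed by other syncers.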
Comment by Andris Zeila [ 2014 Oct 15 ] |
Fixed in development branch svn://svn.zabbix.com/branches/dev/ZBX-6353 |
Comment by Andris Zeila [ 2014 Oct 24 ] |
Released in:
|
Comment by Andris Zeila [ 2014 Oct 24 ] |
(1) Updated documentation.
sasha: CLOSED |
Comment by richlv [ 2014 Oct 25 ] |
(2) This change went into both 2.4 and trunk, but the changelog entry is present for 2.5.0; I believe it should not be.
sasha: Thanks! Removed directly from trunk in r50188.
CLOSED |
Comment by Oleksii Zagorskyi [ 2015 Mar 30 ] |
The implemented solution is not efficient enough - |