[ZBX-26184] Zabbix Server Performance Degradation After Update to Version 7.0.10 Created: 2025 Mar 16  Updated: 2025 Apr 09  Resolved: 2025 Apr 09

Status: Closed
Project: ZABBIX BUGS AND ISSUES
Component/s: Server (S)
Affects Version/s: 7.0.10
Fix Version/s: None

Type: Problem report Priority: Trivial
Reporter: Vinicius Freitas Assignee: Vladislavs Sokurenko
Resolution: Cannot Reproduce Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

OS: Red Hat 8 (6 vCPU / 25 GB RAM)
DB: MariaDB 10.6.21 (EL8 / 8 vCPU / 96 GB RAM / 8 TB SSD)
NVPS: ~9k
Proxies Count: 80

Number of hosts (enabled/disabled): 10864 (10337 / 527)
Number of templates: 1352
Number of items (enabled/disabled/not supported): 2715946 (1943367 / 601862 / 170717)
Number of triggers (enabled/disabled [problem/ok]): 854963 (791361 / 63602 [6645 / 784716])
Number of users (online): 104 (5)
Required server performance, new values per second: 9026.65
High availability cluster: Enabled, fail-over delay: 1 minute


Attachments: PNG File history_syncer_process_afeter_update_7.0.10.png     Text File server_diaginfo_7.0.10.txt     Text File server_diaginfo_after_rollback_15-mar.txt     PNG File zbx_cache_before-and-at_update.png     PNG File zbx_cache_graph_afeter_rollback.png     PNG File zbx_queue_before-and-at_update.png     PNG File zbx_queue_graph_afeter_rollback.png    
Issue Links:
Duplicate

 Description   

Steps to reproduce:

  1. Update from 7.0.9 to 7.0.10

Result:

After the version update, our server, which handles around 9k NVPS, showed a significant delay in synchronizing data from the proxies and writing it to the database. Even after 48 hours the problem persisted. The history syncer processes were barely working: it was very difficult to see them actually sending data to the database.

Another observation was that History Cache usage never dropped below 80% throughout this entire period. It is important to highlight that on version 7.0.9, before the update, these issues were not present.
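The cache and syncer symptoms above can be graphed with standard Zabbix internal items (these keys come from the stock internal-checks set; the threshold notes in the comments reflect this report, not defaults):

```
zabbix[wcache,history,pfree]              # % of history write cache free (stayed below 20% here)
zabbix[process,history syncer,avg,busy]   # average history syncer busy %
zabbix[queue,10m]                         # number of monitored items delayed more than 10 minutes
```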

No changes were made apart from the minor version update to 7.0.10 on March 12, 2025.

Additionally, the proxy queue, which normally stabilized between 10k and 15k, spiked to over 1.5 million. However, after rolling back to version 7.0.9, everything returned to normal in less than an hour.

Expected:
The expectation was that normalization would occur within a maximum of 3 hours after the update and that the queue and history cache levels would return to normal.

 



 Comments   
Comment by Vladislavs Sokurenko [ 2025 Mar 17 ]

Could you please share information about the following items:
itemid:5356716 values:2663765
itemid:326857 values:6612
itemid:1087000 values:6466
itemid:1205891 values:5904
itemid:7363922 values:5901
itemid:1205939 values:5684
itemid:108296375 values:4950
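The "itemid:... values:..." lines above match the format of the history cache section in the server diagnostic output (the reporter attached server diaginfo files; `zabbix_server -R diaginfo=historycache` prints the top items by buffered values). A small hedged helper, assuming that line format, to sort such lines and spot which items dominate the cache:

```python
import re

def top_cached_items(diag_lines, n=3):
    """Parse 'itemid:<id> values:<count>' lines and return the n items
    holding the most buffered history values, largest first."""
    pat = re.compile(r"itemid:(\d+)\s+values:(\d+)")
    found = []
    for line in diag_lines:
        m = pat.search(line)
        if m:
            found.append((int(m.group(1)), int(m.group(2))))
    return sorted(found, key=lambda t: t[1], reverse=True)[:n]

lines = [
    "itemid:5356716 values:2663765",
    "itemid:326857 values:6612",
    "itemid:1087000 values:6466",
]
print(top_cached_items(lines, n=2))
# [(5356716, 2663765), (326857, 6612)]
```

In this ticket the output would immediately show item 5356716 (`system.uname`) holding two orders of magnitude more values than any other item.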

Comment by Vladislavs Sokurenko [ 2025 Mar 17 ]

It might be related to ZBX-26116. Is VMware also used?

Comment by Vinicius Freitas [ 2025 Mar 17 ]

About the items:

itemid   |name                                      |key_                                                                        
---------------------------------------------------------------------------------------------------------------------------
   326857|VMware: Event log                         |vmware.eventlog[\{$VMWARE.URL},skip]                                         
  1087000|Syslog check HAB-SUM01 error %ASA-6-302014|log[/var/log/network/netlogs/fwlogs/HAB-SUM01.log,"ASA-6-302014",,,skip,,,,]
  1205891|Syslog check HAB-ITI01 error %ASA-6-302014|log[/var/log/network/netlogs/fwlogs/HAB-ITI01.log,"ASA-6-302014",,,skip,,,,]
  1205939|Syslog check HDA-HONDA error %ASA-6-302014|log[/var/log/network/netlogs/fwlogs/HDA-HONDA.log,"ASA-6-302014",,,skip,,,,]
  5356716|Operating system                          |system.uname                                                                
  7363922|VMware: Event log                         |vmware.eventlog[\{$VMWARE.URL},skip]                                         
108296375|InfraSocks Log 2                          |log[/var/log/sockd_dsint.log,".*(pam.username).*",,,skip,,60]

And yes, VMware is used in the DC.

Comment by Vladislavs Sokurenko [ 2025 Mar 17 ]

Item 5356716 appears to have received millions of values. What is it used for, and what kind of triggers reference it? Is it possible to filter out unneeded values?
Please also share the update interval for the following item:
itemid:5356716

Comment by Vladislavs Sokurenko [ 2025 Apr 09 ]

Closing as there is no response; please let me know if the issue still persists.

Generated at Sat Aug 02 10:50:42 EEST 2025 using Jira 9.12.4#9120004-sha1:625303b708afdb767e17cb2838290c41888e9ff0.