[ZBX-23772] VMware memory leak Created: 2023 Dec 01 Updated: 2024 May 09 Resolved: 2023 Dec 11 |
|
Status: | Closed |
Project: | ZABBIX BUGS AND ISSUES |
Component/s: | Server (S) |
Affects Version/s: | 7.0.0alpha7, 7.0.0alpha8 |
Fix Version/s: | 7.0.0alpha9, 7.0 (plan) |
Type: | Problem report | Priority: | Major |
Reporter: | Tomáš Heřmánek | Assignee: | Michael Veksler |
Resolution: | Fixed | Votes: | 0 |
Labels: | VMware, alpha, crash | ||
Remaining Estimate: | Not Specified | ||
Time Spent: | Not Specified | ||
Original Estimate: | Not Specified | ||
Environment: |
Debian 11, Percona DB, tested on VMware 7.0, 8.0 |
Attachments: |
screenshots, zabbix_server.log |
Issue Links: |
Team: |
Sprint: | Sprint 107 (Dec 2023) | ||||||||
Story Points: | 0.5 |
Description |
Steps to reproduce:
Result:
We are using the standard template, but we are also using keys for snapshot monitoring and maintenance. |
Comments |
Comment by Michael Veksler [ 2023 Dec 02 ] |
Be so kind to disable vmware.eventlog[{$VMWARE.URL},skip] before running zabbix_server for testing. Please confirm that your eventlog item contains the 'skip' option. What is the value of VMwareFrequency? |
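For context, the two forms of the event log key differ in how much history they pull. A sketch based on the key discussed above; the behaviour of the default mode is my reading of the Zabbix item, not something stated in this ticket:

```
# With 'skip', only events arriving after the item starts are processed:
vmware.eventlog[{$VMWARE.URL},skip]
# Without 'skip' (default mode), stored event history is also read,
# which is much heavier on the VMware cache:
vmware.eventlog[{$VMWARE.URL}]
```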
Comment by Tomáš Heřmánek [ 2023 Dec 02 ] |
Hi Michael, I disabled the item with key vmware.eventlog[{$VMWARE.URL},skip]. After around five minutes our problem is back: 40 GB is consumed by the poller. I have these VMware settings in zabbix_server.conf:

```
# Option: StartVMwareCollectors (1 vCenter with 3 nodes + 1 server without vCenter)
# Number of pre-forked vmware collector instances.
#
# Mandatory: no
# Range: 0-250
# Default:
# StartVMwareCollectors=0
StartVMwareCollectors=4

# Option: VMwareFrequency
# How often Zabbix will connect to VMware service to obtain a new data.
#
# Mandatory: no
# Range: 10-86400
# Default:
# VMwareFrequency=60
VMwareFrequency=60

# Option: VMwarePerfFrequency (yes, I know the value is too low, but it is fine for a test environment)
# How often Zabbix will connect to VMware service to obtain performance data.
#
# Mandatory: no
# Range: 10-86400
# Default:
# VMwarePerfFrequency=60
VMwarePerfFrequency=10

# Option: VMwareCacheSize (increased when I found the problems)
# Size of VMware cache, in bytes.
# Shared memory size for storing VMware data.
# Only used if VMware collectors are started.
#
# Mandatory: no
# Range: 256K-2G
# Default:
# VMwareCacheSize=2G

# Option: VMwareTimeout
# Specifies how many seconds vmware collector waits for response from VMware service.
#
# Mandatory: no
# Range: 1-300
# Default:
# VMwareTimeout=10
VMwareTimeout=300
```
|
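To quantify the leak while reproducing, the resident memory of the collector processes can be summed from `ps` output. A minimal sketch, assuming the forked workers are identifiable by the string 'vmware collector' in their command line:

```shell
# sum_rss: read `ps` lines (RSS in KiB as the first column) on stdin
# and print the total RSS in KiB.
sum_rss() {
  awk '{ total += $1 } END { print total + 0 }'
}

# Example (hypothetical worker naming):
#   ps -o rss=,args= -C zabbix_server | grep 'vmware collector' | sum_rss
```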
Comment by Michael Veksler [ 2023 Dec 02 ] |
Be so kind to set:

LogFileSize=100
StartVMwareCollectors=1

and immediately after start, increase the log level:

zabbix_server -R log_level_increase="vmware collector"

Please find the last function that enters the infinite loop. |
Comment by Tomáš Heřmánek [ 2023 Dec 02 ] |
Hi Michael, I ran some tests and made a discovery: the discovery rule vmware.vm.discovery[{$VMWARE.URL}] causes our memory leak. You can find a screenshot of the item in the attachments. I also found a strange error: I have all the other data, but not for the VM discovery item.

1040346:20231202:213355.187 End of vmware_service_rest_authenticate():FAIL

Here is the attached log: zabbix_server.log. I have an idea: https://communities.vmware.com/t5/VMware-vCenter-Discussions/REST-API-authentication-RESOLVED/td-p/2971666 . This test VMware server is connected to AD, but I am using a local user for monitoring. Maybe it is more a VMware bug than a Zabbix bug, but some code needs to be adjusted for this kind of situation. Tom |
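The FAIL result above can be tallied directly from the server log to see whether REST authentication ever succeeds. A minimal sketch; the log location passed in the usage example is the Debian default and an assumption on my side:

```shell
# count_rest_auth: count the per-outcome occurrences (e.g. FAIL) of
# vmware_service_rest_authenticate() results in a Zabbix server log file.
count_rest_auth() {
  grep -o 'vmware_service_rest_authenticate():[A-Z]*' "$1" | sort | uniq -c
}

# Example: count_rest_auth /var/log/zabbix/zabbix_server.log
```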
Comment by Michael Veksler [ 2023 Dec 03 ] |
Hi tomas.hermanek, Many thanks for the help. You may try testing the fixed version from the feature/ZBX-23772-6.5 branch. |
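For anyone wanting to test the fix branch, it can be built from the Zabbix sources. A hedged sketch only: the repository URL and autotools flow are the standard documented ones, and VMware monitoring support requires the libxml2 and libcurl configure flags; adjust the database flag to your setup:

```shell
git clone https://git.zabbix.com/scm/zbx/zabbix.git
cd zabbix
git checkout feature/ZBX-23772-6.5
./bootstrap.sh
# --with-mysql matches the Percona DB mentioned in the environment
./configure --enable-server --with-mysql --with-libxml2 --with-libcurl
make
```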
Comment by Michael Veksler [ 2023 Dec 04 ] |
Available in:
|