[ZBX-16386] Memory leak on proxy Created: 2019 Jul 17  Updated: 2020 Apr 08  Resolved: 2020 Apr 08

Status: Closed
Project: ZABBIX BUGS AND ISSUES
Component/s: Proxy (P)
Affects Version/s: 4.2.3
Fix Version/s: None

Type: Incident report Priority: Trivial
Reporter: Ross Jurek Assignee: Edgar Akhmetshin
Resolution: Incomplete Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Production


Attachments: HTML File dep_results     PNG File image-2019-07-17-10-50-48-471.png     PNG File image-2019-07-17-10-53-00-584.png     PNG File image-2019-07-24-09-25-25-025.png     HTML File results    
Issue Links:
Duplicate
duplicates ZBX-15996 Possible memory leak in poller Closed

 Description   

Memory is slowly leaking and not recovering until the server is rebooted.

Upgraded from 3.2 to 4.2.3.

Proxy:

      Red Hat 7, 8 cores, 12 GB memory

No VMware hosts are monitored by this proxy. FYI, I have other proxies with the same config and there are no memory leaks on them; memory goes up and down as processing happens.

287 agents reporting to the proxy, 19780 items.

StartPollers=350
StartIPMIPollers=10
StartPollersUnreachable=300
StartTrappers=10
StartPingers=50
StartDiscoverers=10
StartDBSyncers=10
StartHTTPPollers=50
StartVMwareCollectors=5
VMwareFrequency=60
VMwareCacheSize=768M
CacheSize=768M
HistoryCacheSize=512M

 



 Comments   
Comment by Vladislavs Sokurenko [ 2019 Jul 17 ]

Could you please determine which exact process is leaking, and whether disabling some specific item types helps with the issue?
ps -aux should help
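
(A minimal sketch, assuming procps ps on RHEL 7: list the zabbix_proxy processes sorted by resident memory, so repeated runs show which process type keeps growing.)

ps -o pid,rss,vsz,cmd -C zabbix_proxy --sort=-rss | head -20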

Comment by Ross Jurek [ 2019 Jul 17 ]

There are no defunct processes; the results from ps -aux look fine.

Comment by Vladislavs Sokurenko [ 2019 Jul 17 ]

Can you please share the results of ps -aux | grep zabbix?

Comment by Ross Jurek [ 2019 Jul 17 ]

Here are the results of ps -aux | grep zabbix:

results

 

Comment by Alexey Pustovalov [ 2019 Jul 17 ]

Also please provide the following information:
The result of this SQL query on the Zabbix proxy database:

select count(*), type from items group by type;

Package dependencies. For example, on RHEL-like systems:

yum deplist "zabbix-proxy*"
Comment by Ross Jurek [ 2019 Jul 18 ]

mysql> select count(*), type from items group by type;
+----------+------+
| count(*) | type |
+----------+------+
|    20750 |    0 |
|      580 |    3 |
|      462 |    4 |
|       24 |    5 |
|     2934 |    7 |
|       36 |    9 |
+----------+------+
6 rows in set (0.02 sec)

 

dep_results

Comment by Vladislavs Sokurenko [ 2019 Jul 18 ]

I guess the problem is with a poller

zabbix    2664  0.0  0.1 2850604 16788 ?       S    08:43   0:02 /usr/sbin/zabbix_proxy: poller #2 [got 0 values in 0.000015 sec, idle 1 sec]

Does disabling simple checks help with the issue? Are you using any loadable modules?
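
(A hedged way to confirm the poller suspicion, assuming gawk and a hypothetical log path /tmp/poller_rss.log: record the combined resident memory of the poller processes every 5 minutes and check whether it only ever grows.)

while true; do ps -eo rss,cmd | awk '/[z]abbix_proxy: poller/ {sum+=$1} END {print strftime("%F %T"), sum, "kB"}' >> /tmp/poller_rss.log; sleep 300; done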

Comment by Ingus Vilnis [ 2019 Jul 19 ]

Way too many processes are started on the proxy to begin with: you have 350 pollers for 287 agents. The same goes for the caches and DB syncers. Lower the settings to reasonable values and see how it behaves then.

Related "memory leak" reported in ZBX-15996

 

Comment by Vladislavs Sokurenko [ 2019 Jul 19 ]

Closing as a duplicate of ZBX-15996, please continue the discussion there.

Comment by Ross Jurek [ 2019 Jul 19 ]

The ticket you marked this as a duplicate of is on a different version. Also, I have the same conf settings on my other proxies (22 of them) and do not see the memory leak on any of them. What is the guideline for setting the conf values, for example how many pollers are needed for a given number of items? Is there a doc with that info?
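
(A rough illustration of poller sizing, not an official formula: the 20750 type-0 (Zabbix agent) items are polled by the pollers; if their average update interval were 60 seconds, that would be about 20750 / 60 ≈ 346 polls per second, and at an assumed ~0.1 s per poll roughly 35 pollers would be busy on average, far below the 350 configured. The internal items zabbix[process,poller,avg,busy] and zabbix[rcache,buffer,pused] report actual utilization and are a better basis for tuning than a fixed items-to-pollers ratio.)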

Comment by Ross Jurek [ 2019 Jul 24 ]

Can I get an update on this issue? I am starting to see memory loss on a few of my other proxies now, just not as fast as on the proxy I opened the ticket for. You can see the slow decline of memory and then the reboot.

Comment by Vladislavs Sokurenko [ 2019 Jul 24 ]

Does disabling simple checks help with the issue? Are you using any loadable modules? I could not reproduce the issue locally.

Comment by Ross Jurek [ 2019 Jul 24 ]

On the proxy that is losing memory faster than the others, most of the checks performed are Windows eventlog checks and a discovery that collects 9 item values. The eventlog checks are active agent checks, and everything else is Zabbix agent.

287 agents reporting to the proxy, 19780 items.

There is only 1 simple check: ping.

 

Comment by Vladislavs Sokurenko [ 2019 Jul 24 ]

Is it possible to move all checks except eventlog to another Zabbix proxy and see whether leaving only eventlog still causes the memory leak?

Comment by Ross Jurek [ 2019 Jul 24 ]

I cannot easily move the hosts around; the proxies are there with specific firewall rules to specific hosts.

Comment by Vladislavs Sokurenko [ 2019 Jul 25 ]

Can you please run the following

select count(*), type from items group by type;

on the Zabbix proxy that leaks? Judging by the report, it's either a simple check or SNMPv2.

Comment by Ross Jurek [ 2019 Jul 25 ]

mysql> select count(*), type from items group by type;
+----------+------+
| count(*) | type |
+----------+------+
|    20750 |    0 |
|      580 |    3 |
|      462 |    4 |
|       24 |    5 |
|     2934 |    7 |
|       36 |    9 |
+----------+------+

Comment by Vladislavs Sokurenko [ 2019 Jul 25 ]

Thanks! The other ticket mentions:

 count | type
-------+------
 22229 |    4   SNMPv2
  5150 |    6   SNMPv3
  3706 |    3   simple check
  3558 |   10   external check
  3560 |    5   Zabbix internal

So if the same holds here, it's either type 3 or type 4 that leaks, that is, Simple check or SNMPv2 agent.
Your output shows 580 simple checks and 462 SNMPv2 agent items, but you said that you have only one simple check (ping); could you please double check?
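
(A hedged way to double-check on the proxy database, assuming the standard Zabbix proxy schema: list which hosts the simple-check and SNMPv2 items belong to.)

select h.host, i.type, i.key_ from items i join hosts h on h.hostid = i.hostid where i.type in (3,4) order by i.type, h.host limit 50;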

Generated at Fri Jul 18 08:39:44 EEST 2025 using Jira 9.12.4#9120004-sha1:625303b708afdb767e17cb2838290c41888e9ff0.