[ZBX-17694] High Memory utilization by preprocessing manager Created: 2020 May 08  Updated: 2024 Apr 10  Resolved: 2020 Jun 11

Status: Closed
Project: ZABBIX BUGS AND ISSUES
Component/s: Proxy (P), Server (S)
Affects Version/s: 4.4.7
Fix Version/s: 4.0.22rc1, 4.4.10rc1, 5.0.2rc1, 5.2.0alpha1, 5.2 (plan)

Type: Problem report Priority: Major
Reporter: Igor Gorbach (Inactive) Assignee: Vladislavs Sokurenko
Resolution: Fixed Votes: 5
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: JPEG File Proxy with json.jpg     JPEG File Proxy without json.jpg     File ZBX-17694-5.0-TEST.diff     File aps.json     PNG File callgrind.png     PNG File image-2020-05-08-14-10-04-050.png     PNG File image-2020-05-29-14-18-18-631.png     PNG File image-2020-05-29-14-29-00-413.png    
Issue Links:
Causes
Team: Team A
Sprint: Sprint 64 (May 2020), Sprint 65 (Jun 2020)
Story Points: 1

 Description   

The problem is related to gathering data from kafka_exporter.
An HTTP agent item (a basic GET request) is used as the master item for 11 LLD rules, each with 1 item prototype (depending on the same master item). Each LLD rule discovers >= 1500 items.
Size of the web page: 1.4 MB
Each LLD rule uses preprocessing (Prometheus to JSON); the LLD macros are defined by JSONPath
Each item prototype uses the preprocessing steps: 1. Prometheus pattern 2. Custom multiplier (1)
Master item update interval: 5 minutes
In this configuration the preprocessing manager subprocess uses over 19 GB of memory in the customer environment, and the RAM is not freed over time.
The situation is reproduced in a test environment (Zabbix proxy 4.4.7, 4 CPU, 8 GB RAM, 10 pollers, 10 preprocessing workers); the template crashed the system (not enough memory).
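For illustration, a minimal Python sketch of what such an item prototype's preprocessing conceptually does with each master value; the metric name, label and regex are assumptions made up for this example, and Zabbix implements the Prometheus pattern step natively, so this only mimics the idea:

{code:python}
import re

# Sample of the kind of exposition text a kafka_exporter page contains
# (metric name and labels here are illustrative, not from the real template).
master_value = '''
kafka_topic_partitions{topic="orders"} 3
kafka_topic_partitions{topic="payments"} 6
'''

def prometheus_pattern(text: str, metric: str, label: str, value: str) -> float:
    """Mimic a 'Prometheus pattern' step: pick one sample by metric name and one label."""
    pattern = rf'^{metric}\{{.*{label}="{value}".*\}}\s+(\S+)$'
    match = re.search(pattern, text, flags=re.MULTILINE)
    if match is None:
        raise ValueError("no matching Prometheus line")
    return float(match.group(1))

# Step 1: Prometheus pattern, step 2: custom multiplier (1), as in the item prototypes.
result = prometheus_pattern(master_value, "kafka_topic_partitions", "topic", "orders") * 1
print(result)  # 3.0
{code}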

Then I tried to modify the rules: clone the master item into 5 items (same logic, unique keys) and distribute the LLD rules among them (2-3 LLD rules per master item).
Result: only 4.5 GB of memory is used by the preprocessing manager, but it is still not freed over time.

I increased the update interval for all master items to 10m and updated the configuration cache on the server and the proxy - no result. After restarting the proxy, bursts of memory being freed are observed every 10 minutes, and the memory is eventually released.

 

Steps to reproduce:

  1. Import the Kafka template into Zabbix
  2. Create a simple web document with kafka_exporter data (see the sketch after this list)
  3. Create a new host, link it to a proxy and to the template
  4. Wait until all items are created from the item prototypes

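For step 2, a minimal sketch of serving a saved copy of the exporter output as a static web page, so the HTTP agent master item has something to poll; the file name kafka_metrics.txt and port 8000 are arbitrary choices for this example:

{code:python}
# Serve the current directory over HTTP; put the saved kafka_exporter output
# into kafka_metrics.txt there, and point the HTTP agent master item at
# http://<this-host>:8000/kafka_metrics.txt
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
{code}

The same thing can be done from the shell with python3 -m http.server 8000 run from that directory.
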
Result:

High RAM utilization by the preprocessing manager process

Expected:
RAM is freed by the preprocessing manager promptly, without having to restart zabbix-proxy or split the LLD rules across several master items



 Comments   
Comment by Vladislavs Sokurenko [ 2020 May 08 ]

It looks like the preprocessing manager copies the data from the master item to the dependent items before it starts sending them to the workers, and the limit was increased from 999 to 29999. It should be investigated whether it is possible to start sending to the workers once too much has already been copied, and to continue copying only after what has been copied has been processed. Another option to look at is whether copying can be avoided altogether when the data has not changed, so that the pointer can simply be copied or a string pool used.
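A back-of-envelope check of that hypothesis against the numbers in the description (the one-copy-per-queued-dependent-item assumption comes from the comment above; the payload size and item counts are from the report):

{code:python}
# Rough estimate, assuming the preprocessing manager keeps one copy of the
# master value for every queued dependent item.
payload_mb = 1.4        # size of the web page
items_per_lld = 1500    # ">= 1500 items for each LLD"
lld_rules = 11
master_items = 5        # the reporter's workaround: LLD rules split over 5 masters

dependent_items = items_per_lld * lld_rules               # 16,500 dependent items
single_master_gb = dependent_items * payload_mb / 1024    # ~22.6 GB queued per poll
# If the fan-out is split over 5 master items whose values are processed one at
# a time, only about 1/5 of the copies would be queued simultaneously:
split_master_gb = single_master_gb / master_items         # ~4.5 GB

print(f"{single_master_gb:.1f} GB vs {split_master_gb:.1f} GB")
{code}

Both figures are in the same range as the ~19 GB and 4.5 GB peaks reported in the description.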

Comment by Vladislavs Sokurenko [ 2020 May 08 ]

Having more master items could be used as a workaround.

Comment by Aaron [ 2020 May 14 ]

Hello,

I believe I'm hitting the same issue. In my case I'm retrieving a JSON document from a cluster of controllers which weighs around 200 KB. It feeds a set of item prototypes created from a master item, which is a script that pulls that JSON. After the first run the preprocessing manager hogs around 2 GB of memory and doesn't free it up over time.

These are pre-run and post-run views of the master item.

 

This is the configuration I'm using:

 

As a workaround, I deleted many, many item prototypes and increased the proxy memory.

 

I've attached a json example as well.

aps.json

 

Please let me know if any more info is needed to reproduce the issue. I believe this approach is great for monitoring, but it's a shame it can't be fully exploited due to this anomalous memory usage.

 

Aarón

Comment by Vladislavs Sokurenko [ 2020 May 25 ]

Implemented in pull request feature/ZBX-17694-5.0

Comment by Vladislavs Sokurenko [ 2020 May 25 ]

If possible, igorbach, please check the development branch and let me know if it solves your issue, thanks!

Comment by Vladislavs Sokurenko [ 2020 Jun 03 ]

Fixed in:

  • pre-4.0.22rc1 37dd8924494
  • pre-5.0.2rc1 b4c6113dd28
  • pre-5.2.0alpha1 (master) 884f52619bc