[ZBXNEXT-7049] Prometheus should support bulk processing Created: 2021 Nov 12  Updated: 2024 Oct 28  Resolved: 2022 Jan 25

Status: Closed
Project: ZABBIX FEATURE REQUESTS
Component/s: Proxy (P), Server (S)
Affects Version/s: None
Fix Version/s: 6.0.0beta2, 6.0 (plan)

Type: New Feature Request Priority: Trivial
Reporter: Aleksandrs Larionovs (Inactive) Assignee: Andris Zeila
Resolution: Fixed Votes: 0
Labels: None
Σ Remaining Estimate: Not Specified Remaining Estimate: Not Specified
Σ Time Spent: Not Specified Time Spent: Not Specified
Σ Original Estimate: Not Specified Original Estimate: Not Specified

Attachments: api_doc_1.png (PNG), api_doc_2.png (PNG), kuber2.yaml, screenshot-1.png (PNG)
Issue Links:
Duplicate
Sub-task
  • part of ZBXNEXT-4635 Zabbix Integration with Kubernetes (Closed)
  • part of ZBXNEXT-7002 Kubernetes API server (Closed)
Sub-Tasks:
Key           Summary                                    Type                              Status   Assignee
ZBXNEXT-7112  Prometheus should support bulk proces...   Specification change (Sub-task)   Closed   Maxim Chudinov
Team: Team A
Sprint: Sprint 82 (Nov 2021), Sprint 83 (Dec 2021), Sprint 84 (Jan 2022)
Story Points: 5

 Description   

Add bulk processing to Prometheus metrics requests to improve performance.

Add aggregate functions (Avg, Sum, Count, Sum by).

This will allow us to reduce JSONPath usage in template development.
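
For illustration only, here is a minimal Python sketch of what Sum, Avg and Count over the series of one metric in a scraped Prometheus payload would compute. The payload and helper names are made up for this example and are not the Zabbix preprocessing implementation.

{code:python}
# Minimal sketch only: Sum/Avg/Count over all series of one metric in a
# Prometheus exposition payload. PAYLOAD and matching_values are invented
# for this example and are not part of Zabbix.
from statistics import mean

PAYLOAD = """\
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
node_cpu_seconds_total{cpu="0",mode="idle"} 123.4
node_cpu_seconds_total{cpu="1",mode="idle"} 150.6
node_cpu_seconds_total{cpu="0",mode="user"} 10.0
"""

def matching_values(payload, metric):
    """Collect the sample values of every series with the given metric name."""
    values = []
    for line in payload.splitlines():
        if not line or line.startswith("#"):
            continue                                 # skip HELP/TYPE comments
        name = line.split("{", 1)[0].split()[0]      # metric name before labels
        if name == metric:
            values.append(float(line.rsplit(None, 1)[-1]))
    return values

vals = matching_values(PAYLOAD, "node_cpu_seconds_total")
print("sum   =", sum(vals))              # 284.0
print("avg   =", round(mean(vals), 2))   # 94.67
print("count =", len(vals))              # 3
{code}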



 Comments   
Comment by Andris Zeila [ 2021 Dec 09 ]

Released ZBXNEXT-7049 in:

Comment by Andris Zeila [ 2021 Dec 10 ]

Released ZBXNEXT-7049 in:

  • pre-6.0.0alpha8 1b9ef9b140
Comment by Martins Valkovskis [ 2022 Jan 05 ]

Updated documentation:

Updated API documentation:

Comment by Yi Troy Lu [ 2022 Jan 29 ]

@Ivo Kurzemnieks

I am a member of the Chinese document translation team.

Regarding the Prometheus bulk processing documentation, I don't understand how the output of one preprocessing step can improve performance across multiple dependent items.
Or, put differently: how does having the Prometheus pattern as the first preprocessing step improve Prometheus check performance?

Is there a blog post that explains this?
I think the official documentation may need external links or a few more sentences stating that this is an optimization of the underlying implementation,
and some recommended practices for applying this improvement.

My guess is that the improvement only applies in one specific case:

  • Multiple dependent items share the same master item.
  • These items all use the same Prometheus pattern as their first preprocessing step.
    It also depends on how the backend logic handles preprocessing.

P.S.
I found this commit about etcd metrics collection. It may be a good example:
https://git.zabbix.com/projects/ZBX/repos/zabbix/commits/9eda77c8ef4d3fb8f93a5546ec895be760fa835a#templates/app/etcd_http/template_app_etcd_http.yaml

Comment by Andris Zeila [ 2022 Jan 31 ]

When a master item value was processed, the preprocessing manager spawned 'jobs' for each dependent item. Those 'jobs' were sent to different preprocessing workers and processed separately, so there was overhead both in data sending (with Prometheus data this can easily reach megabytes per dependent item) and in Prometheus data parsing.

Two improvements were made to this process:
1) Dependent item bulk processing
Now the master item value is sent to only one preprocessing worker.
2) Prometheus caching
If the Prometheus preprocessing step is the first one (which is the common setup), the worker parses and indexes the data only once. Dependent item values can then be extracted from the parsed data (and in most cases also from the index).
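
As a rough sketch of the "parse once, extract many times" idea described above (the data structures and function names are assumptions for illustration, not the actual preprocessing worker code):

{code:python}
# Rough illustration of Prometheus caching: the worker parses and indexes the
# master item value once, and each dependent item extraction becomes a lookup.
# build_index/extract are invented names for this sketch, not Zabbix code.
from collections import defaultdict

def build_index(payload):
    """Parse the Prometheus text payload once, indexing sample lines by metric name."""
    index = defaultdict(list)
    for line in payload.splitlines():
        if not line or line.startswith("#"):
            continue
        name = line.split("{", 1)[0].split()[0]
        index[name].append(line)          # keep raw sample lines for later matching
    return index

def extract(index, metric):
    """Per-dependent-item extraction: a dictionary lookup instead of a re-parse."""
    return index.get(metric, [])

payload = "etcd_server_has_leader 1\netcd_mvcc_db_total_size_in_bytes 4096\n"
index = build_index(payload)              # done once per master item value
for metric in ("etcd_server_has_leader", "etcd_mvcc_db_total_size_in_bytes"):
    print(metric, "->", extract(index, metric))
{code}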

Comment by Yi Troy Lu [ 2022 Jan 31 ]

Thank you for your explanation.
