• Type: Change Request
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 4.2.0alpha1
    • Fix Version/s: None
    • Component/s: Server (S)
    • Labels:


      With new features (LLD preprocessing, custom scripts) preprocessing might become a bottleneck. Currently it can push ~160k small values/sec on an i7-7700 CPU. As preprocessed values become larger (especially in the LLD case) and the steps more complex (scripts), performance drops and might no longer be enough.

      There are two options. One is to rework preprocessing to use worker threads instead of processes; the data exchange load would be significantly reduced, especially with large data. The other is a brute-force approach: use multiple preprocessing managers, each with its own set of worker processes, and split items between managers by itemid.
      Some test results from converting the worker processes to threads (a 'trim' preprocessing step was applied to all values):

      Value size (bytes) | Values/sec (current, processes) | Values/sec (threaded workers using sockets) | Values/sec (threaded workers using queues)
      4                  | 167k | 173k | 590k
      128                | 158k | 170k | 530k
      1024               | 136k | 148k | 362k
      2048               | 124k | 141k | 268k
      4096               |  84k | 127k | 183k
      8192               |  68k |  99k | 115k

      Threaded workers using sockets

      The worker processes were replaced with threads. The old communication protocol (sockets) was kept, but instead of sending the data itself, only references to the data objects were sent. It could be optimized further, but it still gives a rough estimate.

      Threaded workers using queues

      In this test the manager-worker communication was changed to simple mutex-protected queues.


      In the worst case, both options can be combined: multiple preprocessing managers, each with its own set of worker threads.



    • Assignee: wiper (Andris Zeila), Zabbix Development Team