ZABBIX BUGS AND ISSUES / ZBX-20590

preprocessing worker utilization


    • Sprint: Sprint 85 (Feb 2022), Sprint 86 (Mar 2022), Sprint 87 (Apr 2022), Sprint 88 (May 2022), Sprint 89 (Jun 2022), Sprint 90 (Jul 2022), Sprint 91 (Aug 2022), Sprint 92 (Sep 2022), Sprint 93 (Oct 2022), Sprint 94 (Nov 2022), Sprint 95 (Dec 2022), Sprint 96 (Jan 2023), Sprint 97 (Feb 2023), Sprint 98 (Mar 2023), Sprint 99 (Apr 2023), Sprint 100 (May 2023), Sprint 101 (Jun 2023), Sprint 102 (Jul 2023), Sprint 103 (Aug 2023), Sprint 104 (Sep 2023), Sprint 105 (Oct 2023)

      Hi, sorry for my English (Google Translate).

      After updating Zabbix 5.4 => 6.0, a bug (or a changed behavior) appeared.
      I monitor RabbitMQ queues with the standard templates (about 400 queues).
      On 5.4, preprocessing spread the load across 16 workers at 30-40% CPU each.
      On 6.0, preprocessing loads a single worker at 100% and everything accumulates in the queue.
      With 2 hosts (cluster + node), it is 2 workers at 100% each.
      I tried using a proxy; the problem remained the same: the proxy loads one worker per host at 100%.
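
      For reference, a minimal sketch of how this can be quantified, assuming StartPreprocessors=16 as on the 5.4 setup (the item keys below are standard Zabbix internal checks; intervals and graphing are up to the reader):

        # zabbix_server.conf: number of preprocessing worker forks
        StartPreprocessors=16

        # Internal items that make the effect visible on a graph:
        zabbix[process,preprocessing worker,avg,busy]   # average busy % across all preprocessing workers
        zabbix[preprocessing_queue]                     # number of values waiting in the preprocessing queue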

      OS: Oracle Linux 8, kernel 5.4.17-2136.304.4.1.el8uek.x86_64; Database: PostgreSQL + TimescaleDB; packages from the official Zabbix yum repo
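
      The per-worker skew is also visible from the OS side, since Zabbix sets descriptive process titles for its workers (a minimal sketch; adjust the process name to zabbix_proxy when checking the proxy):

        # Show CPU usage per preprocessing worker of the server
        ps -C zabbix_server -o pid,pcpu,args | grep 'preprocessing worker'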

        Attachments:
        1. exchanges.json (133 kB)
        2. image-2022-07-23-22-22-06-028.png (23 kB)
        3. image-2022-08-02-16-20-52-860.png (7 kB)
        4. overview.json (6 kB)
        5. rabbitmq_nested.yaml (65 kB)
        6. zbx_export_templates.yaml (96 kB)
        7. zbx_proxy_preprocessing_queue.png (34 kB)

            Assignee: asebiskveradze Aleksandre Sebiskveradze
            Reporter: lliuxah Igor Shekhanov
            Team: INT
            Votes: 21
            Watchers: 42
