• Type: Change Request
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.4.7
    • Fix Version/s: None
    • Component/s: Server (S)
    • Labels:


      I would like to ask for improvements in the bulk processing of SNMP packets. I know how it works internally, but it isn't working well in my environment.

      I'm trying to run LLD (low-level discovery) against a big Cisco ASR9k router which contains thousands of interfaces.

      From the command line:
      $ snmpbulkwalk -mall -v2c -c community ifName
      I'm able to get all 2k+ interfaces within a few seconds. Net-SNMP uses max-repeaters 10 by default. Playing with the -Cr<NUM> switch gives better results when <NUM> is bigger; with, for example, 100, it's still possible to get the response in one packet.
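      For reference, the arithmetic behind this: with GETBULK, each request/response round trip returns at most max-repetitions rows, so walking the table costs roughly ceil(rows / max_repetitions) round trips. A minimal C sketch (the row count of 2000 is illustrative for this router):

      ```c
      #include <stdio.h>

      /* Round trips needed to walk `rows` table rows when each GETBULK
       * response carries up to `max_repetitions` rows: ceil(rows / max_reps). */
      int round_trips(int rows, int max_repetitions)
      {
          return (rows + max_repetitions - 1) / max_repetitions;
      }

      int main(void)
      {
          int rows = 2000;    /* roughly the interface count on the ASR9k */

          printf("-Cr10:  %d round trips\n", round_trips(rows, 10));
          printf("-Cr100: %d round trips\n", round_trips(rows, 100));
          return 0;
      }
      ```

      This is why raising -Cr from 10 to 100 cuts the walk from ~200 round trips to ~20 and the whole operation finishes in seconds.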

      But Zabbix uses its own logic. It starts slowly with a low max_vars in the function zbx_snmp_walk() (zabbix_server/poller/checks_snmp.c) and then, after some successfully retrieved responses, increases max_vars. According to tcpdump, in my case it reached a maximum of 15; then the SNMP daemon started responding slowly (maybe because of the previous small requests), then Zabbix decreased max_vars, which is even worse, and the whole LLD failed.
      The global timeout in the poller configuration is capped at 30s, which is also not very good.
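      The dynamic behaviour described above can be sketched roughly as follows (this is my reading of zbx_snmp_walk(), not the actual Zabbix code; the constants and the doubling/halving steps are illustrative):

      ```c
      #include <stdio.h>

      #define MAX_VARS_MIN 1
      #define MAX_VARS_CAP 15    /* the ceiling I observed in tcpdump */

      /* Illustrative sketch of an adaptive max_vars strategy: grow the
       * request size after successful responses, shrink it after a timeout.
       * NOT the exact Zabbix implementation. */
      int adjust_max_vars(int max_vars, int request_ok)
      {
          if (request_ok)
          {
              if (max_vars < MAX_VARS_CAP)
                  max_vars *= 2;          /* grow after a success */
              if (max_vars > MAX_VARS_CAP)
                  max_vars = MAX_VARS_CAP;
          }
          else
          {
              max_vars /= 2;              /* shrink after a timeout */
              if (max_vars < MAX_VARS_MIN)
                  max_vars = MAX_VARS_MIN;
          }
          return max_vars;
      }

      int main(void)
      {
          /* A slow agent makes requests time out, so max_vars collapses
           * back toward 1 and the walk degenerates into many tiny requests. */
          int max_vars = 15;

          max_vars = adjust_max_vars(max_vars, 0);
          printf("after timeout: max_vars = %d\n", max_vars);
          return 0;
      }
      ```

      The problem is the feedback loop: once the agent responds slowly, max_vars shrinks, the walk needs even more round trips, and the 30s poller timeout is exceeded.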

      Disabling bulk processing for this device is a no-go, because a plain snmpwalk over the ifName tree takes ~2m30s (thousands of interfaces).

      So what I would like is some possibility of fine-tuning SNMP bulk processing:


      • ability to disable the dynamic adjustment of max-repeaters
      • possibility to define a static value for max-repeaters (fine-tuning the response size)
      • implement it in the same user interface where disabling bulk requests is already possible
      • a global configuration parameter for zabbix_server
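      As an illustration only, the requested global parameter might look like this in zabbix_server.conf (the parameter name is purely hypothetical; no such option exists in 3.4.7):

      ```
      ### Option: SNMPBulkMaxRepetitions (hypothetical, proposed)
      #       Static max-repetitions value for SNMP GETBULK requests,
      #       disabling the dynamic max_vars adjustment when set.
      #
      # Mandatory: no
      # Default:
      # SNMPBulkMaxRepetitions=10

      SNMPBulkMaxRepetitions=100
      ```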

      Please take a look at it. Thanks.




            • Assignee: Zabbix Support Team
            • Reporter: phuture Tibor Pittich
            • Votes: 0
            • Watchers: 3


              • Created: