ZABBIX BUGS AND ISSUES: ZBX-13137

Selected 'OK event generation: None' but recovery data remains in escalations


    • Type: Problem report
    • Resolution: Workaround proposed
    • Priority: Critical
    • Fix Version/s: None
    • Affects Version/s: 3.2.10, 3.4.4
    • Component/s: Server (S)
    • Labels: None
    • Sprint: Sprint 27

      'OK event generation: None' has been available in trigger settings since 3.2.

      There is some confusion in the escalations data: unused recovery data appears to remain in the escalations table.
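
      The trigger setting can be confirmed at the database level (a hedged sketch: in the 3.2+ schema 'OK event generation' is stored in triggers.recovery_mode, where 2 means 'None'; triggerid 13568 is taken from the output below):

      > select triggerid, description, recovery_mode from triggers where triggerid=13568;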

      Steps to reproduce:

      Settings

      Trigger:
      - Expression: {Zabbix server:log[/var/log/test.log].regexp(ERROR)}=1
      - OK event generation: None
      - PROBLEM event generation mode: Multiple

      Actions:
      - Recovery operations: Send recovery message
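
      For reproducibility, an equivalent trigger can also be created via the API (a hedged sketch, not part of the original report; the endpoint URL, description, and session id are placeholders; recovery_mode=2 corresponds to 'OK event generation: None' and type=1 to 'PROBLEM event generation mode: Multiple'):

      # curl -s -X POST http://zabbix.example.com/api_jsonrpc.php \
            -H 'Content-Type: application/json-rpc' \
            -d '{"jsonrpc": "2.0", "method": "trigger.create",
                 "params": {
                     "description": "ERROR in test.log",
                     "expression": "{Zabbix server:log[/var/log/test.log].regexp(ERROR)}=1",
                     "recovery_mode": 2,
                     "type": 1},
                 "auth": "<session id>", "id": 1}'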
      
      # echo ERROR >> /var/log/test.log
      # echo ERROR >> /var/log/test.log
      # echo ERROR >> /var/log/test.log
      # echo ERROR >> /var/log/test.log
      # echo ERROR >> /var/log/test.log
      
      (After 1 minute)
      
      > select * from escalations where triggerid=13568;
      +--------------+----------+-----------+---------+-----------+------------+----------+--------+--------+
      | escalationid | actionid | triggerid | eventid | r_eventid | nextcheck  | esc_step | status | itemid |
      +--------------+----------+-----------+---------+-----------+------------+----------+--------+--------+
      |           20 |        3 |     13568 |    2945 |      NULL | 1512442308 |        2 |      2 |   NULL |
      |           21 |        3 |     13568 |    2946 |      NULL | 1512442314 |        2 |      2 |   NULL |
      |           22 |        3 |     13568 |    2947 |      NULL | 1512442314 |        2 |      2 |   NULL |
      |           23 |        3 |     13568 |    2948 |      NULL | 1512442314 |        2 |      2 |   NULL |
      |           24 |        3 |     13568 |    2949 |      NULL | 1512442314 |        2 |      2 |   NULL |
      +--------------+----------+-----------+---------+-----------+------------+----------+--------+--------+
      5 rows in set (0.00 sec)
      
      # echo test >> /var/log/test.log
      
      > select * from escalations where triggerid=13568;
      +--------------+----------+-----------+---------+-----------+------------+----------+--------+--------+
      | escalationid | actionid | triggerid | eventid | r_eventid | nextcheck  | esc_step | status | itemid |
      +--------------+----------+-----------+---------+-----------+------------+----------+--------+--------+
      |           20 |        3 |     13568 |    2945 |      NULL | 1512442488 |        2 |      2 |   NULL |
      |           21 |        3 |     13568 |    2946 |      NULL | 1512442494 |        2 |      2 |   NULL |
      |           22 |        3 |     13568 |    2947 |      NULL | 1512442494 |        2 |      2 |   NULL |
      |           23 |        3 |     13568 |    2948 |      NULL | 1512442494 |        2 |      2 |   NULL |
      |           24 |        3 |     13568 |    2949 |      NULL | 1512442494 |        2 |      2 |   NULL |
      +--------------+----------+-----------+---------+-----------+------------+----------+--------+--------+
      5 rows in set (0.00 sec)
      

      Rows with status=2 still remain in escalations: their nextcheck keeps being updated, but since no OK event can ever be generated for this trigger, they will never reach RECOVERY and are never removed. This unprocessed data hurts the performance of the escalator.
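
      One possible manual cleanup (a hedged sketch, not the officially proposed workaround; back up the database and verify the row state first) is to delete the stale rows directly, matching the conditions observed above:

      > delete from escalations where triggerid=13568 and r_eventid is NULL and status=2;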

      Attachments:
        1. 1_action.png (30 kB, Kim Jongkwon)
        2. 2_operations.png (68 kB, Kim Jongkwon)
        3. 3_recovery_operations.png (54 kB, Kim Jongkwon)
        4. 4_problems.png (128 kB, Kim Jongkwon)
        5. 5_trigger.png (74 kB, Kim Jongkwon)
        6. okeventnone.png (13 kB, Kim Jongkwon)

            Assignee: Vladislavs Sokurenko
            Reporter: Kim Jongkwon
            Votes: 0
            Watchers: 2
