[ZBX-14459] Receive the same error twice with logrt Created: 2018 Jun 11  Updated: 2018 Jul 26  Resolved: 2018 Jul 11

Status: Closed
Project: ZABBIX BUGS AND ISSUES
Component/s: Agent (G), API (A), Frontend (F), Server (S)
Affects Version/s: 3.4.9
Fix Version/s: None

Type: Incident report Priority: Blocker
Reporter: keven larouche Assignee: Unassigned
Resolution: Incomplete Votes: 0
Labels: agent, api, items, macros, triggers
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
Kernel: Linux 3.10.0-693.21.1.el7.x86_64
Architecture: x86-64


Attachments: PNG File discovery macro.png     PNG File discovery.png     Text File discoveryfilelog.py     PNG File item.png     PNG File trigger.png    

 Description   

Steps to reproduce:

  1. Create the discovery script on the server:

     import os
     import sys
     import json

     logdir = sys.argv[1]

     data = []

     for (dirpath, _, files) in os.walk(logdir):
         for f in files:
             if f.endswith(".log"):
                 path = os.path.join(dirpath, f)
                 data.append({'{#LOGFILEPATH}': path})

     print(json.dumps({"data": data}))

  2. Save it as discoveryfilelog.py and add this entry at the end of the agent configuration: UserParameter=discovery.logfile.path,python /home/scripts/discoveryfilelog.py /var/log/network/
  3. Save the configuration and restart the agent on the server.
  4. For the discovery rule configuration, see the attached screenshots.
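The walk logic in the script above can be sanity-checked locally before wiring it into the agent (a minimal sketch; the temporary directory and file names here are invented for illustration):

```python
# Illustrative self-check of the discovery walk logic: create a
# throwaway directory tree, then emit the same JSON the UserParameter
# would return.
import json
import os
import tempfile

tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "sub"))
for name in ("Switch-01.log", "Switch-02.log", "readme.txt"):
    open(os.path.join(tmp, name), "w").close()
open(os.path.join(tmp, "sub", "Router-01.log"), "w").close()

data = []
for (dirpath, _, files) in os.walk(tmp):
    for f in files:
        if f.endswith(".log"):
            data.append({"{#LOGFILEPATH}": os.path.join(dirpath, f)})

# Only the three .log files are listed; readme.txt is skipped.
print(json.dumps({"data": data}))
```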

We have to monitor many log files and search them for the severity codes (1, 2, 3, 4 or 5).

But with this configuration we receive the same alert twice. If you have any idea how to fix this issue, do not hesitate to help.

Thanks,



 Comments   
Comment by Elina Kuzyutkina (Inactive) [ 2018 Jun 12 ]

Hello, Keven

Can you provide an example which caused the double alert?

Please show us a listing of the files in the directory, with modification time information.

The first parameter in logrt is regexp_describing_filename_pattern. Can it be that several files match the configuration of a single item? For example, the item is logrt[/var/log/network/access.log,error,,,] and there is also a file /var/log/network/access.log0.

 

Regards,

Elina

Comment by keven larouche [ 2018 Jun 12 ]

Hello,

Thank you for your reply. The discovery rule finds all files ending in ".log", and we get all the files correctly.
These are log files from rsyslog. We monitor all switches, routers and firewalls. With this rule, each file has five items to search for the codes "1, 2, 3, 4 or 5". Each "code" defines a severity of the error and allows it to be associated with the right trigger. In fact, the problem occurs when a second error arrives: it sends the first error plus the new one that just happened. It re-reads the old log file history. Unfortunately I cannot show you more details.

Let me know if you have any questions.

Thank you,

Keven

Comment by Elina Kuzyutkina (Inactive) [ 2018 Jun 13 ]

The discovery rule finds all files ending in ".log", that's right =)

For example it returned the following:

 

{"data": [{"{#LOGFILEPATH}": "/var/log/zabbix/zabbix_agentd.log"}, {"{#LOGFILEPATH}": "/var/log/zabbix/zabbix_server.log"}]}

 

And then you will get several items and triggers like:

 

logrt[/var/log/zabbix/zabbix_server.log,-1-,,,skip]
{SomeTemplate:logrt[/var/log/zabbix/zabbix_server.log,-1-,,,skip].strlen()}<>0

 

If I have a file /var/log/zabbix/zabbix_server.log0, it will also match the regular expression in that item. When the file zabbix_server.log updates, Zabbix reads it from the first byte and stores the position of the last byte read. Then the file zabbix_server.log0 updates, and Zabbix reads it from its first byte and stores the last read position. Then zabbix_server.log updates again, and Zabbix again reads it from the first byte.

You can put path = path + '$' in your script to form data like:

{"data": [{"{#LOGFILEPATH}": "/var/log/zabbix/zabbix_agentd.log$"}, {"{#LOGFILEPATH}": "/var/log/zabbix/zabbix_server.log$"}]}

Please make sure that you do not have multiple files that can be updated at nearly the same time per one logrt item.
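The effect described above can be sketched with Python's re module (an illustration only, assuming logrt's filename matching behaves like an unanchored regex search, as the comment describes):

```python
import re

# Without a trailing '$' the pattern also matches rotated copies,
# so one logrt item ends up watching two files.
pattern = r"/var/log/zabbix/zabbix_server.log"
print(bool(re.search(pattern, "/var/log/zabbix/zabbix_server.log")))   # True
print(bool(re.search(pattern, "/var/log/zabbix/zabbix_server.log0")))  # True

# Anchoring with '$' restricts the item to exactly one file.
anchored = pattern + "$"
print(bool(re.search(anchored, "/var/log/zabbix/zabbix_server.log")))   # True
print(bool(re.search(anchored, "/var/log/zabbix/zabbix_server.log0")))  # False
```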

Comment by keven larouche [ 2018 Jun 13 ]

Hi Elina,

Thank you for your fast reply. I added "$" at the end of my path as you suggested. Normally, each item monitors only one file. But each file has six items:

logrt[/var/log/network/Switch-01.log$,"-0-",,,skip]
logrt[/var/log/network/Switch-01.log$,"-1-",,,skip]
logrt[/var/log/network/Switch-01.log$,"-2-",,,skip]
logrt[/var/log/network/Switch-01.log$,"-3-",,,skip]
logrt[/var/log/network/Switch-01.log$,"-4-",,,skip]
logrt[/var/log/network/Switch-01.log$,"-5-",,,skip]

And each item is paired with a trigger:

{Server:logrt[/var/log/network/Switch-01.log$,"-0-",,,skip].strlen()}<>0
{Server:logrt[/var/log/network/Switch-01.log$,"-1-",,,skip].strlen()}<>0
{Server:logrt[/var/log/network/Switch-01.log$,"-2-",,,skip].strlen()}<>0
{Server:logrt[/var/log/network/Switch-01.log$,"-3-",,,skip].strlen()}<>0
{Server:logrt[/var/log/network/Switch-01.log$,"-4-",,,skip].strlen()}<>0
{Server:logrt[/var/log/network/Switch-01.log$,"-5-",,,skip].strlen()}<>0
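The keys and triggers above follow a simple per-severity pattern; a sketch generating them for one discovered file (illustrative only, the host name "Server" is taken from the expressions above):

```python
# Build the six logrt item keys and matching trigger expressions
# for one discovered log file, one per severity code -0- .. -5-.
logfile = "/var/log/network/Switch-01.log$"
for sev in range(6):
    key = 'logrt[{},"-{}-",,,skip]'.format(logfile, sev)
    trigger = "{{Server:{}.strlen()}}<>0".format(key)
    print(key)
    print(trigger)
```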

Let me know if you have any questions.

Thanks,

Keven

Comment by Elina Kuzyutkina (Inactive) [ 2018 Jun 13 ]

It's okay if one log file is associated with several items; the main thing is that you have no item which collects data from multiple files.

I cannot reproduce the situation after adding '$'. Can you share an example of the double alert if you still have the issue?

Regards,

Elina

 

Comment by Vladislavs Boborikins (Inactive) [ 2018 Jul 11 ]

No activity. Closing. Reopen if required.

Generated at Wed Apr 16 03:42:14 EEST 2025 using Jira 9.12.4#9120004-sha1:625303b708afdb767e17cb2838290c41888e9ff0.