[ZBX-4476] Nodata triggers immediately fire on newly created hosts Created: 2011 Dec 22 Updated: 2017 May 30 Resolved: 2014 Feb 11 |
|
Status: | Closed |
Project: | ZABBIX BUGS AND ISSUES |
Component/s: | Server (S) |
Affects Version/s: | 1.8.9, 1.9.8 (beta), 2.0.6 |
Fix Version/s: | 2.1.5 |
Type: | Incident report | Priority: | Major |
Reporter: | Volker Fröhlich | Assignee: | Unassigned |
Resolution: | Fixed | Votes: | 11 |
Labels: | nodata, triggers | ||
Remaining Estimate: | Not Specified | ||
Time Spent: | Not Specified | ||
Original Estimate: | Not Specified |
Issue Links: |
|
Description |
Steps to reproduce:
If you're not lucky enough to get data for that item in the first couple of seconds, the nodata trigger will fire. Expected behaviour:
I tried to work around this kind of problem by extending the trigger expression with something like "... & {bla:min(#1)}" (can't remember the exact expression right now). This bug is somewhat related to |
Comments |
Comment by alix [ 2011 Dec 22 ] |
The bug is present on the versions I run: 1.8.6-1.8.9. It's quite annoying to get false notifications on every newly added host. |
Comment by richlv [ 2011 Dec 22 ] |
Note that disabling hosts is bad practice if you do it to simulate maintenance, but on new hosts this can indeed be a more serious issue. |
Comment by alix [ 2012 Jan 07 ] |
Now that I've looked into possible solutions, it seems the only reliable way to avoid false nodata() triggering is to record an item's last update (or creation) time and compare the server time against it in evalfunc.c's evaluate_NODATA() function. Can the mtime field of the items table be used for this, or is it reserved for some other purpose? It's not very clear; I found some references to mtime in the distributed-operation code, but I'm not quite certain. If mtime can be used, then we just update it with the current server time on item creation/modification in the frontend, and add a trivial if block to evaluate_NODATA() on the server side. |
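A minimal C sketch of the guard this comment proposes. All names here (nodata_grace_period, item_mtime, nodata_period) are illustrative assumptions, not the actual evalfunc.c code:

```c
#include <time.h>

/* Hypothetical sketch of the proposed guard for evaluate_NODATA():
 * suppress the trigger while the item is younger than the nodata()
 * period, since no full window has elapsed in which data could have
 * arrived. Returns 1 while the trigger must be suppressed. */
static int nodata_grace_period(time_t now, time_t item_mtime, int nodata_period)
{
    return now - item_mtime < nodata_period;
}
```

The frontend would then only need to keep item_mtime current on item creation or modification, as the comment suggests.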
Comment by Oleksii Zagorskyi [ 2012 Jan 20 ] |
connected issue |
Comment by mark fermor [ 2012 Sep 26 ] |
We had exactly this problem as well. We add hosts and rely on Zabbix agent ping to check that the agent is OK, but on addition of a new host this trigger always fires, even if we disable the host first or try putting it into an automated maintenance mode via the API. The workaround for us is to change the trigger as follows (version 1.8.x): This works fine; the only obvious problem with this set-up is that the trigger wouldn't fire if there was never any data at all (which you'd certainly want if the host came up but never captured any data because something was broken). For us this wasn't much of a problem, as we just created another item, a simple TCP check of the Zabbix agent on port 10050, and a trigger on that (not using nodata, since the simple check always returns data one way or the other). Not a concrete solution, but we feel it's pretty close and works for us. So you may need two items to get around the problem, and I wouldn't want this as a permanent solution (needing to create two items and separate triggers for anything with a nodata check); I'd much rather it was solved as suggested. Another workaround for us was going to be to use the API to insert data directly into the history for this item; unfortunately the API doesn't allow inserting data as if the check had run when it hasn't. That would be quite useful, so it might be worth thinking about while the API is being rewritten. Best Regards, |
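The exact trigger expression in the comment above was lost from this export. A sketch of what such a 1.8-style workaround commonly looks like, with a hypothetical host and the standard agent.ping key (the count() guard keeps the trigger from firing until at least one value has ever been seen in the window):

```
{SomeHost:agent.ping.nodata(300)}=1 & {SomeHost:agent.ping.count(86400)}>0
```

As the comment notes, the downside is that a host that never delivers any data at all will never satisfy the count() condition, which is why a second item (e.g. a simple TCP check of port 10050) with its own trigger is used to cover that case.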
Comment by Oleksii Zagorskyi [ 2012 Oct 09 ] |
Related to Also, some reasons clarified in |
Comment by Oleksii Zagorskyi [ 2012 Oct 09 ] |
Probably in version 2.0 (where triggers were added to the internal cache) this problem is a bit less noticeable, because now the server first has to refresh its cache and only then can the timer process (responsible for time-based functions) notice this trigger. In 1.8 the timer process made direct queries to the DB (every 30 seconds), while the item cache was updated more rarely: every 60 seconds by default, or even more rarely in mid-size/large installations. |
Comment by Tom M. [ 2013 Jul 05 ] |
Having it on a nodata(180) trigger for an SNMP item that gets polled every 20 seconds. When I add a new host using these templated items/triggers, the trigger fires. Running Zabbix 2.0.6. |
Comment by Eino Mäkitalo [ 2013 Jul 05 ] |
|
Comment by Dmitry Samsonov [ 2013 Sep 13 ] |
Any news on this issue? My version of the workaround: |
Comment by Andris Zeila [ 2013 Sep 18 ] |
Fixed in development branch svn://svn.zabbix.com/branches/dev/ZBX-4476. Now, if the item value has not been updated since server startup, the nodata() function will check the time the item was added to the configuration cache. |
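A minimal C sketch of the fixed logic as described in this comment. Function and variable names (nodata_fires, lastvalue, time_added_to_cache) are illustrative assumptions, not the actual evalfunc.c implementation:

```c
#include <time.h>

/* Sketch of the fixed nodata() evaluation: if the item has received no
 * value since server startup, the time it was added to the configuration
 * cache becomes the reference point, so a freshly added item gets a full
 * nodata window before the trigger can fire. Returns 1 if it fires. */
static int nodata_fires(time_t now, time_t lastvalue, time_t server_startup,
                        time_t time_added_to_cache, int period)
{
    time_t reference = lastvalue;

    if (lastvalue < server_startup)     /* no value seen since startup */
        reference = time_added_to_cache;

    return now - reference >= period;
}
```

This also explains the 2.2 behaviour noted later in the thread: after a restart or database outage, the nodata() window is counted from when the item (re)entered the configuration cache rather than firing immediately.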
Comment by Dmitry Samsonov [ 2013 Sep 18 ] |
Didn't understand the solution, but thank you anyway! Great job! |
Comment by Alexander Vladishev [ 2013 Sep 19 ] |
(1) The calculation algorithm of the nodata() function must remain unchanged. wiper RESOLVED in r38619 sasha CLOSED |
Comment by Alexander Vladishev [ 2013 Sep 19 ] |
Successfully tested! Please review my changes in r38635. |
Comment by Andris Zeila [ 2013 Sep 19 ] |
Released in: |
Comment by richlv [ 2014 Jan 27 ] |
(2) As volter noted, this should be listed in the What's new page. wiper Please review - https://www.zabbix.com/documentation/2.2/manual/introduction/whatsnew220#fixed_nodata_function_calculation <richlv> thanks, the description was understandable, but I feared it might confuse users - I tried to make it more user-oriented, please review wiper looks good, CLOSED |
Comment by Oleksii Zagorskyi [ 2014 Apr 07 ] |
One additional positive effect of this: after a database recovery completes, the Zabbix server successfully reconnects to the database and loads its configuration. In 2.2 this works better: CONFIG_SERVER_STARTUP_TIME is effectively counted from the moment the server took its configuration from the database (DCget_item_time_added), so nodata() triggers don't fire immediately after the outage. Tested! |