[ZBX-20172] Zabbix server keeps sending duplicate, erroneous configurations to proxies Created: 2021 Nov 03 Updated: 2021 Nov 07 Resolved: 2021 Nov 07 |
|
Status: | Closed |
Project: | ZABBIX BUGS AND ISSUES |
Component/s: | Server (S) |
Affects Version/s: | 5.0.16 |
Fix Version/s: | None |
Type: | Incident report | Priority: | Trivial |
Reporter: | Guillaume Fenollar | Assignee: | Alexey Pustovalov |
Resolution: | Won't fix | Votes: | 0 |
Labels: | database | ||
Remaining Estimate: | Not Specified | ||
Time Spent: | Not Specified | ||
Original Estimate: | Not Specified | ||
Environment: |
CentOS 8.1.1911 |
Attachments: |
Description |
Comments |
Comment by Alexey Pustovalov [ 2021 Nov 04 ] |
Could you try to recreate schema on proxies? When you created DB for proxy, did you import schema only or data as well? |
Comment by Guillaume Fenollar [ 2021 Nov 04 ] |
Hello Alexey. I forgot to mention that I did indeed try several times to remove the proxy DB (I'll update the issue with this information). I also created the proxy DB using the provided schema.sql, but the result is unchanged in all cases. I never imported any data while creating the proxy DB: either I let the proxy service create it (I only removed the whole db file), or I created the schema myself and nothing more. |
Comment by Alexey Pustovalov [ 2021 Nov 04 ] |
I recommend you increase the debug level for the "configuration syncer" process on the active Zabbix proxy:
zabbix_proxy -R log_level_increase="configuration syncer"
then wait for at least one iteration and attach the log file to the issue. |
Comment by Guillaume Fenollar [ 2021 Nov 04 ] |
Thanks for the help. I attached the log, which covers one sync iteration on a pretty busy active proxy server. |
Comment by Alexey Pustovalov [ 2021 Nov 04 ] |
Could you attach the output of the following SQL query from the Zabbix server side:
select macro, count(*) from globalmacro group by macro;
or just check it yourself; the count must be 1 for all records! It looks like the Zabbix server has duplicated data. |
Comment by Guillaume Fenollar [ 2021 Nov 04 ] |
Hi, there's no way somebody in my company could have done that, so it looks like it was inserted by the Zabbix server or frontend, I don't know.

2475:20211105:084959.853 [Z3005] query failed: [0] UNIQUE constraint failed: hstgrp.groupid [insert into hstgrp (groupid) values (5); insert into hstgrp (groupid) values (5);

The problem is that I couldn't find any duplicate record in the Zabbix database:

zabbix=# select groupid, count(*) from hstgrp group by groupid having count(*) <> 1;
 groupid | count
---------+-------
(0 rows)

Note that I dropped records from the database while zabbix-server and the frontend were down, for safety. Also, I tried deleting the proxy database just in case, but the issue is the same. Nothing interesting in the debug "configuration syncer" log on the active proxy; it only reports the duplicate row. |
Comment by Alexey Pustovalov [ 2021 Nov 05 ] |
Could you check on the Zabbix server:
select * from hstgrp where groupid = 5;
The configuration syncer returns duplicated data: "hstgrp":{"fields":["groupid"],"data":[[5],[5]]} |
Comment by Guillaume Fenollar [ 2021 Nov 05 ] |
This hstgrp record is the "Discovered hosts" group. It's the only one with the internal column set to 1; I don't know what that means.

zabbix=# select * from hstgrp where groupid = 5;
 groupid |       name       | internal | flags
---------+------------------+----------+-------
       5 | Discovered hosts |        1 |     0
(1 row)

Thanks again for your help |
Comment by Alexey Pustovalov [ 2021 Nov 05 ] |
Maybe you have duplicate records in the "config" table on the Zabbix server side? |
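A check along these lines, sketched here under the assumption of a PostgreSQL backend and the standard Zabbix schema (the config table is expected to hold exactly one row), would expose such duplicates:
-- any group with a count above 1 indicates a duplicated row
select configid, count(*) from config group by configid having count(*) > 1;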
Comment by Guillaume Fenollar [ 2021 Nov 05 ] |
We indeed have 2 rows returned, with exactly the same content (same configid). How can that happen? Is there a plan to put a UNIQUE constraint on the id columns? I can drop one of these rows on Monday; I'll comment on this Jira issue then. Thanks |
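Because the two rows are fully identical, a plain DELETE with a WHERE clause on the columns would remove both. One possible cleanup, sketched for PostgreSQL using its internal ctid row locator (a common approach, not taken from this issue), keeps a single physical row per configid:
-- keep the physically "first" row for each configid and delete the rest;
-- ctid lets PostgreSQL tell apart rows whose column values are identical
delete from config a using config b
where a.configid = b.configid and a.ctid > b.ctid;
Running this inside a transaction and re-checking the row count before committing would be prudent.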
Comment by Alexey Pustovalov [ 2021 Nov 05 ] |
For example, in case some operation fails |
Comment by Guillaume Fenollar [ 2021 Nov 07 ] |
Problem solved after the last duplicates were removed from the config and config_autoreg_tls tables. I'll let you decide if this issue can be closed, or if you can prevent this from happening again by fixing the code. Thanks a bunch Alexey for your help! |
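For reference, a verification of both affected tables (again assuming PostgreSQL, with autoreg_tlsid as the key column of config_autoreg_tls) would simply repeat the duplicate check:
-- both queries should return zero rows once the duplicates are gone
select configid, count(*) from config group by configid having count(*) > 1;
select autoreg_tlsid, count(*) from config_autoreg_tls group by autoreg_tlsid having count(*) > 1;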
Comment by Alexey Pustovalov [ 2021 Nov 07 ] |
Thank you! |