- Problem report
- Resolution: Fixed
- Trivial
- 6.4.7, 7.0 (plan)
- None
- Support backlog, Sprint 105 (Oct 2023), Sprint 106 (Nov 2023), Sprint 107 (Dec 2023), S2401
- 2
Steps to reproduce:
Here is the Ceph version:
podman exec -it ceph-mon-1 ceph versions
{
    "mon": {
        "ceph version 14.2.11-208.el8cp (6738ba96f296a41c24357c12e8d594fbde457abc) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.11-208.el8cp (6738ba96f296a41c24357c12e8d594fbde457abc) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.11-208.el8cp (6738ba96f296a41c24357c12e8d594fbde457abc) nautilus (stable)": 24
    },
    "mds": {},
    "overall": {
        "ceph version 14.2.11-208.el8cp (6738ba96f296a41c24357c12e8d594fbde457abc) nautilus (stable)": 30
    }
}
Result:
The agent parses the wrong overall status. The item returns zeroes:
ceph.status["https://localhost:8003","zabbix","e62e1d18-abf7-485c-baf9-616cc23e4897"] = {"overall_status":0,"num_mon":0,"num_osd":0,
For example, the actual status differs: when I look for num_osd, the cluster reports "num_osds": 24, and the "mons" field lists all 3 monitors.
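To illustrate the mismatch, here is a minimal sketch (not the actual Zabbix agent code) of extracting the counts from a Nautilus-era `ceph status --format json` payload, assuming the shape described above: OSDs reported under "num_osds" (plural, nested in "osdmap") and monitors given as a list under "monmap" -> "mons" rather than as a precomputed count. Reading a key named "num_osd" (singular) would silently fall back to 0, which matches the zeroed item value shown above. The `sample` dict is a hypothetical payload fragment, not captured output.

```python
import json

# Hypothetical fragment of Nautilus `ceph status --format json` output
# (assumption based on the report; the real payload has many more keys).
sample = json.loads("""
{
    "health": {"status": "HEALTH_OK"},
    "monmap": {"mons": [{"name": "a"}, {"name": "b"}, {"name": "c"}]},
    "osdmap": {"osdmap": {"num_osds": 24}}
}
""")

def counts(status):
    # Looking up "num_osd" (singular) here would yield the default 0 --
    # the symptom in this report. The payload actually uses "num_osds".
    num_osd = status.get("osdmap", {}).get("osdmap", {}).get("num_osds", 0)
    # Monitors come as a list; the count must be derived with len().
    num_mon = len(status.get("monmap", {}).get("mons", []))
    overall = status.get("health", {}).get("status", "")
    return {"overall_status": overall, "num_mon": num_mon, "num_osd": num_osd}

print(counts(sample))
# -> {'overall_status': 'HEALTH_OK', 'num_mon': 3, 'num_osd': 24}
```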