Type: Problem report
Resolution: Unresolved
Priority: Trivial
Affects Version/s: 7.4.8
Component/s: Server (S)
(this is actually on version 7.4.6, which it won't let me specify)
A discovery action (Alerts → Actions → Discovery actions) with an "Add host" operation, whose condition references an SNMPv2 check from a discovery rule (Data collection → Discovery), creates the host with (or modifies it to) an SNMPv1 interface.
I think this is what's going on, based on operations.c/add_discovered_host:
Hosts that respond to multiple discovery checks get created (or modified) with the interface type of the highest-numbered dservice record, instead of the interface type of the specific dcheck whose action condition matched.
We have some devices that respond only on SNMPv1, and others that respond on both SNMPv1 and SNMPv2. We want every host created with the appropriate interface, so we created some otherwise-identical check rules, assuming the check associated with the action would dictate the interface type that gets created, but this seems not to be the case.
Create a discovery rule (Data collection → Discovery) with the following checks:
- SNMPv1 agent ".1.3.6.1.2.1.1.2.0"
- SNMPv2 agent "1.3.6.1.2.1.1.2.0"
- SNMPv2 agent "1.3.6.1.2.1.1.5.0"
- SNMPv2 agent "1.3.6.1.4.1.9.9.25.1.1.1.2.4"
(plus some others)
Create a discovery action (Alerts → Actions → Discovery actions):
- Name: Access Mgmt Cisco Nexus
- Conditions: Discovery check equals Access Mgmt: SNMPv2 agent "1.3.6.1.4.1.9.9.25.1.1.1.2.4", Received value equals CW_FAMILY$Nexus9000$
However, this host also returns meaningful values for the other dchecks shown above; this is the only discovery action with a matching condition, though.
Host 172.18.1.4 matches this particular action, and no others, yet it gets created with (or modified to) an SNMPv1 agent interface.
Even after manually changing its interface to SNMPv2, the next time discovery runs, its interface (and those of all other hosts using this set of dchecks) is set back to SNMPv1 (confirmed by the audit log, which shows all of these hosts being modified).
Records in the discovery ("d*") tables for the host with IP 172.18.1.4:

SELECT [...] WHERE ds.ip = '172.18.1.4' ORDER BY ds.dserviceid;
rule name | check key_ | check type | rule status | service status | host status | service IP | service DNS | service last up | host last up | host last down | dserviceid | dhostid | druleid | dcheckid | interfaceid | interface hostid | service value (first 40)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------
Access Mgmt | 1.3.6.1.4.1.9.9.25.1.1.1.2.4 | SNMPv2c | enabled | up | up | 172.18.1.4 | agg-1.redacted | 2026-04-25 07:29:02-06 | 2026-04-25 07:29:02-06 | 1969-12-31 17:00:00-07 | 243576 | 97039 | 5 | 29 | 24745 | 30671 | CW_FAMILY$Nexus9000$
Access Mgmt | 1.3.6.1.2.1.1.5.0 | SNMPv2c | enabled | up | up | 172.18.1.4 | agg-1.redacted | 2026-04-25 07:29:02-06 | 2026-04-25 07:29:02-06 | 1969-12-31 17:00:00-07 | 243577 | 97039 | 5 | 14 | 24745 | 30671 | agg-1.redacted
Access Mgmt | 1.3.6.1.2.1.1.2.0 | SNMPv2c | enabled | up | up | 172.18.1.4 | agg-1.redacted | 2026-04-25 07:29:02-06 | 2026-04-25 07:29:02-06 | 1969-12-31 17:00:00-07 | 243578 | 97039 | 5 | 5 | 24745 | 30671 | .1.3.6.1.4.1.9.12.3.1.3.1812
Access Mgmt | .1.3.6.1.2.1.1.2.0 | SNMPv1 | enabled | up | up | 172.18.1.4 | agg-1.redacted | 2026-04-25 07:29:02-06 | 2026-04-25 07:29:02-06 | 1969-12-31 17:00:00-07 | 243579 | 97039 | 5 | 33 | 24745 | 30671 | .1.3.6.1.4.1.9.12.3.1.3.1812
Access Mgmt | 1.3.6.1.2.1.1.1.0 | SNMPv1 | enabled | up | up | 172.18.1.4 | agg-1.redacted | 2026-04-25 07:29:02-06 | 2026-04-25 07:29:02-06 | 1969-12-31 17:00:00-07 | 243580 | 97039 | 5 | 17 | 24745 | 30671 | Cisco NX-OS(tm) nxos.7.0.3.I4.3.bin, Sof
(5 rows)
When limited to the dservice whose value actually matches the condition's received value (interface creation does not appear to respect this requirement):
SELECT [...] WHERE ds.ip = '172.18.1.4' AND conddvalue.value = ds.value ORDER BY ds.dserviceid;
rule name | check key_ | check type | rule status | service status | host status | service IP | service last up | dserviceid | dhostid | druleid | dcheckid | interfaceid | interface hostid | service value (first 40) | action name (first 60) | expected received value | condid | actionid | action status
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------
Access Mgmt | 1.3.6.1.4.1.9.9.25.1.1.1.2.4 | SNMPv2c | enabled | up | up | 172.18.1.4 | 2026-04-25 07:29:02-06 | 243576 | 97039 | 5 | 29 | 24745 | 30671 | CW_FAMILY$Nexus9000$ | Access Mgmt Cisco Nexus | CW_FAMILY$Nexus9000$ | 198 | 55 | enabled
(1 row)
The leading "." in the one key_ was an earlier attempt to work around the issue, which didn't work (at the time I assumed it was only looking at key_, not type); this problem started when we created the duplicate (identical except for being v1) check.
My current workaround attempt (results not in yet) is to ensure my v2 checks have higher IDs than my v1 checks (the v1-only hosts shouldn't respond to any v2 check, so they won't have any v2 entries).
It appears add_discovered_host just iterates through all records with a particular dhostid, creating/modifying the host and host interface as it goes, so whichever dhost/dservice record happens to come last is the one the interface is modeled on, rather than the specific dservice record that the condition leading here actually matched on. (If this is incorrect, then apologies in advance, but the fact remains that hosts are being created or modified with the wrong SNMP interface type.)