[ZBXNEXT-1527] cascaded/nested lld Created: 2012 Nov 28  Updated: 2026 Mar 04  Resolved: 2025 Nov 21

Status: Closed
Project: ZABBIX FEATURE REQUESTS
Component/s: API (A), Frontend (F), Server (S)
Affects Version/s: None
Fix Version/s: 7.4.0rc1

Type: New Feature Request Priority: Trivial
Reporter: richlv Assignee: Andris Zeila
Resolution: Fixed Votes: 145
Labels: Zabbix7.4, lld, nested
Remaining Estimate: 0h
Time Spent: 521.5h
Original Estimate: 0h

Attachments: File 58.yaml     File host_for_testing_nested_lld.yaml     File host_with_3_levels_of_nested_LLDs (2).yaml     File lld_res_removal_host.yaml     Text File proxy_log_for_not_working_nested_lld_discovery.log     PNG File screenshot-1.png     PNG File screenshot-2.png     Text File server_log_for_nested_lld_issue.log     File template_for_deletion_via_autoreg.yaml     File template_with_3_levels_of_nested_LLDs.yaml     File template_with_nested_lld_host_prototypes.yaml     Text File zabbix_server_failed_to_unlink-1.log     Text File zabbix_server_failed_to_unlink.log     File zbx_export_templates.json    
Issue Links:
Causes
causes ZBXNEXT-7527 Nested host prototypes Closed
causes ZBXNEXT-10103 Info icon for prototypes list tables ... Open
causes ZBX-26645 All prototypes are removed from no lo... Closed
causes ZBX-26651 Importing a template modifies every d... Closed
causes ZBX-26523 Mass update button for trigger protot... Closed
causes ZBX-26517 Options create trigger prototype / de... Closed
causes ZBX-26530 Fatal error opening trigger and graph... Closed
causes ZBX-26552 It is possible to discover a host pro... Closed
causes ZBX-26512 "Execute now" button is present for d... Closed
causes ZBX-27266 Incorrect link prefixes near the disc... QA Failed
Duplicate
is duplicated by ZBX-12436 Graph prototype process create only t... Closed
is duplicated by ZBX-9762 Nested LLD Closed
is duplicated by ZBX-11111 Smarter nested LLD? Closed
Related
related to ZBX-26252 A PHP runtime error occurs when attem... Closed
Sub-task
Team: Team D
Sprint: Prev.Sprint, S25-W10/11, S25-W12/13, S25-W14/15, S25-W16/17, S25-W18/19, S25-W20/21, S25-W22/23, S25-W26/27, S25-W30/31, S25-W34/35, S25-W38/39, S25-W42/43
Story Points: 15

 Description   

In some cases a one-level LLD is not enough. For example, one might want to discover all databases and then discover all tables in them, to monitor some statistic for each of them.

This might be doable by creating some sort of "LLD rule prototype", which could then cascade almost indefinitely (although I can't think of a practical application with more than two levels right now).



 Comments   
Comment by Volker Fröhlich [ 2014 Sep 26 ]

http://zabbix.org/wiki/Docs/howto/Nested_LLD

Comment by Stefan [ 2014 Sep 26 ]

I have another example: to monitor Dell EqualLogic disks I need two queries. For example:
snmpwalk -v2c -c public 192.168.1.1 .1.3.6.1.4.1.12740.2.1.1.1.9.1
iso.3.6.1.4.1.12740.2.1.1.1.9.1.563984362 = STRING: "foo"
This number is then needed to get the disks:
snmpwalk -v2c -c public 192.168.1.1 .1.3.6.1.4.1.12740.3.1.1.1.8.1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.1 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.2 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.3 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.4 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.5 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.6 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.7 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.8 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.9 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.10 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.11 = INTEGER: 1
iso.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.12 = INTEGER: 2

Currently I can only use {#SNMPINDEX}, which gives me 563984362, but I need the last number too, so I must use .1.3.6.1.4.1.12740.3.1.1.1.8.1.{#SNMPINDEX}.1. A better option would be .1.3.6.1.4.1.12740.3.1.1.1.8.1.{#SNMPINDEX1}.{#SNMPINDEX2} to get the second one. Other ideas: {#SNMPINDEX(2)} to get the last 2 numbers (563984362.1), or {#SNMPINDEX(#7)} to get a specific number (for #7 I would get 12740).
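A tiny sketch (plain Python, not anything Zabbix ships) of how the proposed index macros could be derived from a walked OID; the function name is invented for illustration:

```python
def split_snmp_index(oid, base_oid):
    """Return the index components of oid that follow base_oid."""
    suffix = oid[len(base_oid):].lstrip(".")
    return suffix.split(".")

parts = split_snmp_index(
    ".1.3.6.1.4.1.12740.3.1.1.1.8.1.563984362.7",
    ".1.3.6.1.4.1.12740.3.1.1.1.8.1",
)
# parts[0] could populate {#SNMPINDEX1}, parts[1] could populate {#SNMPINDEX2}
print(parts)  # ['563984362', '7']
```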

Comment by Naruto [ 2016 Aug 18 ]

Should this be closed? It's supported now.

Comment by Raimo [ 2016 Aug 18 ]

It is supported now? How? Can't find it in the documentation.

Comment by Raimo [ 2016 Aug 19 ]

Stefan Krüger's approach above (.1.3.6.1.4.1.12740.3.1.1.1.8.1.{#SNMPINDEX1}.{#SNMPINDEX2}) is the way to go in my opinion. It is the most intuitive solution from a user's perspective.

Comment by Naruto [ 2016 Aug 19 ]

We are touching two different topics here, one is the built-in "discovery" keys, and another is the support for custom LLD rules.

As Volker pointed out, custom nested LLDs are already supported to some degree: http://zabbix.org/wiki/Docs/howto/Nested_LLD

Comment by richlv [ 2016 Aug 19 ]

Well, technically the method you are using is nice and it works, but it is still a hack and not officially supported, so this would still be more like "adding support for nested LLD".

Comment by Naruto [ 2016 Aug 19 ]

The complete structure of the discovered nested values is defined by the user in the template.
Asking the user to define that structure again in the JSON is redundant and an open door to many problems; it's complicated and overloaded.

Think in folders and labels. They are two different approaches. Gmail uses labels, no hierarchy. Filesystems use directories, that's a hierarchy.

When you have already defined the hierarchy on the template side (for example, using the "#DATABASE" macro for Zabbix applications and "#TBL" for item names), there is no need to define the same hierarchy again on the JSON-generating side. Therefore the "labels" approach is the right one.

Each line of the JSON code must represent a combination of labels.

But there is a problem with the current implementation of this feature.
For example, we'll suppose that I am Zabbix Server and I receive this JSON:

{
 "data":[
   { "{#INSTANCE}" : "MSSQL$TEST_INST", "{#DATABASE}" : "DBA" },
   { "{#INSTANCE}" : "MSSQL$TEST_INST", "{#DATABASE}" : "DBA2" },
   { "{#INSTANCE}" : "SQLServer", "{#DATABASE}" : "SQL_SERVER_MONITORING" },
   { "{#INSTANCE}" : "SQLServer", "{#DATABASE}" : "ORACLE_MONITORING" },
   { "{#INSTANCE}" : "SQLServer", "{#DATABASE}" : "DBA" }
 ]
}
  • OK, now I know that I have to create items for a prototype that refers to the "#DATABASE" macro/label only. I'll create 4 items, one for each "#DATABASE" value: "DBA", "DBA2", "SQL_SERVER_MONITORING" and "ORACLE_MONITORING", and I'll raise an error, because the fifth value ("DBA") is repeated. OK, so it is not possible, and the request is not a normal one anyway; why would I want a list of all databases when I know that the same name could appear in multiple instances (by coincidence)? It's weird. So, OK, next prototype.
  • It turns out that I've been asked to create items for another prototype, referring to both "#INSTANCE" and "#DATABASE". I will create 5 items:
    MSSQL$TEST_INST - DBA
    MSSQL$TEST_INST - DBA2
    SQLServer - SQL_SERVER_MONITORING
    SQLServer - ORACLE_MONITORING
    SQLServer - DBA
    And no error. OK. This request is normal.
  • Great, now I'm being asked to create items for another prototype, this time referring to "#INSTANCE" only. It's normal! I want to refer to instances, a completely normal request! However, I, Zabbix, throw errors for lines 2, 4 and 5, because I'm blindly trying to create an item for each JSON line.

How do I see the solution to this problem? Simple: we don't need errors here; the user knows what he's doing. Just don't try to create an item that already exists in the first place. I admit that the first request was weird, but it could be useful in some scenarios, e.g. having a list of unique values across many "containers".
The third request, though, is 100% normal, wanted and useful.

The question is, what is the purpose of attempting to create something that was already created? There was already an item created for "SQLServer", then why is Zabbix trying to perform the same action again? It makes no sense.
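The deduplication behaviour argued for above can be sketched in a few lines of plain Python (illustrative only, not Zabbix server code): a prototype that references only a subset of macros would get one item per unique combination instead of an error.

```python
def unique_combinations(rows, macros):
    """One entry per distinct combination of the referenced macros."""
    seen = set()
    result = []
    for row in rows:
        key = tuple(row[m] for m in macros)
        if key not in seen:
            seen.add(key)
            result.append(key)
    return result

rows = [
    {"{#INSTANCE}": "MSSQL$TEST_INST", "{#DATABASE}": "DBA"},
    {"{#INSTANCE}": "MSSQL$TEST_INST", "{#DATABASE}": "DBA2"},
    {"{#INSTANCE}": "SQLServer", "{#DATABASE}": "SQL_SERVER_MONITORING"},
    {"{#INSTANCE}": "SQLServer", "{#DATABASE}": "ORACLE_MONITORING"},
    {"{#INSTANCE}": "SQLServer", "{#DATABASE}": "DBA"},
]

print(len(unique_combinations(rows, ["{#INSTANCE}"])))                 # 2
print(len(unique_combinations(rows, ["{#INSTANCE}", "{#DATABASE}"])))  # 5
```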

Comment by richlv [ 2016 Aug 23 ]

I wouldn't say this is supported.
Host prototypes sort of provide one extra level, but only if you want separate hosts.
The hack with the somewhat duplicated entries mostly works, and it might even end up being the supported method, but until it becomes one, this issue is still tracking that.

Comment by Naruto [ 2016 Aug 23 ]

Hi richlv, I know, that's why I'm opening this discussion: this feature is in an inconsistent state right now. It doesn't work well one way or the other. We need to define, right now, what the way to go is, and how.

To me it's pretty clear and easy. Just read my comment above; as I said there, labels are the way to go, and no hierarchy. Leave the hierarchy in the user's mind; he knows what he's doing.

Otherwise you'll end up having to give support to things like this:

{
	"data": [
		{ "{#INS}": "MSSQL$TESTE_INST", "{#INL}": "TESTE_INST", "data": [
			{ "{#DBS}": "DBA" }, 
			{ "{#DBS}": "DBA2" }
	    ]},
	  	{ "{#INS}": "SQLServer", "{#INL}": "SQL Server", "data": [
			{ "{#DBS}": "SQL_SERVER_MONITORING" }, 
			{ "{#DBS}": "ORACLE_MONITORING" }, 
			{ "{#DBS}": "DBA" }
	   ]}
	]
}

Which is a pain in the butt for both the user and the developer.

Comment by richlv [ 2018 Apr 18 ]

Here's a use case where the current hacks fall short.

A script discovers JMX ports on a box and creates JMX items for each found JVM. Now, on each JVM one might want to have another level of LLD, but that is not possible - the JMX discovery would have to stem from the discovered items.

The problem is that not all information can be gathered in one script and then sent to an LLD rule; different discovery methods/types have to be used.

Comment by Stéphane Olivier [ 2018 Dec 18 ]

Hi.

I'm trying to understand the following sentence from https://www.zabbix.org/wiki/Docs/howto/Nested_LLD:

"In a single go: You can implement the secondary rule with a trapper item of your choice, if possible connectivity-wise. Just invoke the trapping from your script"

For me, trapper items have to be created so you can use zabbix_sender to send values.

I don't see how to implement the second discovery rule with a trapper.

Thx

Comment by richlv [ 2018 Dec 18 ]

My interpretation:
The first method uses two LLD rules that separately gather information about the two levels (using the example from the wiki page, one to collect the list of databases, and one to collect the list of db:tables). The second method collects information on databases and tables in one go, and sends data to both LLD rules.

Comment by Stéphane Olivier [ 2018 Dec 18 ]

There is one problem with a big JSON file that includes both DBs and tables, i.e.:

{
  "data": [
    { "#DBNAME": "db01", "#TBLNAME": "tbl01" },
    { "#DBNAME": "db01", "#TBLNAME": "tbl02" },
    { "#DBNAME": "db02", "#TBLNAME": "tbl01" },
    { "#DBNAME": "db02", "#TBLNAME": "tbl02" }
  ]
}

if you have an item prototype with a key like db.size[#DBNAME] in your db LLD, you'll get duplicate keys.

That's why I was not sure I had properly understood the sentence.

Comment by richlv [ 2018 Dec 18 ]

The script would send two different JSONs to two different rules, it's just that the data collection would happen once.
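The "collect once, send to two rules" approach can be sketched like this (hypothetical data; each payload would then go to its own trapper LLD rule, e.g. via zabbix_sender):

```python
# One collection pass over the database server...
collected = {
    "db01": ["tbl01", "tbl02"],
    "db02": ["tbl01", "tbl02"],
}

# ...produces two separate LLD payloads, so the db-level rule
# never sees duplicate {#DBNAME} values.
db_lld = {"data": [{"{#DBNAME}": db} for db in sorted(collected)]}
tbl_lld = {"data": [
    {"{#DBNAME}": db, "{#TBLNAME}": tbl}
    for db in sorted(collected)
    for tbl in collected[db]
]}

print(len(db_lld["data"]), len(tbl_lld["data"]))  # 2 4
```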

Comment by Stéphane Olivier [ 2018 Dec 18 ]

Data collection for both discovery rules would happen only once if you don't use "Zabbix agent" or "Zabbix agent (active)" as their type; otherwise the script will be called twice.

Comment by Jack Valko [ 2021 Apr 21 ]

Here's another use case that really would make a nested LLD w/ host prototypes powerful.

I'm managing a large Meraki installation on 3 continents. The only way to properly get data from the devices is to use Meraki's API (snmpwalk won't scale and can't see devices behind a Meraki appliance). Meraki devices are configured hierarchically: Org/Network/Device.

My Org discovery returns this:

[
  { "name": "Org Name", "id": "1234567", "url": "https://a-meraki-url.com/foo" }
]

I then want to discover networks so I create a host from the 'name' key and use it to discover networks.  Once those networks are discovered I'd like to create hosts for those and start to discover devices.

I can write an external script to do this, but I'd like to do it natively within Zabbix and not have to shell out every time I discover. Nested LLD is about the only practical way to discover large groups of devices.

Comment by dimir [ 2021 Aug 20 ]

jackvalko, you might be interested in voting and following ZBXNEXT-6844 then.

Comment by Stefan [ 2021 Aug 20 ]

Sorry, but no, we are not interested in other features; we want this one.

Comment by James Kirsop [ 2022 May 04 ]

I outlined another use case for nested LLD here: https://www.zabbix.com/forum/zabbix-help/433212-cascading-lld-building-juniper-cos-items-on-interface-and-queue-indexes

@palivoda can we get this request assigned to someone for review?

Comment by Gustavo Guido [ 2022 May 26 ]

The link https://www.zabbix.org/wiki/Docs/howto/Nested_LLD is pointing to https://www.zabbix.com/community, not to a particular doc, so I could not read it (sorry if I'm saying something that is explained in that article).
There are many situations where the LLD should iterate: when adding tags to item or trigger prototypes, or when an entity has sub-entities and you need to relate triggers (no trigger on the sub-entity if the master entity is offline).

Comment by Gergely Czuczy [ 2022 Jun 27 ]

This would also make it possible to more easily monitor nginx JSON status.

The open module's format is here, it's an upstream/server nesting: https://github.com/nginx-modules/ngx_http_json_status_module

And the commercial official API module also has some nesting: https://nginx.org/en/docs/http/ngx_http_api_module.html

 

Comment by Gergely Czuczy [ 2022 Jun 28 ]

The Solr cluster status response is also a nested json: https://solr.apache.org/guide/6_6/collections-api.html#CollectionsAPI-clusterstatus

$.collections.{collection}.{shard}.replicas.{replica}

That's 3 levels of nesting. Without this feature, enterprises using Solr cannot really utilize Zabbix. It's pretty much like ZBXNEXT-1: people need a template generator specific to their setup because of the lack of a much-needed feature in Zabbix.

 

Comment by Stefan [ 2022 Jun 28 ]

maybe this feature can be funded by the community via https://bountysource.com/ ?

Comment by Oleg Ivanivskyi [ 2022 Aug 08 ]

Improved filtering is another possible use case for this feature.

For example, you would like to monitor SFP transceiver power status for a Cisco device. You can get all SFP modules via SNMP, e.g. port, serial, model, etc.:

.1.3.6.1.2.1.47.1.1.1.1.2.311 = STRING: "Transceiver (slot:11-port:11)"
.1.3.6.1.2.1.47.1.1.1.1.12.311 = STRING: "CISCO-AVAGO"
.1.3.6.1.2.1.47.1.1.1.1.13.311 = STRING: "SFP-10G"

Each module has different sensors, e.g. receive power, transmit power, etc. The SFP module with index 311 has 5 different sensors:

.1.3.6.1.2.1.47.1.1.1.1.4.10001 = INTEGER: 311
.1.3.6.1.2.1.47.1.1.1.1.4.10002 = INTEGER: 311
.1.3.6.1.2.1.47.1.1.1.1.4.10003 = INTEGER: 311
.1.3.6.1.2.1.47.1.1.1.1.4.10004 = INTEGER: 311
.1.3.6.1.2.1.47.1.1.1.1.4.10005 = INTEGER: 311

Sensor data (e.g. name, status, etc.)

.1.3.6.1.2.1.47.1.1.1.1.2.10001 = STRING: "Ethernet11/11 Lane 1 Transceiver Receive Power Sensor"
.1.3.6.1.4.1.9.9.91.1.2.1.1.3.10001.1 = INTEGER: 3

At the moment, you can discover sensors but can't use module-related information as a filter, i.e. you can't monitor power sensors for "SFP-10G" modules only.
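The missing filter amounts to a join between two walked tables. A sketch with made-up data shaped like the walk above (the ...47.1.1.1.1.4 column, entPhysicalContainedIn, points each sensor at its parent module):

```python
# Hypothetical data: module info keyed by entity index, and the
# sensor -> parent-module mapping from entPhysicalContainedIn.
modules = {311: {"model": "SFP-10G", "vendor": "CISCO-AVAGO"}}
contained_in = {10001: 311, 10002: 311, 10003: 311, 10004: 311, 10005: 311}

def sensors_for_model(model):
    """Sensor indexes whose parent module matches the given model."""
    return [sensor for sensor, parent in contained_in.items()
            if modules.get(parent, {}).get("model") == model]

print(sorted(sensors_for_model("SFP-10G")))  # the five sensors of module 311
```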

Comment by Ben Hanson [ 2022 Sep 09 ]

"Improved filtering is another possible use case for this feature"

This is my motivator as well, though with a configuration/design use case rather than hardware discovery.

We utilize Cisco gear and do port discovery on all of our switches. I currently filter based on port type and up/down status. There is an SNMP OID that uses the interface index and reports CDP neighbor name(s) (.1.3.6.1.4.1.9.9.23.1.2.1.1.6.{IFINDEX}.#). Unfortunately, it has a trailing number after the index, and it is only populated if a value is present. If the LLD could include this OID in discovery, then I could create one discovery rule for ports with a neighbor matching a device-name prefix, and a second for everything else. This would allow me to automatically assign triggers based on expected port type (uplink vs. device, etc.).

So, the ability to include {IFINDEX} in a discovery label would be helpful.

Comment by John Ivan [ 2022 Nov 11 ]

Might be related. I'm using SNMP discovery to generate monitoring items based on a storage type. Presently I have several discoveries run the same SNMP walk, each with a different discovery filter by storage type (scheduled to avoid the concurrency that causes the discovery to time out). It seems to me this would be better with a master discovery (with no item prototypes) that runs once and returns all storage types, followed by dependent discoveries that use preprocessing to match the storage type and generate the items. The dependent discoveries would not have to repeat the SNMP walk.

This is similar to the existing feature where a master item runs a command that returns multiple values and dependent items preprocess those results without having to run the command again.
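The master/dependent split described above can be sketched in plain Python standing in for the preprocessing step (the macro names and values are invented):

```python
# One master discovery result containing all storage entries...
master = [
    {"{#STORAGE}": "/", "{#TYPE}": "fixed-disk"},
    {"{#STORAGE}": "swap", "{#TYPE}": "virtual-memory"},
    {"{#STORAGE}": "/var", "{#TYPE}": "fixed-disk"},
]

# ...and one dependent discovery per storage type, each merely
# filtering the master data instead of repeating the SNMP walk.
def dependent_discovery(rows, storage_type):
    return [row for row in rows if row["{#TYPE}"] == storage_type]

print(len(dependent_discovery(master, "fixed-disk")))  # 2
```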

Comment by Oleksii Zagorskyi [ 2023 Feb 03 ]

ZBXNEXT-7527 asks for host prototype nesting, which is very much related to the current case, but still a little specific.

Comment by Craig Hopkins [ 2023 Feb 23 ]

Nested LLD would definitely be useful for multi-lane optics. The OID looks like

SNMPv2-SMI::enterprises.2636.3.60.1.2.1.1.6.{PORTINDEX}.{LANE}

but there's no way currently (that I can find) within Zabbix to support this.

Comment by LivreAcesso.Pro [ 2023 Jun 27 ]

Where did the info at http://zabbix.org/wiki/Docs/howto/Nested_LLD go?

Comment by Volker Fröhlich [ 2023 Jun 27 ]

https://web.archive.org/web/20171115143613/http://zabbix.org/wiki/Docs/howto/Nested_LLD

The wiki is no more.

Comment by Arthur Ivanov [ 2024 Feb 27 ]

Another case for cascaded LLD is the Redfish API:

/redfish/v1/Chassis/<ID>/Power/PowerSupplies/<ID>

Currently, there is no way to enumerate Chassis and then Power Supplies within one template using the HTTP agent.

Supermicro:

{
    "@odata.context": "/redfish/v1/$metadata#ChassisCollection.ChassisCollection",
    "@odata.type": "#ChassisCollection.ChassisCollection",
    "@odata.id": "/redfish/v1/Chassis",
    "Name": "Chassis Collection",
    "Members": [
        {
            "@odata.id": "/redfish/v1/Chassis/1"
        },
        {
            "@odata.id": "/redfish/v1/Chassis/HA-RAID.0.StorageEnclosure.0"
        }
    ],
    "[email protected]": 2
}

Lenovo:

{
    "@odata.type": "#ChassisCollection.ChassisCollection",
    "@odata.id": "/redfish/v1/Chassis",
    "Members": [
        {
            "@odata.id": "/redfish/v1/Chassis/1"
        },
        {
            "@odata.id": "/redfish/v1/Chassis/3"
        },
        {
            "@odata.id": "/redfish/v1/Chassis/4"
        }
    ],
    "@odata.etag": "\"2b1611aac39424e542c\"",
    "@odata.context": "/redfish/v1/$metadata#ChassisCollection.ChassisCollection",
    "Description": "A collection of Chassis resource instances.",
    "[email protected]": 3,
    "Name": "ChassisCollection"
}

Dell:

{
    "@odata.context": "/redfish/v1/$metadata#ChassisCollection.ChassisCollection",
    "@odata.id": "/redfish/v1/Chassis",
    "@odata.type": "#ChassisCollection.ChassisCollection",
    "Description": "Collection of Chassis",
    "Members": [
        {
            "@odata.id": "/redfish/v1/Chassis/System.Embedded.1"
        },
        {
            "@odata.id": "/redfish/v1/Chassis/Enclosure.Internal.0-1:RAID.SL.3-1"
        }
    ],
    "[email protected]": 2,
    "Name": "Chassis Collection"
}

ZBXNEXT-3643

Comment by Sam Potekhin [ 2024 Aug 06 ]

Yet another case:

How to monitor a RAID controller:

  1. Discover the controllers (HP: `ssacli ctrl all show`; storcli: `storcli show`) - we get the controller names with their numbers.
  2. On every controller, discover the following: logical disks, physical disks (unused or hot spares), arrays.
  3. On every logical disk, discover the physical disks it uses. For every logical/physical disk, retrieve info about it and place it into items.
  4. Everything from points 1, 2 and 3 has its own items.

Why not make it possible to create a discovery rule that depends on another discovery rule? Currently only a dependency on a regular item is available.

Comment by dimir [ 2025 Feb 27 ]

Good news! The work on this issue specification has started. Stay tuned!

Comment by user185953 [ 2025 Mar 31 ]

Looks like it will also solve ZBXNEXT-6320 and ZBXNEXT-9518? Maybe too big a hammer, and the cross-LLD-level trigger pain gets bigger, but it might work well enough.

Comment by Andris Zeila [ 2025 May 30 ]

Released ZBXNEXT-1527 in:

  • pre-7.4.0rc1 af4e4aacb98
Comment by Andris Zeila [ 2025 Jun 03 ]

Released ZBXNEXT-1527 in:

  • pre-7.4.0rc1 031d514468c, 1dff8f12ec9
Comment by Martins Valkovskis [ 2025 Jul 02 ]

Updated documentation:

Comment by Stefan [ 2025 Jul 02 ]

martins-v there is an issue in the document: ZBX-26619

Comment by Max Ried [ 2025 Jul 04 ]

I expected that item prototypes in nested discovery rules could refer to the parent discovery's item prototype as a master item for creating dependent items, since the parent discovery has already been executed and its context is known.

Unfortunately, in Zabbix 7.4 I cannot select the parent discovery's item as the master item for dependent items inside a nested discovery. I would have expected to be able to select the parent discovery's item prototypes as master items in a nested discovery, or alternatively to enter master item keys manually there, enabling proper reuse of expensive data sources.

Comment by Janis Freibergs [ 2025 Jul 10 ]

[email protected], I guess there might be some confusion regarding the term "nested": on one hand, the parent-child, cascaded connection of LLD rules and their child prototypes; on the other, the specific "Nested" type available for an LLD rule prototype. In the latter case, the discovery prototype basically behaves like a dependent item, using a portion of the parent discovery rule's data as its input.

Let's say you have full source data consisting of an object like

{
 propertyA: valueA,
 propertyB: valueB,
 array: [A, B, C, ...]
}

The parent LLD rule will operate on $.array, as usual "slicing off" the A, then B, etc. portions from it. Imagine that these are master items that collect A, B, C... on separate occasions. In addition, properties can be mapped to LLD macros that are "relayed" throughout the discovery, making the value of $.property* available to the child discovery prototype(s) and their child prototype(s).

Say, during processing of the "B branch" here, a discovery prototype with the type "Nested" will behave as a dependent item of the master item that collected B as its value. The parent rule's macros can be used in the key/properties of the discovery prototype itself, mapping to e.g. discoveryrule.key[value of $.propertyA].

Assuming a structure for B similar to the JSON above, you'll likely need a JSONPath preprocessing step for the discovery prototype to operate on $.array of B, basically meaning iterations over $.array[index of B].array of the initial JSON.
Further macro mappings can be used here to extract some properties of B for use in the item (and other) prototypes of the discovery prototype, thus making it possible to combine, say, an item prototype key like item.key[value of $.propertyA][value of B object's property].

Is this what you mean/need?

<edit> On second reading, I think the initial idea is to be able to choose the master item from the tree of parent rules/prototypes "above" the cascaded discovery rule - expecting that such an item will already have been created before arriving at the current level/branch of discovery - and/or to be allowed to specify the master item key as "key...{#MACRO}". Is this interpretation correct? Can you describe a use case?

NB: I have an inkling something like that could be achieved with host prototypes: link a template with a nested LLD rule that expects some general JSON property(ies) as well as the "current branch slice" passed through the discovery rule's macros - an item prototype key can contain a macro there, and dependent item prototypes further along can use that as their master.
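Reading the explanation above back as a data flow, with a made-up source object and plain Python in place of the server's JSONPath handling:

```python
source = {
    "propertyA": "valueA",
    "propertyB": "valueB",
    "array": [
        {"name": "A", "array": [{"id": 1}]},
        {"name": "B", "array": [{"id": 1}, {"id": 2}]},
    ],
}

discovered = []
for element in source["array"]:      # parent LLD rule iterating $.array
    # a "Nested" discovery prototype receives this element as its input
    for child in element["array"]:   # its own JSONPath slice of the element
        discovered.append((element["name"], child["id"]))

print(discovered)  # [('A', 1), ('B', 1), ('B', 2)]
```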

Comment by user185953 [ 2025 Jul 10 ]

Documentation (https://www.zabbix.com/documentation/7.4/en/manual/introduction/whatsnew#nested-low-level-discovery) says it too:
"A nested discovery prototype may use the same JSON value as the parent rule, but then use a different "slice" of data from the JSON value."
So are we expected to slice the master item from the beginning at each LLD depth instead of making many technical sub-sliced master items? Not needed: the new nested LLD type automatically passes only the relevant part of the master JSON, see https://www.zabbix.com/documentation/7.4/en/manual/discovery/low_level_discovery/discovery_prototypes#example - but nothing stops us from using a dependent LLD type and slicing differently.

But yes, sometimes the next LLD level needs data not present in the original master item. For all my use cases, though, I can either pull all the data in the original master item, or I am discovering VMs, so it makes sense to use host prototypes plus sub-templates anyway. That looks like the reason this was left out - nobody really needs it?
 

Comment by Max Ried [ 2025 Jul 10 ]

Thank you for your input:
I have an API I can query for a list of devices attached to a server. This is a cheap operation, and I use it as the outermost low-level discovery.
Each device provides various metrics, which differ from device to device. Querying the devices is a costly operation. My plan was to:

enumerate the devices using the JSON from the first LLD. Each device then gets a raw JSON item that contains all measurements, plus another discovery that creates dependent items extracting each measurement from the already-performed costly API call. Querying the API for each metric separately is way too expensive.

Comment by user185953 [ 2025 Jul 10 ]

Querying the devices is costly, but you can't avoid that, right? Can you query all metrics in one item? It would be very expensive, but so far this has turned out cheaper for me than the total sum of querying each device one by one. That totally surprised me.

This is more of a discussion for the forum, though?

Comment by Max Ried [ 2025 Jul 10 ]

No, I can't. I might end up working around it using more custom scripts on the host. But it would easily be solvable by allowing prototypes from the upper discovery levels to be visible in lower discoveries.

Comment by Janis Freibergs [ 2025 Jul 10 ]

One way to go about the scenario above...

Let's say it operates on something like

[
  {
    server_name: A,
    devices: [{device_name: DA_A}, {device_name: DA_B}, ...]
  },
  {
    server_name: B,
    devices: [{device_name: DB_A}, {device_name: DB_B}, ...]
  },
]

Create a template with a discovery rule: key = something like device.discovery.rule[{#SERVER_NAME}], type: Nested, preprocessing: JSONPath = $.devices. Add an LLD macro mapping {#DEVICE_NAME} = $.device_name.
Add an item prototype like device.query[{#DEVICE_NAME}] - this will be the master item.
Add further dependent item prototypes extracting specific metrics, linked to the master item from the previous step.

On a host, set up an LLD rule to get the initial JSON. Map {#SERVER_NAME} to $.server_name.
Add a host prototype (host name: server-{#SERVER_NAME}) and link the template to it. Run/await discovery.

The initial rule iterates over each server entry, passing it along "through" the discovered host (expanded as server-A, ...) to the rule (device.discovery.rule[A], ...) inherited from the template. There the master items (device.query[DA_A], device.query[DA_B], ...) are created, along with the dependent items, once another round of discovery kicks in (on the discovered host).

Apologies in advance for any inaccuracies - I'm writing off the top of my head, trying to explain the gist of a somewhat "back-to-front" flow here.

I will pass along your suggestion regarding "macros in master items" - I see how it could simplify things and, for one, also allow everything to be grouped under the same host as the rule.
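The flow in this comment, simulated end-to-end in plain Python (the structure and key names follow the sketch above and are assumptions, not Zabbix internals):

```python
source = [
    {"server_name": "A",
     "devices": [{"device_name": "DA_A"}, {"device_name": "DA_B"}]},
    {"server_name": "B",
     "devices": [{"device_name": "DB_A"}]},
]

hosts = {}
for entry in source:                         # initial LLD rule on the host
    host = f"server-{entry['server_name']}"  # host prototype expansion
    # nested rule on the discovered host: JSONPath $.devices,
    # with {#DEVICE_NAME} mapped to $.device_name
    hosts[host] = [f"device.query[{d['device_name']}]"
                   for d in entry["devices"]]

print(hosts["server-A"])  # ['device.query[DA_A]', 'device.query[DA_B]']
```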

Comment by user185953 [ 2025 Aug 20 ]

Can this also work as a two-level SNMP discovery, or do we still need the snmpwalk-to-JSON changes, aka ZBXNEXT-9518?

Comment by Tommy [ 2025 Oct 12 ]

user185953 "But for all my usecases I can either pull all data in original master item"

Doesn't pulling the info in the master item defeat the point of discovering it? What if I only want to pull the data based on the discovery? I would have to create the item on the host directly, even though not all my hosts using that template may need it, so I might have to disable it for those hosts.

For example, if I want to walk an SNMP OID, I may not want to walk the whole thing every time. There is information that rarely changes, and information that you want to poll regularly. Why would I fetch all of the information frequently when I could collect the rarely-changing part infrequently?

If I create an item discovery, I may only want it to run once an hour, but if it does discover something, then I want to get a small subsection of data every minute. So you create a new item prototype that runs every minute. If you then want another discovery dependent on the information you just got from that item prototype, you can create a discovery prototype referencing a dependent item. And yet, once you make a new item prototype in that sub-discovery prototype, you cannot reference the parent item prototype. Your options are either to create an item on the master host itself and reference that, or to make a duplicate SNMP request item prototype and hang dependent items off of it.

Really, there are two different solutions as I can see it.

  1. Allow child item prototypes to be dependent on item prototypes from the parent
  2. Allow item prototypes to be dependent on the discovery item itself

Either way would allow you to make a single SNMP request, then split that data into unique items without needing to make even more requests.

Comment by Ryan Eberly [ 2026 Jan 29 ]

I am extremely late to exploring this functionality in version 7.4, but has anyone been able to make nested discovery work with host prototypes? I've read the example described in case 2 here, but when I set this up I have to send the JSON twice. On the first send it creates my host from the host prototype (the host prototype includes a template to link the discovered hosts to). The template on the discovered hosts has an LLD rule of type "Nested", but the item prototypes for that nested rule do not get discovered until I send the JSON a second time - and only after the configuration syncer has synced that my discovered host (from the host prototype rule) exists in the configuration cache.

 

So, in summary:

  1. create a host called 'root host'
  2. 'root host' has a template linked to it with an LLD rule of type=trapper. This LLD rule only has a host prototype that links another template to the discovered host - we can call the other template "Template #2".
  3. The discovered host does get created and "Template #2" is linked to it.
  4. Template #2 has only an LLD rule with a type of Nested. 

 

I tried making the LLD key identical between the two LLD rules in both templates above, but that didn't make a difference. I also set zabbix_server.conf CacheUpdateFrequency to a ridiculously high value (300 seconds), then sent my JSON to discover the host (the host gets discovered as expected), and continued sending the same JSON repeatedly so that the now-discovered host could discover and use it. But it was not until after I forced config_cache_reload that the LLD rule on Template #2 (linked to the discovered host) actually created the items from Template #2's item prototypes.

 

Does the documentation in the manual need to reflect this? I presumed all I should need to do is send the JSON once, and everything would be created in one shot.

 

jfreibergs or wiper - is this the expected behavior, that we need to send the JSON twice? If you need more information - screenshots, some example templates that illustrate the issue I described above, etc. - I can provide that. Or, if this should go in as a separate issue ticket, I can do that as well.

Comment by Andris Zeila [ 2026 Feb 03 ]

Yes, only one level of nesting is discovered per processing round. More precisely, the configuration cache must be synced after each discovery round before the next nested level can be processed.

Generated at Thu Apr 02 02:04:39 EEST 2026 using Jira 10.3.13#10030013-sha1:56dd970ae30ebfeda3a697d25be1f6388b68a422.