ZABBIX BUGS AND ISSUES / ZBX-3490

vfs.dev.write[] and vfs.dev.read[] cannot handle LVM devices.


    • Type: Incident report
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: 1.8.6, 1.9.5 (alpha)
    • Affects Version/s: 1.8.4
    • Component/s: Agent (G)
    • Labels: None
    • Team: Zabbix Agent for Linux

      vfs.dev.write[] and vfs.dev.read[] cannot handle LVM devices.

      For example, I have the following disk devices.
      I'd like to get "vfs.dev.write[mapper/VolGroup00-LogVolVMWare01]", but it returns ZBX_NOTSUPPORTED.
      #########################################################################

      # df
        Filesystem 1K-blocks Used Available Use% Mounted on
        /dev/sdb1 10153988 9565416 64456 100% /
        tmpfs 3057412 0 3057412 0% /dev/shm
        /dev/sdb8 98118256 79910280 13223812 86% /var/lib/vmware
        /dev/mapper/VolGroup00-LogVolVMWare01
        110437216 103922020 905316 100% /var/lib/vmware/LVM_VMWare01
        /dev/sdc1 240362656 224830024 3322832 99% /var/lib/vmware/LVM_VMWare02
      # cat /proc/diskstats
        1 0 ram0 0 0 0 0 0 0 0 0 0 0 0
        1 1 ram1 0 0 0 0 0 0 0 0 0 0 0
        1 2 ram2 0 0 0 0 0 0 0 0 0 0 0
        1 3 ram3 0 0 0 0 0 0 0 0 0 0 0
        1 4 ram4 0 0 0 0 0 0 0 0 0 0 0
        1 5 ram5 0 0 0 0 0 0 0 0 0 0 0
        1 6 ram6 0 0 0 0 0 0 0 0 0 0 0
        1 7 ram7 0 0 0 0 0 0 0 0 0 0 0
        1 8 ram8 0 0 0 0 0 0 0 0 0 0 0
        1 9 ram9 0 0 0 0 0 0 0 0 0 0 0
        1 10 ram10 0 0 0 0 0 0 0 0 0 0 0
        1 11 ram11 0 0 0 0 0 0 0 0 0 0 0
        1 12 ram12 0 0 0 0 0 0 0 0 0 0 0
        1 13 ram13 0 0 0 0 0 0 0 0 0 0 0
        1 14 ram14 0 0 0 0 0 0 0 0 0 0 0
        1 15 ram15 0 0 0 0 0 0 0 0 0 0 0
        8 0 sda 1654961 152313 55550056 29417338 220612252 1480505653 13611736176 2159922341 0 2294306082 2192725174
        8 1 sda1 410 826 0 0
        8 2 sda2 1807096 55548910 1701458169 726798368
        8 16 sdb 1026519 130631 22693437 7777766 62776012 19146814 655459112 655832480 0 472751459 663786416
        8 17 sdb1 762084 14253276 29390593 235124824
        8 18 sdb2 806 815 0 0
        8 19 sdb3 806 815 0 0
        8 20 sdb4 5 10 0 0
        8 21 sdb5 810 819 0 0
        8 22 sdb6 398 802 0 0
        8 23 sdb7 398 802 0 0
        8 24 sdb8 391893 8435298 52541750 420334000
        8 32 sdc 116153 11234 3362726 725861 33834038 44268352 625042224 2767518537 0 33771182 2768502817
        8 33 sdc1 124462 3339262 78130069 625040992
        253 0 dm-0 55 0 440 408 0 0 0 0 0 25 408
        253 1 dm-1 110 0 880 1025 358 0 2864 13363 0 435 14388
        253 2 dm-2 1806769 0 55546954 34161598 1701466664 0 13611733312 2435789496 0 2294642704 2470671328
        3 0 hda 0 0 0 0 0 0 0 0 0 0 0
        9 0 md0 0 0 0 0 0 0 0 0 0 0 0
        #########################################################################

      [Why it occurs]
      vfs.dev.write[] and vfs.dev.read[] take the device name from their argument
      and search /proc/diskstats for that name; if they find a matching line, they
      return its statistics values.
      But there is no "mapper/VolGroup00-LogVolVMWare01" entry: in /proc/diskstats
      the LVM device appears under its kernel name, "dm-2".

      So Zabbix returns ZBX_NOTSUPPORTED.

      [How to fix]
      Currently Zabbix searches /proc/diskstats for the device by name.
      If it searched by device major and minor number instead, it could find the
      LVM device. This is almost the same fix as in ZBX-1015.

      For example, "ls -al /dev/mapper/VolGroup00-LogVolVMWare01" shows that the
      LVM device node has major/minor numbers "253, 2". With those numbers we can
      find the LVM device in /proc/diskstats as "dm-2":

      # ls -al /dev/mapper/VolGroup00-LogVolVMWare01
        brw-rw---- 1 root disk 253, 2 Sep 8 19:39 /dev/mapper/VolGroup00-LogVolVMWare01
      # cat /proc/diskstats | grep "253 2"
        253 2 dm-2 1806769 0 55546954 34161598 1701502889 0 13612023112 2452860745 0 2294837356 2487742578

            Assignee: dimir
            Reporter: Takanori Suzuki (tsuzuki)
            Votes: 0
            Watchers: 0
