Zabbix-Agent2 Oracle plugin: Cannot fetch data: context deadline exceeded


    • Type: Problem report
    • Resolution: Duplicate
    • Priority: Major
    • Affects Version/s: 5.4.4, 6.0.5
    • Component/s: Agent2 plugin (G)

      The complex SQL statement used to gather space metrics for 42 tablespaces takes 104 seconds to run and exceeds the Oracle plugin's call deadline.

      Steps to reproduce:

      1. Configure the Oracle plugin for zabbix-agent2 (on Oracle Linux 7 with Oracle DB installed locally); see the configuration sketch after this list.
      2. Set DPI_DEBUG_LEVEL=16 in /etc/sysconfig/zabbix-agent2.
      3. Start the agent.
      4. Monitor /var/log/zabbix/zabbix_agent2.log for the entry "[Oracle] Cannot fetch data: context deadline exceeded."
      5. Review /var/log/messages for the SQL statement (see below).
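
      For context, the relevant parts of the setup in steps 1 and 2 look roughly like this (a minimal sketch: the Plugins.Oracle.* parameter names follow the agent 2 Oracle plugin documentation and should be checked against the installed version; connection settings are omitted):

        # /etc/sysconfig/zabbix-agent2 : enable ODPI-C debug output (step 2)
        DPI_DEBUG_LEVEL=16

        # /etc/zabbix/zabbix_agent2.conf : Oracle plugin timeouts (excerpt)
        Plugins.Oracle.ConnectTimeout=10
        Plugins.Oracle.CallTimeout=30

      If the documented 1-30 second range for CallTimeout applies, a query that needs 104 seconds cannot be accommodated by configuration alone, which is why a faster query is requested below.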

      Result:
      The SQL statement:

      SELECT
        JSON_ARRAYAGG(
          JSON_OBJECT(TABLESPACE_NAME VALUE
            JSON_OBJECT(
              'contents'      VALUE CONTENTS,
              'file_bytes'    VALUE FILE_BYTES,
              'max_bytes'     VALUE MAX_BYTES,
              'free_bytes'    VALUE FREE_BYTES,
              'used_bytes'    VALUE USED_BYTES,
              'used_pct_max'  VALUE USED_PCT_MAX,
              'used_file_pct' VALUE USED_FILE_PCT,
              'status'        VALUE STATUS
            )
          ) RETURNING CLOB
        )
      FROM
      (
        SELECT
          df.TABLESPACE_NAME AS TABLESPACE_NAME,
          df.CONTENTS AS CONTENTS,
          NVL(SUM(df.BYTES), 0) AS FILE_BYTES,
          NVL(SUM(df.MAX_BYTES), 0) AS MAX_BYTES,
          NVL(SUM(f.FREE), 0) AS FREE_BYTES,
          SUM(df.BYTES) - SUM(f.FREE) AS USED_BYTES,
          ROUND(DECODE(SUM(df.MAX_BYTES), 0, 0, (SUM(df.BYTES) / SUM(df.MAX_BYTES) * 100)), 2) AS USED_PCT_MAX,
          ROUND(DECODE(SUM(df.BYTES), 0, 0, (SUM(df.BYTES) - SUM(f.FREE)) / SUM(df.BYTES) * 100), 2) AS USED_FILE_PCT,
          DECODE(df.STATUS, 'ONLINE', 1, 'OFFLINE', 2, 'READ ONLY', 3, 0) AS STATUS
        FROM
        (
          SELECT
            ddf.FILE_ID,
            dt.CONTENTS,
            dt.STATUS,
            ddf.FILE_NAME,
            ddf.TABLESPACE_NAME,
            TRUNC(ddf.BYTES) AS BYTES,
            TRUNC(GREATEST(ddf.BYTES, ddf.MAXBYTES)) AS MAX_BYTES
          FROM
            DBA_DATA_FILES ddf,
            DBA_TABLESPACES dt
          WHERE
            ddf.TABLESPACE_NAME = dt.TABLESPACE_NAME
        ) df,
        (
          SELECT
            TRUNC(SUM(BYTES)) AS FREE,
            FILE_ID
          FROM
            DBA_FREE_SPACE
          GROUP BY
            FILE_ID
        ) f
        WHERE
          df.FILE_ID = f.FILE_ID (+)
        GROUP BY
          df.TABLESPACE_NAME, df.CONTENTS, df.STATUS
        UNION ALL
        SELECT
          Y.NAME AS TABLESPACE_NAME,
          Y.CONTENTS AS CONTENTS,
          NVL(SUM(Y.BYTES), 0) AS FILE_BYTES,
          NVL(SUM(Y.MAX_BYTES), 0) AS MAX_BYTES,
          NVL(MAX(NVL(Y.FREE_BYTES, 0)), 0) AS FREE,
          SUM(Y.BYTES) - SUM(Y.FREE_BYTES) AS USED_BYTES,
          ROUND(DECODE(SUM(Y.MAX_BYTES), 0, 0, (SUM(Y.BYTES) / SUM(Y.MAX_BYTES) * 100)), 2) AS USED_PCT_MAX,
          ROUND(DECODE(SUM(Y.BYTES), 0, 0, (SUM(Y.BYTES) - SUM(Y.FREE_BYTES)) / SUM(Y.BYTES) * 100), 2) AS USED_FILE_PCT,
          DECODE(Y.TBS_STATUS, 'ONLINE', 1, 'OFFLINE', 2, 'READ ONLY', 3, 0) AS STATUS
        FROM
        (
          SELECT
            dtf.TABLESPACE_NAME AS NAME,
            dt.CONTENTS,
            dt.STATUS AS TBS_STATUS,
            dtf.STATUS AS STATUS,
            dtf.BYTES AS BYTES,
            (
              SELECT
                ((f.TOTAL_BLOCKS - s.TOT_USED_BLOCKS) * vp.VALUE)
              FROM
              (
                SELECT
                  TABLESPACE_NAME, SUM(USED_BLOCKS) TOT_USED_BLOCKS
                FROM
                  GV$SORT_SEGMENT
                WHERE
                  TABLESPACE_NAME != 'DUMMY'
                GROUP BY
                  TABLESPACE_NAME
              ) s,
              (
                SELECT
                  TABLESPACE_NAME, SUM(BLOCKS) TOTAL_BLOCKS
                FROM
                  DBA_TEMP_FILES
                WHERE
                  TABLESPACE_NAME != 'DUMMY'
                GROUP BY
                  TABLESPACE_NAME
              ) f,
              (
                SELECT
                  VALUE
                FROM
                  V$PARAMETER
                WHERE
                  NAME = 'db_block_size'
              ) vp
              WHERE
                f.TABLESPACE_NAME = s.TABLESPACE_NAME
                AND f.TABLESPACE_NAME = dtf.TABLESPACE_NAME
            ) AS FREE_BYTES,
            CASE
              WHEN dtf.MAXBYTES = 0 THEN dtf.BYTES
              ELSE dtf.MAXBYTES
            END AS MAX_BYTES
          FROM
            sys.DBA_TEMP_FILES dtf,
            sys.DBA_TABLESPACES dt
          WHERE
            dtf.TABLESPACE_NAME = dt.TABLESPACE_NAME
        ) Y
        GROUP BY
          Y.NAME, Y.CONTENTS, Y.TBS_STATUS
      );
      

      The statement returns 42 tablespace elements (abridged):

       { "DB1": { "contents": "PERMANENT", "file_bytes": 1048576000, "max_bytes": 107374182400, "free_bytes": 110100480, "used_bytes": 938475520, "used_pct_max": 0.98, "used_file_pct": 89.5, "status": 1 } }, { "DB2": { "contents": "PERMANENT", "file_bytes": 104857600, "max_bytes": 34359721984, "free_bytes": 92012544, "used_bytes": 12845056, "used_pct_max": 0.31, "used_file_pct": 12.25, "status": 1 } }, { "DB3": { "contents": "PERMANENT", "file_bytes": 18393137152, "max_bytes": 107374182400, "free_bytes": 877592576, "used_bytes": 17515544576, "used_pct_max": 17.13, "used_file_pct": 95.23, "status": 1 } },.......
      

      but it takes 104 seconds to run.
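
      The elapsed time can be confirmed outside the agent by running the same statement interactively with timing enabled (a sketch; ts_stats.sql is a hypothetical file containing the statement above):

        -- in SQL*Plus, connected as the monitoring user
        SET TIMING ON
        @ts_stats.sql

      SQL*Plus prints the elapsed time after the result, which should match the roughly 104 seconds seen in the agent log.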
      Expected:
      A more efficient SQL query that returns values within the agent plugin's timeout threshold and does not overly tax the database server with these repetitive tabulations.
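
      For illustration only, not as a proposed fix: Oracle also exposes pre-aggregated usage figures in DBA_TABLESPACE_USAGE_METRICS, which covers permanent, undo and temporary tablespaces in a single pass. A sketch of a lighter query is below; values in that view are reported in blocks, so they are scaled by the tablespace block size here, and fields such as FILE_BYTES, FREE_BYTES, USED_FILE_PCT and the numeric STATUS would still have to come from the existing views. Whether this is actually faster on a given instance would need to be measured.

        SELECT
          m.TABLESPACE_NAME,
          dt.CONTENTS,
          m.USED_SPACE      * dt.BLOCK_SIZE AS USED_BYTES,   -- USED_SPACE is in blocks
          m.TABLESPACE_SIZE * dt.BLOCK_SIZE AS MAX_BYTES,    -- TABLESPACE_SIZE is in blocks
          m.USED_PERCENT                    AS USED_PCT_MAX
        FROM
          DBA_TABLESPACE_USAGE_METRICS m
          JOIN DBA_TABLESPACES dt
            ON dt.TABLESPACE_NAME = m.TABLESPACE_NAME;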

            Assignee:
            Eriks Sneiders
            Reporter:
            Bruce Gilmore
            Votes:
            4
            Watchers:
            7

              Created:
              Updated:
              Resolved: