[ZBX-15587] Zabbix problem with TimeScaleDB (drop_chunks) Created: 2019 Feb 04  Updated: 2024 Apr 10  Resolved: 2020 Mar 16

Status: Closed
Project: ZABBIX BUGS AND ISSUES
Component/s: Documentation (D), Server (S)
Affects Version/s: None
Fix Version/s: 5.0 (plan)

Type: Documentation task Priority: Major
Reporter: SuNova Assignee: Alex Kalimulin
Resolution: Fixed Votes: 0
Labels: CentOS, PostgreSQL, TimescaleDB
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File image-2019-02-11-17-31-05-008.png    
Issue Links:
Duplicate
Team: Team A

 Description   

 

Errors in zabbix_server.log:

24889:20190204:215925.404 forced execution of the housekeeper
24889:20190204:215925.404 executing housekeeper
24889:20190204:215925.435 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: function drop_chunks(integer, unknown) does not exist
LINE 1: SELECT drop_chunks(1548700165,'history')
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
[SELECT drop_chunks(1548700165,'history')]
24889:20190204:215925.435 cannot drop chunks for history
24889:20190204:215925.435 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: function drop_chunks(integer, unknown) does not exist
LINE 1: SELECT drop_chunks(1548700165,'history_uint')
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
[SELECT drop_chunks(1548700165,'history_uint')]
24889:20190204:215925.435 cannot drop chunks for history_uint
24889:20190204:215925.435 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: function drop_chunks(integer, unknown) does not exist
LINE 1: SELECT drop_chunks(1517768965,'trends')
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
[SELECT drop_chunks(1517768965,'trends')]
24889:20190204:215925.435 cannot drop chunks for trends
24889:20190204:215925.435 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: function drop_chunks(integer, unknown) does not exist
LINE 1: SELECT drop_chunks(1517768965,'trends_uint')
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
[SELECT drop_chunks(1517768965,'trends_uint')]
24889:20190204:215925.435 cannot drop chunks for trends_uint
24889:20190204:215925.610 housekeeper [deleted 0 hist/trends, 26 items/triggers, 659 events, 332 problems, 294 sessions, 0 alarms, 294 audit items in 0.179683 sec, idle for 1 hour(s)]

My env:

zabbix_server --version
zabbix_server (Zabbix) 4.2.0alpha3

sudo -u postgres psql <<< "select version();"
---------------------------------------------------------------------------------------------------------
PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

uname -a
Linux LCT-MON-Zabbix 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

yum list timescaledb_11
Installed Packages
timescaledb_11.x86_64                   1.1.1-1.rhel7                                                                              




 Comments   
Comment by Arturs Lontons [ 2019 Feb 05 ]

Hi,
Thank you for reporting the issue.

Did you upgrade to 4.2.0 version or is it a fresh 4.2.0 install?

Comment by SuNova [ 2019 Feb 05 ]

Greetings,

I've upgraded from 4.0.2 using the timescaledb.sql you've included in your source and documented here.

Comment by Aigars Kadikis [ 2019 Feb 08 ]

Hello SuNova,

Does this error message make the zabbix-server daemon completely unusable?

Could you try the same upgrade steps on a "fresh" Zabbix setup without customisation, or describe your update procedure here in more detail? What did you actually execute?

Probably the most important step: did you stop the backend and frontend while executing:

cat timescaledb.sql | sudo -u zabbix psql zabbix

Was the output error-free?

P.S. I've tried updating 4.0.4 to 4.2.0alpha3, enabling the TimescaleDB feature at the end. It succeeded, though with only one host and almost no historical data, on PostgreSQL 11.1.

Comment by SuNova [ 2019 Feb 08 ]

Hello,

The Zabbix daemon is not unusable after this error; it only affects the housekeeper.

Unfortunately, a fresh install is not an option here, but here is the (almost) exact procedure:
First I upgraded from PostgreSQL 9.x to 11.1 by moving from the official CentOS repos to the official PostgreSQL repos, with some downtime: I exported and then imported all the data between the two PostgreSQL major versions, and then brought zabbix_server and the frontend back online. This was a completely error-free procedure; I didn't notice even a single error.

After that, I upgraded Zabbix by installing the relevant packages from repo.zabbix.com directly via yum. It was successful.

Then I tried to create the extension in the zabbix database as the postgres superuser.
I ran into an error because I had only installed the `timescaledb-postgresql-11.x86_64` package, but `timescaledb_11.x86_64` was also needed. This isn't documented in the TimescaleDB setup documentation that your migration documentation links to, and there is no dependency between these two packages (strange!). After installing `timescaledb_11.x86_64`, creating the extension succeeded.

And then, yes, I brought zabbix_server down, executed the script you've provided, and started it again.

It was working normally (except for one stacktrace in the `zabbix.php?action=problem.view` section, which I was also getting before the migration and forgot to submit a bug report for).

Right now I have no problem with my history data, just with housekeeping. Maybe this can help:

select * from timescaledb_information.hypertable;

<?xml version="1.0" encoding="UTF-8"?>
<data>
<row>
<table_schema>zabbix</table_schema>
<table_name>history</table_name>
<table_owner>zabbix</table_owner>
<num_dimensions>1</num_dimensions>
<num_chunks>6</num_chunks>
<table_size>5223 MB</table_size>
<index_size>9102 MB</index_size>
<toast_size>NULL</toast_size>
<total_size>14 GB</total_size>
</row>
<row>
<table_schema>zabbix</table_schema>
<table_name>history_str</table_name>
<table_owner>zabbix</table_owner>
<num_dimensions>1</num_dimensions>
<num_chunks>0</num_chunks>
<table_size>NULL</table_size>
<index_size>NULL</index_size>
<toast_size>NULL</toast_size>
<total_size>NULL</total_size>
</row>
<row>
<table_schema>zabbix</table_schema>
<table_name>history_log</table_name>
<table_owner>zabbix</table_owner>
<num_dimensions>1</num_dimensions>
<num_chunks>0</num_chunks>
<table_size>NULL</table_size>
<index_size>NULL</index_size>
<toast_size>NULL</toast_size>
<total_size>NULL</total_size>
</row>
<row>
<table_schema>zabbix</table_schema>
<table_name>history_text</table_name>
<table_owner>zabbix</table_owner>
<num_dimensions>1</num_dimensions>
<num_chunks>0</num_chunks>
<table_size>NULL</table_size>
<index_size>NULL</index_size>
<toast_size>NULL</toast_size>
<total_size>NULL</total_size>
</row>
<row>
<table_schema>zabbix</table_schema>
<table_name>history_uint</table_name>
<table_owner>zabbix</table_owner>
<num_dimensions>1</num_dimensions>
<num_chunks>4</num_chunks>
<table_size>2482 MB</table_size>
<index_size>4340 MB</index_size>
<toast_size>NULL</toast_size>
<total_size>6822 MB</total_size>
</row>
<row>
<table_schema>zabbix</table_schema>
<table_name>trends</table_name>
<table_owner>zabbix</table_owner>
<num_dimensions>1</num_dimensions>
<num_chunks>364</num_chunks>
<table_size>686 MB</table_size>
<index_size>496 MB</index_size>
<toast_size>NULL</toast_size>
<total_size>1181 MB</total_size>
</row>
<row>
<table_schema>zabbix</table_schema>
<table_name>trends_uint</table_name>
<table_owner>zabbix</table_owner>
<num_dimensions>1</num_dimensions>
<num_chunks>364</num_chunks>
<table_size>772 MB</table_size>
<index_size>518 MB</index_size>
<toast_size>NULL</toast_size>
<total_size>1290 MB</total_size>
</row>
</data>

Tell me what more info I should provide for you.

Best regards.

Comment by SuNova [ 2019 Feb 08 ]

It's worth mentioning that I store textual history data in Elasticsearch.

Comment by Aigars Kadikis [ 2019 Feb 11 ]

Hello SuNova,

I've created raw history data over multiple hours and then shrank the history retention period so the housekeeper would have a job to do.

The housekeeper did touch the data and removed some values. The output was:

3268:20190211:152143.984 housekeeper [deleted 204 hist/trends, 0 items/triggers, 0 events, 0 problems, 0 sessions, 0 alarms, 0 audit items in 0.149562 sec, idle for 1 hour(s)]

Please share your global housekeeper configuration.

Where did you obtain the timescaledb.sql file before loading it into the database?

Comment by Aigars Kadikis [ 2019 Feb 11 ]

We had some outdated links (until February 8) at

https://www.zabbix.com/documentation/4.2/manual/appendix/install/timescaledb

When you performed a setup, you probably installed a package:

# from https://docs.timescale.com/v1.1/getting-started/installation/rhel-centos/installation-yum
sudo yum install -y https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm

Now it provides a different package:

# from https://docs.timescale.com/getting-started/installation/rhel-centos/installation-yum
sudo yum install -y https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm

Could you recheck which repo you actually installed/used?

Comment by SuNova [ 2019 Feb 11 ]

Hello Aigars,

I've obtained timescaledb.sql from the source code: https://sourceforge.net/projects/zabbix/files/ZABBIX%20Latest%20Development/4.2.0alpha3/zabbix-4.2.0alpha3.tar.gz/download

Here is my config:

I can access the zabbix database directly. If you need anything, just tell me.

 

Comment by SuNova [ 2019 Feb 11 ]
sudo yum install -y https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
Loaded plugins: fastestmirror
pgdg-centos11-11-2.noarch.rpm | 4.8 kB 00:00:00
Examining /var/tmp/yum-root-J9qZ1A/pgdg-centos11-11-2.noarch.rpm: pgdg-centos11-11-2.noarch
/var/tmp/yum-root-J9qZ1A/pgdg-centos11-11-2.noarch.rpm: does not update installed package.
Error: Nothing to do
# yum list pgdg-centos*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.plusweb.com.tr
* epel: mirror.hoster.kz
* extras: ftp.linux.org.tr
* ius: ftp.upcnet.ro
* updates: mirror.plusweb.com.tr
Installed Packages
pgdg-centos11.noarch 11-2 installed
Comment by SuNova [ 2019 Feb 11 ]

This is what I ran on the server (at the time of migration):

cat /root/timescaledb.sql
SELECT create_hypertable('history', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT set_adaptive_chunking('history', chunk_target_size => 'estimate');
SELECT create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT set_adaptive_chunking('history_uint', chunk_target_size => 'estimate');
SELECT create_hypertable('history_log', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT set_adaptive_chunking('history_log', chunk_target_size => 'estimate');
SELECT create_hypertable('history_text', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT set_adaptive_chunking('history_text', chunk_target_size => 'estimate');
SELECT create_hypertable('history_str', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT set_adaptive_chunking('history_str', chunk_target_size => 'estimate');
SELECT create_hypertable('trends', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT set_adaptive_chunking('trends', chunk_target_size => 'estimate');
SELECT create_hypertable('trends_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT set_adaptive_chunking('trends_uint', chunk_target_size => 'estimate');
UPDATE config SET db_extension='timescaledb',hk_history_global=1,hk_trends_global=1;
Comment by SuNova [ 2019 Feb 17 ]

This is what `\df` provides for me:

 Schema |    Name     | Result data type | Argument data types                                                      | Type
--------+-------------+------------------+--------------------------------------------------------------------------+------
 public | drop_chunks | SETOF regclass   | older_than "any" DEFAULT NULL::unknown, table_name name DEFAULT NULL::name, schema_name name DEFAULT NULL::name, cascade boolean DEFAULT false, newer_than "any" DEFAULT NULL::unknown, "verbose" boolean DEFAULT false | func
Comment by SuNova [ 2019 Feb 17 ]

I've found the actual cause: the extension is created in the public schema (by default), while my installation is in the zabbix schema (both in the same database).

I think search_path is not being applied to the queries the housekeeper issues.

Comment by Aigars Kadikis [ 2019 Mar 06 ]

Thank you SuNova for coming back.

I assume, then, that the right way to install TimescaleDB support for PostgreSQL 11 is by issuing the command:

cat timescaledb.sql | sudo -u zabbix psql zabbix

which will prepare the extension only for the 'zabbix' database.

Closing this case now.

Comment by SuNova [ 2019 Mar 06 ]

I don't think the issue is resolved yet.

I installed timescaledb.sql on the 'zabbix' database, not any other one, but the 'zabbix' database has two schemas ('zabbix' and 'public'), and timescaledb is installed in the public schema by default. That's why the query I quoted from zabbix_server.log says drop_chunks is unknown: it is defined in the public schema, not the zabbix schema, even though both schemas are in the same database. This comes down to the search_path concept in PostgreSQL, and I think the housekeeper is not finding functions defined in the 'public' schema.
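
A quick way to confirm which schema the function landed in is to query the catalogs directly (a sketch, assuming a stock PostgreSQL 11 with TimescaleDB 1.x; the timestamps are the ones from the log above):

```sql
-- Where does drop_chunks live? On an affected system this returns 'public'.
SELECT n.nspname AS schema, p.proname AS function
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.proname = 'drop_chunks';

-- Zabbix pins the session to its own schema on connect, which replaces
-- search_path, so the housekeeper's unqualified call fails:
SET SCHEMA 'zabbix';
SELECT drop_chunks(1548700165, 'history');          -- ERROR: does not exist
-- ...while a schema-qualified call still resolves:
SELECT public.drop_chunks(1548700165, 'history');
```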

Comment by SuNova [ 2019 Mar 18 ]

I'm sorry, but I still have this problem. Why is it labeled Fixed?

Comment by Erhan ERTUL [ 2019 May 08 ]

I had the same issue on my side. It was resolved after I moved back to the public schema. It only works if you're running under the public schema.

Is there any other option to run this on another schema? Thanks.

Comment by Ronald Rood [ 2019 Oct 14 ]

I just installed a fresh Zabbix 4.4 with TimescaleDB and also have this issue.

Zabbix is in the zabbix schema and drop_chunks landed in the public schema. I think the server should be able to see both schemas. Yes, I could move the Zabbix tables back to the public schema, but I'd rather keep them in their own schema.

Comment by Alan McAlexander [ 2019 Oct 16 ]

I kept running into the same issue until I visited this page and scrolled down a bit: https://www.zabbix.com/documentation/4.4/manual/appendix/install/db_scripts#postgresql

In particular, the TimescaleDB section: https://www.zabbix.com/documentation/4.4/manual/appendix/install/db_scripts#timescaledb

Run create.sql.gz, then the echo command below, THEN you'll be able to run either zcat or cat on the timescaledb.sql file.

The TimescaleDB extension must also be enabled for the specific DB by executing:
echo "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;" | sudo -u postgres psql zabbix

Comment by dimir [ 2020 Feb 27 ]

aigars.kadikis, since nothing was done in the code, the issue cannot be marked as "Fixed". Re-opening.

Comment by Alex Kalimulin [ 2020 Mar 02 ]

By default, TimescaleDB stores all of its functions in the public schema. Zabbix cannot access TSDB functions (such as drop_chunks()) if DBSchema is set to some non-default value in the server config. This is because Zabbix uses SET SCHEMA during connect rather than search_path. To use TSDB in a non-default schema, one must add the TSDB extension to the database this way:

CREATE EXTENSION IF NOT EXISTS timescaledb SCHEMA customschema CASCADE;

So it looks like it's a documentation issue.
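
The SET SCHEMA behaviour described above can be seen from psql (a sketch; 'customschema' is a placeholder):

```sql
-- SET SCHEMA 'x' is equivalent to SET search_path TO "x": the implicit
-- 'public' entry disappears, so functions installed in public stop
-- resolving when called unqualified.
SET SCHEMA 'customschema';
SHOW search_path;   -- customschema

-- With CREATE EXTENSION ... SCHEMA customschema, the TSDB functions live
-- in the same schema the session is pinned to, so unqualified calls
-- such as drop_chunks(...) resolve again.
```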

Comment by SuNova [ 2020 Mar 05 ]

Alex,

As far as I remember, I submitted this issue back when installing TimescaleDB on a custom schema wasn't possible.

Here is the link to the GitHub issue, which seems to be fixed:

https://github.com/timescale/timescaledb/issues/554

However, I haven't tested this capability, since I'm away from monitoring tasks for now.

 

Comment by Alex Kalimulin [ 2020 Mar 05 ]

SuNova:

$ echo 'CREATE EXTENSION timescaledb CASCADE;' | sudo -u postgres psql demo
WARNING:  
WELCOME TO

... skip ...

CREATE EXTENSION

$ psql demo
psql (10.12 (Ubuntu 10.12-0ubuntu0.18.04.1))
Type "help" for help.

demo=> create schema demo;
CREATE SCHEMA
demo=> set schema 'demo';
SET
demo=> create table demotable (clock integer, value integer);
CREATE TABLE
demo=> select create_hypertable('demotable','clock',chunk_time_interval => 123);
ERROR:  function create_hypertable(unknown, unknown, chunk_time_interval => integer) does not exist
LINE 1: select create_hypertable('demotable','clock',chunk_time_inte...
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
demo=> \q

$ echo 'DROP EXTENSION timescaledb CASCADE;' | sudo -u postgres psql demo
DROP EXTENSION
$ echo 'CREATE EXTENSION timescaledb SCHEMA demo CASCADE;' | sudo -u postgres psql demo
WARNING:  
WELCOME TO

... skip ...

CREATE EXTENSION
$ psql demo
psql (10.12 (Ubuntu 10.12-0ubuntu0.18.04.1))
Type "help" for help.

demo=> set schema 'demo';
SET
demo=> select create_hypertable('demotable','clock',chunk_time_interval => 123);
NOTICE:  adding not-null constraint to column "clock"
DETAIL:  Time dimensions cannot have NULL values
  create_hypertable   
----------------------
 (1,demo,demotable,t)
(1 row)

Comment by Cory Rankin [ 2020 Mar 06 ]

How would I fix this properly on a system that was already migrated using the previous instructions? Running the instructions again just reports that the extension already exists (in the public schema).

zabbix=# CREATE EXTENSION IF NOT EXISTS timescaledb SCHEMA zabbix CASCADE;
NOTICE:  extension "timescaledb" already exists, skipping
CREATE EXTENSION

[Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR:  function drop_chunks(integer, unknown) does not exist

For now I have manually moved the drop_chunks function from the public schema to the zabbix schema and it works, but I would like to do this the correct way so as not to cause myself problems in the future.

 

Comment by Alex Kalimulin [ 2020 Mar 10 ]

schmug, the only way I see to correct it is to DROP EXTENSION and then recreate it in the database with the SCHEMA option. But you can only drop it if you specify the CASCADE option, which removes your hypertable data. So before dropping, you need to export your history and trends, then recreate the extension in the proper schema, then run timescaledb.sql again, and import your previously saved history and trends.
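
A rough sketch of that procedure (the paths and table list are assumptions; take a full backup first, and note that DROP EXTENSION ... CASCADE drops the hypertables themselves, so the plain tables must be restored before re-running timescaledb.sql):

```shell
# 1. Export history and trends (pg_dump per table keeps DDL + data)
for t in history history_uint history_str history_log history_text trends trends_uint; do
    sudo -u postgres pg_dump --table="$t" zabbix > "/tmp/$t.sql"
done

# 2. Drop the extension; CASCADE removes the hypertables and their chunks
echo "DROP EXTENSION timescaledb CASCADE;" | sudo -u postgres psql zabbix

# 3. Recreate it in the schema Zabbix actually uses (DBSchema)
echo "CREATE EXTENSION timescaledb SCHEMA zabbix CASCADE;" | sudo -u postgres psql zabbix

# 4. Restore the plain tables and their data
for t in history history_uint history_str history_log history_text trends trends_uint; do
    sudo -u postgres psql zabbix < "/tmp/$t.sql"
done

# 5. Re-run the hypertable conversion shipped with Zabbix
# (migrate_data => true converts the restored rows into chunks)
cat timescaledb.sql | sudo -u zabbix psql zabbix
```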

Comment by Cory Rankin [ 2020 Mar 10 ]

Alex,

That's what I feared, thanks. I can confirm that after moving the function to the zabbix schema, the housekeeper is working as expected (verified after increasing the housekeeper debug level for now).

Comment by Alexander Vladishev [ 2020 Mar 16 ]

Updated documentation:

Generated at Fri Apr 26 08:45:44 EEST 2024 using Jira 9.12.4#9120004-sha1:625303b708afdb767e17cb2838290c41888e9ff0.