[ZBX-22492] "Error: log exceeds the maximum size of 8388608 bytes." logged Created: 2023 Mar 11  Updated: 2024 Apr 10  Resolved: 2023 Apr 17

Status: Closed
Project: ZABBIX BUGS AND ISSUES
Component/s: Server (S)
Affects Version/s: 6.0.14, 6.4.0, 7.0.0alpha1
Fix Version/s: 5.0.34rc1, 6.0.17rc1, 6.4.2rc1, 7.0.0alpha1, 7.0 (plan)

Type: Problem report Priority: Critical
Reporter: Rudolf Kastl Assignee: Dmitrijs Goloscapovs
Resolution: Fixed Votes: 7
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

rhel 8 / zabbix,proxy,agent 6.4.0 / openshift 4.10 / kubernetes 6.4 templates


Attachments: Text File 0001-.S.-ZBX-22492-removed-limitation-for-size-of-logged-.patch     PNG File HugeMemoryUsageOfProxy.png     JPEG File MaximumSizeBytes.JPG     PNG File image-2023-05-02-10-43-50-362.png    
Issue Links:
Causes
caused by ZBX-22589 Allocation of large memory chunks and... Closed
Duplicate
is duplicated by ZBX-22496 Zabbix 6.4 proxy 6.4 agent 6.4 opensh... Closed
is duplicated by ZBX-22614 Asterisk AMI integration stopped afte... Closed
is duplicated by ZBX-22876 Kubernetes memory leak Closed
Team: Team A
Sprint: Sprint 98 (Mar 2023), Sprint 99 (Apr 2023)
Story Points: 0.25

 Description   

Steps to reproduce:

  1. Use the latest Kubernetes 6.4 server state template
  2. Configure it to monitor OpenShift 4.10
  3. Go to the discovery rules of the server state template dummy host

Result:

Red exclamation marks appear intermittently, mostly on the kube-state-metrics-based discovery entries, with the error message:

Cannot execute script: Error: log exceeds the maximum size of 8388608 bytes.
at [anon] (zabbix.c:83) internal
at [anon] () native strict preventsyield
at [anon] (function:106) preventsyield


 Comments   
Comment by Rudolf Kastl [ 2023 Mar 11 ]

Update:

The error message keeps appearing for ALL discovery items except readyz and livez.

Comment by Edgar Akhmetshin [ 2023 Mar 13 ]

Hello Rudolf,

This limit can be modified:
https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/src/libs/zbxembed/embed.h?at=refs%2Fheads%2Frelease%2F6.4#26

Default is 8MB:

#define ZBX_ES_LOG_MEMORY_LIMIT	(ZBX_MEBIBYTE * 8)
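/* illustrative local patch only: e.g. (ZBX_MEBIBYTE * 16) would double the limit to 16 MiB */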

Confirming for the DEV team to decide whether it can be safely increased in the upstream code.

Regards,
Edgar

Comment by Rudolf Kastl [ 2023 Mar 13 ]

Thank you very much Edgar!

Depending on the feedback of the DEV team we might consider patching our local build. I hope that the report was useful to you!

Comment by Alessandro Lombardi [ 2023 Mar 21 ]

Same issue with 6.0.14 on my side with a K8s cluster.

Comment by Jeudiel Guerrero [ 2023 Mar 28 ]

Same with Azure HTTP; sometimes the same error appears. Zabbix 6.4.

Comment by Łukasz Sęk [ 2023 Mar 28 ]

I have the same problem with Azure HTTP on Zabbix 6.4.0.

Comment by Vladislavs Sokurenko [ 2023 Mar 29 ]

This should be fixed; the workaround is to comment out all Zabbix.log calls in the JavaScript.
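For illustration, a minimal sketch (hypothetical script, not taken from the actual template) of what this workaround looks like in a template's JavaScript step:

// Hypothetical example of the workaround: comment out every Zabbix.log() call
// in the template's JavaScript (webhook, script item or preprocessing step)
// so the embedded engine's 8 MiB script log buffer is never filled.
var input = JSON.parse(value);

// Zabbix.log(4, 'received payload: ' + value);                    // <-- comment out

var out = [];
for (var i = 0; i < input.items.length; i++) {
    // Zabbix.log(4, 'item: ' + JSON.stringify(input.items[i]));   // <-- comment out
    out.push(input.items[i]);
}

return JSON.stringify(out);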

Comment by Łukasz Sęk [ 2023 Mar 30 ]

@Vladislavs - after commenting out the Zabbix.log calls in the JavaScript I get another error: Error: maximum count of HttpRequest objects was reached

Comment by Vladislavs Sokurenko [ 2023 Mar 30 ]

Unfortunately there is no workaround for that, seniu123, but it is fixed under ZBX-22490

Comment by Łukasz Sęk [ 2023 Mar 30 ]

Is there any information on when the next Zabbix version with these patches is planned to be released?

Comment by Peter Krall [ 2023 Apr 03 ]

@edgar.akhmetshin, can you also please add version 6.2.9 to the fix versions, since you closed my #22614 as a duplicate of this one?

Comment by Sergei Grigorev [ 2023 Apr 04 ]

Same problem with AKS

Comment by Dmitrijs Goloscapovs [ 2023 Apr 17 ]

Available in versions: 5.0.34rc1, 6.0.17rc1, 6.4.2rc1, 7.0.0alpha1.

Comment by FABRICIO DA SILVA PEREIRA Pereira [ 2023 May 02 ]

Comment by Steve [ 2023 May 04 ]

Looking for a fix in the 6.2.x chain as well! Solved for now by commenting out the log statements, as we have a huge AWS EKS cluster.

Comment by Sergei Grigorev [ 2023 May 04 ]

Please update the Helm chart to a new version; 6.4.1 has the same issue even though the server has already been upgraded.

Comment by Mika Tiainen [ 2023 May 08 ]

I still get this error after upgrading to 6.0.17 (zabbix.com Ubuntu package):

634103:20230508:102120.992 Starting Zabbix Server. Zabbix 6.0.17 (revision c81d82859a8).
....
634126:20230508:102203.059 error reason for "XXX nodes:kube.nodes.lld" changed: Preprocessing failed for: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"16904533"},"items":[{"metadat...
1. Failed: Error: log exceeds the maximum size of 8388608 bytes.
    at [anon] (zabbix.c:83) internal
    at [anon] () native strict preventsyield
    at [anon] (function:85) preventsyield

Update: This was because zabbix-proxy inside Kubernetes was still 6.0.16. Updating that also fixes the issue.

Comment by Samuele Bianchi [ 2023 May 22 ]

Hi,

I'm having the same problem even after the upgrade to 6.4.2.

I'm using OpenShift Container Platform stable 4.10.
I have used the Helm chart for Zabbix 6.4, version 1.2.3 (zabbix-helm-chrt-1.2.3.tgz).

 

Why does 1.2.5 set the Zabbix proxy version to 6.0? (https://cdn.zabbix.com/zabbix/integrations/kubernetes-helm/6.4/zabbix-helm-chrt-1.2.3.tgz)

Comment by Sergei Grigorev [ 2023 Jun 08 ]

The problem still exists in version 6.4.3.

Cannot execute script: Error: log exceeds the maximum size of 8388608 bytes.
at [anon] (zabbix.c:83) internal
at [anon] () native strict preventsyield
at [anon] (function:239) preventsyield

 

UPD:

My bad, after re-linking the template everything works.

Comment by Oleksii Zagorskyi [ 2023 Jun 08 ]

Make sure all your proxies have been upgraded too.

Comment by Alessandro Lombardi [ 2023 Jun 08 ]

The problem still exists if you have Zabbix server, Zabbix proxies and Zabbix agents on 6.0.18, but only with the updated cluster state template and only for the kube-state-metrics item.

The proxy with the new template is also using a huge amount of resources and goes OOM.

Comment by Samuele Bianchi [ 2023 Jun 09 ]

Hi,

I updated the Helm chart two days ago. I'm using version 1.2.4, with Zabbix version 6.4.3, and everything is working correctly.
I have seen today that two other charts are available... 1.3.0 and 1.3.1 (https://cdn.zabbix.com/zabbix/integrations/kubernetes-helm/6.4/).
The changelog is available here: https://git.zabbix.com/projects/ZT/repos/kubernetes-helm/browse?at=refs%2Fheads%2Fmaster

For me everything is working well now. I only had to change a macro on the hosts that use the "Kubernetes nodes by HTTP" template.
The macro is {$KUBE.API.ENDPOINT.URL}, which now MUST BE https://api.xxx.xxx:6443 instead of https://api.xxx.xx:6443/api

  (so remove the "/api" at the end), otherwise the pod discovery rules don't work.
My OpenShift cluster is build 4.10.47 (stable).

I'll test the latest Helm chart next week. The major change seems to be the updated kube-state-metrics pod.
Stay tuned and fingers crossed!

ADDENDUM: the memory usage of the proxy has now grown too much: from about 400-500 MB it is now about 6800 MB.

Comment by Dimitri Bellini [ 2023 Jun 29 ]

Hi DevTeam,
As far as I can see, and also from our tests, the problem still seems to be present on Zabbix Server 6.0.17.
Is there something we can do to troubleshoot it?
Thanks so much

Comment by Samuele Bianchi [ 2023 Jun 29 ]

I have updated the chart to the latest available... 1.3.2, which uses the updated kube-state-metrics, but... the memory used by the Zabbix proxy is too high!
After 3 hours of proxy uptime it has already reached 7 GB of memory usage...
Yesterday it reached 18 GB of memory... and I deleted the pod... so after rebuilding it starts from low memory usage...
Is this a known issue? What can be done to debug it better?

Comment by Vladislavs Sokurenko [ 2023 Jun 29 ]

It can still happen with a webhook test if too many logs are being written, dimitri.bellini, is that the case?
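For illustration only (hypothetical script and numbers, not taken from the template), this is how a webhook or preprocessing script that logs every object it processes can exceed the limit on a large cluster:

// Hypothetical sketch: every Zabbix.log() call appends to the script's log buffer.
// With e.g. ~5000 objects and ~2 KB of JSON per object this loop writes roughly
// 5000 * 2048 bytes ≈ 10 MB, more than the 8388608-byte (8 MiB) limit reported here.
var items = JSON.parse(value).items;

for (var i = 0; i < items.length; i++) {
    Zabbix.log(4, 'processing: ' + JSON.stringify(items[i]));  // buffer grows with every call
}

return value;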

Comment by Dimitri Bellini [ 2023 Jun 29 ]

OK guys, for us the problem is fixed... sorry.

In our test environments, based on Rancher with Kubernetes and Zabbix Server 6.0.17, it was resolved after upgrading to the latest chart (1.3.2) and to the updated Zabbix proxy (6.0.17) inside the cluster.

So everything was related to a not-yet-updated proxy and chart.

Thanks so much

Comment by Samuele Bianchi [ 2023 Jun 30 ]

Just to be clear...
With the latest chart, this problem is solved:

Cannot execute script: Error: log exceeds the maximum size of 8388608 bytes.
at [anon] (zabbix.c:83) internal
at [anon] () native strict preventsyield
at [anon] (function:239) preventsyield

BUT the huge memory usage of the proxy is not solved for me (8 GB of memory from 8:06 AM to 11:42 AM, and it keeps going up until out of memory and the pod is terminated).
