- Problem report
- Resolution: Won't fix
- Trivial
- None
- 6.0.18
- RHEL8
- Sprint 102 (Jul 2023), Sprint 103 (Aug 2023), Sprint 104 (Sep 2023), Sprint 105 (Oct 2023), Sprint 106 (Nov 2023)
Related to ZBX-22797.
While the increase in VmRSS is no longer apparent thanks to the fix in ZBX-22797,
we can still see that memory usage grows over time when viewed in "systemctl status zabbix-agent2".
If the PostgreSQL connection is not available, we observe nearly 1 GB of growth per two weeks.
$ systemctl status zabbix-agent2
● zabbix-agent2.service - Zabbix Agent 2
   Loaded: loaded (/usr/lib/systemd/system/zabbix-agent2.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/zabbix-agent2.service.d
           └─override.conf
   Active: active (running) since Wed 2023-06-14 17:18:42 JST; 1 weeks 2 days ago
 Main PID: 1201 (zabbix_agent2)
    Tasks: 17 (limit: 838780)
   Memory: 427.1M
   CGroup: /system.slice/zabbix-agent2.service
           ├─1201 /usr/sbin/zabbix_agent2 -c /etc/zabbix/zabbix_agent2.conf
           └─2505 /usr/sbin/zabbix-agent2-plugin/zabbix-agent2-plugin-postgresql /tmp/agent.plugin.sock false
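To confirm that the figure above keeps growing, one way is to poll the same value systemd reports and log it over time. Below is a minimal sketch in Go (chosen because Agent 2 itself is written in Go); the unit name comes from this report, and the one-minute polling interval is an arbitrary choice for illustration.

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the figure that "systemctl status" shows as "Memory:"; systemd
	// exposes it as the MemoryCurrent property of the unit.
	for {
		out, err := exec.Command("systemctl", "show", "-p", "MemoryCurrent",
			"zabbix-agent2.service").Output()
		if err != nil {
			log.Printf("systemctl show failed: %v", err)
		} else {
			// Output looks like "MemoryCurrent=447807488".
			val := strings.TrimPrefix(strings.TrimSpace(string(out)), "MemoryCurrent=")
			log.Printf("zabbix-agent2.service MemoryCurrent=%s bytes", val)
		}
		time.Sleep(time.Minute)
	}
}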
Memory usage similar to what is shown in "systemctl status zabbix-agent2" can be found in the cgroup file
/sys/fs/cgroup/memory/system.slice/zabbix-agent2.service/memory.usage_in_bytes
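That file is the headline number; memory.stat in the same directory breaks it down, which helps tell anonymous memory apart from page cache charged to the service's cgroup. A rough sketch, assuming the cgroup v1 layout shown above (as on RHEL8):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path taken from the report; valid for cgroup v1 as used on RHEL8.
const cgDir = "/sys/fs/cgroup/memory/system.slice/zabbix-agent2.service"

func main() {
	// memory.usage_in_bytes is the total charged to the cgroup; memory.stat
	// shows whether "rss" (anonymous memory) or "cache" (page cache) is the
	// part that keeps growing.
	for _, name := range []string{"memory.usage_in_bytes", "memory.stat"} {
		data, err := os.ReadFile(cgDir + "/" + name)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, strings.TrimSpace(string(data)))
	}
}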
The growth is also clearly reflected in the decrease of available memory in Zabbix's Linux monitoring items (e.g. vm.memory.size[available]).
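For the host-side view, vm.memory.size[available] on reasonably recent Linux kernels is derived from MemAvailable in /proc/meminfo, so the same decline can be cross-checked there directly. A small sketch of that cross-check:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Read MemAvailable, the kernel's estimate that backs the declining
	// vm.memory.size[available] item on this host.
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kb, _ := strconv.ParseInt(fields[1], 10, 64)
			fmt.Printf("MemAvailable: %d bytes\n", kb*1024)
		}
	}
}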
However, this usage is not visible in VmRSS (ps):
USER  PID %CPU %MEM     VSZ   RSS TTY STAT START  TIME COMMAND
root 1201  0.1  0.3 1267560 31220 ?   Ssl  Jun14 22:24 /usr/sbin/zabbix_agent2 -c /etc/zabbix/zabbix_agent2.conf
root 2505  0.0  0.3 1388744 24680 ?   Sl   Jun14  8:44 /usr/sbin/zabbix-agent2-plugin/zabbix-agent2-plugin-postgresql /tmp/agent.plugin.sock false
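To quantify the gap, the processes' RSS (the value ps prints) can be summed and compared against the cgroup total. A sketch under the same path assumption; the PIDs are taken from the cgroup's cgroup.procs file rather than hard-coded:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

const cgDir = "/sys/fs/cgroup/memory/system.slice/zabbix-agent2.service"

// vmRSSBytes returns VmRSS (the figure ps reports as RSS) for one PID, in bytes.
func vmRSSBytes(pid string) int64 {
	data, err := os.ReadFile("/proc/" + pid + "/status")
	if err != nil {
		return 0
	}
	for _, line := range strings.Split(string(data), "\n") {
		f := strings.Fields(line)
		if len(f) >= 2 && f[0] == "VmRSS:" {
			kb, _ := strconv.ParseInt(f[1], 10, 64)
			return kb * 1024
		}
	}
	return 0
}

func main() {
	usage, _ := os.ReadFile(cgDir + "/memory.usage_in_bytes")
	procs, _ := os.ReadFile(cgDir + "/cgroup.procs")

	var rssTotal int64
	for _, pid := range strings.Fields(string(procs)) {
		rssTotal += vmRSSBytes(pid)
	}
	fmt.Printf("cgroup usage: %s bytes\n", strings.TrimSpace(string(usage)))
	fmt.Printf("summed VmRSS: %d bytes\n", rssTotal)
	// The difference is memory charged to the service's cgroup that never
	// shows up in per-process RSS.
}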
- duplicates: ZBX-23076 Zabbix Agent2 high memory usage (Closed)