High-latency links can cause significant problems when a proxy that receives large amounts of data tries to send that data on to the Zabbix server. Here's an example:
1. I have a Zabbix server located in Texas, USA.
2. I have a Zabbix proxy (virtual machine - 2 CPU 4GB RAM, sqlite3) located in Texas, USA.
3. I have another Zabbix proxy (virtual machine - 4 CPU 8GB RAM, tried with sqlite3 and mysql) located in Singapore.
With testing, I found that the proxy in Texas can support at least 1500 nvps. I didn't have any more monitoring to throw at it, but it was able to handle the incoming data as well as send it to the Zabbix server without any backlog.
The proxy in Singapore could receive large amounts of data for its region (it was collecting 500 nvps), but it was unable to send data to the Zabbix server fast enough to prevent a backlog. With a 229ms latency between it and the server in Texas, it was only able to send 1000 values (the current hardcoded maximum per connection) to the Zabbix server every 2-3 seconds. Inspecting a packet dump showed what was happening. The high latency was slowing the transfer for two reasons:
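The arithmetic behind the backlog is simple. Using the numbers above (500 nvps in, 1000 values sent every ~2.5 seconds), a quick sketch shows the proxy falls behind by roughly 100 values per second:

```python
# Rough throughput math for the Singapore proxy, using the figures above.
incoming_nvps = 500        # values collected per second
batch_size = 1000          # current hardcoded per-connection maximum
seconds_per_batch = 2.5    # observed: one batch every 2-3 seconds

# Effective upload rate vs. collection rate
outgoing_nvps = batch_size / seconds_per_batch          # 400 values/second
backlog_growth_per_hour = (incoming_nvps - outgoing_nvps) * 3600

print(f"outgoing: {outgoing_nvps:.0f} vps")
print(f"backlog grows by {backlog_growth_per_hour:.0f} values/hour")
```

So even at only 500 nvps the backlog grows without bound, and no amount of proxy CPU or RAM helps, because the bottleneck is the per-connection value cap combined with the round-trip time.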
1. The SYN/SYN-ACK/ACK handshake packets each incur the full round-trip latency of the distance, so establishing every new connection is slow.
2. TCP send/receive windows cause multiple ACK round trips during the transfer, and slow start keeps the initial transfer rate low.
3. As a result of #1 and #2, and because Zabbix does not keep a persistent connection, each upload of 1000 values to the Zabbix server takes roughly 2-3 seconds.
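The handshake and slow-start costs above can be modeled with back-of-the-envelope numbers. The assumptions here are illustrative, not from the packet dump: ~100 bytes per value on the wire, 1460-byte segments, an initial congestion window of 10 segments, and window doubling each round trip (classic slow start):

```python
# Why one 1000-value upload needs multiple round trips at 229 ms RTT.
# All sizing assumptions below are illustrative estimates.
RTT = 0.229                 # seconds, Texas <-> Singapore
payload = 1000 * 100        # ~100 bytes per value -> one 1000-value batch
mss, cwnd = 1460, 10        # segment size, initial window (in segments)

rtts = 1                    # the three-way handshake costs one round trip
sent = 0
while sent < payload:
    sent += cwnd * mss      # one congestion window of data per round trip
    cwnd *= 2               # slow start doubles the window each round trip
    rtts += 1

print(f"~{rtts} round trips, ~{rtts * RTT:.1f} s minimum per batch")
```

This is a lower bound: it ignores connection teardown, proxy-side database reads, and server-side processing, which is why the observed time per batch (2-3 s) is higher still. The key point is that most of the cost is per-connection overhead that gets paid again for every 1000-value batch.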
What I propose here (and Richlv mentioned in IRC) is a configuration option for the number of values the Zabbix proxy will send in one connection.
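For illustration, such an option might look like this in zabbix_proxy.conf. The parameter name and default below are hypothetical, not an existing Zabbix setting:

```
### Option: DataSenderBatchSize (hypothetical, for illustration only)
#       Maximum number of values sent to the Zabbix server in one connection.
#       The current behavior corresponds to a fixed value of 1000.
#
# DataSenderBatchSize=10000
```

A larger batch amortizes the per-connection handshake and slow-start cost over many more values, which matters most on exactly the kind of high-latency link described above.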
Another thing (which would be far more difficult to implement, I'm sure) would be a persistent connection from the proxy to the Zabbix server, but that should be another ticket.