Values of the same item with 'change' preprocessing steps must be processed serially. To ensure this, the preprocessing manager links such values in the queue and does not flush the first value until processing of the next value has started.
In this case the queue is blocked from flushing until the next value is processed. However, when processing a huge amount of cached data (for example, proxy history after some downtime), there might be thousands of other values between the first value and the next one. When retrieving the next value to process, the manager has to iterate the queue from the beginning, in the worst case performing 10,000+ iterations each time, which slows the manager down.