
Responses inline. On 30/08/2021 17:45, Ralph Goers wrote:
> As I am pretty sure you are aware, we’ve been doing extensive testing at Log4j using these performance tests and have come to the conclusion that the way you have presented these results is terribly misleading. What they show is that Logback’s FileAppender currently performs better than Log4j 2’s (we are working on that). These tests show nothing in the way of asynchronous performance comparison, since the queues/ring buffers are always full and the overhead of having to go through the queue or ring buffer is insignificant compared to the overhead of the synchronous logging.
While there may be better benchmarks, the benchmarks presented at [1] and [2] clearly show the impact of asynchronous logging when the buffers are overloaded. While I agree that FileAppenderBenchmark and AsyncFileAppenderBenchmark in [2] constitute a worst-case analysis (and do not represent the nominal use case), saying that the overhead of going through a circular buffer is insignificant seems quite counterfactual. The impact of going through the ring buffer/queue is easily measurable and very far from insignificant. Perhaps you mean insignificant compared to geo-political events, or at the scale of the universe? In any case, the numbers are there to be observed by anyone who cares to look.

[1] http://logback.qos.ch/performance.html
[2] https://github.com/ceki/logback-perf

As for the pertinence of FileAppenderBenchmark and AsyncFileAppenderBenchmark: even when the queue is not overloaded, there will be some contention when accessing it. The aforementioned benchmarks show this cost and are therefore quite meaningful in my opinion. Personally, I was quite surprised by:

1) the disappointing performance of the LMAX Disruptor;
2) the non-uniform impact of running the benchmark under a hypervisor/virtual CPU.

I should also add that the benchmark was first and foremost designed to improve the asynchronous logging performance in logback. Believe it or not, comparing it with log4j 1 and log4j 2 came as an afterthought.
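The handoff cost discussed above can be illustrated with a minimal, self-contained sketch using plain java.util.concurrent (this is not the JMH-based logback-perf harness; the class and event count are illustrative only). The producer pays the enqueue cost on every event, exactly the path an async appender adds in front of the actual write:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoffSketch {
    static final int EVENTS = 100_000;

    // Cost of handing EVENTS strings to a worker through a bounded queue,
    // mimicking an async appender whose worker thread keeps up with the producer.
    static long queuedHandoffNanos() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        Thread worker = new Thread(() -> {
            try {
                for (int i = 0; i < EVENTS; i++) {
                    queue.take(); // a real appender would format and write here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        long t0 = System.nanoTime();
        for (int i = 0; i < EVENTS; i++) {
            queue.put("event-" + i); // producer pays the enqueue/contention cost
        }
        worker.join();
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("ns/event through queue: " + queuedHandoffNanos() / EVENTS);
    }
}
```

A toy timing loop like this is noisy compared to JMH, but it suffices to show that the per-event queue traversal has a measurable, non-zero cost even with a single producer.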
> While it is fine for you to claim better performance for the file appender in the specific releases you are testing, I would ask that you change the page so that it does not pretend to compare the performance of asynchronous operations, as it doesn’t do that. To really test the maximum speed of asynchronous operations, you would need to modify the test so that the synchronous operation can complete in less time than it takes to enqueue an event, so that the queues don’t fill up.
I am sorry that you feel that [1] is not representative of asynchronous logging performance. As explained above, I feel rather differently. While I will not be ordered around, I remain open to suggestions, including alternative ways of benchmarking.
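For what it is worth, the un-saturated regime Ralph describes can be checked with a plain BlockingQueue sketch (illustrative names, not actual benchmark code): keep the consumer cheap enough that the queue never fills, so the producer's loop measures pure enqueue cost rather than back-pressure from a full buffer. Watching the maximum occupancy tells you which regime the run was actually in:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class UnsaturatedQueueSketch {
    static final int EVENTS = 50_000;

    // Returns the maximum queue occupancy the producer observed.
    // With a no-op consumer that drains faster than the producer fills,
    // occupancy should stay far below capacity, meaning put() never blocks
    // and any timing around this loop reflects the enqueue path alone.
    static int maxOccupancy() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        Thread worker = new Thread(() -> {
            try {
                for (int i = 0; i < EVENTS; i++) {
                    queue.take(); // no-op consumer: discard immediately
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        int max = 0;
        for (int i = 0; i < EVENTS; i++) {
            queue.put("event-" + i);
            max = Math.max(max, queue.size()); // snapshot after each enqueue
        }
        worker.join();
        return max;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("max occupancy: " + maxOccupancy());
    }
}
```

Whether such a run or the saturated one is the more meaningful comparison is, of course, exactly the point under debate here.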
> Also, I noticed that you have configured Logback’s FileAppender with a 256KB buffer but left Log4j 2’s appender at its default of 8KB.
This is a fair point. I have modified the configuration files [3] and will run the benchmarks again.

[3] https://github.com/ceki/logback-perf/commit/9736a37f76492b
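One way to align the two configurations looks like the following. This is a sketch based on the standard bufferSize settings of logback's FileAppender and log4j 2's File appender; the file names and pattern layouts are illustrative, and this is not necessarily what commit [3] contains:

```xml
<!-- logback: explicit 256KB buffer via the FileAppender bufferSize property -->
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>perf.log</file>
  <immediateFlush>false</immediateFlush>
  <bufferSize>256KB</bufferSize>
  <encoder>
    <pattern>%msg%n</pattern>
  </encoder>
</appender>

<!-- log4j 2: raise bufferSize (in bytes) from the 8192 default to match -->
<File name="FILE" fileName="perf.log" bufferedIO="true"
      bufferSize="262144" immediateFlush="false">
  <PatternLayout pattern="%m%n"/>
</File>
```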
> By the way, it is good to see you back working on the projects again.
Thank you.

--
Ceki