Release of logback version 1.3.0-alpha10

Hello all,

I am happy to announce the release of logback version 1.3.0-alpha10, the best-performing logback yet. It requires slf4j-api version 2.0.0-alpha4 or later. For benchmarking figures, please see: http://logback.qos.ch/performance.html

This version supports the fluent API introduced in SLF4J 2.0.x. It also supports Jigsaw/Java 9 modularization. Joran, logback's configuration system, has been rewritten to use an internal representation model which can be processed separately. Given the breadth of the changes, support for SiftingAppender and Groovy configuration has been dropped temporarily. No new features are available in this version. Future releases will gradually introduce new features made possible by the Joran representation model.

The 1.3.x series requires Java 8 at runtime. If you wish to build logback from source, you will need Java 9. Please refer to the news page for precise details: http://logback.qos.ch/news.html

You can receive logback-related announcements by subscribing to the QOS.ch announce mailing list. To subscribe, please visit: http://www.qos.ch/mailman/listinfo/announce

You may also receive announcements via Twitter by following: https://twitter.com/qos_ch

Enjoy,
-- Ceki Gülcü
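For those curious about the fluent API mentioned above, here is a minimal sketch of the style it enables (the class name is my own example, not from the release):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class FluentApiExample {
        private static final Logger logger =
            LoggerFactory.getLogger(FluentApiExample.class);

        public static void main(String[] args) {
            // classic parameterized style, still supported
            logger.info("Temperature set to {}", 42);

            // fluent style: build up the logging event, then fire it with log()
            logger.atInfo()
                  .addKeyValue("oldT", 24)  // key/value pairs attached to the event
                  .addKeyValue("newT", 42)
                  .log("Temperature changed");
        }
    }

The classic parameterized style continues to work unchanged; the fluent builders are purely additive.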

As I am pretty sure you are aware, we’ve been doing extensive testing at Log4j using these performance tests and have come to the conclusion that the way you have presented these results is terribly misleading. What they show is that Logback’s FileAppender currently performs better than Log4j 2’s (we are working on that). These tests show nothing in the way of asynchronous performance comparison, since the queues/ring buffers are always full and the overhead of having to go through the queue or ring buffer is insignificant compared to the overhead of the synchronous logging.

While it is fine for you to claim better performance for the file appender in the specific releases you are testing, I would ask that you change the page so that it does not pretend to compare the performance of asynchronous operations, as it doesn’t do that. You would need to modify the test so that the synchronous operation can complete in less time than it takes to enqueue an event, so that the queues don’t fill up, for it to really test the max speed of asynchronous operations.

Also, I noticed that you have configured Logback’s FileAppender with a 256KB buffer but left Log4j 2’s appender at its default of 8KB.

By the way, it is good to see you back working on the projects again.

Ralph
On Aug 23, 2021, at 9:55 AM, Ceki <ceki@qos.ch> wrote:
On 23.08.2021 17:32, David Roussel wrote:
Good work Ceki. Impressive results.
Thank you David.
Please feel free to run the tests on your end to confirm the results.
-- Ceki Gülcü

Responses inline.

On 30/08/2021 17:45, Ralph Goers wrote:
As I am pretty sure you are aware, we’ve been doing extensive testing at Log4j using these performance tests and have come to the conclusion that the way you have presented these results is terribly misleading. What they show is that Logback’s FileAppender currently performs better than Log4j 2’s (we are working on that). These tests show nothing in the way of asynchronous performance comparison, since the queues/ring buffers are always full and the overhead of having to go through the queue or ring buffer is insignificant compared to the overhead of the synchronous logging.
While there might be better benchmarks, the benchmarks presented at [1] and [2] clearly show the impact of asynchronous logging in the case of overloaded buffers. While I agree that FileAppenderBenchmark and AsyncFileAppenderBenchmark in [2] constitute worst-case analysis (and do not represent the nominal use case), saying that the overhead of going through a circular buffer is insignificant seems quite counterfactual. The impact of going through the ring buffer/queue is easily measurable and very far from insignificant. Perhaps you mean insignificant compared to geopolitical events or at the scale of the universe? Anyway, the numbers are all there to be observed by all those who care to observe.

[1] http://logback.qos.ch/performance.html
[2] https://github.com/ceki/logback-perf

As for the pertinence of FileAppenderBenchmark and AsyncFileAppenderBenchmark, even in the case where the queue is not overloaded there will be some contention accessing the queue. The aforementioned benchmarks show this cost and are therefore quite meaningful in my opinion.

Personally, I was quite surprised by:

1) the disappointing performance of the LMAX disruptor.
2) the non-uniform impact of running the benchmark under a hypervisor/virtual CPU.

I should also add that the benchmark was first and foremost designed to improve the asynchronous logging performance in logback. Believe it or not, comparing it with log4j1 and log4j2 came as an afterthought.
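To make the cost of going through the queue concrete, here is a minimal, hypothetical JMH sketch (not taken from logback-perf; the class and all names are my own) measuring the throughput of handing events to a bounded queue while several producer threads contend for it:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.*;

    // Hypothetical micro-benchmark: it isolates the cost an async appender
    // adds on the producer side, namely handing an event to a bounded queue
    // while other threads compete for the same queue.
    @State(Scope.Benchmark)
    @BenchmarkMode(Mode.Throughput)
    @OutputTimeUnit(TimeUnit.MILLISECONDS) // ops/ms, as in the results quoted in this thread
    public class QueueHandoffBenchmark {

        private BlockingQueue<String> queue;
        private Thread drainer;

        @Setup
        public void setUp() {
            queue = new ArrayBlockingQueue<>(8192);
            // The consumer thread stands in for the appender's worker thread;
            // it discards events so only the queue handoff itself is measured.
            drainer = new Thread(() -> {
                try {
                    while (true) {
                        queue.take();
                    }
                } catch (InterruptedException e) {
                    // benchmark teardown: fall through and exit
                }
            });
            drainer.setDaemon(true);
            drainer.start();
        }

        @TearDown
        public void tearDown() {
            drainer.interrupt();
        }

        @Benchmark
        @Threads(8) // eight producers contending for one queue
        public boolean enqueue() {
            // offer() never blocks: when the producers outpace the drainer and
            // the queue fills up, the event is dropped, mirroring a lossy async
            // appender under saturation.
            return queue.offer("hello world");
        }
    }

Swapping offer() for put() would model a blocking async appender instead; either way, the measured throughput is dominated by queue contention rather than by file I/O.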
While it is fine for you to claim better performance for the file appender in the specific releases you are testing, I would ask that you change the page so that it does not pretend to compare the performance of asynchronous operations, as it doesn’t do that. You would need to modify the test so that the synchronous operation can complete in less time than it takes to enqueue an event, so that the queues don’t fill up, for it to really test the max speed of asynchronous operations.
I am sorry that you feel that [1] is not representative of asynchronous logging performance. As explained above, I feel rather differently. While I will not be ordered around, I remain open to suggestions, including alternative ways of benchmarking.
Also, I noticed that you have configured Logback’s FileAppender with a 256KB buffer but left Log4j 2’s appender at its default of 8KB.
This is a fair point. I have modified the configuration files [3] and will run the benchmarks again.

[3] https://github.com/ceki/logback-perf/commit/9736a37f76492b
By the way, it is good to see you back working on the projects again.
Thank you.

-- Ceki

On Aug 30, 2021, at 10:57 AM, Ceki Gülcü <ceki@qos.ch> wrote:
Responses inline.
Personally, I was quite surprised by:
1) the disappointing performance of the LMAX disruptor.
2) the non-uniform impact of running the benchmark under a hypervisor/virtual CPU.
Well, here we will have to just disagree. From my analysis you aren’t measuring the performance of the disruptor at all. When I ran it under a profiler the disruptor doesn’t even show up. It is all overhead in Log4j’s PatternLayout coupled with some of the complexities in the way I/O was done to be garbage-free. We are working on correcting that and have seen some promising results. But as long as file I/O is the limiting factor you really aren’t measuring the performance of the disruptor.

On item 2, I am not sure what you are referring to. I’ve just been running my tests on my MacBook Pro. While I do see that Logback’s FileAppender has been faster, my results for Log4j 2 look nothing like yours. FYI, here are results that I just ran on my MacBook Pro using Log4j 2.15.0-SNAPSHOT. My MacBook Pro says it has 8 cores.

8 Threads

Benchmark                                        Mode  Cnt     Score     Error  Units
AsyncWithFileAppenderBenchmark.log4j1File       thrpt   10  1409.525 ±  57.006  ops/ms
AsyncWithFileAppenderBenchmark.log4j2AsyncFile  thrpt   10  1743.430 ±  51.178  ops/ms
AsyncWithFileAppenderBenchmark.logbackFile      thrpt   10  2157.374 ±  49.966  ops/ms
FileAppenderBenchmark.log4j1File                thrpt   10   764.812 ±  13.624  ops/ms
FileAppenderBenchmark.log4j2File                thrpt   10  2281.328 ± 148.979  ops/ms
FileAppenderBenchmark.logbackFile               thrpt   10  1946.389 ±  62.855  ops/ms

16 Threads

Benchmark                                        Mode  Cnt     Score     Error  Units
AsyncWithFileAppenderBenchmark.log4j1File       thrpt   10  1228.149 ±  76.286  ops/ms
AsyncWithFileAppenderBenchmark.log4j2AsyncFile  thrpt   10  1590.441 ±  29.938  ops/ms
AsyncWithFileAppenderBenchmark.logbackFile      thrpt   10  1994.770 ± 105.827  ops/ms
FileAppenderBenchmark.log4j1File                thrpt   10   825.138 ±  18.012  ops/ms
FileAppenderBenchmark.log4j2File                thrpt   10  2231.246 ± 111.451  ops/ms
FileAppenderBenchmark.logbackFile               thrpt   10  1884.429 ±  48.463  ops/ms

My Mac apparently doesn’t do hyper-threading, so if that is what you mean I don’t have something handy to test it on. I will admit that the results above for both Log4j 1 and Log4j 2 are a bit surprising. Log4j 1’s synchronous test should be much faster. And Log4j 2’s asynchronous test shouldn’t be much different than the synchronous test. You can be sure that we are continuing to look at these until we can explain the results better.
While I will not be ordered around, I remain open to suggestions, including alternative ways of benchmarking.
I apologize if you felt like I was commanding you to do something. I certainly know better than that. What I am suggesting is that the whole section above “Comparing log4j, log4j2 and logback” is fine IMO. My issue is that when readers get to the section below, there is no mention that what is being benchmarked is asynchronous logging with the queues full. For example, “The above results show that throughput in synchronous logging is actually higher than that of asynchronous logging” would be clearer if you simply added “when the number of incoming events exceeds the speed of the appender” to the end of the sentence. Otherwise the casual reader will believe it is truly measuring just the overhead of asynchronous logging.
Also, I noticed that you have configured Logback’s FileAppender with a 256KB buffer but left Log4j 2’s appender at its default of 8KB.
This is a fair point. I have modified the configuration files [3] and will run the benchmarks again.
[3] https://github.com/ceki/logback-perf/commit/9736a37f76492b
I looked at the commit. I still don’t see bufferSize="262144" added to the configuration for Log4j 2.

Ralph
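For concreteness, matching buffer sizes in the two configuration files might look roughly like this (a hedged sketch; the appender names, file paths, and patterns are illustrative, not taken from the commit):

    <!-- logback.xml: FileAppender with an explicit 256KB buffer -->
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
      <file>target/test.log</file>
      <bufferSize>262144</bufferSize>
      <encoder>
        <pattern>%d %level %logger - %msg%n</pattern>
      </encoder>
    </appender>

    <!-- log4j2.xml: File appender raised from the 8KB default to match -->
    <File name="FILE" fileName="target/test.log"
          bufferedIO="true" bufferSize="262144">
      <PatternLayout pattern="%d %level %logger - %msg%n"/>
    </File>

On the logback side, bufferSize is a property of FileAppender; on the Log4j 2 side, the bufferSize attribute applies when bufferedIO is enabled (which it is by default).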
participants (4)

- Ceki
- Ceki Gülcü
- David Roussel
- Ralph Goers