
Hello Joern,

I agree with Jozsef: a bounded blocking queue could be an interesting option. For me it would also be acceptable for events to be dropped when the queue fills up: we will always have the log files as a backup. See http://logging.apache.org/log4j/1.2/faq.html#1.2 (I know, it's about log4j, not LogBack). What I want can probably be done with the AsyncAppender: http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/AsyncAppender.h...

Ceki, any plans on adding an AsyncAppender to LogBack?

I have also thought about gzipping the events; with MINA this would be very easy: http://mina.apache.org/report/1.1/apidocs/org/apache/mina/filter/Compression...

Maarten
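For illustration only, here is a rough, untested sketch of the drop-when-full idea: logging threads enqueue without blocking, a single worker drains the queue and writes to the socket. The DroppingAsyncDispatcher name, the capacity of 1000 and the send() placeholder are all made up.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class DroppingAsyncDispatcher {

    // bounded queue between the application threads and the network writer
    private final BlockingQueue<Object> queue = new ArrayBlockingQueue<Object>(1000);
    private final AtomicLong dropped = new AtomicLong();

    public DroppingAsyncDispatcher() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        Object event = queue.take(); // blocks until an event is available
                        send(event);                 // the slow network write happens here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "async-dispatcher");
        worker.setDaemon(true);
        worker.start();
    }

    // called from the appender; never blocks the application thread
    public void dispatch(Object event) {
        if (!queue.offer(event)) {
            // queue full: drop the event, the log file is still the backup
            dropped.incrementAndGet();
        }
    }

    // placeholder for the actual socket write
    protected void send(Object event) {
    }
}

Because offer() never blocks, a slow or dead collector can never stall the application; the price is that events are silently lost (here only counted) once the queue is full.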
On 9/28/07, Hontvari Jozsef <hontvari3@solware.com> wrote:

The problem with asynchronous SocketAppenders is that you have essentially two options: a) you keep events in an in-memory back-buffer. This will lead to out-of-memory situations if more events are produced than transferred. At that point your app will either explode or drop events; neither is really an option. b) you keep the events in a disk-based buffer. This will lead to out-of-disk-space situations if more events are produced than transferred. See a) ;)
So event transmission must be synchronous.
You can also use a bounded blocking queue. That way the process is usually asynchronous, but it falls back to (nearly) synchronous behaviour if there are too many events, which slows down the system. But I think the throughput will be higher even in the latter case.
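For illustration, the difference from the drop-when-full sketch above is a single call: a blocking put() instead of a non-blocking offer(). Again untested, same made-up names; this method would sit in the same class.

    // variant of dispatch(): instead of dropping on a full queue, block the
    // calling thread until space frees up. Under normal load this returns
    // immediately; under overload the application is throttled to the speed
    // of the network writer.
    public void dispatchBlocking(Object event) throws InterruptedException {
        queue.put(event); // blocks while the queue is full
    }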
_______________________________________________ logback-dev mailing list logback-dev@qos.ch http://qos.ch/mailman/listinfo/logback-dev