[JIRA] Created: (LBCORE-128) Please support implementation of binary log files in RollingFileAppender/FileAppender

Please support implementation of binary log files in RollingFileAppender/FileAppender
--------------------------------------------------------------------------------------

Key: LBCORE-128
URL: http://jira.qos.ch/browse/LBCORE-128
Project: logback-core
Issue Type: Improvement
Components: Appender
Affects Versions: 0.9.17
Reporter: Joern Huxhorn
Assignee: Logback dev list

This was discussed briefly at http://marc.info/?l=logback-dev&m=124905434331308&w=2 and I forgot to file a ticket about this.

Currently, RollingFileAppender => FileAppender => WriterAppender uses the following method in WriterAppender to actually write the data:

protected void writerWrite(String s, boolean flush) throws IOException

Please add an additional method like

protected void writerWrite(byte[] bytes, boolean flush) throws IOException

to write to the underlying stream directly. writerWrite(String, boolean) could call that method after performing the transformation internally, making this change transparent for the rest of the implementation.

Using a binary format for logfiles could have a tremendous performance impact, as can be seen here: http://sourceforge.net/apps/trac/lilith/wiki/SerializationPerformance
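A minimal sketch of the requested split between text and byte writing. The OutputStream-based field and the charset handling shown here are assumptions for illustration, not logback's actual implementation:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.Charset;

// Hypothetical sketch of the proposed overload; the outputStream and
// charset fields are assumptions, not logback's actual fields.
class ByteWritingAppenderSketch {
  private final OutputStream outputStream;
  private final Charset charset;

  ByteWritingAppenderSketch(OutputStream outputStream, Charset charset) {
    this.outputStream = outputStream;
    this.charset = charset;
  }

  // existing text-oriented entry point: convert the String, then delegate
  protected void writerWrite(String s, boolean flush) throws IOException {
    writerWrite(s.getBytes(charset), flush);
  }

  // proposed new method: write raw bytes to the underlying stream directly
  protected void writerWrite(byte[] bytes, boolean flush) throws IOException {
    outputStream.write(bytes);
    if (flush) {
      outputStream.flush();
    }
  }
}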

Joern Huxhorn commented on LBCORE-128:
--------------------------------------

I've given an implementation a try. Please have a look at it at http://github.com/huxi/logback/commits/LBCORE-128

Joern Huxhorn commented on LBCORE-128:
--------------------------------------

Is there any chance that this might get included in 0.9.19?

Ceki Gulcu commented on LBCORE-128:
-----------------------------------

Hi Joern,

The list of issues to be corrected for 0.9.19 is pretty long (around 30 issues). While it is obvious that this change would make logback more flexible, it would also increase the complexity of the code. What I am missing is a use case. How is writing binary log files useful? Is it for deferred processing of log files, e.g. having Lilith read log files without needing to parse free-format text? I would appreciate it if you could shed some light on the matter.

Joern Huxhorn commented on LBCORE-128:
--------------------------------------

One of the benefits is that logs can be compressed while they are created, not just while rotating them. (Yes, we have really huge production logs - and we actually need them ;))

Another point is that simply serializing log events is likely to perform better (depending on the serialization mechanism, obviously) than creating human-readable files, and it does not have to involve any loss of information. I can decide about the human-readable format when I want to read/evaluate/work with the logs, not while writing them. What I'd like to do is create a simple command-line app that just prints such a file in a human-readable format that is specified as an argument. It would also be possible to transform it into LOG4J XML, Lilith XML, HTML or whatever you'd like to do/need. I'd simply implement a matching Lilith file appender that produces the same files that are used by Lilith - but that's only my use case.

Another benefit, at least in the case of my implementation, is that I only require a header and no footer. This removes the problem of malformed XML or HTML caused by missing closing tags, for example.
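As an illustration of the "print a binary log in a human-readable format" idea, a minimal sketch assuming the file simply contains Java-serialized event objects written by a single ObjectOutputStream; the class name is made up and the plain println stands in for applying a real layout:

import java.io.BufferedInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.ObjectInputStream;

// Hypothetical printer for a file of Java-serialized logging events.
public class BinaryLogPrinter {
  public static void main(String[] args) throws Exception {
    ObjectInputStream in = new ObjectInputStream(
        new BufferedInputStream(new FileInputStream(args[0])));
    try {
      while (true) {
        Object event = in.readObject(); // one serialized logging event
        System.out.println(event);      // a real tool would apply the chosen layout here
      }
    } catch (EOFException endOfFile) {
      // end of log file reached
    } finally {
      in.close();
    }
  }
}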

Ceki Gulcu commented on LBCORE-128:
-----------------------------------

While fixing LBCORE-109, I'll try to fix this bug as well. When this issue was raised previously, I remember being impressed by Maarten Bosteels' ideas as well:

http://tinyurl.com/encoder-interface
http://tinyurl.com/encoder-example

I'll try to address both issues if I can.

Here are several use cases one can imagine:

1) write a text stream using a layout
2) write a compressed stream using a layout
3) write logging events as Objects, no layout necessary
4) write logging objects encoded with ProtoBuf, no layout necessary

Looking at your LBCORE-128 branch, and in particular at http://github.com/huxi/logback/commit/d1d9a045ec55a856560909151a1f51fee3851f..., I am guessing that you are catering for cases 1 and 2 described above, but I don't see how you instruct FileAppender to use a compressed stream. Am I missing a commit, or have you put the general design in place leaving some details unimplemented?
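For case 2 above, a rough sketch of what "a compressed stream using a layout" could mean in practice; the class and method names are made up, and the String argument stands in for the result of a Layout's doLayout call:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.Charset;
import java.util.zip.GZIPOutputStream;

// Hypothetical: write layout-formatted events to one continuously
// compressed stream, i.e. the whole log file is a single GZIP stream.
class CompressedTextLogSketch {
  private final OutputStream out;
  private final Charset charset = Charset.forName("UTF-8");

  CompressedTextLogSketch(String fileName) throws IOException {
    out = new GZIPOutputStream(new FileOutputStream(fileName));
  }

  // formattedEvent stands in for the result of Layout.doLayout(event)
  void append(String formattedEvent) throws IOException {
    out.write(formattedEvent.getBytes(charset));
  }

  void close() throws IOException {
    out.close(); // finishes the GZIP trailer and closes the file
  }
}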

Joern Huxhorn commented on LBCORE-128:
--------------------------------------

My branch was meant to implement what I need in a manner of least possible impact. I detached the implementation of FileAppender from WriterAppender, extending UnsynchronizedAppenderBase instead.

writerWrite(String s, boolean flush) (which I didn't rename even though there's no writer anymore but a stream instead - I didn't want to create problems in extending classes outside of logback) now uses convertToBytes(String s) and writeBytes(byte[] bytes, boolean flush), i.e. the appender writes bytes into a stream instead of writing a String into a Writer. Encoding of the String isn't handled by the Writer anymore. Instead, String.getBytes(charset) is used to transform it into bytes. The charset is initialized once during startup.

I've also taken a look at Maarten's Encoder, but I fear I don't really get the reason for either the decorate method or the use of generics in that case.

In the meantime, I've also implemented streaming encoder/decoder interfaces for sulky/Lilith: http://github.com/huxi/sulky/tree/master/sulky-codec/src/main/java/de/huxhor... These are a supplement to my previous byte[]-based encoders/decoders and allow chaining. I use generics to define the type that is encoded/decoded. The decoration of the stream is meant to be encapsulated in the actual implementation.

I realize that this does not allow reuse of the same encapsulated stream, though. This means that encoding using Java serialization is less efficient, since duplicate strings, for example, are serialized over and over instead of only once, referencing the already serialized string. On the other hand, it has the advantage that written byte[] blocks are not dependent on previously written ones. This is crucial for random event access, in my case.

I don't say Maarten's design is bad and mine is better. Neither you nor (especially!) Maarten should get this impression.

If you are actually aiming for a bigger change than my small patch, then it might be a good idea to keep the layout concept, even for binary files, and add the ability to use binary layouts that would also contain the encoder for the event. I've enabled binary headers and footers by declaring writeHeader() & writeFooter() protected instead of private in FileAppender - so I could (re)implement them as needed in my extending class. If Layout could handle binary data for header, footer and actual events instead, extending RollingFileAppender wouldn't be necessary anymore. One could simply declare a LilithLayout (with parameters?) instead. This would be a much cleaner design, but I feared that such a fundamental change wouldn't have any chance of being accepted for inclusion.

Thanks for taking a look at this issue. I really appreciate it!

Hi Joern,

The reason for using generics in the Encoder interface is to avoid having to cast. (hmm, I guess that's always the reason to use generics?)

Take for example the ObjectEncoder: it needs an ObjectOutputStream, so it wraps the given OutputStream in an ObjectOutputStream:

public class ObjectEncoder implements Encoder<ObjectOutputStream> {

  public void encode(ILoggingEvent event, ObjectOutputStream output) throws IOException {
    output.writeObject(event);
  }

  public ObjectOutputStream decorate(OutputStream os) throws IOException {
    return new ObjectOutputStream(os);
  }
}

On the other hand, the ProtobufEncoder can use a plain OutputStream, so its decorate impl is simply a no-op:

public class ProtobufEncoder implements Encoder<OutputStream> {

  private ProtoBufConvertor convertor = new ProtoBufConvertor();

  public void encode(ILoggingEvent event, OutputStream output) throws IOException {
    LoggingProtos.LoggingEvent ev = convertor.convert(event);
    ev.writeTo(output);
  }

  public OutputStream decorate(OutputStream os) throws IOException {
    return os;
  }
}

Clear now?

Maarten

Yes, understood. But if I take a look at http://code.google.com/p/firewood/source/browse/trunk/compare-formats/src/te... ...I'm not sure why SocketAppender should be concerned about the implementation of the encoder, or, more precisely, about the stream type that is required by the encoder implementation. I also don't know if this would be of any help during appender initialization via Joran, since the generic type will be erased.

If I understand everything correctly, then decorate is called once during startup and the decorated stream is reused. This could become problematic since not every encoder can be implemented like this. If Java serialization is used and events should be randomly accessible without reading all previous events first, then a new ObjectOutputStream must be created for every event that is written. Otherwise they depend on previously written events. The same is the case for GZIPInputStream/OutputStream.

Cheers, Jörn.

Ceki Gulcu commented on LBCORE-128:
-----------------------------------

Given that we are talking about logback extensions and not user-facing code such as the SLF4J API, and given the conceptual depth of this change, it's OK if compatibility with existing appenders extending FileAppender is broken.

The generics in Maarten's code ensure that the type of the output stream passed to a particular encoder's encode() method is verified by the compiler, and avoid a cast from OutputStream to a subtype. It's just a nice touch. I think the bigger innovation comes from letting the encoder control the type of the OutputStream. The containing appender invokes its encoder's decorate(OutputStream) method with an opened stream and then passes the decorated result to encode(ILoggingEvent, OutputStream) as the second argument. I don't see how it would be possible to achieve similar flexibility (of letting the encoder select the output stream type) more elegantly, especially if you consider that the actual OutputStream might change during the lifetime of the appender (think RollingFileAppender).

The more I think of it, the more I am tempted to merge the concept of a layout into an encoder. After all, a Layout is an encoder which encodes events as text. Since we are talking about a very substantial change, I'll start a new branch and see how it goes.
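To illustrate the decorate-then-encode call sequence described here, a sketch under assumed names; the Encoder shape is reconstructed from Maarten's examples above, and the appender fragment is not actual logback code:

import java.io.IOException;
import java.io.OutputStream;

// Encoder shape reconstructed from Maarten's examples above (an assumption);
// the event type is simplified to Object to keep the sketch self-contained.
interface EncoderSketch<S extends OutputStream> {
  void encode(Object event, S output) throws IOException;
  S decorate(OutputStream os) throws IOException;
}

// Hypothetical appender fragment showing the decorate-then-encode sequence.
class EncodingAppenderSketch<S extends OutputStream> {
  private final EncoderSketch<S> encoder;
  private S decoratedStream;

  EncodingAppenderSketch(EncoderSketch<S> encoder) {
    this.encoder = encoder;
  }

  // called whenever the underlying stream is (re)opened, e.g. after a rollover
  void openStream(OutputStream rawStream) throws IOException {
    decoratedStream = encoder.decorate(rawStream);
  }

  // called for every logging event
  void append(Object event) throws IOException {
    encoder.encode(event, decoratedStream);
  }
}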

Joern Huxhorn commented on LBCORE-128:
--------------------------------------

I've just answered Maarten on the mailing list.

The problem with the decoration method is that it defines (if I'm not mistaken) that a given stream is decorated once and reused subsequently for all the encoding. This works well for streaming encoding like SocketAppender, but doesn't work for encoding that uses independent messages (i.e. a single event is serialized into <size of serialized event in bytes><bytes of serialized event> blocks). Those are crucial for the possibility of random access.

In my encoders, the given OutputStream is wrapped (and flushed) for every event. This is necessary because both ObjectOutputStream and GZIPOutputStream have a context. Serializing in this way enables me to read any event knowing only its start offset. Otherwise, reading event 10.000 would mean that I had to read and drop 9.999 events until I reach the relevant event.
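A rough sketch of this per-event framing, with made-up names: each event is serialized and gzipped with fresh streams and written as an independent <size><bytes> block, so it can be read back knowing only its start offset.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;
import java.util.zip.GZIPOutputStream;

// Hypothetical per-event framing: each event becomes an independent block.
class FramedEventWriterSketch {

  static void writeEvent(OutputStream target, Serializable event) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    // fresh GZIP + ObjectOutputStream per event: no shared compression or
    // serialization context with previously written events
    GZIPOutputStream gzip = new GZIPOutputStream(buffer);
    ObjectOutputStream oos = new ObjectOutputStream(gzip);
    oos.writeObject(event);
    oos.flush();
    gzip.finish();

    byte[] block = buffer.toByteArray();
    DataOutputStream data = new DataOutputStream(target);
    data.writeInt(block.length); // <size of serialized event in bytes>
    data.write(block);           // <bytes of serialized event>
    data.flush();
  }
}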

Ceki Gulcu commented on LBCORE-128:
-----------------------------------

I understand. How do you deal with this issue? For example, for compression to work properly, some context is necessary, because otherwise compression will be extremely poor.

Joern Huxhorn commented on LBCORE-128:
--------------------------------------

Average event size for protobuf in my testcase is:

2.54kb uncompressed
705b compressed using gzip

These are events containing full callstacks; that's why they compress quite well anyway. For real applications, the average event size is 758b (averaged over 19.267.00 events).

FYI, we leave the callstack on - even in case of our production system - and the performance penalty is absolutely acceptable. It pays off as soon as something strange happens ;)

Ceki Gulcu commented on LBCORE-128:
-----------------------------------

Thanks for sharing the info, in particular about the performance with the callstack on. Pretty surprising...

Since each log entry comes with a full stack trace, it is probably large enough to be compressed. However, I very much doubt that a single event (in isolation) without a full stack trace would compress at all.

Joern Huxhorn commented on LBCORE-128:
--------------------------------------

I just checked it out and removed both callstack and exception info from the events before serializing them using protobuf.

206b Protobuf uncompressed (76,908.29 ops/s)
150b Protobuf compressed (5,007.15 ops/s)
359b Streaming Java serialization (44,939.78 ops/s), like SocketAppender is currently performing, i.e. without creating a new ObjectOutputStream for every event
1.49kib Java serialization uncompressed (39,169.6 ops/s, new OOS for every event)
742b Java serialization compressed (3,533.51 ops/s, new OOS for every event)

Compresses reasonably well, I think.

The way an encoder decorates a stream is, imo, an implementation detail that shouldn't be relevant for the appender. I don't say that every encoder must work that way, I'd just like to be able to implement an encoder that works that way (other examples include signing and/or encrypting events). I could implement my encoder the way I like by implementing a no-op decorate, so this won't be a problem for me. I'm simply not really sure if it's a good idea.

Ceki Gulcu commented on LBCORE-128:
-----------------------------------

Thank you for providing these figures. If I understand correctly, an event uses up 1.49 KB (kilobytes) of space without compression, which is reduced to 742 bytes after compression. I am also assuming, and this is pretty important, that each event is written in isolation to the output stream. In other words, you create a new ObjectOutputStream, write the event as an object and flush the stream. In compression mode, a new GZIPOutputStream is created for each event, and the ObjectOutputStream writes to the GZIPOutputStream. The finish() method of GZIPOutputStream is called after writing each event.

Anyway, in the serialization tests we do in logback, each event is less than 160 bytes in size, almost ten times smaller than your standard size. While 160 is perhaps a bit small, 1500 bytes is perhaps too big to be "representative". The word representative is in quotes because you seem to be compressing real events, so they are certainly more representative than my imaginary events. But yes, 50%, although not very good for logging (where you aim for at least 90%), is far from my "won't compress at all" assertion. I stand corrected.

Your argument about not wishing to read a whole stream to decode a given event was very convincing. So letting an encoder decorate an OutputStream seems like overkill, because the idea of a long uninterrupted stream does not work well for storage purposes. (You don't want to read 100'000 events before reading the one you are interested in.) Moreover, in my yet unpublished "encoder" branch (which does not even compile), the main method in the Encoder interface, that is doEncode, takes an event and the OutputStream as a second argument. Here is the Encoder interface as it exists in my "encoder" branch:

public interface Encoder<E> extends ContextAware, LifeCycle {
  void doEncode(E event, OutputStream os) throws IOException;
  void close(OutputStream os) throws IOException;
}

In a nutshell, it is now the responsibility of FileAppender to provide a valid OutputStream to the encoder, whose responsibility is to encode the event and also to write the results (encoded bytes) onto the stream. The encoder is given total freedom in how it writes the events.
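As a concrete illustration of that contract, a layout-backed text encoder might look roughly like this; it mirrors the doEncode/close shape above but omits the ContextAware/LifeCycle plumbing, and the SimpleLayout interface is a stand-in for logback's Layout:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.Charset;

// Hypothetical encoder that renders events as text and writes the bytes
// itself, as described above. Not actual logback code.
class LayoutTextEncoderSketch<E> {

  interface SimpleLayout<E> {          // stand-in for a Layout's doLayout(E)
    String doLayout(E event);
  }

  private final SimpleLayout<E> layout;
  private final Charset charset;

  LayoutTextEncoderSketch(SimpleLayout<E> layout, Charset charset) {
    this.layout = layout;
    this.charset = charset;
  }

  // corresponds to doEncode(E, OutputStream): encode and write the event
  void doEncode(E event, OutputStream os) throws IOException {
    os.write(layout.doLayout(event).getBytes(charset));
    os.flush();
  }

  // corresponds to close(OutputStream): a text encoder has nothing to finish
  void close(OutputStream os) throws IOException {
    os.flush();
  }
}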

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Joern Huxhorn commented on LBCORE-128: -------------------------------------- I'm fine with the doEncode method, but why is Encoder extending ContextAware and LifeCycle? And what is the reason for close()? Is Encoder supposed to be stateful? I guess so, but in that case it might be better to define it like this:

public interface Encoder<E> extends ContextAware, LifeCycle {
  void setOutputStream(OutputStream os);
  void doEncode(E event) throws IOException; // does nothing if os is null
  void close() throws IOException; // does nothing if os is null
}

I think I'll just wait for your results and have a look when it's ready.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: ----------------------------------- Adding a setOutputStream method would work as well, and yes, an encoder can be stateful if it chooses to be.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu edited comment on LBCORE-128 at 2/18/10 1:11 PM: ------------------------------------------------------------ Adding a setOutputStream method would work as well, and yes, an encoder can be stateful if it chooses to be. The close() method is justified by the fact that the encoder might need a chance to clean things up, if its particular encoding format mandates it.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: ----------------------------------- The encoder branch compiles and passes all tests. The code is not polished but should be sufficient to get the point across. The next step is to write a binary encoder and its mirror image, a decoder.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Maarten Bosteels edited comment on LBCORE-128 at 2/19/10 9:47 AM: ------------------------------------------------------------------ I think adding a separate setOutputStream method to the Encoder interface is a good idea. It defines the contract much better than my initial proposal. Ceki: "The more I think of it the more I am tempted to merge the concept of a layout into an encoder. After all a Layout is an encoder which encodes events in text" I would keep the Layout interface. It has a very simple contract, but different from the Encoder contract. It's just like the difference between java.io.OutputStream and java.io.Writer: one for writing bytes and one for writing characters.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Maarten Bosteels commented on LBCORE-128: ----------------------------------------- I think adding a separate setOutputStream method to the Encoder interface is a good idea. It defines the contract much better than my initial proposal. {quote}The more I think of it the more I am tempted to merge the concept of a layout into an encoder. After all a Layout is an encoder which encodes events in text.{quote} I would keep the Layout interface. It has a very simple contract, but different from the Encoder contract. It's just like the difference between java.io.OutputStream and java.io.Writer: one for writing bytes and one for writing characters.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: ----------------------------------- I agree that a setOutputStream() method would define a cleaner API. Unfortunately, the actual OutputStream can change in mid-flight, and for concurrency reasons it was easier to have FileAppender hand the "current" OutputStream, as locally available, to the method invoking the encoder's methods. As for merging Layout and Encoder, if you look in the "encoder" branch [1], you'll see that WriterAppender and sub-classes now take an encoder instead of a layout. Other appenders which had a layout continue to have a layout, and those which did not have a layout still don't. In summary, only WriterAppender and sub-classes are affected. We'll see how the API holds up when tested with a binary encoder. [1] http://github.com/ceki/logback/tree/encoder

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu edited comment on LBCORE-128 at 2/19/10 10:36 AM: ------------------------------------------------------------- Thank you for providing these figures. If I understand correctly, an event uses up 1.49 KB (kilobytes) of space without compression, which is reduced to 742 bytes after compression. I am also assuming, and this is pretty important, that each event is written in isolation to the output stream. In other words, you create a new ObjectOutputStream, write the event as an object and flush the stream. In compression mode, a new GZIPOutputStream is created for each event, and the ObjectOutputStream writes to the GZIPOutputStream. The finish() method of GZIPOutputStream is called after writing each event. Anyway, in the serialization tests we do in logback, the average footprint of an event is less than 160 bytes, almost ten times smaller than your standard size. However, 160 is already the result of some compression, since ObjectOutputStream keeps track of previous objects. Instead of writing a whole object, ObjectOutputStream will write a reference to the previous occurrence, which can result in very substantial gains in space. But yes, 50%, although not very good for logging (where you aim for at least 90%), is far from my "won't compress at all" assertion. I stand corrected. Your argument about not wishing to read a whole stream to decode a given event was very convincing. So letting an encoder decorate an OutputStream seems like overkill, because the idea of a long uninterrupted stream does not work well for storage purposes. (You don't want to read 100'000 events before reading the one you are interested in.) Moreover, in my yet unpublished "encoder" branch (which does not even compile), the main method in the Encoder interface, doEncode, takes an event and the OutputStream as a second argument. Here is the Encoder interface as it exists in my "encoder" branch:

public interface Encoder<E> extends ContextAware, LifeCycle {
  void doEncode(E event, OutputStream os) throws IOException;
  void close(OutputStream os) throws IOException;
}

In a nutshell, it is now the responsibility of FileAppender to provide a valid OutputStream to the encoder, whose responsibility is to encode the event and also to write the results (encoded bytes) onto the stream. The encoder is given total freedom in how it writes the events.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Joern Huxhorn edited comment on LBCORE-128 at 2/19/10 12:23 PM: ---------------------------------------------------------------- Yes, you understood it correctly. The wrapping is recreated for every event, incl. finish() and flush(). I'm fine with the doEncode method, but why is Encoder extending ContextAware and LifeCycle? And what is the reason for close()? Is Encoder supposed to be stateful? I guess so, but in that case it might be better to define it like this:

public interface Encoder<E> extends ContextAware, LifeCycle {
  void setOutputStream(OutputStream os);
  void doEncode(E event) throws IOException; // does nothing if os is null
  void close() throws IOException; // does nothing if os is null
}

Concerning your edit in the comment above: I thought Encoder would also be used in other Appenders, e.g. SocketAppender. If this were the case, the new SocketAppender wouldn't be able to stay compatible with the current one, since it streams a fixed amount before recreating the ObjectOutputStream (if I remember correctly). I'll try to take a look at your branch this weekend. Unfortunately, I won't be able to check it out today.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: -----------------------------------
Concerning your edit in the comment above:
I thought Encoder would also be used in other Appenders, e.g. SocketAppender. If this were the case, the new SocketAppender wouldn't be able to stay compatible with the current one, since it streams a fixed amount before recreating the ObjectOutputStream (if I remember correctly).
Encoder could be used in other appenders, e.g. SocketAppender. I did not get that far yet. The ObjectOutputStream is reset() approx. every 100 events. It is not recreated unless the socket connection goes bad. Consequently, I think it would be possible to move some of the current code in SocketAppender to an appropriate encoder and remain compatible with existing socket servers. Of course, this opens the door for injecting different encoders catering for different needs.
I'll try to take a look at your branch this weekend. Unfortunately, I won't be able to check it out today.
Whatever is convenient for you is fine. I am also experimenting with the following Encoder interface:

public interface Encoder<E> extends ContextAware, LifeCycle {
  void init(OutputStream os) throws IOException;
  void doEncode(E event) throws IOException;
  void close() throws IOException;
}

As for your original question regarding Encoder extending ContextAware and LifeCycle, all logback components are expected to support a lifecycle and be context aware.
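Purely as an illustration (again, not code from the branch), the init/doEncode/close shape above combined with the reset-every-~100-events behaviour described for SocketAppender could look roughly like the following; the class name and the threshold constant are assumptions:

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;

// Illustrative sketch: an encoder-like class following init/doEncode/close,
// periodically resetting the ObjectOutputStream as SocketAppender does.
class ResettingSerializationEncoder {
  private static final int RESET_FREQUENCY = 100; // mirrors the ~100 events mentioned above
  private ObjectOutputStream oos;
  private int counter;

  void init(OutputStream os) throws IOException {
    oos = new ObjectOutputStream(os);
    counter = 0;
  }

  void doEncode(Serializable event) throws IOException {
    if (oos == null) return; // does nothing if init was never called
    oos.writeObject(event);
    oos.flush();
    if (++counter >= RESET_FREQUENCY) {
      oos.reset(); // drop the back-reference table kept by ObjectOutputStream
      counter = 0;
    }
  }

  void close() throws IOException {
    if (oos != null) oos.flush();
  }
}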

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: ----------------------------------- I have implemented a sample object encoder called ObjectStreamEncoder and a corresponding InputStream called EventObjectInputStream. These, although very minimal, seem to work nicely together. The next step is to adapt the documentation, which is going to be a rather long task. I hope the encoder interface did not miss an important requirement you might have had.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: ----------------------------------- Joern, since you are the reporter of this issue, I am wondering if the changes in the "encoder" branch meet your needs. Since the change is pretty significant and affects a rather large chunk of code, it would be unfortunate if the branch became 0.9.19 and got released but then failed to satisfactorily address your requirements.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Joern Huxhorn commented on LBCORE-128: -------------------------------------- Unfortunately, I was sick over the weekend and I'm still recovering and not 100% fit. I'll take a look at your branch ASAP, I promise. Either today or tomorrow evening, I guess. Thanks for your effort!

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: ----------------------------------- This can wait. Get well.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Joern Huxhorn commented on LBCORE-128: -------------------------------------- I'm currently checking out the code but I'm not done yet. One thing that I'm missing is some info about whether I'm handling a fresh, i.e. empty, file or whether I'm appending to an existing one. I'd need that info because I'd have to write a file header in init(OutputStream) in the case of a new file, while I'd have to do nothing when appending to an existing one. (The message in encoderInit is still wrong, btw. It says "Failed to write footer for appender named [".) Besides that, everything looks fine so far. I'll implement some encoder in the near future, but at the moment it would only work if 'append' is false, creating a broken file otherwise. Cheers, Jörn.
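For what it's worth, one way the fresh-vs-appended distinction could be surfaced is to check the file length before opening it; the following is only a sketch of that idea, and the HeaderAwareOpener/HeaderWriter names are made up, not anything in the branch:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch: only a brand-new (or empty) file gets a header.
class HeaderAwareOpener {

  interface HeaderWriter {
    void writeHeader(OutputStream os) throws IOException;
  }

  static OutputStream open(File file, boolean append, HeaderWriter headerWriter)
      throws IOException {
    boolean freshFile = !append || file.length() == 0L;
    OutputStream os = new FileOutputStream(file, append);
    if (freshFile) {
      headerWriter.writeHeader(os); // skipped when appending to existing data
    }
    return os;
  }
}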

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: ----------------------------------- I was asking myself a related question. You see, given that an output stream can be very long, the file header written at the beginning of the stream can only contain information known when the encoder is initialized, i.e. mostly nothing. The problem is more general than that: because a stream only goes forward, you cannot write behind the current position. I tried to provide some solution in ObjectStreamEncoder, which writes self-sufficient blocks. Moreover, each block has a clearly identifiable starting marker so that a decoder can always position itself at the start of a new block even if the start position is temporarily lost. It's a pretty interesting problem actually.
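The block idea can be sketched in a few lines (this is not the actual ObjectStreamEncoder, and the marker value is an arbitrary example): each block starts with a fixed marker and carries its own length, so a reader can re-synchronize by scanning forward for the marker.

import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch of self-sufficient, marker-prefixed blocks.
class BlockWriter {
  static final int BLOCK_MARKER = 0xCAFEB10C; // arbitrary example value

  private final DataOutputStream out;

  BlockWriter(OutputStream os) {
    this.out = new DataOutputStream(os);
  }

  void writeBlock(byte[] encodedEvents) throws IOException {
    out.writeInt(BLOCK_MARKER);         // resync point for decoders
    out.writeInt(encodedEvents.length); // block is self-describing
    out.write(encodedEvents);
    out.flush();
  }
}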

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Joern Huxhorn commented on LBCORE-128: -------------------------------------- That's right, the file header only contains very minimal info:
- a magic value (int) that's used to identify my filebuffers in general (0x0B501E7E => obsolete)
- another magic value (int) that's used to identify Lilith files in particular (0x0B5E55ED => obsessed)
- the size of the following metadata (int)
- protobuf-encoded metadata that's retrieved/set as a Map<String,String>

This metadata contains info about:
- the compression used, if any
- the content type (logging or access)
- primary and secondary identifiers that can be used to identify the source of the events; the secondary identifier is usually a timestamp of creation.

The offset of the first event is therefore 12+(sizeof metadata). Each event, again, is an int for the size followed by that amount of bytes of event data. Very simple, but not suitable for replace or overwrite. Lilith is also using an additional index file which stores the offset for every event, so it's possible to query event n very fast. This doesn't have to be created by the appender, though, since it can already be generated by Lilith for a given file. Why would the start position be lost? This can only happen if an event is written only partially. In that case I'd start with a new file (as a recovery). Such a marker could be added quite easily, but I haven't done it so far because it simply wasn't necessary yet. Hope this info helps a bit.
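A reader for the layout described above (two magic ints, an int metadata length, the metadata bytes, then size-prefixed events) could be sketched as follows; the magic constants are taken from the comment, everything else is illustrative:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch of reading the Lilith-style framing described above.
class LilithStyleReader {
  static final int FILE_MAGIC = 0x0B501E7E;   // "obsolete"
  static final int LILITH_MAGIC = 0x0B5E55ED; // "obsessed"

  static void read(InputStream in) throws IOException {
    DataInputStream data = new DataInputStream(in);
    if (data.readInt() != FILE_MAGIC || data.readInt() != LILITH_MAGIC) {
      throw new IOException("not a Lilith-style file");
    }
    int metaSize = data.readInt();
    byte[] metadata = new byte[metaSize];
    data.readFully(metadata); // protobuf-encoded Map<String,String>
    // The first event starts at offset 12 + metaSize.
    while (true) {
      int eventSize;
      try {
        eventSize = data.readInt();
      } catch (EOFException endOfFile) {
        break;
      }
      byte[] event = new byte[eventSize];
      data.readFully(event); // hand the bytes to the actual event decoder
    }
  }
}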

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Joern Huxhorn commented on LBCORE-128: -------------------------------------- Hi Ceki, I've successfully implemented a first version of encoders for both classic and access events using your encoder branch. http://github.com/huxi/lilith/commit/1fa631dd006a965f6f727e286a7469071116366... As expected, appending to an already existing file doesn't work and will break the file. Some info in init() would be necessary to determine whether the stream is a fresh stream or a reused/appended one. Thank you very much for your work so far!

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Joern Huxhorn commented on LBCORE-128: -------------------------------------- One more thing I forgot to mention: while developing, I've seen the following output:

10:48:46,039 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [CONSOLE]
10:48:46,043 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@23:56 - no applicable action for [layout], current pattern is [[configuration][appender][layout]]
10:48:46,044 |-WARN in ch.qos.logback.core.joran.util.PropertySetter@693985fc - No such property [pattern] in ch.qos.logback.core.ConsoleAppender.
10:48:46,044 |-WARN in ch.qos.logback.core.ConsoleAppender[CONSOLE] - Encoder not yet set. Cannot invoke it's init method
10:48:46,044 |-ERROR in ch.qos.logback.core.ConsoleAppender[CONSOLE] - No encoder set for the appender named "CONSOLE".
10:48:46,044 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Popping appender named [CONSOLE] from the object stack

It would be nice if

<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
  <layout class="ch.qos.logback.classic.PatternLayout">
    <param name="Pattern" value="%date %level [%thread] %logger{10} [%file:%line] %msg%n" />
  </layout>
</appender>

would result in the same behavior as

<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
  </encoder>
</appender>

Just for backwards compatibility. Otherwise, people might get irritated. Cheers, Joern.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Joern Huxhorn commented on LBCORE-128: -------------------------------------- I'm fine with your suggestion on the mailing list. You can close this ticket, as far as I'm concerned. I'm happy :) Thanks!

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu resolved LBCORE-128. ------------------------------- Fix Version/s: 0.9.19 Resolution: Fixed

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Rü commented on LBCORE-128: --------------------------- I used the WriterAppender to write into a StringWriter, so I could easily unit test the logging output. Now I'd have to use a ByteArrayOutputStream or something. How about having both?!? There may be other use cases for a Writer instead of an OutputStream.

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Ceki Gulcu commented on LBCORE-128: ----------------------------------- Consider using ListAppender which logback uses quite extensively for unit testing. If you have very long tests generating a lot of messages, then you could also use CyclicBufferAppender.
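Roughly, using ListAppender in a test looks like the following (assuming logback-classic is on the test classpath; the logger name and message are illustrative):

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.read.ListAppender;
import org.slf4j.LoggerFactory;

// Illustrative sketch: events are captured in memory instead of being written
// to a Writer or OutputStream, so assertions can inspect them directly.
public class ListAppenderExample {
  public static void main(String[] args) {
    Logger logger = (Logger) LoggerFactory.getLogger("test");

    ListAppender<ILoggingEvent> listAppender = new ListAppender<ILoggingEvent>();
    listAppender.start();
    logger.addAppender(listAppender);

    logger.info("hello");

    ILoggingEvent captured = listAppender.list.get(0);
    System.out.println(captured.getLevel() + " " + captured.getMessage());
  }
}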

[ http://jira.qos.ch/browse/LBCORE-128?page=com.atlassian.jira.plugin.system.i... ] Rü commented on LBCORE-128: --------------------------- That's actually much better! I'll do that.
participants (6)
- Ceki Gulcu (JIRA)
- Joern Huxhorn
- Joern Huxhorn (JIRA)
- Maarten Bosteels
- Maarten Bosteels (JIRA)
- Rü (JIRA)