RFC: LoggingEvent redesign

Hello all,

I would like to split/redesign the LoggingEvent object as follows:

  interface ILoggingEvent {
    String[] getArgumentArray();
    CallerData[] getCallerData();
    Level getLevel();
    String getLoggerName();
    Marker getMarker();
    Map<String, String> getMDCPropertyMap();
    String getMessage();
    String getThreadName();
    ThrowableDataPoint[] getThrowableDataPointArray();
    long getTimeStamp();
    void setArgumentArray(Object[]);
    // other setters omitted
  }

  // localized usage, temporary lifespan
  class LoggingEvent implements ILoggingEvent {
    // getter and setter methods from ILoggingEvent omitted
    void prepareForDeferredProcessing();
    // create a LoggingEventMemento image of this LoggingEvent
    LoggingEventMemento asLoggingEventMemento();
  }

  // distributed (or remote) usage, long term lifespan
  class LoggingEventMemento implements ILoggingEvent {
    // getter and setter methods from ILoggingEvent omitted
    int getVersion();
    void makeImmutable();
  }

LoggingEvent is intended to be used within the application generating the logging event. LoggingEventMemento is intended for remote applications (applications other than the one at the origin of the event) and for a longer lifespan. LoggingEventMemento objects are intended to be compatible across logback versions. If possible, LoggingEventMemento should also be suitable for long-term storage (on disk).

Both LoggingEvent and LoggingEventMemento implement the ILoggingEvent interface so that most logback-classic components, assuming they expect to operate on ILoggingEvent instances, will be able to handle LoggingEvent or LoggingEventMemento objects interchangeably.

Instead of LoggingEvent, those appenders which perform serialization will serialize instances of LoggingEventMemento. The asLoggingEventMemento() method in LoggingEvent will return a LoggingEventMemento image of a given LoggingEvent.

Obviously there are several technical obstacles to overcome. However, I am soliciting your comments about the general goals of the above redesign. Do they make sense? Have I omitted important goals?

TIA,

-- 
Ceki Gülcü
Logback: The reliable, generic, fast and flexible logging framework for Java.
http://logback.qos.ch
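The asLoggingEventMemento() snapshot step described above could be sketched roughly as follows. This is a minimal illustration, not logback code: the field set is reduced to a handful of those in the proposal, and helper names like putMDC() and sample() are hypothetical.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a reduced LoggingEvent whose asLoggingEventMemento()
// returns a detached, serialization-friendly snapshot of its state.
public class MementoSketch {

    // short-lived, mutable event as proposed in the RFC (fields trimmed down)
    public static class LoggingEvent {
        private String loggerName;
        private String message;
        private long timeStamp;
        private Map<String, String> mdcPropertyMap = new HashMap<String, String>();

        public void setLoggerName(String loggerName) { this.loggerName = loggerName; }
        public void setMessage(String message) { this.message = message; }
        public void setTimeStamp(long timeStamp) { this.timeStamp = timeStamp; }
        public void putMDC(String key, String value) { mdcPropertyMap.put(key, value); }

        // create a LoggingEventMemento image of this LoggingEvent
        public LoggingEventMemento asLoggingEventMemento() {
            return new LoggingEventMemento(loggerName, message, timeStamp,
                    new HashMap<String, String>(mdcPropertyMap)); // defensive copy
        }
    }

    // long-lived snapshot intended for serialization / remote use
    public static class LoggingEventMemento {
        private final String loggerName;
        private final String message;
        private final long timeStamp;
        private final Map<String, String> mdcPropertyMap;

        LoggingEventMemento(String loggerName, String message, long timeStamp,
                            Map<String, String> mdc) {
            this.loggerName = loggerName;
            this.message = message;
            this.timeStamp = timeStamp;
            this.mdcPropertyMap = Collections.unmodifiableMap(mdc);
        }

        public String getLoggerName() { return loggerName; }
        public String getMessage() { return message; }
        public long getTimeStamp() { return timeStamp; }
        public Map<String, String> getMDCPropertyMap() { return mdcPropertyMap; }
    }

    public static LoggingEventMemento sample() {
        LoggingEvent event = new LoggingEvent();
        event.setLoggerName("com.example.Foo");
        event.setMessage("hello");
        event.setTimeStamp(42L);
        event.putMDC("user", "alice");
        return event.asLoggingEventMemento();
    }
}
```

The memento holds copies of the event's state, so it can outlive (and be shipped independently of) the original event.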

Hello Ceki,

Seems like a good idea to me. But what about making a read-only interface and a read-write interface?

  interface ImmutableLoggingEvent {
    // only getters
  }

  interface MutableLoggingEvent {
    // getters and setters
  }

  LoggingEventMemento implements ImmutableLoggingEvent
  LoggingEvent implements MutableLoggingEvent

And wherever possible use an ImmutableLoggingEvent.

Maarten

On Mon, Feb 23, 2009 at 1:44 PM, Ceki Gulcu <ceki@qos.ch> wrote:

Hi Maarten,

Thus far, mutability/immutability, localness/remoteness, and storability on disk are aspects that we need to take into consideration. Are there other aspects?

Maarten Bosteels wrote:
-- 
Ceki Gülcü
Logback: The reliable, generic, fast and flexible logging framework for Java.
http://logback.qos.ch

Thorbjoern Ravn Andersen wrote:
Ceki Gulcu skrev:
  Hello all,
  I would like to split/redesign the LoggingEvent object as follows:

Hi.

Could you elaborate on what you want to achieve? Makes it easier to evaluate your suggestion.
One important goal is to better support interoperability between logback versions. For example, in a client/server situation, a client may be using a different version of logback than the server. Here the client is the application generating LoggingEvent instances and the server is the application receiving serialized LoggingEvent instances via the network.

A second goal is to simplify the code in LoggingEvent. If LoggingEvent instances do not have to worry about serialization, then LoggingEvent can be simplified. LoggingEventMemento needs to worry about serialization, replacing LoggingEvent in client/server communication.

Another goal of this RFC is to identify uses of LoggingEvent so that logback can cater for those use cases, possibly via new object types.

It is not clear how LoggingEventMemento would actually ensure version compatibility, especially if LoggingEventMemento fields change over time. However, as LoggingEventMemento is only a data-carrying type, it is likely to be much smaller in (code) size.

-- 
Ceki Gülcü
Logback: The reliable, generic, fast and flexible logging framework for Java.
http://logback.qos.ch
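One common pattern for the version-compatibility goal is to pin serialVersionUID and carry an explicit logical version, as the proposed getVersion() hints at. A minimal sketch, assuming Java serialization as the transport; the class and field names here are illustrative, not logback's:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch: pin serialVersionUID so the stream format stays nominally compatible
// across releases, and store a logical version number receivers can inspect.
public class VersionedMemento implements Serializable {
    private static final long serialVersionUID = 1L; // never changes across releases

    public static final int CURRENT_VERSION = 1;

    private final int version;
    private final String message;

    public VersionedMemento(String message) {
        this.version = CURRENT_VERSION;
        this.message = message;
    }

    public int getVersion() { return version; }
    public String getMessage() { return message; }

    // helper: serialize and deserialize, as a network hop would
    public static VersionedMemento roundTrip(VersionedMemento m) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(m);
            oos.close();
            ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
            return (VersionedMemento) ois.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A receiver built against a newer schema could branch on getVersion() and fill in defaults for fields that did not exist yet.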

Hi Ceki, Thorbjoern and Maarten,

I'd like to encourage all of you to take a look at my LoggingEvent implementation at http://apps.sourceforge.net/trac/lilith/browser/trunk/lilith-data , more specifically the classes in http://apps.sourceforge.net/trac/lilith/browser/trunk/lilith-data/logging/sr...

The LoggingEvent is just a dumb data container; the only logic is a (lazy) call to MessageFormatter if the formatted message is actually needed. Beside that, all the classes are simply Java Beans with both getters and setters for everything. They also have a default c'tor. This means they can be persisted using java.beans.XMLEncoder/XMLDecoder, which would probably be the best candidates to support future updates because they handle such a situation gracefully.

I've done some benchmarks which you can check out at http://apps.sourceforge.net/trac/lilith/ticket/28

XML encoding *is* significantly slower, especially because gzipping isn't just an option but a must if you take a look at the increase in size in the case of XML.

Concerning immutability, I seriously fail to see any advantage of immutable data objects because the immutability can be circumvented anyway. Because of that, it should be sufficient to define the hierarchy like this:

  interface ImmutableLoggingEvent {
    // only getters
  }

  interface MutableLoggingEvent extends ImmutableLoggingEvent {
    // only setters
  }

  class LoggingEvent implements MutableLoggingEvent {
  }

and simply use the LoggingEvent as an ImmutableLoggingEvent as needed to document that certain classes, like appenders, aren't supposed to change the event. Granted, it would obviously still be possible to cast it to MutableLoggingEvent, but it's also possible, on the other hand, to change private variables using reflection, so immutability is only a contract/illusion anyway.

I also still think that the LoggingEvent should not know about the logic behind the transformation from the Object[] arguments to the String[] arguments. Therefore I'd suggest to define

  void setArgumentArray(String[])

instead of

  void setArgumentArray(Object[])

(see http://jira.qos.ch/browse/LBCLASSIC-45 )

My LoggingEvent is using a ThrowableInfo similar to the class I suggested in http://jira.qos.ch/browse/LBCLASSIC-46 It's keeping the throwable hierarchy intact because a ThrowableInfo can have another ThrowableInfo as its cause, and so on...

On a side note, the MessageFormatter contained in the same package implements my suggestions from the following slf4j "bugs":
http://bugzilla.slf4j.org/show_bug.cgi?id=31
http://bugzilla.slf4j.org/show_bug.cgi?id=70
http://bugzilla.slf4j.org/show_bug.cgi?id=112
(there's just one usability glitch left which I'll fix in the near future but I'll leave it up to you to spot it ;))

I'd have absolutely no problem donating all that code back to both Logback and SLF4J, although some work would be required to backport it to jdk < 1.5...

Oh, I almost forgot: My LoggingEvent also contains support for an enhanced version of the NDC - but that functionality could simply be removed if undesired.

What do all of you think?

Joern.

On 23.02.2009, at 18:58, Ceki Gulcu wrote:

I'm -1 on this recommendation. SLF4J and Logback require messages to be Strings. I "cheat" in audit logging by passing a Map as one of the parameters. Although the Map can be serialized and deserialized, the cost of doing that is very high when the Map is just being passed to a local Appender.

FWIW - I'm planning on adding this Auditing capability to XLogger in the near future.

Ralph

On Feb 23, 2009, at 2:34 PM, Joern Huxhorn wrote:
I also still think that the LoggingEvent should not know about the logic behind the transformation from the Object[] arguments to the String[] arguments.
Therefore I'd suggest to define void setArgumentArray(String[]) instead of void setArgumentArray(Object[]) (see http://jira.qos.ch/browse/LBCLASSIC-45 )

Joern Huxhorn skrev:
XML encoding *is* significantly slower, especially because gzipping isn't just an option but a must if you take a look at the increase of size in case of XML.
The sole reason for compressing is because the network connection is too slow. Does that hold here? It would be nice if the server allowed for both compressed and uncompressed transparently. Also gzipping is rather slowish :)
Concerning immutability, I seriously fail to see any advantage of immutable data objects because the immutability can be circumvented anyway.

In my world immutability means that you cannot CHANGE a current object, but you can create a new object based on an old one. It gives a different mindset.
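The "create a new object based on an old one" style Thorbjoern describes can be sketched like this; the class and its with-style method are hypothetical illustrations, not part of any logback proposal:

```java
// Illustrative sketch: the event itself is never modified; "changing" the
// level yields a fresh instance, leaving the original intact.
public final class FrozenEvent {
    private final String message;
    private final String level;

    public FrozenEvent(String message, String level) {
        this.message = message;
        this.level = level;
    }

    public String getMessage() { return message; }
    public String getLevel() { return level; }

    // no setter; instead return a modified copy of this instance
    public FrozenEvent withLevel(String newLevel) {
        return new FrozenEvent(message, newLevel);
    }
}
```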
-- 
Thorbjørn Ravn Andersen  "...plus... Tubular Bells!"

Thorbjoern Ravn Andersen wrote:
Joern Huxhorn skrev:
XML encoding *is* significantly slower, especially because gzipping isn't just an option but a must if you take a look at the increase of size in case of XML.
The sole reason for compressing is because the network connection is too slow. Does that hold here?
It would be nice if the server allowed for both compressed and uncompressed transparently. Also gzipping is rather slowish :)

Well, I'm using the benchmarked classes as the native file format of Lilith. My appenders (both the xml and serialization version) all have compression as an option. Ultimately, it all depends on the infrastructure and the actual logging events. We use logging in our live environment (our web-developers receive errors if something is wrong in one of the templates of our CMS) and single events can get quite big, so in our case it increases performance.

Concerning immutability, I seriously fail to see any advantage of immutable data objects because the immutability can be circumvented anyway. In my world immutability means that you cannot CHANGE a current object, but you can create a new object based on an old one. It gives a different mindset.

Ok, I *do* understand what immutability is *supposed* to be about, but reality is quite different. In the two languages that I know best - Java and C++ - constness and immutability can be circumvented easily.

Consequently, I consider immutable objects (i.e. objects that only provide getters in Java or, as is the case with the current LoggingEvent, throw an exception in the setter if a value has previously been set) just as documentation that they should/must not be changed. If this results in significantly reduced functionality (like e.g. the inability to use XMLEncoder) then the price is too high for the mere illusion of some kind of security (who is protected against what, anyway?).

The interface suggestion provides essentially the same. While instances aren't really immutable, the interface documents that they should not be changed. To give a concrete example: the append method of Appender would have an ImmutableLoggingEvent as its argument. Any appender would still have the ability to cast it to a MutableLoggingEvent, but it would be clear from the perspective of the method signature that it should not change the event while processing it. Using reflection, it is absolutely possible to change anything, even static final constants.

I'm not very fond of the memento suggestion because it would create twice the amount of objects as the current implementation, and I guess that this will have some garbage collection impact.

Joern.
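Joern's concrete example of an append method that only exposes the read-only view could look roughly like this. All names here are illustrative sketches of the idea, not actual logback or Lilith classes:

```java
// Sketch: the append() signature only exposes the read-only interface, so the
// "don't mutate" contract is visible in the API even though a cast could
// still defeat it.
public class InterfaceSplitSketch {

    public interface ImmutableLoggingEvent {
        String getMessage();
    }

    public interface MutableLoggingEvent extends ImmutableLoggingEvent {
        void setMessage(String message);
    }

    public static class LoggingEvent implements MutableLoggingEvent {
        private String message;
        public String getMessage() { return message; }
        public void setMessage(String message) { this.message = message; }
    }

    // an appender only ever sees the read-only interface
    public static class BufferAppender {
        private final StringBuilder out = new StringBuilder();

        public void append(ImmutableLoggingEvent event) {
            out.append(event.getMessage()); // no setters reachable here
        }

        public String contents() { return out.toString(); }
    }

    public static String demo() {
        LoggingEvent event = new LoggingEvent();
        event.setMessage("hello");
        BufferAppender appender = new BufferAppender();
        appender.append(event); // implicitly narrowed to ImmutableLoggingEvent
        return appender.contents();
    }
}
```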

Hello Joern,

Joern Huxhorn wrote:
Ok, I *do* understand what immutability is *supposed* to be about but reality is quite different. In the two languages that I know best - Java and C++ - constness and immutability can be circumvented easily.
The fact that immutability can be circumvented by technical means is in my opinion inconsequential. Marking an object instance immutable conveys intent. You can build additional logic on top of the immutability hypothesis. If someone intentionally circumvents immutability, the ensuing problems are theirs, not ours.
Joern.
-- 
Ceki Gülcü
Logback: The reliable, generic, fast and flexible logging framework for Java.
http://logback.qos.ch

Ceki Gulcu wrote:
Hello Joern,
Joern Huxhorn wrote:
Ok, I *do* understand what immutability is *supposed* to be about but reality is quite different. In the two languages that I know best - Java and C++ - constness and immutability can be circumvented easily.

The fact that immutability can be circumvented by technical means is in my opinion inconsequential.

Depends on your point of view.

C++ is just a very thin layer on top of assembly, and a void pointer just points to arbitrary memory, so obviously the ability to say "this memory is actually a ..." is needed. And casting an XYZ pointer back to a void pointer is comparable to using Object in Java. One can disable the ability to use reflection for this kind of dark magic using security constraints. Because of this, I think everything is OK with Java as well.

Marking an object instance immutable conveys intent. You can build additional logic on top of the immutability hypothesis. If someone intentionally circumvents immutability, the ensuing problems are theirs, not ours.

That's exactly what I was trying to say in the following paragraph. Using an ImmutableLoggingEvent interface should be sufficient to mark that intent. Casting from ImmutableLoggingEvent to MutableLoggingEvent is similar to using reflection magic.
Joern.

Joern Huxhorn wrote:
Marking an object instance immutable conveys intent. You can build additional logic on top of the immutability hypothesis. If someone intentionally circumvents immutability, the ensuing problems are theirs, not ours.
That's exactly what I was trying to say in the following paragraph. Using an ImmutableLoggingEvent interface should be sufficient to mark that intent. Casting from ImmutableLoggingEvent to MutableLoggingEvent is similar to using reflection magic.
Only later did I notice that we were in agreement about the consequences of immutability, i.e. intent ~ contract. Sorry for taking your comments out of their context.

The distinction you make between MutableLoggingEvent and ImmutableLoggingEvent does not accurately reflect the distinction I am trying to make between LoggingEvent and LoggingEventMemento, which is mostly about serialization and only marginally about mutability. Both LoggingEvent and LoggingEventMemento could be made immutable. I added the makeImmutable() method to LoggingEventMemento as an afterthought.
Joern.
-- 
Ceki Gülcü
Logback: The reliable, generic, fast and flexible logging framework for Java.
http://logback.qos.ch

Joern Huxhorn skrev:
The sole reason for compressing is because the network connection is too slow. Does that hold here?
It would be nice if the server allowed for both compressed and uncompressed transparently. Also gzipping is rather slowish :)
Well, I'm using the benchmarked classes as the native file format of Lilith. My appenders (both the xml and serialization version) all have compression as an option. Ultimately, it all depends on the infrastructure and the actual logging events. We use logging in our live environment (our web-developers are receiving errors if something is wrong in one of the templates of our CMS) and single events can get quite big so in our case it increases performance.
I am not doubting that you have real life experience with this, but I am curious if you have benchmarked to see how large the gain is by compressing the large events you describe? (I guess you are on a 100 Mbit network?)
Ok, I *do* understand what immutability is *supposed* to be about but reality is quite different. In the two languages that I know best - Java and C++ - constness and immutability can be circumvented easily.
If you can access an object through reflection you can do anything with it, including making a private member public (IIRC) and vice versa. The idea as I understand it is not to "protect" anything but to state that events are not changeable after the fact. Which makes sense to me.

A good example of a framework built on immutable objects is the JODA framework for handling dates and times, which is recommended reading for anyone who has suffered with dealing with dates and times across timezones in the Calendar framework (which was not designed by Sun).
If this results in significantly reduced functionality (like e.g. the inability to use XMLEncoder) then the price is too high for the mere illusion of some kind of security (who is protected against what, anyway?).
Strings are immutable. Has that been a major hindrance in your programming work so far? What would make XMLEncoder break?
I'm not very fond of the memento suggestion because it would create twice the amount of objects as the current implementation and I guess that this will have some garbage collection impact.
Java programs generally create a lot of short-lived objects and the garbage collectors know very well how to deal with them. I suggest we measure to see if there is a difference :) You probably have some very large datasets?

-- 
Thorbjørn Ravn Andersen  "...plus... Tubular Bells!"

Thorbjoern Ravn Andersen wrote:
Java programs generally create a lot of short lived objects and the garbage collectors know very well how to deal with them. I suggest we measure to see if there is a difference :) You probably have some very large datasets?
Preliminary tests show that there is a 2% increase in the time it takes to serialize LoggingEventMemento instances (3482 nanoseconds per object) instead of LoggingEvent instances directly (3413 nanoseconds per object). In other words, the overhead is negligible.

-- 
Ceki Gülcü
Logback: The reliable, generic, fast and flexible logging framework for Java.
http://logback.qos.ch
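The 2% figure checks out against the raw numbers: (3482 − 3413) / 3413 ≈ 2.0%. A one-liner to verify the arithmetic:

```java
// Quick check of the reported overhead: (3482 - 3413) / 3413 * 100 ≈ 2.02%.
public class OverheadCheck {
    public static double overheadPercent(double baselineNanos, double mementoNanos) {
        return (mementoNanos - baselineNanos) / baselineNanos * 100.0;
    }
}
```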

On 24.02.2009, at 17:41, Thorbjoern Ravn Andersen wrote:
Joern Huxhorn skrev:
The sole reason for compressing is because the network connection is too slow. Does that hold here?
It would be nice if the server allowed for both compressed and uncompressed transparently. Also gzipping is rather slowish :)
Well, I'm using the benchmarked classes as the native file format of Lilith. My appenders (both the xml and serialization version) all have compression as an option. Ultimately, it all depends on the infrastructure and the actual logging events. We use logging in our live environment (our web-developers are receiving errors if something is wrong in one of the templates of our CMS) and single events can get quite big so in our case it increases performance.
I am not doubting that you have real life experience with this, but I am curious if you have benchmarked to see how large the gain is by compressing the large events you describe?
(I guess you are on a 100 Mbit network?)
I haven't run a real benchmark because it's pretty hard to create a real-life situation. There are times where >10 people are listening to events of the same machine. My appender serializes (and gzips) the event only once and sends that package to every recipient. Because of that, I think that the additional cost of compression is legitimate.

To make benchmarking even harder, we are all connected to the server using a tunnel that is nowhere near 100MBit (I think the last I heard was 6MBit...) :p

We are using serialized objects, not XML, so they decrease in size to about 35%. The benchmark in the ticket does not take the network into account at all. It's just reading and writing to files, so the speed of the hard disk is the limiting factor.
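The "serialize and gzip once, send to every recipient" idea boils down to producing one compressed byte[] per event, which can then be written to any number of sockets. A sketch under that assumption; the class and method names are illustrative, not Lilith's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch: serialize+gzip an event exactly once into a reusable packet.
public class GzipOnceSketch {

    public static byte[] toCompressedBytes(Serializable event) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            GZIPOutputStream gzip = new GZIPOutputStream(bos);
            ObjectOutputStream oos = new ObjectOutputStream(gzip);
            oos.writeObject(event); // serialize exactly once
            oos.close();            // closing also finishes the gzip stream
            return bos.toByteArray();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static Object fromCompressedBytes(byte[] bytes) {
        try {
            ObjectInputStream ois = new ObjectInputStream(
                    new GZIPInputStream(new ByteArrayInputStream(bytes)));
            return ois.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The compression cost is paid once per event regardless of how many listeners receive the packet.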
Ok, I *do* understand what immutability is *supposed* to be about but reality is quite different. In the two languages that I know best - Java and C++ - constness and immutability can be circumvented easily.
If you can access an object through reflection you can do anything with it, including making a private member public (IIRC) and vice versa. The idea as I understand it is not to "protect" anything but to state that events are not changable after the fact. Which makes sense to me.
Definitely. Although I like the phrase "should not be changed" better :)
A good example of a framework on immutable objects is the JODA framework for handling dates and times, which is recommended reading for anyone who has suffered with dealing with dates and times across timezones in the Calendar framework (which was not designed by Sun).
I haven't checked, but I somewhat doubt that those are initialized lazily.
If this results in significantly reduced functionality (like e.g. the inability to use XMLEncoder) then the price is too high for the mere illusion of some kind of security (who is protected against what, anyway?).
Strings are immutable. Have that been a major hinderance in your programming work so far?
No, but it didn't exactly help me either ;) Seriously, I know the pro arguments for immutable objects quite well (e.g. thread-safety), but there are different types of immutability. Is an object immutable if it initializes certain properties lazily? Is an object immutable if all references contained will stay the same (i.e. are final) but the referenced objects change because they are not immutable? Or is an object only "really" immutable if it contains just basic data types with no setter? (like, probably, the JODA time objects)
What would make XMLEncoder break?
In short, the class to be en/decoded should have a default constructor and should adhere to the Java Beans method naming conventions. That way it will "simply work". It is possible to implement special persistence delegates in case of non-standard properties, but I never needed to do so (beside the one for Enums in [4]). For more information, just read the docs ;) [1][2][3]

Special care must be taken if the class contains references to Enums because those are only handled correctly starting with JDK 1.6. See [4]
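The XMLEncoder round trip Joern describes really is that small for a well-behaved bean. A minimal sketch, assuming a hypothetical bean (not a real Lilith or logback class) that follows the rules above:

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

// Sketch: a public Java Bean with a default constructor round-trips through
// java.beans.XMLEncoder/XMLDecoder with no extra code.
public class XmlBeanSketch {

    public static class EventBean {
        private String loggerName;
        private long timeStamp;

        public EventBean() {} // default c'tor required by XMLDecoder

        public String getLoggerName() { return loggerName; }
        public void setLoggerName(String loggerName) { this.loggerName = loggerName; }
        public long getTimeStamp() { return timeStamp; }
        public void setTimeStamp(long timeStamp) { this.timeStamp = timeStamp; }
    }

    public static EventBean roundTrip(EventBean bean) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        XMLEncoder encoder = new XMLEncoder(bos);
        encoder.writeObject(bean);
        encoder.close();
        XMLDecoder decoder = new XMLDecoder(new ByteArrayInputStream(bos.toByteArray()));
        EventBean result = (EventBean) decoder.readObject();
        decoder.close();
        return result;
    }
}
```

Because the decoder replays setter calls rather than restoring a binary field layout, fields added in later versions simply stay at their defaults when reading older XML, which is the graceful-upgrade property Joern is after.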
I'm not very fond of the memento suggestion because it would create twice the amount of objects as the current implementation and I guess that this will have some garbage collection impact.
Java programs generally create a lot of short lived objects and the garbage collectors know very well how to deal with them. I suggest we measure to see if there is a difference :) You probably have some very large datasets?
I'm just a bit worried that Logback could lose ground in comparison to log4j. The reading and writing of objects doesn't have to be coupled with the data objects at all, so there is no immediate use for the memento. All that is needed are Reader/Writer interfaces and various implementations of those. Serialization would just be the simplest (and probably fastest, but unstable) implementation. I've done it that way in Lilith.

Joern.

Joern Huxhorn wrote:
I also still think that the LoggingEvent should not know about the logic behind the transformation from the Object[] arguments to the String[] arguments.
I am quite puzzled by your line of thought expressed above. Looking at the d.h.lilith.data.logging package, while the LoggingEvent class does not know about message formatting, the Message class (in the same package) does know about the message formatting logic. It seems like the d.h.lilith.data.logging.LoggingEvent class delegates message formatting logic to a subcomponent, but the logic is still there. (I actually like the way it's done. However, I am confused by your suggestion to remove the logic, taking your code as example, while that code contains the said logic.)
Therefore I'd suggest to define void setArgumentArray(String[]) instead of void setArgumentArray(Object[]) (see http://jira.qos.ch/browse/LBCLASSIC-45 )
As Ralph mentioned, under certain circumstances it may be useful to pass object types other than strings as parameters to a logging event. In my previous proposal for ILoggingEvent, the getArgumentArray() method returned String[]. I think this should be modified to Object[] because even if only strings are serialized, we should probably not impact local usage of parameters. ILoggingEvent then becomes:

  interface ILoggingEvent {
    Object[] getArgumentArray();
    CallerData[] getCallerData();
    Level getLevel();
    String getLoggerName();
    Marker getMarker();
    Map<String, String> getMDCPropertyMap();
    String getMessage();
    String getThreadName();
    ThrowableDataPoint[] getThrowableDataPointArray();
    long getTimeStamp();
    void setArgumentArray(Object[]);
    // other setters omitted
  }
I'm not very fond of the memento suggestion because it would create twice the amount of objects as the current implementation and I guess that this will have some garbage collection impact.
Absolutely. There will be an impact on the number of objects created. At present, when a LoggingEvent is serialized, no new object is created. With the memento proposal, a new object would be created. The cost of this new object may or may not be important. We'd need to benchmark the cost. Placing serialization concerns in a new class is likely to simplify LoggingEvent, which will no longer need to worry about being accessible on a remote host. So presumably, the new code will be simpler.

As for your suggestion to make LoggingEvent a dumb object with only getters and setters: since several LoggingEvent fields are computed lazily, the computation logic would need to be moved somewhere else, possibly into appenders, which seems like a bad idea to me. On the host where LoggingEvent data is generated, we just can't ignore lazy computation of certain LoggingEvent fields. Did I misinterpret your idea?
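The lazy computation Ceki refers to is compute-on-first-access with caching: an event that gets filtered out never pays the formatting cost. A minimal sketch; the class, the stand-in formatter, and the formatCount counter are all illustrative, not logback code:

```java
// Sketch: the formatted message is only computed on first access, then cached.
public class LazyEventSketch {
    private final String messagePattern;
    private final Object[] argumentArray;
    private String formattedMessage;   // computed lazily
    private int formatCount;           // for demonstration only

    public LazyEventSketch(String messagePattern, Object[] argumentArray) {
        this.messagePattern = messagePattern;
        this.argumentArray = argumentArray;
    }

    public String getFormattedMessage() {
        if (formattedMessage == null) {
            formatCount++;
            // stand-in for the real MessageFormatter: replace each {} in order
            String result = messagePattern;
            for (Object arg : argumentArray) {
                result = result.replaceFirst("\\{\\}", String.valueOf(arg));
            }
            formattedMessage = result;
        }
        return formattedMessage;
    }

    public int getFormatCount() { return formatCount; }
}
```

This is exactly the logic that has to live somewhere near the event on the originating host; moving it into appenders would mean every appender re-implements (or re-runs) it.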
I'd have absolutely no problem donating all that code back to both Logback and SLF4J, although some work would be required to backport it to JDK < 1.5...
I appreciate the offer. You should perhaps consider filing a contributor license agreement. Cheers, -- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

Ceki Gulcu wrote:
Joern Huxhorn wrote:
I also still think that the LoggingEvent should not know about the logic behind the transformation from the Object[] arguments to the String[] arguments.
I am quite puzzled by your line of thought expressed above. Looking at the d.h.lilith.data.logging package, while the LoggingEvent class does not know about message formatting, the Message class (in the same package) does know about message-formatting logic. It seems that the d.h.lilith.data.logging.LoggingEvent class delegates message formatting to a subcomponent, but the logic is still there. (I actually like the way it's done. However, I am confused by your suggestion to remove the logic, taking your code as an example, while that code contains the said logic.) I wasn't referring to the message formatting logic. I was referring to the logic required to convert the Object[] to the String[].
I can understand your confusion because the logic implemented by SLF4J's MessageFormatter.arrayFormat(String messagePattern, Object[] argArray) is a three-step process in my case. The call to (my) MessageFormatter.format(String messagePattern, String[] arguments) is actually just the last step and is there to support one-time lazy initialization. This could have been implemented using a wrapper to keep LoggingEvent even "purer", but I decided to implement it that way for performance reasons. It's a static method without a LoggingEvent/Message dependency for better testability (and also because it kind of evolved from simply calling the SLF4J formatter to what it is now). The logic I meant is contained in int countArgumentPlaceholders(String messagePattern) and ArgumentResult evaluateArguments(String messagePattern, Object[] arguments). This code is only executed during creation of the LoggingEvent. It would/should be contained in SLF4J and not in Logback because it could be the same for all SLF4J implementations. The String[] is contained in the ArgumentResult.
Therefore I'd suggest to define void setArgumentArray(String[]) instead of void setArgumentArray(Object[]) (see http://jira.qos.ch/browse/LBCLASSIC-45 )
As Ralph mentioned, under certain circumstances it may be useful to pass object types other than strings as parameters to a logging event. In my previous proposal for ILoggingEvent the getArgumentArray() method returned String[]. I think this should be modified to Object[] because even if only strings are serialized, we should probably not impact local usage of parameters. ILoggingEvent then becomes: I knew that somebody posted to one of the lists that he's using the Object[] feature in his code, but I couldn't remember who it was. Sorry, Ralph.
I can absolutely see Ralph's point but I'd consider it downright dangerous to defer the evaluation to Strings, especially in the case of asynchronous appenders. Take, for example, an object that is persisted using Hibernate. Calling toString() at the wrong time could very well lead to a LazyInitException. Or worse, what if an Object changes state (and string representation) between the logging call and the evaluation of the message? The message would essentially contain a lie. It would seem that the call to the logging method was different than it was in reality. Imagining debugging a problem like this is pure horror and would mean "forget logging, use a debugger" all over again :( And, last but not least, transforming to String[] immediately would also mean that any synchronization/locks would still be the way they (most likely;)) should be. Concerning your use case, Ralph, aren't you using an XLogger instance for that kind of magic? Couldn't you implement the "magic" part in the XLogger?
As for your suggestion to make LoggingEvent a dumb object with only getters and setters, since several LoggingEvent fields are computed lazily, the computation logic would need to be moved somewhere else, possibly into appenders, which seems like a bad idea to me. This logic would reside in the Logger, actually. It would, in my scenario, transform the Object[] to String[], resolve the ThreadName and extract CallerData while synchronously creating the actual LoggingEvent. ThreadName and CallerData evaluation would be activated/deactivated globally instead of in an actual appender. That way, there would be no performance impact if they are not requested. Evaluating the above-mentioned stuff lazily would produce wrong or inaccurate results if executed from a different thread, am I right? On the host where LoggingEvent data is generated, we just can't ignore lazy computation of certain LoggingEvent fields. Did I misinterpret your idea? Only the lazy bit. I'd eagerly (synchronously) evaluate everything that is required in Logger, but stuff like the creation of the formatted message is done lazily.
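Joern's eager-capture idea might be sketched roughly like this (class and method names are invented for illustration; this is not logback API):

```java
// Hypothetical sketch of Joern's proposal: the Logger eagerly captures
// thread-sensitive data while still on the calling thread, so the event
// can later be handled by any thread. Only message formatting stays lazy.
import java.util.Arrays;

public class EagerEventSketch {
    // Everything thread-sensitive is resolved synchronously at creation.
    static final class Event {
        final String messagePattern;
        final String[] arguments;   // already flattened to String
        final String threadName;    // captured on the calling thread

        Event(String messagePattern, Object[] args) {
            this.messagePattern = messagePattern;
            // Flatten Object[] -> String[] immediately, freezing argument state.
            this.arguments = Arrays.stream(args)
                    .map(String::valueOf)
                    .toArray(String[]::new);
            this.threadName = Thread.currentThread().getName();
        }
    }

    static Event createEvent(String pattern, Object... args) {
        return new Event(pattern, args);
    }

    public static void main(String[] args) {
        StringBuilder mutable = new StringBuilder("state-1");
        Event e = createEvent("object was {}", mutable);
        mutable.replace(0, mutable.length(), "state-2"); // later mutation
        // The event still reports the state at logging time.
        System.out.println(e.arguments[0]); // prints "state-1"
    }
}
```

Because the arguments are flattened before the logging call returns, a later mutation of the argument object cannot change what gets logged, which is exactly the "message would contain a lie" scenario this avoids.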
I'd have absolutely no problem donating all that code back to both Logback and SLF4J, although some work would be required to backport it to JDK < 1.5...
I appreciate the offer. You should perhaps consider filing a contributor license agreement. That wouldn't be a problem, I guess, if I'm not required to sell my soul or something ;) Where can I find it? Just let me know when you think it's necessary.
Joern.

On Feb 24, 2009, at 7:36 AM, Joern Huxhorn wrote:
Ceki Gulcu wrote:
Joern Huxhorn wrote:
Therefore I'd suggest to define void setArgumentArray(String[]) instead of void setArgumentArray(Object[]) (see http://jira.qos.ch/browse/LBCLASSIC-45 )
As Ralph mentioned, under certain circumstances it may be useful to pass object types other than strings as parameters to a logging event. In my previous proposal for ILoggingEvent the getArgumentArray() method returned String[]. I think this should be modified to Object[] because even if only strings are serialized, we should probably not impact local usage of parameters. ILoggingEvent then becomes: I knew that somebody posted to one of the lists that he's using the Object[] feature in his code, but I couldn't remember who it was. Sorry, Ralph.
I can absolutely see Ralphs point but I'd consider it downright dangerous to defer the evaluation to Strings, especially in case of asynchronous appenders.
Take, for example, an object that is persisted using Hibernate. Calling toString() at the wrong time could very well lead to a LazyInitException.
Or worse, what if an Object changes state (and string representation) between the logging call and the evaluation of the message? The message would essentially contain a lie. It would seem that the call to the logging method was different than it was in reality.
Imagining to debug a problem like this is pure horror and would mean "forget logging, use a debugger" all over again :(
And, last but not least, transforming to String[] immediately would also mean that any synchronization/locks would still be the way they (most likely;)) should be.
Concerning your use case, Ralph, aren't you using an XLogger instance for that kind of magic? Couldn't you implement the "magic" part in the XLogger.
Yes and no. The API would be a call like logger.logEvent(EventData data); EventData is really just a Map with a few extra methods. Under the hood the event data gets serialized to XML as the "message", but the EventData map is still passed as a parameter. Then when the Appender gets the LoggingEvent it can first check for the map being present. If it is, it can just use it, and the serialized XML just gets ignored. Otherwise we have to go through the expense of reconstructing the map from the message. If one of the out-of-the-box Appenders is used, then the map will be ignored and only the serialized map is recorded, but if someone wants to write a custom appender it will save quite a bit of overhead in not having to reconstruct the EventData map on every audit event. Ralph
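The check-the-map-first pattern Ralph describes could be sketched as follows (LoggingEvent, extractEventData and parseXml here are simplified stand-ins, not his actual code):

```java
// Hypothetical sketch of the pattern Ralph describes: the event carries the
// EventData map as an argument; a custom appender uses the map directly,
// while stock appenders only see the serialized message.
import java.util.HashMap;
import java.util.Map;

public class EventDataSketch {
    static final class LoggingEvent {
        final String message;      // e.g. the map serialized to XML
        final Object[] arguments;  // may carry the original map
        LoggingEvent(String message, Object[] arguments) {
            this.message = message;
            this.arguments = arguments;
        }
    }

    // A custom appender can avoid re-parsing the message when the map is present.
    static Map<String, String> extractEventData(LoggingEvent event) {
        if (event.arguments != null) {
            for (Object arg : event.arguments) {
                if (arg instanceof Map) {
                    @SuppressWarnings("unchecked")
                    Map<String, String> map = (Map<String, String>) arg;
                    return map; // cheap path: use the map as-is
                }
            }
        }
        return parseXml(event.message); // expensive fallback for stock appenders
    }

    static Map<String, String> parseXml(String xml) {
        // Placeholder for reconstructing the map from the serialized form.
        return new HashMap<>();
    }

    public static void main(String[] args) {
        Map<String, String> data = new HashMap<>();
        data.put("user", "alice");
        LoggingEvent e = new LoggingEvent("<event user=\"alice\"/>", new Object[]{data});
        System.out.println(extractEventData(e).get("user")); // prints "alice"
    }
}
```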

Thank you for sharing this example which illustrates the point quite well. Ralph Goers wrote:
Yes and no. The API would be a call like logger.logEvent(EventData data); EventData is really just a Map with a few extra methods. Under the hood the event data gets serialized to XML as the "message", but the EventData map is still passed as a parameter. Then when the Appender gets the LoggingEvent it can first check for the map being present. If it is, it can just use it, and the serialized XML just gets ignored. Otherwise we have to go through the expense of reconstructing the map from the message. If one of the out-of-the-box Appenders is used, then the map will be ignored and only the serialized map is recorded, but if someone wants to write a custom appender it will save quite a bit of overhead in not having to reconstruct the EventData map on every audit event. Ralph
Ralph

Darn. I started to implement this and discovered that LocationAwareLogger doesn't have a method that accepts the object array, so I am stuck serializing the data anyway - unless you'd care to add public void log(Marker marker, String fqcn, int level, String message, Object[] argArray, Throwable t); to the interface and the various implementations. Ralph On Feb 24, 2009, at 10:25 AM, Ceki Gulcu wrote:
Thank you for sharing this example which illustrates the point quite well.
Ralph Goers wrote:
Yes and no. The API would be a call like logger.logEvent(EventData data); EventData is really just a Map with a few extra methods. Under the hood the event data gets serialized to XML as the "message" but the EventData map is still passed as a parameter. Then when the Appender gets the LoggingEvent it can first check for the map being present. If it is it can just use it and the serialized XML just gets ignored. Otherwise we have to go through the expense of reconstructing the map from the message. If one of the out-of-the box Appenders is used then the map will be ignored and only the serialized map is recorded, but if someone wants to write a custom appender it will save quite a bit of overhead in not having to reconstruct the EventData map on every audit event. Ralph
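Ralph's proposed overload could be sketched as a self-contained interface (placeholder types are used instead of the real org.slf4j ones, and the Recorder class is purely illustrative):

```java
// Sketch of Ralph's proposed addition to LocationAwareLogger, using
// placeholder types so the example is self-contained.
public class LocationAwareSketch {
    interface ProposedLocationAwareLogger {
        // Existing form: only the message and throwable reach the backend.
        void log(Object marker, String fqcn, int level, String message, Throwable t);

        // Proposed overload: also hands the raw argument array to the backend,
        // so facades need not serialize their data into the message eagerly.
        void log(Object marker, String fqcn, int level, String message,
                 Object[] argArray, Throwable t);
    }

    // Minimal implementation showing that a backend now receives the arguments.
    static class Recorder implements ProposedLocationAwareLogger {
        Object[] lastArgs;

        public void log(Object m, String fqcn, int level, String msg, Throwable t) {
            log(m, fqcn, level, msg, null, t);
        }

        public void log(Object m, String fqcn, int level, String msg,
                        Object[] argArray, Throwable t) {
            lastArgs = argArray;
        }
    }

    public static void main(String[] args) {
        Recorder r = new Recorder();
        r.log(null, "com.example.Facade", 20, "user {} logged in",
              new Object[]{"alice"}, null);
        System.out.println(r.lastArgs[0]); // prints "alice"
    }
}
```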

Joern, Accepting parameters of type Object instead of String opens the door for nasty bugs, as you point out. At the same time, it also constitutes an important extension point for logback within the limits imposed by the SLF4J API. I think that the org.slf4j.Logger class should contain a warning (in red) about the arguments being evaluated asynchronously.

At present, we serialize a stringified version of the arguments. However, we could imagine that certain types, such as maps and collections, or other well-defined types, be serialized as is (without transformation into String). Thus, not only would it be possible to take advantage of specific object types locally but also remotely.

Regarding the lazy evaluation of LoggingEvent fields such as caller data and others, the current approach is highly dynamic. For example, if an appender named A requires caller data but is not on the processing path of the current logging event, then caller data will not be evaluated. Setting caller data evaluation as an option at the level of the logger context is not as flexible. Moreover, if any single appender performs the computation of a given field in a logging event, that field becomes available to all subsequent appenders. I tend to think that the way the logging event does lazy evaluation of fields works quite well.

For the contributor license agreement, please see: http://logback.qos.ch/license.html and http://logback.qos.ch/cla.txt Cheers, Joern Huxhorn wrote:
I can absolutely see Ralphs point but I'd consider it downright dangerous to defer the evaluation to Strings, especially in case of asynchronous appenders.
Take, for example, an object that is persisted using Hibernate. Calling toString() at the wrong time could very well lead to a LazyInitException.
Or worse, what if an Object changes state (and string representation) between the logging call and the evaluation of the message? The message would essentially contain a lie. It would seem that the call to the logging method was different than it was in reality.
Imagining to debug a problem like this is pure horror and would mean "forget logging, use a debugger" all over again :(
And, last but not least, transforming to String[] immediately would also mean that any synchronization/locks would still be the way they (most likely;)) should be.
Concerning your use case, Ralph, aren't you using an XLogger instance for that kind of magic? Couldn't you implement the "magic" part in the XLogger.
As for your suggestion to make LoggingEvent a dumb object with only getters and setters, since several LoggingEvent fields are computed lazily, the computation logic would need to be moved somewhere else, possibly into appenders, which seems like a bad idea to me.
This logic would reside in the Logger, actually. It would, in my scenario, transform the Object[] to String[], resolve the ThreadName and extract CallerData while synchronously creating the actual LoggingEvent.
ThreadName and CallerData evaluation would be activated/deactivated globally instead of in an actual appender. That way, there would be no performance impact if they are not requested.
Evaluating the above mentioned stuff lazily would produce wrong or inaccurate results if executed from a different thread, am I right?
On the host where LoggingEvent data is generated, we just can't ignore lazy computation of certain LoggingEvent fields. Did I misinterpret your idea?
Only the lazy bit. I'd eagerly (synchronously) evaluate everything that is required in Logger but stuff like the creation of the formatted message is done lazily.
I'd have absolutely no problem donating all that code back to both Logback and > SLF4J, although some work would be required to backport it to jdk < 1.5...
I appreciate the offer. You should perhaps consider filing a contributor license agreement. That wouldn't be a problem, I guess, if I'm not required to sell my soul or something ;) Where can I find it? Just let me know when you think it's necessary.
Joern.
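The lazy, memoized field computation Ceki defends might look roughly like this (a simplified sketch, not the actual LoggingEvent code):

```java
// Sketch of lazy, memoized field computation: the first appender that needs
// caller data pays for it; later appenders on the same event reuse the result.
public class LazyFieldSketch {
    static int computations = 0; // instrumentation for the demo only

    static final class LoggingEvent {
        private StackTraceElement[] callerData; // computed on demand

        StackTraceElement[] getCallerData() {
            if (callerData == null) {           // memoize on first access
                computations++;
                callerData = new Throwable().getStackTrace();
            }
            return callerData;
        }
    }

    public static void main(String[] args) {
        LoggingEvent event = new LoggingEvent();
        event.getCallerData(); // appender A needs caller data: computed once
        event.getCallerData(); // appender B reuses it for free
        System.out.println(computations); // prints 1
    }
}
```

This is the property Ceki refers to: if no appender on the event's processing path ever calls the getter, the expensive stack-trace walk is never performed at all.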

Hi Ceki On 24.02.2009, at 20:00, Ceki Gulcu wrote:
Joern,
Accepting parameters of type Object instead of String opens the door for nasty bugs as you point out. At the same time, it also constitutes an important extension point for logback within the limits imposed by the SLF4J API.
At this point I'm wondering if such an extension is really a good idea at all. Don't use a hammer to screw in a screw. Ok, it would kind of work, but you'd be better off using a screwdriver instead. The hammer is slf4j/logback, a magnificent framework for application logging. The screwdriver would be an auditing framework (see http://audit.qos.ch/apidocs/index.html :D). A pretty good indicator that slf4j is, in fact, the wrong tool for the job is that log levels are mostly meaningless in an audit context.
I think that the org.slf4j.Logger class should contain a warning (in red) about the arguments being evaluated asynchronously.
But, uhm, don't get me wrong but... isn't the SLF4J concept of parameterized logging kind of pointless with that warning in place? What am I supposed to do about that warning? I mean, as the user of slf4j? I'd like to log the state of an object, so I call logger.debug("{}", object). With the warning above I have not the slightest idea when the event will actually be handled. It could happen synchronously or in two days (rhetorical exaggeration). In the next line of my own code I change something in that object and log it again in the same way. Essentially, 4 things can happen at this point:
1.) everything works as *I* would expect it to, i.e. synchronous logging with the two different states of the object.
2.) two events are created, both containing the state of the object after my second change.
3.) neither 1.) nor 2.), but *some* other state is logged because I'll keep changing my object and the events are handled *much* later.
4.) the events contain senseless garbage because my object (like most) isn't thread-safe and the evaluation of toString() happens while the object is being changed in my thread.
I'd seriously go nuts in cases 2.) to 4.) and you could probably hear my scream in Switzerland ;) (You know, I'm a bit touchy about debugging tools that lie to me... The Codewarrior debugger once lied to me about the content of variables and I had a screenshot to prove it but, unfortunately, I lost it while leaving the company...) My only chance to prevent this uncertainty would be logger.debug("{}", object.toString()), which isn't much better than the log4j methods, right? I'm really wondering why nobody else panics about those scenarios. Such a change could potentially break lots of existing code!
At present, we serialize a stringified version of the arguments. However, we could imagine that certain types, such as maps and collections, or other well-defined types, be serialized as is (without transformation into String). Thus, not only would it be possible to take advantage of specific object types locally but also remotely.
This would complicate logback. A lot. It also carries the real risk of creating what we call an "eierlegende Wollmilchsau" in Germany. That's a fantasy animal that produces eggs, wool, milk and ham. There just isn't any suitable translation for it that does the original justice. dict.leo.org translates it to "all-in-one device suitable for every purpose" but it's just not the same. The idea is that you simply won't be able to create such a thing and, therefore, would inevitably fail trying. Instead of trying to be everything for everyone, just stay focused on the main topic.
Regarding the lazy evaluation of LoggingEvent fields such as caller data and others, the current approach is highly dynamic. For example, if an appender named A requires caller data but is not on the processing path of the current logging event, then caller data will not be evaluated. Setting caller data evaluation as an option at the level of the logger context is not as flexible.
It's not as flexible but also more deterministic. I'd know, for sure, what data will be obtained. If I'm not interested in caller data then no appender will be able to force it on me. The appender would be required to be able to cope with missing information instead.
Moreover, if any single appender performs the computation of a given field in logging event, that field become available for all subsequent appenders.
In my case, the field will be available to all appenders if it is enabled in the logger context. It would simplify a lot. It would, for example, render the comment in AsynchronousAppenderBase, that ThreadName and CallerData must be obtained synchronously if required, unnecessary ;) Appenders would just use the events as they receive them - without any worries...
I tend to think that the way logging event does lazy evaluation of fields works quite well.
That's correct with the current appenders, but asynchronous appenders would fail horribly with the current implementation because both caller data and thread name would be wrong. Both would return the data of the thread executing the respective getter method instead of the correct data from the originating thread... If the lazy initialization of those fields were done synchronously while the event is created, the event would have a real chance of being a candidate for immutability (see my previous mail). I'm not talking about the lazy formatting of messagePattern + arguments to message, because that wouldn't be a problem: the worst case is formatting more than once in case of threading chaos. Since both messagePattern and arguments won't change, nothing bad can or will happen. It would just be less efficient, but nobody would get "hurt" in the process... Joern.
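The hazard Joern describes can be demonstrated in a few lines (a toy event, not logback code):

```java
// Small demonstration of the hazard: if the thread name is resolved lazily,
// a consumer thread sees its own name, not the logging thread's name.
public class LazyThreadNameHazard {
    static final class LazyEvent {
        // Lazily resolved: whichever thread calls this "wins".
        String getThreadName() {
            return Thread.currentThread().getName();
        }
    }

    public static void main(String[] args) throws Exception {
        LazyEvent event = new LazyEvent(); // created on "main"
        final String[] seen = new String[1];
        Thread worker = new Thread(() -> seen[0] = event.getThreadName(), "worker");
        worker.start();
        worker.join();
        // The event was created on "main" but reports "worker".
        System.out.println(seen[0]); // prints "worker"
    }
}
```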

On Feb 24, 2009, at 3:32 PM, Joern Huxhorn wrote:
Hi Ceki
On 24.02.2009, at 20:00, Ceki Gulcu wrote:
Joern,
Accepting parameters of type Object instead of String opens the door for nasty bugs as you point out. At the same time, it also constitutes an important extension point for logback within the limits imposed by the SLF4J API.
At this point I'm wondering if such an extension is really a good idea at all.
Don't use a hammer to screw in a screw. Ok, it would kind of work, but you'd be better off using a screwdriver instead.
The hammer is slf4j/logback, a magnificent framework for application logging. The screwdriver would be an auditing framework (see http://audit.qos.ch/apidocs/index.html :D). A pretty good indicator that slf4j is, in fact, the wrong tool for the job, is that log levels are mostly meaningless in an audit context.
This discussion is now getting very much off topic and probably should move to slf4j-dev, so I've addressed it to both lists. Feel free to drop logback-dev. While it is true that levels are meaningless, using the same API, configuration, Appenders, etc. has considerable value. The differences I have between "normal" logging and "audit" logging are:
1. Events should always be logged. Filtering should only be used to determine which Appender to use.
2. The log record typically consists of several data elements, not just a text message.
3. Only a single Logger is required specifically for the use of event logging.
No offense to Ceki, but I believe audit.qos.ch is taking a slightly different approach than what I am doing. I've been doing this quite a while in my own proprietary code and see no need to keep it that way, especially since it is so simple. We simply leverage the MDC for all our request context information (i.e. these appear in every audit event that might occur on a request) and use the EventData to add information about the specific event. That's it. Where the value-add is, is in creating Appenders that can provide guaranteed delivery, since that is a requirement of most applications and that is where stuff can get really complicated. Ralph
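The MDC-plus-EventData combination Ralph outlines could be sketched like this (both classes are simplified stand-ins written for this example; EventData here is just a map, and the MDC is modeled as a thread-local map rather than the real org.slf4j.MDC):

```java
// Sketch of the audit approach: request-scoped context comes from an MDC-like
// thread-local map, and each audit event adds its own EventData entries.
import java.util.HashMap;
import java.util.Map;

public class AuditSketch {
    // Stand-in for the real MDC: one map per thread.
    static final ThreadLocal<Map<String, String>> MDC =
            ThreadLocal.withInitial(HashMap::new);

    // An audit record is the request context plus the event-specific data.
    static Map<String, String> auditEvent(Map<String, String> eventData) {
        Map<String, String> record = new HashMap<>(MDC.get());
        record.putAll(eventData);
        return record;
    }

    public static void main(String[] args) {
        MDC.get().put("requestId", "r-42");   // set once per request
        Map<String, String> data = new HashMap<>();
        data.put("action", "transfer");        // event-specific detail
        Map<String, String> record = auditEvent(data);
        System.out.println(record.get("requestId") + " " + record.get("action"));
        // prints "r-42 transfer"
    }
}
```

The division of labor is the interesting part: the MDC entries ride along with every event on the request, while the per-event map carries only what is specific to that audit action.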

Hello Joern, I see your hammer and screwdriver analogy. However, while the hammer-screwdriver analogy is reasonable and valid in the case of extending slf4j/logback-classic for auditing purposes, it is not *always* valid. More importantly, even if the analogy is valid, it does not necessarily mean that Ralph's usage is wrong. It may be something that was not exactly foreseen when slf4j was designed, but it is not wrong. In abstract terms, allowing Object-typed argument arrays is intended as a last-ditch extension point.

There are many data points contained in a logging event. There are fairly fixed and structured data points such as the time, logger, level and the exception. There are less structured data points such as the MDC and logger context properties. The logging event message is a special case, in the sense that it can hold any string value and, assuming an object-to-string encoding mechanism, it can be used to transport objects. (Logback does not provide any such encodings, nor does it explicitly support such a transport mechanism.) The only remaining unstructured data point is the argument array, typed as Object[].

Assume you write applications for a supermarket chain called SMart. Applications deployed at SMart mostly deal with objects of type Article. You can either create a new logback module called logback-article which deals exclusively with objects of type Article, or you can extend logback-classic with special appenders and converters which can deal with arguments of type Article. If you choose the latter, you will be using argumentArray as an extension point to support special use cases.

Object-typed argument arrays can be misused and abused. They are an uncontrolled extension point which can be described as "eierlegende Wollmilchsau". Nevertheless, they allow users to extend logback-classic to support special use cases. While Object-typed argument arrays are dangerous, they are not senseless. As for shouting all the way to Switzerland.
The residency regulations in Vaud, the canton where I reside, forbid loud noises between 10PM and 6AM. (I couldn't resist evoking the cliché.)
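The SMart/Article scenario could be sketched as a converter that special-cases Article instances found in the argument array (Article and the convert method are hypothetical, invented for this illustration):

```java
// Sketch of the SMart/Article scenario: a custom converter scans the argument
// array for Article instances, which is the Object[] extension point at work.
public class ArticleConverterSketch {
    static final class Article {
        final String sku;
        final int priceCents;
        Article(String sku, int priceCents) {
            this.sku = sku;
            this.priceCents = priceCents;
        }
    }

    // A logback-style converter would receive the whole event; here we take
    // only the argument array, since that is the extension point discussed.
    static String convert(Object[] argumentArray) {
        StringBuilder out = new StringBuilder();
        for (Object arg : argumentArray) {
            if (arg instanceof Article) { // the Article-aware special case
                Article a = (Article) arg;
                out.append("article[sku=").append(a.sku)
                   .append(", price=").append(a.priceCents).append("c]");
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(convert(new Object[]{"ignored", new Article("A-17", 299)}));
        // prints "article[sku=A-17, price=299c]"
    }
}
```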
That's correct with the current appenders but asynchronous appenders would fail horribly with the current implementation because both caller data and thread name would be wrong. Both would return the data of the thread executing the respective getter method instead of the correct data from the originating thread...
The prepareForDeferredProcessing() method in the LoggingEvent class addresses that problem. Whenever a logging event is about to be serialized or transferred to another thread, you, as the author of an appender, are supposed to call loggingEvent.prepareForDeferredProcessing(). Caller data is pretty interesting because it is not and cannot be covered by prepareForDeferredProcessing(). However, all hope is not lost. Assuming a logging event knows about the thread that created it, if at a later time it is asked to compute caller data, it can refuse to do so if the current thread is not the original thread. It's a relatively easy check to add in LoggingEvent. Joern Huxhorn wrote:
At this point I'm wondering if such an extension is really a good idea at all.
Don't use a hammer to screw in a screw. Ok, it would kind of work, but you'd be better off using a screwdriver instead.
The hammer is slf4j/logback, a magnificent framework for application logging. The screwdriver would be an auditing framework (see http://audit.qos.ch/apidocs/index.html :D). A pretty good indicator that slf4j is, in fact, the wrong tool for the job, is that log levels are mostly meaningless in an audit context.
I think that the org.slf4j.Logger class should contain a warning (in red) about the arguments being evaluated asynchronously.
But, uhm, don't get me wrong but... isn't the SLF4J concept of parameterized logging kind of pointless with that warning in place?
What am I supposed to do about that warning? I mean as the user of slf4j?
I'd like to log the state of an object, so I call logger.debug("{}", object). With the warning above I have not the slightest idea when the event will actually be handled. It could happen synchronously or in two days (rhetorical exaggeration). In the next line of my own code I change something in that object and log it again in the same way.
Essentially 4 things can happen at this point:
1.) everything works as *I* would expect it to, i.e. synchronous logging with the two different states of the object.
2.) two events are created, both containing the state of the object after my second change.
3.) neither 1.) nor 2.) but *some* other state is logged because I'll keep changing my object and the events are handled *much* later.
4.) the events would contain some senseless garbage because my object (as most) isn't thread-safe and the evaluation of toString() would happen while the object is being changed in my thread.
I'd seriously go nuts in the cases 2.) to 4.) and you could probably hear my scream in Switzerland ;) (You know, I'm a bit touchy about debugging tools that lie to me... The Codewarrior debugger once lied to me about the content of variables and I had a screenshot to prove it but, unfortunately, I lost it while leaving the company...)
My only chance to prevent this uncertainty would be logger.debug("{}", object.toString()) which isn't much better than the log4j methods, right?
I'm really wondering why nobody else panics about those scenarios. Such a change could potentially break lots of existing code!
At present, we serialize a stringified version of the arguments. However, we could imagine that certain types, such as maps and collections, or other well-defined types, be serialized as is (without transformation into String). Thus, not only would it be possible to take advantage of specific object types locally but also remotely.
This would complicate logback. A lot.
It also has the real risk of creating what we call "eierlegende Wollmilchsau" in Germany. That's a fantasy animal that produces eggs, wool, milk and ham. There just isn't any suitable translation for it that does the original justice. dict.leo.org translates it to "all-in-one device suitable for every purpose" but it's just not the same. The idea is that you simply won't be able to create such a thing and, therefore, would inevitably fail trying.
Instead of trying to be everything for everyone just stay focused on the main topic.
Regarding the lazy evaluation of LoggingEvent fields such as caller data and others, the current approach is highly dynamic. For example, if an appender named A requires caller data but is not on the processing path of the current logging event, then caller data will not be evaluated. Setting caller data evaluation as an option at the level of the logger context is not as flexible.
It's not as flexible but also more deterministic. I'd know, for sure, what data will be obtained. If I'm not interested in caller data then no appender will be able to force it on me. The appender would be required to be able to cope with missing information instead.
Moreover, if any single appender performs the computation of a given field in logging event, that field become available for all subsequent appenders.
In my case, the field will be available to all appenders if it is enabled in the logger context. It would simplify a lot. It would, for example, render the comment in AsynchronousAppenderBase, that ThreadName and CallerData must be obtained synchronously if required, unnecessary ;) Appenders would just use the events as they receive them - without any worries...
I tend to think that the way logging event does lazy evaluation of fields works quite well.
That's correct with the current appenders, but asynchronous appenders would fail horribly with the current implementation because both caller data and thread name would be wrong. Both would return the data of the thread executing the respective getter method instead of the correct data from the originating thread... If the lazy initialization of those fields were done synchronously while the event is created, the event would have a real chance of being a candidate for immutability (see my previous mail).
I'm not talking about the lazy formatting of messagePattern + arguments to message, because that wouldn't be a problem: the worst case is formatting more than once in case of threading chaos. Since both messagePattern and arguments won't change, nothing bad can or will happen. It would just be less efficient, but nobody would get "hurt" in the process...
Joern.
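Ceki's two points, eager capture via prepareForDeferredProcessing() and a thread-identity guard for caller data, could be sketched together (the method names follow the RFC; the bodies are illustrative, not actual logback code):

```java
// Sketch: prepareForDeferredProcessing() captures thread-sensitive fields
// eagerly, and caller-data computation refuses to run on a foreign thread,
// where the stack trace would be meaningless for this event.
public class DeferredProcessingSketch {
    static final class LoggingEvent {
        private final Thread origin = Thread.currentThread();
        private String threadName;              // lazy by default
        private StackTraceElement[] callerData;

        // Called by an appender before serializing or handing off the event.
        void prepareForDeferredProcessing() {
            threadName = origin.getName();
        }

        String getThreadName() {
            return threadName != null ? threadName
                                      : Thread.currentThread().getName();
        }

        // Refuse to compute caller data from a foreign thread.
        StackTraceElement[] getCallerData() {
            if (callerData == null && Thread.currentThread() == origin) {
                callerData = new Throwable().getStackTrace();
            }
            return callerData; // may be null on a foreign thread
        }
    }

    public static void main(String[] args) throws Exception {
        LoggingEvent event = new LoggingEvent();
        event.prepareForDeferredProcessing();    // before crossing threads
        Thread t = new Thread(() -> {
            System.out.println(event.getThreadName());         // prints "main"
            System.out.println(event.getCallerData() == null); // prints "true"
        }, "worker");
        t.start();
        t.join();
    }
}
```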

Ceki Gulcu skrev:
The prepareForDeferredProcessing() method in the LoggingEvent class addresses that problem. Whenever a logging event is about to be serialized or transferred to another thread, you, as an author of an appender, are supposed to call loggingEvent.prepareForDeferredProcessing().
I think that it would be appropriate to provide a flush mechanism on an appender (for lack of a better word) which is accessible through the configuration or pattern or code. The flush mechanism would immediately run the String-flattening of the arguments, making them safe from all these issues. If NOT invoked, the flattening would happen as lazily as possible. I believe this behaviour should be explicitly enabled, keeping the default of immediately Stringifying when the event has been accepted for logging. -- Thorbjørn Ravn Andersen "...plus... Tubular Bells!"
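Thorbjoern's suggestion amounts to a per-appender switch; a minimal sketch (the flag name and classes are invented for illustration):

```java
// Sketch of a per-appender flattening policy: by default, arguments are
// stringified as soon as the event is accepted; a switch defers it.
public class FlattenPolicySketch {
    static String[] flatten(Object[] args) {
        String[] out = new String[args.length];
        for (int i = 0; i < args.length; i++) {
            out[i] = String.valueOf(args[i]);
        }
        return out;
    }

    static final class Appender {
        // Default per Thorbjoern: stringify as soon as the event is accepted.
        boolean deferFlattening = false;

        Object[] accept(Object[] args) {
            return deferFlattening ? args : flatten(args);
        }
    }

    public static void main(String[] args) {
        Appender a = new Appender();
        Object[] stored = a.accept(new Object[]{new StringBuilder("x")});
        System.out.println(stored[0] instanceof String); // prints "true"
    }
}
```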

Thorbjoern, Which data point in logging event are you talking about? Thorbjoern Ravn Andersen wrote:
Ceki Gulcu skrev:
The prepareForDeferredProcessing() method in the LoggingEvent class addresses that problem. Whenever a logging event is about to be serialized or transferred to another thread, you, as an author of an appender, are supposed to call loggingEvent.prepareForDeferredProcessing().
I think that it would be appropriate to provide a flush mechanism on an appender (for lack of a better word) which is accessible through the configuration or pattern or code.
The flush mechanism would immediately run the String-flattening of the arguments making them safe from all these issues. If NOT invoked, the flattening would happen as lazily as possible.
I believe this behaviour should be explicitly enabled, keeping the default of immediately Stringifying when the event has been accepted for logging.
-- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

Ceki Gulcu wrote:
Hello Joern,
I see your hammer and screwdriver analogy. However, while the hammer-screwdriver analogy is reasonable and valid in the case of extending slf4j/logback-classic for auditing purposes, it is not *always* valid. More importantly, even if the analogy is valid, it does not necessarily mean that Ralph's usage is wrong. It may be something that was not exactly foreseen when slf4j was designed, but it is not wrong.
I wasn't trying to say that Ralph is doing anything wrong and I sincerely hope that he does not have that impression. I have no idea about audit logging at all so both Ralph and you will probably know very well what you are doing. He is using undocumented behavior, though, if I'm not entirely mistaken.
In abstract terms, allowing Object-typed argument arrays is intended as a last ditch extension point.
There are many data points contained in a logging event. There are fairly fixed and structured points such as the time, logger, level and the exception. There are less structured data points such as the MDC and logger context properties. The logging event message is a special case, in the sense that it can hold any string value and, assuming an object-to-string encoding mechanism, it can be used to transport objects. (Logback does not provide any such encodings, nor does it explicitly support such a transport mechanism.) The only remaining unstructured data point is the argument array, typed as Object[].

I disagree. I just implemented an enhanced version of the log4j NDC myself, as you suggested on the SLF4J mailing list. It's implemented quite similarly to the logback MDC but isn't serialized/used by any standard appender. My multiplex-appenders, on the other hand, *are* evaluating it, thus adding an additional data point to the LoggingEvent. Wouldn't a similar extension be possible in Ralph's case?

Assume you write applications for a supermarket chain called SMart. Applications deployed at SMart mostly deal with objects of type Article. You can either create a new logback module called logback-article which deals exclusively with objects of type Article, or you can extend logback-classic with special appenders and converters which can deal with arguments of type Article. If you choose the latter, you will be using argumentArray as an extension point to support special use cases. Object-typed argument arrays can be misused and abused. They are an uncontrolled extension point which can be described as an "eierlegende Wollmilchsau". Nevertheless, they allow users to extend logback-classic to support special use cases. While Object-typed argument arrays are dangerous, they are not senseless.

I never said they are senseless; in fact, I absolutely see that there *could* be a very valid use case - even though I can't think of one right now.
In the above case, I'd simply use the "normal" logger and live with it.
I just say that the LoggingEvent should (and I would even say must) be completely initialized at the time of the log statement execution to prevent mayhem like false log statements or even worse situations. Perhaps an additional hook in Logger is needed to support pluggable behavior (like Processor in addition to Appender - but this is just a spontaneous idea) but that, too, would have to be executed synchronously because asynchronous behavior is simply undefined in most cases.
As for shouting all the way to Switzerland. The residency regulations in Vaud the Canton where I reside, forbid loud noises between 10PM and 6AM. (I couldn't resist evoking the cliché.) LOL, I'll try to time it properly :D
That's correct with the current appenders but asynchronous appenders would fail horribly with the current implementation because both caller data and thread name would be wrong. Both would return the data of the thread executing the respective getter method instead of the correct data from the originating thread...
The prepareForDeferredProcessing() method in the LoggingEvent class addresses that problem. Whenever a logging event is about to be serialized or transferred to another thread, you, as an author of an appender, are supposed to call loggingEvent.prepareForDeferredProcessing().
Caller data is pretty interesting because it is not, and cannot be, covered by prepareForDeferredProcessing(). However, all hope is not lost. Assuming a logging event knows about the thread that created it, if at a later time it is asked to compute caller data, it can refuse to do so if the current thread is not the original thread. It's a relatively easy check to add in LoggingEvent.

While all that sounds reasonable, it is way more complex than deciding globally whether thread name and/or caller data should be evaluated or not. There are just fewer surprises for everyone involved. It's also guaranteed to have the same performance as the more flexible version. If someone enabled caller data and isn't actually using it in any appender, then it will be slower, but this would essentially be a misconfiguration, wouldn't it?
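The refuse-off-thread check could look roughly like the sketch below. The types are simplified stand-ins (not the real LoggingEvent), and caller data is approximated with a plain stack trace:

```java
// Sketch of the "refuse to compute caller data off-thread" idea, with
// hypothetical stand-in types rather than logback's real LoggingEvent.
public class CallerDataGuard {
    private final Thread originatingThread = Thread.currentThread();
    private StackTraceElement[] callerData;

    /** Returns caller data, or null if first asked from a foreign thread. */
    public StackTraceElement[] getCallerData() {
        if (callerData == null) {
            if (Thread.currentThread() != originatingThread) {
                return null; // too late: the original stack is gone
            }
            callerData = new Throwable().getStackTrace();
        }
        return callerData;
    }

    public static void main(String[] args) throws InterruptedException {
        CallerDataGuard event = new CallerDataGuard();
        System.out.println("same thread ok:       " + (event.getCallerData() != null));

        CallerDataGuard lazyEvent = new CallerDataGuard();
        Thread other = new Thread(() ->
            System.out.println("other thread refused: " + (lazyEvent.getCallerData() == null)));
        other.start();
        other.join();
    }
}
```

Both lines print true: the check is just a reference comparison against the creating thread, so it costs almost nothing on the happy path.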
Ok, I see a downside. It's currently possible to evaluate caller data only on a small subset of the events, depending on the attached appender... that wouldn't be possible anymore... hmmmm... I'm wondering how others are using Logback... I've always enabled caller data and can live with the fact that it's somewhat slower if actually logging something but very fast if no log event is actually generated... I'll think about that some more... Joern.

On Feb 25, 2009, at 6:11 AM, Joern Huxhorn wrote:
I wasn't trying to say that Ralph is doing anything wrong and I sincerely hope that he does not have that impression. I have no idea about audit logging at all so both Ralph and you will probably know very well what you are doing. He is using undocumented behavior, though, if I'm not entirely mistaken.
First, I prefer to call it "Event Logging" even though it is more or less the same thing. It really doesn't use undocumented behavior. It serializes the data into an XML string and passes that in the message. If the arguments are not available then the "message" is deserialized. But if they are there it takes advantage of that for better performance. Feel free to take a look at the actual code. I committed it to slf4j-ext last night - although it needs cleaning up as some of the style conventions are wrong. I don't take offense at technical discussions on mailing lists. A lot can get misinterpreted. Instead, I suggest you take a look at the code and see if you think it is a horrible idea. What I'm currently actually using, but would replace with this, does have more knowledge of Logback, specifically so it can pass the objects to the Appender. Unfortunately (at least I think so), SLF4J's LocationAwareLogger doesn't provide a method to pass that information along.
In abstract terms, allowing Object-typed argument arrays is intended as a last ditch extension point.
There are many data points contained in a logging event. There are fairly fixed and structured points such as the time, logger, level and the exception. There are less structured data points such as the MDC, and logger context properties. The logging event message is a special case, in the sense that it can hold any string value and assuming a object-to-string encoding mechanism, it can be used to transport objects. (Logback does not provide any such encodings nor does it explicitly support such a transport mechanism.) The only remaining unstructured data point is the argument array, typed as Object[]. I disagree. I just implemented an enhanced version of the log4j NDC myself as you suggested on the SLF4J mailinglist. It's implemented quite similar to the logback MDC but isn't serialized/used by any standard appender. My multiplex-appenders, on the other hand, *are* evaluating it, thus adding an additional data point to the LoggingEvent. Wouldn't a similar extension be possible in Ralphs case?
I assume an NDC is based on a ThreadLocal? This works well for data that lasts the lifetime of the request in progress. It is dangerous to use for data for a specific event as that data must be cleared after the event is completed - without disturbing other data that might have been stored in it. Ralph

Ralph Goers wrote:
I don't take offense at technical discussions on mailing lists. A lot can get misinterpreted. Instead, I suggest you take a look at the code and see if you think it is a horrible idea. What I'm currently actually using, but would replace with this, does have more knowledge of Logback, specifically so it can pass the objects to the Appender. Unfortunately (at least I think so), SLF4J's LocationAwareLogger doesn't provide a method to pass that information along.
I was meaning to ask. Why do you need support from LocationAwareLogger for argument arrays if you are going to use logback-classic underneath SLF4J?
I assume an NDC is based on a ThreadLocal? This works well for data that lasts the lifetime of the request in progress. It is dangerous to use for data for a specific event as that data must be cleared after the event is completed - without disturbing other data that might have been stored in it.
Precisely. If push comes to shove, referring to my previous example using objects of type "Article", we could write

MDC.put("article", article); // push data
logger.info("article modified");
MDC.remove("article"); // mandatory clean up

However, this is less convenient than writing

logger.info("article modified", article);

Note the lack of an anchor in the message. This is to emphasize that we are using the argumentArray as an extension point, circumventing usual message formatting. It is the responsibility of "article"-specific appenders to process articles. This is similar to the way MDC data is not always necessarily printed. In the previous example, we are using the MDC as an extension point, albeit a clumsy one.
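An "article"-specific appender built on this extension point might look like the following sketch. All names here (Article, SimpleEvent, extractArticles) are illustrative, not logback API; the point is just that such an appender scans the argument array for objects it recognizes, while ordinary appenders ignore them:

```java
// Sketch of the argumentArray-as-extension-point idea: an article-aware
// component picks Article instances out of the argument array.
import java.util.ArrayList;
import java.util.List;

public class ArticleAppenderDemo {
    static class Article {
        final String sku;
        Article(String sku) { this.sku = sku; }
    }

    static class SimpleEvent { // stand-in for a logging event
        final String message;
        final Object[] argumentArray;
        SimpleEvent(String message, Object... args) {
            this.message = message;
            this.argumentArray = args;
        }
    }

    /** Collects any Article arguments; everything else is left alone. */
    static List<Article> extractArticles(SimpleEvent event) {
        List<Article> result = new ArrayList<>();
        for (Object arg : event.argumentArray) {
            if (arg instanceof Article) {
                result.add((Article) arg);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Note the lack of an anchor in the message, as in the mail above.
        SimpleEvent event = new SimpleEvent("article modified", new Article("SKU-42"));
        System.out.println(extractArticles(event).get(0).sku);
    }
}
```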
Ralph
-- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

Ceki Gulcu wrote:
Ralph Goers wrote:
I don't take offense at technical discussions on mailing lists. A lot can
get misinterpreted. Good to hear.
Instead, I suggest you take a look at the code and see if you think it is a horrible idea.

I will, but not right now. I'm pretty sure it's not a horrible idea, though ;)

What I'm currently actually using, but would replace with this, does have more knowledge of Logback, specifically so it can pass the objects to the Appender. Unfortunately (at least I think so), SLF4J's LocationAwareLogger doesn't provide a method to pass that information along.
I was meaning to ask. Why do you need support from LocationAwareLogger for argument arrays if you are going to use logback-classic underneath SLF4J?
I assume an NDC is based on a ThreadLocal? This works well for data that lasts the lifetime of the request in progress. It is dangerous to use for data for a specific event as that data must be cleared after the event is completed - without disturbing other data that might have been stored in it.
Precisely. If push comes to shove, referring to my previous example using object of type "Article", we could write
MDC.put("article", article); // push data
logger.info("article modified");
MDC.remove("article"); // mandatory clean up
However, this is less convenient than writing,
logger.info("article modified", article);
Note the lack of an anchor in the message. This is to emphasize that we are using the argumentArray as an extension point, circumventing usual message formatting. It is the responsibility of "article"-specific appenders to process articles. This is similar to the way MDC data is not always necessarily printed. In the previous example, we are using the MDC as an extension point, albeit a clumsy one.
Using Ralph's example: logger.logEvent(EventData data)

Wouldn't it be possible to hide all that init and cleanup in the logEvent method? E.g. something like the following:

void logEvent(EventData data) {
  EventDataHolder.set(data);
  info("whatever");
  EventDataHolder.reset();
}

That way, a special appender could access the EventData from the ThreadLocal storage... assuming that it is executed synchronously... but I guess that guaranteed delivery mandates synchronous handling anyway, right?
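The EventDataHolder idea above can be fleshed out into a runnable sketch. Everything here (EventDataHolder, EventData, the stand-in info/appender) is illustrative, not slf4j/logback API; note the try/finally so the ThreadLocal is cleared even if an appender throws:

```java
// Runnable sketch of the ThreadLocal hand-off: logEvent parks the data so a
// synchronously invoked, data-aware appender can pick it up, then clears it.
public class EventDataHolderDemo {
    static class EventData {
        final String payload;
        EventData(String payload) { this.payload = payload; }
    }

    static final ThreadLocal<EventData> HOLDER = new ThreadLocal<>();

    /** Stands in for what a synchronous, EventData-aware appender observed. */
    static String lastSeenPayload;

    static void info(String message) {
        EventData data = HOLDER.get(); // appender-side access
        lastSeenPayload = (data == null) ? null : data.payload;
        System.out.println(message + " [data=" + lastSeenPayload + "]");
    }

    static void logEvent(EventData data) {
        HOLDER.set(data);
        try {
            info("whatever");
        } finally {
            HOLDER.remove(); // mandatory clean-up, even on exceptions
        }
    }

    public static void main(String[] args) {
        logEvent(new EventData("order-shipped"));
    }
}
```

This only works if the appender runs on the logging thread, which is exactly the synchronous-delivery caveat raised in the mail.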
Ralph Joern.
(gmail is completely snafud at the moment...)

Jörn Huxhorn wrote:
Using Ralph's example: logger.logEvent(EventData data)

Wouldn't it be possible to hide all that init and cleanup in the logEvent method? E.g. something like the following:

void logEvent(EventData data) {
  EventDataHolder.set(data);
  info("whatever");
  EventDataHolder.reset();
}
This can be done but it is not pretty. The argument array approach is also a hack, but to a lesser extent. -- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

Hello all,

Revision 2170, even though many classes were affected, is conceptually a minor refactoring of the LoggingEvent class. The LoggingEvent class is now an implementation of the ILoggingEvent interface shown below:

public interface ILoggingEvent extends SDOAware {
  public String getThreadName();
  public Level getLevel();
  public String getMessage();
  public LoggerRemoteView getLoggerRemoteView();
  public String getFormattedMessage();
  public Object[] getArgumentArray();
  public ThrowableProxy getThrowableProxy();
  public CallerData[] getCallerData();
  public Marker getMarker();
  public Map<String, String> getMDCPropertyMap();
  public long getTimeStamp();
  public long getContextBirthTime();
  public void prepareForDeferredProcessing();
}

The getContextBirthTime() method replaces the getStartTime() method in LoggingEvent. The intent is to separate the concern of serialization from LoggingEvent. Serialization code has moved into LoggingEventSDO (SDO = Serializable Data Object). The SDOAware interface contains a single method:

public interface SDOAware {
  Serializable getSDO();
}

I would like to continue working until we arrive at a situation where LoggingEventSDO is better insulated from changes in LoggingEvent or sub-components thereof. We are not there yet. Revision 2170 is just the first step in what I hope is the right direction. Your comments are welcome. -- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

On 25.02.2009, at 19:33, Ceki Gulcu wrote:
The SDOAware interface contains a single method:
public interface SDOAware { Serializable getSDO(); }
I would like to continue working until we arrive at a situation where LoggingEventSDO is better insulated from changes in LoggingEvent or sub-components thereof. We are not there yet. Revision 2170 is just the first step in what I hope is the right direction.
Your comments are welcome.
It's not necessary for the SDO to implement Serializable. This is only necessary if serialization using ObjectOutputStream is used as the persistence method, and this should be an implementation detail hidden from the API. Beside that, DTO (http://en.wikipedia.org/wiki/Data_Transfer_Object) would probably be a better name since it is known to a wider audience. Regards, Joern.

Joern Huxhorn wrote:
On 25.02.2009, at 19:33, Ceki Gulcu wrote:
The SDOAware interface contains a single method:
public interface SDOAware { Serializable getSDO(); }
It's not necessary for the SDO to implement Serializable. This is only necessary if serialization using ObjectOutputStream is used as the persistence method, and this should be an implementation detail hidden from the API.
The getSDO() method returns a Serializable precisely because certain logback classes, such as SocketAppenderBase in logback-core and JMSQueue/JMSTopicAppender in logback-classic, require serializable objects just before handing them over to an ObjectOutputStream. So as it currently stands (revision 2170), the LoggingEvent hierarchy assumes that you can transform a LoggingEvent into a corresponding serializable LoggingEvent. As of revision 2170, serialization is not an implementation detail but actually leaks from the ILoggingEvent interface (because it extends SDOAware). I don't personally like this leakage, but it is there. I welcome ideas about alternative designs where serialization really is an implementation detail.

To give an idea, I've looked at adding a writeReplace method in LoggingEvent so as to replace the current LoggingEvent instance with a LoggingEventSDO instance. This would obviate the need for the SDOAware interface. However, serialization performance is degraded by about 30%, which seems too steep a price.
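The idea of substituting the event during serialization uses Java's writeReplace hook (the serialization counterpart of readResolve). A minimal sketch, with simplified stand-in types for LoggingEvent/LoggingEventSDO rather than the real classes:

```java
// Sketch of the writeReplace approach: the mutable event substitutes an
// immutable, serialization-friendly image of itself, so no SDOAware-style
// interface is needed on the event type.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class WriteReplaceDemo {
    static class Event implements Serializable {
        final String message;
        transient Object[] argumentArray; // possibly non-serializable payload
        Event(String message, Object... args) {
            this.message = message;
            this.argumentArray = args;
        }
        /** Consulted by ObjectOutputStream: the image goes on the wire instead. */
        private Object writeReplace() {
            return new EventSDO(message);
        }
    }

    static class EventSDO implements Serializable {
        final String message;
        EventSDO(String message) { this.message = message; }
    }

    /** Serializes and deserializes, returning whatever actually travelled. */
    static Object roundTrip(Object obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Object read = roundTrip(new Event("hello", new Object()));
        System.out.println(read.getClass().getSimpleName()); // prints EventSDO
    }
}
```

The reflective writeReplace lookup is one plausible source of the ~30% overhead mentioned above; this sketch does not attempt to measure that.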
Beside that, DTO (http://en.wikipedia.org/wiki/Data_Transfer_Object) would probably be a better name since it is known to a wider audience.
Indeed, SDO is probably not a good name. However, aren't DTOs a mechanism to aggregate data so as to minimize the number of EJB calls? Also, are DTOs serializable by definition? (I guess they are.) Maybe VO (Value Object) is a better name than SDO...
Regards, Joern.
-- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

As an alternative design, appenders doing serialization such as SocketAppender and JMSQueue/JMSTopicAppender can delegate the serialization transformation to a sub-component. This way, the ILoggingEvent interface need not know about serialization at all. The SDOAware interface can be thus removed. Much cleaner. Ceki Gulcu wrote:
Joern Huxhorn wrote:
On 25.02.2009, at 19:33, Ceki Gulcu wrote:
The SDOAware interface contains a single method:
public interface SDOAware { Serializable getSDO(); }
It's not necessary for the SDO to implement Serializable. This is only necessary if serialization using ObjectOutputStream is used as the persistence method, and this should be an implementation detail hidden from the API.
The getSDO() returns a Serializable precisely because certain logback classes such as SocketAppenderBase in logback-core, and JMSQueue/JMSTopicAppender in logback-classic require serializable objects just before handing them over to an ObjectOutputStream.
So as it currently stands (revision 2170), the LoggingEvent hierarchy assumes that you can transform a LoggingEvent into a corresponding serializable LoggingEvent. As of revision 2170, serialization is not an implementation detail but actually leaks from the ILoggingEvent interface (because it extends SDOAware).
I don't personally like this leakage but it is there. I welcome ideas about alternative designs where serialization is really an implementation detail.
To give an idea, I've looked at adding a writeReplace method in LoggingEvent so as to replace the current LoggingEvent instance with a LoggingEventSDO instance. This would obviate the need for the SDOAware interface. However, serialization performance is degraded by about 30%, which seems too steep a price.
Beside that, DTO (http://en.wikipedia.org/wiki/Data_Transfer_Object) would probably be a better name since it is known to a wider audience.
Indeed, SDO is probably not a good name. However, aren't DTOs a mechanism to aggregate data so as to minimize the number of EJB calls? Also, are DTOs serializable by definition? (I guess they are.) Maybe VO (Value Object) is a better name than SDO...
Regards, Joern.
-- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch
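The delegation design proposed at the top of this message (serializing appenders owning the serialization transformation, with ILoggingEvent kept serialization-free) can be sketched as follows. The interface and class names here are illustrative; logback later introduced a similar PreSerializationTransformer notion, but don't rely on these exact names:

```java
// Sketch of delegating serialization to a sub-component owned by the
// appender, so the event interface never mentions Serializable.
import java.io.Serializable;

public class TransformerDemo {
    interface ILoggingEvent { String getMessage(); }

    /** Owned by SocketAppender & friends; unknown to ILoggingEvent itself. */
    interface EventTransformer {
        Serializable transform(ILoggingEvent event);
    }

    static class SdoTransformer implements EventTransformer {
        public Serializable transform(ILoggingEvent event) {
            // Stand-in for building a LoggingEventSDO from the event.
            return event.getMessage();
        }
    }

    public static void main(String[] args) {
        EventTransformer transformer = new SdoTransformer();
        Serializable wireForm = transformer.transform(() -> "hello");
        System.out.println(wireForm); // prints hello
    }
}
```

The appender calls transform() just before writing to the ObjectOutputStream, so serialization really does become an implementation detail of the appender.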

Ceki Gulcu skrev:
public interface ILoggingEvent extends SDOAware {
I would like to ask you not to use abbreviations, as they tend to grow to be the name normally used for the concept. See for instance NDC and MDC. One is a stack and the other a map, but only careful research reveals that :) I for one missed the definition of SDO. -- Thorbjørn Ravn Andersen "...plus... Tubular Bells!"

On Feb 25, 2009, at 7:35 AM, Ceki Gulcu wrote:
Ralph Goers wrote:
I don't take offense at technical discussions on mailing lists. A lot can get misinterpreted. Instead, I suggest you take a look at the code and see if you think it is a horrible idea. What I'm currently actually using, but would replace with this, does have more knowledge of Logback, specifically so it can pass the objects to the Appender. Unfortunately (at least I think so), SLF4J's LocationAwareLogger doesn't provide a method to pass that information along.
I was meaning to ask. Why do you need support from LocationAwareLogger for argument arrays if you are going to use logback-classic underneath SLF4J?
I'm not sure what you are getting at here. The implementation I wrote for myself used Logback's filterAndLog method if the implementation being used was logback and Log4j's log method when it was the implementation, both of which support passing objects. However, I wouldn't want to directly tie SLF4J to any implementation. Am I misunderstanding something? Ralph

Ralph Goers wrote:
On Feb 25, 2009, at 7:35 AM, Ceki Gulcu wrote:
I was meaning to ask. Why do you need support from LocationAwareLogger for argument arrays if you are going to use logback-classic underneath SLF4J?
I'm not sure what you are getting at here. The implementation I wrote for myself used Logback's filterAndLog method if the implementation being used was logback and Log4j's log method when it was the implementation, both of which support passing objects. However, I wouldn't want to directly tie SLF4J to any implementation. Am I misunderstanding something?
SLF4J must not be tied to any particular implementation. There is no misunderstanding about that. I just had not seen

public class EventLogger {
  .... omitted code
  public static void logEvent(EventData data) {
    if (eventLogger.instanceofLAL) {
      ((LocationAwareLogger) eventLogger.logger).log(EVENT_MARKER, FQCN,
          LocationAwareLogger.ERROR_INT, data.toXML(), null);
    } else {
      eventLogger.logger.error(EVENT_MARKER, data.toXML(), data);
    }
  }
}

In light of the above, requesting that the LocationAwareLogger.log method admit an argumentArray parameter makes sense. Given that there are implementations of the LocationAwareLogger interface outside slf4j.org, I don't think the LocationAwareLogger interface can be changed lightly. However, the issue certainly merits a bugzilla entry. Ralph, may I ask you to enter a bugzilla bug report in relation to this topic? TIA.
Ralph
-- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

On Feb 26, 2009, at 12:44 AM, Ceki Gulcu wrote:
Ralph Goers wrote:
On Feb 25, 2009, at 7:35 AM, Ceki Gulcu wrote:
I was meaning to ask. Why do you need support from LocationAwareLogger for argument arrays if you are going to use logback-classic underneath SLF4J? I'm not sure what you are getting at here. The implementation I wrote for myself used Logback's filterAndLog method if the implementation being used was logback and Log4j's log method when it was the implementation, both of which support passing objects. However, I wouldn't want to directly tie SLF4J to any implementation. Am I misunderstanding something?
SLF4J must not be tied to any particular implementation. There is no misunderstanding about that. I just had not seen
public class EventLogger {
.... omitted code
public static void logEvent(EventData data) {
  if (eventLogger.instanceofLAL) {
    ((LocationAwareLogger) eventLogger.logger).log(EVENT_MARKER, FQCN,
        LocationAwareLogger.ERROR_INT, data.toXML(), null);
  } else {
    eventLogger.logger.error(EVENT_MARKER, data.toXML(), data);
  }
}
In light of the above, requesting that the LocationAwareLogger.log method admit an argumentArray parameter makes sense. Given that there are implementations of the LocationAwareLogger interface outside slf4j.org, I don't think the LocationAwareLogger interface can be changed lightly. However, the issue certainly merits a bugzilla entry. Ralph, may I ask you to enter a bugzilla bug report in relation to this topic? TIA.
I created bug # 127.

Joern Huxhorn wrote:
I disagree. I just implemented an enhanced version of the log4j NDC myself as you suggested on the SLF4J mailinglist. It's implemented quite similar to the logback MDC but isn't serialized/used by any standard appender. My multiplex-appenders, on the other hand, *are* evaluating it, thus adding an additional data point to the LoggingEvent. Wouldn't a similar extension be possible in Ralphs case?
How can multiplex-appenders access a field which does not exist in c.q.l.classic.spi.LoggingEvent? -- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

Ceki Gulcu wrote:
Joern Huxhorn wrote:
I disagree. I just implemented an enhanced version of the log4j NDC myself as you suggested on the SLF4J mailinglist. It's implemented quite similar to the logback MDC but isn't serialized/used by any standard appender. My multiplex-appenders, on the other hand, *are* evaluating it, thus adding an additional data point to the LoggingEvent. Wouldn't a similar extension be possible in Ralphs case?
How can multiplex-appenders access a field which does not exist in c.q.l.classic.spi.LoggingEvent?

They are using de.huxhorn.lilith.data.logging.LoggingEvent, which contains a field for NDC values. The conversion between c.q.l.classic.spi.LoggingEvent and the Lilith LoggingEvent is done in /trunk/logback/logging-adapter in the de.huxhorn.lilith.data.logging.logback.LogbackLoggingAdapter class.
That conversion also evaluates the NDC by executing

if(!NDC.isEmpty()) {
  result.setNdc(NDC.getContextStack());
}

Joern.

Joern Huxhorn wrote:
I'm wondering how others are using Logback... I've always enabled caller data and can live with the fact that it's somewhat slower if actually logging something but very fast if no log event is actually generated...
Computing caller data as a global option is an interesting idea, especially considering that it would be computed only for enabled logging statements, as you have justly noted. Another parameter which was not mentioned is the depth of the caller data. In a real-world system where the stack depth can be 30 levels or more, caller data will be very large compared to the rest of the event. When considering a series of logging events, caller data will compress very well, although I must confess that I never imagined caller data being deployed on a regular basis on production systems. I saw caller data as normally disabled, to be enabled only when absolutely necessary. Joern, you seem to use a different model. -- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

Joern Huxhorn skrev:
Hi Ceki, Thorbjoern and Maarten,
I'd like to encourage all of you to take a look at my LoggingEvent implementation at

I don't have an opinion on the refactoring, but I'd like the socket server to be platform agnostic (as I mentioned earlier, but I am voicing it louder now), i.e. others than logback should be able to submit events. Hopefully such a mechanism exists that is comparable to, or - hopefully - even faster than, plain serialization, so it is attractive to use.
If I understand correctly, the default logback implementation is more susceptible to trouble than the one in Lilith. Hence I suggest that instead of focusing on the technical refactoring details we should discuss the "what to do" a bit first :) I'll open a new thread for that... -- Thorbjørn Ravn Andersen "...plus... Tubular Bells!"

Ceki Gulcu skrev:
Thorbjoern Ravn Andersen wrote:
Ceki Gulcu skrev:
Hello all,
I would like to split/redesign the LoggingEvent object as follows:

Hi.
Could you elaborate on what you want to achieve? Makes it easier to evaluate your suggestion.
One important goal is to better support interoperability between logback versions. For example, in a client/server situation, when a client is using a different version of logback than the server. Here the client is the application generating LoggingEvent instances and the server is the application receiving serialized LoggingEvent instances via the network.
Ok. It is the serialization problem you are looking at. A brief comment now: the problem you experience is inherent to Java serialization. If you consider another transport mechanism as the default, you may choose one which allows for clients not written in Java as the official "talk-to-logback" protocol. Joern has already suggested looking into XMLEncoder/XMLDecoder, which may however be too slow for your liking.
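The XMLEncoder/XMLDecoder route mentioned above can be shown on a trivial stand-in bean (EventBean here is purely illustrative, not a logback type). XMLEncoder requires a public JavaBean with a no-arg constructor and getters/setters, which is itself a constraint worth noting:

```java
// Round trip through java.beans.XMLEncoder/XMLDecoder: produces an XML
// stream readable outside plain Java serialization, at the cost of speed.
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class XmlEventDemo {
    public static class EventBean { // XMLEncoder needs a public bean
        private String message = "";
        public String getMessage() { return message; }
        public void setMessage(String message) { this.message = message; }
    }

    /** Encodes the bean to XML and decodes it back. */
    public static EventBean roundTrip(EventBean event) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (XMLEncoder encoder = new XMLEncoder(bytes)) {
            encoder.writeObject(event);
        }
        try (XMLDecoder decoder =
                new XMLDecoder(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (EventBean) decoder.readObject();
        }
    }

    public static void main(String[] args) {
        EventBean event = new EventBean();
        event.setMessage("article modified");
        System.out.println(roundTrip(event).getMessage()); // prints: article modified
    }
}
```

The intermediate bytes are human-readable XML, which is what makes this option attractive for non-Java clients despite the performance concern.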
A second goal is to simplify the code in LoggingEvent. If LoggingEvent instances do not have to worry about serialization, then LoggingEvent can be simplified. LoggingEventMemento needs to worry about serialization, replacing LoggingEvent in client/server communication.
So you want to split out the functionality from the Data Value Object. Fine with me :)
It is not clear how LoggingEventMemento would actually ensure version compatibility, especially if LoggingEventMemento fields change over time. However, as LoggingEventMemento is only a data-carrying type, it is likely to be much smaller in (code) size.
If you write custom serializer/deserializer and keep a serialversion you can do whatever you want :) -- Thorbjørn Ravn Andersen "...plus... Tubular Bells!"
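A custom serializer/deserializer with an explicit version, as suggested above, can be sketched like this. The field layout (message plus a thread name added in "version 2") is made up for illustration:

```java
// Sketch of versioned custom serialization: the writer emits a version
// number first, and the reader branches on it, so fields can be added
// without breaking streams written by older versions.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionedSerializationDemo {
    static final int CURRENT_VERSION = 2;

    static byte[] write(String message, String threadName) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(CURRENT_VERSION);
            out.writeUTF(message);
            out.writeUTF(threadName); // field added in version 2
        }
        return bytes.toByteArray();
    }

    static String[] read(byte[] data) throws IOException {
        try (DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(data))) {
            int version = in.readInt();
            String message = in.readUTF();
            // Version 1 streams lack the thread name; fall back to a default.
            String threadName = (version >= 2) ? in.readUTF() : "unknown";
            return new String[] { message, threadName };
        }
    }

    public static void main(String[] args) throws IOException {
        String[] decoded = read(write("article modified", "main"));
        System.out.println(decoded[0] + " / " + decoded[1]);
    }
}
```

This is essentially what "keep a serialversion and do whatever you want" amounts to: the version tag, not Java's default field matching, decides how bytes are interpreted.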

Thorbjoern Ravn Andersen wrote:
If you write custom serializer/deserializer and keep a serialversion you can do whatever you want :)
Indeed, custom serialization is probably the way to go. -- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch

Hello,

I agree that the Memento is probably not necessary for achieving backwards compatibility. Google Protocol Buffers (or Apache Thrift) could also be interesting:

* better performance than XML encoding/decoding (not yet tested it myself though)
* you can update the message types without breaking existing code [1]
* support for Java, C++ and Python

I will do a benchmark ASAP.

[1] http://code.google.com/apis/protocolbuffers/docs/proto.html#updating

regards, Maarten

On Tue, Feb 24, 2009 at 2:42 PM, Ceki Gulcu <ceki@qos.ch> wrote:
Thorbjoern Ravn Andersen wrote:
If you write custom serializer/deserializer and keep a serialversion you can do whatever you want :)
Indeed, custom serialization is probably the way to go.
-- Ceki Gülcü Logback: The reliable, generic, fast and flexible logging framework for Java. http://logback.qos.ch
participants (7)
- Ceki Gulcu
- Joern Huxhorn
- Jörn Huxhorn
- Maarten Bosteels
- Ralph Goers
- Ralph Goers
- Thorbjoern Ravn Andersen