Plan for SLF4J 2.0

Hello all,

Here are the 4 items I'd like to address in SLF4J 2.0:

1) Varargs for Logger methods http://bugzilla.slf4j.org/show_bug.cgi?id=31 Require JDK 1.5 and remain binary compatible as explained in my comment #31 dated 2009-03-25
2) logging the exception if it is the last argument, as explained by Joern in http://bugzilla.slf4j.org/show_bug.cgi?id=43
3) Avoid bogus incompatibility warnings http://bugzilla.slf4j.org/show_bug.cgi?id=154
4) fix http://bugzilla.slf4j.org/show_bug.cgi?id=170 possibly with a nop implementation of org.apache.log4j.NDC

Are there any other major items? Is everyone OK with requiring JDK 1.5 in SLF4J 2.0?

-- Ceki
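To make item 2 concrete, here is a rough illustration of the intended rule. The helper class, method name and the exact placeholder check are invented for this sketch; bug 43 describes the behaviour, not this code.

    // Illustration only (invented helper, not SLF4J code): a Throwable passed as
    // the last argument and not matched by a "{}" placeholder becomes the
    // event's exception.
    final class TrailingThrowableSketch {
      static Throwable extractTrailingThrowable(String messagePattern, Object[] args) {
        if (args == null || args.length == 0) {
          return null;
        }
        Object last = args[args.length - 1];
        if (!(last instanceof Throwable)) {
          return null;
        }
        // Count "{}" placeholders in the pattern.
        int placeholders = 0;
        for (int i = messagePattern.indexOf("{}"); i >= 0; i = messagePattern.indexOf("{}", i + 2)) {
          placeholders++;
        }
        // Only promote the Throwable if no placeholder is left to format it into the text.
        return placeholders < args.length ? (Throwable) last : null;
      }
    }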

On 06.03.2010, at 14:48, Ceki Gülcü wrote:
Hello all,
Here are the 4 items I'd like to address in SLF4J 2.0:
1) Varargs for Logger methods http://bugzilla.slf4j.org/show_bug.cgi?id=31 Require JDK 1.5 and remain binary compatible as explained in my comment #31 dated 2009-03-25
2) logging the exception if it is the last argument, as explained by Joern in http://bugzilla.slf4j.org/show_bug.cgi?id=43
3) Avoid bogus incompatibility warnings http://bugzilla.slf4j.org/show_bug.cgi?id=154
4) fix http://bugzilla.slf4j.org/show_bug.cgi?id=170 possibly with a nop implementation of org.apache.log4j.NDC
Are there any other major items? Is everyone OK with requiring JDK 1.5 in SLF4J 2.0?
Hi Ceki,

While I'm a big fan of #31 and #43 and personally don't need <1.5 compatibility, I fear that dropping it might seriously hurt libraries using it right now.

I also guess that lots of people using SLF4J aren't following this mailing list so this should probably also be discussed on slf4j-user@qos.ch. We'd still miss lots of users and I guess this will result in crying after the fact.

It's a shame that there's no tool to analyze the whole central Maven2 repository concerning this, or is there? It would be great if there was a way to find out which modules depend on SLF4J (either directly or transitively) and are still 1.4.

If we'd consider my alternative suggestion in http://bugzilla.slf4j.org/show_bug.cgi?id=31 , i.e. http://github.com/huxi/slf4j/tree/slf4j-redesign , we'd stay binary compatible, keep JDK<1.5 support and would still support 1) and 2) - it's a win-win.

Concerning 4), I've also implemented an NDC in my branch which uses the same Message (and, therefore, cheap, parameterized messages like SLF4J) as my suggested Logger interface. This means it's much more powerful than the log4j one (which expects one word per entry without enforcing it - I derive this from the way NDC is formatted in log4j xml) but log4j NDC can be implemented easily by wrapping it. It would only be available in the new SLF4J API - that was my plan, at least. In case of log4j-over-slf4j, we could use the new SLF4J NDC if available (i.e. in case of Java 1.5), falling back to an NOP implementation otherwise.

As I stated before, I'm a big fan of NDC and see it as a very good supplement to MDC. We actually use the NDC version available in Lilith in our production environment and it's quite helpful. I really don't understand why it was omitted from SLF4J. It's comparable to a manual, semantic stacktrace.

However this discussion ends, I'm really looking forward to SLF4J 2.0!

Cheers, Joern.

On 06/03/2010 4:29 PM, Joern Huxhorn wrote:
On 06.03.2010, at 14:48, Ceki Gülcü wrote:
Hello all,
Here are the 4 items I'd like to address in SLF4J 2.0:
1) Varargs for Logger methods http://bugzilla.slf4j.org/show_bug.cgi?id=31 Require JDK 1.5 and remain binary compatible as explained in my comment #31 dated 2009-03-25
2) logging the exception if it is the last argument, as explained by Joern in http://bugzilla.slf4j.org/show_bug.cgi?id=43
3) Avoid bogus incompatibility warnings http://bugzilla.slf4j.org/show_bug.cgi?id=154
4) fix http://bugzilla.slf4j.org/show_bug.cgi?id=170 possibly with a nop implementation of org.apache.log4j.NDC
Are there any other major items? Is everyone OK with requiring JDK 1.5 in SLF4J 2.0?
Hi Ceki,
While I'm a big fan of #31 and #43 and personally don't need <1.5 compatibility, I fear that dropping it might seriously hurt libraries using it right now.
That's the $64k question. If library lA depends on SLF4J and targets JDK 1.4, nothing prevents lA from continuing to use SLF4J version 1.5.11. It gets a little trickier when library lB targets JDK 1.5 and depends on SLF4J v2, and some application aX requires both lA and lB. Since SLF4J v2 is intended to be binary compatible with SLF4J 1.5.x, aX can use SLF4J v2 and lA will run just fine (without recompilation or anything). Where it gets really hairy is when an app server like Geronimo, JBoss or Spring DM bundles some version of SLF4J, say v1.5.11. If the end-user cannot freely choose the version of SLF4J (and this happens quite frequently) because the app server imposes its version of SLF4J, then we might run into serious problems, unless SLF4J v2 and 1.5.x are binary compatible.

As mentioned in comment #10 dated 2007-10-10 on bug 31, we can have binary compatibility as long as

logger.trace|debug|info|warn|error(String, Object[]) logger.trace|debug|info|warn|error(Marker, String, Object[])

are changed to

logger.trace|debug|info|warn|error(String, Object...) logger.trace|debug|info|warn|error(Marker, String, Object...)

with all the other methods remaining unchanged.

If the assumption about binary compatibility is wrong, SLF4J will probably not survive the shitstorm that would ensue from the release of v2.
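For readers following along, the change in question really is that small. The sketch below is not the actual SLF4J source and shows only one level; it illustrates why call sites compiled against 1.5.x keep linking: Object... compiles to the same method descriptor as Object[], so the change is binary compatible even though the source-level API differs.

    // Sketch: SLF4J 1.5.x declares info(String format, Object[] argArray);
    // the proposed v2 interface merely turns the array into varargs.
    public interface Logger {
      void info(String format, Object... arguments);
      void info(org.slf4j.Marker marker, String format, Object... arguments);
      // trace/debug/warn/error would follow the same pattern
    }

Code compiled against the 1.5.x API keeps passing an explicit Object[], which a varargs method accepts unchanged.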
I also guess that lots of people using SLF4J aren't following this mailing list so this should probably also be discussed on slf4j-user@qos.ch. We'd still miss lots of users and I guess this will result in crying after the fact.
True. As Gunnar Wagenknecht observed, the BasicMDCAdapter class has, ever since it was introduced in early 2008, invoked a method new to JDK 1.5 within its remove() method. Thus, whenever the MDC.remove method is called under JDK 1.4, any binding other than logback (which has its own MDCAdapter implementation and anyhow requires JDK 1.5) will cause an exception to be thrown. As far as I can remember no one has complained about this bug before Gunnar. So either no one is using org.slf4j.MDC#remove under JDK 1.4 or those who do, do not care enough to complain. Here is the link to BasicMDCAdapter dating from 2008-01-15: http://tinyurl.com/yherxp9

I find this BasicMDCAdapter.remove episode really puzzling. The only simple explanation I see is that few SLF4J users run under JDK 1.4 or more accurately the overlap of org.slf4j.MDC and JDK 1.4 use corresponds to a very small number of users.
It's a shame that there's no tool to analyze the whole central Maven2 repository concerning this, or is there? It would be great if there was a way to find out which modules depend on SLF4J (either directly or transitively) and are still 1.4.
Let's google it. :-)
If we'd consider my alternative suggestion in http://bugzilla.slf4j.org/show_bug.cgi?id=31 , i.e. http://github.com/huxi/slf4j/tree/slf4j-redesign , we'd stay binary compatible, keep JDK<1.5 support and would still support 1) and 2) - it's a win-win.
If we are going to re-implement org.slf4j.Logger under org.slf4j.n.Logger, we might as well call it org.newslf4j.Logger and start over from scratch. Copying the API to new packages avoids conflicts but otherwise constitutes a radical break.
Concerning 4), I've also implemented an NDC in my branch which uses the same Message (and, therefore, cheap, parameterized messages like SLF4J) as my suggested Logger interface. This means it's much more powerful than the log4j one (which expects one word per entry without enforcing it - I derive this from the way NDC is formatted in log4j xml) but log4j NDC can be implemented easily by wrapping it. It would only be available in the new SLF4J API - that was my plan, at least. In case of log4j-over-slf4j, we could use the new SLF4J NDC if available (i.e. in case of Java 1.5), falling back to an NOP implementation otherwise.
As I stated before, I'm a big fan of NDC and see it as a very good supplement to MDC. We actually use the NDC version available in Lilith in our production environment and it's quite helpful. I really don't understand why it was omitted from SLF4J. It's comparable to a manual, semantic stacktrace.
If you are used to log4j's NDC, having NDC in SLF4J is more comfortable than not having it. Otherwise, since MDC is semantically richer than NDC (one can trivially implement NDC over MDC), one can always get by using MDC instead of NDC. Another reason was that by scrapping NDC in SLF4J there was one less piece of code to maintain.
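As an aside, "NDC over MDC" can indeed be done in a few lines. The sketch below is one way to do it; the class name and the "NDC" key are invented here, and the per-thread stack is mirrored into MDC so that existing layouts can print it.

    import java.util.LinkedList;
    import org.slf4j.MDC;

    // Sketch only: a trivial per-thread NDC whose joined contents are mirrored
    // into MDC under an invented "NDC" key.
    public final class NdcOverMdc {
      private static final String KEY = "NDC";
      private static final ThreadLocal<LinkedList<String>> STACK =
          new ThreadLocal<LinkedList<String>>() {
            protected LinkedList<String> initialValue() {
              return new LinkedList<String>();
            }
          };

      public static void push(String entry) {
        STACK.get().addLast(entry);
        MDC.put(KEY, join(STACK.get()));
      }

      public static void pop() {
        LinkedList<String> stack = STACK.get();
        if (!stack.isEmpty()) {
          stack.removeLast();
        }
        if (stack.isEmpty()) {
          MDC.remove(KEY);
        } else {
          MDC.put(KEY, join(stack));
        }
      }

      private static String join(LinkedList<String> stack) {
        StringBuilder sb = new StringBuilder();
        for (String entry : stack) {
          if (sb.length() > 0) {
            sb.append(' ');
          }
          sb.append(entry);
        }
        return sb.toString();
      }

      private NdcOverMdc() {
      }
    }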
However this discussion ends, I'm really looking forward to SLF4J 2.0!
Cool.
Cheers, Joern.

Hello everybody.

On 06.03.2010, at 18:13, Ceki Gülcü wrote:
On 06/03/2010 4:29 PM, Joern Huxhorn wrote:
On 06.03.2010, at 14:48, Ceki Gülcü wrote:
Hello all,
Here are the 4 items I'd like to address in SLF4J 2.0:
1) Varargs for Logger methods http://bugzilla.slf4j.org/show_bug.cgi?id=31 Require JDK 1.5 and remain binary compatible as explained in my comment #31 dated 2009-03-25
2) logging the exception if it is the last argument, as explained by Joern in http://bugzilla.slf4j.org/show_bug.cgi?id=43
3) Avoid bogus incompatibility warnings http://bugzilla.slf4j.org/show_bug.cgi?id=154
4) fix http://bugzilla.slf4j.org/show_bug.cgi?id=170 possibly with a nop implementation of org.apache.log4j.NDC
Are there any other major items? Is everyone OK with requiring JDK 1.5 in SLF4J 2.0?
Hi Ceki,
While I'm a big fan of #31 and #43 and personally don't need <1.5 compatibility, I fear that dropping it might seriously hurt libraries using it right now.
That's the $64k question. If library lA depends on SLF4J and targets JDK 1.4, nothing prevents lA from continuing to use SLF4J version 1.5.11. It gets a little trickier when library lB targets JDK 1.5 and depends on SLF4J v2, and some application aX requires both lA and lB. Since SLF4J v2 is intended to be binary compatible with SLF4J 1.5.x, aX can use SLF4J v2 and lA will run just fine (without recompilation or anything). Where it gets really hairy is when an app server like Geronimo, JBoss or Spring DM bundles some version of SLF4J, say v1.5.11. If the end-user cannot freely choose the version of SLF4J (and this happens quite frequently) because the app server imposes its version of SLF4J, then we might run into serious problems, unless SLF4J v2 and 1.5.x are binary compatible.
As mentioned in comment #10 dated 2007-10-10 on bug 31, we can have binary compatibility as long as
logger.trace|debug|info|warn|error(String, Object[]) logger.trace|debug|info|warn|error(Marker, String, Object[])
are changed to
logger.trace|debug|info|warn|error(String, Object...) logger.trace|debug|info|warn|error(Marker, String, Object...)
with all the other methods remaining unchanged.
That's not entirely true. As you stated in that comment, compatibility does not hold if someone compiles against SLF4J v2 (with varargs) but the container uses JDK 1.4. http://bugzilla.slf4j.org/show_bug.cgi?id=31#c75 I fear that SLF4J v2 might sneak up on some people, either because they did not RTFM for SLF4J v2 or because the developers of some dependency did not. It wouldn't be our fault in either case, but it would boil down to "Updating xyz broke my build" subjects on mailing lists, with the explanation that it was because of the SLF4J dependency.
If the assumption about binary compatibility is wrong, SLF4J will probably not survive the shitstorm that would ensue from the release of v2.
I also guess that lots of people using SLF4J aren't following this mailing list so this should probably also be discussed on slf4j-user@qos.ch. We'd still miss lots of users and I guess this will result in crying after the fact.
True. As Gunnar Wagenknecht observed, the BasicMDCAdapter class has, ever since it was introduced in early 2008, invoked a method new to JDK 1.5 within its remove() method. Thus, whenever the MDC.remove method is called under JDK 1.4, any binding other than logback (which has its own MDCAdapter implementation and anyhow requires JDK 1.5) will cause an exception to be thrown. As far as I can remember no one has complained about this bug before Gunnar. So either no one is using org.slf4j.MDC#remove under JDK 1.4 or those who do, do not care enough to complain. Here is the link to BasicMDCAdapter dating from 2008-01-15: http://tinyurl.com/yherxp9
I find this BasicMDCAdapter.remove episode really puzzling. The only simple explanation I see is that few SLF4J users run under JDK 1.4 or more accurately the overlap of org.slf4j.MDC and JDK 1.4 use corresponds to a very small number of users.
You're right. But there's also the possibility that people are only putting values into MDC without ever removing them again, overwriting them instead. Before I switched all my stuff to SLF4J I was using commons-logging. This was the case for all the projects I was involved in. CL did not support MDC and I think that this is one of the reasons why MDC is - even now - quite underused. Lots of people switched their codebase over to SLF4J by a simple search & destroy, I guess.
It's a shame that there's no tool to analyze the whole central Maven2 repository concerning this, or is there? It would be great if there was a way to find out which modules depend on SLF4J (either directly or transitively) and are still 1.4.
Let's google it. :-)
Did you succeed?
If we'd consider my alternative suggestion in http://bugzilla.slf4j.org/show_bug.cgi?id=31 , i.e. http://github.com/huxi/slf4j/tree/slf4j-redesign , we'd stay binary compatible, keep JDK<1.5 support and would still support 1) and 2) - it's a win-win.
If we are going to re-implement org.slf4j.Logger under org.slf4j.n.Logger, we might as well call it org.newslf4j.Logger and start over from scratch. Copying the API to new packages avoids conflicts but otherwise constitutes a radical break.
Yes, it does. But it also cleans up the API and adds some very substantial features.

There are lots of methods that aren't necessary anymore with varargs and exception support built in.

Additional ones would be extremely nice to have. Methods supporting the (or a) Message interface would enable the user to define his own application-specific message implementation which would bring a whole new level to the framework. He could access those special messages without parsing in specifically written appenders.

I've also added log(Level,....) and isEnabled(Level[,Marker])-methods for cases where the actual level a call uses is determined programmatically. This is something that we needed on several occasions, too.

But additional methods can't be added to the original SLF4J because, as you correctly enforce, SLF4J API is frozen and must stay that way.

What I suggest is a bit like the junit package switch. With JUnit 4 the package changed from junit to org.junit. The original junit keeps working as before but new stuff was added in org.junit.

You are right: the package could be named anything. The only reasons I chose org.slf4j.n were a) making a switch easy by just adding .n to import statements b) using the SLF4J brand because it's established and people already believe/put trust in it c) it was merely a suggestion.

The main point, though, is that there would be zero impact on any existing code base. Nobody has to do anything, not even SLF4J implementations. The new API would be supported anyway.

In case of JDK>=1.5 implementations like Logback, it does make a lot of sense to implement the new API directly (so the Message reaches the appenders - instead of a String) and wrapping the other way around, i.e. using a wrapper to support the original SLF4J API (which is already provided in the new Logger interface via the getOldLogger() method). But this isn't required for org.slf4j.n to work.
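For readers who have not looked at the branch, the shape of the API Joern describes is roughly the following. This is paraphrased from the thread, not copied from the slf4j-redesign code, so all names are approximate.

    // Paraphrased sketch of the proposed org.slf4j.n-style API; not the actual code.
    interface Message {
      String getFormattedMessage();
    }

    enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

    interface NLogger {
      boolean isEnabled(Level level);
      boolean isEnabled(Level level, org.slf4j.Marker marker);

      void log(Level level, Message message);
      void log(Level level, Message message, Throwable throwable);
      void log(Level level, org.slf4j.Marker marker, Message message, Throwable throwable);

      // Bridge back to the classic API.
      org.slf4j.Logger getOldLogger();
    }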
Concerning 4), I've also implemented an NDC in my branch which uses the same Message (and, therefore, cheap, parameterized messages like SLF4J) as my suggested Logger interface. This means it's much more powerful than the log4j one (which expects one word per entry without enforcing it - I derive this from the way NDC is formatted in log4j xml) but log4j NDC can be implemented easily by wrapping it. It would only be available in the new SLF4J API - that was my plan, at least. In case of log4j-over-slf4j, we could use the new SLF4J NDC if available (i.e. in case of Java 1.5), falling back to an NOP implementation otherwise.
As I stated before, I'm a big fan of NDC and see it as a very good supplement to MDC. We actually use the NDC version available in Lilith in our production environment and it's quite helpful. I really don't understand why it was omitted from SLF4J. It's comparable to a manual, semantic stacktrace.
If you are used to log4j's NDC, having NDC in SLF4J is more comfortable than not having it. Otherwise, since MDC is semantically richer than NDC (one can trivially implement NDC over MDC), one can always get by using MDC instead of NDC. Another reason was that by scrapping NDC in SLF4J there was one less piece of code to maintain.
True, but that way the NDC kind of "pollutes" the MDC. Also, an NDC implementation is quite trivial. The one I provided in the prototype handles the same Messages (incl. the same parameterized message format as a shortcut) as logging - with the same positive aspects concerning performance, i.e. the formatted message is only actually created if it is requested, not while something is put on the NDC. If the NDC is ignored by appenders, it won't be formatted, ever. In case of MDC the formatting would always need to be performed. We'd use it for trace-like stuff, e.g. putting "Inside method xyz with arguments {}, {}, {}" on the NDC. In that case, my suggested NDC would make a significant difference, especially if it's actually ignored. Cheers, Joern.

On 08/03/2010 2:49 PM, Joern Huxhorn wrote:
On 06.03.2010, at 18:13, Ceki Gülcü wrote:
As mentioned in comment #10 dated 2007-10-10 on bug 31, we can have binary compatibility as long as
logger.trace|debug|info|warn|error(String, Object[]) logger.trace|debug|info|warn|error(Marker, String, Object[])
are changed to
logger.trace|debug|info|warn|error(String, Object...) logger.trace|debug|info|warn|error(Marker, String, Object...)
with all the other methods remaining unchanged.
That's not entirely true. As you stated in that comment, compatibility does not hold if someone compiles against SLF4J v2 (with varargs) but the container uses JDK 1.4. http://bugzilla.slf4j.org/show_bug.cgi?id=31#c75 I fear that SLF4J v2 might sneak up on some people, either because they did not RTFM for SLF4J v2 or because the developers of some dependency did not. It wouldn't be our fault in either case, but it would boil down to "Updating xyz broke my build" subjects on mailing lists, with the explanation that it was because of the SLF4J dependency.
But that's OK, isn't it? If project A is built under JDK 1.4, upgrading to SLF4J v2 would result in an incompatible class version error during the build and the developer will be forced to stay with slf4j 1.5.x, which is OK.

Moreover, while a container may be built using JDK 1.4, I don't see how a container could force the use of JDK 1.4. The end-user can always choose to use a later version of the JDK.

My comment was about a container exporting its version of SLF4J to the application, but as long as SLF4J v1 and v2 are binary compatible that would not be a problem. If v1 and v2 are NOT binary compatible, then that's a different matter altogether.
I find this BasicMDCAdapter.remove episode really puzzling. The only simple explanation I see is that few SLF4J users run under JDK 1.4 or more accurately the overlap of org.slf4j.MDC and JDK 1.4 use corresponds to a very small number of users.
You're right. But there's also the possibility that people are only putting values into MDC without ever removing them again, overwriting them instead.
Yes, that's also a possibility.
Before I switched all my stuff to SLF4J I was using commons-logging. This was the case for all the projects I was involved in. CL did not support MDC and I think that this is one of the reasons why MDC is - even now - quite underused. Lots of people switched their codebase over to SLF4J by a simple search & destroy, I guess.
Search and replace, not destroy. :-)
It's a shame that there's no tool to analyze the whole central Maven2 repository concerning this, or is there? It would be great if there was a way to find out which modules depend on SLF4J (either directly or transitively) and are still 1.4.
Let's google it. :-)
Did you succeed?
No, that was my feeble attempt at cracking a joke.
If we are going to re-implement org.slf4j.Logger under org.slf4j.n.Logger, we might as well call it org.newslf4j.Logger and start over from scratch. Copying the API to new packages avoids conflicts but otherwise constitutes a radical break.
Yes, it does. But it also cleans up the API and adds some very substantial features.
There are lots of methods that aren't necessary anymore with varargs and exception support built in.
Additional ones would be extremely nice to have. Methods supporting the (or a) Message interface would enable the user to define his own application-specific message implementation which would bring a whole new level to the framework. He could access those special messages without parsing in specifically written appenders.
I've also added log(Level,....) and isEnabled(Level[,Marker])-methods for cases where the actual level a call uses is determined programmatically. This is something that we needed on several occasions, too.
But additional methods can't be added to the original SLF4J because, as you correctly enforce, SLF4J API is frozen and must stay that way.
What I suggest is a bit like the junit package switch. With JUnit 4 the package changed from junit to org.junit. The original junit keeps working as before but new stuff was added in org.junit.
Good point. There are several differences between the JUnit case and SLF4J. JUnit3 had maybe 99% market share of the unit testing API "market". That market share was being eroded by TestNG, which brought in annotations as a significant improvement over JUnit3. I don't think JUnit4 would exist without the competitive pressure from TestNG.

As far as I know, SLF4J has no competition from a "feature" point of view. Of course, there is JCL, jul and log4j but they all have less advanced user-facing logging APIs.
You are right: the package could be named anything.
The only reasons I chose org.slf4j.n were a) making a switch easy by just adding .n to import statements b) using the SLF4J brand because it's established and people already believe/put trust in it c) it was merely a suggestion
The main point, though, is that there would be zero impact on any existing code base. Nobody has to do anything, not even SLF4J implementations. The new API would be supported anyway.
In case of JDK>=1.5 implementations like Logback, it does make a lot of sense to implement the new API directly (so the Message reaches the appenders - instead of a String) and wrapping the other way around, i.e. using a wrapper to support the original SLF4J API (which is already provided in the new Logger interface via the getOldLogger() method). But this isn't required for org.slf4j.n to work.
IMO, support for messages can be sufficiently important as to justify a new API but I am not convinced yet.

By the way, the invitation made to Juergen regarding a link at a prominent place on the SLF4J web-site extends to your fork as well. I'll gladly add a link to http://github.com/huxi/slf4j/tree/slf4j-redesign with a description provided by you so that people can try it out and provide feedback.

WDYT?

-- Ceki

Am 08.03.2010 15:34, schrieb Ceki Gülcü:
Moreover, while a container may be built using JDK 1.4, I don't see how a container could force the use of JDK 1.4. The end-user can always choose to use a later version of the JDK.
Actually, no. I have seen so many shops which deployed WebSphere and are then bound to the IBM JRE shipped with WebSphere.
My comment was about a container exporting its version of SLF4J to the application, but as long as SLF4J v1 and v2 are binary compatible that would not be a problem. If v1 and v2 are NOT binary compatible, then that's a different matter altogether.
From my understanding, if it's binary compatible it's not v2.
There is some information centralized here: http://wiki.eclipse.org/API_Central
This one is particularly interesting: http://wiki.eclipse.org/Evolving_Java-based_APIs
http://wiki.eclipse.org/Evolving_Java-based_APIs_2#Turning_non-generic_types...
Also this one: http://wiki.eclipse.org/Version_Numbering#When_to_change_the_major_segment
From the article above it appears that it's actually possible to use 1.5 syntax in source which gets "down-compiled" to 1.4.
WDYT?
What about a simple user survey to find out what SLF4J users are actually using today? The whole discussion might be obsolete if the survey reveals that 40% of the users are using 1.4 JREs and cannot upgrade. It could also be that >80% use Java5 already. It would also be interesting to know which SLF4J implementation is used the most.

-Gunnar

--
Gunnar Wagenknecht
gunnar@wagenknecht.org
http://wagenknecht.org/

On 08/03/2010 10:27 PM, Gunnar Wagenknecht wrote:
What about a simple user survey to find out what SLF4J users are actually using today? The whole discussion might be obsolete if the survey reveals that 40% of the users are using 1.4 JREs and cannot upgrade. It could also be that >80% use Java5 already. It would also be interesting to know which SLF4J implementation is used the most.
Good idea. I've just put such a survey in place. There is a link to it from the left panel of the slf4j site. On the download page, a pop-up will ask the user if he/she is interested in the survey. Your comments, in particular about the questions asked in the survey, are welcome.

On 08.03.2010, at 15:34, Ceki Gülcü wrote:
On 08/03/2010 2:49 PM, Joern Huxhorn wrote:
That's not entirely true. As you stated in that comment, compatibility does not hold if someone compiles against SLF4J v2 (with varargs) but the container uses JDK 1.4. http://bugzilla.slf4j.org/show_bug.cgi?id=31#c75 I fear that SLF4J v2 might sneak up on some people, either because they did not RTFM for SLF4J v2 or because the developers of some dependency did not. It wouldn't be our fault in either case, but it would boil down to "Updating xyz broke my build" subjects on mailing lists, with the explanation that it was because of the SLF4J dependency.
But that's OK, isn't it? If project A is built under JDK 1.4, upgrading to SLF4J v2 would result in an incompatible class version error during the build and the developer will be forced to stay with slf4j 1.5.x, which is OK.
Moreover, while a container may be built using JDK 1.4, I don't see how a container could force the use of JDK 1.4. The end-user can always choose to use a later version of the JDK.
My comment was about a container exporting its version of SLF4J to the application, but as long as SLF4J v1 and v2 are binary compatible that would not be a problem. If v1 and v2 are NOT binary compatible, then that's a different matter altogether.
It's just that AFAIK WebSphere requires a certain JRE. I've no personal experience with it, though (thankfully!)...
Lots of people switched their codebase over to SLF4J by a simple search & destroy, I guess.
Search and replace, not destroy. :-)
Yes, indeed ;)
It's a shame that there's no tool to analyze the whole central Maven2 repository concerning this, or is there? It would be great if there was a way to find out which modules depend on SLF4J (either directly or transitively) and are still 1.4.
Let's google it. :-)
Did you succeed?
No, that was my feeble attempt at cracking a joke.
There goes my hope ;)
If we are going to re-implement org.slf4j.Logger under org.slf4j.n.Logger, we might as well call it org.newslf4j.Logger and start over from scratch. Copying the API to new packages avoids conflicts but otherwise constitutes a radical break.
Yes, it does. But it also cleans up the API and adds some very substantial features.
There are lots of methods that aren't necessary anymore with varargs and exception support built in.
Additional ones would be extremely nice to have. Methods supporting the (or a) Message interface would enable the user to define his own application-specific message implementation which would bring a whole new level to the framework. He could access those special messages without parsing in specifically written appenders.
I've also added log(Level,....) and isEnabled(Level[,Marker])-methods for cases where the actual level a call uses is determined programmatically. This is something that we needed on several occasions, too.
But additional methods can't be added to the original SLF4J because, as you correctly enforce, SLF4J API is frozen and must stay that way.
What I suggest is a bit like the junit package switch. With JUnit 4 the package changed from junit to org.junit. The original junit keeps working as before but new stuff was added in org.junit.
Good point. There are several differences between the JUnit case and SLF4J. JUnit3 had maybe 99% market share of the unit testing API "market". That market share was being eroded by TestNG, which brought in annotations as a significant improvement over JUnit3. I don't think JUnit4 would exist without the competitive pressure from TestNG.
As far as I know, SLF4J has no competition from a "feature" point of view. Of course, there is JCL, jul and log4j but they all have less advanced user-facing logging APIs.
I can't argue against this; there's no real competitive pressure right now. On the other hand, I think that this is somewhat caused by the fact that you are developing both SLF4J and Logback (which is IMHO the state of the art concerning logging) - meaning that there won't be pressure from that side.

Concerning features, JUL supports logging at a given level, e.g. log(Level level, String msg). With SLF4J one has to (re)invent a level, including code like switch(level) { case DEBUG: logger.debug(...); break; [... etc.] }

Instead of waiting for competition to appear (e.g. Log4J 2.0?), we should innovate anyway.
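The workaround Joern alludes to typically ends up as application code along these lines. This is a sketch; the Level enum and the helper are invented here and are not part of SLF4J.

    import org.slf4j.Logger;

    // Sketch of the switch(level) boilerplate every project reinvents today.
    public final class LevelDispatch {
      public enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

      public static void log(Logger logger, Level level, String format, Object... args) {
        switch (level) {
          case TRACE: logger.trace(format, args); break;
          case DEBUG: logger.debug(format, args); break;
          case INFO:  logger.info(format, args);  break;
          case WARN:  logger.warn(format, args);  break;
          case ERROR: logger.error(format, args); break;
        }
      }

      private LevelDispatch() {
      }
    }

With the log(Level, ...) methods Joern proposes, this helper simply disappears.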
You are right: the package could be named anything.
The only reasons I chose org.slf4j.n were a) making a switch easy by just adding .n to import statements b) using the SLF4J brand because it's established and people already believe/put trust in it c) it was merely a suggestion
The main point, though, is that there would be zero impact on any existing code base. Nobody has to do anything, not even SLF4J implementations. The new API would be supported anyway.
In case of JDK>=1.5 implementations like Logback, it does make a lot of sense to implement the new API directly (so the Message reaches the appenders - instead of a String) and wrapping the other way around, i.e. using a wrapper to support the original SLF4J API (which is already provided in the new Logger interface via the getOldLogger() method). But this isn't required for org.slf4j.n to work.
IMO, support for messages can be sufficiently important as to justify a new API but I am not convinced yet.
What would it take to convince you?

The addition of Message support would solve problems like the following in an unintrusive way:

http://bugzilla.slf4j.org/show_bug.cgi?id=116 (java.util.Formatter support)
http://bugzilla.slf4j.org/show_bug.cgi?id=148 (StructuredData support, RFC 5424)
http://jira.qos.ch/browse/LBCLASSIC-76 (Allow extension of LoggingEvent with new data)

Also, the Message instance might be used in Logback TurboFilters, I think. It may be necessary to change the interface (or at least the implementation of ParameterizedMessage) a bit for better performance, i.e. deferring the evaluation (searching for the Throwable) and the transformation of the argument[] into a String[] until after the TurboFilter. (Yes, I know, that's a Logback-only issue.)
By the way, the invitation made to Juergen regarding a link at a prominent place on the SLF4J web-site extends to your fork as well. I'll gladly add a link to http://github.com/huxi/slf4j/tree/slf4j-redesign with a description provided by you so that people can try it out and provide feedback.
WDYT?
I'd have no problem with that, though I'm not sure if I'll be able to keep that branch updated with the main SLF4J branch. The main problem I see is that it's not worth that much without Logback supporting it. It's nice to have the Messages but a lot of power is lost if the appenders in Logback are still getting an event containing a formatted message string. Ralph couldn't use the StructuredData without parsing it first, for example. Cheers, Joern.

On 09/03/2010 11:45 AM, Joern Huxhorn wrote:
What would it take to convince you?
The addition of Message support would solve problems like the following in an unintrusive way: http://bugzilla.slf4j.org/show_bug.cgi?id=116 (java.util.Formatter support) http://bugzilla.slf4j.org/show_bug.cgi?id=148 (StructuredData support, RFC 5424) http://jira.qos.ch/browse/LBCLASSIC-76 (Allow extension of LoggingEvent with new data)
Well, I haven't yet come across a convincing use case or maybe I have but failed to grasp its significance. Anyway, it seems to me that RFC 5424 defines a text-based encoding scheme more than anything else. It follows that RFC 5424 could be supported by logback simply by composing FileAppender with an appropriate encoder, say RFC5424Encoder. Encoders are new in 0.9.19. This encoder could not only encode the contents of the message but other logging event fields such as time, logger, level as well which is probably what you really want. Does the RFC5424Encoder make sense?

Regarding the encoding of the message contents, I think it can be addressed by convention:

1) message parameter implements some well known interface, say RFC5424Aware

or

2) message parameter implements some well known method "toRFC5424(): String"

Given that SLF4J already allows you to write logger.debug("{}", myRFC5424AwareData); putting aside the issue of location awareness, I fail to see the point of changing the org.slf4j.Logger interface to add support for typed-messages especially considering that one can easily write an SLF4J-extension with the appropriate syntactical sugar:

class MessageLogger {
  Logger logger;
  void debug(Message msg) {
    logger.debug("{}", msg);
  }
  ...
}

BTW, I've started looking at both Ralph and Joern's proposals.

-- Ceki

On Mar 9, 2010, at 5:29 AM, Ceki Gülcü wrote:
Well, I haven't yet come across a convincing use case or maybe I have but failed to grasp its significance. Anyway, it seems to me that RFC 5424 defines a text-based encoding scheme more than anything else. It follows that RFC 5424 could be supported by logback simply by composing FileAppender with an appropriate encoder, say RFC5424Encoder. Encoders are new in 0.9.19. This encoder could not only encode the contents of the message but other logging event fields such as time, logger, level as well which is probably what you really want. Does the RFC5424Encoder make sense?
Absolutely not. The problem you are ignoring is getting the data to the Appender in the first place. That is why a Message is required. Remember, this is "Structured" data. To be structured you have to have a structure, not a message and a bunch of parameters. Whether the formatted string is created by a Layout or Encoder is somewhat irrelevant.
Regarding the encoding of the message contents, I think it can be addressed by convention:
1) message parameter implements some well known interface, say RFC5424Aware
If you go down this road then you have to start inspecting all the parameters for the interfaces they implement. I started with this. It is awful.
or
2) message parameter implements some well known method "toRFC5424(): String"
Not a whole lot better. Joern's proposal of having a Message delegates the formatting to the Message object instead of having Appenders try to support all the various interfaces that might exist.
Given that SLF4J already allows you to write logger.debug("{}", myRFC5424AwareData); putting aside the issue of location awareness, I fail to see the point of changing the org.slf4j.Logger interface to add support for typed-messages especially considering that one can easily write an SLF4J-extension with the appropriate syntactical sugar:
Because the proposals above really suck. It is more or less what I was forced to do to support EventData and the performance isn't that great. Ralph

Once you have a custom encoder, you can encode parameters as you see fit. This of course assumes logback as the slf4j backend since encoders exist only in logback. Given that parameters are passed to appenders unaltered from slf4j to logback, I don't think I am ignoring the question of getting the data to the appender, or am I?

Assuming the parameters get to the appender unaltered, an RFC5424Encoder could ask each parameter whether it can be encoded in RFC5424. If the argument supports RFC5424 encoding we would use the data supplied by the argument itself. Otherwise, we would use the value returned by toString() and prepend the key "argN=" for argument N as its generic RFC5424 encoding.

For Object serialization which is another form of encoding, an Object encoder would ignore the RFC5424 capabilities of parameters and use serialization instead.

It seems that postponing the decision to transform an argument to its desired encoding up until the last minute without imposing a type is actually a pretty good design.

I have not coded any of this so it may all be a pile of vaporware crap.

On 09/03/2010 3:19 PM, Ralph Goers wrote:
On Mar 9, 2010, at 5:29 AM, Ceki Gülcü wrote:
Well, I haven't yet come across a convincing use case or maybe I have but failed to grasp its significance. Anyway, it seems to me that RFC 5424 defines a text-based encoding scheme more than anything else. It follows that RFC 5424 could be supported by logback simply by composing FileAppender with an appropriate encoder, say RFC5424Encoder. Encoders are new in 0.9.19. This encoder could not only encode the contents of the message but other logging event fields such as time, logger, level as well which is probably what you really want. Does the RFC5424Encoder make sense?
Absolutely not. The problem you are ignoring is getting the data to the Appender in the first place. That is why a Message is required. Remember, this is "Structured" data. To be structured you have to have a structure, not a message and a bunch of parameters. Whether the formatted string is created by a Layout or Encoder is somewhat irrelevant.
Regarding the encoding of the message contents, I think it can be addressed by convention:
1) message parameter implements some well known interface, say RFC5424Aware
If you go down this road then you have to start inspecting all the parameters for the interfaces they implement. I started with this. It is awful.
or
2) message parameter implements some well known method "toRFC5424(): String"
Not a whole lot better. Joern's proposal of having a Message delegates the formatting to the Message object instead of having Appenders try to support all the various interfaces that might exist.
Given that SLF4J already allows you to write logger.debug("{}", myRFC5424AwareData); putting aside the issue of location awareness, I fail to see the point of changing the org.slf4j.Logger interface to add support for typed-messages especially considering that one can easily write an SLF4J-extension with the appropriate syntactical sugar:
Because the proposals above really suck. It is more or less what I was forced to do to support EventData and the performance isn't that great.
Ralph

On Mar 9, 2010, at 6:55 AM, Ceki Gülcü wrote:
Once you have a custom encoder, you can encode parameters as you see fit. This of course assumes logback as the slf4j backend since encoders exist only in logback. Given that parameters are passed to appenders unaltered from slf4j to logback, I don't think I am ignoring the question of getting the data to the appender, or am I?
Yes, you are. If you look at RFC 5424 you will see that it supports structured data elements inside an element id. The spec also supports multiple of these. By its nature, the MDC can already be wrapped in a structured element but other things cannot. They are just arbitrary parameters. It is possible that I might want to have two structured data sections in the output. With a Message this can easily be accommodated by creating a new Message class where getFormattedMessage knows that it will contain an array of StructuredDataMessages and will then format it accordingly. This can be done for virtually any object type. Having to do this with an encoder or layout means writing a lot of ugly code that tries to anticipate all the various objects that might get passed in.
Assuming the parameters get to the appender unaltered, an RFC5424Encoder could ask each parameter whether it can be encoded in RFC5424. If the argument supports RFC5424 encoding we would use the data supplied by the argument itself. Otherwise, we would use the value returned by toString() and prepend the key "argN=" for argument N as its generic RFC5424 encoding.
This is exactly my point. This sucks.
For Object serialization which is another form of encoding, an Object encoder would ignore the RFC5424 capabilities of parameters and use serialization instead.
It seems that postponing the decision to transform an argument to its desired encoding up until the last minute without imposing a type is actually a pretty good design.
It is a good design for the overall layout. It is not a good design for formatting the objects where the objects themselves can do the formatting. With a slightly smarter method in the Message interface it is possible that the Message could even format itself based upon information passed from the layout/encoder.
I have not coded any of this so it may all be a pile of vaporware crap.
I have. Several times. What Joern proposed works very well and is the model I am following for Log4j 2.0. The version on my slf4j/logback branch works and is binary compatible with current SLF4J versions but isn't as straightforward. Ralph

On 09/03/2010 5:42 PM, Ralph Goers wrote:
On Mar 9, 2010, at 6:55 AM, Ceki Gülcü wrote:
Once you have a custom encoder, you can encode parameters as you see fit. This of course assumes logback as the slf4j backend since encoders exist only in logback. Given that parameters are passed to appenders unaltered from slf4j to logback, I don't think I am ignoring the question of getting the data to the appender, or am I?
Yes, you are. If you look at RFC 5424 you will see that it supports structured data elements inside an element id. The spec also supports multiple of these. By its nature, the MDC can already be wrapped in a structured element but other things cannot. They are just arbitrary parameters. It is possible that I might want to have two structured data sections in the output. With a Message this can easily be accommodated by creating a new Message class where getFormattedMessage knows that it will contain an array of StructuredDataMessages and will then format it accordingly. This can be done for virtually any object type. Having to do this with an encoder or layout means writing a lot of ugly code that tries to anticipate all the various objects that might get passed in.
Let's say you have a parameter of type 'House' you would like to log and you wrap it inside a new type called StructuredDataHouse and pass it to a logger as the first parameter (the message being "{}").

The RFC5424Encoder detects that this parameter supports RFC5424 encoding and asks StructuredDataHouse for its RFC5424 encoded data. RFC5424Encoder only needs to deal with objects supporting RFC5424 encoding; there is no need to anticipate other encoding types.

The end result is very similar to asking your StructuredDataMessage in the org.slf4j.message package for its formatted message, except that the question is asked by a RFC5424Encoder. A different encoder would ask a different question.
Assuming the parameters get to the appender unaltered, an RFC5424Encoder could ask each parameter whether it can be encoded in RFC5424. If the argument supports RFC5424 encoding we would use the data supplied by the argument itself. Otherwise, we would use the value returned by toString() and prepend the key "argN=" for argument N as its generic RFC5424 encoding.
This is exactly my point. This sucks.
What sucks? It seems to me that such a heuristic already provides better support for structured data than a *general* message type. If you are passed an object whose getFormattedMessage returns an arbitrary result, you would need some way of ensuring that it plays nicely with the RFC5424 format. The heuristic above provides a way.
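Spelled out, the convention-plus-fallback heuristic Ceki describes might look like this. Rfc5424Aware and the helper class are hypothetical names used only for this illustration; logback's real Encoder API is not shown.

    // Hypothetical sketch of the heuristic: use the parameter's own RFC 5424
    // representation when it offers one, otherwise fall back to "argN=" + toString().
    interface Rfc5424Aware {
      String toRfc5424();
    }

    final class Rfc5424ParameterEncoding {
      static String encode(Object argument, int index) {
        if (argument instanceof Rfc5424Aware) {
          return ((Rfc5424Aware) argument).toRfc5424();
        }
        return "arg" + index + "=" + String.valueOf(argument);
      }

      private Rfc5424ParameterEncoding() {
      }
    }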
For Object serialization which is another form of encoding, an Object encoder would ignore the RFC5424 capabilities of parameters and use serialization instead.
It seems that postponing the decision to transform an argument to its desired encoding up until the last minute without imposing a type is actually a pretty good design.
It is a good design for the overall layout. It is not a good design for formatting the objects where the objects themselves can do the formatting. With a slightly smarter method in the Message interface it is possible that the Message could even format itself based upon information passed from the layout/encoder.
What I am proposing still uses the formatting capabilities of each object, but it does so in a targeted way. If an object supports both X and Y encoding, an X-encoder would use the object's X-encoding capabilities whereas a Y-encoder would use the object's Y-encoding capabilities.
I have not coded any of this so it may all be a pile of vaporware crap.
I have. Several times. What Joern proposed works very well and is the model I am following for Log4j 2.0. The version on my slf4j/logback branch works and is binary compatible with current SLF4J versions but isn't as straightforward.
OK. I intend to look into Joern's and your code more carefully.
Ralph

I am extremely tired of this discussion.

On Mar 9, 2010, at 10:08 AM, Ceki Gülcü wrote:
On 09/03/2010 5:42 PM, Ralph Goers wrote:
On Mar 9, 2010, at 6:55 AM, Ceki Gülcü wrote:
Once you have a custom encoder, you can encode parameters as you see fit. This of course assumes logback as the slf4j backend since encoders exist only in logback. Given that parameters are passed to appenders unaltered from slf4j to logback, I don't think I am ignoring the question of getting the data to the appender, or am I?
Yes, you are. If you look at RFC 5424 you will see that it supports structured data elements inside an element id. The spec also supports multiple of these. By its nature, the MDC can already be wrapped in a structured element but other things cannot. They are just arbitrary parameters. It is possible that I might want to have two structured data sections in the output. With a Message this can easily be accommodated by creating a new Message class where getFormattedMessage knows that it will contain an array of StructuredDataMessages and will then format it accordingly. This can be done for virtually any object type. Having to do this with an encoder or layout means writing a lot of ugly code that tries to anticipate all the various objects that might get passed in.
Let's say you have a parameter of type 'House' you would like to log and you wrap it inside a new type called StructuredDataHouse and pass it to a logger as the first parameter (the message being "{}").
You can't do that on a LocationAwareLogger so this is impossible with any Logger implementation based on LoggerWrapper.
The RFC5424Encoder detects that this parameter supports RFC5424 encoding and asks StructuredDataHouse for its RFC5424 encoded data. RFC5424Encoder only needs to deal with objects supporting RFC5424 encoding; there is no need to anticipate other encoding types.
1. You can have a bunch of parameters. The encoder has to check every one of them. 2. I guess you'd also have to be able to configure a whole list of encoders and run through all of them to make sure each of your parameters was formatted correctly, even if the message doesn't contain data matching any of them.
The end result is very similar to asking your StructuredDataMessage in the org.slf4j.message package for its formatted message, except that the question is asked by a RFC5424Encoder. A different encoder would ask a different question.
This is not similar at all. The layout/encoder/whatever calls getFormattedMessage and gets an appropriate response. Since everything is a Message the method is always there.
Assuming the parameters get to the appender unaltered, an RFC5424Encoder could ask each parameter whether it can be encoded in RFC5424. If the argument supports RFC5424 encoding we would use the data supplied by the argument itself. Otherwise, we would use the value returned by toString() and prepend the key "argN=" for argument N as its generic RFC5424 encoding.
This is exactly my point. This sucks.
What sucks? It seems to me that such a heuristic already provides better support for structured data than a *general* message type. If you are passed an object whose getFormattedMessage returns an arbitrary result, you would need some way of ensuring that it plays nicely with the RFC5424 format. The heuristic above provides a way.
Sure, I can still do if (message instanceof StructuredDataMessage) to make sure it is valid if I want to. In lots of cases you won't care. The layout will just call getFormattedMessage and accept what it gets back.
For Object serialization which is another form of encoding, an Object encoder would ignore the RFC5424 capabilities of parameters and use serialization instead.
It seems that postponing the decision to transform an argument to its desired encoding up until the last minute without imposing a type is actually a pretty good design.
It is a good design for the overall layout. It is not a good design for formatting the objects where the objects themselves can do the formatting. With a slightly smarter method in the Message interface it is possible that the Message could even format itself based upon information passed from the layout/encoder.
What I am proposing still uses the formatting capabilities of each object, but it does so in a targeted way. If an object supports both X and Y encoding, an X-encoder would use the object's X-encoding capabilities whereas a Y-encoder would use the object's Y-encoding capabilities.
The Message concept is much simpler, much clearer, and much cleaner with a lot less ambiguity. As soon as Joern proposed it instead of what I originally had to do I realized how much cleaner and simpler it actually is. Because SLF4J/Logback doesn't support this stuff I had to implement my RFC 5424 support in my own source control based on the XML serialized EventData. It is error prone since the data might not be a serialized EventData object. Even if I could get it as a parameter, checking each parameter is painful. Ralph

On 10/03/2010 2:41 AM, Ralph Goers wrote:
I am extremely tired of this discussion.
I am sorry to hear that. Do you feel that your arguments are not being heard properly? Or do you think that the matter under discussion admits such an obvious solution that it does not merit debate? More below.
On Mar 9, 2010, at 10:08 AM, Ceki Gülcü wrote:
Let's say you have a parameter of type 'House' you would like to log and you wrap it inside a new type called StructuredDataHouse and pass it to a logger as the first parameter (the message being "{}").
You can't do that on a LocationAwareLogger so this is impossible with any Logger implementation based on LoggerWrapper.
It's not the logger which wraps House but the user. The location of the logger call remains unchanged by the wrapping which is there for encoding purposes only. Here is an example:

StructuredDataHouse sdh = new StructuredDataHouse(house);
logger.info("{}", sdh);

If this encoding thing catches on, we could dispense with the wrapping thing altogether. You could register an RFC5424 "subencoder" for the House type and the RFC5424Encoder would look it up at runtime. So you could just write:

logger.info("{}", house);

The RFC5424 encoder and the transformer for House would just output the correct information.
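A possible shape for that runtime lookup, purely as a sketch: none of these types exist in SLF4J or logback, and House is the hypothetical parameter type from the example above.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical per-encoder registry of type-specific "subencoders"
    // (exact-type lookup; inheritance is ignored in this sketch).
    interface Rfc5424Subencoder<T> {
      String encode(T value);
    }

    final class Rfc5424SubencoderRegistry {
      private final Map<Class<?>, Rfc5424Subencoder<?>> subencoders =
          new ConcurrentHashMap<Class<?>, Rfc5424Subencoder<?>>();

      <T> void register(Class<T> type, Rfc5424Subencoder<T> subencoder) {
        subencoders.put(type, subencoder);
      }

      // Falls back to "argN=" + toString() when no subencoder is registered,
      // as described earlier in the thread.
      @SuppressWarnings("unchecked")
      String encode(Object argument, int index) {
        Rfc5424Subencoder<Object> enc =
            (Rfc5424Subencoder<Object>) subencoders.get(argument.getClass());
        if (enc != null) {
          return enc.encode(argument);
        }
        return "arg" + index + "=" + argument;
      }
    }

With such a registry in place, registering a subencoder for House would let logger.info("{}", house) come out correctly encoded, with no wrapper type in the application code.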
The RFC5424Encoder detects that this parameter supports RFC5424 encoding and asks StructuredDataHouse for its RFC5424 encoded data. RFC5424Encoder only needs to deal with objects supporting RFC5424 encoding; there is no need to anticipate other encoding types.
1. You can have a bunch of parameters. The encoder has to check every one of them.
Yes, but that's just iterating over the parameters.
2. I guess you'd also have to be able to configure a whole list of encoders and run through all of them to make sure each of your parameters was formatted correctly, even if the message doesn't contain data matching any of them.
Well, the encoder is unique per appender. However, as discussed above there might be a subencoder specific to each type you care about. For types without sub-encoders, some default heuristic would be applied, which I already mentioned.
The end result is very similar to asking your StructuredDataMessage in the org.slf4j.message package for its formatted message, except that the question is asked by a RFC5424Encoder. A different encoder would ask a different question.
This is not similar at all. The layout/encoder/whatever calls getFormattedMessage and gets an appropriate response. Since everything is a Message the method is always there.
How is this different than toString()? Everything is an object and the toString() method is always there. -- Ceki

On 10.03.2010, at 09:46, Ceki Gülcü wrote:
The end result is very similar to asking your StructuredDataMessage in the org.slf4j.message package for its formatted message, except that the question is asked by a RFC5424Encoder. A different encoder would ask a different question.
This is not similar at all. The layout/encoder/whatever calls getFormattedMessage and gets an appropriate response. Since everything is a Message the method is always there.
How is this different than toString()? Everything is an object and the toString() method is always there.
Well, the idea was the following: I didn't want to allow just any type of Object - which would be possible if we'd simply use toString(). I wanted to emphasize the use of Messages of certain types instead of simply dumping just any Object into the Logger.

Message.getFormattedMessage() is supposed to return a lazily initialized and cached String representation of the message. (I'm not sure if this has been documented yet. Chances are good that it isn't.)

toString(), on the other hand, can still be used for debugging output of a Message. toString() is generated code (by Eclipse or IDEA) most of the time. Even if that isn't the case, I dare say that it seldom caches its result. I'm only talking about the typical implementation here. Since multiple appenders might use the formatted message of the same event, caching might make a rather big difference.

It's just a semantic difference and an attempt to push the user in a certain direction (i.e. "Take a look at Messages!"). This semantic difference should be documented in the Message interface, obviously.

Cheers, Joern.
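Expressed as code, the caching contract Joern describes could look roughly like this. It is paraphrased from the thread, not the actual slf4j-redesign ParameterizedMessage, and thread-safety is ignored for brevity.

    // Sketch of a Message whose formatted text is computed lazily and cached.
    interface Message {
      String getFormattedMessage();
    }

    class ParameterizedMessage implements Message {
      private final String messagePattern;
      private final Object[] arguments;
      private String formatted; // computed at most once, shared by all appenders

      ParameterizedMessage(String messagePattern, Object... arguments) {
        this.messagePattern = messagePattern;
        this.arguments = arguments;
      }

      public String getFormattedMessage() {
        if (formatted == null) {
          formatted = format(); // nothing happens until somebody asks for the text
        }
        return formatted;
      }

      // Naive "{}" substitution, enough to show where the deferred work happens.
      private String format() {
        StringBuilder sb = new StringBuilder();
        int from = 0;
        int argIndex = 0;
        int at;
        while ((at = messagePattern.indexOf("{}", from)) >= 0 && argIndex < arguments.length) {
          sb.append(messagePattern, from, at).append(arguments[argIndex++]);
          from = at + 2;
        }
        sb.append(messagePattern.substring(from));
        return sb.toString();
      }
    }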

On Mar 10, 2010, at 12:46 AM, Ceki Gülcü wrote:
On 10/03/2010 2:41 AM, Ralph Goers wrote:
I am extremely tired of this discussion.
I am sorry to hear that. Do you feel that your arguments are not being heard properly? Or do you think that the matter under discussion admits such an obvious solution that it does not merit debate?
Yes. It is obvious if you've tried to do what you are proposing. It sounds very similar to the mess I had to deal with for EventData, except with EventData it can't be passed as a parameter since it is passed through a LocationAwareLogger, which is a point you keep ignoring.
More below.
On Mar 9, 2010, at 10:08 AM, Ceki Gülcü wrote:
Let's say you have a parameter of type 'House' you would like to log and you wrap it inside a new type called StructuredDataHouse and pass it to a logger as the first parameter (the message being "{}").
You can't do that on a LocationAwareLogger so this is impossible with any Logger implementation based on LoggerWrapper.
It's not the logger which wraps House but the user. The location of the logger call remains unchanged by the wrapping which is there for encoding purposes only. Here is an example:
StructuredDataHouse sdh = new StructuredDataHouse(house); logger.info("{}", sdh);
If this encoding thing catches on, we could dispense with the wrapping thing altogether. You could register an RFC5424 "subencoder" for the House type and the RFC5424Encoder would look it up at runtime.
So you could just write:
logger.info("{}", house);
There is a big difference between an interface of
info(String msg, Object param);
and
info(Message msg)
although the code is similar:
logger.info(new HouseMessage(house));
the way it is processed in Logback is much, much different and a lot simpler.
The RFC5424 encoder and the transformer for House would just output the correct information.
The RFC5424Encoder detects that this parameter supports RFC5424 encoding and asks StructuredDataHouse for its RFC5424 encoded data. RFC5424Encoder only needs to deal with objects supporting RFC5424 encoding; there is no need to anticipate other encoding types.
1. You can have a bunch of parameters. The encoder has to check every one of them.
Yes, but that's just iterating over the parameters.
And having to have special logic to interpret them. What will it do with Objects that it doesn't understand? Probably just call toString() which may result in garbage from a lot of objects. With a Message you are guaranteed that it will generate something meaningful because that is the contract. The contract with Object is less than helpful.
2. I guess you'd also have to be able to configure a whole list of encoders and run through all of them to make sure each of your parameters was formatted correctly, even if the message doesn't contain data matching any of them.
Well, the encoder is unique per appender. However, as discussed above, there might be sub-encoders specific to each type you care about. For types without sub-encoders, some default heuristic would be applied, which I already mentioned.
Exactly. This is where it turns into a complicated nightmare. Encoders referencing sub-encoders all having to be managed in the configuration, and each sub-encoder would have to be called to see if it understands each Object. No configuration at all is required for a Message to render itself. The contract could probably be enhanced a little bit to pass getFormattedMessage a bit of information about the Layout or Appender so that it can have a bit of variety in rendering itself, but it should always render something meaningful.
The end result is very similar to asking your StructuredDataMessage in the org.slf4j.message package for its formatted message, except that the question is asked by a RFC5424Encoder. A different encoder would ask a different question.
This is not similar at all. The layout/encoder/whatever calls getFormattedMessage and gets an appropriate response. Since everything is a Message the method is always there.
How is this different than toString()? Everything is an object and the toString() method is always there.
Yes. But it often doesn't do what you want. See above for the rest. Encoders may be useful doing things like compressing the data stream, but trying to get fancy like this is just a horrible idea. By the way, did you update the doc on the site? I don't see any reference to Encoders and had to look at the code to see that they are currently only used in classes extending OutputStreamAppender. So I guess a SocketAppender or SyslogAppender currently can't have an encoder. Ralph
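Since the same point keeps coming up in this exchange, a deliberately simplified, self-contained sketch of the two designs may be useful. All implementations here are invented for illustration (Rfc5424Encodable, Rfc5424ParamEncoder, the trivial House and HouseMessage as written below); none of them is an actual logback or slf4j-redesign class.

// Encoder-side dispatch: the encoder inspects each parameter at runtime.
interface Rfc5424Encodable {
    String toRfc5424StructuredData();
}

class Rfc5424ParamEncoder {
    String encode(Object param) {
        if (param instanceof Rfc5424Encodable) {
            return ((Rfc5424Encodable) param).toRfc5424StructuredData();
        }
        return String.valueOf(param); // fallback heuristic for unknown types
    }
}

// Message-side rendering: the logged object itself knows how to render meaningfully.
interface Message { String getFormattedMessage(); }

class House {
    private final String owner;
    House(String owner) { this.owner = owner; }
    String getOwner() { return owner; }
}

class HouseMessage implements Message {
    private final House house;
    HouseMessage(House house) { this.house = house; }
    public String getFormattedMessage() { return "house owner=" + house.getOwner(); }
}

The first variant requires the encoder (plus any sub-encoders) to be configured and consulted for every parameter; the second needs no configuration, but the rendering is fixed inside the message class.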

On 11/03/2010 2:21 AM, Ralph Goers wrote:
On Mar 10, 2010, at 12:46 AM, Ceki Gülcü wrote:
On 10/03/2010 2:41 AM, Ralph Goers wrote:
I am extremely tired of this discussion.
I am sorry to hear that. Do you feel that your arguments are not being heard properly? Or do you think that the matter under discussion admits such an obvious solution that it does not merit debate?
Yes. It is obvious if you've tried to do what you are proposing. It sounds very similar to the mess I had to deal with with EventData, except with EventData it can't be passed as a parameter since it is passed through a LocationAwareLogger, which is a point you keep ignoring.
Yes, unfortunately you can't pass parameters to LocationAwareLogger's log method which is a major impediment to wrapping intended as syntactic sugar.
On Mar 9, 2010, at 10:08 AM, Ceki Gülcü wrote:
Let's say you have a parameter of type 'House' you would like to log and you wrap it inside a new type called StructuredDataHouse and pass it to a logger as the first parameter (the message being "{}").
You can't do that on a LocationAwareLogger so this is impossible with any Logger implementation based on LoggerWrapper.
It's not the logger which wraps House but the user. The location of the logger call remains unchanged by the wrapping which is there for encoding purposes only. Here is an example:
StructuredDataHouse sdh = new StructuredDataHouse(house);
logger.info("{}", sdh);
If this encoding thing catches on, we could dispense with the wrapping thing altogether. You could register an RFC5424 "subencoder" for the House type and the RFC5424Encoder would look it up at runtime.
So you could just write:
logger.info("{}", house);
There is a big difference between an interface of
info(String msg, Object param);
and
info(Message msg)
although the code is similar:
logger.info(new HouseMessage(house));
the way it is processed in Logback is much, much different and a lot simpler.
The RFC5424 encoder and the transformer for House would just output the correct information.
The RFC5424Encoder detects that this parameter supports RFC5424 encoding and asks StructuredDataHouse for its RFC5424 encoded data. RFC5424Encoder only needs to deal with objects supporting RFC5424 encoding; there is no need to anticipate other encoding types.
1. You can have a bunch of parameters. The encoder has to check every one of them.
Yes, but that's just iterating over the parameters.
And having to have special logic to interpret them. What will it do with Objects that it doesn't understand? Probably just call toString() which may result in garbage from a lot of objects. With a Message you are guaranteed that it will generate something meaningful because that is the contract. The contract with Object is less than helpful.
2. I guess you'd also have to be able to configure a whole list of encoders and run through all of them to make sure each of your parameters was formatted correctly, even if the message doesn't contain data matching any of them.
Well, the encoder is unique per appender. However, as discussed above, there might be sub-encoders specific to each type you care about. For types without sub-encoders, some default heuristic would be applied, which I already mentioned.
Exactly. This is where it turns into a complicated nightmare. Encoders referencing sub-encoders all having to be managed in the configuration, and each sub-encoder would have to be called to see if it understands each Object. No configuration at all is required for a Message to render itself. The contract could probably be enhanced a little bit to pass getFormattedMessage a bit of information about the Layout or Appender so that it can have a bit of variety in rendering itself, but it should always render something meaningful.
We already have heuristics for encoding an arbitrary object. During serialization, message parameters are transformed into strings. Similarly, Log4jXMLLayout transforms message parameters into strings as well. There are several low-level encoding schemes worth mentioning, namely text, XML, object serialization, protobuf and RFC5424. When encoding an event in XML, the natural inclination is to encode the parameters in XML as well. When serializing an event, using serialization for the message parameters is equally natural. The same goes for RFC 5424 and protobuf. I am speculating, but are you assuming that encoding message parameters in XML is always appropriate (at least as a nice fallback)? If true, you can live with a single transformation method provided by the Message interface. However, the problem gets really interesting if you add the requirement to be able to retrieve the original message parameter, which opens up a whole bunch of new possibilities.
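A minimal sketch of what such a parameter-retaining contract could look like; the interface name and methods below are invented for illustration and are not taken from either proposal:

interface ParameterAwareMessage {
    String getMessagePattern();   // e.g. "House {} sold for {}"
    Object[] getParameters();     // the original objects, available to XML/RFC5424/protobuf encoders
    String getFormattedMessage(); // plain-text rendering, used as a fallback
}

An XML layout could then walk getParameters() and emit one element per parameter, while a console appender would only ever call getFormattedMessage().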
The end result is very similar to asking your StructuredDataMessage in the org.slf4j.message package for its formatted message, except that the question is asked by a RFC5424Encoder. A different encoder would ask a different question.
This is not similar at all. The layout/encoder/whatever calls getFormattedMessage and gets an appropriate response. Since everything is a Message the method is always there.
How is this different than toString()? Everything is an object and the toString() method is always there.
Yes. But it often doesn't do what you want. See above for the rest. Encoders may be useful doing things like compressing the data stream, but trying to get fancy like this is just a horrible idea.
You may be right but I think the idea is worth trying/experimenting with.
By the way, did you update the doc on the site? I don't see any reference to Encoders and had to look at the code to see that they are currently only used in classes extending OutputStreamAppender. So I guess a SocketAppender or SyslogAppender currently can't have an encoder.
Encoders are new in 0.9.19; the docs on the site will be updated when 0.9.19 comes out. They have already been updated in the git repo. -- Ceki

On 09.03.2010, at 14:29, Ceki Gülcü wrote:
class MessageLogger {
  Logger logger;
  void debug(Message msg) {
    logger.debug("{}", msg);
  }
  ...
}
BTW, I've started looking at both Ralph and Joern's proposals.
Thanks. The main difference is that the Message is assumed to reach the appender implementations unchanged, i.e. not transformed into a String. That way, appenders can implement special handling of certain Message implementations known to the appender. Only an appender like ConsoleAppender would actually use the formatted message. A specifically implemented DBAppender, for example, could store certain application-specific fields in special tables. Or, as in Ralph's case, if I remember correctly: take the structured data (a map) and store the various entries as required by the RFC. Cheers, Joern.
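A sketch of what this would look like on the appender side, with hypothetical types throughout (the Message stand-in from the earlier sketch, an invented StructuredDataMessage with a getData() accessor, and a made-up appender):

import java.util.Map;

interface Message { String getFormattedMessage(); } // same stand-in as before

interface StructuredDataMessage extends Message {
    Map<String, String> getData(); // invented accessor for the structured fields
}

class StructuredDbAppender {

    void append(Message message) {
        if (message instanceof StructuredDataMessage) {
            // known type: store the individual fields in dedicated columns
            storeFields(((StructuredDataMessage) message).getData());
        } else {
            // unknown type: fall back to the plain formatted text
            storeText(message.getFormattedMessage());
        }
    }

    private void storeFields(Map<String, String> fields) { /* DB insert elided */ }

    private void storeText(String text) { /* DB insert elided */ }
}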

Am 06.03.2010 18:13, schrieb Ceki Gülcü:
If you are used to log4j's NDC, having NDC in SLF4J is more comfortable than not having it. Otherwise, since MDC is semantically richer than NDC (one can trivially implement NDC over MDC), one can always get by using MDC instead of NDC. Another reason was that by scrapping NDC in SLF4J there was one less piece of code to maintain.
This might be a dumb question. What does an NDC give you that an MDC doesn't? AFAIK it's also important to keep the API for users simple and small. In particular, multiple options should be avoided. FWIW, I'd like to see an evolution of the Marker concept in a 2.0 version. I sometimes have the feeling that the current implementation is a bit over-engineered. In particular, the difference between attached and detached markers and their intended use can be confusing for clients. For example, sometimes clients might share markers through static variables. Suddenly somebody else attaches another marker to such a shared marker and all other log messages are polluted as well. This has some hidden implications which I /personally/ don't like in APIs. Frankly, I'd rather like to see a much smaller implementation. I often compare Markers with "tags". Everything is "tagged" these days. Thus, all a marker really needs is a good "#toString" method. :) Of course, there would then need to be some API to accept multiple "tags" per log message. -Gunnar -- Gunnar Wagenknecht gunnar@wagenknecht.org http://wagenknecht.org/

On 08.03.2010, at 22:38, Gunnar Wagenknecht wrote:
Am 06.03.2010 18:13, schrieb Ceki Gülcü:
If you are used to log4j's NDC, having NDC in SLF4J is more comfortable than not having it. Otherwise, since MDC is semantically richer than NDC (one can trivially implement NDC over MDC), one can always get by using MDC instead of NDC. Another reason was that by scrapping NDC in SLF4J there was one less piece of code to maintain.
This might be a dumb question. What does an NDC give you that an MDC doesn't? AFAIK it's also important to keep the API for users simple and small. In particular, multiple options should be avoided.
Please take a look at my examples: http://sourceforge.net/apps/trac/lilith/wiki/NestedDiagnosticContext . Much like an ordinary stack-trace, the NDC is actually a stack. The stacking of messages is the key here. The main difference between the MDC and (my implementation of) the NDC is that the NDC also supports the same Message type as the one I proposed for logging in general. This means that the actual formatting of the message isn't performed if it's not really needed (for example, if the Appenders in Logback are not printing the NDC but choose to ignore it). With MDC, on the other hand, one would have to format the message anyway. Using only the MDC without any implementation of the NDC (like the one in slf4j-ext) would be quite awkward since the MDC does not have the stacked/ordered nature that NDC has. It's quite similar to List vs. Map. While a Map is very useful for mapping stuff ;) it's not an ideal collection for an ordered sequence, i.e. a list.
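To make the List-vs-Map distinction concrete, here is a minimal, self-contained push/pop NDC backed by a ThreadLocal stack. It is neither the Lilith NDC nor part of any SLF4J proposal, and it deliberately ignores Messages and lazy formatting:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class SimpleNdc {

    private static final ThreadLocal<List<String>> STACK =
            new ThreadLocal<List<String>>() {
                protected List<String> initialValue() {
                    return new ArrayList<String>();
                }
            };

    private SimpleNdc() {}

    static void push(String context) {
        STACK.get().add(context);
    }

    static void pop() {
        List<String> stack = STACK.get();
        if (!stack.isEmpty()) {
            stack.remove(stack.size() - 1);
        }
    }

    static List<String> copyOfStack() {
        return Collections.unmodifiableList(new ArrayList<String>(STACK.get()));
    }

    static void clear() {
        STACK.get().clear();
    }
}

An MDC, by contrast, only offers put/get/remove on independent keys, so the caller would have to emulate the nesting itself.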
FWIW, I'd like to see an evolution of the Marker concept in a 2.0 version. I sometimes have the feeling that the current implementation is a bit over-engineered. In particular, the difference between attached and detached markers and their intended use can be confusing for clients. For example, sometimes clients might share markers through static variables. Suddenly somebody else attaches another marker to such a shared marker and all other log messages are polluted as well. This has some hidden implications which I /personally/ don't like in APIs.
I also think that this is a shortcoming of the Marker concept but this can't be changed without breaking stuff.
Frankly, I'd rather like to see a much smaller implementation. I often compare Markers with "tags". Everything is "tagged" these days. Thus, all a marker really needs is a good "#toString" method. :) Of course, there would then need to be some API to accept multiple "tags" per log message.
Hm.
-Gunnar
Cheers, Joern.

Am 09.03.2010 12:20, schrieb Joern Huxhorn:
Much like an ordinary stack-trace, the NDC is actually a stack. The stacking of messages is the key here.
I know. But looking at your examples, I don't see something that the current Java stack trace wouldn't give me already. The logging of method arguments might be key here but a stack only really makes sense if you use recursion. Another issue is that the API usage is quite complicated for clients. They need to wrap each usage into try/finally blocks. However, I can see some value in having both - a simple to use MDC as well as an NDC. I wonder if it's possible to merge both into a central API "DiagnoseContext" or just DC.
The main difference between the MDC and (my implementation of) the NDC is that the NDC also supports the same Message type as the one I proposed for logging in general. This means that the actual formatting of the message isn't performed if it's not really needed (for example, if the Appenders in Logback are not printing the NDC but choose to ignore it).
That's just an implementation detail. MDC can be changed as well. Remember, we are talking about a version 2.0 which means breaking API changes anyway.
With MDC, on the other hand, one would have to format the message anyway.
Deferring formatting is good (performance wise). I think such a message class (under the covers) should be a key concept in SLF4J 2.0. -Gunnar -- Gunnar Wagenknecht gunnar@wagenknecht.org http://wagenknecht.org/

On 09.03.2010, at 17:00, Gunnar Wagenknecht wrote:
Am 09.03.2010 12:20, schrieb Joern Huxhorn:
Much like an ordinary stack-trace, the NDC is actually a stack. The stacking of messages is the key here.
I know. But looking at your examples, I don't see something that the current Java stack trace wouldn't give me already. The logging of method arguments might be key here but a stack only really makes sense if you use recursion. Another issue is that the API usage is quite complicated for clients. They need to wrap each usage into try/finally blocks.
Well, it's not strictly necessary to wrap the usage in try-finally since it's also possible to clear the NDC. But my example is the only really fool-proof way that guarantees the NDC is cleaned up both when the scope is left normally and when an exception is thrown. The main difference between a normal stacktrace and an NDC is that the stacktrace says "I'm here (and there and there)" while the NDC would say "I'm doing this (while doing this and that)".
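In code, the fool-proof idiom looks like this, reusing the hypothetical SimpleNdc from the earlier sketch (validate and persist are empty stubs):

class OrderService {

    void placeOrder(String orderId) {
        SimpleNdc.push("placing order " + orderId);
        try {
            validate(orderId);
            persist(orderId);
        } finally {
            SimpleNdc.pop(); // runs on normal exit and on exception alike
        }
    }

    private void validate(String orderId) { /* elided */ }

    private void persist(String orderId) { /* elided */ }
}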
However, I can see some value in having both - a simple to use MDC as well as an NDC. I wonder if it's possible to merge both into a central API "DiagnoseContext" or just DC.
The main difference between the MDC and (my implementation of) the NDC is that the NDC also supports the same Message type as the one I proposed for logging in general. This means that the actual formatting of the message isn't performed if it's not really needed (for example, if the Appenders in Logback are not printing the NDC but choose to ignore it).
That's just an implementation detail. MDC can be changed as well. Remember, we are talking about a version 2.0 which means breaking API changes anyway.
That's not entirely true. The way I planned it, switching to the new API is done simply by adding an ".n" to the package of Logger and LoggerFactory. The rest of the API changes are source-level compatible. This was a primary concern while doing it. I wanted to make it easy, with three exclamation marks, for people who want to switch over. Anything else wouldn't fly and would only serve to annoy people. Some people aren't very fond of changes and would rather stay with an old API instead of taking a look at a new one.
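As a concrete illustration of the switch at a call site, assuming (as Joern describes) that the org.slf4j.n Logger and LoggerFactory from his branch keep the familiar signatures; only the imports change:

// before: import org.slf4j.Logger; import org.slf4j.LoggerFactory;
import org.slf4j.n.Logger;
import org.slf4j.n.LoggerFactory;

class Example {

    private static final Logger logger = LoggerFactory.getLogger(Example.class);

    void doWork(String user) {
        // the call sites themselves stay source-compatible
        logger.debug("doing work for {}", user);
    }
}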
With MDC, on the other hand, one would have to format the message anyway.
Deferring formatting is good (performance wise). I think such a message class (under the covers) should be a key concept in SLF4J 2.0.
It's actually somewhat of a double-edged sword. The formatting itself should be deferred as long as possible, but transforming the arguments into Strings should be done early, i.e. definitely in the same thread, at the time the logging call is made. Otherwise the arguments might change before that transformation is done. Cheers, Joern.
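A small sketch of that trade-off, with invented names (it is not SLF4J's actual formatter): the arguments are frozen to Strings eagerly in the calling thread, while substituting them into the pattern is deferred and cached.

final class DeferredMessage {

    private final String pattern;
    private final String[] frozenArgs; // snapshot taken at the call site
    private String formatted;          // built lazily, cached afterwards

    DeferredMessage(String pattern, Object... args) {
        this.pattern = pattern;
        this.frozenArgs = new String[args.length];
        for (int i = 0; i < args.length; i++) {
            // done eagerly so later mutation of an argument cannot change the log output
            this.frozenArgs[i] = String.valueOf(args[i]);
        }
    }

    String getFormattedMessage() {
        if (formatted == null) {
            StringBuilder sb = new StringBuilder(pattern);
            int searchFrom = 0;
            for (String arg : frozenArgs) {
                int idx = sb.indexOf("{}", searchFrom);
                if (idx < 0) {
                    break; // more arguments than placeholders
                }
                sb.replace(idx, idx + 2, arg);
                searchFrom = idx + arg.length();
            }
            formatted = sb.toString();
        }
        return formatted;
    }
}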

Am 09.03.2010 19:40, schrieb Joern Huxhorn:
[..]. Some people aren't very fond of changes and would rather stay with an old API instead of taking a look at a new one.
That's a no-brainer because your proposal leaves the old API in place. Someone would just have to write a wrapper that forwards calls from the old API to the new one. There is no reason to keep two source-compatible APIs in different packages. -Gunnar -- Gunnar Wagenknecht gunnar@wagenknecht.org http://wagenknecht.org/

On 09.03.2010, at 19:47, Gunnar Wagenknecht wrote:
Am 09.03.2010 19:40, schrieb Joern Huxhorn:
[..]. Some people aren't very fond of changes and would rather stay with an old API instead of taking a look at a new one.
That's a no-brainer because your proposal leaves the old API in place. Someone would just have to write a wrapper that forwards calls from the old API to the new one.
There is no reason to keep two source-compatible APIs in different packages.
I dare to disagree. They are compatible when switching from org.slf4j to org.slf4j.n, but not the other way around. Joern.

I've updated my branch at http://github.com/huxi/slf4j/tree/slf4j-redesign . I documented Message and implemented an AbstractMessage (which already performs the lazy initialization and caching of the formatted message). ParameterizedMessage and JavaUtilFormatterMessage now extend this AbstractMessage instead of implementing that behavior manually. Cheers, Joern.
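As a rough illustration of how a concrete message class might plug into such a base class, here is a hypothetical subclass in the spirit of JavaUtilFormatterMessage, reusing the AbstractCachingMessage stand-in from the sketch earlier in this thread; the real class in the branch will differ:

final class JavaUtilFormatMessage extends AbstractCachingMessage {

    private final String format;
    private final Object[] args;

    JavaUtilFormatMessage(String format, Object... args) {
        this.format = format;
        this.args = args;
    }

    protected String formatMessage() {
        // java.util.Formatter syntax, e.g. "%s withdrew %,d rupees"
        return String.format(format, args);
    }
}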

On 08/03/2010 10:38 PM, Gunnar Wagenknecht wrote:
Am 06.03.2010 18:13, schrieb Ceki Gülcü:
If you are used to log4j's NDC, having NDC in SLF4J is more comfortable than not having it. Otherwise, since MDC is semantically richer than NDC (one can trivially implement NDC over MDC), one can always get by using MDC instead of NDC. Another reason was that by scrapping NDC in SLF4J there was one less piece of code to maintain.
This might be a dumb question. What does an NDC give you that an MDC doesn't? AFAIK it's also important to keep the API for users simple and small. In particular, multiple options should be avoided.
FWIW, I'd like to see an evolution of the Marker concept in a 2.0 version. I sometimes have the feeling that the current implementation is a bit over-engineered. In particular, the difference between attached and detached markers and their intended use can be confusing for clients. For example, sometimes clients might share markers through static variables. Suddenly somebody else attaches another marker to such a shared marker and all other log messages are polluted as well. This has some hidden implications which I /personally/ don't like in APIs.
Ouch. When multiple markers are needed, the idea is to pass the most specific marker as the argument to the logger request. This is rather counter-intuitive. So if a request needs to be marked as CONFIDENTIAL and as DATABASE, you would create a new marker referencing both.
Marker m = MarkerFactory.getMarker("COMBI");
m.add(MarkerFactory.getMarker("CONFIDENTIAL"));
m.add(MarkerFactory.getMarker("DATABASE"));
logger.info(m, "Donald Duck withdrew 1'000 rupees");
Adding a DATABASE marker to CONFIDENTIAL or the other way around would be quite wrong.
Frankly, I'd rather like to see a much smaller implementation. I often compare Markers with "tags". Everything is "tagged" these days. Thus, all a marker really needs is a good "#toString" method. :) Of course, there would then need to be some API to accept multiple "tags" per log message.
I agree. The only elegant way I see for a log message to accept multiple markers is to have an event to which markers are added. (The user would progressively build the event to be logged.) Passing a marker array or marker list before calling the logger is another possibility.
-Gunnar
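A sketch of the "progressively built event" idea, using invented builder names; only the single-marker logger.info(Marker, String) call at the end is existing SLF4J API, and a real implementation would carry the whole marker list to the backend instead of collapsing it:

import java.util.ArrayList;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

final class LogEventBuilder {

    private final List<Marker> markers = new ArrayList<Marker>();
    private String message;

    LogEventBuilder marker(Marker marker) {
        markers.add(marker); // any number of independent markers
        return this;
    }

    LogEventBuilder message(String message) {
        this.message = message;
        return this;
    }

    void logTo(Logger logger) {
        // collapse to the existing single-marker API: a detached combining marker,
        // so no shared marker instance gets polluted
        Marker combined = MarkerFactory.getDetachedMarker("COMBINED");
        for (Marker m : markers) {
            combined.add(m);
        }
        logger.info(combined, message);
    }
}

A call would then read: new LogEventBuilder().marker(confidential).marker(database).message("Donald Duck withdrew 1'000 rupees").logTo(logger);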

Am 09.03.2010 14:43, schrieb Ceki Gülcü:
So if a request needs to be marked as CONFIDENTIAL and as DATABASE, you would create a new marker referencing both.
Marker m = MarkerFactory.getMarker("COMBI"); m.add(MarkerFactory.getMarker("CONFIDENTIAL")); m.add(MarkerFactory.getMarker("DATABASE"));
logger.info(m, "Donald Duck withdrew 1'000 rupees");
Ouch. That's not obvious just from looking at the API. Just to give you an idea of what I have in mind: http://bit.ly/cOsi2W Actually, I separated logging from tracing. I even went so far as to replace log levels with tags. A developer just adds tags. In the end you have rules/filters, configured by an administrator, which decide the importance of a log message at runtime. The reason is that I often found code where developers just didn't care about correct log levels but mixed debug with info, etc. Logging can also be very useful for business events. That's when you need structured and rich data. But debugging/tracing doesn't need rich data. Thus, I thought about such a drastic split between the two and also forced such a split in the APIs. -Gunnar -- Gunnar Wagenknecht gunnar@wagenknecht.org http://wagenknecht.org/
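Just to visualize the direction Gunnar sketches, with names invented here rather than taken from his linked prototype: the developer only attaches tags, and importance is decided later by admin-configured rules.

interface TaggedLogger {
    // no level on the call; rules/filters decide the importance at runtime
    void log(String message, String... tags);
}

// e.g. taggedLogger.log("order 42 rejected", "orders", "validation", "business-event");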

Yes. Please add the use of Messages and support for StructuredData. I'm fine with requiring Java 5. Ralph
On Mar 6, 2010, at 5:48 AM, Ceki Gülcü wrote:
Hello all,
Here are the 4 items I'd like to address in SLF4J 2.0:
1) Varargs for Logger methods http://bugzilla.slf4j.org/show_bug.cgi?id=31 Require JDK 1.5 and remain binary compatible as explained in my comment #31 dated 2009-03-25
2) logging exception if last argument, as explained by Joern in http://bugzilla.slf4j.org/show_bug.cgi?id=43
3) Avoid bogus incompatibility warnings http://bugzilla.slf4j.org/show_bug.cgi?id=154
4) fix http://bugzilla.slf4j.org/show_bug.cgi?id=170 possibly with a nop implementation of org.apache.log4j.NDC
Are there any other major items? Is everyone OK with requiring JDK 1.5 in SLF4J 2.0?
-- Ceki