How to contribute to logback?

I've cloned logback on GitHub and implemented an RFE that I filed. I went ahead and submitted a pull request, https://github.com/qos-ch/logback/pull/63, though I noticed that there are a lot of outstanding pull requests that don't seem to be getting addressed. Is there something that developers looking to contribute need to do, or is it the developers' intention not to accept contributions from the community? With regard to this change specifically, we could obviously just use my forked copy, but we would prefer to use an officially released version, and I do feel that other people could benefit from the changes.

Regards,
Tommy Becker

Submitting pull requests is the way to go. You used attributes for passing arguments to SiftingAppender. It is far easier to use nested elements, in which case you don't have to code anything in SiftAction: Joran (the XML configurator) or Gaffer (the Groovy configurator) will inject the arguments into the SiftingAppender instance automatically. Regarding release of resources, SMTPAppender, which also uses AppenderTracker, releases resources whenever an event carries the marker "FINALIZE_SESSION". This is more convenient than waiting for a timeout, as resources are released immediately. Would such an approach work for you? In other words, can you identify a point in your code after which resources should be released?

--
Ceki
http://twitter.com/#!/ceki
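For illustration, an element-style SiftingAppender configuration along the lines Ceki suggests might look like this. The `<timeout>` and `<maxAppenders>` element names mirror the options discussed in this thread and are hypothetical (they come from the pull request, not a released logback at the time of writing); the `jobId` discriminator key is likewise illustrative:

```
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <!-- the discriminator picks the MDC key that routes events to sub-appenders -->
  <discriminator>
    <key>jobId</key>
    <defaultValue>unknown</defaultValue>
  </discriminator>
  <!-- hypothetical elements from pull request #63; because they are nested
       elements, Joran injects them via setters with no SiftAction changes -->
  <timeout>30 minutes</timeout>
  <maxAppenders>100</maxAppenders>
  <sift>
    <appender name="FILE-${jobId}" class="ch.qos.logback.core.FileAppender">
      <file>job-${jobId}.log</file>
      <encoder>
        <pattern>%d %level %msg%n</pattern>
      </encoder>
    </appender>
  </sift>
</appender>
```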

Thanks, I'll look into changing the configuration to use elements. I was not aware of the FINALIZE_SESSION marker, though I don't think it would work for our use case. My RFE was originally just to make the appender timeout configurable. But then I thought about it more and decided the real problem is that there is no way to cap the number of sub-appenders (and the scarce resources they consume, like file descriptors) that can be spun up in response to a burst of activity. In our case, we expose a job engine to clients and use SiftingAppender to direct each job to its own log. When we got a flood of new job submissions, we ran out of FDs, which crippled the system in all sorts of ways that should not be affected by logging. But now we can cap the number of appenders we want to allow, and clients don't need to know to pass a marker stating they're done with the logger. So I guess I'm saying that although the marker is nice, the maxAppenders setting is more like a safety valve to keep Bad Things from happening ;)

-Tommy

_______________________________________________
logback-dev mailing list
logback-dev@qos.ch
http://mailman.qos.ch/mailman/listinfo/logback-dev

On 24.10.2012 20:46, Becker, Thomas wrote:
> Thanks, I'll look into changing the configuration to use elements. I was not aware of the FINALIZE_SESSION marker, though I don't think it would work for our use case. [...]
Capping the max number of sub-appenders sounds like what *not* to do in your scenario. For example, if the cap is 100 and 101 requests are received in a short amount of time, then you will be prematurely opening and closing sub-appenders in the scenario you describe. Reconfiguring a sub-appender is not exactly cheap. I reiterate my question: can you identify an end-of-session point in your code after which resources can be released?

--
Ceki
65% of statistics are made up on the spot
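The thrashing Ceki warns about can be sketched with a plain-JDK simulation of an LRU-capped sub-appender cache (the names here are illustrative, not logback's actual code): with a cap of 3 and four jobs logging round-robin, every lookup after warm-up misses, so each round evicts and would re-open all four appenders.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class ThrashingDemo {
    /** Counts evictions for an LRU cache capped at {@code cap} sub-appenders. */
    static int evictionsFor(final int cap, int rounds, String[] keys) {
        final int[] evictions = {0};
        // accessOrder = true makes the eldest entry the least recently used one
        Map<String, String> cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                if (size() > cap) { evictions[0]++; return true; }
                return false;
            }
        };
        for (int r = 0; r < rounds; r++) {
            for (String k : keys) {
                if (!cache.containsKey(k)) {
                    cache.put(k, "appender-for-" + k); // a miss "re-opens" an appender
                }
            }
        }
        return evictions[0];
    }

    public static void main(String[] args) {
        // 4 jobs logging round-robin through a cache capped at 3 appenders:
        // 1 eviction during warm-up, then 4 per subsequent round.
        System.out.println("evictions=" +
                evictionsFor(3, 3, new String[]{"job-1", "job-2", "job-3", "job-4"}));
        // → evictions=9
    }
}
```

With a cap of 4 (at least the working-set size) the same workload causes zero evictions, which is the trade-off both sides of the thread are weighing.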

I won't say that I couldn't reconfigure the code so that such an "end-of-session" point could be identified. I will say that I don't think that is a general solution to the problem. In your scenario you are correct that with my changes the 101st request will result in the oldest appender getting closed and a new one getting opened, and that "thrashing" will continue as long as we hover above the 100-request mark, yes. But things will work, and work as they should. If performance is degraded, my options are to decide whether I can afford to increase this maximum (which, keep in mind, is a maximum I deliberately chose, since the default is unbounded), or to address it some other way. My application will not go down in flames because I can't open a socket or some other thing that requires an FD, just because my logging system has decided it can consume as many as it wants. I would consider a temporary performance degradation preferable to failure, wouldn't you?

Regards,
Tommy

Just as an additional argument for this feature, I'd like to add that on Linux you are limited by default to 1024 file descriptors per process. Even if I were able to allocate all of them for logback appenders, and even if every job sent the "end-of-session" token on completion, it would still not be enough, because I have a requirement to handle more than that many *concurrent* jobs in the system. In general this shouldn't be a problem, since most jobs log messages relatively infrequently and some may be very long-running (days). But I cannot guarantee that some number of jobs won't all log a message within some fixed time period (30 minutes or otherwise), which would exhaust the FDs with the current implementation. In this scenario I'm not all that concerned about the performance overhead of starting and stopping appenders; I'm much more concerned about correctness and not exhausting limited resources. In general I agree with your point that it is reasonable for the application to attempt to convey that it is finished with the logger. But considering that SiftingAppender has the potential to allocate and hold a large number of file descriptors, I think some configuration knobs are warranted. The default behavior with my changes is exactly as before, and the implementation is actually notably less complex, due to replacing the DIY linked-list implementation in AppenderTracker with a LinkedHashMap.

-Tommy
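The LinkedHashMap-based tracker Tommy describes can be sketched roughly as follows. This is a simplified stand-in, not logback's actual AppenderTracker; the names `BoundedAppenderTracker` and `Closer` are hypothetical. An access-ordered LinkedHashMap with `removeEldestEntry` overridden gives LRU eviction, and closing the evicted appender in that hook releases its FD immediately:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class BoundedAppenderTracker<A> {
    /** Callback used to release an evicted sub-appender's resources (e.g. its FD). */
    interface Closer<A> { void close(A appender); }

    private final Map<String, A> byKey;

    BoundedAppenderTracker(final int maxAppenders, final Closer<A> closer) {
        // accessOrder = true makes the eldest entry the least recently used one
        this.byKey = new LinkedHashMap<String, A>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, A> eldest) {
                if (maxAppenders > 0 && size() > maxAppenders) {
                    closer.close(eldest.getValue()); // release resources immediately
                    return true;                     // and evict the LRU entry
                }
                return false;                        // maxAppenders <= 0: unbounded
            }
        };
    }

    void put(String key, A appender) { byKey.put(key, appender); }
    A get(String key) { return byKey.get(key); }
    int size() { return byKey.size(); }

    public static void main(String[] args) {
        BoundedAppenderTracker<String> tracker =
                new BoundedAppenderTracker<>(2, a -> System.out.println("closed " + a));
        tracker.put("job-1", "appender-1");
        tracker.put("job-2", "appender-2");
        tracker.put("job-3", "appender-3"); // cap exceeded: prints "closed appender-1"
    }
}
```

Because `get` counts as an access, recently used appenders stay alive and the cap acts as the safety valve described above: the default (non-positive cap) keeps the old unbounded behavior.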
Participants (2):
- Becker, Thomas
- ceki