how to remove old log files in this sifting appender configuration

Hi

I'm trying to resolve a scenario with the following sifting appender configuration, but I'm running into "too many open files" problems. I use logback to produce CSV files. The ${hour} key is set based on the content (within my %msg). Rollover is every minute.

<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>hour</key>
    <defaultValue>unknown</defaultValue>
  </discriminator>
  <sift>
    <appender name="DATA-${hour}" class="com.nexustelecom.minaclient.writer.MessageAppender">
      <file>${java.io.tmpdir}/data-${hour}.csv</file>
      <encoder>
        <pattern>%msg</pattern>
      </encoder>
      <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${java.io.tmpdir}/data-${hour}-%d{yyyyMMddHHmm}.csv</fileNamePattern>
        <cleanHistoryOnStart>true</cleanHistoryOnStart>
        <maxHistory>3</maxHistory>
      </rollingPolicy>
    </appender>
  </sift>
</appender>

Suppose I started the application at 11 and the current time is 16: most of the data goes into data-16.csv, some goes into data-15.csv and data-14.csv, but nothing goes into older data files. After a while I accumulate more and more old files, both active and rolled, e.g. data-11.csv, data-12.csv, and data-11-201207111125.csv, data-11-201207111132.csv, and so on. Such old files (from 11 or 12) are under logback's control, but since they no longer receive data, they stay open forever and are never cleaned up; there are no new events for these files. At the same time, the JVM shows an increasing number of active threads (probably the loggers), which causes problems after a few hours.

Is there a way to configure logback to handle my scenario?

On 11.07.2012 16:13, Andreas Kruthoff wrote the message above.

How do you set the 'hour' key?

-- Ceki
http://tinyurl.com/proLogback

My %msg contains a field which specifies the hour. I found it convenient to call MDC.put("hour", timestampHour); Unfortunately, the data within the messages isn't ordered by time, but I want to save the data in a timeseries-like manner to my output files: all messages of hour 11 go into one file, hour 12 into another, and so on. I create such a file once per minute.
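For reference, a minimal sketch of deriving such an hour key from each record's own timestamp before logging. Only the MDC.put("hour", ...) convention comes from this thread; the helper class, the UTC zone, and the call-site names in the comments are assumptions for illustration.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class HourKey {

    // Two-digit hour formatter; UTC is an assumption here, the real
    // application may use the local zone instead.
    private static final DateTimeFormatter HOUR_FMT =
            DateTimeFormatter.ofPattern("HH").withZone(ZoneOffset.UTC);

    // Derive the discriminator value from the record's own timestamp,
    // not the wall clock, since messages arrive out of time order.
    static String hourKey(long epochMillis) {
        return HOUR_FMT.format(Instant.ofEpochMilli(epochMillis));
    }

    public static void main(String[] args) {
        // At the real call site, per record (hypothetical names):
        //   MDC.put("hour", hourKey(record.getTimestampMillis()));
        //   logger.info("{}", record.toCsvLine());
        System.out.println(hourKey(0L)); // epoch start falls in hour 00 UTC
    }
}
```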

On 11.07.2012 16:13, Andreas Kruthoff wrote the message above.

Could it be that com.nexustelecom.minaclient.writer.MessageAppender is not closing files? There are 24 hours in a day, so you should not have more than 24 active appenders...

-- Ceki
http://tinyurl.com/proLogback

_______________________________________________
Logback-user mailing list
Logback-user@qos.ch
http://mailman.qos.ch/mailman/listinfo/logback-user

MessageAppender extends RollingFileAppender. I override rollover() to notify an Observable, but I'll verify the part about the increasing threads (by using RollingFileAppender directly in logback.xml)...

On 07/11/2012 04:46 PM, ceki wrote:
Could it be that com.nexustelecom.minaclient.writer.MessageAppender is not closing files? There are 24 hours in a day, so you should not have more than 24 active appenders...
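The rollover notification described above can be sketched in pure Java. The listener interface and method names below are invented for illustration; the real MessageAppender (not shown in the thread) would override rollover(), call super.rollover() on RollingFileAppender first, and then fire a hook like this one.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Pure-Java sketch of the rollover-notification idea (names invented).
// Extracted from the appender so the sketch is self-contained; a real
// subclass of RollingFileAppender would delegate to this after rolling.
class RolloverNotifier {

    interface RolloverListener {
        void onRollover(String rolledFileName);
    }

    private final List<RolloverListener> listeners = new CopyOnWriteArrayList<>();

    void addListener(RolloverListener l) {
        listeners.add(l);
    }

    // Stand-in for the post-rollover hook: tell every registered
    // observer which file has just been rolled.
    void fireRollover(String rolledFileName) {
        for (RolloverListener l : listeners) {
            l.onRollover(rolledFileName);
        }
    }
}
```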

Correct, the number of threads remains stable after replacing MessageAppender with RollingFileAppender. But the number of produced files is still increasing, because old files aren't getting events anymore, and logback depends on such events. I'm wondering whether this scenario can be solved with logback at all. Maybe I need to write my own rolling policy...? <thinking...>

On 11.07.2012 17:10, Andreas Kruthoff wrote:
Correct, the number of threads remains stable after replacing MessageAppender with RollingFileAppender. But the number of produced files is still increasing, because old files aren't getting events anymore, and logback depends on such events.

SiftingAppender cleans unused appenders after 30 minutes.

-- Ceki
http://tinyurl.com/proLogback

On 07/11/2012 05:30 PM, ceki wrote:
SiftingAppender cleans unused appenders after 30 minutes.

Will you make it configurable in a future release of logback? I could use such a feature.

On 17.07.2012 15:15, Andreas Kruthoff wrote:
Will you make it configurable in a future release of logback? I could use such a feature.

If there is demand for it, then yes. If you would be interested in such a feature, please create a jira issue asking for it. SMTPAppender, which shares similarities with SiftingAppender, will discard buffers when a logging event with the marker "FINALIZE_SESSION" is encountered. See [1, 2]. So it would be quite easy to add functionality to immediately discard an appender upon an event determined by the developer.

[1] http://tinyurl.com/cyh5mv9
[2] http://tinyurl.com/cho3g5k

-- Ceki
http://tinyurl.com/proLogback
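The discard-on-marker convention can be sketched as follows. The class and method names are invented for illustration; only the "FINALIZE_SESSION" marker name comes from the SMTPAppender behavior mentioned above. A real appender would read the marker from the logging event itself, rather than take a string.

```java
// Sketch of the discard-on-marker convention (class/method names invented).
// A real implementation would inspect the event's marker inside the
// appender; the marker name is passed directly here to keep the sketch
// standalone.
class SessionFinalizer {

    static final String FINALIZE_SESSION = "FINALIZE_SESSION";

    // True when the event carries the marker that should cause the
    // sifting appender to close and discard the nested appender.
    static boolean shouldDiscard(String markerName) {
        return FINALIZE_SESSION.equals(markerName);
    }
}
```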

FYI, I created an RFE for this back in February. There definitely needs to be a way to make this timeout shorter and make SiftingAppender clean up file descriptors more aggressively.

http://jira.qos.ch/browse/LOGBACK-244

-- Thomas Becker

I've referenced http://jira.qos.ch/browse/LOGBACK-244 from http://jira.qos.ch/browse/LOGBACK-724

Thanks for the hint.

-andreas

On 07/26/2012 05:16 PM, Becker, Thomas wrote:
FYI, I created an RFE for this back in February. There definitely needs to be a way to make this timeout shorter and make SiftingAppender clean up file descriptors more aggressively.
http://jira.qos.ch/browse/LOGBACK-244
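For readers finding this thread later: the issues above led to SiftingAppender gaining a configurable lifespan in subsequent logback releases. Assuming such a release, a shorter nested-appender timeout could be configured roughly like this (the element names are taken from later logback documentation, so verify them against the version actually in use):

```xml
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <!-- Close and discard a nested appender after 10 minutes without
       events (the default discussed above is 30 minutes). -->
  <timeout>10 minutes</timeout>
  <!-- Cap the number of simultaneously active nested appenders. -->
  <maxAppenderCount>24</maxAppenderCount>
  <discriminator>
    <key>hour</key>
    <defaultValue>unknown</defaultValue>
  </discriminator>
  <sift>
    <!-- nested appender as in the original configuration -->
  </sift>
</appender>
```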
participants (3)
- Andreas Kruthoff
- Becker, Thomas
- ceki