I was also looking at this code because it turns out to be a relatively high CPU consumer in a test:
CPU SAMPLES BEGIN (total = 4077726) Fri Aug 22 18:54:48 2014
rank   self  accum    count  trace method
   1 59.97% 59.97%  2445241 301175 io.netty.channel.epoll.Native.epollWait
   2 26.54% 86.51%  1082218 301216 io.netty.channel.epoll.Native.writevAddresses
   3  4.85% 91.36%   197836 301205 java.net.SocketInputStream.socketRead0
   4  1.91% 93.26%    77790 301261 io.netty.channel.epoll.Native.readAddress
   5  0.49% 93.76%    20139 301591 java.lang.ThreadLocal$ThreadLocalMap.getEntryAfterMiss
   6  0.16% 93.92%     6606 301660 ch.qos.logback.classic.LoggerContext.getTurboFilterChainDecision_0_3OrMore
This is a real-time ad bidder handling 20K qps on an 8-core machine, so almost all CPU goes to low-level networking, partly due to JVM overheads around native I/O. If I disregard the top 4 methods (93.26% accumulated) and scale the remaining application CPU usage (6.74% -> 100%), the 0.16% absolute CPU spent in getTurboFilterChainDecision_0_3OrMore() becomes ~2.4% (0.16 / 6.74) of all CPU used by the rest of my app. That seems like high overhead just for logging decisions (the app has lots of logging code, but it always uses isXxxEnabled() guards except when the log message is a literal string).
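To illustrate the guard pattern I mean, here is a minimal sketch over the SLF4J API (BidProcessor and its parameters are hypothetical stand-ins, not code from the app):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BidProcessor {
    private static final Logger log = LoggerFactory.getLogger(BidProcessor.class);

    void process(String bidId, long priceMicros) {
        // A literal message needs no guard: there are no arguments to wrap.
        log.debug("processing bid");

        // A parameterized call is guarded so the Object[] varargs wrapping
        // (and any argument evaluation) only happens when DEBUG is enabled.
        // Note that isDebugEnabled() itself still runs the TurboFilter chain,
        // which is where getTurboFilterChainDecision_0_3OrMore() shows up.
        if (log.isDebugEnabled()) {
            log.debug("processing bid {} at price {}", bidId, priceMicros);
        }
    }
}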
I noticed that the Object[] params is never used by any of the existing TurboFilters. I guess the params are passed because this is an extension point: somebody may write their own TurboFilter that considers the params. But maybe this could be handled differently, e.g., with a marker interface that tells the chain a filter needs to receive the params; see the sketch below.
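As a sketch of that idea (ParamsAwareTurboFilter and the chain class below are purely hypothetical, not an existing logback API), the chain could precompute whether any installed filter opted in, and skip the wrapper allocation otherwise:

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.turbo.TurboFilter;
import ch.qos.logback.core.spi.FilterReply;
import org.slf4j.Marker;

// Hypothetical: filters that actually inspect the params opt in via this
// marker interface; the chain only materializes the Object[] for them.
interface ParamsAwareTurboFilter {
}

class TurboFilterChainSketch {
    private final TurboFilter[] filters;
    private final boolean anyParamsAware; // computed once, when filters change

    TurboFilterChainSketch(TurboFilter[] filters) {
        this.filters = filters;
        boolean aware = false;
        for (TurboFilter f : filters) {
            aware |= f instanceof ParamsAwareTurboFilter;
        }
        this.anyParamsAware = aware;
    }

    FilterReply decide(Marker marker, Logger logger, Level level,
                       String format, Object param, Throwable t) {
        // Only wrap the single param if some filter will actually look at it.
        Object[] params = anyParamsAware ? new Object[] { param } : null;
        for (TurboFilter f : filters) {
            FilterReply reply = f.decide(marker, logger, level, format, params, t);
            if (reply != FilterReply.NEUTRAL) {
                return reply;
            }
        }
        return FilterReply.NEUTRAL;
    }
}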
Another inefficiency: when you have <3 params and the logging decision is ACCEPT, the wrapper Object[] is created twice. For example, filterAndLog_1() first invokes getTurboFilterChainDecision_1(), which creates an Object[] just to invoke getTurboFilterChainDecision(); later, filterAndLog_1() creates its own Object[] wrapper for calling buildLoggingEventAndAppend(). This may not matter much when the event is actually logged (an extra Object[] is tiny compared to the full logging work), but appenders can also decide not to log. In practice I expect that to be a common scenario in apps that configure most of their loggers "open" and use the appenders as an easy on/off switch for the whole firehose.
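To spell out the double wrapping, here is a simplified paraphrase of the 1-param path as I read the source; the method names mirror logback's Logger/LoggerContext, but the bodies are stubbed down to just the two allocations:

// Simplified paraphrase, not verbatim logback code.
enum FilterReply { DENY, NEUTRAL, ACCEPT }

class OneParamPathSketch {

    // ~ LoggerContext.getTurboFilterChainDecision_1()
    FilterReply getTurboFilterChainDecision_1(Object param) {
        // Allocation #1: wrap the single param to fit the generic
        // decide(..., Object[] params, ...) filter signature.
        return getTurboFilterChainDecision(new Object[] { param });
    }

    // ~ Logger.filterAndLog_1()
    void filterAndLog_1(Object param) {
        FilterReply decision = getTurboFilterChainDecision_1(param);
        if (decision == FilterReply.DENY) {
            return;
        }
        // Allocation #2: an identical wrapper is built again for the event,
        // and it is wasted whenever the appender side drops the event anyway.
        buildLoggingEventAndAppend(new Object[] { param });
    }

    FilterReply getTurboFilterChainDecision(Object[] params) {
        return FilterReply.NEUTRAL; // filter chain elided
    }

    void buildLoggingEventAndAppend(Object[] params) {
        // event construction and appending elided
    }
}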