Using Logback with RabbitMQ

Hi all,

Tried looking this up on the net, but there is nothing recent. I would like to use an appender (standard or custom) that could interface with RabbitMQ. I have a need to see logging data in real time, but cannot use the console nor a database. I would assume that a custom appender can be written to communicate with the RabbitMQ exchange, and I would also like to be able to disable the appender when it is not required.

Any ideas on how to begin this?

Larry Smith
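[Editor's note: for anyone wanting to roll their own, a bare-bones custom appender along these lines is one possible starting point. This is only a minimal sketch: the `AmqpAppender` class, the `com.example.logging` package, the `logging` exchange and the connection settings are made-up names; it assumes the standard RabbitMQ Java client and a Logback `Layout` for formatting, and it does no buffering or reconnection.]

```java
// Minimal sketch of a custom Logback appender that publishes formatted
// events to a RabbitMQ exchange. Names and settings here are illustrative
// assumptions, not taken from any existing project.
package com.example.logging;

import java.nio.charset.StandardCharsets;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import ch.qos.logback.core.Layout;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AmqpAppender extends AppenderBase<ILoggingEvent> {

    private String host = "localhost";
    private String exchange = "logging";
    private Layout<ILoggingEvent> layout;

    private Connection connection;
    private Channel channel;

    @Override
    public void start() {
        try {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost(host);
            connection = factory.newConnection();
            channel = connection.createChannel();
            // durable fanout exchange; every bound queue receives a copy of each event
            channel.exchangeDeclare(exchange, "fanout", true);
            super.start();
        } catch (Exception e) {
            addError("could not connect to RabbitMQ at " + host, e);
        }
    }

    @Override
    protected void append(ILoggingEvent event) {
        try {
            // format the event as a plain-text line and publish it
            byte[] payload = layout.doLayout(event).getBytes(StandardCharsets.UTF_8);
            channel.basicPublish(exchange, "", null, payload);
        } catch (Exception e) {
            addError("could not publish logging event", e);
        }
    }

    @Override
    public void stop() {
        try {
            if (connection != null) {
                connection.close();
            }
        } catch (Exception ignored) {
            // best-effort shutdown
        }
        super.stop();
    }

    // setters picked up by Joran when parsing logback.xml
    public void setHost(String host) { this.host = host; }
    public void setExchange(String exchange) { this.exchange = exchange; }
    public void setLayout(Layout<ILoggingEvent> layout) { this.layout = layout; }
}
```

Publishing to a fanout exchange keeps the appender decoupled from whoever consumes the events, and the appender can be switched off simply by not referencing it from any logger in `logback.xml`.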

On Fri, Oct 19, 2012 at 12:32 AM, Smith, Larry (ECS - Enterprise Cloud Service) <larry.smith5@hp.com> wrote:
Hi all,
Tried looking this up on the net, but there is nothing recent. I would like to use an appender (standard or custom) that could interface with RabbitMQ.
I have a need to see logging data in real time, but cannot use the console nor a database.
I would assume that a custom appender can be written to communicate with the RabbitMQ exchange, and I would also like to be able to disable the appender when it is not required.
Any ideas on how to begin this?
Glad to hear someone is interested in such a thing, that is, using RabbitMQ as a Logback transport system. There is such a solution, open-sourced under the Apache 2 license, composed of a Logback appender plus a client that lets you fetch the events. (Disclaimer: I'm the code's author.) :)

https://github.com/arkitech/logging-java (and my own repository: https://github.com/cipriancraciun/logging-java )

The architecture is very simple:

* for the publisher side, you just add the AMQP publisher to your dependencies, plus some custom configuration inside `logback.xml`; (see the link below for a `logback.xml` example;)
  https://github.com/cipriancraciun/logging-java/blob/master/logging-conapp/sr...
* for the client side I've borrowed the idea from the other networked publishers in the logback code: I consume the events from the RabbitMQ server and push them back into the logback system, so on the collector side you can configure their routing as normal towards files, a database, etc.; again, all you need to do is add something to the `logback.xml` file like in the example below:
  https://github.com/cipriancraciun/logging-java/blob/master/logging-conapp/sr...
  To run the consumer, just `mvn package` the `logging-connapp` module and `java -jar logging-connapp.jar`.

As features:

* it buffers events in memory if the connection to the broker is down; (if needed, capacity limits are trivial to add in the code;)
* it reconnects to the broker in case of connectivity failures;
* it publishes the messages on a separate thread from the one where the message was created; (thus the incurred overhead is minimal;)

As potential issues:

* it uses native Java serialization to encode the messages, so you can't just look at the messages with any RabbitMQ consumer; (of course the code is quite modular and you could replace that with JSON;)
* you can't enable and disable the appender, but you can make the logger itself suspend the publishing;

As for its stability, it has been used in production for over a year with only minor incidents (due to serialization exceptions).

Unfortunately the documentation in the repository is almost non-existent, apart from those example files, but if someone is interested in using it I could easily add some. Of course I can also answer questions directly on this mailing list or through a private email.

Ciprian.

P.S.: Inside the same repository there is "experimental" BerkeleyDB + Lucene support to store and query the logs, but these are experimental only...

P.P.S.: If you are **really** interested in using the appender or the consumer, but you require other features and they are easy to implement, I could try to implement them.
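[Editor's note: for completeness, here is a hedged sketch of what wiring such an appender into `logback.xml` might look like. It does not use the actual class names or options from Ciprian's project (those are in the example files linked above); it refers to the hypothetical `com.example.logging.AmqpAppender` sketched earlier in the thread.]

```xml
<configuration>
  <!-- hypothetical appender from the sketch earlier in this thread;
       the real class name and options are in the linked example files -->
  <appender name="AMQP" class="com.example.logging.AmqpAppender">
    <host>rabbitmq.example.com</host>
    <exchange>logging</exchange>
    <layout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%d{ISO8601} %-5level [%thread] %logger{36} - %msg%n</pattern>
    </layout>
  </appender>

  <root level="INFO">
    <!-- remove or comment out this reference to disable AMQP publishing -->
    <appender-ref ref="AMQP" />
  </root>
</configuration>
```

Dropping the `<appender-ref>` (or guarding it with a filter) is the simplest way to switch the publishing off when it is not needed, which addresses the "disable the appender" part of the original question.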