Konrad Scherer

Wind River ActiveMQ Infrastructure

Here are some details of the Wind River ActiveMQ infrastructure that I have installed to support MCollective. I have a non-redundant distributed setup: one ActiveMQ broker in each of three geographic locations (master/slave is on the TODO list).

Each broker is connected to every other with a TTL of 1. So far it seems to be working. I recently upgraded to ActiveMQ 5.7.0 on CentOS 6.3 with OpenJDK 1.7. MCollective 2.2.x is set up with the activemq connector and, thanks to recent MCollective mailing list info, now has direct_addressing = 1 in the configuration. Note that if direct_addressing is enabled on the client but not on the server, connections will fail.
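For context, each broker-to-broker link is a static network connector with networkTTL set to 1, and the MCollective side just needs the activemq connector plus direct addressing on both server and client. The snippets below are minimal sketches; the hostnames, credentials, and connector name are placeholders rather than my actual values.

From activemq.xml, one connector per remote site:

<networkConnectors>
  <!-- networkTTL="1" keeps messages from being forwarded more than one broker hop -->
  <networkConnector name="site-a-to-site-b"
    uri="static:(tcp://broker-b.example.com:61616)"
    networkTTL="1" duplex="true"/>
</networkConnectors>

From the MCollective server.cfg and client.cfg:

connector = activemq
direct_addressing = 1
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = broker-a.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = secret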

I have put the spec file I used to build the 5.6 and 5.7 ActiveMQ RPMs on GitHub. I have also submitted it to PuppetLabs, so hopefully it will make it into the PuppetLabs repo.

Here are a few things I have learned.

The ActiveMQ 5.6+ tarball contains configurations which use a new ${activemq.data} variable. This variable needs to be set up in activemq-wrapper.conf, and it is missing from the ActiveMQ 5.6 RPM in the PE repo.
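For anyone patching that RPM by hand, the fix is a one-line addition to activemq-wrapper.conf. This is only a sketch: it assumes the standard ACTIVEMQ_BASE variable from the wrapper config, and the index number just needs to be one not already used by another wrapper.java.additional entry in that file.

# define activemq.data so the 5.6+ XML configs can resolve ${activemq.data}
wrapper.java.additional.12=-Dactivemq.data=%ACTIVEMQ_BASE%/data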

Here is the broker definition from the default activemq.xml shipped by PuppetLabs:

<broker xmlns="http://activemq.apache.org/schema/core"
  brokerName="localhost" dataDirectory="${activemq.base}/data"
  destroyApplicationContextOnStop="true">

The destroyApplicationContextOnStop attribute is no longer supported in 5.6+.
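A minimal corrected broker element, assuming ${activemq.data} is defined in the wrapper config as described above, simply drops that attribute:

<broker xmlns="http://activemq.apache.org/schema/core"
  brokerName="localhost" dataDirectory="${activemq.data}">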

Here is an excerpt from the transport section.

<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613"/>
</transportConnectors>

I compared this to the ActiveMQ STOMP example, which has the following configuration:

<transportConnector name="stomp+nio" uri="stomp://0.0.0.0:6163?transport.closeAsync=false"/>

Based on this recommendation, I have added transport.closeAsync=false to my configs, but does anyone know why this is a good idea for STOMP? The ActiveMQ docs only state that connections are closed synchronously.
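Assuming the option is simply appended to the existing stomp+nio URI, the resulting connector looks roughly like this:

<transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613?transport.closeAsync=false"/>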

I was also getting frequent out of memory errors. It turns out there is an ActiveMQ FAQ entry for this. I modified activemq-wrapper.conf to include the following JVM flag:

-Dorg.apache.activemq.UseDedicatedTaskRunner=false 
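In activemq-wrapper.conf that means another numbered wrapper.java.additional entry; again, the index shown here is just whichever one is free in your file:

# use a pooled task runner instead of a dedicated thread per connection
wrapper.java.additional.13=-Dorg.apache.activemq.UseDedicatedTaskRunner=false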

With this change, ActiveMQ uses a lot less memory, which is probably why it seems much more stable now.

Turning off the dedicated task runner switches ActiveMQ to a thread pool instead of a thread per connection. Some people suggest that the thread pool should be the default. It helped me, so I am passing this info along. I am curious whether anyone else has experience with this setting.

For the curious, the mess that is my Puppet repo is also available on GitHub. The templates for my activemq.xml and activemq-wrapper.conf are located in modules/wr/templates.
