@yegeniy
Created November 22, 2012 01:40
Message Loss After Restart

A durable queue "jms.queue.foo" with zero consumers sits on node1. Normally node1 is clustered, via an implicit core bridge, to another node, node2, which has a durable queue of the same name ("jms.queue.foo") with at least one consumer.
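
(The gist only includes hornetq-configuration.xml. The queue itself is presumably defined in the matching hornetq-jms.xml, or created dynamically; a minimal definition would look roughly like the sketch below, where the /queue/foo entry name is just a placeholder.)

<configuration xmlns="urn:hornetq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
   <!-- JMS queue "foo" maps to the core queue "jms.queue.foo"; durable defaults to true -->
   <queue name="foo">
      <entry name="/queue/foo"/>
      <durable>true</durable>
   </queue>
</configuration>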

Node1 was restarted, and for some reason the cluster connection to node2 was not reestablished when node1 came back up. Messages began to pile up in "jms.queue.foo" on node1. These messages originally arrive in the durable queue "jms.queue.foo" on node1 over STOMP.
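
For reference, the producer's SEND frames look roughly like the sketch below (the body is a placeholder, and ^@ stands for the NUL byte that terminates a STOMP frame). The persistent:true header is my understanding of how HornetQ's STOMP layer marks a message durable, so treat the exact headers as an assumption rather than a transcript of the real traffic.

SEND
destination:jms.queue.foo
persistent:true

(message body)
^@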

Eventually we restarted node1 so that it would reestablish the cluster connection to node2. The connection came back up, and new messages began to move from node1 to the "jms.queue.foo" queue on node2.

However, the messages that had piled up in node1's "jms.queue.foo" were gone. Does this make sense?

Here is the configuration (node1's hornetq-configuration.xml, reproduced below): https://gist.github.com/a2a78465a3fc818ed61b#file_2%20node1%20hornetq_configuration.xml

So, how come the messages that were on node1's "jms.queue.foo" seem to be gone? How can I set up my cluster so that node1's "jms.queue.foo" messages aren't lost after a restart?

<configuration xmlns="urn:hornetq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">

   <clustered>true</clustered>

   <paging-directory>/var/lib/data/hornetq/paging</paging-directory>
   <bindings-directory>/var/lib/data/hornetq/bindings</bindings-directory>
   <journal-directory>/var/lib/data/hornetq/journal</journal-directory>
   <journal-min-files>10</journal-min-files>
   <large-messages-directory>/var/lib/data/hornetq/large-messages</large-messages-directory>

   <message-counter-enabled>true</message-counter-enabled>

   <connectors>
      <connector name="netty">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
         <param key="host" value="node1host"/>
         <param key="port" value="5445"/>
      </connector>
      <connector name="node2">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
         <param key="host" value="node2host"/>
         <param key="port" value="5445"/>
      </connector>
   </connectors>

   <acceptors>
      <acceptor name="netty">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
         <param key="host" value="0.0.0.0"/>
         <param key="port" value="5445"/>
      </acceptor>
      <acceptor name="stomp-acceptor">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
         <param key="protocol" value="stomp"/>
         <param key="host" value="0.0.0.0"/>
         <param key="port" value="61613"/>
      </acceptor>
   </acceptors>

   <cluster-connections>
      <cluster-connection name="production">
         <address>jms</address>
         <connector-ref>netty</connector-ref>
         <static-connectors allow-direct-connections-only="true">
            <connector-ref>node2</connector-ref>
         </static-connectors>
      </cluster-connection>
   </cluster-connections>

   <security-settings>
      <security-setting match="#">
         <permission type="createNonDurableQueue" roles="guest"/>
         <permission type="deleteNonDurableQueue" roles="guest"/>
         <permission type="consume" roles="guest"/>
         <permission type="send" roles="guest"/>
      </security-setting>
   </security-settings>

   <address-settings>
      <!-- default catch-all -->
      <address-setting match="jms.#">
         <dead-letter-address>jms.queue.DLQ</dead-letter-address>
         <expiry-address>jms.queue.ExpiryQueue</expiry-address>
         <max-delivery-attempts>3</max-delivery-attempts>
         <redelivery-delay>10000</redelivery-delay>
         <redistribution-delay>5000</redistribution-delay>
         <max-size-bytes>104857600</max-size-bytes>
         <message-counter-history-day-limit>10</message-counter-history-day-limit>
         <address-full-policy>PAGE</address-full-policy>
         <send-to-dla-on-no-route>true</send-to-dla-on-no-route>
      </address-setting>
   </address-settings>

</configuration>
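
Node2's configuration is not in this gist; assuming the setup is symmetric, its connectors and cluster connection are presumably the mirror image of the above, roughly like this sketch (hostnames are placeholders taken from the names used above):

   <connectors>
      <connector name="netty">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
         <param key="host" value="node2host"/>
         <param key="port" value="5445"/>
      </connector>
      <connector name="node1">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
         <param key="host" value="node1host"/>
         <param key="port" value="5445"/>
      </connector>
   </connectors>

   <cluster-connections>
      <cluster-connection name="production">
         <address>jms</address>
         <connector-ref>netty</connector-ref>
         <static-connectors allow-direct-connections-only="true">
            <connector-ref>node1</connector-ref>
         </static-connectors>
      </cluster-connection>
   </cluster-connections>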

yegeniy commented Nov 22, 2012

How come the messages that were on node1's "jms.queue.foo" seem to be gone? How can I set up my cluster so that node1's "jms.queue.foo" messages aren't lost after a restart? Is this a better question for the user forums instead?
