## Networks of Brokers in AMQ 7 Broker (Clustering)

This worksheet covers clustering AMQ 7 brokers. By the end of this you should know:

1. Clustering concepts of AMQ 7
   * Discovery
   * Cluster bridges
   * Routing of messages
2. How to configure Clustering
   * Configuring Discovery
   * Configuring a cluster
   * Configuring Load Balancing

### AMQ 7 Clustering Concepts

Multiple instances of AMQ 7 brokers can be grouped together to share message processing load.
Each broker manages its own messages and connections and is connected to other brokers with
"cluster bridges" that are used to send topology information, such as queues and consumers,
as well as load balancing messages.

### Simple 2-node cluster

Let's create 2 clustered brokers using the CLI. First, since AMQ 7 uses UDP for discovery,
you will need to ensure that a loopback address is created; this will allow UDP to work on the same machine.

Now let's create a cluster of 2 brokers by running the CLI commands:

```
$ <AMQ_HOME>/bin/artemis create --user admin --password password --role admin --allow-anonymous y --clustered --host 127.0.0.1 --cluster-user clusterUser --cluster-password clusterPassword --max-hops 1 ../instances/clusteredbroker1
```

and

```
$ <AMQ_HOME>/bin/artemis create --user admin --password password --role admin --allow-anonymous y --clustered --host 127.0.0.1 --cluster-user clusterUser --cluster-password clusterPassword --max-hops 1 --port-offset 100 ../instances/clusteredbroker2
```

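A note on `--port-offset`: it shifts every default port of the second instance by the given amount so both brokers can run on one machine. A minimal sketch of the arithmetic (the port map here is an illustrative subset, not the full Artemis acceptor list):

```python
# --port-offset shifts each default port of an instance by the given amount,
# letting several broker instances share one host.
DEFAULT_PORTS = {"core": 61616}  # the default core acceptor port


def apply_offset(ports, offset):
    """Return the port map shifted by offset, as 'artemis create' does."""
    return {name: port + offset for name, port in ports.items()}


broker2_ports = apply_offset(DEFAULT_PORTS, 100)
# broker 2 therefore listens on 61716, which is why the examples below
# connect to tcp://localhost:61716 to reach it.
```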
Now start *both* brokers using the `run` command for each one, for instance:

```
$ <AMQ_INSTANCE>/bin/artemis run
```

What you should see is each broker discovering the other and creating a cluster bridge. You should see a log message on each broker showing this, something like:

```bash
INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@1d90c678 [name=$.artemis.internal.sf.my-cluster.8d25b0ff-55ad-11e7-bfb2-e8b1fc559583, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.8d25b0ff-55ad-11e7-bfb2-e8b1fc559583, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=86fab59a-55ad-11e7-ae52-e8b1fc559583], temp=false]@60ff2ab7 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@1d90c678 [name=$.artemis.internal.sf.my-cluster.8d25b0ff-55ad-11e7-bfb2-e8b1fc559583, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.8d25b0ff-55ad-11e7-bfb2-e8b1fc559583, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=86fab59a-55ad-11e7-ae52-e8b1fc559583], temp=false]@60ff2ab7 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61716&host=127-0-0-1], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@943454742[nodeUUID=86fab59a-55ad-11e7-ae52-e8b1fc559583, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=127-0-0-1, address=, server=ActiveMQServerImpl::serverUUID=86fab59a-55ad-11e7-ae52-e8b1fc559583])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61716&host=127-0-0-1], discoveryGroupConfiguration=null]] is connected
```

You can also log into the HawtIO Console by going to 'http://localhost:8161/hawtio' and click on the

We now have a cluster of 2 brokers, let's look in more detail at the configuration.

#### Discovery

When a clustered broker is started, the first thing it does is try to discover another broker in the cluster.
It will keep doing this until it finds a broker, at which time it will try to create a cluster bridge to it.
By default the broker will use UDP multicast to broadcast its location and to discover other brokers.

The first thing to notice in the configuration file is the connector config that will be broadcast to other brokers. This
looks like:

```xml
<connector name="artemis">tcp://127.0.0.1:61616</connector>
```

A `broadcast-group` then defines how a broker broadcasts the connector info; this looks something like:

```xml
<broadcast-group name="bg-group1">
   <group-address>231.7.7.7</group-address>
   <group-port>9876</group-port>
   <broadcast-period>5000</broadcast-period>
   <connector-ref>artemis</connector-ref>
</broadcast-group>
```

This configuration will broadcast the 'artemis' connector info over the multicast address 231.7.7.7:9876 every 5 seconds.

Now a broker needs to be able to discover the above broadcast; this is done via a `discovery-group`. This looks like:

```xml
<discovery-group name="dg-group1">
   <group-address>231.7.7.7</group-address>
   <group-port>9876</group-port>
   <refresh-timeout>10000</refresh-timeout>
</discovery-group>
```

This configuration doesn't do anything by itself but is referenced by a `cluster-connection`, which looks like:

```xml
<cluster-connection name="my-cluster">
   <connector-ref>artemis</connector-ref>
   <message-load-balancing>ON_DEMAND</message-load-balancing>
   <max-hops>1</max-hops>
   <discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
```

You can see that the `discovery-group-ref` references a discovery group. Once started, the broker will listen on
the multicast address 231.7.7.7:9876 for other brokers broadcasting.

Once the broker has discovered a target broker it will try to create a cluster bridge to that broker. We refer to this as *initial discovery*.
Once initial discovery is complete, all other discovery is done over the cluster bridge itself. In a 2-node cluster it would happen
like so:

1. The source broker sends its full topology over the cluster bridge to its target broker. This includes a list of brokers it
is aware of (including itself, configured by the `connector-ref` in the `cluster-connection`) and a list of queues and consumers.
2. The target broker then uses the list of brokers to create its own cluster bridges (in this case back to the source broker).
3. The target broker then sends its own topology over its cluster bridges.
4. Both brokers create any queues based on the topology received.
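
The exchange above can be sketched as a toy simulation (a hypothetical model for illustration only; not the Artemis API or wire protocol):

```python
# Toy simulation of initial discovery plus topology exchange between two
# brokers. Illustration only -- not Artemis code.

class Broker:
    def __init__(self, connector):
        self.connector = connector    # what this broker broadcasts
        self.bridges = {}             # target connector -> Broker
        self.topology = {connector}   # brokers this node knows about

    def bridge_to(self, target):
        """Step 1: create a cluster bridge and send our full topology."""
        if target.connector not in self.bridges:
            self.bridges[target.connector] = target
            target.receive_topology(self, self.topology.copy())

    def receive_topology(self, source, topology):
        # Step 4 (simplified): merge what we learned; queues would be
        # created here based on the received topology.
        self.topology |= topology
        # Steps 2-3: bridge back to the source, sending our own topology.
        self.bridge_to(source)


broker1 = Broker("tcp://127.0.0.1:61616")
broker2 = Broker("tcp://127.0.0.1:61716")
broker1.bridge_to(broker2)  # initial discovery found broker2

# After the exchange each broker has a bridge to the other and both
# share the same view of the cluster.
```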

##### Discovery without UDP multicast (Static Connectors), Optional

If UDP multicast is not available then brokers can be *statically* configured. This is done purely through connectors.

Let's update brokers 1 and 2 to use static connectors.

First, remove both the broadcast and discovery group configurations completely.

```xml
<!-- remove the following lines -->
<broadcast-groups>
   <broadcast-group name="bg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <broadcast-period>5000</broadcast-period>
      <connector-ref>artemis</connector-ref>
   </broadcast-group>
</broadcast-groups>

<discovery-groups>
   <discovery-group name="dg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>
```

Then on each broker add a connector that points to the *other* broker. So on broker 1 it would look like:

```xml
<connector name="discovery-connector">tcp://127.0.0.1:61716</connector>
```

And on broker 2 it would look like:

```xml
<connector name="discovery-connector">tcp://127.0.0.1:61616</connector>
```

Lastly, remove the `discovery-group-ref` from the `cluster-connection` on both brokers and replace it with the following:

```xml
<static-connectors>
   <connector-ref>discovery-connector</connector-ref>
</static-connectors>
```

Now if you restart the brokers you will again see them form a cluster.

> ##### Note
> The static connectors list can contain all the possible brokers in the cluster,
> however only 1 of them needs to be available to connect to.

#### Client-side Connection Load Balancing

Client-side connection load balancing is the ability of a client to spread connections across multiple brokers;
currently only the core JMS client supports this. This is done via load balancing policies configured on the
connection factory via the `loadBalancingPolicyClassName` URL property. If using JNDI the `jndi.properties` would look like:

```properties
java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
connection.myConnectionFactory=tcp://localhost:61616?loadBalancingPolicyClassName=org.apache.activemq.artemis.api.core.client.loadbalance.RandomConnectionLoadBalancingPolicy
```

The available policies are:

* Round Robin (`org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy`).
* Random (`org.apache.activemq.artemis.api.core.client.loadbalance.RandomConnectionLoadBalancingPolicy`).
* Random Sticky (`org.apache.activemq.artemis.api.core.client.loadbalance.RandomStickyConnectionLoadBalancingPolicy`).
* First Element (`org.apache.activemq.artemis.api.core.client.loadbalance.FirstElementConnectionLoadBalancingPolicy`).

All of these classes ship in the `artemis-core-client` JAR. You can also implement your own policy by implementing the `org.apache.activemq.artemis.api.core.client.loadbalance.ConnectionLoadBalancingPolicy` interface.

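The selection behaviour of these policies can be sketched conceptually as follows. This is a Python analogy, not the Java `ConnectionLoadBalancingPolicy` interface, and the round-robin starting position is simplified (the real policy starts at a random index):

```python
import random

# Conceptual sketch of how each policy picks a connector from the list
# the client knows about. Illustration only.

class RoundRobin:
    def __init__(self):
        self.pos = -1

    def select(self, connectors):
        self.pos = (self.pos + 1) % len(connectors)
        return connectors[self.pos]

class Random:
    def select(self, connectors):
        return random.choice(connectors)

class RandomSticky:
    """Picks a connector at random once, then keeps returning it."""
    def __init__(self):
        self.pick = None

    def select(self, connectors):
        if self.pick is None:
            self.pick = random.choice(connectors)
        return self.pick

class FirstElement:
    def select(self, connectors):
        return connectors[0]


brokers = ["tcp://localhost:61616", "tcp://localhost:61716"]
rr = RoundRobin()
spread = [rr.select(brokers) for _ in range(4)]
# Round robin alternates: broker 1, broker 2, broker 1, broker 2.
```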
#### Message Load Balancing

The message load balancing policy configures how messages are load balanced around the cluster *by the server*.
This is configured in the `cluster-connection`, like so:

```xml
<message-load-balancing>ON_DEMAND</message-load-balancing>
```

By default it is ON_DEMAND, which means that messages will be round robined around brokers that have available consumers.

> ##### Note
> Messages are routed at the point they arrive at the broker and before they arrive on a queue.
> They will either route to a local queue or to a queue on another broker.

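The routing decision in the note above can be sketched as a simplified model (an illustration of the ON_DEMAND/STRICT/OFF semantics, not Artemis internals):

```python
# Simplified model of server-side message load balancing. A real broker
# round robins over the candidates; here we only compute the candidate set.

def routing_candidates(local_has_consumer, remotes_with_consumers,
                       all_remotes_with_queue, policy):
    """Where may a message arriving at this broker be routed?"""
    if policy == "OFF":
        return ["local"]
    if policy == "STRICT":
        # Balance over every broker that has the queue defined,
        # whether or not any consumers exist.
        return ["local"] + all_remotes_with_queue
    # ON_DEMAND: only brokers that currently have consumers...
    candidates = (["local"] if local_has_consumer else []) + remotes_with_consumers
    # ...and if nobody has consumers, keep the message on the local queue.
    return candidates or ["local"]


# No consumers anywhere: ON_DEMAND keeps messages local, STRICT spreads them.
print(routing_candidates(False, [], ["broker2"], "ON_DEMAND"))  # ['local']
print(routing_candidates(False, [], ["broker2"], "STRICT"))     # ['local', 'broker2']
```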
We can test this using the CLI, but first each broker needs to be stopped and the clustered queues configured.
Add the following queue to each broker:

```xml
<jms xmlns="urn:activemq:jms">
   <queue name="myQueue"/>
</jms>
```

And then restart the brokers.

First, let's send 20 messages to broker 1:

```bash
$ <AMQ_HOME>/bin/artemis producer --url tcp://localhost:61616 --message-count 20 --destination queue://myQueue
```

Since there are no consumers, these will simply be delivered to the local queue on broker 1. We can test this by trying
to consume from broker 2:

```bash
$ <AMQ_HOME>/bin/artemis consumer --url tcp://localhost:61716 --destination queue://myQueue --message-count 10
```

This should just hang without receiving any messages. Now try broker 1:

```bash
$ <AMQ_HOME>/bin/artemis consumer --url tcp://localhost:61616 --destination queue://myQueue --message-count 20
```

The client will receive all 20 messages.

Now try the first part of this again, but this time setting the load balancing to STRICT:

```xml
<message-load-balancing>STRICT</message-load-balancing>
```

This time the messages are load balanced even though no consumers exist.