After we upgraded from WildFly 8.2.1.Final to WildFly 9.0.1.Final, we started getting a lot of warnings like the following:
WARNING [org.jgroups.protocols.TCP] (INT-1,ee,dev6.example.com:server1) JGRP000012: discarded message from different cluster hq-cluster (our cluster is ee). Sender was ad3f8046-3c95-f6d4-da13-3019d931f9e4 (received 4 identical messages from ad3f8046-3c95-f6d4-da13-3019d931f9e4 in the last 64159 ms)
The warnings appear for various hosts and for servers on those hosts. The same behavior was present in the beta and CR versions of WildFly 9, but not in version 8. We are using TCP as the transport, but according to other reports the same happens with UDP.
Does someone have a solution (other than raising the severity level of the logs, of course)? Thanks.
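For reference, the log-level workaround we want to avoid would look something like this in the logging subsystem (a minimal sketch using the logger category from the message above; it only hides the symptom):

<!-- workaround we'd rather not use: suppress WARN messages such as JGRP000012 -->
<logger category="org.jgroups.protocols.TCP">
    <!-- ERROR and above are still logged -->
    <level name="ERROR"/>
</logger>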
1 Solution
#1
We finally found the problem and a solution. WildFly 9 sends the messages for the cluster nodes and for HornetQ over the same communication channel, which apparently causes collisions. We solved the problem by creating a second JGroups stack and splitting the traffic between the two.
For TCP, the working configuration is as follows:
<stacks default="tcp">
    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <protocol type="TCPPING">
            <property name="initial_hosts">
                node1[7600],node1[7750],node2[7600],node2[7750]
            </property>
            <property name="port_range">
                0
            </property>
        </protocol>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST3"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
        <protocol type="RSVP"/>
    </stack>
    <stack name="tcphq">
        <transport type="TCP" socket-binding="jgroups-tcp-hq"/>
        <protocol type="TCPPING">
            <property name="initial_hosts">
                node1[7660],node1[7810],node2[7660],node2[7810]
            </property>
            <property name="port_range">
                0
            </property>
        </protocol>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-hq-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST3"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
        <protocol type="RSVP"/>
    </stack>
</stacks>
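The cluster channel (ee in the warning above) keeps using the original stack via default="tcp", so only HornetQ has to be pointed at tcphq. If you want to pin an Infinispan cache container to a stack explicitly, the transport element takes a stack attribute; a hedged sketch (the container name is just an example, and attribute availability depends on your subsystem schema version):

<cache-container name="web" default-cache="dist">
    <!-- pin this container's channel to the "tcp" stack instead of relying on the default -->
    <transport stack="tcp" lock-timeout="60000"/>
</cache-container>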
You also need to configure HornetQ to use the proper jgroups-stack (tcphq in this case):
<broadcast-groups>
    <broadcast-group name="bg-group1">
        <jgroups-stack>tcphq</jgroups-stack>
        <jgroups-channel>hq-cluster</jgroups-channel>
        <broadcast-period>5000</broadcast-period>
        <connector-ref>http-connector</connector-ref>
    </broadcast-group>
</broadcast-groups>
<discovery-groups>
    <discovery-group name="dg-group1">
        <jgroups-stack>tcphq</jgroups-stack>
        <jgroups-channel>hq-cluster</jgroups-channel>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>
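For completeness: the discovery group is referenced from the messaging subsystem's cluster connection. In a stock standalone-full-ha.xml that part looks roughly as follows and can stay as it is, since only the broadcast/discovery groups changed:

<cluster-connections>
    <cluster-connection name="my-cluster">
        <address>jms</address>
        <connector-ref>http-connector</connector-ref>
        <!-- points at the discovery group defined above -->
        <discovery-group-ref discovery-group-name="dg-group1"/>
    </cluster-connection>
</cluster-connections>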
...and of course you need to add the relevant socket-bindings into the socket-binding-group:
<socket-binding name="jgroups-tcp-hq" port="7660"/>
<socket-binding name="jgroups-tcp-hq-fd" port="7670"/>
Unfortunately, I have no experience with UDP, but I think the principle will be the same.
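If someone wants to try the same split with UDP, a rough, untested sketch might look like this, starting from the stock udp stack (the udphq name, socket-binding names, and ports here are made up; the important part is that the second stack gets its own multicast address/port so the two channels do not mix):

<stack name="udphq">
    <!-- separate socket-binding keeps HornetQ traffic off the main multicast group -->
    <transport type="UDP" socket-binding="jgroups-udp-hq"/>
    <protocol type="PING"/>
    <protocol type="MERGE2"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-udp-hq-fd"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2"/>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
    <protocol type="RSVP"/>
</stack>

...with matching socket-bindings, for example:

<!-- hypothetical bindings; multicast-port must differ from the one jgroups-udp already uses -->
<socket-binding name="jgroups-udp-hq" port="55400" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45699"/>
<socket-binding name="jgroups-udp-hq-fd" port="54300"/>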