I'm using Docker to link a JMS server container to another JMS client container. But when I run the server in the Docker container, the client cannot connect to it correctly. I exposed port 443 on Docker (is there any other port that JMS uses?)
I can successfully create the destination, but not the JMS context:
String PROVIDER_URL = "https-remoting://MYDOMAIN:443";
...
/** PASSED **/
Destination destination = (Destination) namingContext.lookup(destinationString);
/** HAS ERROR **/
JMSContext context = connectionFactory.createContext(username, password);
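For context, here is a minimal sketch of what the full client might look like. This is an assumption-laden illustration, not the asker's actual code: the JNDI names (`jms/RemoteConnectionFactory`, `jms/queue/test`), the credentials, and the `MYDOMAIN` host are placeholders, and the initial-context factory class is the standard one for WildFly/EAP remote naming.

```java
import java.util.Properties;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSContext;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JmsClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical values -- substitute your own host, credentials, and JNDI names.
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "https-remoting://MYDOMAIN:443");
        env.put(Context.SECURITY_PRINCIPAL, "username");
        env.put(Context.SECURITY_CREDENTIALS, "password");

        Context namingContext = new InitialContext(env);
        // The JNDI lookup only resolves names -- it does not open a socket to the broker,
        // which is why the Destination lookup can succeed while createContext() fails.
        ConnectionFactory connectionFactory =
                (ConnectionFactory) namingContext.lookup("jms/RemoteConnectionFactory");
        Destination destination =
                (Destination) namingContext.lookup("jms/queue/test");

        // createContext() is what actually connects to the broker; this is where
        // the UnresolvedAddressException below surfaces.
        try (JMSContext context = connectionFactory.createContext("username", "password")) {
            context.createProducer().send(destination, "hello");
        }
    }
}
```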
Here is the error:
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:123)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:176)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:169)
at io.netty.channel.DefaultChannelPipeline$HeadHandler.connect(DefaultChannelPipeline.java:1008)
at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
at io.netty.channel.CombinedChannelDuplexHandler.connect(CombinedChannelDuplexHandler.java:168)
at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:465)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:847)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:199)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:165)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:354)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "main" javax.jms.JMSRuntimeException: Failed to create session factory
at org.hornetq.jms.client.JmsExceptionUtils.convertToRuntimeException(JmsExceptionUtils.java:98)
at org.hornetq.jms.client.HornetQConnectionFactory.createContext(HornetQConnectionFactory.java:149)
at org.hornetq.jms.client.HornetQConnectionFactory.createContext(HornetQConnectionFactory.java:130)
at com.wpic.uptime.Client.main(Client.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: javax.jms.JMSException: Failed to create session factory
at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:673)
at org.hornetq.jms.client.HornetQConnectionFactory.createContext(HornetQConnectionFactory.java:140)
... 7 more
Caused by: HornetQNotConnectedException[errorType=NOT_CONNECTED message=HQ119007: Cannot connect to server(s). Tried with all available servers.]
at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:905)
at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:669)
... 8 more
1 Answer
#1
6
I just found the solution to this problem; I was running into it as well.
In your case the problem is in the JBoss configuration. In my case the problem was in Wildfly 8.2.
You are probably using the following parameter in your JBoss: jboss.bind.address = 0.0.0.0
I use this parameter in my Wildfly so that it accepts external connections from any IP, because my Wildfly is exposed on the Internet.
The problem is that if you do not tell JBoss/Wildfly which IP HornetQ should report to JMS clients doing a remote lookup, HornetQ assumes the IP is whatever is set in jboss.bind.address. In this case it decides that 0.0.0.0 is not a valid IP. You probably see the following message in your JBoss log:
INFO [org.hornetq.jms.server] (ServerService Thread Pool -- 53) HQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector. Switching to "hostname.your.server". If this new address is incorrect please manually configure the connector to use the proper one.
In this case HornetQ will use the machine's host name. On Linux, for example, it will use what is defined in /etc/hostname.
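The fallback name can be observed from plain Java: it is roughly what the JVM reports as the local host name. This is a sketch of the behavior being described, not HornetQ's actual code.

```java
import java.net.InetAddress;

public class LocalHostName {
    public static void main(String[] args) throws Exception {
        // Approximately the name HornetQ falls back to when the bind address
        // is 0.0.0.0; on Linux this usually comes from /etc/hostname.
        String name = InetAddress.getLocalHost().getHostName();
        System.out.println(name);
    }
}
```

If that printed name is not resolvable by your remote clients' DNS, they will fail exactly as described below.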
There is another problem: usually that host name is not a valid Internet host that can be resolved to an IP via a DNS service.
Now notice what is probably happening to you: your JBoss server is set to bind to 0.0.0.0; HornetQ (embedded in JBoss) tries to use that IP, and since it is not a valid one it falls back to your server's host name. When your remote JMS client (outside of your local network) performs a lookup against your JBoss, HornetQ tells the client to look for the HornetQ resources on the host YOUR_HOSTNAME_LOCAL_SERVER, but the client cannot resolve that name through DNS, so the following failure occurs:
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:123)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:176)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:169)
at io.netty.channel.DefaultChannelPipeline$HeadHandler.connect(DefaultChannelPipeline.java:1008)
at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
at io.netty.channel.CombinedChannelDuplexHandler.connect(CombinedChannelDuplexHandler.java:168)
at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:465)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:847)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:199)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:165)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:354)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
at java.lang.Thread.run(Thread.java:745)
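The exception itself can be reproduced with a few lines of plain NIO: connecting a SocketChannel to an address that has never been resolved through DNS throws exactly this UnresolvedAddressException. The host name below is a placeholder; `createUnresolved` skips DNS entirely, so no network is needed.

```java
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

public class UnresolvedDemo {
    public static void main(String[] args) throws Exception {
        // createUnresolved() never performs a DNS lookup, mimicking a host
        // name the client's DNS cannot resolve.
        InetSocketAddress addr =
                InetSocketAddress.createUnresolved("YOUR_HOSTNAME_LOCAL_SERVER", 8080);
        try (SocketChannel channel = SocketChannel.open()) {
            channel.connect(addr); // throws java.nio.channels.UnresolvedAddressException
        } catch (UnresolvedAddressException e) {
            System.out.println("UnresolvedAddressException, as in the trace above");
        }
    }
}
```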
The solution to the problem is to configure JBoss with the host it should report to clients doing remote lookup.
In my case, for Wildfly, the setting is as follows. The standalone.xml file must be changed:
<subsystem xmlns="urn:jboss:domain:messaging:2.0">
    <hornetq-server>
        <security-enabled>true</security-enabled>
        <journal-file-size>102400</journal-file-size>
        <connectors>
            <http-connector name="http-connector" socket-binding="http-remote-jms">
                <param key="http-upgrade-endpoint" value="http-acceptor"/>
            </http-connector>
        </connectors>
        ...
    </hornetq-server>
</subsystem>
AND
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    ...
    <outbound-socket-binding name="http-remote-jms">
        <remote-destination host="YOUR_REAL_HOSTNAME" port="${jboss.http.port:8080}"/>
    </outbound-socket-binding>
</socket-binding-group>
Note that I'm not using HTTPS because I could not get Wildfly to work with HTTPS for JMS.