I am struggling to set up dispy to work inside a Docker container.
In my configuration, both the scheduler and the client (SharedJobCluster) are hosted on the same server, in separate containers that share a common network. The node, meanwhile, runs on a separate machine. Unfortunately, this configuration doesn't work.
My issue is with the ip_addr option: it allows specifying only one host address for SharedJobCluster. If I pass the container's hostname, I can successfully connect to the scheduler and start the job, but then the node cannot connect back to the client to return the resulting files, because that address is local to the container and unreachable from outside.
If I specify the external IP through ip_addr instead, the scheduler refuses to start the job, because the client no longer listens on the container's interface (only on the external address) and so cannot accept the scheduler's connection attempt.
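For reference, the client is created roughly like this (the compute function, container hostnames and the external IP 203.0.113.10 are placeholders for my real setup):

```python
import dispy

def compute(n):  # placeholder job function
    return n * n

# Attempt 1: bind the client to the container's hostname. The scheduler (on the
# same Docker network) accepts the job, but the node on the other machine
# cannot reach this address to return results.
cluster = dispy.SharedJobCluster(compute, scheduler_node='scheduler-container',
                                 ip_addr='client-container')

# Attempt 2: bind the client to the server's external IP. Inside the container
# that address is not a local interface, so the client no longer listens where
# the scheduler expects it and the job is refused.
# cluster = dispy.SharedJobCluster(compute, scheduler_node='scheduler-container',
#                                  ip_addr='203.0.113.10')
```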
I tried to understand the documentation for ip_addr, but the mention of a list just confuses me.
ip_addr is address to use for (client) communication. If it is not set, all configured network interfaces are used. If it is a string, it must be either a host name or IP address (in either IPv4 or IPv6 format). If it is a list, each must be a string of host name or IP address, in which case interface addresses for each of those is used.
So, my question is: is it possible to set up more than one ip_addr for SharedJobCluster, similar to what we can achieve with the scheduler?
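Concretely, what I would like is something along these lines, if the list form mentioned in the documentation applies to SharedJobCluster at all; I have not been able to confirm whether this is supported:

```python
# Hypothetical: give both the container-internal name and the external IP, so
# the scheduler connects over the Docker network and the node over the
# external address. Unclear to me whether SharedJobCluster accepts a list here.
cluster = dispy.SharedJobCluster(compute, scheduler_node='scheduler-container',
                                 ip_addr=['client-container', '203.0.113.10'])
```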
I did try to use ext_ip_addr, but then the scheduler starts reporting the client's IP as 127.0.0.1 and again fails to connect to that host.
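That attempt looked roughly like this (again, the addresses are placeholders):

```python
# Attempt 3: advertise the external IP via ext_ip_addr. The scheduler then
# sees the client as 127.0.0.1 and the connection back to the client fails.
cluster = dispy.SharedJobCluster(compute, scheduler_node='scheduler-container',
                                 ext_ip_addr='203.0.113.10')
```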