
SharedJobCluster, connecting to multiple interfaces #215

Open
dismine opened this issue Sep 4, 2020 · 0 comments
dismine commented Sep 4, 2020

Hello,

I am struggling to set up dispy to work inside a docker container.

In my configuration, both the scheduler and the client (SharedJobCluster) are hosted on the same server, inside separate containers that share a common network. The node, meanwhile, runs on a separate machine. Unfortunately, this configuration doesn't work.

The problem is the ip_addr option: it allows specifying only one host address for SharedJobCluster. If I set it to the container's hostname, I can successfully connect to the scheduler and start the job, but the node then cannot connect back to the client to return the resulting files, because the client's address is local to the container and unreachable from outside.

If I instead specify the external IP through the ip_addr option, the scheduler refuses to start the job: the client no longer listens on the internal interface, only the external one, and so cannot accept the scheduler's connection attempt.

I tried to understand the documentation for ip_addr, but its mention of a list only confuses me:

ip_addr is address to use for (client) communication. If it is not set, all configured network interfaces are used. If it is a string, it must be either a host name or IP address (in either IPv4 or IPv6 format). If it is a list, each must be a string of host name or IP address, in which case interface addresses for each of those is used.

So, my question is: is it possible to set up more than one ip_addr for SharedJobCluster, similar to what can be achieved with the scheduler?

I did try ext_ip_addr, but then the scheduler recognizes the client's IP as 127.0.0.1 and again fails to connect to that host.
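To make the setup concrete, here is a rough sketch of the configurations I tried. The hostnames, IP addresses, and the compute function are placeholders, and I'm assuming dispy's documented SharedJobCluster keyword arguments scheduler_node, ip_addr, and ext_ip_addr:

```python
import dispy

def compute(n):
    # Placeholder computation submitted to the cluster.
    return n * n

# Attempt 1: bind to the container's internal hostname. The scheduler
# (another container on the shared Docker network) is reachable, but the
# node on the separate machine cannot reach this address to return results.
cluster = dispy.SharedJobCluster(
    compute,
    scheduler_node='scheduler',   # scheduler container's name on the shared network
    ip_addr='client',             # container-local hostname
)

# Attempt 2: use the host's external IP instead. Now the client no longer
# listens on the internal interface, so the scheduler's connection back to
# the client fails and the job never starts.
# cluster = dispy.SharedJobCluster(
#     compute,
#     scheduler_node='scheduler',
#     ip_addr='203.0.113.10',     # host's external IP (example address)
# )

# Attempt 3: keep the internal bind address and advertise the external one
# via ext_ip_addr (NAT-style). In my case the scheduler then sees the
# client as 127.0.0.1 and again fails to connect.
# cluster = dispy.SharedJobCluster(
#     compute,
#     scheduler_node='scheduler',
#     ip_addr='client',
#     ext_ip_addr='203.0.113.10',
# )
```

What I'd need is something like passing a list to ip_addr (as the scheduler accepts), so the client can listen on both the container-internal and external interfaces at once.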
