I'm implementing the Vespa multinode-ha example in an on-prem Kubernetes cluster. Is there a way to properly expose vespa-internal through a load balancer (LB)? My data-scientist team prefers to use http://myserversxyz:19071 directly instead of having to run "kubectl port-forward pod/vespa-configserver-0 19071" every time they deploy their application package.
The multinode-ha example configures the headless service with clusterIP: None. I tried changing it to a LoadBalancer, but my LB does not seem to like that: it keeps marking the service as up and down at the LB level, while the vespa-configserver pods themselves look fine.
Deploying config/configmap.yml, config/headless.yml (modified to use the LB) and config/configserver.yml works well, and I can curl http://myservicename:19071/state/v1/health without any problem. The problem starts after deploying config/admin.yml: the LB shows the service nodes/pool going down and up over and over, so queries to http://myservicename:19071/state/v1/health sometimes return 200 and sometimes get connection refused.
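For reference, this is roughly what my modified config/headless.yml looks like (the selector label is the one from the example, reproduced here from memory, so treat it as approximate):

```yaml
# Sketch of the modified service - originally a headless service (clusterIP: None)
# in the multinode-ha example, changed here to type LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: vespa-internal
spec:
  type: LoadBalancer        # changed from clusterIP: None
  selector:
    app: vespa-internal     # label as I recall it from the example
  ports:
    - name: configserver
      port: 19071           # config server deploy/health port
      protocol: TCP
      targetPort: 19071
```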
Any idea?
Thanks,
FT
Hi, and sorry for the slow response! You are describing a situation where the configserver pods are fine until you deploy the rest of the pods (or at least the admin pod). Since these are different pods, it looks like it could be a resource-shortage problem?
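If that is the case, one thing to try is giving the pods explicit resource requests/limits so the scheduler keeps them from starving each other. A minimal sketch, added under the container spec in config/admin.yml (the values are only illustrative and depend on your cluster):

```yaml
# Illustrative only - sizes depend on your nodes and workload.
# Goes under spec.template.spec.containers[0] in config/admin.yml
# (and similarly in the other pod specs).
resources:
  requests:
    memory: "4Gi"
    cpu: "1"
  limits:
    memory: "4Gi"
    cpu: "2"
```

Checking whether the nodes are under memory/CPU pressure after config/admin.yml is applied would help confirm or rule this out.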