This example shows that NSM keeps working after the remote NSMgr and the remote NSE are restarted.
NSC and NSE are using the kernel
mechanism to connect to their local forwarders.
Forwarders are using the vxlan
mechanism to connect to each other.
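Since NSC and NSE sit on different nodes, the path is NSC -> forwarder (node 1) -> vxlan -> forwarder (node 2) -> NSE. To see the forwarders involved you can list one per node; this assumes the default forwarder-vpp DaemonSet in the nsm-system namespace, so adjust the label if your cluster runs a different forwarder:
kubectl get pods -l app=forwarder-vpp -n nsm-system -o wide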
Make sure that you have completed the steps from the basic or memory setup.
Create test namespace:
NAMESPACE=($(kubectl create -f https://raw.githubusercontent.com/networkservicemesh/deployments-k8s/94bde617f7939b9fd990840e80a6f2124e1c8e83/examples/heal/namespace.yaml)[0])
NAMESPACE=${NAMESPACE:10}
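A note on the parsing above: kubectl create prints a line like "namespace/ns-xxxx created". The command substitution word-splits that output into an array whose first element is "namespace/ns-xxxx" (the trailing [0] merely glues onto the last word), and ${NAMESPACE:10} strips the 10-character "namespace/" prefix. Verify the result:
echo ${NAMESPACE}   # should print the bare namespace name, without the "namespace/" prefix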
Get nodes, excluding the control plane:
NODES=($(kubectl get nodes -o go-template='{{range .items}}{{ if not .spec.taints }}{{index .metadata.labels "kubernetes.io/hostname"}} {{end}}{{end}}'))
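The example needs at least two schedulable nodes: the NSC lands on ${NODES[0]} and the NSE on ${NODES[1]}. A quick sanity check before continuing:
echo "${NODES[@]}"
[ ${#NODES[@]} -ge 2 ] || echo "error: need at least two untainted nodes"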
Create kustomization file:
cat > kustomization.yaml <<EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ${NAMESPACE}
bases:
- https://github.com/networkservicemesh/deployments-k8s/apps/nsc-kernel?ref=94bde617f7939b9fd990840e80a6f2124e1c8e83
- https://github.com/networkservicemesh/deployments-k8s/apps/nse-kernel?ref=94bde617f7939b9fd990840e80a6f2124e1c8e83
patchesStrategicMerge:
- patch-nsc.yaml
- patch-nse.yaml
EOF
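Optionally, preview the manifests that kustomize will produce before applying anything (this fetches the remote bases, so it needs network access):
kubectl kustomize .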
Create NSC patch:
cat > patch-nsc.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nsc-kernel
spec:
  template:
    spec:
      containers:
        - name: nsc
          env:
            # kernel://<service>/<interface>: request the icmp-responder
            # network service over the kernel mechanism, naming the
            # resulting interface nsm-1.
            - name: NSM_NETWORK_SERVICES
              value: kernel://icmp-responder/nsm-1
      nodeSelector:
        kubernetes.io/hostname: ${NODES[0]}
EOF
Create NSE patch:
cat > patch-nse.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nse-kernel
spec:
  template:
    spec:
      containers:
        - name: nse
          env:
            - name: NSM_CIDR_PREFIX
              value: 172.16.1.100/31
      nodeSelector:
        kubernetes.io/hostname: ${NODES[1]}
EOF
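A /31 prefix is a point-to-point pair with exactly two addresses: with NSM_CIDR_PREFIX set to 172.16.1.100/31, the NSE takes 172.16.1.100 and hands 172.16.1.101 to the NSC, which is what the pings below target. You can check that the patch was rendered as expected:
kubectl kustomize . | grep -A 1 NSM_CIDR_PREFIX   # should show value: 172.16.1.100/31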
Deploy NSC and NSE:
kubectl apply -k .
Wait for the applications to become ready:
kubectl wait --for=condition=ready --timeout=1m pod -l app=nsc-kernel -n ${NAMESPACE}
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel -n ${NAMESPACE}
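If either wait times out, the usual causes are scheduling (check the node selectors against your node names) or image pulls; standard diagnostics apply:
kubectl get pods -n ${NAMESPACE} -o wide
kubectl describe pod -l app=nsc-kernel -n ${NAMESPACE}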
Find NSC and NSE pods by labels:
NSC=$(kubectl get pods -l app=nsc-kernel -n ${NAMESPACE} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
NSE=$(kubectl get pods -l app=nse-kernel -n ${NAMESPACE} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
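Each template prints one pod name per line; confirm both variables are non-empty before continuing:
echo NSC=${NSC} NSE=${NSE}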
Ping from NSC to NSE:
kubectl exec ${NSC} -n ${NAMESPACE} -- ping -c 4 172.16.1.100
Ping from NSE to NSC:
kubectl exec ${NSE} -n ${NAMESPACE} -- ping -c 4 172.16.1.101
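If the pings succeed, the NSC has a kernel interface named nsm-1 (the name comes from the kernel:// URL above) carrying 172.16.1.101/31. Assuming the image ships an ip binary (iproute2 or the busybox applet), you can inspect it:
kubectl exec ${NSC} -n ${NAMESPACE} -- ip addr show nsm-1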
Create a new NSE patch:
cat > patch-nse.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nse-kernel
spec:
  template:
    metadata:
      labels:
        version: new
    spec:
      containers:
        - name: nse
          env:
            - name: NSM_CIDR_PREFIX
              value: 172.16.1.102/31
      nodeSelector:
        kubernetes.io/hostname: ${NODES[1]}
EOF
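Two things change in this patch: the version: new label lets us wait for and find the replacement NSE pod below, and the new /31 pair means that after healing the NSE holds 172.16.1.102 and the NSC is re-assigned 172.16.1.103.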
Find remote NSMgr pod:
NSMGR=$(kubectl get pods -l app=nsmgr --field-selector spec.nodeName==${NODES[1]} -n nsm-system --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
Restart remote NSMgr and NSE:
kubectl delete pod ${NSMGR} -n nsm-system
kubectl apply -k .
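Deleting the NSMgr pod makes its DaemonSet schedule a replacement on the same node, while re-applying the kustomization rolls the NSE Deployment because its patch changed. You can watch both restarts happen (Ctrl-C to stop watching):
kubectl get pods -l app=nsmgr --field-selector spec.nodeName==${NODES[1]} -n nsm-system -w
kubectl get pods -l app=nse-kernel -n ${NAMESPACE} -w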
Wait for the new NSMgr and NSE pods:
kubectl wait --for=condition=ready --timeout=1m pod -l app=nsmgr --field-selector spec.nodeName==${NODES[1]} -n nsm-system
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel -l version=new -n ${NAMESPACE}
Find new NSE pod:
NEW_NSE=$(kubectl get pods -l app=nse-kernel -l version=new -n ${NAMESPACE} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
Ping from NSC to new NSE:
kubectl exec ${NSC} -n ${NAMESPACE} -- ping -c 4 172.16.1.102
Ping from new NSE to NSC:
kubectl exec ${NEW_NSE} -n ${NAMESPACE} -- ping -c 4 172.16.1.103
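As a final check that healing moved the connection onto the new address pair, the NSC's nsm-1 interface should now carry 172.16.1.103 (again assuming an ip binary in the image):
kubectl exec ${NSC} -n ${NAMESPACE} -- ip addr show nsm-1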
Delete the namespace:
kubectl delete ns ${NAMESPACE}