Fixing kind cluster restart failures

While building my macOS workstation I use Docker installed on macOS to run a kind multi-node cluster, and just like before I once again hit the difficulty of troubleshooting kind restart failures. Although multi-node: Kubernetes cluster does not start after Docker re-assigns node's IP addresses after (Docker) restart #2045 mentions that kind 0.15.0 fixed the problem of multiple control-plane nodes failing to reconnect after a restart, the kind v0.17.0 I am using still has the same problem.

Root cause:

  • By default, Docker dynamically assigns IP addresses on every restart, so there is no guarantee that each node of the kind cluster (a local Kubernetes cluster simulated with Docker) gets the same IP address after a restart

  • kind (a local Kubernetes cluster simulated with Docker) was not designed for persistent operation; the usual workflow is to repeatedly rebuild clusters to simulate different development environments, which also means that recreating a complex environment is very time-consuming

Inspection

  • Run the following command to get the current IP address of each node in the kind cluster:

Get the IP addresses of the Docker containers
docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)

The output shows:

Output of the Docker container IP addresses
/exciting_yalow - 172.17.0.2
/admiring_snyder -
/dev-external-load-balancer - 172.18.0.8
/dev-control-plane2 - 172.18.0.3
/dev-worker5 - 172.18.0.4
/dev-control-plane3 - 172.18.0.2
/dev-control-plane - 172.18.0.7
/dev-worker2 - 172.18.0.10
/dev-worker - 172.18.0.6
/dev-worker4 - 172.18.0.9
/dev-worker3 - 172.18.0.5
/demo-dev -
/gifted_gould -

This verifies that every time Docker (installed on macOS) restarts, the IP addresses of the Docker containers (that is, the kind cluster's nodes) change, which is exactly why a kind multi-node cluster stops working after a restart.

There is a wrinkle here: multi-node: Kubernetes cluster does not start after Docker re-assigns node's IP addresses after (Docker) restart #2045 did fix the IP re-assignment problem for multiple control-plane nodes after a restart, but the approach taken was the patch from Fix multi-node cluster not working after restarting docker #2671, which rewrites the server addresses for kube-controller-manager and kube-scheduler in the kubeadm kubeconfig files to the loopback address.
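
To make the idea concrete, here is a hypothetical sketch of what that rewrite amounts to; the actual fix is implemented inside kind itself, and the file paths and port below are standard kubeadm defaults rather than quoted from the PR:

Illustration only: pointing the kubeconfig server addresses at loopback
# run against each control-plane container; dev-control-plane is one of the
# node names from the docker inspect output above
docker exec dev-control-plane sed -i \
  's#server: https://.*#server: https://127.0.0.1:6443#' \
  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf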

You can see that the cluster is then able to start normally. I entered the VM using the nsenter method for the Docker Desktop on Mac VM and inspected the processes with top.
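
For reference, a minimal sketch of how I typically nsenter into the Docker Desktop VM (the justincormack/nsenter1 image is my own assumption here, not something prescribed by the issue threads):

Enter the Docker Desktop VM with nsenter
# a privileged container sharing the host PID namespace can nsenter into PID 1 of the VM
docker run -it --rm --privileged --pid=host justincormack/nsenter1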

top inside the Docker VM shows the IP addresses advertised by etcd/apiserver/haproxy and other processes
CPU:   6% usr   3% sys   0% nic  89% idle   0% io   0% irq   0% sirq
Load average: 3.99 1.19 0.42 4/1204 7646
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
 6389  5837 root     S    10.6g 139%   0   3% etcd --advertise-client-urls=https://172.18.0.3:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-wa
 5997  5701 root     S    10.6g 139%   3   1% etcd --advertise-client-urls=https://172.18.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-wa
 6483  6082 root     S    10.6g 139%   0   1% etcd --advertise-client-urls=https://172.18.0.7:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-wa
 5484  3274 root     S    1908m  24%   1   1% /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-end
 5261  2913 root     S    1835m  23%   0   0% /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-end
 4911  2816 root     S    1908m  24%   2   0% /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-end
 5262  2586 root     S    1836m  23%   1   0% /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-end
 5286  2565 root     S    1908m  24%   0   0% /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-end
 5453  3612 root     S    1908m  24%   2   0% /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-end
 5465  3634 root     S    1836m  23%   1   0% /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-end
 1925  1920 root     S    2188m  28%   1   0% /usr/local/bin/dockerd --containerd /var/run/desktop-containerd/containerd.sock --pidfile /run/desktop/docker.pid --swarm-default-advertise-addr=eth0 --host-gateway-ip 192.168.65.2
 4942  2833 root     S    1908m  24%   1   0% /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-end
 3844  2586 root     S    2027m  26%   3   0% /usr/local/bin/containerd
 4758  3274 root     S    2100m  27%   4   0% /usr/local/bin/containerd
 4405  3443 root     S     420m   5%   4   0% haproxy -sf 7 -W -db -f /usr/local/etc/haproxy/haproxy.cfg
   14     2 root     IW       0   0%   4   0% [rcu_preempt]
 4429  2565 root     S    2099m  27%   3   0% /usr/local/bin/containerd
 6308  5900 root     S     740m   9%   0   0% kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader
 6153  5704 root     S     740m   9%   4   0% kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader
  908     1 root     S     726m   9%   4   0% /usr/bin/containerd
 4086  2816 root     S    1521m  19%   0   0% /usr/local/bin/containerd
 1620  1614 root     S    1376m  17%   2   0% /usr/local/bin/containerd --config /etc/containerd/containerd.toml
 6497  6117 root     S     740m   9%   4   0% kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader
 5837  2586 root     S     695m   9%   2   0% /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 634e452d0e2577ce8139323d9398e01a5ac39c8c395df0998a0046d05ddce892 -address /run/containerd/containerd.sock
 4146  2833 root     S    1521m  19%   4   0% /usr/local/bin/containerd
 4738  3634 root     S    1450m  18%   4   0% /usr/local/bin/containerd
 4668  3612 root     S    1449m  18%   2   0% /usr/local/bin/containerd
 7570  5703 root     S     762m  10%   3   0% kube-apiserver --advertise-address=172.18.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-to
 6360  5860 root     S     752m  10%   2   0% kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.
 4091  2913 root     S<   23304   0%   1   0% /lib/systemd/systemd-journald

The solution

seguidor777/kind_static_ips.sh provides a very clever script: after the kind cluster is deployed, running kind_static_ips.sh captures the IP address that Docker assigned to each node of the kind cluster and then pins those addresses as static IPs via docker network connect --ip <node_ip> "kind" <node_name>.

I made some revisions on top of the original author's script:

Set a static IP for each kind cluster node with the kind_static_ips.sh script
#!/usr/bin/env bash
set -e

CLUSTER_NAME='dev'
reg_name="${CLUSTER_NAME}-registry"

# Workaround for https://github.com/kubernetes-sigs/kind/issues/2045
# all_nodes=$(kind get nodes --name "${CLUSTER_NAME}" | tr "\n" " ")
# I also add the registry to the list of containers that get a static IP
all_nodes=$(kind get nodes --name "${CLUSTER_NAME}" | tr "\n" " ")${reg_name}
declare -A nodes_table
ip_template="{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}"

echo "Saving original IPs from nodes"

for node in ${all_nodes}; do
  #nodes_table["${node}"]=$(docker inspect -f "${ip_template}" "${node}")
  # the registry has 2 network interfaces; as a somewhat ugly workaround, filter
  # out the interface IP that does not belong to the kind network
  nodes_table["${node}"]=$(docker inspect -f "${ip_template}" "${node}" | sed 's/172.17.0.2//')
  echo "${node}: ${nodes_table["${node}"]}"
done

echo "Stopping all nodes and registry"
docker stop ${all_nodes} >/dev/null

echo "Re-creating network with user defined subnet"
subnet=$(docker network inspect -f "{{(index .IPAM.Config 0).Subnet}}" "kind")
echo "Subnet: ${subnet}"
gateway=$(docker network inspect -f "{{(index .IPAM.Config 0).Gateway}}" "kind")
echo "Gateway: ${gateway}"
docker network rm "kind" >/dev/null
docker network create --driver bridge --subnet ${subnet} --gateway ${gateway} "kind" >/dev/null

echo "Assigning static IPs to nodes"
for node in "${!nodes_table[@]}"; do
  docker network connect --ip ${nodes_table["${node}"]} "kind" "${node}"
  echo "Assigning IP ${nodes_table["${node}"]} to node ${node}"
done

echo "Starting all nodes and registry"
docker start ${all_nodes} >/dev/null

echo -n "Wait until all nodes are ready "

while :; do
  #[[ $(kubectl get nodes | grep Ready | wc -l) -eq ${#nodes_table[@]} ]] && break
  # the k8s node count is 2 less than the docker container list (registry and haproxy are not k8s nodes)
  node_num=$(( ${#nodes_table[@]} - 2 ))
  [[ $(kubectl get nodes | grep Ready | wc -l) -eq ${node_num} ]] && break
  echo -n "."
  sleep 5
done

echo
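
Running the script is a one-off step after the cluster is created (assuming you saved it as kind_static_ips.sh; the verification one-liner is the same docker inspect command used earlier):

Run the script once, then verify the IPs survive a Docker restart
chmod +x kind_static_ips.sh
./kind_static_ips.sh
# after the next Docker restart, confirm the nodes kept their addresses:
docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)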

Notes

  • The original kind_static_ips.sh assumes a local registry for the kind cluster, which is why the script has the ${reg_name} variable; if you do not use a local registry, remove that variable from the script, otherwise execution fails (because an empty string is passed as an argument to the docker stop command):

    Error response from daemon: page not found
    
  • I also include the registry in the list that gets static IPs (where all_nodes is built at the top of the script), but the registry has 2 network interfaces: one attached to the kind network and the other to the bridge network (for outbound access). That is why the docker inspect result is filtered through sed to drop the bridge interface's IP and keep only the kind interface IP (the one-liner after this list shows the registry's two interfaces)

  • Two of the Docker containers, registry and haproxy, do not belong to Kubernetes, so the wait loop at the end of the script subtracts 2 from the container count (node_num) before comparing against kubectl get nodes

  • The script requires a fairly recent bash; the bash built into macOS is not syntax-compatible. On my macOS workstation I deliberately installed the latest bash via Homebrew and changed the original author's shebang to use env to locate the preferred bash (the Homebrew install takes precedence on the PATH, which satisfies the need for a newer bash here; see the snippet after this list)
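
As noted above, the registry sits on two networks; a quick way to see both interfaces (the dev-registry name follows from CLUSTER_NAME='dev' in the script) is:

Show the registry's interface per network
docker inspect -f '{{range $net, $cfg := .NetworkSettings.Networks}}{{$net}}={{$cfg.IPAddress}} {{end}}' dev-registry

And since declare -A needs bash 4 or newer while macOS ships bash 3.2, a minimal sketch of the Homebrew setup (paths vary by machine; /opt/homebrew is the Apple Silicon default):

Install a newer bash via Homebrew and verify
brew install bash
# env resolves the Homebrew bash first once it precedes /bin on the PATH
bash -c 'echo $BASH_VERSION'   # the macOS built-in prints 3.2.x, Homebrew 5.x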

References