Kubernetes Deployment Servers

To simulate a Kubernetes cluster, I deploy the following virtual machines:

  • 3 Kubernetes Master servers
  • 5 Kubernetes Node servers
  • 2 HAProxy servers
  • 3 etcd servers

Cloning the k8s virtual machines

Note

This example deploys multiple KVM virtual machines on a single physical host. The VMs sit on the physical host's NAT network, so they cannot be reached directly from outside (port mapping is required).
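
For example, to reach a VM's SSH port from outside, one option is a DNAT rule on the physical host; the sketch below is illustrative only (the external port 2222 and the target kubemaster-1 are arbitrary choices):

# forward host port 2222 to SSH on kubemaster-1
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.122.11:22
# allow the forwarded traffic through the host's FORWARD chain
iptables -I FORWARD -d 192.168.122.11 -p tcp --dport 22 -j ACCEPT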

  • Clone the virtual machines:

    # shut down the template VM before cloning
    virsh shutdown centos7

    # 4 master VMs: 3 for the cluster plus a spare for failure-replacement drills
    for i in {1..4};do
      virt-clone --connect qemu:///system --original centos7 --name kubemaster-${i} --file /var/lib/libvirt/images/kubemaster-${i}.qcow2
      virt-sysprep -d kubemaster-${i} --hostname kubemaster-${i} --root-password password:CHANGE_ME
      virsh start kubemaster-${i}
    done

    # 5 worker node VMs
    for i in {1..5};do
      virt-clone --connect qemu:///system --original centos7 --name kubenode-${i} --file /var/lib/libvirt/images/kubenode-${i}.qcow2
      virt-sysprep -d kubenode-${i} --hostname kubenode-${i} --root-password password:CHANGE_ME
      virsh start kubenode-${i}
    done

    # 2 HAProxy VMs for apiserver load balancing
    for i in {1..2};do
      virt-clone --connect qemu:///system --original centos7 --name haproxy-${i} --file /var/lib/libvirt/images/haproxy-${i}.qcow2
      virt-sysprep -d haproxy-${i} --hostname haproxy-${i} --root-password password:CHANGE_ME
      virsh start haproxy-${i}
    done

    # 3 etcd VMs
    for i in {1..3};do
      virt-clone --connect qemu:///system --original centos7 --name etcd-${i} --file /var/lib/libvirt/images/etcd-${i}.qcow2
      virt-sysprep -d etcd-${i} --hostname etcd-${i} --root-password password:CHANGE_ME
      virsh start etcd-${i}
    done
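
After the clones start, it is worth confirming that each VM is running and has obtained a DHCP lease; a quick check with standard virsh commands:

# list all defined VMs and their state
virsh list --all
# show the DHCP-assigned address of one clone
virsh domifaddr kubemaster-1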
    

Note

A stable Kubernetes cluster needs 3 Master servers; the extra kubemaster-4 is created to rehearse replacing a failed Master server.

The haproxy-X machines provide the load balancing needed when building a highly available (HA) Kubernetes cluster, supplying load balancing for the apiserver.

Hostname resolution

Configure hostname resolution for the simulated cluster in /etc/hosts on the physical host running KVM:

../../studio/hosts
# started at host boot:
# ceph-1 / ceph-2 / ceph-3 / ceph-4 / ceph-5 - distributed storage
# machine-1 / machine-2 / machine-3 - simulated physical hosts
# nested virtualization =>
# devstack - development VM
# minikube - Kubernetes development VM

127.0.0.1 localhost localhost.localdomain

# KVM libvirt virbr0: 192.168.122.0/24
192.168.122.1   myadmin myadmin.dev.huatai.me  # physical host

192.168.122.2  win10 win10.dev.huatai.me
# development/testing for OpenStack/Kernel/Kata/Docker (one VM runs multiple roles to save virtualization resources)
192.168.122.3  devstack devstack.dev.huatai.me

# Kubernetes development environment
192.168.122.4   minikube minikube.dev.huatai.me

# development-only Kubernetes cluster inside the KVM NAT network
192.168.122.5   etcd-1 etcd-1.dev.huatai.me
192.168.122.6   etcd-2 etcd-2.dev.huatai.me
192.168.122.7   etcd-3 etcd-3.dev.huatai.me
192.168.122.8   haproxy-1 haproxy-1.dev.huatai.me
192.168.122.9   haproxy-2 haproxy-2.dev.huatai.me
192.168.122.10  kubeapi kubeapi.dev.huatai.me            # apiserver VIP built on HAProxy
192.168.122.11  kubemaster-1 kubemaster-1.dev.huatai.me
192.168.122.12  kubemaster-2 kubemaster-2.dev.huatai.me
192.168.122.13  kubemaster-3 kubemaster-3.dev.huatai.me
192.168.122.14  kubemaster-4 kubemaster-4.dev.huatai.me  # for testing master replacement rotation
192.168.122.15  kubenode-1 kubenode-1.dev.huatai.me
192.168.122.16  kubenode-2 kubenode-2.dev.huatai.me
192.168.122.17  kubenode-3 kubenode-3.dev.huatai.me
192.168.122.18  kubenode-4 kubenode-4.dev.huatai.me
192.168.122.19  kubenode-5 kubenode-5.dev.huatai.me

# simulated physical hosts
# run the L-1 Kubernetes cluster and L-2 KVM virtual machines (OpenStack) on the physical host
192.168.122.21  machine-1 machine-1.test.huatai.me
192.168.122.22  machine-2 machine-2.test.huatai.me
192.168.122.23  machine-3 machine-3.test.huatai.me

# continuous integration environment
192.168.122.200  jenkins jenkins.test.huatai.me

# template operating systems
192.168.122.252  centos8 centos8.test.huatai.me
192.168.122.253  ubuntu18-04 ubuntu18-04.test.huatai.me
192.168.122.254  centos7 centos7.test.huatai.me

# KVM nested virbr0:

# Docker docker0: 172.17.0.0/16

# Docker ceph-net: 172.18.0.0/16
172.18.0.11 ceph-1 ceph-1.test.huatai.me
172.18.0.12 ceph-2 ceph-2.test.huatai.me
172.18.0.13 ceph-3 ceph-3.test.huatai.me
172.18.0.14 ceph-4 ceph-4.test.huatai.me
172.18.0.15 ceph-5 ceph-5.test.huatai.me

# template operating systems
172.17.0.253  centos6-c centos6-c.test.huatai.me
172.17.0.252  centos7-c centos7-c.test.huatai.me
172.17.0.251  centos8-c centos8-c.test.huatai.me
172.17.0.239  ubunut18-c ubunut18-c.test.huatai.me
172.17.0.238  ubunut20-c ubunut20-c.test.huatai.me

# VMware virtualization
172.16.16.253  centos6
172.16.16.252  centos7
172.16.16.251  centos8
172.16.16.239  ubunut18
172.16.16.238  ubunut20


# Jetson Nano + Raspberry Pi
# building an ARM Kubernetes cluster
192.168.6.1  mbp13
192.168.6.2  mbp15
192.168.6.7  kali
192.168.6.8  pi4
192.168.6.9  pi400
192.168.6.10 jetson
192.168.6.11 pi-master1
192.168.6.12 pi-master2
192.168.6.13 pi-master3
192.168.6.15 pi-worker1
192.168.6.16 pi-worker2
192.168.6.17 pi-worker3
192.168.6.110 raspberrypi
192.168.6.111 alpine
192.168.6.200  zcloud
# Kubernetes CIDR 10.244.0.0/16

# libvirt bridge network
# KVM VMs on zcloud attach directly to this network, default gw 192.168.6.9
192.168.6.200 br0
192.168.6.201 sles12-sp3

# Raspberry Pi Zero
192.168.7.10 kali

#-------------------------------------------------------------

# staging environment: OpenStack+Kubernetes deployed on a cluster of physical servers
# deployed inside the LAN, as a staging area before pushing to production
# domain: staging.huatai.me
# 7 physical hosts in total
# roles: ceph, etcd, glusterfs, database
# roles: kubenode worker nodes
192.168.1.1 worker1
192.168.1.2 worker2
192.168.1.3 worker3
192.168.1.4 worker4
192.168.1.5 worker5
192.168.1.6 worker6
192.168.1.7 worker7 # runs a standalone kind simulated Kubernetes cluster

# kubemaster (KVM virtual machines / bridged network)
192.168.1.251 kubemaster-1 kubemaster-1.staging.huatai.me
192.168.1.252 kubemaster-2 kubemaster-2.staging.huatai.me
192.168.1.253 kubemaster-3 kubemaster-3.staging.huatai.me

#-------------------------------------------------------------

# production environment: public-facing services deployed on cloud servers from a cloud provider
# pushed automatically via continuous integration + continuous deployment
# domain: huatai.me

libvirt dnsmasq

But how do all the virtual machines in the KVM cluster obtain consistent DNS resolution? Running a DNS server inside the cluster is obviously one solution. Note, however, that a libvirt environment already runs a dnsmasq instance by default, which in fact provides DNS resolution for all virtual machines attached to the virbr0 interface. This can be seen by running ps aux | grep dnsmasq on the physical server:

nobody    13280  0.0  0.0  53884  1116 ?        S    22:15   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root      13281  0.0  0.0  53856   380 ?        S    22:15   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Inspecting libvirt's dnsmasq configuration file /var/lib/libvirt/dnsmasq/default.conf shows:

strict-order
pid-file=/var/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-range=192.168.122.51,192.168.122.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=204
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts

In this configuration:

  • interface=virbr0 means libvirt's dnsmasq serves only the virbr0 interface, so it only affects the virtual machines on the NAT network
  • dhcp-range=192.168.122.51,192.168.122.254 is the DHCP range I configured
  • dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile configures static DHCP address assignments (see the example below)
  • addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts configures dnsmasq's DNS records, similar to /etc/hosts
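
Each line in the dhcp-hostsfile uses the same syntax as the right-hand side of dnsmasq's dhcp-host= option; a hypothetical entry (the MAC address below is invented for illustration) would look like:

# /var/lib/libvirt/dnsmasq/default.hostsfile
# format: <MAC>,<IP>[,<hostname>]
52:54:00:aa:bb:11,192.168.122.11,kubemaster-1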

Following KVM: Using dnsmasq for libvirt DNS resolution, run virsh net-edit default to edit the libvirt network and add a <dns></dns> section:

virsh net-edit default
<network>
  <name>default</name>
  <uuid>7cbc12d6-1899-4808-8d76-cdda780e3cc9</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:03:12:89'/>
  <dns>
    <host ip='192.168.122.5'>
      <hostname>etcd-1.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.6'>
      <hostname>etcd-2.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.7'>
      <hostname>etcd-3.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.8'>
      <hostname>haproxy-1.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.9'>
      <hostname>haproxy-2.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.10'>
      <hostname>kubeapi.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.11'>
      <hostname>kubemaster-1.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.12'>
      <hostname>kubemaster-2.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.13'>
      <hostname>kubemaster-3.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.14'>
      <hostname>kubemaster-4.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.15'>
      <hostname>kubenode-1.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.16'>
      <hostname>kubenode-2.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.17'>
      <hostname>kubenode-3.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.18'>
      <hostname>kubenode-4.test.huatai.me</hostname>
    </host>
    <host ip='192.168.122.19'>
      <hostname>kubenode-5.test.huatai.me</hostname>
    </host>
  </dns>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.51' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

Then restart the libvirt default network:

sudo virsh net-destroy default
sudo virsh net-start default
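
To confirm the new records are being served, query the dnsmasq instance listening on the virbr0 gateway address directly from the physical host; the answer should match the <dns> section above:

# ask libvirt's dnsmasq directly
dig @192.168.122.1 kubeapi.test.huatai.me +short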

Now check /var/lib/libvirt/dnsmasq/default.addnhosts on the physical server: the previously empty file is automatically populated with static IP resolution entries, similar to /etc/hosts:

192.168.122.5    etcd-1.test.huatai.me
192.168.122.6    etcd-2.test.huatai.me
...

Note

In earlier experiments I found that editing /var/lib/libvirt/dnsmasq/default.addnhosts directly to add static entries and then recreating the default network achieves the same DNS resolution. However, I noticed that after a while the file on the physical server gets wiped clean, even though DNS resolution keeps working inside the VM network. Editing the file directly does not seem to be a good approach, so, as in the document referenced above, configure it by editing the default network's XML instead.

Note that after restarting the network, every virtual machine's virtual NIC becomes detached from the virbr0 bridge and must be reattached:

for i in {0..13};do sudo brctl addif virbr0 vnet${i};done
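
The vnet indexes above assume the 14 running VMs were attached in order. To avoid guessing the indexes, a sketch that enumerates every running domain's interfaces via virsh instead:

# reattach each running domain's vnet interfaces to virbr0
for dom in $(sudo virsh list --name); do
  for ifname in $(sudo virsh domiflist ${dom} | awk '/vnet/ {print $1}'); do
    sudo brctl addif virbr0 ${ifname}
  done
done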

Then log in to any virtual machine and try resolving DNS, for example the VIP address that will later serve the apiserver:

dig kubeapi.test.huatai.me

The output should look like:

kubeapi.test.huatai.me.    0    IN    A    192.168.122.10

Short hostnames can also be resolved:

dig kubeapi

Output:

kubeapi.        0    IN    A    192.168.122.10

Note

To serve DNS resolution externally with a standalone dnsmasq instead, see my earlier notes DNSmasq Quick Start or KVM: Using dnsmasq for libvirt DNS resolution.

pssh configuration

To conveniently install packages and perform base configuration across many cluster servers at once, commands are executed via pssh. The virtual machines are grouped by role into the following host files:

kube
192.168.122.11
192.168.122.12
192.168.122.13
192.168.122.14
192.168.122.15
192.168.122.16
192.168.122.17
192.168.122.18
192.168.122.19
kubemaster
192.168.122.11
192.168.122.12
192.168.122.13
192.168.122.14
kubenode
192.168.122.15
192.168.122.16
192.168.122.17
192.168.122.18
192.168.122.19

Then, for example, to install the docker package on all machines at once, just run:

pssh -ih kube 'yum install docker-ce -y'
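
Note that pssh requires passwordless SSH to every host. A minimal sketch to pre-seed the local public key on all nodes, assuming sshpass is installed and root password login is still enabled (the password being the one set by virt-sysprep above):

# push the local SSH public key to every host in the kube group file
while read host; do
  sshpass -p 'CHANGE_ME' ssh-copy-id -o StrictHostKeyChecking=no root@${host}
done < kube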