Deploying a TLS-Authenticated etcd Cluster in a Private Cloud

While deploying a K3s (lightweight Kubernetes) cluster, I used the approach from "Deploying a TLS-authenticated etcd cluster". Building on that work, this article records the method I follow when deploying the private-cloud etcd service.

Environment

The servers are the same machines used in "Private cloud etcd cluster TLS setup": three virtual machines built on the private-cloud KVM environment:

Private-cloud KVM virtual machines:

+---------------+------------+
|    Host IP    |  Hostname  |
+---------------+------------+
| 192.168.6.204 | z-b-data-1 |
| 192.168.6.205 | z-b-data-2 |
| 192.168.6.206 | z-b-data-3 |
+---------------+------------+

TLS certificates

The TLS certificates were built with the cfssl tool; see "Private cloud etcd cluster TLS setup" for the complete steps. The following files were obtained (a verification sketch follows the list):

  • CA:

    ca-key.pem
    ca.csr
    ca.pem
    
  • Server certificate:

    server-key.pem
    server.csr
    server.pem
    
  • Peer certificates (one set per host):

    z-b-data-1-key.pem
    z-b-data-1.csr
    z-b-data-1.json
    z-b-data-1.pem

    z-b-data-2-key.pem
    z-b-data-2.csr
    z-b-data-2.json
    z-b-data-2.pem

    z-b-data-3-key.pem
    z-b-data-3.csr
    z-b-data-3.json
    z-b-data-3.pem
    
  • Client certificate:

    client-key.pem
    client.csr
    client.pem
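
Before distributing these files, the generated certificates can be sanity-checked with standard openssl commands (a quick sketch; adjust the file names as needed):

Quick certificate check with openssl
# print the subject and validity window of the server certificate
openssl x509 -in server.pem -noout -subject -dates
# confirm a peer certificate chains back to the CA
openssl verify -CAfile ca.pem z-b-data-1.pem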
    

Installing the packages

Download the latest release package (currently 3.5.4) with the install script from "Installing and running local etcd":

install_etcd.sh: download the Linux build of etcd
#!/bin/bash
ETCD_VER=v3.5.4
KERNEL=$(uname -s)   # Linux / Darwin
ARCH=$(uname -m)     # x86_64 / aarch64

if [ "${KERNEL}" = "Linux" ];then
    KERNEL="linux"
elif [ "${KERNEL}" = "Darwin" ];then
    KERNEL="darwin"
else
    echo "Not Linux or macOS, exit!"
    exit 1
fi

if [ "${ARCH}" = "x86_64" ];then
    ARCH="amd64"
elif [ "${ARCH}" = "aarch64" ];then
    ARCH="arm64"
else
    echo "Not x86_64 or aarch64, exit!"
    exit 1
fi

# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}

rm -f /tmp/etcd-${ETCD_VER}-${KERNEL}-${ARCH}.tar.gz
rm -rf /tmp/etcd-download-test && mkdir -p /tmp/etcd-download-test

curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-${KERNEL}-${ARCH}.tar.gz -o /tmp/etcd-${ETCD_VER}-${KERNEL}-${ARCH}.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-${KERNEL}-${ARCH}.tar.gz -C /tmp/etcd-download-test --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-${KERNEL}-${ARCH}.tar.gz

/tmp/etcd-download-test/etcd --version
/tmp/etcd-download-test/etcdctl version
/tmp/etcd-download-test/etcdutl version

sudo mv /tmp/etcd-download-test/etcd /usr/local/bin
sudo mv /tmp/etcd-download-test/etcdctl /usr/local/bin
sudo mv /tmp/etcd-download-test/etcdutl /usr/local/bin
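
The script is run once on every node (a usage sketch; it assumes the script has already been copied to each host), after which the binaries can be confirmed on the PATH:

Run install_etcd.sh and verify the binaries
sh install_etcd.sh
etcd --version
etcdctl version
etcdutl version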
  • On each node, create the etcd directories plus the etcd user and group (skip the directory creation if you are using the lv-etcd volume built in "Private cloud data-layer LVM volume management"):

Create the etcd user account with useradd
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo groupadd -f -g 1501 etcd
sudo useradd -c "etcd user" -d /var/lib/etcd -s /bin/false -g etcd -u 1501 etcd
sudo chown -R etcd:etcd /var/lib/etcd

Distributing the certificates

deploy_etcd_certificates.sh: distribute the certificates
cat << EOF > etcd_hosts
z-b-data-1
z-b-data-2
z-b-data-3
EOF

# staging script run on each host: recreate a clean /tmp/etcd_tls
# and make sure /etc/etcd exists
cat << 'EOF' > prepare_etcd.sh
rm -rf /tmp/etcd_tls
mkdir /tmp/etcd_tls

if [ ! -d /etc/etcd/ ];then
    sudo mkdir /etc/etcd
fi
EOF

for host in $(cat etcd_hosts);do
    scp prepare_etcd.sh ${host}:/tmp/
    ssh ${host} 'sh /tmp/prepare_etcd.sh'
done

# copy each host its own peer certificate plus the shared CA, server,
# and client certificates, then move everything into /etc/etcd
for host in $(cat etcd_hosts);do
    scp ${host}.pem ${host}:/tmp/etcd_tls/
    scp ${host}-key.pem ${host}:/tmp/etcd_tls/
    scp ca.pem ${host}:/tmp/etcd_tls/
    scp server.pem ${host}:/tmp/etcd_tls/
    scp server-key.pem ${host}:/tmp/etcd_tls/
    scp client.csr ${host}:/tmp/etcd_tls/
    scp client.pem ${host}:/tmp/etcd_tls/
    scp client-key.pem ${host}:/tmp/etcd_tls/
    ssh ${host} 'sudo cp /tmp/etcd_tls/* /etc/etcd/;sudo chown etcd:etcd /etc/etcd/*'
done

Run the script:

sh deploy_etcd_certificates.sh

Each etcd host now has its own set of certificates in the /etc/etcd directory.
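
To confirm the distribution succeeded, a quick sketch that lists the directory on every host, reusing the etcd_hosts file created above:

Verify the certificates on each host
for host in $(cat etcd_hosts);do
    echo "== ${host} =="
    ssh ${host} 'ls -l /etc/etcd/'
done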

Configuring the startup service

Starting etcd with the systemd process manager

  • The environment variables can be obtained with the commands below; later, the configuration-generation script incorporates these same variables:

etcd environment variables
# The NIC here is enp1s0; adjust for your actual interface
ETCD_HOST_IP=$(ip addr show enp1s0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
ETCD_NAME=$(hostname -s)

Note

Many deployment write-ups online, and many production deployments, do not use a configuration file at all: they tune etcd's runtime behavior entirely through command-line flags. According to the etcd configuration rules, the configuration file takes the highest precedence, so I keep every tuning parameter in the configuration file and pass no command-line flags.

Note

The etcd configuration contains a number of variables. In the earlier "Deploying a TLS-authenticated etcd cluster" I used placeholders in a template and substituted them with sed. That works, but it is not elegant. A simpler and cleaner way is the shell's here document feature, which substitutes environment variables into the generated file automatically; this article uses that method.
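
As a minimal illustration of the technique (a toy example, not part of the deployment): with an unquoted delimiter, the shell expands variables while the here document is written, so the generated file already contains the final values:

here document variable substitution demo
ETCD_NAME=$(hostname -s)

# unquoted EOF: ${ETCD_NAME} is expanded as the file is written;
# quoting the delimiter ('EOF') would emit the literal text instead
cat << EOF > /tmp/demo.yml
name: ${ETCD_NAME}
EOF

cat /tmp/demo.yml   # shows e.g. "name: z-b-data-1"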

  • Run the generate_etcd_service script (as root, since it writes to /etc/etcd and /lib/systemd) to generate the /etc/etcd/conf.yml configuration file and the systemd unit /lib/systemd/system/etcd.service that starts etcd:

Create the etcd startup configuration conf.yml and the systemd unit
ETCD_HOST_IP=$(ip addr show enp1s0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
ETCD_NAME=$(hostname -s)
ETCD_HOST_1=z-b-data-1
ETCD_HOST_2=z-b-data-2
ETCD_HOST_3=z-b-data-3
ETCD_HOST_1_IP=192.168.6.204
ETCD_HOST_2_IP=192.168.6.205
ETCD_HOST_3_IP=192.168.6.206
INIT_TOKEN=initpasswd

cat << EOF > /etc/etcd/conf.yml
# This is the configuration file for the etcd server.

# Human-readable name for this member.
name: ${ETCD_NAME}

# Path to the data directory.
data-dir: /var/lib/etcd

# Path to the dedicated wal directory.
wal-dir:

# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000

# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100

# Time (in milliseconds) for an election to timeout.
election-timeout: 1000

# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 0

# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: https://${ETCD_HOST_IP}:2380

# List of comma separated URLs to listen on for client traffic.
listen-client-urls: https://${ETCD_HOST_IP}:2379,https://127.0.0.1:2379

# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5

# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5

# Comma-separated white list of origins for CORS (cross-origin resource sharing).
cors:

# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: https://${ETCD_HOST_IP}:2380

# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: https://${ETCD_HOST_IP}:2379

# Discovery URL used to bootstrap the cluster.
discovery:

# Valid values include 'exit', 'proxy'
discovery-fallback: 'proxy'

# HTTP proxy to use for traffic to discovery service.
discovery-proxy:

# DNS domain used to bootstrap initial cluster.
discovery-srv:

# Initial cluster configuration for bootstrapping.
initial-cluster: ${ETCD_HOST_1}=https://${ETCD_HOST_1_IP}:2380,${ETCD_HOST_2}=https://${ETCD_HOST_2_IP}:2380,${ETCD_HOST_3}=https://${ETCD_HOST_3_IP}:2380

# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: ${INIT_TOKEN}

# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'

# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false

# Accept etcd V2 client requests
enable-v2: true

# Enable runtime profiling data via HTTP server
enable-pprof: true

# Valid values include 'on', 'readonly', 'off'
proxy: 'off'

# Time (in milliseconds) an endpoint will be held in a failed state.
proxy-failure-wait: 5000

# Time (in milliseconds) of the endpoints refresh interval.
proxy-refresh-interval: 30000

# Time (in milliseconds) for a dial to timeout.
proxy-dial-timeout: 1000

# Time (in milliseconds) for a write to timeout.
proxy-write-timeout: 5000

# Time (in milliseconds) for a read to timeout.
proxy-read-timeout: 0

client-transport-security:
  # Path to the client server TLS cert file.
  cert-file: /etc/etcd/server.pem

  # Path to the client server TLS key file.
  key-file: /etc/etcd/server-key.pem

  # Enable client cert authentication.
  client-cert-auth: true

  # Path to the client server TLS trusted CA cert file.
  trusted-ca-file: /etc/etcd/ca.pem

  # Client TLS using generated certificates
  auto-tls: true

peer-transport-security:
  # Path to the peer server TLS cert file.
  cert-file: /etc/etcd/${ETCD_NAME}.pem

  # Path to the peer server TLS key file.
  key-file: /etc/etcd/${ETCD_NAME}-key.pem

  # Enable peer client cert authentication.
  client-cert-auth: true

  # Path to the peer server TLS trusted CA cert file.
  trusted-ca-file: /etc/etcd/ca.pem

  # Peer TLS using generated certificates.
  auto-tls: true

# Enable debug-level logging for etcd.
debug: false

logger: zap

# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]

# Force to create a new one member cluster.
force-new-cluster: false

auto-compaction-mode: periodic
auto-compaction-retention: "1"
EOF

cat << EOF > /lib/systemd/system/etcd.service
[Unit]
Description=etcd service
Documentation=https://github.com/coreos/etcd

[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \\
 --config-file=/etc/etcd/conf.yml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
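
Because the generator derives the member name and listen IP from the local host, it must be executed once on each of the three nodes. A minimal distribution sketch, reusing the etcd_hosts file from the certificate step (it assumes passwordless ssh and sudo):

Run generate_etcd_service on every node
for host in $(cat etcd_hosts);do
    scp generate_etcd_service ${host}:/tmp/
    # root is required: the script writes /etc/etcd/conf.yml and the systemd unit
    ssh -t ${host} 'sudo bash /tmp/generate_etcd_service'
done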

Note

The conf.yml configuration file is the same as in the earlier "Deploying a TLS-authenticated etcd cluster"; the difference is that the here document performs the variable substitution, so no manual editing of the configuration is needed.

Note

In conf.yml, the URLs that etcd binds to during initialization must use the host's IP address; a domain name cannot be used:

# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: https://192.168.6.204:2380

# List of comma separated URLs to listen on for client traffic.
listen-client-urls: https://192.168.6.204:2379,https://127.0.0.1:2379

If a domain name such as z-b-data-1.staging.huatai.me is used, etcd still fails at startup even when the name resolves correctly:

{"level":"warn","ts":1649611005.237307,"caller":"etcdmain/etcd.go:74","msg":"failed to verify flags","error":"expected IP in URL for binding (http://z-b-data-1.staging.huatai.me:2380)"}
  • Enable the service:

    sudo systemctl enable etcd.service
    
  • Start the service (a combined one-step alternative follows):

    sudo systemctl start etcd.service
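
The two steps can also be combined, since systemctl enable supports starting the unit in the same invocation:

sudo systemctl enable --now etcd.service
systemctl status etcd.service --no-pager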
    

Troubleshooting

  • Check why startup failed:

    sudo systemctl status etcd.service
    sudo journalctl -xe
    

Local member name not found in the initial cluster configuration

Startup log:

Jul 03 00:41:30 z-b-data-1 etcd[13044]: {"level":"fatal","ts":"2022-07-03T00:41:30.498+0800","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"couldn't find local name \"z-b-data-1\" in the initial cluster configuration","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/go/src/go.etcd.io/etcd/re>
Jul 03 00:41:30 z-b-data-1 systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE

Careful checking of the configuration (prompted by "couldn't find local name "" in the initial cluster configuration when start etcd service") turned up this line in the configuration file:

# Initial cluster configuration for bootstrapping.
initial-cluster: NODE1=https://192.168.6.204:2380,NODE2=https://192.168.6.205:2380,NODE3=https://192.168.6.206:2380

This line was wrong; it had to be corrected to:

initial-cluster: z-b-data-1=https://192.168.6.204:2380,z-b-data-2=https://192.168.6.205:2380,z-b-data-3=https://192.168.6.206:2380

so that it matches the member name declared at the top of the same file:

# Human-readable name for this member.
name: z-b-data-1

In other words, initial-cluster must tell etcd which server URL the member name z-b-data-1 maps to; here that is https://192.168.6.204:2380

unmarshaling JSON

The startup log reports:

Jul 03 20:54:56 z-b-data-1 etcd[266420]: {"level":"warn","ts":"2022-07-03T20:54:56.212+0800","caller":"etcdmain/etcd.go:75","msg":"failed to verify flags","error":"error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go struct field configYAML.log-outputs of type []string"}
Jul 03 20:54:56 z-b-data-1 systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited

The error "error unmarshaling JSON: while decoding JSON" appears with many kinds of YAML misconfiguration, for example "Concourse get bitbucket resource error unmarshaling JSON: while decoding JSON".

In my case, however, checking the file showed that I had configured:

# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
#log-outputs: [stderr]
log-outputs: /var/log/etcd/etcd.log

This is invalid: as the error says, log-outputs is a []string field, so it expects a list rather than a bare string (presumably the list form log-outputs: [/var/log/etcd/etcd.log] would also parse, though I have not verified it). It had to be restored to:

# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]

Note

When etcd is managed by the systemd process manager, its logs can be viewed with journalctl.
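
For example, to follow the etcd log live, or to dump everything logged since the current boot (standard journalctl options):

journalctl -u etcd.service -f
journalctl -u etcd.service -b --no-pager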

Checks

  • After starting etcd, check the service process:

    ps aux | grep etcd
    

You should see output like:

etcd        8556  2.1  0.2 11214264 39296 ?      Ssl  22:02   0:02 /usr/local/bin/etcd --config-file=/etc/etcd/conf.yml
  • Check the logs:

    journalctl -u etcd.service
    

Verifying the etcd cluster

  • For easier maintenance, configure the etcdctl environment variables and add them to your own shell profile:

Environment variables for etcdctl
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS='https://etcd.staging.huatai.me:2379'
export ETCDCTL_CACERT=/etc/etcd/ca.pem
export ETCDCTL_CERT=/etc/etcd/client.pem
export ETCDCTL_KEY=/etc/etcd/client-key.pem

Then the cluster can be checked:

etcdctl: check the cluster member list (member list)
etcdctl member list

The output is similar to:

64e2be2269f59c43, started, z-b-data-3, https://192.168.6.206:2380, https://192.168.6.206:2379, false
73d6903628b74671, started, z-b-data-1, https://192.168.6.204:2380, https://192.168.6.204:2379, false
cbea9b1cda087dbf, started, z-b-data-2, https://192.168.6.205:2380, https://192.168.6.205:2379, false

For easier reading, use table output mode:

etcdctl: check endpoint status (table output)
etcdctl --write-out=table endpoint status

The output shows:

+-------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|              ENDPOINT               |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://etcd.staging.huatai.me:2379 | 73d6903628b74671 |   3.5.4 |   20 kB |      true |      false |         2 |         22 |                 22 |        |
+-------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

Note that the ID shown rotates between runs: each query lands on one of the three actual etcd nodes behind the DNS name. Observing the status of all three nodes at once this way is inconvenient.
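
The rotation is easy to demonstrate: run the same query a few times and compare the member ID (the second field of the default simple output); a sketch:

# each run may land on a different member behind the round-robin DNS name
for i in 1 2 3; do
    etcdctl endpoint status
done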

Check the health status:

etcdctl: check endpoint health (node responsiveness)
etcdctl endpoint health

The output shows:

https://etcd.staging.huatai.me:2379 is healthy: successfully committed proposal: took = 12.150298ms

Adjusting ETCDCTL_ENDPOINTS

Did you notice that the etcdctl endpoint status output above only showed the status for the DNS name, even though etcd.staging.huatai.me actually maps to three server IPs (DNS round-robin load balancing)? How, then, can the status of every node be displayed?

The key is the ETCDCTL_ENDPOINTS environment variable: change https://etcd.staging.huatai.me:2379 to the actual server nodes:

ETCDCTL_ENDPOINTS environment variable
export ETCDCTL_API=3
#export ETCDCTL_ENDPOINTS='https://etcd.staging.huatai.me:2379'
export ETCDCTL_ENDPOINTS=https://192.168.6.204:2379,https://192.168.6.205:2379,https://192.168.6.206:2379
export ETCDCTL_CACERT=/etc/etcd/ca.pem
export ETCDCTL_CERT=/etc/etcd/client.pem
export ETCDCTL_KEY=/etc/etcd/client-key.pem

Now every node in the etcd cluster can be checked:

  • Check node status:

etcdctl: check endpoint status (table output)
etcdctl --write-out=table endpoint status

Detailed status for all three nodes is now shown:

+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.6.204:2379 | 73d6903628b74671 |   3.5.4 |   20 kB |      true |      false |         2 |         57 |                 57 |        |
| https://192.168.6.205:2379 | cbea9b1cda087dbf |   3.5.4 |   20 kB |     false |      false |         2 |         57 |                 57 |        |
| https://192.168.6.206:2379 | 64e2be2269f59c43 |   3.5.4 |   20 kB |     false |      false |         2 |         57 |                 57 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  • Check node health:

etcdctl: check endpoint health (node responsiveness)
etcdctl endpoint health

Health for all three nodes is now shown as well:

+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.6.204:2379 |   true | 10.114539ms |       |
| https://192.168.6.205:2379 |   true | 10.327062ms |       |
| https://192.168.6.206:2379 |   true | 10.631616ms |       |
+----------------------------+--------+-------------+-------+
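
Alternatively, instead of enumerating the endpoints by hand, the etcdctl endpoint commands accept a --cluster flag that expands the endpoint list from the cluster's own member list, which to my understanding yields the same result:

etcdctl --write-out=table endpoint status --cluster
etcdctl endpoint health --cluster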
  • (Important) Since the etcd deployment is now complete, the cluster state configured earlier in /etc/etcd/conf.yml must be changed from new to existing, indicating that the cluster has been built:

    # Initial cluster state ('new' or 'existing').
    initial-cluster-state: 'existing'
    

On subsequent system reboots, etcd then starts as part of the already-established cluster and does not run initialization again.
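
A sketch for applying this change on all three nodes in one pass, reusing the etcd_hosts file from earlier (it assumes conf.yml contains the single-quoted values written by the generator above):

Switch initial-cluster-state to 'existing' on every node
for host in $(cat etcd_hosts);do
    ssh ${host} "sudo sed -i \"s/initial-cluster-state: 'new'/initial-cluster-state: 'existing'/\" /etc/etcd/conf.yml"
done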

References