Deploying a TLS-secured etcd cluster

In production, a distributed, highly available etcd cluster should run with TLS authentication enabled for stronger security. After completing the basic etcd cluster deployment, I use this cluster as the highly available external etcd for my K3s (lightweight Kubernetes) deployment.

Environment

The Raspberry Pi stack environment deploys a 3-node etcd (distributed key-value store) cluster on three Raspberry Pi 3 boards:

Raspberry Pi k3s control-plane servers:

  Host IP         Hostname
  192.168.7.11    x-k3s-m-1
  192.168.7.12    x-k3s-m-2
  192.168.7.13    x-k3s-m-3

TLS certificates

The TLS certificates are generated with the cfssl toolchain; see the etcd cluster TLS setup document for the complete steps. This produces:

  • CA:

    ca-key.pem
    ca.csr
    ca.pem
    
  • Server certificate:

    server-key.pem
    server.csr
    server.pem
    
  • Peer certificates:

    x-k3s-m-1-key.pem
    x-k3s-m-1.csr
    x-k3s-m-1.pem
    
    x-k3s-m-2-key.pem
    x-k3s-m-2.csr
    x-k3s-m-2.pem
    
    x-k3s-m-3-key.pem
    x-k3s-m-3.csr
    x-k3s-m-3.pem
    
  • Client certificate:

    client-key.pem
    client.csr
    client.pem
    

Certificate distribution

Certificate distribution script deploy_etcd_certificates.sh
cat << EOF > etcd_hosts
x-k3s-m-1
x-k3s-m-2
x-k3s-m-3
EOF

cat << EOF > prepare_etcd.sh
rm -rf /tmp/etcd_tls
mkdir -p /tmp/etcd_tls

sudo mkdir -p /etc/etcd
EOF

for host in $(cat etcd_hosts); do
    scp prepare_etcd.sh $host:/tmp/
    ssh $host 'sh /tmp/prepare_etcd.sh'
done

for host in $(cat etcd_hosts); do
    scp ${host}.pem ${host}-key.pem \
        ca.pem server.pem server-key.pem \
        client.csr client.pem client-key.pem ${host}:/tmp/etcd_tls/
    ssh $host 'sudo cp /tmp/etcd_tls/* /etc/etcd/;sudo chown etcd:etcd /etc/etcd/*'
done

Run the script:

sh deploy_etcd_certificates.sh

Note that the final chown assumes an etcd user and group already exist on each host (they are created later, in deploy_etcd_service.sh); either create them first or re-run the chown after that step.

Each etcd host then holds its own set of files under /etc/etcd (the listing below is from x-k3s-m-1):

Certificate files on host x-k3s-m-1
ca.pem
server-key.pem
server.pem
x-k3s-m-1-key.pem
x-k3s-m-1.pem
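
Distribution scripts like the one above are easier to trust if they can be previewed first. A hedged sketch: swap the scp/ssh commands for echo so the loop only prints what would run, using the same host-list-file pattern (nothing is copied; the file names mirror the script above):

```shell
# Dry-run sketch: print the distribution commands instead of executing them.
# Uses the same host-list-file pattern as deploy_etcd_certificates.sh.
hosts_file=$(mktemp)
cat << EOF > "$hosts_file"
x-k3s-m-1
x-k3s-m-2
x-k3s-m-3
EOF

cmds=$(while read -r host; do
    echo "scp ${host}.pem ${host}-key.pem ca.pem ${host}:/tmp/etcd_tls/"
done < "$hosts_file")
echo "$cmds"
rm -f "$hosts_file"
```

Once the printed commands look right, drop the echo and restore the real scp/ssh calls.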

Configuring the service start script

etcd startup script for the systemd process manager

… to be completed

etcd startup script for OpenRC

Note

Following the configuration approach of the etcd and etcd-openrc packages in the Alpine Linux repository, my OpenRC setup configures etcd through the /etc/etcd/conf.yml configuration file.

Note the etcd configuration rule: if a configuration file is provided, command-line flags and environment variables are ignored!

Alternatively, etcd can be configured with command-line flags (see the systemd startup script above) or with environment variables.

Most configurations found online use command-line flags, and some use environment variables. For a fixed configuration, though, especially one that does not depend on a launcher such as systemd, I find a configuration file the most portable choice.

The Alpine Linux edge/testing repository provides etcd-openrc, which contains the OpenRC start script and its configuration:

  • /etc/init.d/etcd

  • /etc/conf.d/etcd

In addition, the etcd package in edge/testing ships the etcd configuration file /etc/etcd/conf.yml.

So, on the test host x-dev, install both packages from edge/testing with apk:

sudo apk add etcd-openrc etcd --update-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing/ --allow-untrusted

You can then customize on this base:

  • Prepare the configuration file conf.yml (revised from the /etc/etcd/conf.yml shipped by the edge/testing etcd package, with placeholders added so that a later script can patch it per node):

etcd configuration file /etc/etcd/conf.yml
# This is the configuration file for the etcd server.

# Human-readable name for this member.
name: 'NODENAME'

# Path to the data directory.
data-dir: /var/lib/etcd

# Path to the dedicated wal directory.
wal-dir:

# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000

# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100

# Time (in milliseconds) for an election to timeout.
election-timeout: 1000

# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 0

# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: https://NODEIP:2380

# List of comma separated URLs to listen on for client traffic.
listen-client-urls: https://NODEIP:2379,https://127.0.0.1:2379

# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5

# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5

# Comma-separated white list of origins for CORS (cross-origin resource sharing).
cors:

# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: https://NODENAME.DOMAIN:2380

# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: https://NODENAME.DOMAIN:2379

# Discovery URL used to bootstrap the cluster.
discovery:

# Valid values include 'exit', 'proxy'
discovery-fallback: 'proxy'

# HTTP proxy to use for traffic to discovery service.
discovery-proxy:

# DNS domain used to bootstrap initial cluster.
discovery-srv:

# Initial cluster configuration for bootstrapping.
initial-cluster: NODE1=https://NODE1.DOMAIN:2380,NODE2=https://NODE2.DOMAIN:2380,NODE3=https://NODE3.DOMAIN:2380

# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'INIT-TOKEN'

# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'

# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false

# Accept etcd V2 client requests
enable-v2: true

# Enable runtime profiling data via HTTP server
enable-pprof: true

# Valid values include 'on', 'readonly', 'off'
proxy: 'off'

# Time (in milliseconds) an endpoint will be held in a failed state.
proxy-failure-wait: 5000

# Time (in milliseconds) of the endpoints refresh interval.
proxy-refresh-interval: 30000

# Time (in milliseconds) for a dial to timeout.
proxy-dial-timeout: 1000

# Time (in milliseconds) for a write to timeout.
proxy-write-timeout: 5000

# Time (in milliseconds) for a read to timeout.
proxy-read-timeout: 0

client-transport-security:
  # Path to the client server TLS cert file.
  cert-file: /etc/etcd/server.pem

  # Path to the client server TLS key file.
  key-file: /etc/etcd/server-key.pem

  # Enable client cert authentication.
  client-cert-auth: true

  # Path to the client server TLS trusted CA cert file.
  trusted-ca-file: /etc/etcd/ca.pem

  # Client TLS using generated certificates
  auto-tls: true

peer-transport-security:
  # Path to the peer server TLS cert file.
  cert-file: /etc/etcd/NODENAME.pem

  # Path to the peer server TLS key file.
  key-file: /etc/etcd/NODENAME-key.pem

  # Enable peer client cert authentication.
  client-cert-auth: true

  # Path to the peer server TLS trusted CA cert file.
  trusted-ca-file: /etc/etcd/ca.pem

  # Peer TLS using generated certificates.
  auto-tls: true

# Enable debug-level logging for etcd.
debug: false

logger: zap

# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]

# Force to create a new one member cluster.
force-new-cluster: false

auto-compaction-mode: periodic
auto-compaction-retention: "1"
  • Prepare a script, config_etcd.sh, that patches the etcd configuration on each control-plane server:

Script to patch the etcd configuration, config_etcd.sh
NODENAME=$(hostname -s)
NODEIP=$(ip addr show eth0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
NODE1="x-k3s-m-1"
NODE2="x-k3s-m-2"
NODE3="x-k3s-m-3"
DOMAIN="edge.huatai.me"
INITTOKEN="x-k3s"

cd /tmp/etcd_config
sed -i "s/NODENAME/$NODENAME/g" conf.yml
sed -i "s/NODEIP/$NODEIP/g" conf.yml
sed -i "s/INIT-TOKEN/$INITTOKEN/g" conf.yml
sed -i "s/NODE1/$NODE1/g" conf.yml
sed -i "s/NODE2/$NODE2/g" conf.yml
sed -i "s/NODE3/$NODE3/g" conf.yml
sed -i "s/DOMAIN/$DOMAIN/g" conf.yml

Note

In conf.yml, the URLs that etcd binds to must use the host's IP address; a domain name is not allowed:

# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: https://192.168.7.11:2380

# List of comma separated URLs to listen on for client traffic.
listen-client-urls: https://192.168.7.11:2379,https://127.0.0.1:2379

With a domain name such as x-k3s-m-1.edge.huatai.me (even one that resolves correctly), etcd fails to start with:

{"level":"warn","ts":1649611005.237307,"caller":"etcdmain/etcd.go:74","msg":"failed to verify flags","error":"expected IP in URL for binding (http://x-k3s-m-1.edge.huatai.me:2380)"}
  • Run the following deploy_etcd_config.sh:

Configuration deployment script deploy_etcd_config.sh
cat << EOF > etcd_hosts
x-k3s-m-1
x-k3s-m-2
x-k3s-m-3
EOF

cat << EOF > prepare_etcd_config.sh
rm -rf /tmp/etcd_config
mkdir -p /tmp/etcd_config

sudo mkdir -p /etc/etcd
EOF

for host in $(cat etcd_hosts); do
    scp prepare_etcd_config.sh $host:/tmp/
    ssh $host 'sh /tmp/prepare_etcd_config.sh'
done

for host in $(cat etcd_hosts); do
    scp config_etcd.sh $host:/tmp/etcd_config/
    scp conf.yml $host:/tmp/etcd_config/
    ssh $host 'sh /tmp/etcd_config/config_etcd.sh'
    ssh $host 'sudo cp /tmp/etcd_config/conf.yml /etc/etcd/'
done

Run it:

sh deploy_etcd_config.sh

Then verify on each control-plane server that the placeholders in /etc/etcd/conf.yml have been replaced correctly. When everything worked, each placeholder is replaced with the corresponding host's IP address or domain name, for example:

# Human-readable name for this member.
name: 'x-k3s-m-1'
...
# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: https://192.168.7.11:2380

# List of comma separated URLs to listen on for client traffic.
listen-client-urls: https://192.168.7.11:2379,https://127.0.0.1:2379
...
# Initial cluster configuration for bootstrapping.
initial-cluster: x-k3s-m-1=https://x-k3s-m-1.edge.huatai.me:2380,x-k3s-m-2=https://x-k3s-m-2.edge.huatai.me:2380,x-k3s-m-3=https://x-k3s-m-3.edge.huatai.me:2380
...
  • Prepare the configuration files conf.d-etcd and init.d-etcd (extracted from the Alpine Linux etcd-openrc package):

OpenRC etcd configuration file /etc/conf.d/etcd
LOGPATH=/var/log/${RC_SVCNAME}
ETCD_CONFIG_FILE=/etc/etcd/conf.yml
OpenRC etcd service script /etc/init.d/etcd
#!/sbin/openrc-run
# Copyright 2016 Alpine Linux
# Distributed under the terms of the GNU General Public License v2
# $Id$
supervisor=supervise-daemon

name="$RC_SVCNAME"
description="Highly-available key-value store"

ETCD_DATA_DIR=$(sed -nr 's/^data-dir:\s*(\/.*)/\1/p' $ETCD_CONFIG_FILE)

command=/usr/bin/etcd
command_args="--config-file=${ETCD_CONFIG_FILE}"
: ${output_log:=$LOGPATH/$RC_SVCNAME.log}
: ${error_log:=$LOGPATH/$RC_SVCNAME.log}

command_user="etcd:etcd"

supervise_daemon_args="--chdir $ETCD_DATA_DIR"

depend() {
        need net
}

start_pre() {
        checkpath -d -m 0775 -o "$command_user" "$LOGPATH"
        checkpath -d -m 0700 -o "$command_user" "$ETCD_DATA_DIR"
}
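
The init script derives ETCD_DATA_DIR by grepping conf.yml with sed. That extraction can be exercised in isolation against a sample file to confirm it copes with the configuration format:

```shell
# Exercise the data-dir extraction from /etc/init.d/etcd against a sample file
cfg=$(mktemp)
cat << 'EOF' > "$cfg"
name: 'x-k3s-m-1'
data-dir: /var/lib/etcd
wal-dir:
EOF

# Same sed expression as the init script
ETCD_DATA_DIR=$(sed -nr 's/^data-dir:\s*(\/.*)/\1/p' "$cfg")
echo "$ETCD_DATA_DIR"
rm -f "$cfg"
```

This prints /var/lib/etcd; the empty wal-dir line and other keys are ignored because only lines starting with data-dir: followed by an absolute path match.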
  • Then run the following deploy_etcd_service.sh:

Distribution script for the OpenRC etcd service, deploy_etcd_service.sh
cat << EOF > etcd_hosts
x-k3s-m-1
x-k3s-m-2
x-k3s-m-3
EOF

cat << EOF > prepare_etcd_service.sh
rm -rf /tmp/etcd_service
mkdir -p /tmp/etcd_service
EOF

for host in $(cat etcd_hosts); do
    scp prepare_etcd_service.sh $host:/tmp/
    ssh $host 'sh /tmp/prepare_etcd_service.sh'
done

for host in $(cat etcd_hosts); do
    scp conf.d-etcd $host:/tmp/etcd_service/
    scp init.d-etcd $host:/tmp/etcd_service/
    ssh $host 'sudo cp /tmp/etcd_service/conf.d-etcd /etc/conf.d/etcd'
    ssh $host 'sudo cp /tmp/etcd_service/init.d-etcd /etc/init.d/etcd'
    ssh $host 'sudo addgroup -g 1001 etcd && sudo adduser -u 1001 -G etcd -h /dev/null -s /sbin/nologin -D etcd'
done

Run it:

sh deploy_etcd_service.sh
  • Start the service:

    sudo service etcd start
    

Checks

  • After starting etcd, check the service process:

    ps aux | grep etcd
    

You should see something like:

5665 root      0:00 supervise-daemon etcd --start --stdout /var/log/etcd/etcd.log --stderr /var/log/etcd/etcd.log --user etcd etcd --chdir /var/lib/etcd /usr/bin/etcd -- --config-file=/etc/etcd/conf.yml
5666 etcd      0:01 /usr/bin/etcd --config-file=/etc/etcd/conf.yml
  • Check the logs:

    tail -f /var/log/etcd/etcd.log
    

Troubleshooting certificate errors

After starting the etcd service, I found the following errors in etcd.log:

WARNING: 2022/04/11 23:59:00 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2022-04-11T23:59:13.567+0800","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.7.11:53050","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2022/04/11 23:59:13 [core] grpc: addrConn.createTransport failed to connect to {192.168.7.11:2379 192.168.7.11:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2022-04-11T23:59:15.891+0800","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:50242","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}

Similar errors appeared on every server.

A solution is outlined in the article "ETCD出现:certificate specifies an incompatible key usage 解决方案". The root cause, explained in "3.2/3.3 etcd server with TLS would start with error "tls: bad certificate" #9398", is that the ca-config.json used during the etcd cluster TLS setup contained:

...
         "server": {
             "expiry": "87600h",
             "usages": [
                 "signing",
                 "key encipherment",
                 "server auth"
             ]
         },
...

without "client auth". Early etcd versions accepted this, but since version 3.2 "client auth" must be included, i.e.:

...
         "server": {
             "expiry": "87600h",
             "usages": [
                 "signing",
                 "key encipherment",
                 "server auth",
                 "client auth"
             ]
         },
...

Then regenerate the certificates.
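
After regenerating, the fix can be confirmed before restarting etcd by inspecting the certificate's extended key usage. The sketch below creates a throwaway self-signed certificate carrying both usages just to show the inspection command (the `-addext` flag needs OpenSSL 1.1.1 or newer); on a real node, point the final `openssl x509` command at /etc/etcd/server.pem and check that both "TLS Web Server Authentication" and "TLS Web Client Authentication" appear.

```shell
# Illustration: a throwaway cert with both EKUs, then the inspection command.
# On a real node, inspect /etc/etcd/server.pem instead.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=etcd-demo" \
    -addext "extendedKeyUsage=serverAuth,clientAuth" \
    -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" 2>/dev/null

# Both usages must be listed, or etcd >= 3.2 rejects the TLS handshake
eku=$(openssl x509 -in "$tmpdir/cert.pem" -noout -text | grep -A1 "Extended Key Usage")
echo "$eku"
rm -rf "$tmpdir"
```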

Verifying the etcd cluster

With the etcd cluster now running, check that it works correctly:

curl --cacert ca.pem --cert client.pem --key client-key.pem https://etcd.edge.huatai.me:2379/health

The response should be:

{"health":"true","reason":""}

Note that the domain accessed here is etcd.edge.huatai.me: when server.json was prepared during the etcd cluster TLS setup, the certificate's access names were set to etcd.edge.huatai.me (plus the node IPs).

server.json: the hosts section sets the server access domain to etcd.edge.huatai.me
{
    "CN": "edge k3s etcd",
    "hosts": [
        "etcd.edge.huatai.me",
        "192.168.7.11",
        "192.168.7.12",
        "192.168.7.13",
        "127.0.0.1"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "CN",
            "L": "Shanghai",
            "ST": "cloud-atlas"
        }
    ]
}

So a per-node real-server domain such as x-k3s-m-1.edge.huatai.me cannot be used here, while the IP address 192.168.7.11 can, because the IPs are also included in the certificate.

The server.json above makes clever use of a single domain, etcd.edge.huatai.me, that can resolve to multiple real servers: in production, this domain can round-robin across the three server IPs, or resolve to the VIP of a load balancer that distributes traffic to the three real servers.

  • For easier maintenance, configure the etcdctl environment variables in your own profile:

Environment variables for etcdctl
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS='https://etcd.edge.huatai.me:2379'
export ETCDCTL_CACERT=/etc/etcd/ca.pem
export ETCDCTL_CERT=/etc/etcd/client.pem
export ETCDCTL_KEY=/etc/etcd/client-key.pem

Then inspect the cluster:

etcdctl: list cluster members (member list)
etcdctl member list

The output looks like:

9bfd4ef1e72d26, started, x-k3s-m-3, https://x-k3s-m-3.edge.huatai.me:2380, https://x-k3s-m-3.edge.huatai.me:2379, false
7e8d94ba496c072d, started, x-k3s-m-1, https://x-k3s-m-1.edge.huatai.me:2380, https://x-k3s-m-1.edge.huatai.me:2379, false
a01cb65343e64610, started, x-k3s-m-2, https://x-k3s-m-2.edge.huatai.me:2380, https://x-k3s-m-2.edge.huatai.me:2379, false
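
The plain member list output is comma-separated, so it is easy to consume in scripts. For example, the member names can be extracted with awk (here fed the sample output above via a here-document; in practice pipe `etcdctl member list` straight into awk):

```shell
# Extract the member-name column (3rd field) from `etcdctl member list` output.
# The sample lines below are the output shown above.
names=$(awk -F', ' '{print $3}' << 'EOF'
9bfd4ef1e72d26, started, x-k3s-m-3, https://x-k3s-m-3.edge.huatai.me:2380, https://x-k3s-m-3.edge.huatai.me:2379, false
7e8d94ba496c072d, started, x-k3s-m-1, https://x-k3s-m-1.edge.huatai.me:2380, https://x-k3s-m-1.edge.huatai.me:2379, false
a01cb65343e64610, started, x-k3s-m-2, https://x-k3s-m-2.edge.huatai.me:2380, https://x-k3s-m-2.edge.huatai.me:2379, false
EOF
)
echo "$names"
```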

For readability, use the table output mode:

etcdctl: check endpoint status (table output)
etcdctl --write-out=table endpoint status

The output shows:

+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.7.11:2379 | 7e8d94ba496c072d |   3.5.2 |   20 kB |     false |      false |         7 |        237 |                237 |        |
| https://192.168.7.12:2379 | a01cb65343e64610 |   3.5.2 |   20 kB |     false |      false |         7 |        237 |                237 |        |
| https://192.168.7.13:2379 |   9bfd4ef1e72d26 |   3.5.2 |   20 kB |      true |      false |         7 |        237 |                237 |        |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

Check health:

etcdctl: check endpoint health (node responsiveness)
etcdctl endpoint health

The output shows:

https://192.168.7.13:2379 is healthy: successfully committed proposal: took = 67.98523ms
https://192.168.7.12:2379 is healthy: successfully committed proposal: took = 64.634362ms
https://192.168.7.11:2379 is healthy: successfully committed proposal: took = 67.330493ms

References