Ceph CSI RBD Plugin

The Ceph CSI RBD plugin provisions a Ceph Block Device (RBD) image and attaches and mounts it to Kubernetes workloads:
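
To illustrate the end goal, once the deployment below is complete a workload requests RBD-backed storage through a PersistentVolumeClaim. The following is only a hypothetical sketch: the StorageClass name csi-rbd-sc is an assumption and is not created until a later step.

A hypothetical PVC backed by the Ceph CSI RBD driver
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # assumed StorageClass provisioned by rbd.csi.ceph.com; created in a later step
  storageClassName: csi-rbd-sc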

Creating a Ceph storage pool

  • By default, Ceph block devices use the rbd pool. You can create a dedicated volume pool for the Kubernetes cluster (make sure the Ceph cluster is up and running before creating the pool; the method is the same as in Libvirt integration with Ceph RBD). Here I created a separate pool for the y-k8s cluster, named y-k8s-pool:

Create a storage pool for Kubernetes
# create a storage pool for the Kubernetes cluster y-k8s
ceph osd pool create y-k8s-pool

# a newly created pool must be initialized before use
rbd pool init y-k8s-pool
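
As an optional sanity check (not part of the original steps), the new pool can be listed and its replication setting inspected with standard Ceph commands, assuming the same pool name y-k8s-pool:

Verify the new pool (optional)
# confirm the pool exists
ceph osd pool ls

# show the pool's replica count
ceph osd pool get y-k8s-pool size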

Note

A pool created from the command line must be initialized before it can be used. In my case I actually did this through the Ceph Dashboard management console, which is more convenient.

Deploying CSI RBD in Kubernetes

The CSI driver for Ceph (GitHub repository) provides template manifests (under the deploy/rbd/kubernetes directory of the source tree) that help deploy it in Kubernetes (see the checkout sketch after this list):

  • csi-config-map.yaml

  • csidriver.yaml

  • csi-nodeplugin-rbac.yaml

  • csi-provisioner-rbac.yaml

  • csi-rbdplugin-provisioner.yaml

  • csi-rbdplugin.yaml
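
These manifests can be obtained by cloning the ceph-csi repository; the sketch below assumes the default branch is acceptable (pin to a release tag matching your Ceph and Kubernetes versions if needed):

Fetch the deployment manifests
git clone https://github.com/ceph/ceph-csi.git
cd ceph-csi/deploy/rbd/kubernetes
ls *.yaml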

These manifests then need to be applied to the Kubernetes cluster, as follows.

Creating the CSIDriver

  • Create the CSIDriver object:

Create the CSIDriver object
kubectl create -f csidriver.yaml
csidriver.yaml
#
# /!\ DO NOT MODIFY THIS FILE
#
# This file has been automatically generated by Ceph-CSI yamlgen.
# The source for the contents can be found in the api/deploy directory, make
# your modifications there.
#
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: "rbd.csi.ceph.com"
spec:
  attachRequired: true
  podInfoOnMount: false
  seLinuxMount: true
  fsGroupPolicy: File

Now check the csidriver object:

Check the csidriver
kubectl get csidriver

The output shows:

Output of checking the csidriver
NAME               ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
rbd.csi.ceph.com   true             false            false             <unset>         false               Persistent   67s

Check the csidriver object spec:

Check the csidriver object spec
kubectl get csidriver -o yaml

The output shows:

Output of checking the csidriver object spec
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
  kind: CSIDriver
  metadata:
    creationTimestamp: "2023-12-07T01:51:00Z"
    name: rbd.csi.ceph.com
    resourceVersion: "62852207"
    uid: 5183faa3-c919-4220-8497-8fabc6a25754
  spec:
    attachRequired: true
    fsGroupPolicy: File
    podInfoOnMount: false
    requiresRepublish: false
    storageCapacity: false
    volumeLifecycleModes:
    - Persistent
kind: List
metadata:
  resourceVersion: ""

Deploying RBACs for sidecar containers and node plugins

Use the manifests to deploy the service accounts, cluster roles, and cluster role bindings. These settings are shared by the RBD and Ceph CSI CephFS plugins, so the permissions must be identical.

  • Run the following commands to deploy the RBACs:

Deploy RBACs for sidecar and node plugins
kubectl create -f csi-provisioner-rbac.yaml
kubectl create -f csi-nodeplugin-rbac.yaml
csi-provisioner-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner
  # replace with non-default namespace name
  namespace: default

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots/status"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update", "patch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: default
  name: rbd-external-provisioner-cfg
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role-cfg
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: rbd-external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io
csi-nodeplugin-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-nodeplugin
  # replace with non-default namespace name
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  # allow to read Vault Token and connection options from the Tenants namespace
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io
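
As an optional check (not part of the original steps), confirm that the RBAC objects were created; the names below come from the manifests above and assume the default namespace:

Verify the RBAC objects (optional)
kubectl get serviceaccount rbd-csi-provisioner rbd-csi-nodeplugin -n default
kubectl get clusterrole rbd-external-provisioner-runner rbd-csi-nodeplugin
kubectl get clusterrolebinding rbd-csi-provisioner-role rbd-csi-nodeplugin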

Deploying the ConfigMap for the CSI plugins

ceph-csi/docs/deploy-rbd.md uses an empty CSI configuration for the volumes mounted into the Ceph CSI plugin pods, and I did not understand why it is done this way. In practice I followed GENERATE CEPH-CSI CONFIGMAP in BLOCK DEVICES AND KUBERNETES to generate this ConfigMap.

ceph-csi needs a ConfigMap object that defines the Ceph monitor addresses, so first collect the Ceph cluster's fsid and monitor addresses:

Run ceph mon dump to get the cluster configuration
ceph mon dump

The output is as follows:

Monitor information for my Ceph cluster
epoch 4
fsid 0e6c8b6f-0d32-4cdb-a45d-85f8c7997c17
last_changed 2022-11-07T23:40:25.922046+0800
created 2021-12-01T16:57:40.856830+0800
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:192.168.6.204:3300/0,v1:192.168.6.204:6789/0] mon.z-b-data-1
1: [v2:192.168.6.205:3300/0,v1:192.168.6.205:6789/0] mon.z-b-data-2
2: [v2:192.168.6.206:3300/0,v1:192.168.6.206:6789/0] mon.z-b-data-3
dumped monmap epoch 4
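
If only the cluster ID is needed, it can also be printed directly with the standard ceph fsid command (an optional shortcut, not part of the original steps):

Print just the cluster fsid (optional)
ceph fsid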

  • Based on the above information, create a csi-config-map.yaml (a sketch follows below)
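
The following is only a sketch of what that ConfigMap could look like, modeled on the GENERATE CEPH-CSI CONFIGMAP example and filled in with the fsid and monitor addresses from the ceph mon dump output above; verify it against the upstream example before applying it:

csi-config-map.yaml (sketch)
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "0e6c8b6f-0d32-4cdb-a45d-85f8c7997c17",
        "monitors": [
          "192.168.6.204:6789",
          "192.168.6.205:6789",
          "192.168.6.206:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config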

References