Installing CephFS with Rook on Kubernetes
Special thanks to the article "Kubernetes 持久化存储之 Rook Ceph 探究" (Kubernetes persistent storage with Rook Ceph).
System Environment
| Hostname | IP | CPU (cores) | Memory (GB) | System Disk (GB) | Data Disk (TB) | Purpose |
| --- | --- | --- | --- | --- | --- | --- |
| master01 | 10.16.5.15 | 26 | 128 | 100 | 2 | Ceph |
| node01 | 10.16.5.16 | 26 | 128 | 100 | 2 | Ceph |
| node02 | 10.16.5.17 | 26 | 128 | 100 | 2 | Ceph |
- OS: Debian 12.7
- Kubernetes: v1.22.12
- Docker: 24.0.9
- Rook: v1.12.11
- Ceph: v17.2.6
Officially Recommended Ways to Install Ceph
cephadm is a tool that can be used to install and manage a Ceph cluster; a minimal bootstrap sketch follows the list below.
- cephadm only supports Octopus and newer releases.
- cephadm is fully integrated with the orchestrator API and fully supports the CLI and dashboard features for managing cluster deployments.
- cephadm requires container support (in the form of Podman or Docker) and Python 3.
- cephadm requires systemd.
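For comparison, here is a minimal cephadm bootstrap sketch (this article deploys Ceph through Rook instead). The download path and the monitor IP 10.16.5.15 are assumptions taken from the environment above; adjust them to your Ceph release and network:

```bash
# fetch the standalone cephadm script for the Quincy (v17) release line
# (path assumed from the upstream ceph repository layout)
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm

# bootstrap a new cluster; the first monitor binds to the given IP
./cephadm bootstrap --mon-ip 10.16.5.15
```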
Rook deploys and manages Ceph clusters running in Kubernetes, and also supports managing storage resources and provisioning through Kubernetes APIs. Rook is the recommended way to run Ceph in Kubernetes, or to connect an existing Ceph storage cluster to Kubernetes.
- Rook only supports Nautilus and newer releases of Ceph.
- Rook is the preferred method for running Ceph on Kubernetes, or for connecting a Kubernetes cluster to an existing (external) Ceph cluster.
- Rook supports the orchestrator API. Management features in the CLI and dashboard are fully supported.
Ceph, Rook, and Kubernetes Versions
Download the Rook release that matches your Kubernetes version (a quick version check follows the table below).
| Rook Version | Kubernetes Versions |
| --- | --- |
| v1.12 | v1.22 to v1.28 |
| v1.13 | v1.22 to v1.28 |
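Before installing, it is worth confirming that the server version reported by kubectl falls inside the range supported by the chosen Rook release; a quick check (the `--short` flag was removed from newer kubectl releases, hence the fallback):

```bash
# the server version must be within v1.22–v1.28 for Rook v1.12
kubectl version --short 2>/dev/null || kubectl version
```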
CPU Architecture
The supported architectures are amd64 / x86_64 and arm64.
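A quick way to confirm that a node's architecture is one of the supported values:

```bash
# prints x86_64 on amd64 hosts and aarch64 on arm64 hosts
uname -m
```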
Ceph Prerequisites
To configure the Ceph storage cluster, at least one of the following types of local storage is required (a quick check follows the list):
- Raw devices (no partitions or formatted filesystem)
- Raw partitions (no formatted filesystem)
- LVM logical volumes (no formatted filesystem)
- Persistent Volumes available from a storage class in block mode
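A quick check that a candidate device or partition carries no filesystem signature before handing it to Rook:

```bash
# run on each storage node; devices/partitions intended for Ceph must show an empty FSTYPE column
lsblk -f
```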
Deploy the Rook Operator
wget https://github.com/rook/rook/archive/refs/tags/v1.12.11.tar.gz
tar xvf v1.12.11.tar.gz
vi operator.yaml
Replace the images with the Huawei Cloud mirror registry swr.cn-north-4.myhuaweicloud.com/ddn-k8s (a verification grep follows the list):
ROOK_CSI_CEPH_IMAGE: "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/quay.io/cephcsi/cephcsi:v3.9.0"
ROOK_CSI_REGISTRAR_IMAGE: "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0"
ROOK_CSI_RESIZER_IMAGE: "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/csi-resizer:v1.8.0"
ROOK_CSI_PROVISIONER_IMAGE: "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/csi-provisioner:v3.5.0"
ROOK_CSI_SNAPSHOTTER_IMAGE: "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2"
ROOK_CSI_ATTACHER_IMAGE: "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/csi-attacher:v4.3.0"
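After editing, a quick grep can confirm that every CSI image entry now points at the mirror registry (the pattern below is only a sketch):

```bash
# every enabled ROOK_CSI_*_IMAGE line should reference swr.cn-north-4.myhuaweicloud.com
grep -n "ROOK_CSI_.*_IMAGE" operator.yaml
```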
cd deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# verify the rook-ceph-operator is in the `Running` state before proceeding
kubectl -n rook-ceph get pod
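Rather than polling the pod list manually, `kubectl wait` can block until the operator pod reports Ready; the five-minute timeout here is an arbitrary choice:

```bash
# returns once the operator pod is Ready, or fails after the timeout
kubectl -n rook-ceph wait --for=condition=Ready pod \
  -l app=rook-ceph-operator --timeout=300s
```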
Create the OSD Disks
When using a whole disk, referencing it by its UUID is recommended; when using a partition, reference it by its PARTUUID.
root@master01:~# blkid
/dev/sdb1: UUID="a7c31033-39cc-41d1-9f7e-e35b4cb6b8b0" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="008d32cc-01"
/dev/sda5: UUID="d0fcb26c-555d-4373-9410-c62f73c0a2f9" TYPE="swap" PARTUUID="68101887-05"
/dev/sda1: UUID="148ca2e5-6627-46ec-a953-5fe1f24ef056" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="68101887-01"
/dev/sdb2: TYPE="ceph_bluestore" PARTUUID="008d32cc-02"
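The same PARTUUIDs can be cross-checked against the udev symlinks that cluster.yaml will reference:

```bash
# each symlink maps a PARTUUID to its underlying partition, e.g. 008d32cc-02 -> ../../sdb2
ls -l /dev/disk/by-partuuid/
```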
vi cluster.yaml
storage: # cluster level storage configuration and selection
  useAllNodes: false
  useAllDevices: false
  #deviceFilter:
  config:
    storeType: bluestore
  nodes:
    - name: "master01"
      devices:
        - name: "/dev/disk/by-partuuid/008d32cc-02" # devices can be specified using full udev paths
          config:
            osdsPerDevice: "1"
            osd-partuuid: "008d32cc-02" # use the PARTUUID
            databaseSizeMB: "1024"
            journalSizeMB: "1024"
    - name: "node01"
      devices:
        - name: "/dev/disk/by-partuuid/aaa40f90-02" # devices can be specified using full udev paths
          config:
            osdsPerDevice: "1"
            osd-partuuid: "aaa40f90-02" # use the PARTUUID
            databaseSizeMB: "1024"
            journalSizeMB: "1024"
    - name: "node02"
      devices:
        - name: "/dev/disk/by-partuuid/f794c6db-02" # devices can be specified using full udev paths
          config:
            osdsPerDevice: "1"
            osd-partuuid: "f794c6db-02" # use the PARTUUID
            databaseSizeMB: "1024"
            journalSizeMB: "1024"
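Before applying cluster.yaml, it is worth confirming on each node that its by-partuuid path resolves to the intended partition; a sketch using the PARTUUIDs above:

```bash
# run on the node that owns the PARTUUID; on master01 this should print /dev/sdb2
readlink -f /dev/disk/by-partuuid/008d32cc-02
```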
Create the Ceph Cluster
kubectl create -f cluster.yaml
# verify that the cluster is running by checking the pods in the namespace
kubectl -n rook-ceph get pod
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-provisioner-d77bb49c6-n5tgs 5/5 Running 0 140s
csi-cephfsplugin-provisioner-d77bb49c6-v9rvn 5/5 Running 0 140s
csi-cephfsplugin-rthrp 3/3 Running 0 140s
csi-rbdplugin-hbsm7 3/3 Running 0 140s
csi-rbdplugin-provisioner-5b5cd64fd-nvk6c 6/6 Running 0 140s
csi-rbdplugin-provisioner-5b5cd64fd-q7bxl 6/6 Running 0 140s
rook-ceph-crashcollector-minikube-5b57b7c5d4-hfldl 1/1 Running 0 105s
rook-ceph-mgr-a-64cd7cdf54-j8b5p 2/2 Running 0 77s
rook-ceph-mgr-b-657d54fc89-2xxw7 2/2 Running 0 56s
rook-ceph-mon-a-694bb7987d-fp9w7 1/1 Running 0 105s
rook-ceph-mon-b-856fdd5cb9-5h2qk 1/1 Running 0 94s
rook-ceph-mon-c-57545897fc-j576h 1/1 Running 0 85s
rook-ceph-operator-85f5b946bd-s8grz 1/1 Running 0 92m
rook-ceph-osd-0-6bb747b6c5-lnvb6 1/1 Running 0 23s
rook-ceph-osd-1-7f67f9646d-44p7v 1/1 Running 0 24s
rook-ceph-osd-2-6cd4b776ff-v4d68 1/1 Running 0 25s
rook-ceph-osd-prepare-node1-vx2rz 0/2 Completed 0 60s
rook-ceph-osd-prepare-node2-ab3fd 0/2 Completed 0 60s
rook-ceph-osd-prepare-node3-w4xyz 0/2 Completed 0 60s
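Beyond the pod list, the CephCluster custom resource itself reports a phase and health summary once the operator has finished reconciling:

```bash
# PHASE should reach Ready and HEALTH should report HEALTH_OK
kubectl -n rook-ceph get cephcluster
```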
Create the Toolbox and Check the Ceph Status
kubectl apply -f toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
bash-5.1$ ceph -s
  cluster:
    id:     1f052a24-a13b-4200-8da4-3c3e971f4d4f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 7h)
    mgr: a(active, since 2h), standbys: b
    osd: 3 osds: 3 up (since 7h), 3 in (since 7h)

  data:
    volumes: 1/1 healthy
    pools:   1 pools, 1 pgs
    objects: 5.11k objects, 842 MiB
    usage:   2.6 GiB used, 1.8 TiB / 1.8 TiB avail
    pgs:     49 active+clean
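From the same toolbox shell, two more commands give a per-OSD and capacity view of the cluster:

```bash
# one OSD per node, all up and in
ceph osd tree

# raw capacity and per-pool usage
ceph df
```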
Using CephFS Storage
Create a CephFilesystem
vi deploy/examples/filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: cephfs-hdd
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
kubectl create -f filesystem.yaml
root@master01:~# kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME READY STATUS RESTARTS AGE
rook-ceph-mds-cephfs-hdd-a-84988b6f7f-4ftnq 2/2 Running 0 7h10m
rook-ceph-mds-cephfs-hdd-b-8cccdfc55-hj6tx 2/2 Running 0 7h9m
bash-4.4$ ceph -s
  cluster:
    id:     1f052a24-a13b-4200-8da4-3c3e971f4d4f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 7h)
    mgr: a(active, since 2h), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 7h), 3 in (since 7h)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 49 pgs
    objects: 5.11k objects, 856 MiB
    usage:   2.7 GiB used, 1.8 TiB / 1.8 TiB avail
    pgs:     49 active+clean
# list the pools
bash-4.4$ ceph osd pool ls
.mgr
cephfs-hdd-metadata
cephfs-hdd-replicated
# list the filesystems
bash-4.4$ ceph fs ls
name: cephfs-hdd, metadata pool: cephfs-hdd-metadata, data pools: [cephfs-hdd-replicated ]
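`ceph fs status` shows the active and standby-replay MDS daemons plus the pools backing the new filesystem:

```bash
ceph fs status cephfs-hdd
```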
Configure the StorageClass
vi deploy/examples/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-ssd
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph

  # CephFS filesystem name into which the volume shall be created
  fsName: cephfs-hdd

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: cephfs-hdd-replicated

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
# enable online volume expansion
allowVolumeExpansion: true
kubectl create -f storageclass.yaml
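To verify dynamic provisioning end to end, a throwaway PVC bound to the new StorageClass can be created and then removed; the name cephfs-pvc-test and the 1Gi size are arbitrary:

```bash
# create a test PVC against the ceph-ssd StorageClass
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-ssd
EOF

# STATUS should turn Bound within a few seconds
kubectl get pvc cephfs-pvc-test

# clean up the test claim (reclaimPolicy: Delete removes the backing volume)
kubectl delete pvc cephfs-pvc-test
```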
Troubleshooting
When reinstalling, a disk that was previously used must have its partition signatures wiped before it can be added to the cluster again.
wipefs -a /dev/sdb1
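Wiping the filesystem signature alone is often not enough after a failed install: the Rook state directory on each host and any leftover bluestore metadata on the partition also need to be cleared. A sketch, assuming the default dataDirHostPath /var/lib/rook and the Ceph partition /dev/sdb2 used above (both steps are destructive):

```bash
# remove the operator/cluster state kept on each host
rm -rf /var/lib/rook

# clear filesystem/bluestore signatures and the first part of the Ceph partition
wipefs -a /dev/sdb2
dd if=/dev/zero of=/dev/sdb2 bs=1M count=100 oflag=direct,dsync

# if the whole disk is dedicated to Ceph, the partition table can be zapped instead
# sgdisk --zap-all /dev/sdb
```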