Ceph File System (CephFS)


Creation

CephFilesystem

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
spec:
  metadataPool:
  dataPools:
  metadataServer:
    activeCount: 1
    activeStandby: true
  • metadataPool
    • failureDomain: <crushType|osd>
      • Defaults to host.
    • replicated
      • size: <size>
  • dataPools: []
    • name: data0
    • failureDomain: <crushType|osd>
      • Defaults to host.
    • replicated
      • size: <size>
  • metadataServer
    • activeCount: <count>
      • The number of MDS daemons in the active state.
      • Each active MDS is paired with a standby MDS, so the total number of MDS daemons is activeCount * 2.
    • activeStandby: <bool>
      • Whether to run the standby MDS daemons as standby_replay MDS.
      • A standby_replay MDS keeps a warm cache, which provides faster failover.
    • resources
    • placement
      • nodeAffinity
      • podAffinity
      • podAntiAffinity
      • tolerations
      • topologySpreadConstraints
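
Putting these fields together, a minimal sketch of a full manifest follows. The name myfs, the replica sizes, and the resources/tolerations values are illustrative assumptions, not required values:

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs            # hypothetical name
  namespace: rook-ceph  # namespace of the CephCluster
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPools:
    - name: data0
      failureDomain: host
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
    resources:
      requests:
        cpu: "1"
        memory: 4Gi
    placement:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists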

CephFilesystemSubVolumeGroup

apiVersion: ceph.rook.io/v1
kind: CephFilesystemSubVolumeGroup
spec:
  name: csi # If not set, .metadata.name is used.
  filesystemName: <cephFilesystemName>
  pinning:
    distributed: 1
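
To check that the operator has reconciled the group (assuming it lives in the rook-ceph namespace):

kubectl -n rook-ceph get cephfilesystemsubvolumegroups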

StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storageClassName>
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
# If CSI_DRIVER_NAME_PREFIX is not set on the operator, the operator's namespace is used as the prefix.
provisioner: <operatorNamespace>.cephfs.csi.ceph.com
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  clusterID: <cephClusterNamespace>
  fsName: <cephFilesystemName>
  pool: <cephFilesystemName>-<cephFilesystem.spec.dataPools.name>
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: <cephClusterNamespace>
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: <cephClusterNamespace>
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: <cephClusterNamespace>
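
A PVC that consumes this class might look like the following sketch; the name cephfs-pvc and the 10Gi size are illustrative assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany       # CephFS supports shared read-write access
  resources:
    requests:
      storage: 10Gi
  storageClassName: <storageClassName>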

Deletion

If a CephFilesystemSubVolumeGroup gets stuck in a deleting state, annotate it so the operator forces the deletion:

kubectl annotate cephfilesystemsubvolumegroups <name> \
  rook.io/force-deletion="true" \
  -n rook-ceph
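
After annotating, the usual delete should be able to complete (a sketch, assuming the resource lives in rook-ceph):

kubectl delete cephfilesystemsubvolumegroups <name> -n rook-ceph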