ceph-src-dir: /home/github/ceph
ceph-build-dir: /home/github/ceph/build-master
# cd <ceph-build-dir>
# export PATH=<ceph-build-dir>/bin:$PATH
Create the first cluster (do not change the cluster names from "cluster1"/"cluster2" yet):
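A hypothetical sequence for the steps above, assuming Ceph's `src/mstart.sh` multi-cluster helper; the daemon counts and options are illustrative, not values from the original:

```shell
# Sketch only: mstart.sh wraps vstart.sh to run multiple named local
# clusters out of a single build directory. Adjust counts to taste.
cd /home/github/ceph/build-master
export PATH=$PWD/bin:$PATH
MON=1 OSD=3 MGR=1 ../src/mstart.sh cluster1 -n   # first local cluster
MON=1 OSD=3 MGR=1 ../src/mstart.sh cluster2 -n   # second local cluster
ceph --cluster cluster1 -s                        # sanity check
```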
# This was tested using 2 ceph clusters set up with Rook
# Ceph version reported was: 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
# There were 2 k8s clusters in use, named "east" and "west"
# tbox.sh is a script to run commands within the Rook toolbox, i.e. a shell to execute commands on the ceph clusters
# tbox.sh looks like so (for reference)
# Begin (commented) tbox.sh
##! /bin/bash
#scriptdir="$(dirname "$(realpath "$0")")"
#ctx=${1}
#shift 1
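The body of tbox.sh is truncated above. A hypothetical completion in the same spirit, where the context handling, toolbox deployment name, and namespace are all assumptions rather than the original script's contents:

```shell
# Hypothetical remainder of tbox.sh (NOT the original script):
# run the supplied command inside the Rook toolbox of the k8s
# cluster selected by the first argument ("east" or "west").
kubectl --context "${ctx}" -n rook-ceph exec -i \
    deploy/rook-ceph-tools -- "$@"
```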
$ ./rbd-image-leak.sh
=== Creating image EAST ===
=== Enabling mirror EAST ===
Mirroring enabled
=== Listing snapshot schedule EAST ===
every 2m starting at 14:00:00-05:00
=== Mirror image status EAST ===
2020-08-19T16:30:55.586+0000 7f43fcf9f700 0 [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(prefix:fs subvolume getpath, sub_name:subvolume_0000000000545095, target:['mon-mgr', ''], vol_name:cephfs) < ""
2020-08-19T16:30:55.586+0000 7f43fcf9f700 0 [volumes DEBUG mgr_util] self.fs_id=46, fs_id=47
2020-08-19T16:30:55.586+0000 7f43fcf9f700 0 [volumes WARNING mgr_util] filesystem id changed for volume 'cephfs', reconnecting...
2020-08-19T16:30:55.586+0000 7f43fcf9f700 0 [volumes DEBUG mgr_util] self.fs_id=46, fs_id=47
2020-08-19T16:30:55.586+0000 7f43fcf9f700 0 [volumes INFO mgr_util] aborting connection from cephfs 'cephfs'
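The log excerpt shows the mgr volumes module noticing that the filesystem ID for volume 'cephfs' changed (46 to 47, typically because the filesystem was deleted and recreated) and aborting its stale connection before reconnecting. A minimal Python sketch of that check; the class and function names are hypothetical, not the actual mgr_util code:

```python
class FSConnection:
    """Tracks a connection to a named CephFS filesystem by its fs_id."""

    def __init__(self, fs_name, fs_id):
        self.fs_name = fs_name
        self.fs_id = fs_id
        self.connected = True

    def abort(self):
        self.connected = False


def ensure_connection(conn, current_fs_id, log):
    """Return a usable connection, replacing it if the fs_id changed."""
    log.append(f"self.fs_id={conn.fs_id}, fs_id={current_fs_id}")
    if conn.fs_id != current_fs_id:
        # The filesystem was recreated: its id changed, so the cached
        # connection points at a filesystem that no longer exists.
        log.append(f"filesystem id changed for volume '{conn.fs_name}',"
                   " reconnecting...")
        conn.abort()
        return FSConnection(conn.fs_name, current_fs_id)
    return conn


log = []
conn = FSConnection("cephfs", 46)
conn = ensure_connection(conn, 47, log)  # id changed: stale conn dropped
```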
This document aims to list, at a high level, CSI [1] requirements and how these are met by existing CephFS features, or are features that need to be developed or improved in CephFS. It also covers features needed in ceph-csi to close the gap between the RBD and CephFS integrations.
NOTE: This is a running document and will be updated as the CSI specification and its implementation in ceph-csi [2] evolve, or as more insight is gleaned into the existing requirements.
This section captures the gaps and potential future requirements (and associated trackers) based on the analysis in the subsequent sections. For non-CSI implementors, reading this section alone should suffice to understand the features and gaps.
This document details the design for adding topology-aware provisioning support to the Ceph CSI drivers.
NOTE: Terminology used here is from the Kubernetes "Volume Topology-aware Scheduling" design.
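To illustrate topology-aware provisioning from the Kubernetes side, a StorageClass can defer binding until a pod is scheduled and restrict provisioning to certain domains via `allowedTopologies`. The sketch below is illustrative only: the StorageClass name, clusterID, and topology key/values are placeholders, not values from this document.

```yaml
# Illustrative StorageClass: WaitForFirstConsumer makes the scheduler
# pick a node first, so the driver can provision in that node's domain.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-topology
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.rbd.csi.ceph.com/zone
        values:
          - zone1
```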
---
apiVersion: v1
kind: PersistentVolume
metadata:
  # Can be anything, but has to be matched at line 47
  # Also should avoid conflicts with existing PV names (PVs are cluster-scoped)
  name: preprov-pv-cephfs-01
spec:
  accessModes:
    - ReadWriteMany
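A statically provisioned PV like the one above is typically bound by a PVC that names it explicitly via `volumeName`. A minimal sketch; the PVC name, namespace, and storage size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprov-pvc-cephfs-01
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # Bind to the pre-provisioned PV by name; the empty storageClassName
  # prevents dynamic provisioning from racing this static binding.
  volumeName: preprov-pv-cephfs-01
  storageClassName: ""
```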
#!/bin/bash
# $1 namespace to query
# $2 csi plugin to query
# $3 container in pod to exec commands in
# $4 port to query
function get_metrics()
{
    for pod in $(oc get pod -l app="$2" -n "$1" -o jsonpath="{.items[*].metadata.name}"); do
        echo "Gathering CSI gRPC metrics for pod ($pod) in namespace ($1) into ${pod}-metrics.log"
        # Assumed scrape step: the original snippet is truncated here
        oc exec -n "$1" -c "$3" "$pod" -- curl -s "http://localhost:$4/metrics" > "${pod}-metrics.log"
    done
}
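A hypothetical invocation of the function above; the namespace, app label, container name, and port are assumptions, not values from the original:

```shell
# Collect gRPC metrics from every csi-rbdplugin pod in the namespace
get_metrics openshift-storage csi-rbdplugin csi-rbdplugin 8080
```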
apiVersion: v1
kind: Service
metadata:
  name: ocs-monkey
  labels:
    app: ocs-monkey
spec:
  ports:
    - port: 8097
      name: dummy