- RHEL 7
- OCP 3.9+
- A deployed router and working DNS resolution with a wildcard domain
- Gluster Storage deployed either natively or standalone
The first step is to generate secure credentials under which you will access the buckets. You can export them as environment variables. For example:
export S3_ACCOUNT=ocpgluster
export S3_USER=User1
export S3_PASS=Password1
The variables map to more commonly known credentials like this:
AWS_ACCESS_KEY_ID="${S3_ACCOUNT}:${S3_USER}"
AWS_SECRET_ACCESS_KEY="${S3_PASS}"
Apart from regular glusterfs and heketi settings, the following variables can be set in your inventory file:
openshift_storage_glusterfs_s3_deploy=True
openshift_storage_glusterfs_s3_account="{{ lookup('env','S3_ACCOUNT') }}"
openshift_storage_glusterfs_s3_user="{{ lookup('env','S3_USER') }}"
openshift_storage_glusterfs_s3_password="{{ lookup('env','S3_PASS') }}"
openshift_storage_glusterfs_s3_pvc_size="20Gi"
See the full parameter list for descriptions.
Now deploy Gluster using the playbook.
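For example, a sketch of the invocation (the inventory path is a placeholder and the playbook location varies by openshift-ansible version, so verify both against your installation):

    # Hypothetical invocation; adjust the inventory path and the playbook
    # location to match your openshift-ansible installation.
    ansible-playbook -i /path/to/inventory \
        /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml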
This is useful if you already have Gluster deployed in the OCP cluster. Please follow the guide Gluster S3 object storage as a native app on OpenShift.
As a prerequisite, you must determine the route to your S3 service. The following command assumes you have just one S3 service deployed in the gluster namespace.
# execute this in the namespace where the gluster s3 service has been deployed (e.g. glusterfs)
endpoint="$(oc get -o jsonpath=$'{range .items[*]}{.spec.host}\n{end}' route | \
  grep s3 | head -n 1)"
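To see what the grep/head filtering does, here is an offline sketch of the same pipeline run against made-up route hostnames (they stand in for real `oc get route` output):

    # Sample hostnames; the pipeline keeps the first host containing "s3".
    hosts='heketi-glusterfs.apps.example.com
    gluster-s3-glusterfs.apps.example.com
    other-app.apps.example.com'
    endpoint="$(printf '%s\n' "$hosts" | grep s3 | head -n 1)"
    echo "$endpoint"   # gluster-s3-glusterfs.apps.example.com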
Via S3 Curl
- Install the following dependencies:
  - perl-Digest-HMAC
  - libxml2

  Like this:

  sudo yum install -y libxml2 perl-Digest-HMAC unzip
- Download and prepare the s3curl script. Note that the $endpoint environment variable needs to be set.

  curl -L -O http://s3.amazonaws.com/doc/s3-example-code/s3-curl.zip
  unzip s3-curl.zip
  sed -i -e '/^\(my \)\?@endpoints\s*=\s*/,/;/s/^/#/' \
         -e "/^#\(my \)\?@endpoints/i \
  my @endpoints = ( '$endpoint', );" s3-curl/s3curl.pl
  chmod a+x s3-curl/s3curl.pl
  rm s3-curl.zip
- Create an s3curl credentials file:

cat > ~/.s3curl <<EOF
%awsSecretAccessKeys = (
    ${S3_ACCOUNT} => {
        id => '${S3_ACCOUNT}:${S3_USER}',
        key => '${S3_PASS}',
    },
);
EOF
chmod 600 ~/.s3curl
- Execute the following to create your first bucket (e.g. bucket1):

  s3-curl/s3curl.pl --debug --id "${S3_ACCOUNT}" --put /dev/null -- -v "http://$endpoint/bucket1/"
- There cannot be an empty directory in an S3 object store, because directories exist only as object key prefixes. Therefore, let's create an empty file residing in a directory (e.g. dir1):

  s3-curl/s3curl.pl --id "${S3_ACCOUNT}" --put /dev/null -- \
    "http://$endpoint/bucket1/dir1/.direntry"
- List the bucket's contents:

  s3-curl/s3curl.pl --id "${S3_ACCOUNT}" -- "http://$endpoint/bucket1/" | xmllint --format -
You can find more examples in the s3curl repository.
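As a further sketch, a simple object round trip could look like this. It assumes $endpoint, ~/.s3curl, and bucket1 from the steps above; the object and file names are examples, and --put and --delete are standard s3curl options, but verify them against your copy of the script:

    # Upload a small local file as an object (example names).
    echo 'hello' > /tmp/hello.txt
    s3-curl/s3curl.pl --id "${S3_ACCOUNT}" --put /tmp/hello.txt -- \
        "http://$endpoint/bucket1/hello.txt"

    # Fetch it back (GET is the default operation).
    s3-curl/s3curl.pl --id "${S3_ACCOUNT}" -- "http://$endpoint/bucket1/hello.txt"

    # Remove the object again.
    s3-curl/s3curl.pl --id "${S3_ACCOUNT}" --delete -- "http://$endpoint/bucket1/hello.txt"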