Gluster S3 API deployment in OCP

A guide to deploying the GlusterFS S3 API in OpenShift on RHEL7.

Prerequisites:

Example account variables

The first step is to generate secure credentials under which you will access the buckets. You can export them as environment variables. For example:

export S3_ACCOUNT=ocpgluster
export S3_USER=User1
export S3_PASS=Password1

The variables map to more commonly known credentials like this:

AWS_ACCESS_KEY_ID="${S3_ACCOUNT}:${S3_USER}"
AWS_SECRET_ACCESS_KEY="${S3_PASS}"
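
With the example values above, these expand to:

AWS_ACCESS_KEY_ID="ocpgluster:User1"
AWS_SECRET_ACCESS_KEY="Password1"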

Installation

Using the advanced installer

Apart from the regular GlusterFS and heketi settings, the following variables can be set in your inventory file:

openshift_storage_glusterfs_s3_deploy=True
openshift_storage_glusterfs_s3_account="{{ lookup('env','S3_ACCOUNT') }}"
openshift_storage_glusterfs_s3_user="{{ lookup('env','S3_USER') }}"
openshift_storage_glusterfs_s3_password="{{ lookup('env','S3_PASS') }}"
openshift_storage_glusterfs_s3_pvc_size="20Gi"

See the full parameter list for descriptions.

Now deploy GlusterFS using the playbook.
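
For example, assuming a standard openshift-ansible checkout (the playbook path varies between releases, so verify it in your checkout):

cd openshift-ansible
ansible-playbook -i /path/to/inventory playbooks/openshift-glusterfs/config.yml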

Using upstream template

This is useful if you already have GlusterFS deployed in the OCP cluster.
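
A minimal sketch using a gluster-s3 template, e.g. from the gluster/gluster-kubernetes repository. The file name and parameter names below are assumptions; list the template's actual parameters first with oc process --parameters:

# hypothetical template file and parameter names -- verify with:
#   oc process --parameters -f gluster-s3-template.yaml
oc process -f gluster-s3-template.yaml \
  -p S3_ACCOUNT="$S3_ACCOUNT" -p S3_USER="$S3_USER" -p S3_PASSWORD="$S3_PASS" \
  | oc create -f -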

Accessing the S3 storage

As a prerequisite, you must determine the route to your s3 service. The following command assumes you have just one s3 service deployed in the gluster namespace.

# execute this in the namespace where the gluster s3 service has been deployed (e.g. glusterfs)
endpoint="$(oc get -o jsonpath=$'{range .items[*]}{.spec.host}\n{end}' route | \
  grep s3 | head -n 1)"
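
As a quick sanity check, print the endpoint and probe it (without credentials the service will likely answer with an error status such as 403, but it should answer):

echo "$endpoint"
curl -sI "http://$endpoint/"
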
  1. Install the following dependencies:

    • perl-Digest-HMAC

    • libxml2

    • unzip

      Like this:

      sudo yum install -y libxml2 perl-Digest-HMAC unzip
  2. Download and prepare the s3curl script. Note that the $endpoint environment variable set in the previous step is needed here.

    curl -L -O http://s3.amazonaws.com/doc/s3-example-code/s3-curl.zip
    unzip s3-curl.zip
    # comment out the stock @endpoints definition and insert our endpoint instead
    sed -i -e '/^\(my \)\?@endpoints\s*=\s*/,/;/s/^/#/' -e "/^#\(my \)\?@endpoints/i \
    my @endpoints = ( '$endpoint', );" s3-curl/s3curl.pl
    chmod a+x s3-curl/s3curl.pl
    rm s3-curl.zip
  3. Create an s3curl credentials file:

    cat > ~/.s3curl <<EOF
    %awsSecretAccessKeys = (
        ${S3_ACCOUNT} => {
            id => '${S3_ACCOUNT}:${S3_USER}',
            key => '${S3_PASS}',
        },
    );
    EOF
    chmod 600 ~/.s3curl
  4. Execute the following to create your first bucket (e.g. bucket1):

    s3-curl/s3curl.pl --debug --id "${S3_ACCOUNT}" --put /dev/null -- -v "http://$endpoint/bucket1/"
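
      To verify, you can list the account's buckets with a GET on the service root (the standard S3 ListBuckets call):

      s3-curl/s3curl.pl --id "${S3_ACCOUNT}" -- "http://$endpoint/" | xmllint --format -
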
Create a directory in the bucket

There cannot be a directory without a file in an S3 object store. Therefore, let’s create an empty file residing in a directory (e.g. dir1):

s3-curl/s3curl.pl --id "${S3_ACCOUNT}" --put /dev/null -- \
  "http://$endpoint/bucket1/dir1/.direntry"
List the bucket contents
s3-curl/s3curl.pl --id "${S3_ACCOUNT}" -- "http://$endpoint/bucket1/" | xmllint --format -
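
As a further example (purely illustrative, the file name is arbitrary), upload a local file as an object and fetch it back:

echo 'hello world' > /tmp/hello.txt
s3-curl/s3curl.pl --id "${S3_ACCOUNT}" --put /tmp/hello.txt -- \
  "http://$endpoint/bucket1/dir1/hello.txt"
s3-curl/s3curl.pl --id "${S3_ACCOUNT}" -- "http://$endpoint/bucket1/dir1/hello.txt"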

You can find more examples at the s3curl repository.
