
  • start a vstart cluster
  • create a tenanted user:
bin/radosgw-admin user create --display-name "Ka Boom" --tenant boom --uid ka --access_key ka --secret_key boom
  • create a bucket on that tenant
AWS_ACCESS_KEY_ID=ka AWS_SECRET_ACCESS_KEY=boom aws --endpoint-url http://localhost:8000 s3 mb s3://fish
  • create a log bucket with no tenant
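Something like the following should work for that (the "log" user, its keys, and the bucket name are made up for this example):

bin/radosgw-admin user create --display-name "Log User" --uid log --access_key log --secret_key log
AWS_ACCESS_KEY_ID=log AWS_SECRET_ACCESS_KEY=log aws --endpoint-url http://localhost:8000 s3 mb s3://logs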

Warm and Fuzzy

Background

The RGW's frontend is an S3 REST API server, and in this project we would like to use a REST API fuzzer to test the RGW for security issues (and other bugs). We would recommend exploring the Restler tool; there is a very good intro in this video. Feed it the AWS S3 OpenAPI spec, and see what happens when we let it connect to the RGW.
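A rough sketch of that loop, assuming Restler was built per its README ("Restler" below stands for however the binary is invoked on your platform) and a local copy of the S3 OpenAPI spec named s3-openapi.json (both names are assumptions; double-check the flags against the Restler docs):

Restler compile --api_spec ./s3-openapi.json
Restler test --grammar_file Compile/grammar.py --dictionary_file Compile/dict.json --settings Compile/engine_settings.json --no_ssl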

Project

Initial (evaluation) Phase

  • run Ceph with a radosgw. You can use cephadm to install and run Ceph in containers, or build it from source and run it in a vstart cluster
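For the source-built option, the standard vstart invocation from the build directory brings up a cluster with an RGW listening on port 8000 (the daemon counts below are just a minimal example):

MON=1 OSD=1 MGR=1 RGW=1 ../src/vstart.sh -n -d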

The More the Merrier

Background

Persistent bucket notifications are a very useful and powerful feature. To learn more about them, you can look at this tech talk and use case example.

Persistent notifications are usually better than synchronous notifications, for several reasons:

  • the queue they are using is, in fact, a RADOS object. This gives the queue the reliability level of RADOS
  • they do not add the delay of sending the notification to the broker to the client request round trip time
  • they allow for temporary disconnects with the broker or broker restarts without affecting the service
  • they have a retry mechanism bounded by both time and number of attempts
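As a quick illustration, a persistent topic is just a topic created with the "persistent" attribute. The topic name, HTTP sink, and bucket below are made up (assuming a vstart RGW on port 8000 and the standard SNS/S3 APIs):

aws --endpoint-url http://localhost:8000 sns create-topic --name fishtopic --attributes '{"push-endpoint": "http://localhost:10900", "persistent": "true"}'
aws --endpoint-url http://localhost:8000 s3api put-bucket-notification-configuration --bucket fish --notification-configuration '{"TopicConfigurations": [{"Id": "notif1", "TopicArn": "arn:aws:sns:default::fishtopic", "Events": ["s3:ObjectCreated:*"]}]}'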

Test

This test assumes a Ceph cluster with an RGW deployed via vstart.

  • create the "log' bucket:
aws --endpoint-url http://localhost:8000 s3 mb s3://all-logs

Standard Mode

  • create a bucket for standard logging:
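A sketch of the missing commands, following the pattern above (the bucket name is made up; the put-bucket-logging call uses the standard S3 API shape, targeting the "all-logs" bucket created earlier):

aws --endpoint-url http://localhost:8000 s3 mb s3://standard-fish
aws --endpoint-url http://localhost:8000 s3api put-bucket-logging --bucket standard-fish --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "all-logs", "TargetPrefix": "standard/"}}'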

K8s Setup

install minikube, run minikube with enough CPUs and 2 extra disks (for 2 OSDs):

$ minikube start --cpus 6 --extra-disks=2 --driver=kvm2

install kubectl and use it from the host:

$ eval $(minikube docker-env)
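To sanity-check the setup, both tools should now be talking to the minikube VM (assuming the default minikube context):

$ kubectl get nodes
$ docker ps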

Phase 0

  • draft PR
  • initial PR
  • initial test PR

Phase 1

  • add "flush" REST API call to fix the issue of lazy-commit. use POST /<bucket name>/?logging as the command
  • add admin command to get bucket logging info: radosgw-admin bucket logging get
  • handle copy correctly:
  • in "Journal" mode, we should just see the "PUT" of the new object (existing behavior)
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler
import json

class HTTPPostHandler(BaseHTTPRequestHandler):
    """HTTP POST handler class storing the received events in its http server"""
    def do_POST(self):
        """implementation of POST handler"""
        # read the notification payload and keep it on the owning server
        content_length = int(self.headers['Content-Length'])
        body = self.rfile.read(content_length)
        self.server.events.append(json.loads(body))
        # acknowledge the event so the broker does not redeliver it
        self.send_response(200)
        self.end_headers()

class HTTPServerWithEvents(ThreadingHTTPServer):
    """HTTP server storing the events received by the handler"""
    def __init__(self, addr):
        self.events = []
        super().__init__(addr, HTTPPostHandler)
Fedora28-32

Current status:

df -h

Find large files in the system:

sudo find / -type f -size +1000M -exec ls -lh {} \;