Gotta Catch 'Em All - GSoC 2023 Ceph Project
Below are detailed instructions for the Gotta Catch 'Em All GSoC 2023 Ceph project.
To access the list of issues found by Coverity, you first need to open a Coverity account. I would recommend signing up with your GitHub account, since you will need one anyway to contribute to Ceph. Once you have your Coverity account, you can request access from the Ceph project page. The pending Coverity issues for the RGW can be found here: https://scan5.scan.coverity.com/reports.htm#v58144/p10114
As with any static analysis tool, these are just suggestions that require further analysis and classification by a developer:
- an issue may be a real bug that is worthwhile fixing
- or a minor issue that does not need to be fixed
- some are "false positive" issues
- and some are code smells indicating an issue that needs to be fixed, but not necessarily the issue pointed out by the tool
- something else?

The goal of the project is to go through as many of these issues as possible, and to classify and fix them.
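As a sketch, the triage decision above could be modeled like this (the `Triage` and `Issue` names are hypothetical, just to make the workflow concrete; only the tracker id 57516 is a real example from this page):

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical triage categories, mirroring the list above.
class Triage(Enum):
    REAL_BUG = auto()        # a real issue worth fixing
    MINOR = auto()           # real, but not worth fixing
    FALSE_POSITIVE = auto()  # the tool is wrong; mark it as such in the code
    CODE_SMELL = auto()      # the code needs work, but not for the reported issue

@dataclass
class Issue:
    cid: int        # Coverity issue id
    impact: str     # "high" / "medium" / "low"
    triage: Triage

def needs_fix(issue: Issue) -> bool:
    """Only real bugs and code smells result in a code change."""
    return issue.triage in (Triage.REAL_BUG, Triage.CODE_SMELL)

issues = [
    Issue(57516, "high", Triage.REAL_BUG),
    Issue(11111, "medium", Triage.FALSE_POSITIVE),  # made-up id
]
print([i.cid for i in issues if needs_fix(i)])  # → [57516]
```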
More frequent scans of the latest code base are performed by the Ceph team, and the results are posted here: http://folio07.sepia.ceph.com/main/
Static analysis should be part of a process (ideally automated) that prevents bugs from sneaking into the system, even when reviewers miss them and testing does not cover them. However, given the number of issues that currently exist, both real and false-positive, it would be difficult to deploy such a process. Once the issues are cleaned up, false positives are marked as such in the code, and real issues are either fixed or have trackers opened against them, it would be easy to add a process (not in scope for this GSoC project) where newly found issues are reported and must be addressed by the developer who introduced them.
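At its core, such a gating process is just a diff of the latest scan against a triaged baseline. A minimal sketch (assuming plain integer issue ids; a real setup would key on Coverity's stable issue identifiers, which is an assumption here):

```python
def new_findings(baseline: set, current_scan: set) -> set:
    """Return issues present in the current scan but absent from the
    triaged baseline: the ones the latest changes introduced."""
    return current_scan - baseline

# Hypothetical issue ids: everything in `baseline` has already been triaged
# (fixed, tracked, or marked as a false positive in the code).
baseline = {101, 102, 103}
current = {101, 103, 204}
print(sorted(new_findings(baseline, current)))  # → [204]
```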
In this step we will build Ceph and test its Object Store interface.
The first step is to set up a Linux-based development environment; as a minimum you will need a machine with 4 CPUs, 8GB RAM, and 50GB of disk. Unless you already have a Linux distro you like, I would recommend choosing from:
- Fedora (37 or rawhide) - my favorite!
- Ubuntu (20.04 LTS)
- OpenSuse (Leap 15.3/4 or tumbleweed)
- WSL (Windows Subsystem for Linux)
Once you have that up and running, you should clone the Ceph repo from GitHub (https://github.com/ceph/ceph). If you don't know what GitHub and git are, this is the right time to close these gaps :-) And yes, you should have a GitHub account, so you can later share your work on the project.
First, install any missing system dependencies by running the install-deps.sh script from the top of the Ceph source tree:
./install-deps.sh
Note that the first build may take a long time, so the following cmake parameters can be used to minimize the build time.
With a fresh Ceph clone, use the following:
./do_cmake.sh -DBOOST_J=$(nproc) -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DWITH_MGR_DASHBOARD_FRONTEND=OFF \
  -DWITH_DPDK=OFF -DWITH_SPDK=OFF -DWITH_SEASTAR=OFF -DWITH_CEPHFS=OFF -DWITH_RBD=OFF -DWITH_KRBD=OFF -DWITH_CCACHE=OFF
If the build directory already exists, you can rebuild the ninja files by using (from within the build directory):
cmake -DBOOST_J=$(nproc) -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DWITH_MGR_DASHBOARD_FRONTEND=OFF \
  -DWITH_DPDK=OFF -DWITH_SPDK=OFF -DWITH_SEASTAR=OFF -DWITH_CEPHFS=OFF -DWITH_RBD=OFF -DWITH_KRBD=OFF -DWITH_CCACHE=OFF ..
Then invoke the build process (using ninja) from within the build directory (created by do_cmake.sh).
Assuming the build was completed successfully, you can run the unit tests (see: https://github.com/ceph/ceph#running-unit-tests).
Now you are ready to run the Ceph processes, as explained here: https://github.com/ceph/ceph#running-a-test-cluster. You probably would also like to check the developer guide (https://docs.ceph.com/docs/master/dev/developer_guide/) and learn more about how to build Ceph and run it locally (https://docs.ceph.com/docs/master/dev/quick_guide/). I would recommend using the following command for starting the cluster:
MON=1 OSD=1 MDS=0 MGR=1 RGW=1 ../src/vstart.sh -n -d
Assuming you have everything up and running, you can create a bucket in Ceph and upload an object to it.
The best way to do that is with the s3cmd Python command-line tool.
Note that the tool is mainly geared towards AWS S3, so make sure to specify the location of the RGW as the endpoint, and the RGW credentials (as printed to the screen after running vstart.sh).
$ s3cmd --no-ssl --host=localhost:8000 --host-bucket="localhost:8000/%(bucket)" \
    --access_key=0555b35654ad1656d804 \
    --secret_key=h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q== \
    mb s3://mybucket
This would create a bucket called mybucket in Ceph.
$ s3cmd --no-ssl --host=localhost:8000 --host-bucket="localhost:8000/%(bucket)" \
    --access_key=0555b35654ad1656d804 \
    --secret_key=h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q== \
    put myimage.jpg s3://mybucket
And this would upload myimage.jpg into that bucket.
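Since both commands repeat the same connection flags, it may be convenient to wrap them once in a small shell function (a sketch; `mys3` is a made-up name, and the credentials are the demo values from above, so substitute the ones vstart.sh printed for you):

```shell
# Hypothetical wrapper so the connection flags are typed only once.
# Credentials are the demo values from the examples above; replace with yours.
mys3() {
  s3cmd --no-ssl --host=localhost:8000 --host-bucket='localhost:8000/%(bucket)' \
        --access_key=0555b35654ad1656d804 \
        --secret_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==' \
        "$@"
}

# The two examples above then shorten to:
#   mys3 mb s3://mybucket
#   mys3 put myimage.jpg s3://mybucket
```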
In this step we will try to analyze an issue from the list above (with "high" or "medium" impact) and see which category it matches.
Pick one of the issues that was already classified as a bug and try to fix it. Note that these issues will have a "tracker", e.g. https://tracker.ceph.com/issues/57516
Note that no registration is needed to read tracker issues. However, to update an issue you must register with the Ceph tracker.