Erwan Velu (ErwanAliasr1)
erwan@R1:~/Devel/chroot/ceph/src (evelu-check)$ cat ceph-disk/run-tox.sh.log
flake8 develop-inst-nodeps: /ceph/src/ceph-disk
flake8 installed: ceph-detect-init==1.0.1,-e git+https://github.com/ceph/ceph.git@75c6c53aa21819281f99f21c20c9ada9856a1c21#egg=ceph_disk&subdirectory=src/ceph-disk,configobj==5.0.6,coverage==4.0.3,discover==0.4.0,extras==0.0.3,fixtures==1.4.0,flake8==2.5.4,funcsigs==0.4,linecache2==1.0.0,mccabe==0.4.0,mock==1.3.0,pbr==1.8.1,pep8==1.7.0,pluggy==0.3.1,py==1.4.31,pyflakes==1.0.0,pyrsistent==0.11.12,pytest==2.8.7,python-mimeparse==1.5.1,python-subunit==1.2.0,six==1.10.0,testrepository==0.0.20,testtools==2.0.0,tox==2.3.1,traceback2==1.4.0,unittest2==1.1.0,virtualenv==14.0.6,wheel==0.29.0
flake8 runtests: PYTHONHASHSEED='2232632747'
flake8 runtests: commands[0] | flake8 --ignore=H105,H405 ceph_disk tests
py27 develop-inst-nodeps: /ceph/src/ceph-disk
py27 installed: ceph-detect-init==1.0.1,-e git+https://github.com/ceph/ceph.git@75c6c53aa21819281f99f21c20c9ada9856a1c21#egg=ceph_disk&subdirecto
diff --git a/src/ceph-disk/tests/test_main.py b/src/ceph-disk/tests/test_main.py
index fe85eb7..47b74ae 100644
--- a/src/ceph-disk/tests/test_main.py
+++ b/src/ceph-disk/tests/test_main.py
@@ -315,7 +315,7 @@ class TestCephDisk(object):
partition_uuid = "56244cf5-83ef-4984-888a-2d8b8e0e04b2"
disk = "Xda"
partition = "Xda1"
- holders = ["dm-0"]
+ holders = ["dm-100"]
#!/bin/bash
pids=""
(sleep 10; exit 10) & pids="$pids $!"
(sleep 5; exit 5) & pids="$pids $!"
# 9999 is (presumably) not a child of this shell: wait will fail on it
pids="$pids 9999"
for pid in $pids; do
    wait $pid
    echo "pid $pid exited with $?"
done
#!/bin/bash
function run_in_background() {
    # The first argument is the name of the PID variable
    # We execute everything passed in argument
    # And save the running pid in the variable name
    local pid_variable=$1
    shift
    "$@" & eval "$pid_variable+=\" $!\""
}
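A minimal usage sketch (my own example, not part of the gist): reproduce `run_in_background` locally, launch two short jobs, then wait on each collected PID and report its exit status.

```shell
#!/bin/bash
# Collect the PIDs of two background jobs into one variable, then
# wait for each one in turn and print its exit status.
function run_in_background() {
    local pid_variable=$1
    shift
    "$@" & eval "$pid_variable+=\" $!\""
}

pids=""
run_in_background pids sleep 0.1
run_in_background pids true
for pid in $pids; do
    wait "$pid"
    echo "pid $pid exited with $?"
done
```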
def get_scratch_devices2(remote):
    """
    Extract the list of free block devices from a host
    """
    used_block_devices = get_used_block_devices(remote)
    used_block_devices.append(get_root_device(remote))
    translated_uuids = translate_block_UUID(used_block_devices)
    expanded_block_devices = expand_dm_devices(translated_uuids)
    translated_block_devices = translate_block_devices_path(expanded_block_devices)
    final_used_block_devices = partitions_to_block_device(translated_block_devices)
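The helper chain above narrows "all block devices" down to the free ones. As a rough shell analogue of the final filtering step (entirely an assumption, not the real implementation; the sample data is fabricated), keeping only whole disks with no mountpoint from `lsblk`-style output looks like:

```shell
#!/bin/bash
# Fabricated sample of `lsblk -o NAME,TYPE,MOUNTPOINT` output.
sample='sda  disk /
sdb  disk
sdc  disk'
# Keep whole disks that have no mountpoint: those are scratch candidates.
free_disks=$(printf '%s\n' "$sample" | awk '$2 == "disk" && $3 == "" {print "/dev/" $1}')
echo "$free_disks"
```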
---
# Defines deployment design and assigns role to server groups
- hosts: localhost
gather_facts: false
vars:
lookup_disks: "{'storage_disks': {'size': 'gt(800 MB)', 'rotational': '1', 'count' : '*'}}"
tasks:
# If we can't get python2 installed before any module is used we will fail
# so just try what we can to get it installed
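For reference, the `lookup_disks` string above describes one disk profile. Hypothetically (values and the second profile name are invented, reusing the same syntax), a run could also request a couple of non-rotational devices alongside the capacity disks:

```yaml
    lookup_disks: "{'storage_disks': {'size': 'gt(800 MB)', 'rotational': '1', 'count': '*'},
                    'journal_disks': {'rotational': '0', 'count': '2'}}"
```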
TASK [debug] *******************************************************************
ok: [localhost] => {
"devices": {
"storage_disks_000_000": {
"bdev": "/dev/disk/by-id/scsi-36848f690e68a50001e428e4f1e211ba2",
"holders": [],
"host": "RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 01)",
"model": "PERC H710P",
"partitions": {},
"removable": "0",

Selecting disks to be used by Ceph-Ansible

Why do we need a module for selecting disks?

The legacy approach to select disks on a set of servers is to use their logical names like "/dev/sdx".

Using a logical name to point at a device has the following drawbacks:

  • the device path is not consistent over time: adding or removing devices changes the names
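By contrast, `/dev/disk/by-id` names (like the `scsi-…` path in the debug output above) stay stable across reboots and device reordering. A tiny sketch of that indirection, using a temporary directory and a made-up id rather than a real device:

```shell
#!/bin/bash
# Simulate a by-id symlink: the stable name resolves to whatever node it
# currently points at, so callers never hard-code /dev/sdX.
tmp=$(readlink -f "$(mktemp -d)")
touch "$tmp/sdb"                                 # stand-in for /dev/sdb
ln -s "$tmp/sdb" "$tmp/scsi-3600508b1001c5a72"   # made-up persistent id
resolved=$(readlink -f "$tmp/scsi-3600508b1001c5a72")
echo "$resolved"
```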
ErwanAliasr1 / pwet.sh (last active March 16, 2018 11:56)
function run_tox {
    case "$CEPH_ANSIBLE_BRANCH" in
    stable-*)
        CEPH_DOCKER_IMAGE_TAG=$(find_latest_tag $RELEASE)
        ;;
    master)
        CEPH_DOCKER_IMAGE_TAG="latest"
        ;;
    *)
        CEPH_DOCKER_IMAGE_TAG=""