- Kill all MySQL processes on the whole cluster (check via):
ps aux | grep mysql
- Remove grastate.dat and the InnoDB log files:
rm /var/lib/mysql/grastate.dat
rm /var/lib/mysql/ib_logfile*
- Edit this row in /etc/mysql/my.cnf:
wsrep_cluster_address="gcomm://"
- Start MySQL on this node
- Validate the status of the MySQL cluster:
salt-call mysql.status | grep -A1 wsrep_cluster_size
- Start MySQL on another node
- Wait until the second node rejoins the cluster; when the cluster size = 2, start the MySQL process on the last node:
salt-call mysql.status | grep -A1 wsrep_cluster_size
The cluster size should be equal to 3.
- Re-check the Galera state on the first node
- Stop the instance
- Rsync the qcow2 image to the right destination:
rsync -Pa /var/lib/nova/instances/instance-000004f2 destination_hostname:/var/lib/nova/instances/
- Update the record in the MySQL database:
UPDATE instances SET node='os-kvm-prod-dmz-self014.mgm.avg.com',host='os-kvm-prod-dmz-self014' WHERE uuid ='9f2d63b5-122c-47b1-812a-1bec3d40fbb9';
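The UPDATE above can be wrapped in a small helper so the hostname only has to be typed once. The helper name is hypothetical, and it assumes the node value is always the host plus the `.mgm.avg.com` suffix, as in the example above:

```shell
# Hypothetical helper: build the `instances` UPDATE from a short
# hostname and an instance UUID. Assumes node = host + '.mgm.avg.com'
# as in the example above; adjust for your environment.
build_host_update() {
  host="$1"; uuid="$2"
  printf "UPDATE instances SET node='%s.mgm.avg.com',host='%s' WHERE uuid='%s';" \
    "$host" "$host" "$uuid"
}

# Usage (against the nova database):
#   mysql nova -e "$(build_host_update os-kvm-prod-dmz-self014 9f2d63b5-122c-47b1-812a-1bec3d40fbb9)"
```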
- Log in to the database and switch to the cinder schema:
use cinder;
- Update the correct record:
UPDATE cinder.volumes SET status='available', attach_status='detached' WHERE id='2d66d83b-a644-480a-a457-56075838612b';
- Disable the cinder-volume service and mark it deleted in the DB:
cinder service-disable cpt06@EasyTier cinder-volume
mysql -uroot -pcloudlab -e "update services set deleted = 1 where host like 'cpt06@EasyTier' and disabled = 1" cinder
show processlist
- Shows on which node the active connections are
show status
- This output contains a lot of interesting information.
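Since `show status` prints hundreds of rows, a small filter for the most useful Galera health variables can help. A sketch, assuming the standard `wsrep_*` status variable names (adjust the pattern to taste):

```shell
# Sketch: filter the most useful Galera health variables out of
# `SHOW GLOBAL STATUS` output (standard wsrep_* variable names assumed).
filter_wsrep() {
  awk '$1 ~ /^wsrep_(cluster_size|cluster_status|local_state_comment|ready)$/ { print $1, $2 }'
}

# Typical use on a cluster node:
#   mysql -N -e "SHOW GLOBAL STATUS LIKE 'wsrep_%'" | filter_wsrep
```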