Archers Law archerslaw

:octocat:
Focusing
View GitHub Profile
@archerslaw
archerslaw / How to generate a tape device.
Last active October 18, 2016 11:39
How to generate a tape device.
1.install and set up the SCSI target.
# rpm -q scsi-target-utils || yum -y install scsi-target-utils (perl-Config-General)
2.disable iptables and SELinux.
# service iptables stop
# chkconfig iptables off
# setenforce 0
3.set up the tape target.
# service tgtd start
# tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.st:tape:sttarget1
# tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
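The preview ends after the target binding; a possible continuation for creating the virtual tape LUN and logging in from the initiator, sketched under the assumption that target and initiator run on the same host and that the tgtimg tool from scsi-target-utils is available (image path and barcode are illustrative):
# mkdir -p /var/lib/vtape
# tgtimg --op new --device-type tape --barcode VTAPE01 --size 1024 --type data --file /var/lib/vtape/vtape0.img
# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /var/lib/vtape/vtape0.img --device-type tape
4.log in from the initiator (iscsi-initiator-utils) and check for the new tape device.
# iscsiadm -m discovery -t sendtargets -p 127.0.0.1
# iscsiadm -m node -T iqn.st:tape:sttarget1 -p 127.0.0.1 -l
# lsscsi | grep tape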
@archerslaw
archerslaw / [dump][pv dump] QEMU adds a pvpanic device to handle the guest panic event for automatic capturing
Created June 12, 2014 04:55
[dump][pv dump] QEMU adds a pvpanic device to handle the guest panic event for automatic capturing
1.boot guest with pvpanic device and QMP monitor.
# /usr/libexec/qemu-kvm -M pc-i440fx-rhel7.0.0 ... -qmp tcp:0:4444,server,nowait -serial unix:/tmp/ttyS0,server,nowait -spice port=5931,disable-ticketing -monitor stdio -device pvpanic
Note: There's a new RHEL-7 qemu-kvm bug that affects this test case: bug 990601. Once that bug is fixed, you'll have to replace ‘-global pvpanic.ioport=0x0505’ in step 1 with ‘-device pvpanic’
$ telnet 10.66.9.242 4444
{"execute":"qmp_capabilities"}
{"return": {}}
2.check that the 'info qtree' output contains the 'pvpanic' device and its ioport.
(qemu) info qtree
3.check the VM status in the HMP monitor.
(qemu) info status
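The preview stops here; a minimal sketch of the remaining check, assuming the guest kernel has the pvpanic driver and sysrq enabled (output lines are illustrative):
4.trigger a kernel panic inside the guest to exercise the pvpanic device.
(guest)# echo 1 > /proc/sys/kernel/sysrq
(guest)# echo c > /proc/sysrq-trigger
5.a GUEST_PANICKED event should arrive on the QMP connection from step 1, e.g.:
{"timestamp": {"seconds": 1402545600, "microseconds": 0}, "event": "GUEST_PANICKED", "data": {"action": "pause"}}
6.the HMP monitor should then report the paused state.
(qemu) info status
VM status: paused (guest-panicked)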
@archerslaw
archerslaw / How to use Persistent Reservation test with NPIV in KVM.
Last active August 29, 2015 14:02
How to use Persistent Reservation test with NPIV in KVM.
● For persistent reservation. (HBA Host: dell-pet105-04.qe.lab.eng.nay.redhat.com)
1.The initiator name is the name that is saved for the persistent reservation; if you start two guests, the persistent reservation should not move from the first to the second.
2.We create the vHBA outside QEMU, then just pass the LUN as seen in the vHBA using /dev/disk/by-path; for both NPIV and iSCSI this is "SCSI passthrough" (scsi-block). See the vHBA sketch after this list.
3.For RHEL 6.x, test only NPIV; for RHEL 7, test both NPIV and iSCSI for this feature.
4.Since lots of SCSI commands (sg_xxx) are already supported, only TRIM and persistent reservations need a separate test.
5.Why use NPIV:
N_Port ID Virtualization (NPIV) is a function available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. NPIV provides similar functionality for Host Bus Adaptors (HBAs) that SR-IOV provides for network interfaces. With NPIV, guests can be provided with a virtual Fibre Channel initiator to Storage Area
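The description above is cut off; a minimal sketch of creating the vHBA outside QEMU with libvirt (the parent HBA scsi_host5 is an illustrative assumption; libvirt generates the WWNN/WWPN automatically when they are omitted):
# cat > vhba.xml <<EOF
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
    </capability>
  </capability>
</device>
EOF
# virsh nodedev-create vhba.xml
Node device scsi_host6 created from vhba.xml
LUNs visible through the new vHBA then appear under /dev/disk/by-path/ and can be passed to the guest with scsi-block, as described in item 2.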
@archerslaw
archerslaw / Learn_virsh_XML_configure_and_qemu-kvm_cmdline.
Last active August 29, 2015 14:01
Learn_virsh_XML_configure_and_qemu-kvm_cmdline.
1.Attach a scsi-block/scsi-hd/scsi-cd/scsi-generic device.
1.1.scsi-block device.
e.g1:...-drive file=/dev/disk/by-path/ip-10.66.33.253:3260-iscsi-iqn.2014.sluo.com:iscsi.storage.1-lun-1,if=none,id=drive-scsi0-0-0-1,format=raw,cache=none -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1
...
<disk type='block' device='lun' >
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/disk/by-path/ip-10.66.33.253:3260-iscsi-iqn.2014.sluo.com:iscsi.storage.1-lun-1'/>
<target dev='sdb' bus='scsi'/>
<alias name='scsi0-0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
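The <disk> XML preview is truncated here; once saved as a complete element (scsi-block-lun.xml is an assumed file name), it can be hot-plugged into a running domain with virsh; a minimal sketch, assuming the domain is named rhel7:
# virsh attach-device rhel7 scsi-block-lun.xml --live
Device attached successfully
# virsh detach-device rhel7 scsi-block-lun.xml --live
Device detached successfully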
@archerslaw
archerslaw / whole qemu-kvm command line.
Last active August 29, 2015 14:01
whole qemu-kvm command line.
/usr/libexec/qemu-kvm -M pc -S -cpu SandyBridge -enable-kvm -m 4096 -smp 4,sockets=1,cores=4,threads=1 -no-kvm-pit-reinjection -usb -device usb-tablet,id=input0 -name sluo -uuid 990ea161-6b67-47b2-b803-19fb01d30d30 -rtc base=localtime,clock=host,driftfix=slew -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x3 -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm0,bus=virtio-serial0.0,id=port1 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm1,bus=virtio-serial0.0,id=port2 -drive file=/home/rhel7-64.qcow2,if=none,id=drive-system-disk,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop -device virtio-scsi-pci,bus=pci.0,addr=0x4,id=scsi0 -device scsi-hd,drive=drive-system-disk,id=system-disk,bus=scsi0.0,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virti
@archerslaw
archerslaw / Support maximum PCIe devices (256)
Created May 5, 2014 03:20
Support maximum PCIe devices (256)
#!/bin/sh
MACHINE=q35
SMP=4,cores=2,threads=2,sockets=1
MEM=2G
GUEST_IMG=/home/RHEL7.0.raw
IMG_FORMAT=raw
CLI="/usr/libexec/qemu-kvm -enable-kvm -M $MACHINE -smp $SMP -m $MEM -name vm1 -drive file=$GUEST_IMG,if=none,id=guest-img,format=$GUEST_IMG,werror=stop,rerror=stop -device ide-hd,drive=guest-img,bus=ide.0,unit=0,id=os-disk,bootindex=1 -spice port=5931,disable-ticketing -vga qxl -monitor stdio -qmp tcp:0:6666,server,nowait -boot menu=on"
@archerslaw
archerslaw / Test the maximum supported disks (max pci-bridge devices number for RHEL 7) in one guest.
Created May 5, 2014 03:05
Test the maximum supported disks (max pci-bridge devices number for RHEL 7) in one guest.
#Currently, the supported max number of pci-bridge devices is 930.
#!/bin/sh
MACHINE=pc-i440fx-rhel7.0.0
MEM=20G
IMG=/home/xuhan/rhel7cp2.qcow2_v3
CLI="/usr/libexec/qemu-kvm -vga none -net none -M $MACHINE -smp 2,core=2,thread=1,socket=1 -m $MEM -name vm1 -vnc :1 -monitor stdio -device pci-bridge,chassis_nr=1,id=bridge0,addr=0x02 -drive file=$IMG,if=none,id=hd,format=qcow2,werror=stop,rerror=stop -device virtio-blk-pci,scsi=off,drive=hd,id=os-disk,bus=bridge0,addr=0x01,bootindex=1 -serial unix:/tmp/bridge-con,server,nowait"
@archerslaw
archerslaw / how to debug when QEMU quits without any backtrace, using a breakpoint at exit.
Created April 30, 2014 02:56
how to debug when QEMU quits without any backtrace, using a breakpoint at exit.
breakpoints -- Making program stop at certain points
1.launch a KVM guest with gdb.
# gdb /usr/libexec/qemu-kvm
2.set the breakpoint at exit.
(gdb) b exit
Breakpoint 1 at 0xaaf30
3.start the program.
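The preview stops at step 3; a minimal sketch of the remaining steps (the qemu-kvm arguments are illustrative, use the command line that reproduces the quit):
(gdb) run -M pc -m 2G -smp 2 /home/rhel7-64.qcow2
4.when the breakpoint in exit() is hit, collect the backtrace to see which code path triggered the quit.
(gdb) bt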
@archerslaw
archerslaw / support max rtl8139 NICs and disable multifunction.
Created April 30, 2014 02:39
support max rtl8139 NICs and disable multifunction.
/usr/libexec/qemu-kvm -M pc -enable-kvm -m 4G -smp 2,sockets=2,cores=1,threads=1 -name rhel7 -uuid 745fe449-aac8-29f1-0c2d-5042a707263b -drive file=/home/windows_server_2012_r2_x64.raw,if=none,id=drive-ide0-0-0,format=raw,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,id=hostnet1,vhost=on,script=/etc/qemu-ifup -device rtl8139,netdev=hostnet1,mac=00:12:17:18:49:01,bus=pci.0,id=rtl81391 -netdev tap,id=hostnet2,vhost=on,script=/etc/qemu-ifup -device rtl8139,netdev=hostnet2,mac=00:12:17:18:49:02,bus=pci.0,id=rtl81392 -netdev tap,id=hostnet3,vhost=on,script=/etc/qemu-ifup -device rtl8139,netdev=hostnet3,mac=00:12:17:18:49:03,bus=pci.0,id=rtl81393 -netdev tap,id=hostnet4,vhost=on,script=/etc/qemu-ifup -device rtl8139,netdev=hostnet4,mac=00:12:17:18:49:04,bus=pci.0,id=rtl81394 -netdev tap,id=hostnet5,vhost=on,script=/etc/qemu-ifup -device rtl8139,netdev=hostnet5,mac=00:12:17:18:49:05,bus=pci.0,id=rtl81395 -netdev tap,id=hostnet6,vhost=on,script=/etc/qemu-ifup -
@archerslaw
archerslaw / support max e1000 NICs number and disable multifunction.
Created April 30, 2014 02:36
support max e1000 NICs number and disable multifunction.
/usr/libexec/qemu-kvm -M pc -enable-kvm -m 4G -smp 2,sockets=2,cores=1,threads=1 -name rhel7 -uuid 745fe449-aac8-29f1-0c2d-5042a707263b -drive file=/home/windows_server_2012_r2_x64.raw,if=none,id=drive-ide0-0-0,format=raw,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,id=hostnet1,vhost=on,script=/etc/qemu-ifup -device e1000,netdev=hostnet1,mac=00:12:17:18:49:01,bus=pci.0,id=e10001 -netdev tap,id=hostnet2,vhost=on,script=/etc/qemu-ifup -device e1000,netdev=hostnet2,mac=00:12:17:18:49:02,bus=pci.0,id=e10002 -netdev tap,id=hostnet3,vhost=on,script=/etc/qemu-ifup -device e1000,netdev=hostnet3,mac=00:12:17:18:49:03,bus=pci.0,id=e10003 -netdev tap,id=hostnet4,vhost=on,script=/etc/qemu-ifup -device e1000,netdev=hostnet4,mac=00:12:17:18:49:04,bus=pci.0,id=e10004 -netdev tap,id=hostnet5,vhost=on,script=/etc/qemu-ifup -device e1000,netdev=hostnet5,mac=00:12:17:18:49:05,bus=pci.0,id=e10005 -netdev tap,id=hostnet6,vhost=on,script=/etc/qemu-ifup -device e1000,netdev=