
gist:b84993057299fa130f74c1c26f75b016
Nov 5 15:29:33 knew getty[30735]: open /dev/ttyu2: No such file or directory
Nov 5 15:58:11 knew smartd[1068]: Device: /dev/da16 [SAT], FAILED SMART self-check. BACK UP DATA NOW!
Nov 5 15:58:11 knew smartd[1068]: Device: /dev/da16 [SAT], Failed SMART usage Attribute: 240 Head_Flying_Hours.
OK, let's replace da20 (now on the system as da16)
sudo zpool replace system gpt/653BK12FFS9A.r1.c3 gpt/57NGK1ZGF57D.r1.c3
1 - zpool status
[dan@knew:~] $ zpool status system
  pool: system
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: resilvered 615G in 4h42m with 0 errors on Thu Nov 1 21:19:46 2018
config:
compare list of drives in system against those specified in periodic.conf
#!/bin/sh
DISKS=`/sbin/sysctl -n kern.disks`
CHECKED=`/usr/sbin/sysrc -nf /etc/periodic.conf daily_status_smart_devices`
DEV_DISKS=''
for disk in ${DISKS}
do
  if [ "${DEV_DISKS}" = "" ]
  then
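The preview above cuts off mid-loop. Here is a self-contained sketch of the comparison the script appears to be building; the DISKS and CHECKED values are hardcoded samples standing in for the real sysctl/sysrc output, so this is an assumption about the intended logic, not the author's finished script.

```shell
#!/bin/sh
# Sample data standing in for `sysctl -n kern.disks` and
# `sysrc -nf /etc/periodic.conf daily_status_smart_devices` on a real host.
DISKS="da0 da1 da2 da3"
CHECKED="/dev/da0 /dev/da1 /dev/da3"

MISSING=''
for disk in ${DISKS}
do
  case " ${CHECKED} " in
    *" /dev/${disk} "*) ;;                      # already listed in periodic.conf
    *) MISSING="${MISSING} /dev/${disk}" ;;     # present on system, not checked
  esac
done
echo "not in periodic.conf:${MISSING}"
```

With the sample lists above this prints `not in periodic.conf: /dev/da2`, i.e. the one attached drive that periodic.conf does not cover.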
Rescheduling failed full backups
[dan@bacula:/usr/local/etc/bacula] $ diff -ruN schedules.conf schedules-custom.conf
--- schedules.conf 2018-11-04 17:21:50.653054000 +0000
+++ schedules-custom.conf 2018-11-04 17:22:49.252924000 +0000
@@ -31,7 +31,7 @@
# and incremental backups other days
Schedule {
Name = "WeeklyCycle"
- Run = Level=Full 1st sun at 03:05
+ Run = Level=Full 1st sun at 17:25
Run = Level=Differential 2nd-5th sun at 03:05
1 - adjusting the drive locations
I've moved the failed drive from a drive caddy into the interior of the chassis. It is just sitting there, loose.
1 - r710-01 - smartd.conf
# DEVICESCAN
/dev/da0 -a -d scsi -m dan@langille.org -s S/../.././22
/dev/da1 -a -d scsi -m dan@langille.org -s S/../.././22
/dev/da2 -a -d scsi -m dan@langille.org -s S/../.././22
/dev/da3 -a -d scsi -m dan@langille.org -s S/../.././22
/dev/da4 -a -d scsi -m dan@langille.org -s S/../.././22
/dev/da5 -a -d scsi -m dan@langille.org -s S/../.././22
/dev/da0 -a -d scsi -m dan@langille.org -s L/../28/./23
/dev/da1 -a -d scsi -m dan@langille.org -s L/../01/./01
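The smartd.conf lines above differ only in the device name, so they could be generated rather than maintained by hand. A hedged sketch: the drive list here is a sample, while the -m address and short-test schedule are copied from the directives above.

```shell
# Sketch: emit one smartd.conf directive per drive (short self-test
# nightly at 22:00, as in the config above). DRIVES is a sample list.
DRIVES="da0 da1 da2"
CONF=$(for d in ${DRIVES}
do
  printf '/dev/%s -a -d scsi -m dan@langille.org -s S/../.././22\n' "$d"
done)
printf '%s\n' "${CONF}"
```

Redirecting that output into the config (or a file included by it) keeps the per-drive lines consistent as drives are added.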
test-drives.sh
#!/bin/sh
DRIVES=`/sbin/sysctl -n kern.disks`
DAY=`/bin/date +"%Oe"`
# yes, I could jump to the DAY-th element of the list
# Do you know how to do that?
TESTED=''
i=0;
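On the comment's question — jumping straight to the DAY-th element of a space-separated list — POSIX sh can do it with positional parameters: load the list with set, then shift past the first DAY-1 entries. A minimal sketch with sample values (the real script would use kern.disks and the date output):

```shell
# Sketch: pick the Nth word of a space-separated list.
# DRIVES and DAY are sample stand-ins for the script's real values.
DRIVES="ada0 da0 da1 da2 da3"
DAY=3
set -- ${DRIVES}        # load the list into $1..$#
shift $((DAY - 1))      # discard the first DAY-1 entries
NTH=$1
echo "drive for day ${DAY}: ${NTH}"
```

With the sample list, day 3 lands on da1. One caveat: this clobbers the script's own positional parameters, so do it in a function or after any "$@" handling.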
[dan@knew:~] $ sudo zpool status -v system
  pool: system
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Oct 25 21:22:38 2018
        116G scanned out of 44.3T at 1.96G/s, 6h24m to go
        1.80G resilvered, 0.26% done
config:
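The reported ETA is internally consistent. A quick arithmetic check, assuming zpool reports binary units (GiB/TiB), reproduces the 6h24m figure from the scanned total and the scan rate:

```shell
# Sanity-check the resilver ETA from the zpool output above:
# (44.3 TiB - 116 GiB) / 1.96 GiB/s, assuming binary units.
ETA=$(awk 'BEGIN {
  rem  = 44.3 * 1024 - 116          # GiB left to scan
  secs = rem / 1.96                 # seconds at 1.96 GiB/s
  h = int(secs / 3600)
  m = int((secs - h * 3600) / 60)
  printf "%dh%02dm\n", h, m
}')
echo "${ETA}"
```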
1 - validate.php
[dan@besser:/usr/local/www/librenms] $ sudo ./validate.php
====================================
Component | Version
--------- | -------
LibreNMS | 1.44
DB Schema | ?
PHP | 7.2.11
MySQL | ?
RRDTool | 1.7.0
SNMP | NET-SNMP 5.7.3
based upon
% make flavors-package-names -f /var/db/repos/PORTS-head/security/py-requests-kerberos/Makefile PORTSDIR=/var/db/repos/PORTS-head
py27-requests-kerberos-0.11.0_2
py36-requests-kerberos-0.11.0_2