djsly / kube-controller-manager.post.patch.log
Created February 22, 2018 15:29
kube-controller-manager post volume detach patch
Feb 22 15:07:14 kube-controller-manager[125942]: I0222 15:07:14.553993 125942 actual_state_of_world.go:358] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/<sub_id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<rg>-dynamic-pvc-a7cbdf1d-11c0-11e8-888f-000d3a018174 to the node "kn-edge-2" mounted true
Feb 22 15:07:32 kube-controller-manager[125942]: I0222 15:07:32.489512 125942 actual_state_of_world.go:358] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/<sub_id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<rg>-dynamic-pvc-a7ca7d7a-11c0-11e8-888f-000d3a018174 to the node "kn-edge-0" mounted true
Feb 22 15:07:32 kube-controller-manager[125942]: W0222 15:07:32.861066 125942 reconciler.go:267] Multi-Attach error for volume "pvc-91d3b0a6-11b9-11e8-82b7-000d3a018ac3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<sub_id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<rg>-dynamic-pvc-91d3b0a6-11b9-11e8-82b7-000d3a018ac3") from node "
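The Multi-Attach warnings above come from the attach/detach controller: the Azure disk backing the PVC is still recorded as attached to its previous node, and Azure managed disks only support ReadWriteOnce access, so they cannot be attached to a second node until the first detach completes. A minimal sketch of such a single-attach claim follows; the claim name and storage class are illustrative assumptions, not values taken from the logs.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-azure-disk-claim        # illustrative name, not taken from the logs
spec:
  accessModes:
    - ReadWriteOnce                     # Azure disks attach to a single node at a time
  storageClassName: managed-premium     # assumed Azure disk storage class
  resources:
    requests:
      storage: 10Gi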
djsly / kube-controller-manager.log
Created February 21, 2018 01:09
kube-controller-manager azure disk detach error
I0220 23:29:43.411811 73673 actual_state_of_world.go:358] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/<sub_id>/resourceGroups/<rg_name>/providers/Microsoft.Compute/disks/<rg_name>-dynamic-pvc-a7cbdf1d-11c0-11e8-888f-000d3a018174 to the node "kn-edge-2" mounted true
W0220 23:29:48.741515 73673 reconciler.go:267] Multi-Attach error for volume "pvc-92317976-11b9-11e8-9620-000d3a017a13" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<sub_id>/resourceGroups/<rg_name>/providers/Microsoft.Compute/disks/<rg_name>-dynamic-pvc-92317976-11b9-11e8-9620-000d3a017a13") from node "kn-edge-1" Volume is already exclusively attached to one node and can't be attached to another
W0220 23:29:48.741558 73673 reconciler.go:267] Multi-Attach error for volume "pvc-91d3b0a6-11b9-11e8-82b7-000d3a018ac3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<sub_id>/resourceGroups/<rg_name>/providers/Microsoft.Compute/disks/<rg_name>-dynamic-pvc-91d3b0a6-11b9-11e8-82b7-000d3a018ac3") from node "kn-
[root@m<hostname> ~]# salt-call -l all state.sls etcd,journald
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/f_defaults.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/f_defaults.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: host.com
[TRACE ] None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler', were found in the configuration. Not loading the Logstash logging handlers module.
[TRACE ] The required configuration section, 'fluent_handler', was not found the in the configuration. Not loading the fluent logging handlers module.
[DEBUG ] Configuration file path: /etc/salt/minion
[root@<host> ~]# salt-call -l all state.sls journald,etcd
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/f_defaults.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/f_defaults.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: <host>
[TRACE ] None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler', were found in the configuration. Not loading the Logstash logging handlers module.
[TRACE ] The required configuration section, 'fluent_handler', was not found the in the configuration. Not loading the fluent logging handlers module.
[DEBUG ] Configuration file path: /etc/salt/minion
djsly / heapster-controller.yaml
Created July 25, 2017 15:50
yaml file for heapster controller which is losing taints
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
  labels:
    k8s-app: heapster
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
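The preview cuts off at spec:, but the addonmanager.kubernetes.io/mode: Reconcile label means the addon manager periodically rewrites this Deployment from the manifest on disk, which is one common way manually added scheduling tweaks disappear. As a rough sketch only, tolerations would sit under the pod template like this (the toleration shown is an assumed example, not part of the original gist):

spec:
  template:
    spec:
      tolerations:
        # assumed example toleration; nothing below spec: survives in the preview above
        - key: node-role.kubernetes.io/master
          effect: NoSchedule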
djsly / gist:6cf0e61ae440772f4c103f0898c8e62a
Created June 14, 2017 21:15
telegraf config -- issues with "took longer to collect" warnings
# Telegraf Configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.
# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# file would generate.
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
azurerm_resource_group.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-7f47533b4dd6/resourceGroups/slytest)
azurerm_virtual_network.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...rosoft.Network/virtualNetworks/slyvnet)
azurerm_storage_account.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...crosoft.Storage/storageAccounts/slysa0)
azurerm_subnet.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...ualNetworks/slyvnet/subnets/slyvnetsub)
azurerm_network_interface.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...osoft.Network/networkInterfaces/slyni0)
azurerm_storage_container.test: Refreshing state... (ID: vhds)
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
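In the upstream guestbook manifest, the commented section this refers to is a type: LoadBalancer line; with it uncommented the Service would look roughly as follows (the port and selector are filled in from the standard guestbook example and are assumptions here, since the preview stops at the comment):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer    # asks the cloud provider for an external load-balanced IP
  ports:
    - port: 80          # assumed from the upstream guestbook example
  selector:
    app: guestbook
    tier: frontend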
~/Downloads/kube-controller-manager (1).txt:12814: Mar 28 01:41:35 kubemaster0 kube-controller-manager[62486]: I0328 01:41:35.633992 62486 endpoints_controller.go:368] About to update endpoints for service "test5/frontend"
~/Downloads/kube-controller-manager (1).txt:12820: Mar 28 01:41:35 kubemaster0 kube-controller-manager[62486]: I0328 01:41:35.670599 62486 controller_utils.go:158] Controller test5/frontend-88237173 either never recorded expectations, or the ttl expired.
~/Downloads/kube-controller-manager (1).txt:12824: Mar 28 01:41:35 kubemaster0 kube-controller-manager[62486]: I0328 01:41:35.670675 62486 controller_utils.go:175] Setting expectations &controller.ControlleeExpectations{add:3, del:0, key:"test5/frontend-88237173", timestamp:time.Time{sec:63626262095, nsec:670673816, loc:(*time.Location)(0x39524a0)}}
~/Downloads/kube-controller-manager (1).txt:12845: Mar 28 01:41:35 kubemaster0 kube-controller-manager[62486]: I0328 01:41:35.687633 62486 controller_utils.go:192] Lowered expectations &
2017-03-21 16:02:07,977 INFO Reader.run: Name: lv_docker - Function: lvm.lv_present - Result: Clean Started: - 12:00:12.720280 Duration: 227.197 ms
2017-03-21 16:02:07,977 INFO Reader.run: Name: mkfs.btrfs /dev/vg01/lv_docker - Function: cmd.run - Result: Clean Started: - 12:00:12.948846 Duration: 12.035 ms
2017-03-21 16:02:07,978 INFO Reader.run: Name: /var/lib/docker - Function: file.directory - Result: Clean Started: - 12:00:12.961144 Duration: 1.161 ms
2017-03-21 16:02:07,978 INFO Reader.run: Name: /var/lib/docker - Function: mount.mounted - Result: Clean Started: - 12:00:12.963374 Duration: 25.333 ms
2017-03-21 16:02:07,978 INFO Reader.run: Name: net.ipv4.ip_forward - Function: sysctl.present - Result: Clean Started: - 12:00:12.988949 Duration: 26.663 ms
2017-03-21 16:02:07,978 INFO Reader.run: Name: docker - Function: service.dead - Result: Clean Started: - 12:00:13.016035 Duration: 13.698 ms
2017-03-21 16:02:07,978 INFO Reader.run: Name: old_docker_pkg - Function: pkg.purged - Result: Cle
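The state results above correspond to a docker storage setup; a hedged reconstruction of what such an SLS file might look like follows (the state IDs, device paths, and the mkfs command are taken from the log lines, while sizes and package names are assumptions):

lv_docker:
  lvm.lv_present:
    - name: lv_docker
    - vgname: vg01
    - size: 100G                        # assumed size, not shown in the log

mkfs.btrfs /dev/vg01/lv_docker:
  cmd.run:
    - onchanges:
      - lvm: lv_docker                  # only format the LV when it is (re)created

/var/lib/docker:
  file.directory:
    - makedirs: True
  mount.mounted:
    - device: /dev/vg01/lv_docker
    - fstype: btrfs

net.ipv4.ip_forward:
  sysctl.present:
    - value: 1

docker:
  service.dead: []                      # stop docker before repackaging

old_docker_pkg:
  pkg.purged:
    - pkgs:
      - docker                          # assumed package, the log only shows the state id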