danking/2b51ed028bd3843b5919d1434a746f5f
Created August 8, 2018 13:18
-- Logs begin at Wed 2018-08-08 13:03:33 UTC, end at Wed 2018-08-08 13:17:16 UTC. -- | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys cpuset | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys cpu | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys cpuacct | |
Aug 08 13:03:33 localhost kernel: Linux version 4.4.111+ (chrome-bot@build279-m2.golo.chromium.org) (gcc version 4.9.x 20150123 (prerelease) (4.9.2_cos_gg_4.9.2-r175-0c5a656a1322e137fa4a251f2ccc6c4022918c0a_4.9.2-r175) ) #1 SMP Thu Feb 1 22:06:37 PST 2018 | |
Aug 08 13:03:33 localhost kernel: Command line: BOOT_IMAGE=/syslinux/vmlinuz.A init=/usr/lib/systemd/systemd boot=local rootwait ro noresume noswap loglevel=7 noinitrd console=ttyS0 vsyscall=emulate security=apparmor systemd.unified_cgroup_hierarchy=false systemd.legacy_systemd_cgroup_controller=true dm_verity.error_behavior=3 dm_verity.max_bios=-1 dm_verity.dev_wait=1 i915.modeset=1 cros_efi root=/dev/dm-0 "dm=1 vroot none ro 1,0 2539520 verity payload=PARTUUID=72C5C8D0-6721-0E4A-9408-2C95B1A56C3E hashtree=PARTUUID=72C5C8D0-6721-0E4A-9408-2C95B1A56C3E hashstart=2539520 alg=sha1 root_hexdigest=bd2e40281a062c14b4946005d2e425cb8e7ea881 salt=547509642c1cf4f33b76d318f1eec6f873178e767793fdcc22435f5ed85c08b2" | |
Aug 08 13:03:33 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 | |
Aug 08 13:03:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x01: 'x87 floating point registers' | |
Aug 08 13:03:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x02: 'SSE registers' | |
Aug 08 13:03:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x04: 'AVX registers' | |
Aug 08 13:03:33 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. | |
Aug 08 13:03:33 localhost kernel: x86/fpu: Using 'eager' FPU context switches. | |
Aug 08 13:03:33 localhost kernel: e820: BIOS-provided physical RAM map: | |
Aug 08 13:03:33 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable | |
Aug 08 13:03:33 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved | |
Aug 08 13:03:33 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved | |
Aug 08 13:03:33 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bfff2fff] usable | |
Aug 08 13:03:33 localhost kernel: BIOS-e820: [mem 0x00000000bfff3000-0x00000000bfffffff] reserved | |
Aug 08 13:03:33 localhost kernel: BIOS-e820: [mem 0x00000000fffbc000-0x00000000ffffffff] reserved | |
Aug 08 13:03:33 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x00000007bfffffff] usable | |
Aug 08 13:03:33 localhost kernel: NX (Execute Disable) protection: active | |
Aug 08 13:03:33 localhost kernel: SMBIOS 2.4 present. | |
Aug 08 13:03:33 localhost kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 | |
Aug 08 13:03:33 localhost kernel: Hypervisor detected: KVM | |
Aug 08 13:03:33 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved | |
Aug 08 13:03:33 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable | |
Aug 08 13:03:33 localhost kernel: e820: last_pfn = 0x7c0000 max_arch_pfn = 0x400000000 | |
Aug 08 13:03:33 localhost kernel: MTRR default type: write-back | |
Aug 08 13:03:33 localhost kernel: MTRR fixed ranges enabled: | |
Aug 08 13:03:33 localhost kernel: 00000-9FFFF write-back | |
Aug 08 13:03:33 localhost kernel: A0000-BFFFF uncachable | |
Aug 08 13:03:33 localhost kernel: C0000-FFFFF write-protect | |
Aug 08 13:03:33 localhost kernel: MTRR variable ranges enabled: | |
Aug 08 13:03:33 localhost kernel: 0 base 0000C0000000 mask 3FFFC0000000 uncachable | |
Aug 08 13:03:33 localhost kernel: 1 disabled | |
Aug 08 13:03:33 localhost kernel: 2 disabled | |
Aug 08 13:03:33 localhost kernel: 3 disabled | |
Aug 08 13:03:33 localhost kernel: 4 disabled | |
Aug 08 13:03:33 localhost kernel: 5 disabled | |
Aug 08 13:03:33 localhost kernel: 6 disabled | |
Aug 08 13:03:33 localhost kernel: 7 disabled | |
Aug 08 13:03:33 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WC UC- WT | |
Aug 08 13:03:33 localhost kernel: e820: last_pfn = 0xbfff3 max_arch_pfn = 0x400000000 | |
Aug 08 13:03:33 localhost kernel: found SMP MP-table at [mem 0x000f2a80-0x000f2a8f] mapped at [ffff8800000f2a80] | |
Aug 08 13:03:33 localhost kernel: Scanning 1 areas for low memory corruption | |
Aug 08 13:03:33 localhost kernel: Base memory trampoline at [ffff880000099000] 99000 size 24576 | |
Aug 08 13:03:33 localhost kernel: Using GB pages for direct mapping | |
Aug 08 13:03:33 localhost kernel: BRK [0x25ae8000, 0x25ae8fff] PGTABLE | |
Aug 08 13:03:33 localhost kernel: BRK [0x25ae9000, 0x25ae9fff] PGTABLE | |
Aug 08 13:03:33 localhost kernel: BRK [0x25aea000, 0x25aeafff] PGTABLE | |
Aug 08 13:03:33 localhost kernel: ACPI: Early table checksum verification disabled | |
Aug 08 13:03:33 localhost kernel: ACPI: RSDP 0x00000000000F2A40 000014 (v00 Google) | |
Aug 08 13:03:33 localhost kernel: ACPI: RSDT 0x00000000BFFF3420 000038 (v01 Google GOOGRSDT 00000001 GOOG 00000001) | |
Aug 08 13:03:33 localhost kernel: ACPI: FACP 0x00000000BFFFCF30 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) | |
Aug 08 13:03:33 localhost kernel: ACPI: DSDT 0x00000000BFFF3460 0017B2 (v01 Google GOOGDSDT 00000001 GOOG 00000001) | |
Aug 08 13:03:33 localhost kernel: ACPI: FACS 0x00000000BFFFCEC0 000040 | |
Aug 08 13:03:33 localhost kernel: ACPI: FACS 0x00000000BFFFCEC0 000040 | |
Aug 08 13:03:33 localhost kernel: ACPI: SSDT 0x00000000BFFF65B0 00690D (v01 Google GOOGSSDT 00000001 GOOG 00000001) | |
Aug 08 13:03:33 localhost kernel: ACPI: APIC 0x00000000BFFF5CD0 0000A6 (v01 Google GOOGAPIC 00000001 GOOG 00000001) | |
Aug 08 13:03:33 localhost kernel: ACPI: WAET 0x00000000BFFFCF00 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) | |
Aug 08 13:03:33 localhost kernel: ACPI: SRAT 0x00000000BFFF4C20 000128 (v01 Google GOOGSRAT 00000001 GOOG 00000001) | |
Aug 08 13:03:33 localhost kernel: ACPI: Local APIC address 0xfee00000 | |
Aug 08 13:03:33 localhost kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 | |
Aug 08 13:03:33 localhost kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 | |
Aug 08 13:03:33 localhost kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 | |
Aug 08 13:03:33 localhost kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 | |
Aug 08 13:03:33 localhost kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 | |
Aug 08 13:03:33 localhost kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 | |
Aug 08 13:03:33 localhost kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 | |
Aug 08 13:03:33 localhost kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 | |
Aug 08 13:03:33 localhost kernel: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] | |
Aug 08 13:03:33 localhost kernel: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] | |
Aug 08 13:03:33 localhost kernel: SRAT: Node 0 PXM 0 [mem 0x100000000-0x7bfffffff] | |
Aug 08 13:03:33 localhost kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] | |
Aug 08 13:03:33 localhost kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x7bfffffff] -> [mem 0x00000000-0x7bfffffff] | |
Aug 08 13:03:33 localhost kernel: NODE_DATA(0) allocated [mem 0x7bfffa000-0x7bfffdfff] | |
Aug 08 13:03:33 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 | |
Aug 08 13:03:33 localhost kernel: kvm-clock: cpu 0, msr 7:bfff9001, primary cpu clock | |
Aug 08 13:03:33 localhost kernel: kvm-clock: using sched offset of 1301013127 cycles | |
Aug 08 13:03:33 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns | |
Aug 08 13:03:33 localhost kernel: Zone ranges: | |
Aug 08 13:03:33 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] | |
Aug 08 13:03:33 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] | |
Aug 08 13:03:33 localhost kernel: Normal [mem 0x0000000100000000-0x00000007bfffffff] | |
Aug 08 13:03:33 localhost kernel: Movable zone start for each node | |
Aug 08 13:03:33 localhost kernel: Early memory node ranges | |
Aug 08 13:03:33 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] | |
Aug 08 13:03:33 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bfff2fff] | |
Aug 08 13:03:33 localhost kernel: node 0: [mem 0x0000000100000000-0x00000007bfffffff] | |
Aug 08 13:03:33 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000007bfffffff] | |
Aug 08 13:03:33 localhost kernel: On node 0 totalpages: 7864209 | |
Aug 08 13:03:33 localhost kernel: DMA zone: 64 pages used for memmap | |
Aug 08 13:03:33 localhost kernel: DMA zone: 21 pages reserved | |
Aug 08 13:03:33 localhost kernel: DMA zone: 3998 pages, LIFO batch:0 | |
Aug 08 13:03:33 localhost kernel: DMA32 zone: 12224 pages used for memmap | |
Aug 08 13:03:33 localhost kernel: DMA32 zone: 782323 pages, LIFO batch:31 | |
Aug 08 13:03:33 localhost kernel: Normal zone: 110592 pages used for memmap | |
Aug 08 13:03:33 localhost kernel: Normal zone: 7077888 pages, LIFO batch:31 | |
Aug 08 13:03:33 localhost kernel: ACPI: Local APIC address 0xfee00000 | |
Aug 08 13:03:33 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) | |
Aug 08 13:03:33 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 | |
Aug 08 13:03:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) | |
Aug 08 13:03:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) | |
Aug 08 13:03:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) | |
Aug 08 13:03:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) | |
Aug 08 13:03:33 localhost kernel: ACPI: IRQ5 used by override. | |
Aug 08 13:03:33 localhost kernel: ACPI: IRQ9 used by override. | |
Aug 08 13:03:33 localhost kernel: ACPI: IRQ10 used by override. | |
Aug 08 13:03:33 localhost kernel: ACPI: IRQ11 used by override. | |
Aug 08 13:03:33 localhost kernel: Using ACPI (MADT) for SMP configuration information | |
Aug 08 13:03:33 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs | |
Aug 08 13:03:33 localhost kernel: e820: [mem 0xc0000000-0xfffbbfff] available for PCI devices | |
Aug 08 13:03:33 localhost kernel: Booting paravirtualized kernel on KVM | |
Aug 08 13:03:33 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns | |
Aug 08 13:03:33 localhost kernel: setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:8 nr_node_ids:1 | |
Aug 08 13:03:33 localhost kernel: PERCPU: Embedded 32 pages/cpu @ffff8807bfc00000 s92904 r8192 d29976 u262144 | |
Aug 08 13:03:33 localhost kernel: pcpu-alloc: s92904 r8192 d29976 u262144 alloc=1*2097152 | |
Aug 08 13:03:33 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 | |
Aug 08 13:03:33 localhost kernel: Built 1 zonelists in Node order, mobility grouping on. Total pages: 7741308 | |
Aug 08 13:03:33 localhost kernel: Policy zone: Normal | |
Aug 08 13:03:33 localhost kernel: Kernel command line: BOOT_IMAGE=/syslinux/vmlinuz.A init=/usr/lib/systemd/systemd boot=local rootwait ro noresume noswap loglevel=7 noinitrd console=ttyS0 vsyscall=emulate security=apparmor systemd.unified_cgroup_hierarchy=false systemd.legacy_systemd_cgroup_controller=true dm_verity.error_behavior=3 dm_verity.max_bios=-1 dm_verity.dev_wait=1 i915.modeset=1 cros_efi root=/dev/dm-0 "dm=1 vroot none ro 1,0 2539520 verity payload=PARTUUID=72C5C8D0-6721-0E4A-9408-2C95B1A56C3E hashtree=PARTUUID=72C5C8D0-6721-0E4A-9408-2C95B1A56C3E hashstart=2539520 alg=sha1 root_hexdigest=bd2e40281a062c14b4946005d2e425cb8e7ea881 salt=547509642c1cf4f33b76d318f1eec6f873178e767793fdcc22435f5ed85c08b2" | |
Aug 08 13:03:33 localhost kernel: device-mapper: init: will configure 1 devices | |
Aug 08 13:03:33 localhost kernel: PID hash table entries: 4096 (order: 3, 32768 bytes) | |
Aug 08 13:03:33 localhost kernel: Memory: 30886652K/31456836K available (5954K kernel code, 1019K rwdata, 1752K rodata, 1172K init, 752K bss, 570184K reserved, 0K cma-reserved) | |
Aug 08 13:03:33 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1 | |
Aug 08 13:03:33 localhost kernel: Kernel/User page tables isolation: enabled | |
Aug 08 13:03:33 localhost kernel: Hierarchical RCU implementation. | |
Aug 08 13:03:33 localhost kernel: Build-time adjustment of leaf fanout to 64. | |
Aug 08 13:03:33 localhost kernel: RCU restricting CPUs from NR_CPUS=64 to nr_cpu_ids=8. | |
Aug 08 13:03:33 localhost kernel: RCU: Adjusting geometry for rcu_fanout_leaf=64, nr_cpu_ids=8 | |
Aug 08 13:03:33 localhost kernel: NR_IRQS:4352 nr_irqs:488 16 | |
Aug 08 13:03:33 localhost kernel: console [ttyS0] enabled | |
Aug 08 13:03:33 localhost kernel: tsc: Initial usec timer 1699904 | |
Aug 08 13:03:33 localhost kernel: tsc: Detected 2600.000 MHz processor | |
Aug 08 13:03:33 localhost kernel: Calibrating delay loop (skipped) preset value.. 5200.00 BogoMIPS (lpj=2600000) | |
Aug 08 13:03:33 localhost kernel: pid_max: default: 32768 minimum: 301 | |
Aug 08 13:03:33 localhost kernel: ACPI: Core revision 20150930 | |
Aug 08 13:03:33 localhost kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded | |
Aug 08 13:03:33 localhost kernel: Security Framework initialized | |
Aug 08 13:03:33 localhost kernel: Yama: becoming mindful. | |
Aug 08 13:03:33 localhost kernel: AppArmor: AppArmor initialized | |
Aug 08 13:03:33 localhost kernel: Chromium OS LSM: enabled | |
Aug 08 13:03:33 localhost kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes) | |
Aug 08 13:03:33 localhost kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes) | |
Aug 08 13:03:33 localhost kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes) | |
Aug 08 13:03:33 localhost kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes) | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys io | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys memory | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys devices | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys freezer | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys net_cls | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys perf_event | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys net_prio | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys hugetlb | |
Aug 08 13:03:33 localhost kernel: Initializing cgroup subsys pids | |
Aug 08 13:03:33 localhost kernel: CPU: Physical Processor ID: 0 | |
Aug 08 13:03:33 localhost kernel: CPU: Processor Core ID: 0 | |
Aug 08 13:03:33 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8 | |
Aug 08 13:03:33 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32, 1GB 0 | |
Aug 08 13:03:33 localhost kernel: Freeing SMP alternatives memory: 24K | |
Aug 08 13:03:33 localhost kernel: ftrace: allocating 21595 entries in 85 pages | |
Aug 08 13:03:33 localhost kernel: x2apic enabled | |
Aug 08 13:03:33 localhost kernel: Switched APIC routing to physical x2apic. | |
Aug 08 13:03:33 localhost kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 | |
Aug 08 13:03:33 localhost kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.60GHz (family: 0x6, model: 0x2d, stepping: 0x7) | |
Aug 08 13:03:33 localhost kernel: Performance Events: unsupported p6 CPU model 45 no PMU driver, software events only. | |
Aug 08 13:03:33 localhost kernel: x86: Booting SMP configuration: | |
Aug 08 13:03:33 localhost kernel: .... node #0, CPUs: #1 | |
Aug 08 13:03:33 localhost kernel: kvm-clock: cpu 1, msr 7:bfff9041, secondary cpu clock | |
Aug 08 13:03:33 localhost kernel: #2 | |
Aug 08 13:03:33 localhost kernel: kvm-clock: cpu 2, msr 7:bfff9081, secondary cpu clock | |
Aug 08 13:03:33 localhost kernel: #3 | |
Aug 08 13:03:33 localhost kernel: kvm-clock: cpu 3, msr 7:bfff90c1, secondary cpu clock | |
Aug 08 13:03:33 localhost kernel: #4 | |
Aug 08 13:03:33 localhost kernel: kvm-clock: cpu 4, msr 7:bfff9101, secondary cpu clock | |
Aug 08 13:03:33 localhost kernel: #5 | |
Aug 08 13:03:33 localhost kernel: kvm-clock: cpu 5, msr 7:bfff9141, secondary cpu clock | |
Aug 08 13:03:33 localhost kernel: #6 | |
Aug 08 13:03:33 localhost kernel: kvm-clock: cpu 6, msr 7:bfff9181, secondary cpu clock | |
Aug 08 13:03:33 localhost kernel: #7 | |
Aug 08 13:03:33 localhost kernel: kvm-clock: cpu 7, msr 7:bfff91c1, secondary cpu clock | |
Aug 08 13:03:33 localhost kernel: x86: Booted up 1 node, 8 CPUs | |
Aug 08 13:03:33 localhost kernel: smpboot: Total of 8 processors activated (41600.00 BogoMIPS) | |
Aug 08 13:03:33 localhost kernel: devtmpfs: initialized | |
Aug 08 13:03:33 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns | |
Aug 08 13:03:33 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes) | |
Aug 08 13:03:33 localhost kernel: RTC time: 13:03:30, date: 08/08/18 | |
Aug 08 13:03:33 localhost kernel: NET: Registered protocol family 16 | |
Aug 08 13:03:33 localhost kernel: cpuidle: using governor ladder | |
Aug 08 13:03:33 localhost kernel: cpuidle: using governor menu | |
Aug 08 13:03:33 localhost kernel: clocksource: Switched to clocksource kvm-clock | |
Aug 08 13:03:33 localhost kernel: ACPI: bus type PCI registered | |
Aug 08 13:03:33 localhost kernel: PCI: Using configuration type 1 for base access | |
Aug 08 13:03:33 localhost kernel: ACPI: Added _OSI(Module Device) | |
Aug 08 13:03:33 localhost kernel: ACPI: Added _OSI(Processor Device) | |
Aug 08 13:03:33 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions) | |
Aug 08 13:03:33 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device) | |
Aug 08 13:03:33 localhost kernel: ACPI: Executed 2 blocks of module-level executable AML code | |
Aug 08 13:03:33 localhost kernel: ACPI: Interpreter enabled | |
Aug 08 13:03:33 localhost kernel: ACPI: (supports S0 S3 S5) | |
Aug 08 13:03:33 localhost kernel: ACPI: Using IOAPIC for interrupt routing | |
Aug 08 13:03:33 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug | |
Aug 08 13:03:33 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) | |
Aug 08 13:03:33 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI] | |
Aug 08 13:03:33 localhost kernel: acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM | |
Aug 08 13:03:33 localhost kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. | |
Aug 08 13:03:33 localhost kernel: PCI host bridge to bus 0000:00 | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff] | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc03f] | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebfe000-0xfebfe07f] | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc040-0xc07f] | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebff000-0xfebff1ff] | |
Aug 08 13:03:33 localhost kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11) | |
Aug 08 13:03:33 localhost kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11) | |
Aug 08 13:03:33 localhost kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11) | |
Aug 08 13:03:33 localhost kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11) | |
Aug 08 13:03:33 localhost kernel: ACPI: PCI Interrupt Link [LNKS] (IRQs *9) | |
Aug 08 13:03:33 localhost kernel: ACPI: Enabled 16 GPEs in block 00 to 0F | |
Aug 08 13:03:33 localhost kernel: SCSI subsystem initialized | |
Aug 08 13:03:33 localhost kernel: libata version 3.00 loaded. | |
Aug 08 13:03:33 localhost kernel: PCI: Using ACPI for IRQ routing | |
Aug 08 13:03:33 localhost kernel: PCI: pci_cache_line_size set to 64 bytes | |
Aug 08 13:03:33 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] | |
Aug 08 13:03:33 localhost kernel: e820: reserve RAM buffer [mem 0xbfff3000-0xbfffffff] | |
Aug 08 13:03:33 localhost kernel: amd_nb: Cannot enumerate AMD northbridges | |
Aug 08 13:03:33 localhost kernel: AppArmor: AppArmor Filesystem Enabled | |
Aug 08 13:03:33 localhost kernel: pnp: PnP ACPI init | |
Aug 08 13:03:33 localhost kernel: pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active) | |
Aug 08 13:03:33 localhost kernel: pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active) | |
Aug 08 13:03:33 localhost kernel: pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active) | |
Aug 08 13:03:33 localhost kernel: pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active) | |
Aug 08 13:03:33 localhost kernel: pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active) | |
Aug 08 13:03:33 localhost kernel: pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active) | |
Aug 08 13:03:33 localhost kernel: pnp 00:06: Plug and Play ACPI device, IDs PNP0501 (active) | |
Aug 08 13:03:33 localhost kernel: pnp: PnP ACPI: found 7 devices | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] | |
Aug 08 13:03:33 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] | |
Aug 08 13:03:33 localhost kernel: NET: Registered protocol family 2 | |
Aug 08 13:03:33 localhost kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes) | |
Aug 08 13:03:33 localhost kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes) | |
Aug 08 13:03:33 localhost kernel: TCP: Hash tables configured (established 262144 bind 65536) | |
Aug 08 13:03:33 localhost kernel: UDP hash table entries: 16384 (order: 8, 1572864 bytes) | |
Aug 08 13:03:33 localhost kernel: UDP-Lite hash table entries: 16384 (order: 8, 1572864 bytes) | |
Aug 08 13:03:33 localhost kernel: NET: Registered protocol family 1 | |
Aug 08 13:03:33 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers | |
Aug 08 13:03:33 localhost kernel: PCI: CLS 0 bytes, default 64 | |
Aug 08 13:03:33 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) | |
Aug 08 13:03:33 localhost kernel: software IO TLB [mem 0xbbff3000-0xbfff3000] (64MB) mapped at [ffff8800bbff3000-ffff8800bfff2fff] | |
Aug 08 13:03:33 localhost kernel: RAPL PMU detected, API unit is 2^-32 Joules, 3 fixed counters 10737418240 ms ovfl timer | |
Aug 08 13:03:33 localhost kernel: hw unit of domain pp0-core 2^-0 Joules | |
Aug 08 13:03:33 localhost kernel: hw unit of domain package 2^-0 Joules | |
Aug 08 13:03:33 localhost kernel: hw unit of domain dram 2^-0 Joules | |
Aug 08 13:03:33 localhost kernel: Scanning for low memory corruption every 60 seconds | |
Aug 08 13:03:33 localhost kernel: audit: initializing netlink subsys (disabled) | |
Aug 08 13:03:33 localhost kernel: audit: type=2000 audit(1533733410.426:1): initialized | |
Aug 08 13:03:33 localhost kernel: HugeTLB registered 2 MB page size, pre-allocated 0 pages | |
Aug 08 13:03:33 localhost kernel: VFS: Disk quotas dquot_6.6.0 | |
Aug 08 13:03:33 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) | |
Aug 08 13:03:33 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) | |
Aug 08 13:03:33 localhost kernel: io scheduler noop registered (default) | |
Aug 08 13:03:33 localhost kernel: io scheduler deadline registered | |
Aug 08 13:03:33 localhost kernel: io scheduler cfq registered | |
Aug 08 13:03:33 localhost kernel: pci_hotplug: PCI Hot Plug PCI Core version: 0.5 | |
Aug 08 13:03:33 localhost kernel: pciehp: PCI Express Hot Plug Controller Driver version: 0.4 | |
Aug 08 13:03:33 localhost kernel: intel_idle: does not run on family 6 model 45 | |
Aug 08 13:03:33 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 | |
Aug 08 13:03:33 localhost kernel: ACPI: Power Button [PWRF] | |
Aug 08 13:03:33 localhost kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1 | |
Aug 08 13:03:33 localhost kernel: ACPI: Sleep Button [SLPF] | |
Aug 08 13:03:33 localhost kernel: ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 | |
Aug 08 13:03:33 localhost kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver | |
Aug 08 13:03:33 localhost kernel: ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10 | |
Aug 08 13:03:33 localhost kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver | |
Aug 08 13:03:33 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled | |
Aug 08 13:03:33 localhost kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A | |
Aug 08 13:03:33 localhost kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A | |
Aug 08 13:03:33 localhost kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A | |
Aug 08 13:03:33 localhost kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A | |
Aug 08 13:03:33 localhost kernel: Non-volatile memory driver v1.3 | |
Aug 08 13:03:33 localhost kernel: loop: module loaded | |
Aug 08 13:03:33 localhost kernel: scsi host0: Virtio SCSI HBA | |
Aug 08 13:03:33 localhost kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 | |
Aug 08 13:03:33 localhost kernel: sd 0:0:1:0: [sda] 41943040 512-byte logical blocks: (21.5 GB/20.0 GiB) | |
Aug 08 13:03:33 localhost kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks | |
Aug 08 13:03:33 localhost kernel: sd 0:0:1:0: [sda] Write Protect is off | |
Aug 08 13:03:33 localhost kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 | |
Aug 08 13:03:33 localhost kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA | |
Aug 08 13:03:33 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 | |
Aug 08 13:03:33 localhost kernel: i8042: Warning: Keylock active | |
Aug 08 13:03:33 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 | |
Aug 08 13:03:33 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. | |
Aug 08 13:03:33 localhost kernel: GPT:20971519 != 41943039 | |
Aug 08 13:03:33 localhost kernel: GPT:Alternate GPT header not at the end of the disk. | |
Aug 08 13:03:33 localhost kernel: GPT:20971519 != 41943039 | |
Aug 08 13:03:33 localhost kernel: GPT: Use GNU Parted to correct GPT errors. | |
Aug 08 13:03:33 localhost kernel: sda: sda1 sda2 sda3 sda4 sda5 sda6 sda7 sda8 sda9 sda10 sda11 sda12 | |
Aug 08 13:03:33 localhost kernel: sd 0:0:1:0: [sda] Attached SCSI disk | |
Aug 08 13:03:33 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 | |
Aug 08 13:03:33 localhost kernel: rtc_cmos 00:00: RTC can wake from S4 | |
Aug 08 13:03:33 localhost kernel: rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0 | |
Aug 08 13:03:33 localhost kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram | |
Aug 08 13:03:33 localhost kernel: iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11 | |
Aug 08 13:03:33 localhost kernel: iTCO_vendor_support: vendor-support=0 | |
Aug 08 13:03:33 localhost kernel: device-mapper: ioctl: 4.34.0-ioctl (2015-10-28) initialised: dm-devel@redhat.com | |
Aug 08 13:03:33 localhost kernel: device-mapper: verity-chromeos: dm-verity-chromeos registered | |
Aug 08 13:03:33 localhost kernel: Netfilter messages via NETLINK v0.30. | |
Aug 08 13:03:33 localhost kernel: nf_conntrack version 0.5.0 (65536 buckets, 262144 max) | |
Aug 08 13:03:33 localhost kernel: ctnetlink v0.93: registering with nfnetlink. | |
Aug 08 13:03:33 localhost kernel: ip_tables: (C) 2000-2006 Netfilter Core Team | |
Aug 08 13:03:33 localhost kernel: Initializing XFRM netlink socket | |
Aug 08 13:03:33 localhost kernel: NET: Registered protocol family 10 | |
Aug 08 13:03:33 localhost kernel: NET: Registered protocol family 17 | |
Aug 08 13:03:33 localhost kernel: bridge: automatic filtering via arp/ip/ip6tables has been deprecated. Update your scripts to load br_netfilter if you need this. | |
Aug 08 13:03:33 localhost kernel: registered taskstats version 1 | |
Aug 08 13:03:33 localhost kernel: AppArmor: AppArmor sha1 policy hashing enabled | |
Aug 08 13:03:33 localhost kernel: ima: No TPM chip found, activating TPM-bypass! | |
Aug 08 13:03:33 localhost kernel: Magic number: 14:503:78 | |
Aug 08 13:03:33 localhost kernel: device-mapper: init: attempting early device configuration. | |
Aug 08 13:03:33 localhost kernel: device-mapper: init: adding target '0 2539520 verity payload=PARTUUID=72C5C8D0-6721-0E4A-9408-2C95B1A56C3E hashtree=PARTUUID=72C5C8D0-6721-0E4A-9408-2C95B1A56C3E hashstart=2539520 alg=sha1 root_hexdigest=bd2e40281a062c14b4946005d2e425cb8e7ea881 salt=547509642c1cf4f33b76d318f1eec6f873178e767793fdcc22435f5ed85c08b2' | |
Aug 08 13:03:33 localhost kernel: device-mapper: verity: Argument 0: 'payload=PARTUUID=72C5C8D0-6721-0E4A-9408-2C95B1A56C3E' | |
Aug 08 13:03:33 localhost kernel: device-mapper: verity: Argument 1: 'hashtree=PARTUUID=72C5C8D0-6721-0E4A-9408-2C95B1A56C3E' | |
Aug 08 13:03:33 localhost kernel: device-mapper: verity: Argument 2: 'hashstart=2539520' | |
Aug 08 13:03:33 localhost kernel: device-mapper: verity: Argument 3: 'alg=sha1' | |
Aug 08 13:03:33 localhost kernel: device-mapper: verity: Argument 4: 'root_hexdigest=bd2e40281a062c14b4946005d2e425cb8e7ea881' | |
Aug 08 13:03:33 localhost kernel: device-mapper: verity: Argument 5: 'salt=547509642c1cf4f33b76d318f1eec6f873178e767793fdcc22435f5ed85c08b2' | |
Aug 08 13:03:33 localhost kernel: device-mapper: init: dm-0 is ready | |
Aug 08 13:03:33 localhost kernel: EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
Aug 08 13:03:33 localhost kernel: EXT4-fs (dm-0): mounting ext2 file system using the ext4 subsystem
Aug 08 13:03:33 localhost kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: (null)
Aug 08 13:03:33 localhost kernel: VFS: Mounted root (ext2 filesystem) readonly on device 254:0.
Aug 08 13:03:33 localhost kernel: devtmpfs: mounted
Aug 08 13:03:33 localhost kernel: Freeing unused kernel memory: 1172K
Aug 08 13:03:33 localhost kernel: Write protecting the kernel read-only data: 8192k
Aug 08 13:03:33 localhost kernel: Freeing unused kernel memory: 180K
Aug 08 13:03:33 localhost kernel: Freeing unused kernel memory: 296K
Aug 08 13:03:33 localhost kernel: random: nonblocking pool is initialized
Aug 08 13:03:33 localhost kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x257a3c3232d, max_idle_ns: 440795236700 ns
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.215:2): action="dont_measure" fsmagic="0x9fa0" res=1
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.217:3): action="dont_measure" fsmagic="0x62656572" res=1
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.218:4): action="dont_measure" fsmagic="0x64626720" res=1
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.219:5): action="dont_measure" fsmagic="0x01021994" res=1
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.221:6): action="dont_measure" fsmagic="0x858458f6" res=1
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.222:7): action="dont_measure" fsmagic="0x1cd1" res=1
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.223:8): action="dont_measure" fsmagic="0x42494e4d" res=1
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.225:9): action="dont_measure" fsmagic="0x73636673" res=1
Aug 08 13:03:33 localhost kernel: audit: type=1805 audit(1533733412.226:10): action="dont_measure" fsmagic="0xf97cff8c" res=1
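[Annotation, not part of the log] The `type=1805` records above are IMA policy rules being loaded; each `dont_measure` rule excludes one pseudo-filesystem, identified by its filesystem magic number. These values are the standard constants from the kernel's `include/uapi/linux/magic.h`; the lookup table below is an annotation I am adding for readability, not output from the system:

```python
# Filesystem magic numbers from the IMA "dont_measure" audit records above,
# mapped to their kernel constant names (include/uapi/linux/magic.h).
FSMAGIC = {
    0x9fa0:     "PROC_SUPER_MAGIC",    # procfs
    0x62656572: "SYSFS_MAGIC",         # sysfs (ASCII "beer")
    0x64626720: "DEBUGFS_MAGIC",       # debugfs (ASCII "dbg ")
    0x01021994: "TMPFS_MAGIC",         # tmpfs
    0x858458f6: "RAMFS_MAGIC",         # ramfs
    0x1cd1:     "DEVPTS_SUPER_MAGIC",  # devpts
    0x42494e4d: "BINFMTFS_MAGIC",      # binfmt_misc (ASCII "BINM")
    0x73636673: "SECURITYFS_MAGIC",    # securityfs (ASCII "scfs")
    0xf97cff8c: "SELINUX_MAGIC",       # selinuxfs
}
print(FSMAGIC[0x9fa0])  # → PROC_SUPER_MAGIC
```

In other words, the default policy skips measuring files on procfs, sysfs, debugfs, tmpfs, ramfs, devpts, binfmt_misc, securityfs, and selinuxfs, which matches the nine rules logged here.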
Aug 08 13:03:33 localhost systemd[1]: Successfully loaded the IMA custom policy /etc/ima/ima-policy.
Aug 08 13:03:33 localhost kernel: IMA: policy update completed
Aug 08 13:03:33 localhost systemd[1]: systemd 232 running in system mode. (+PAM +AUDIT -SELINUX +IMA +APPARMOR +SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP +GCRYPT -GNUTLS -ACL -XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD -IDN)
Aug 08 13:03:33 localhost systemd[1]: Detected virtualization kvm.
Aug 08 13:03:33 localhost systemd[1]: Detected architecture x86-64.
Aug 08 13:03:33 localhost systemd[1]: Initializing machine ID from KVM UUID.
Aug 08 13:03:33 localhost systemd[1]: Installed transient /etc/machine-id file.
Aug 08 13:03:33 localhost systemd[1]: Listening on Process Core Dump Socket.
Aug 08 13:03:33 localhost systemd[1]: Listening on Journal Socket.
Aug 08 13:03:33 localhost systemd[1]: Listening on udev Control Socket.
Aug 08 13:03:33 localhost systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
Aug 08 13:03:33 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Aug 08 13:03:33 localhost systemd[1]: Listening on Network Service Netlink Socket.
Aug 08 13:03:33 localhost systemd[1]: Listening on udev Kernel Socket.
Aug 08 13:03:33 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Aug 08 13:03:33 localhost systemd[1]: Reached target Swap.
Aug 08 13:03:33 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Aug 08 13:03:33 localhost systemd[1]: Reached target Paths.
Aug 08 13:03:33 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Aug 08 13:03:33 localhost systemd[1]: Created slice System Slice.
Aug 08 13:03:33 localhost systemd[1]: Mounting /mnt/disks...
Aug 08 13:03:33 localhost systemd[1]: Created slice Slice for System Daemons.
Aug 08 13:03:33 localhost systemd[1]: Mounting Huge Pages File System...
Aug 08 13:03:33 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Aug 08 13:03:33 localhost systemd[1]: Mounting Temporary Directory...
Aug 08 13:03:33 localhost systemd[1]: Mounting POSIX Message Queue File System...
Aug 08 13:03:33 localhost systemd[1]: Mounting Debug File System...
Aug 08 13:03:33 localhost systemd[1]: Created slice system-serial\x2dgetty.slice.
Aug 08 13:03:33 localhost systemd[1]: Starting Apply Kernel Variables...
Aug 08 13:03:33 localhost systemd[1]: Starting udev Coldplug all Devices...
Aug 08 13:03:33 localhost systemd[1]: Starting Create list of required static device nodes for the current kernel...
Aug 08 13:03:33 localhost systemd[1]: Starting Mount /dev/shm with 'noexec'...
Aug 08 13:03:33 localhost systemd[1]: Created slice system-systemd\x2dfsck.slice.
Aug 08 13:03:33 localhost systemd[1]: Listening on Journal Audit Socket.
Aug 08 13:03:33 localhost systemd[1]: Starting Journal Service...
Aug 08 13:03:33 localhost systemd[1]: Created slice User and Session Slice.
Aug 08 13:03:33 localhost systemd[1]: Reached target Slices.
Aug 08 13:03:33 localhost systemd[1]: Reached target Remote File Systems.
Aug 08 13:03:33 localhost systemd[1]: Started Remount Root and Kernel File Systems.
Aug 08 13:03:33 localhost systemd[1]: Started Apply Kernel Variables.
Aug 08 13:03:33 localhost systemd[1]: Started Create list of required static device nodes for the current kernel.
Aug 08 13:03:33 localhost systemd[1]: Mounted /mnt/disks.
Aug 08 13:03:33 localhost systemd[1]: Mounted Huge Pages File System.
Aug 08 13:03:33 localhost systemd[1]: Mounted Temporary Directory.
Aug 08 13:03:33 localhost systemd[1]: Mounted POSIX Message Queue File System.
Aug 08 13:03:33 localhost systemd[1]: Mounted Debug File System.
Aug 08 13:03:33 localhost systemd[1]: Started Mount /dev/shm with 'noexec'.
Aug 08 13:03:33 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Aug 08 13:03:33 localhost systemd[1]: Started udev Coldplug all Devices.
Aug 08 13:03:33 localhost systemd[1]: Started Create Static Device Nodes in /dev.
Aug 08 13:03:33 localhost systemd[1]: Starting udev Kernel Device Manager...
Aug 08 13:03:33 localhost systemd[1]: Reached target Local File Systems (Pre).
Aug 08 13:03:33 localhost systemd-journald[419]: Journal started
Aug 08 13:03:33 localhost systemd-journald[419]: Runtime journal (/run/log/journal/651b6e0e5349cb2e139d1183caac3a1b) is 0B, max 0B, 0B free.
Aug 08 13:03:33 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:33 localhost systemd[1]: Started Journal Service.
Aug 08 13:03:33 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:33 localhost audit[439]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-udevd" hash="sha256:eb808b9bd1f1467d7aff6cbe1e4d136dfaae7c85e157fbe8d0497777b20282a5" ppid=1 pid=439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(md-udevd)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:33 localhost audit[439]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55f6d206b960 a1=55f6d20ad490 a2=55f6d202e690 a3=30 items=0 ppid=1 pid=439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-udevd" exe="/usr/lib/systemd/systemd-udevd" key=(null)
Aug 08 13:03:33 localhost audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-udevd"
Aug 08 13:03:34 localhost systemd[1]: Started udev Kernel Device Manager.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost systemd[1]: Found device /dev/ttyS0.
Aug 08 13:03:34 localhost audit[475]: INTEGRITY_RULE file="/lib/udev/scsi_id" hash="sha256:d43fe08ed09290fb17bdb7ad71cdae06e4e75b1f4b26cbe283cf6f3d81242fdf" ppid=468 pid=475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-udevd" exe="/usr/lib/systemd/systemd-udevd"
Aug 08 13:03:34 localhost audit[475]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=7ffc472552c0 a1=7ffc472556c0 a2=558b26f6d230 a3=8 items=0 ppid=468 pid=475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="scsi_id" exe="/lib/udev/scsi_id" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F6C69622F756465762F736373695F6964002D2D6578706F7274002D2D77686974656C6973746564002D64002F6465762F736461
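[Annotation, not part of the log] When a command line contains spaces or other non-printable bytes, the audit subsystem hex-encodes the PROCTITLE record, with NUL bytes separating the argv entries (compare the quoted plain-text proctitle a few lines up). The many hex proctitles in the rest of this log can all be read the same way; a minimal decoder, using the hex copied verbatim from the record above:

```python
# Decode an audit PROCTITLE record: hex string → bytes, then split the
# command line on the NUL separators between argv entries.
# The sample hex is copied verbatim from the audit record above.
hexdata = ("2F6C69622F756465762F736373695F6964002D2D6578706F7274"
           "002D2D77686974656C6973746564002D64002F6465762F736461")
argv = bytes.fromhex(hexdata).decode("ascii").split("\0")
print(argv)  # → ['/lib/udev/scsi_id', '--export', '--whitelisted', '-d', '/dev/sda']
```

So this record is udev running `scsi_id --export --whitelisted -d /dev/sda` to probe the boot disk.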
Aug 08 13:03:34 localhost kernel: Chromium OS LSM: dev(254,0): read-only
Aug 08 13:03:34 localhost kernel: Chromium OS LSM: module locking engaged.
Aug 08 13:03:34 localhost kernel: Chromium OS LSM: init_module locked obj="/lib/modules/4.4.111+/kernel/crypto/cryptd.ko" pid=472 cmdline="/usr/lib/systemd/systemd-udevd"
Aug 08 13:03:34 localhost kernel: AVX version of gcm_enc/dec engaged.
Aug 08 13:03:34 localhost kernel: AES CTR mode by8 optimization enabled
Aug 08 13:03:34 localhost systemd[1]: Found device PersistentDisk STATE.
Aug 08 13:03:34 localhost systemd[1]: Starting File System Check on /dev/sda1...
Aug 08 13:03:34 localhost audit[502]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-fsck" hash="sha256:37362bb158157f21e2cdbcc51c28415de599d20738513cd9967ba8c219f4cb4f" ppid=1 pid=502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(emd-fsck)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:34 localhost audit[502]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55f6d20b4dc0 a1=55f6d20bbaf0 a2=55f6d202e030 a3=20 items=0 ppid=1 pid=502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-fsck" exe="/usr/lib/systemd/systemd-fsck" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D642D6673636B002F6465762F73646131
Aug 08 13:03:34 localhost systemd[1]: Found device PersistentDisk OEM.
Aug 08 13:03:34 localhost systemd[1]: Mounting /usr/share/oem...
Aug 08 13:03:34 localhost kernel: EXT4-fs (sda8): mounted filesystem with ordered data mode. Opts: (null)
Aug 08 13:03:34 localhost systemd[1]: Mounted /usr/share/oem.
Aug 08 13:03:34 localhost audit[506]: INTEGRITY_RULE file="/sbin/fsck" hash="sha256:4567d9772c86e40123c01d0f5faef9eb4b738846645fb13aeddb5e1b9f0337e0" ppid=502 pid=506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-fsck" exe="/usr/lib/systemd/systemd-fsck"
Aug 08 13:03:34 localhost audit[506]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a44c2c3121 a1=7fff9e4acd00 a2=7fff9e4acec0 a3=0 items=0 ppid=502 pid=506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck" exe="/sbin/fsck" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7362696E2F6673636B002D61002D54002D6C002D4D002D4335002F6465762F73646131
Aug 08 13:03:34 localhost audit[514]: INTEGRITY_RULE file="/sbin/e2fsck" hash="sha256:3eb47c8dee219ec69c025a2dbcfaa232cb3bdb9ae6898764bbab4163f6db525d" ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck" exe="/sbin/fsck"
Aug 08 13:03:34 localhost audit[514]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55c68daeb2d0 a1=7ffd0614f780 a2=7ffd06150cf8 a3=7f673794b020 items=0 ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=6673636B2E65787434002D61002D4335002F6465762F73646131
Aug 08 13:03:34 localhost audit[514]: INTEGRITY_RULE file="/lib64/libext2fs.so.2.4" hash="sha256:ccdab0b0e47bc0679cd2bcdf9350fe9e022e336019269366b12540b017ffea73" ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck"
Aug 08 13:03:34 localhost audit[514]: SYSCALL arch=c000003e syscall=9 success=yes exit=140049048420352 a0=0 a1=4d784 a2=5 a3=802 items=0 ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=6673636B2E65787434002D61002D4335002F6465762F73646131
Aug 08 13:03:34 localhost audit[514]: INTEGRITY_RULE file="/lib64/libcom_err.so.2.1" hash="sha256:e9eb4526253fe477902f0a66775f59858bb61893e047200e51a10daf0dec063b" ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck"
Aug 08 13:03:34 localhost audit[514]: SYSCALL arch=c000003e syscall=9 success=yes exit=140049048399872 a0=0 a1=406e a2=5 a3=802 items=0 ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=6673636B2E65787434002D61002D4335002F6465762F73646131
Aug 08 13:03:34 localhost audit[514]: INTEGRITY_RULE file="/lib64/libe2p.so.2.3" hash="sha256:4fed153f79bf46d05b208552b6be0ee9e309b94c8fb41aaa49dca2d9ebea1352" ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck"
Aug 08 13:03:34 localhost audit[514]: SYSCALL arch=c000003e syscall=9 success=yes exit=140049048072192 a0=0 a1=9254 a2=5 a3=802 items=0 ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=6673636B2E65787434002D61002D4335002F6465762F73646131
Aug 08 13:03:34 localhost audit[514]: INTEGRITY_RULE file="/lib64/libdl-2.23.so" hash="sha256:75b094e77d7999223b2765724150ebf441456263db4c14ab0f1c371e6939e29f" ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck"
Aug 08 13:03:34 localhost audit[514]: SYSCALL arch=c000003e syscall=9 success=yes exit=140049044402176 a0=0 a1=203090 a2=5 a3=802 items=0 ppid=506 pid=514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fsck.ext4" exe="/sbin/e2fsck" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=6673636B2E65787434002D61002D4335002F6465762F73646131
Aug 08 13:03:34 localhost systemd-fsck[502]: STATE: clean, 128362/383520 files, 693817/1533435 blocks
Aug 08 13:03:34 localhost systemd[1]: Started File System Check on /dev/sda1.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-fsck@dev-sda1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost systemd[1]: Mounting /mnt/stateful_partition...
Aug 08 13:03:34 localhost kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: commit=30
Aug 08 13:03:34 localhost systemd[1]: Mounted /mnt/stateful_partition.
Aug 08 13:03:34 localhost systemd[1]: Starting Make /mnt/stateful_partition private...
Aug 08 13:03:34 localhost systemd[1]: Starting Resize stateful partition...
Aug 08 13:03:34 localhost audit[527]: INTEGRITY_RULE file="/usr/share/cloud/resize-stateful" hash="sha256:70b6e1bbd40fe631b56753e5bd87a0089d3cf2648c1cbaa6b27af5a04fcc889f" ppid=1 pid=527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(stateful)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:34 localhost audit[527]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55f6d2062260 a1=55f6d202e170 a2=55f6d20d18d0 a3=6c75666574617473 items=0 ppid=1 pid=527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="resize-stateful" exe="/bin/bash" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F62696E2F62617368002F7573722F73686172652F636C6F75642F726573697A652D737461746566756C
Aug 08 13:03:34 localhost systemd[1]: Started Make /mnt/stateful_partition private.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=mnt-stateful_partition-make-private comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=mnt-stateful_partition-make-private comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost audit[529]: INTEGRITY_RULE file="/bin/lsblk" hash="sha256:e51d06b36eff59e33fc864341e979b601c411580b624eda674a23b10b97a9f57" ppid=527 pid=529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="resize-stateful" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[529]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=564482c613e0 a1=564482c63bb0 a2=564482c60ef0 a3=31 items=0 ppid=527 pid=529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lsblk" exe="/bin/lsblk" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=6C73626C6B002D50002D6F004E414D452C4653545950452C4D4F554E54504F494E54
Aug 08 13:03:34 localhost systemd[1]: Mounting /var...
Aug 08 13:03:34 localhost systemd[1]: home.mount: Directory /home to mount over is not empty, mounting anyway.
Aug 08 13:03:34 localhost systemd[1]: Mounting /home...
Aug 08 13:03:34 localhost systemd[1]: Mounted /var.
Aug 08 13:03:34 localhost systemd[1]: Mounted /home.
Aug 08 13:03:34 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Aug 08 13:03:34 localhost systemd[1]: var-lib-docker.mount: Directory /var/lib/docker to mount over is not empty, mounting anyway.
Aug 08 13:03:34 localhost systemd[1]: Mounting Bind mount /var/lib/docker to itself...
Aug 08 13:03:34 localhost audit[529]: INTEGRITY_RULE file="/lib64/libsmartcols.so.1.1.0" hash="sha256:8759876741f069a5a8975e04580759f7640cfb847222b3adc2ccab2f54e66c6d" ppid=527 pid=529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lsblk" exe="/bin/lsblk"
Aug 08 13:03:34 localhost audit[529]: SYSCALL arch=c000003e syscall=9 success=yes exit=139747276902400 a0=0 a1=24078 a2=5 a3=802 items=0 ppid=527 pid=529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lsblk" exe="/bin/lsblk" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=6C73626C6B002D50002D6F004E414D452C4653545950452C4D4F554E54504F494E54
Aug 08 13:03:34 localhost audit[535]: INTEGRITY_RULE file="/usr/bin/journalctl" hash="sha256:e43cc4fc76cadf9fe48980d72c1f3f2aece415a648570888b300e2b7a94bb1b4" ppid=1 pid=535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(urnalctl)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:34 localhost audit[535]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55f6d20501b0 a1=55f6d20d6290 a2=55f6d20c0b70 a3=10 items=0 ppid=1 pid=535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="journalctl" exe="/usr/bin/journalctl" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F62696E2F6A6F75726E616C63746C002D2D666C757368
Aug 08 13:03:34 localhost systemd[1]: Mounting /var/lib/cloud...
Aug 08 13:03:34 localhost systemd[1]: Mounting Bind mount /var/lib/toolbox to itself...
Aug 08 13:03:34 localhost systemd[1]: Mounting Bind mount /var/lib/google to itself...
Aug 08 13:03:34 localhost systemd[1]: Starting Init GCI filesystems...
Aug 08 13:03:34 localhost audit[547]: INTEGRITY_RULE file="/usr/share/cloud/mount-etc-overlay" hash="sha256:67f2d4bf68ff997c4966ccc0d1393c83b231084ab60724acb17ce741c5d4aa44" ppid=1 pid=547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(-overlay)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:34 localhost audit[547]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55f6d20ad730 a1=55f6d20358d0 a2=55f6d202c720 a3=6c7265766f2d6374 items=0 ppid=1 pid=547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mount-etc-overl" exe="/bin/bash" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F62696E2F7368002F7573722F73686172652F636C6F75642F6D6F756E742D6574632D6F7665726C6179
Aug 08 13:03:34 localhost audit[549]: INTEGRITY_RULE file="/bin/mountpoint" hash="sha256:d639970b78634b15c066320e30f06715cf44625a7793df8c3db664ad9e274fcb" ppid=547 pid=549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mount-etc-overl" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[549]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55db16849d70 a1=55db16847470 a2=55db16846ef0 a3=31 items=0 ppid=547 pid=549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mountpoint" exe="/bin/mountpoint" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=6D6F756E74706F696E74002D71002F657463
Aug 08 13:03:34 localhost systemd[1]: Mounted /var/lib/cloud.
Aug 08 13:03:34 localhost systemd[1]: Mounted Bind mount /var/lib/google to itself.
Aug 08 13:03:34 localhost audit[552]: INTEGRITY_RULE file="/bin/umount" hash="sha256:5a81deadddf2e1a4b29c8c093745ff79875c5be480da0165dde0764d48e55b1f" ppid=547 pid=552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mount-etc-overl" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[552]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55db1684ae30 a1=55db16847470 a2=55db16846ef0 a3=21 items=0 ppid=547 pid=552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="umount" exe="/bin/umount" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=756D6F756E74002F6574632F6D616368696E652D6964
Aug 08 13:03:34 localhost systemd[1]: Mounted Bind mount /var/lib/toolbox to itself.
Aug 08 13:03:34 localhost systemd[1]: Mounted Bind mount /var/lib/docker to itself.
Aug 08 13:03:34 localhost audit[557]: INTEGRITY_RULE file="/bin/rmdir" hash="sha256:6e999e3d0c2ccfeb001f513f6a6f51c5121327894ea279e2682b4b8272c4bca8" ppid=547 pid=557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mount-etc-overl" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[557]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55db1684a7e0 a1=55db1684ae30 a2=55db16846ef0 a3=20 items=0 ppid=547 pid=557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rmdir" exe="/usr/bin/coreutils" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D726D646972002F62696E2F726D646972002F746D702F6574635F6F7665726C6179
Aug 08 13:03:34 localhost systemd[1]: Starting Mount /var/lib/docker with 'exec'...
Aug 08 13:03:34 localhost audit[559]: INTEGRITY_RULE file="/bin/cp" hash="sha256:79c39d67b7969b943045e80485f9dc3202a11ddfc5c9f7fdb4287216d0255a90" ppid=547 pid=559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mount-etc-overl" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[559]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55db1684aa60 a1=55db1684ac30 a2=55db16846ef0 a3=30 items=0 ppid=547 pid=559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cp" exe="/usr/bin/coreutils" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D6370002F62696E2F6370002F72756E2F6D616368696E652D6964002F6574632F6D616368696E652D6964
Aug 08 13:03:34 localhost kernel: EXT4-fs (sda1): re-mounted. Opts: commit=30,data=ordered
Aug 08 13:03:34 localhost systemd[1]: Starting Mount /var/lib/toolbox with 'exec' and 'suid' bits...
Aug 08 13:03:34 localhost kernel: EXT4-fs (sda1): re-mounted. Opts: commit=30,data=ordered
Aug 08 13:03:34 localhost systemd[1]: Starting Mount /var/lib/google with 'exec'...
Aug 08 13:03:34 localhost kernel: EXT4-fs (sda1): re-mounted. Opts: commit=30,data=ordered
Aug 08 13:03:34 localhost systemd[1]: Started Init GCI filesystems.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=mount-etc-overlay comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=mount-etc-overlay comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost systemd[1]: Started Mount /var/lib/docker with 'exec'.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=var-lib-docker-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=var-lib-docker-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost systemd[1]: Started Mount /var/lib/toolbox with 'exec' and 'suid' bits.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=var-lib-toolbox-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=var-lib-toolbox-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost systemd[1]: Started Mount /var/lib/google with 'exec'.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=var-lib-google-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=var-lib-google-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost systemd-journald[419]: Time spent on flushing to /var is 5.412ms for 527 entries.
Aug 08 13:03:34 localhost systemd-journald[419]: System journal (/var/log/journal/651b6e0e5349cb2e139d1183caac3a1b) is 0B, max 0B, 0B free.
Aug 08 13:03:34 localhost audit[569]: INTEGRITY_RULE file="/sbin/blockdev" hash="sha256:d5cd76de2e386d9a60ee2f4cc08d860f5b1e7bab3953d2eb68a563cdd1ecf96f" ppid=530 pid=569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="resize-stateful" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[569]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=564482c649d0 a1=564482c64e20 a2=564482c60ef0 a3=31 items=0 ppid=530 pid=569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="blockdev" exe="/sbin/blockdev" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=626C6F636B646576002D2D676574737A002F6465762F73646131
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost systemd[1]: Reached target Local File Systems.
Aug 08 13:03:34 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Aug 08 13:03:34 localhost systemd[1]: Starting Rebuild Journal Catalog...
Aug 08 13:03:34 localhost systemd[1]: Started Flush Journal to Persistent Storage.
Aug 08 13:03:34 localhost systemd[1]: Starting Create Volatile Files and Directories...
Aug 08 13:03:34 localhost systemd-tmpfiles[576]: [/usr/lib/tmpfiles.d/var.conf:12] Duplicate line for path "/var/run", ignoring.
Aug 08 13:03:34 localhost systemd[1]: Started Rebuild Journal Catalog.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost audit[574]: INTEGRITY_RULE file="/usr/share/cloud/cgpt" hash="sha256:20473dbe3cf35b51aabcbae31bafdec386799b670f2eff33ea6de72fda2fb012" ppid=530 pid=574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="resize-stateful" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[574]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=564482c647f0 a1=564482c62720 a2=564482c60ef0 a3=30 items=0 ppid=530 pid=574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cgpt" exe="/usr/share/cloud/cgpt" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F73686172652F636C6F75642F6367707400726573697A65002F6465762F73646131
Aug 08 13:03:34 localhost audit[580]: INTEGRITY_RULE file="/sbin/resize2fs" hash="sha256:9cd78335293ffc90d2b3bf8509eca88c68057cbfe89f72f6c506302900d647b2" ppid=530 pid=580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="resize-stateful" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[580]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=564482c64d50 a1=564482c64ba0 a2=564482c60ef0 a3=21 items=0 ppid=530 pid=580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="resize2fs" exe="/sbin/resize2fs" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=726573697A65326673002F6465762F73646131
Aug 08 13:03:34 localhost resize-stateful[527]: resize2fs 1.43.6 (29-Aug-2017)
Aug 08 13:03:34 localhost kernel: EXT4-fs (sda1): resizing filesystem from 1533435 to 4154875 blocks
Aug 08 13:03:34 localhost systemd[1]: Started Create Volatile Files and Directories.
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 08 13:03:34 localhost systemd[1]: Starting Load Security Auditing Rules...
Aug 08 13:03:34 localhost audit[585]: INTEGRITY_RULE file="/sbin/augenrules" hash="sha256:29f2e4fa2a3a2776c6bd3826f54d3997b7cf772a10734408903ee4feb93c30e9" ppid=1 pid=585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(genrules)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:34 localhost audit[585]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55f6d206ac40 a1=55f6d20bbdd0 a2=55f6d2033460 a3=10 items=0 ppid=1 pid=585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F62696E2F62617368002F7362696E2F617567656E72756C6573002D2D6C6F6164
Aug 08 13:03:34 localhost audit[611]: INTEGRITY_RULE file="/bin/mktemp" hash="sha256:79dab18e96e909ca19e901731bd14bbd182327ed34050ad15d3c22186c112601" ppid=585 pid=611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[611]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a34e6ca940 a1=55a34e6cade0 a2=55a34e6c9ef0 a3=21 items=0 ppid=585 pid=611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mktemp" exe="/usr/bin/coreutils" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D6D6B74656D70002F7573722F62696E2F6D6B74656D70002F746D702F617572756C65732E5858585858585858
Aug 08 13:03:34 localhost systemd[1]: Starting Network Time Synchronization...
Aug 08 13:03:34 localhost audit[617]: INTEGRITY_RULE file="/bin/ls" hash="sha256:309b3c9a3246361ec0338641aed3c14e7f91e23e7cf10de000c75135ba99fddd" ppid=616 pid=617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash"
Aug 08 13:03:34 localhost audit[617]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a34e6ca860 a1=55a34e6ca8e0 a2=55a34e6c9ef0 a3=31 items=0 ppid=616 pid=617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ls" exe="/usr/bin/coreutils" key=(null)
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D6C73002F62696E2F6C73002D3176002F6574632F61756469742F72756C65732E64
Aug 08 13:03:34 localhost audit[568]: INTEGRITY_RULE file="/sbin/ldconfig" hash="sha256:bc1e3c4a0960f0a45ab3359a553da4de914d843d5ad86f895d463f375f1b7564" ppid=1 pid=568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(ldconfig)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:34 localhost audit[568]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55f6d202bd60 a1=55f6d20e0140 a2=55f6d2033460 a3=10 items=0 ppid=1 pid=568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ldconfig" exe="/sbin/ldconfig" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7362696E2F6C64636F6E666967002D58 | |
Aug 08 13:03:34 localhost audit[614]: INTEGRITY_RULE file="/usr/bin/mawk" hash="sha256:fdbf50e871d47e7dc2a93b080cb1c7ef8426fda049d55d1b1ab48c19729c02c5" ppid=585 pid=614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash" | |
Aug 08 13:03:34 localhost audit[614]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a34e6ce5d0 a1=55a34e6cad60 a2=55a34e6c9ef0 a3=20 items=0 ppid=585 pid=614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="awk" exe="/usr/bin/mawk" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=61776B005C0A424547494E2020207B0A20202020202020206D696E75735F65203D2022223B0A20202020202020206D696E75735F44203D2022223B0A20202020202020206D696E75735F66203D2022223B0A20202020202020206D696E75735F62203D2022223B0A202020202020202072657374203D20303B0A7D207B0A2020 | |
Aug 08 13:03:34 localhost audit[612]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-timesyncd" hash="sha256:baa28b6baf4566322e7f57fb8c53c87f148b34e6bf4b3ff5442003b035c0de4d" ppid=1 pid=612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(imesyncd)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:34 localhost audit[612]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55f6d20702d0 a1=55f6d202fa60 a2=55f6d2073010 a3=6e7973656d69742d items=0 ppid=1 pid=612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-timesyn" exe="/usr/lib/systemd/systemd-timesyncd" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-timesyncd" | |
Aug 08 13:03:34 localhost systemd[1]: Started Network Time Synchronization. | |
Aug 08 13:03:34 localhost audit[618]: INTEGRITY_RULE file="/bin/grep" hash="sha256:28980e71d93bb5939d819dca55957e1e4a6885f48fdcb79a872a461129984cc5" ppid=616 pid=618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash" | |
Aug 08 13:03:34 localhost audit[618]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a34e6ca8d0 a1=55a34e6cab80 a2=55a34e6c9ef0 a3=21 items=0 ppid=616 pid=618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="grep" exe="/bin/grep" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=67726570002E72756C657324 | |
Aug 08 13:03:34 localhost audit[621]: INTEGRITY_RULE file="/bin/cat" hash="sha256:c6138c9502337f42763d627e4b665dd4fd66f26a987891d0a7e8313783f689ad" ppid=613 pid=621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash" | |
Aug 08 13:03:34 localhost audit[621]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a34e6ca640 a1=55a34e6ca9e0 a2=55a34e6c9ef0 a3=21 items=0 ppid=613 pid=621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cat" exe="/usr/bin/coreutils" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D636174002F62696E2F636174002F6574632F61756469742F72756C65732E642F30302D636C6561722E72756C6573 | |
Aug 08 13:03:34 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
Aug 08 13:03:34 localhost systemd[1]: Reached target System Time Synchronized. | |
Aug 08 13:03:34 localhost audit[623]: INTEGRITY_RULE file="/usr/bin/cmp" hash="sha256:5bd189dd436b2f2709605466a7b87fcd67cc3b48f047adffe163cbed2392a8a9" ppid=585 pid=623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash" | |
Aug 08 13:03:34 localhost audit[623]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a34e6cd4f0 a1=55a34e6cd2d0 a2=55a34e6c9ef0 a3=30 items=0 ppid=585 pid=623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cmp" exe="/usr/bin/cmp" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=636D70002D73002F746D702F617572756C65732E4A74315246575374002F6574632F61756469742F61756469742E72756C6573 | |
Aug 08 13:03:34 localhost audit[625]: INTEGRITY_RULE file="/bin/rm" hash="sha256:c909af3f4c8ae30d059b96a9cf7b097db39159d161622b95190d60d25a5db776" ppid=585 pid=625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash" | |
Aug 08 13:03:34 localhost audit[625]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a34e6cbe70 a1=55a34e6cd2d0 a2=55a34e6c9ef0 a3=30 items=0 ppid=585 pid=625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rm" exe="/usr/bin/coreutils" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D726D002F62696E2F726D002D66002F746D702F617572756C65732E4A74315246575374 | |
Aug 08 13:03:34 localhost audit[626]: INTEGRITY_RULE file="/sbin/auditctl" hash="sha256:b10d123c527eb7729d360a966b73c6f5cd0b26ff239cedd913026f1bd3daa1fa" ppid=585 pid=626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="augenrules" exe="/bin/bash" | |
Aug 08 13:03:34 localhost audit[626]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=55a34e6ceb10 a1=55a34e6cd2d0 a2=55a34e6c9ef0 a3=30 items=0 ppid=585 pid=626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/sbin/auditctl" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 | |
Aug 08 13:03:34 localhost audit[626]: INTEGRITY_RULE file="/lib64/libauparse.so.0.0.0" hash="sha256:86ad75c19098082009bf62879aba66196f4b0fd85d63682442ec47071e41fd46" ppid=585 pid=626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/sbin/auditctl" | |
Aug 08 13:03:34 localhost audit[626]: SYSCALL arch=c000003e syscall=9 success=yes exit=139876571213824 a0=0 a1=15080 a2=5 a3=802 items=0 ppid=585 pid=626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/sbin/auditctl" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 | |
Aug 08 13:03:34 localhost audit[626]: INTEGRITY_RULE file="/lib64/libaudit.so.1.0.0" hash="sha256:ae945c7d81906b715a08765be1a2bd330983a534b7c0911db290cf388ffa82b1" ppid=585 pid=626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/sbin/auditctl" | |
Aug 08 13:03:34 localhost audit[626]: SYSCALL arch=c000003e syscall=9 success=yes exit=139876571058176 a0=0 a1=25048 a2=5 a3=802 items=0 ppid=585 pid=626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/sbin/auditctl" key=(null) | |
Aug 08 13:03:34 localhost audit: PROCTITLE proctitle=617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 | |
Aug 08 13:03:34 localhost audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 op="add_rule" key=(null) list=5 res=1 | |
Aug 08 13:03:34 localhost augenrules[585]: No rules | |
Aug 08 13:03:34 localhost systemd[1]: Started Load Security Auditing Rules. | |
Aug 08 13:03:35 localhost kernel: EXT4-fs (sda1): resized filesystem to 4154875 | |
Aug 08 13:03:35 localhost resize-stateful[527]: Filesystem at /dev/sda1 is mounted on /mnt/stateful_partition; on-line resizing required | |
Aug 08 13:03:35 localhost resize-stateful[527]: old_desc_blocks = 1, new_desc_blocks = 2 | |
Aug 08 13:03:35 localhost resize-stateful[527]: The filesystem on /dev/sda1 is now 4154875 (4k) blocks long. | |
Aug 08 13:03:35 localhost systemd[1]: Started Resize stateful partition. | |
Aug 08 13:03:36 localhost systemd[1]: Started Rebuild Dynamic Linker Cache. | |
Aug 08 13:03:36 localhost systemd[1]: Starting Update is Completed... | |
Aug 08 13:03:36 localhost audit[634]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-update-done" hash="sha256:7c7981fdd4f8d20cd26ce64a905cac7ee08a53590fc9d5b28439b0e1dfbb629b" ppid=1 pid=634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(ate-done)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost systemd[1]: Started Update is Completed. | |
Aug 08 13:03:36 localhost systemd[1]: Reached target System Initialization. | |
Aug 08 13:03:36 localhost systemd[1]: Started Daily Cleanup of Temporary Directories. | |
Aug 08 13:03:36 localhost systemd[1]: Listening on D-Bus System Message Bus Socket. | |
Aug 08 13:03:36 localhost systemd[1]: Started Run Crash Sender hourly. | |
Aug 08 13:03:36 localhost systemd[1]: Reached target Timers. | |
Aug 08 13:03:36 localhost systemd[1]: Starting Docker Socket for the API. | |
Aug 08 13:03:36 localhost systemd[1]: Listening on Docker Socket for the API. | |
Aug 08 13:03:36 localhost systemd[1]: Reached target Sockets. | |
Aug 08 13:03:36 localhost systemd[1]: Reached target Basic System. | |
Aug 08 13:03:36 localhost systemd[1]: Starting Configure ip6tables... | |
Aug 08 13:03:36 localhost systemd[1]: Started D-Bus System Message Bus. | |
Aug 08 13:03:36 localhost audit[638]: INTEGRITY_RULE file="/usr/share/cloud/ip6tables-setup" hash="sha256:064f1a5da4809c0f4f33812eb163dbbdf2f93035fea2ae9fa205eaffe34cb4ee" ppid=1 pid=638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(es-setup)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost audit[640]: INTEGRITY_RULE file="/usr/bin/dbus-daemon" hash="sha256:a8ce03c00f1b1fa0e2ab2740a61cb1ed0f41180440a105579667ce70c3711848" ppid=1 pid=640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(s-daemon)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost audit[640]: INTEGRITY_RULE file="/usr/lib64/libdbus-1.so.3.14.8" hash="sha256:b2a6863a1f2348767f824fa468c007cb10bb53821a4a202cff6df6d2982ab41a" ppid=1 pid=640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dbus-daemon" exe="/usr/bin/dbus-daemon" | |
Aug 08 13:03:36 localhost audit[640]: INTEGRITY_RULE file="/usr/lib64/libsystemd.so.0.17.0" hash="sha256:77c80672b0a928592e6486f10bdd22b39b84a464c665d9949f4761fff2d7a7bd" ppid=1 pid=640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dbus-daemon" exe="/usr/bin/dbus-daemon" | |
Aug 08 13:03:36 localhost audit[640]: INTEGRITY_RULE file="/usr/lib64/libexpat.so.1.6.0" hash="sha256:cb041ff11f37202eb2f1bf58b3356535affd792c6896b21c33accf88334ba4f6" ppid=1 pid=640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dbus-daemon" exe="/usr/bin/dbus-daemon" | |
Aug 08 13:03:36 localhost audit[642]: INTEGRITY_RULE file="/sbin/xtables-multi" hash="sha256:2aee46ca981e6daf24a5a636cd05fbe019cc6dda3fde7cc125ac63cf82136b9d" ppid=638 pid=642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-setup" exe="/bin/bash" | |
Aug 08 13:03:36 localhost audit[642]: INTEGRITY_RULE file="/lib64/libip4tc.so.0.1.0" hash="sha256:e6ebe722a2039aa386abd7bb47ca0a4d6db28ab30eed39e1500c97492c6ea115" ppid=638 pid=642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/sbin/xtables-multi" | |
Aug 08 13:03:36 localhost audit[642]: INTEGRITY_RULE file="/lib64/libip6tc.so.0.1.0" hash="sha256:457babbc25cccf69589e1b389d63f1cac27b145681a19ce076268df247ec3cfe" ppid=638 pid=642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/sbin/xtables-multi" | |
Aug 08 13:03:36 localhost audit[642]: INTEGRITY_RULE file="/lib64/libxtables.so.10.0.0" hash="sha256:089d5a69eadf6059b0cce5ff85e425d5e8e04d3a413496ddf5a03906c7c3f91a" ppid=638 pid=642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/sbin/xtables-multi" | |
Aug 08 13:03:36 localhost dbus-daemon[640]: Unknown username "power" in message bus configuration file | |
Aug 08 13:03:36 localhost kernel: ip6_tables: (C) 2000-2006 Netfilter Core Team | |
Aug 08 13:03:36 localhost systemd[1]: Starting Network Service... | |
Aug 08 13:03:36 localhost systemd[1]: Reached target Containers. | |
Aug 08 13:03:36 localhost audit[650]: INTEGRITY_RULE file="/usr/lib64/xtables/libxt_standard.so" hash="sha256:75e295cebd9b6984a32d43f759e91d7739f488c066db859e9f84632ff3d9dcd3" ppid=638 pid=650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/sbin/xtables-multi" | |
Aug 08 13:03:36 localhost systemd[1]: Starting Login Service... | |
Aug 08 13:03:36 localhost systemd[1]: Starting Configure iptables... | |
Aug 08 13:03:36 localhost audit[656]: INTEGRITY_RULE file="/usr/share/cloud/iptables-setup" hash="sha256:ee0f8f8d32ca8861042d530b5d6b738b29dc6dd7d3a4c34c44f06437cab082bf" ppid=1 pid=656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(es-setup)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)... | |
Aug 08 13:03:36 localhost systemd[1]: Starting Initialize Crash Reporter... | |
Aug 08 13:03:36 localhost audit[667]: INTEGRITY_RULE file="/usr/lib64/xtables/libxt_state.so" hash="sha256:9016950514663b63c6ea24942052fd203ba303864245c34fa14e35de11cb5e5c" ppid=656 pid=667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" | |
Aug 08 13:03:36 localhost systemd[1]: Started Configure ip6tables. | |
Aug 08 13:03:36 localhost systemd[1]: Started Configure iptables. | |
Aug 08 13:03:36 localhost audit[644]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-networkd" hash="sha256:dce5a619f58f9a4b160e33ee8df36b226439891aaf2b94f7ce0e0f5d4148dab7" ppid=1 pid=644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(networkd)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost audit[653]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-logind" hash="sha256:c955336e904a92d338b26d982fa975439c1cdde9ee73f032b5ab75d795fcddad" ppid=1 pid=653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(d-logind)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost audit[658]: INTEGRITY_RULE file="/usr/lib/python-exec/python-exec2" hash="sha256:422574cf71ec8fae01b733b0959127afab65bb8d9dc2c5a78018035a9e8f1464" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(oud-init)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost audit[658]: INTEGRITY_RULE file="/usr/lib/python-exec/python-exec2-c" hash="sha256:9bfe3f405b9ebc60f5800ae19b3ec212b10dc8bbb7468a1675ee0bc2c543bcee" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(oud-init)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost systemd-networkd[644]: Enumeration completed | |
Aug 08 13:03:36 localhost systemd[1]: Started Network Service. | |
Aug 08 13:03:36 localhost systemd-timesyncd[612]: No network connectivity, watching for changes. | |
Aug 08 13:03:36 localhost systemd-networkd[644]: eth0: IPv6 enabled for interface: Success | |
Aug 08 13:03:36 localhost audit[658]: INTEGRITY_RULE file="/usr/lib/python-exec/python2.7/cloud-init" hash="sha256:21af364aab643743cbc34824b29f1713f053ec10a6d74431f2f23231cb8e4f06" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/lib/python-exec/python-exec2-c" | |
Aug 08 13:03:36 localhost audit[658]: INTEGRITY_RULE file="/usr/bin/python2.7" hash="sha256:70186c6bdfa750380b9af2144ab080c4b364f70df4ec434e737bff5d2de18adc" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/lib/python-exec/python-exec2-c" | |
Aug 08 13:03:36 localhost systemd-timesyncd[612]: No network connectivity, watching for changes. | |
Aug 08 13:03:36 localhost systemd-timesyncd[612]: No network connectivity, watching for changes. | |
Aug 08 13:03:36 localhost systemd[1]: Reached target Network. | |
Aug 08 13:03:36 localhost systemd-timesyncd[612]: No network connectivity, watching for changes. | |
Aug 08 13:03:36 localhost systemd-logind[653]: Watching system buttons on /dev/input/event0 (Power Button) | |
Aug 08 13:03:36 localhost systemd-logind[653]: Watching system buttons on /dev/input/event1 (Sleep Button) | |
Aug 08 13:03:36 localhost systemd-logind[653]: New seat seat0. | |
Aug 08 13:03:36 localhost systemd-networkd[644]: eth0: Gained carrier | |
Aug 08 13:03:36 localhost systemd-timesyncd[612]: No network connectivity, watching for changes. | |
Aug 08 13:03:36 localhost systemd-networkd[644]: eth0: DHCPv4 address 10.240.0.7/32 via 10.240.0.1 | |
Aug 08 13:03:36 localhost systemd-timesyncd[612]: No network connectivity, watching for changes. | |
Aug 08 13:03:36 localhost dbus[640]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' | |
Aug 08 13:03:36 localhost systemd[1]: Starting Docker Application Container Engine... | |
Aug 08 13:03:36 localhost systemd[1]: Starting Permit User Sessions... | |
Aug 08 13:03:36 localhost audit[675]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-user-sessions" hash="sha256:5b2f0b6b0cde717db2f0b50810ccadb5afc418e03d441c5055dcb5217967b0e0" ppid=1 pid=675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(sessions)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost systemd[1]: Starting OpenSSH server daemon... | |
Aug 08 13:03:36 localhost audit[677]: INTEGRITY_RULE file="/usr/share/chromeos-ssh-config/sshd-pre" hash="sha256:28a2fdc62cf915f23030e6099f3f606c8e53413695a534f168a88f026b9d96b0" ppid=1 pid=677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(sshd-pre)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost systemd[1]: Starting Network Name Resolution... | |
Aug 08 13:03:36 localhost systemd[1]: Starting Wait for Network to be Configured... | |
Aug 08 13:03:36 localhost audit[684]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-networkd-wait-online" hash="sha256:e7c585483c810403a25b79e3bceee90d8c33fa2e01ab90f0b70b2183f6c79d33" ppid=1 pid=684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(t-online)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost systemd-networkd-wait-online[684]: ignoring: lo | |
Aug 08 13:03:36 localhost systemd[1]: Started Permit User Sessions. | |
Aug 08 13:03:36 localhost systemd[1]: Started Login Service. | |
Aug 08 13:03:36 localhost audit[664]: INTEGRITY_RULE file="/sbin/crash_reporter" hash="sha256:43c276466e03391ef7ce87fdb62592e0d2f7dcccf7d08ad1efaada24b23eb434" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(reporter)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost audit[664]: INTEGRITY_RULE file="/usr/lib64/libmetrics-395517.so" hash="sha256:0f721257465511023e29e28bdf0758a6775287519effdb49338021a19efc205b" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 localhost audit[664]: INTEGRITY_RULE file="/lib64/libminijail.so" hash="sha256:67a31e1c0ab9bf4c310b24660db405b5709c3bdc0b0f7008dd70b46f5d2143c6" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 localhost systemd[1]: Starting Hostname Service... | |
Aug 08 13:03:36 localhost audit[664]: INTEGRITY_RULE file="/usr/lib64/libbrillo-core-395517.so" hash="sha256:2b2826ed540603ac1e7df00e76c8cb554ce1c97d8df382b864c1f2945866202b" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 localhost audit[664]: INTEGRITY_RULE file="/usr/lib64/libbrillo-cryptohome-395517.so" hash="sha256:af0bfaacfd5f19ce26f824187dcf0ae1b9bf262343c40c06c3c344015c939769" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 localhost systemd[1]: Started Serial Getty on ttyS0. | |
Aug 08 13:03:36 localhost systemd[1]: Reached target Login Prompts. | |
Aug 08 13:03:36 localhost audit[664]: INTEGRITY_RULE file="/usr/lib64/libbase-core-395517.so" hash="sha256:bc95834716264bc53d08b35f259acda9be0039bee41ece3fe240334d80401d2f" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 localhost audit[680]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-resolved" hash="sha256:ccd75f9bcc37c26c151d5048bf68f8c21aa2ea6446276292406c4ca09442028c" ppid=1 pid=680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(resolved)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost systemd-resolved[680]: Positive Trust Anchors: | |
Aug 08 13:03:36 localhost systemd-resolved[680]: . IN DS 19036 8 2 49aac11d7b6f6446702e54a1607371607a1a41855200fd2ce1cdde32f24e8fb5 | |
Aug 08 13:03:36 localhost systemd-resolved[680]: Negative trust anchors: 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test | |
Aug 08 13:03:36 localhost systemd-resolved[680]: Defaulting to hostname 'linux'. | |
Aug 08 13:03:36 localhost systemd[1]: Started Network Name Resolution. | |
Aug 08 13:03:36 localhost audit[683]: INTEGRITY_RULE file="/usr/sbin/sshd" hash="sha256:e729f8d96673ac2c8bb1ae0c1e0c0343c4f028450fec710f072a920b73841bad" ppid=677 pid=683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd-pre" exe="/bin/bash" | |
Aug 08 13:03:36 localhost audit[683]: INTEGRITY_RULE file="/lib64/libpam.so.0.84.1" hash="sha256:90fab37d1e457f959642f8ed27cf6b21416f4e3b98bcec4d800c03f90ca9204a" ppid=677 pid=683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd" | |
Aug 08 13:03:36 localhost audit[689]: INTEGRITY_RULE file="/usr/lib/systemd/systemd-hostnamed" hash="sha256:5bca992a5146d143435fcddf80a685c37cade9da0c0246e2a5c165232cac5860" ppid=1 pid=689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(ostnamed)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:36 localhost dbus[640]: [system] Successfully activated service 'org.freedesktop.hostname1' | |
Aug 08 13:03:36 localhost systemd[1]: Started Hostname Service. | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-hostnamed[689]: Changed host name to 'gke-cs-test-dan-test-pool-bca3c3a7-m055' | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-resolved[680]: System hostname changed to 'gke-cs-test-dan-test-pool-bca3c3a7-m055'. | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libbase-dbus-395517.so" hash="sha256:ff5b4a6ea9c104dabe6db4712c8b588abaa5e954d3414aa3866063158df481f1" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libpcrecpp.so.0.0.1" hash="sha256:e3837386b42e9ee06330ba41a8cbcba9ccede753e1e393dd2d52b292ebd8ba62" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/libpython2.7.so.1.0" hash="sha256:223a23734485332dd85e01b4cc791ceeb97745f56e0999bfd43356323a4f885a" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/lib64/libutil-2.23.so" hash="sha256:0665f0370c7bac0b44695831cf89884cc3066bb8ae79df7ba2a1e8150abc3035" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libstdc++.so.6.0.20" hash="sha256:d37ade4d723815210c4d4a53384b2788ca7f2853cd5ceeca1166260bed72c2e6" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libpolicy-395517.so" hash="sha256:9a3e268e7c1d7650af1f82668000687b312130278a3f44a068a8ffa0dd0a3f3a" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[683]: INTEGRITY_RULE file="/usr/lib64/libcrypto.so.1.0.0" hash="sha256:83a4486c341c2426a0956785bcdf234044deee2dbccc6bab51c11fe97cc21963" ppid=677 pid=683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[683]: INTEGRITY_RULE file="/lib64/libcrypt-2.23.so" hash="sha256:7ab6242ed26377eb2379fa3190cc89e19e84b58a387013e05a4565fe8d73b0c9" ppid=677 pid=683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libglib-2.0.so.0.5000.3" hash="sha256:e9806383dc317685c06d3127992452cad897dd9d71bf9d1a361ec7d28d20cd23" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libevent_core-2.1.so.6.0.2" hash="sha256:87b50743b3df1c6d7d064413f9cef01e93eacedbaa26a96de255a49e8af5cc0e" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libprotobuf-lite.so.13.0.0" hash="sha256:7bcfd91810606b7868c7ba3e68b507da18923bb7cf9fb2cb44f835b8e81a6500" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/lib64/libpcre.so.1.2.8" hash="sha256:087edde34a2f4f4c86bc86c66aad4a6662ce1e189cf2f8f419a0c9ca7157a1df" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libgcc_s.so.1" hash="sha256:6d2d6b823ea5e5060458eaae3cf6de3d8caf78ac1f2ab238a19c2d2493e1c358" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[664]: INTEGRITY_RULE file="/usr/lib64/libinstallattributes-395517.so" hash="sha256:35c2f0dfc189badc512408f4b800b2d7870405b7fbaab687ab5e0f4936a01f1f" ppid=1 pid=664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="crash_reporter" exe="/sbin/crash_reporter" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 crash_reporter[664]: libminijail[664]: mount /dev/log -> /dev/log type '' | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 crash_reporter[664]: Enabling user crash handling | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[694]: INTEGRITY_RULE file="/usr/bin/ssh-keygen" hash="sha256:64fa758c26d0d90266aa752724c92b97ed2e20c2852e9e633ddbe1a45fa9444f" ppid=677 pid=694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd-pre" exe="/bin/bash" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Initialize Crash Reporter. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Run per-boot crash collection tasks... | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 crash_reporter[697]: libminijail[697]: mount /dev/log -> /dev/log type '' | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[704]: INTEGRITY_RULE file="/usr/lib64/xtables/libxt_tcp.so" hash="sha256:8bc3bdc494add6cfe01fb5942914cc1250dcd66decdc51dcca095397b26fbb89" ppid=677 pid=704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started OpenSSH server daemon. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 crash_reporter[697]: Enabling kernel crash handling | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 crash_reporter[697]: Cannot find /var/log/eventlog.txt, skipping hardware watchdog check. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 crash_reporter[697]: Could not load the device policy file. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Server listening on 0.0.0.0 port 22. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Run per-boot crash collection tasks. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_locale.so" hash="sha256:b512bb1586a377421e3bdb8d6382948690e47507c4204d58f2b2515708eb9551" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_collections.so" hash="sha256:4f8574da1b7a2f91831b697f085a12e1d8133d38bc9f875f3cc44db51d1dc33e" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/operator.so" hash="sha256:869d3b1b53d71fb643901feddead9d20c83c2adc5e41165d327d705a44764ac2" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/itertools.so" hash="sha256:cdcb0e545cf8b177cadf5a5bc75b72a2885a80b6c9363fce390587fbebb89824" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_heapq.so" hash="sha256:c1d618fb6542f9577695d080b29076daa4ae27799e1e18ab1ef0f2e56dd00ea8" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/strop.so" hash="sha256:bfd820a6ccf3018dbfa5f1ccee8e533b594c69c02fca7badc3cb67766509c878" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_functools.so" hash="sha256:531e9347840d884ddf09ee55f25a95d5fa003146450a5dc8ab364f8a071a84cd" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_struct.so" hash="sha256:374fcb5965c83b34b3c4899a5b995475a52221ae2e8e4ef7b2651a7f102287eb" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_json.so" hash="sha256:f237f3b3b7854321b3e7c274a9e338566a865dc0e951cfd9a84c377862f58c88" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/binascii.so" hash="sha256:4d78db6972efcc0c26b384a8c557aa0cbe21253aae52d347cd569b1cf4e45643" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/time.so" hash="sha256:690a0d1a40ff4c543a6af8bf1a85e1b1615b501d43f134135b01152de0667559" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_io.so" hash="sha256:4dbe5e1e6d2b71039dad5e82b0a6b04f546ab47a6b20031ab5035cd1a411de61" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/math.so" hash="sha256:adc38c0bf0dcc14d62db01048e5cbbb1f5bf07b5b5c9ac478b33f4577bf3bd23" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_hashlib.so" hash="sha256:b8d7dc4c649022b9c541165217452f12542048c9457560d91a1ce769337dbee5" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_random.so" hash="sha256:b88aa324f12d8a6f4d70a495fc9ccd35e9da58a5dd3cf7685ea82f20b7e89803" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/cStringIO.so" hash="sha256:ba5494fff4b745ada18521bdae95e9f3a7fe1dd60fa15fe5047ff01ea5d08f42" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/fcntl.so" hash="sha256:324f90234f0a55566d0206baa2f39b58ea1b42b54d822b189523a2f1d7af8fd0" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_socket.so" hash="sha256:ff25516d1a36bc1fce34338cc3150e650b26f1220cb8edafd95cdc5200f876ca" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[687]: INTEGRITY_RULE file="/usr/bin/dockerd" hash="sha256:9c94c6a9ef82238741ada277ea40ab6c7c19b2bcd62f915a6f50c8799767bd01" ppid=1 pid=687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(dockerd)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[687]: INTEGRITY_RULE file="/usr/lib64/libapparmor.so.1.3.0" hash="sha256:ced335ff279f22c3ad9dbe5d5d7b3e0ca09cc674d5a2d3601d2ced30c6e1ef53" ppid=1 pid=687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dockerd" exe="/usr/bin/dockerd" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_ssl.so" hash="sha256:b1746187ab440bf2a4170ddd5a95ead4976a09f0f503af30dce38222089b0eb2" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/libssl.so.1.0.0" hash="sha256:a112156da62bce9106d3ffd58a12564064a60bbcfc93d0c5212723a671d22325" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/cPickle.so" hash="sha256:013ef28404f865f26058e8717bd616212522bdf478b8cd3f2a593d2384b1a21f" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/select.so" hash="sha256:ce1ffc7b1eea499e1a9dab498fd4066d7149bc8eaf7c47569aab1177bc7f663e" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: eth0: Gained IPv6LL | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: eth0: Configured | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd-wait-online[684]: ignoring: lo | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Wait for Network to be Configured. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reached target Network is Online. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Initialize device policy... | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Wait for User Data to be accessible... | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[725]: INTEGRITY_RULE file="/usr/share/cloud/wait_for_user_data.sh" hash="sha256:ec69611808fc1425f742f2acb451144f3f5e8628fe198b17896b404ce005abb5" ppid=1 pid=725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(_data.sh)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Google Compute Engine Instance Setup... | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[727]: INTEGRITY_RULE file="/usr/lib/python-exec/python2.7/google_instance_setup" hash="sha256:7625299f0c090645f019a87d5b118d5d97f7e5a69d0da3fb75f0f120bf3a7c25" ppid=1 pid=727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_instance" exe="/usr/lib/python-exec/python-exec2-c" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Wait for Google Container Registry (GCR) to be accessible... | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[728]: INTEGRITY_RULE file="/usr/share/google/get_metadata_value" hash="sha256:30e2bb131edff9f0c01b72f8a9746f6dcb366cbacacf690427854f3f4aec5b1c" ppid=725 pid=728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="wait_for_user_d" exe="/bin/bash" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_ctypes.so" hash="sha256:4c9427ea2c0d171f67c064db243c6338f6c63dc909cf7cbaa225335a38482ccf" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/libffi.so.6.0.2" hash="sha256:6803fe8a9fceb3eaf4b38b58bebb4158dc81febaa70939b66775983bd2b0b856" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[731]: INTEGRITY_RULE file="/usr/bin/curl" hash="sha256:c2b4caf960cc6fc6914cd87adb1cf9be48da64912d0649ab04482c87c03d2992" ppid=730 pid=731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[731]: INTEGRITY_RULE file="/usr/lib64/libcurl.so.4.5.0" hash="sha256:38a391ae8a1de5ba1c113f4c225bc007300ff30ea1fdddea8ff516c94f032bbf" ppid=730 pid=731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="curl" exe="/usr/bin/curl" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[731]: INTEGRITY_RULE file="/lib64/libnss_dns-2.23.so" hash="sha256:8684b79f47c49a7ce60a970d6bfd4f72275060b76673cd34b0e0859a1f304400" ppid=730 pid=731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="curl" exe="/usr/bin/curl" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/grp.so" hash="sha256:0596cca91ca6d7b6ecd43948063c1bc7b49c5e7ec69808d68f512ed9b162ca41" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Wait for User Data to be accessible. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/zlib.so" hash="sha256:e0925df5246230df19b08cf34c2ea7fa936b02553e3cbef7515bd886f9790c79" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reached target User Data for Cloud Init is accessible. | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[727]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/datetime.so" hash="sha256:0604300e1ef73ef0e7b93610c0f81bb21405adc6750c8154644c6d26477d5b14" ppid=1 pid=727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_instance" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/array.so" hash="sha256:cf1a1ad0836d3614fe8398420956c8906474a1d0ee1528137042d75e9f79cc90" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[727]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/parser.so" hash="sha256:f5731c2f7f5cb8583500695eaf2e0aca6ede72c57bf2c48a3a5a045741fb6fa8" ppid=1 pid=727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_instance" exe="/usr/bin/python2.7" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[721]: INTEGRITY_RULE file="/usr/bin/containerd" hash="sha256:9d7defdecb0b90ac3734cc269dab5b71c17f4e135522ee0bcb7717d9c2621d52" ppid=687 pid=721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dockerd" exe="/usr/bin/dockerd" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:38.034934167Z" level=info msg="libcontainerd: new containerd process, pid: 721" | |
Aug 08 13:03:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[723]: INTEGRITY_RULE file="/usr/sbin/device_policy_manager" hash="sha256:5b69a3e3b69c443e3ed23caffae85d50e0a3556c0fa409b53fd58d3557277a45" ppid=1 pid=723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(_manager)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[723]: I0808 13:03:38.120470 723 main.go:296] Started in init device policy mode | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[723]: I0808 13:03:38.152828 723 manager_impl.go:235] Updating device policy. Instance config changed from {} to {metrics_enabled:false target_version_prefix:"10323.12.0" update_scatter_seconds:33 reboot_after_update:false } | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_bisect.so" hash="sha256:d260b85e65691250fe4e6f7190e9ac95c1ddd370117935559368a452df4eec0a" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[755]: INTEGRITY_RULE file="/bin/uname" hash="sha256:050964cc46affadcf00d817106c47a6bde087fc483fa0643e168a816b76de608" ppid=754 pid=755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/_csv.so" hash="sha256:9f4f25b9464f77ffaa55843a94864183d6bdc011ef5a77ae20266a96c73ca3cd" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/lib-dynload/unicodedata.so" hash="sha256:e1655ae0dc486dcb14dd646dc111d98e56be9168d496a85833a30f7eead46085" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Generating SSH host keys for instance 5915659819976971506. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[756]: INTEGRITY_RULE file="/usr/bin/optimize_local_ssd" hash="sha256:9a3e6d73af186cb59eb9071889c32a49ca50b36d3ca54e8422fec64cb3965a71" ppid=727 pid=756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[757]: INTEGRITY_RULE file="/usr/bin/nproc" hash="sha256:802d57a2bb4a148ce86b5987d42de833613df533033939621c98af7c83ed10a9" ppid=756 pid=757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="optimize_local_" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[759]: INTEGRITY_RULE file="/usr/bin/set_multiqueue" hash="sha256:e416fa1288fc03d5d97e75d3e4d8071216951c2122ac22da8c4526e3f8bc9644" ppid=727 pid=759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[760]: INTEGRITY_RULE file="/bin/basename" hash="sha256:9a1e6804fef8ca36d39b008210b187dc4a82c456919574e61e17ab40c033589c" ppid=759 pid=760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="set_multiqueue" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Running set_multiqueue. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[764]: INTEGRITY_RULE file="/usr/sbin/ethtool" hash="sha256:6fd9b382f550141a6fc172c28244e267bd0647e796c98f6bd4c7bc1afaa5ca5f" ppid=763 pid=764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="set_multiqueue" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[768]: INTEGRITY_RULE file="/bin/cut" hash="sha256:7448d9549e82dc4aa8917fc2bc2c49ad3f99e85e0c4f9a0957926a1f33c60421" ppid=765 pid=768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="set_multiqueue" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Set channels for eth0 to 8. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/29/smp_affinity_list to 0 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/29/smp_affinity_list: real affinity 0 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/30/smp_affinity_list to 0 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/30/smp_affinity_list: real affinity 0 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/31/smp_affinity_list to 1 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/31/smp_affinity_list: real affinity 1 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/32/smp_affinity_list to 1 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/32/smp_affinity_list: real affinity 1 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/33/smp_affinity_list to 2 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/33/smp_affinity_list: real affinity 2 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/34/smp_affinity_list to 2 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/34/smp_affinity_list: real affinity 2 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/35/smp_affinity_list to 3 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/35/smp_affinity_list: real affinity 3 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/36/smp_affinity_list to 3 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/36/smp_affinity_list: real affinity 3 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/37/smp_affinity_list to 4 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/37/smp_affinity_list: real affinity 4 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/38/smp_affinity_list to 4 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/38/smp_affinity_list: real affinity 4 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/39/smp_affinity_list to 5 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/39/smp_affinity_list: real affinity 5 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/40/smp_affinity_list to 5 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/40/smp_affinity_list: real affinity 5 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/41/smp_affinity_list to 6 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/41/smp_affinity_list: real affinity 6 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/42/smp_affinity_list to 6 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/42/smp_affinity_list: real affinity 6 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/43/smp_affinity_list to 7 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/43/smp_affinity_list: real affinity 7 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Setting /proc/irq/44/smp_affinity_list to 7 for device virtio1. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO /proc/irq/44/smp_affinity_list: real affinity 7 | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[795]: INTEGRITY_RULE file="/bin/sort" hash="sha256:7db440f4b4270d53050569407160d7f7c23fd7832394ce9bfbbc0b1d6e7e7dac" ppid=759 pid=795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="set_multiqueue" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[796]: INTEGRITY_RULE file="/bin/seq" hash="sha256:bb11a10b8a0f5c8e974b844f2fb6c94f81d50ed2508f4583235d4cbe8bc0fbc6" ppid=794 pid=796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="set_multiqueue" exe="/bin/bash" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Queue 0 XPS=01 for /sys/class/net/eth0/queues/tx-0/xps_cpus | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Queue 1 XPS=02 for /sys/class/net/eth0/queues/tx-1/xps_cpus | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Queue 2 XPS=04 for /sys/class/net/eth0/queues/tx-2/xps_cpus | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Queue 3 XPS=08 for /sys/class/net/eth0/queues/tx-3/xps_cpus | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Queue 4 XPS=10 for /sys/class/net/eth0/queues/tx-4/xps_cpus | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Queue 5 XPS=20 for /sys/class/net/eth0/queues/tx-5/xps_cpus | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Queue 6 XPS=40 for /sys/class/net/eth0/queues/tx-6/xps_cpus | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 instance-setup[727]: INFO Queue 7 XPS=80 for /sys/class/net/eth0/queues/tx-7/xps_cpus | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Google Compute Engine Instance Setup. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Google Compute Engine Network Setup... | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[821]: INTEGRITY_RULE file="/usr/lib/python-exec/python2.7/google_network_setup" hash="sha256:30d831335d6d7fa36df22bdc68005bd00f4bc602bb1ea32f2b8285dc4b9c819e" ppid=1 pid=821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_network_" exe="/usr/lib/python-exec/python-exec2-c" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/site-packages/Cheetah/_namemapper.so" hash="sha256:039b26f779dce84be172935e12d07620818a25d56334d72c5fa480769bd09f25" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[658]: INTEGRITY_RULE file="/usr/lib64/python2.7/site-packages/markupsafe/_speedups.so" hash="sha256:9e5838cf6b3c237cad40d2b1ebdff4a230eb713043c9b394d820cc7cd7366fe2" ppid=1 pid=658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Google Compute Engine Network Setup. | |
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Google Compute Engine Shutdown Scripts...
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[826]: INTEGRITY_RULE file="/bin/true" hash="sha256:c73afb60197c9c64805d2b4ab95efdee8646f8248ff800de2575a11eed8f9f08" ppid=1 pid=826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(true)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Google Compute Engine Clock Skew Daemon.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[829]: INTEGRITY_RULE file="/usr/lib/python-exec/python2.7/google_clock_skew_daemon" hash="sha256:ebf104f483ab0d3280deae334008e7ba5e4d833c7fb57e298976a37517997c27" ppid=1 pid=829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_clock_sk" exe="/usr/lib/python-exec/python-exec2-c"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Google Compute Engine Startup Scripts...
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[831]: INTEGRITY_RULE file="/usr/lib/python-exec/python2.7/google_metadata_script_runner" hash="sha256:b1a05f0b6eda97cefbda57a75904699de57899c6957bc9e65da4d844e9607b96" ppid=1 pid=831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_metadata" exe="/usr/lib/python-exec/python-exec2-c"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Google Compute Engine IP Forwarding Daemon.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[833]: INTEGRITY_RULE file="/usr/lib/python-exec/python2.7/google_ip_forwarding_daemon" hash="sha256:82a8c0f655e515505c01eb82a15535abe0ccbe1a7a96c07ae3dbf6e67b23ff7a" ppid=1 pid=833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_ip_forwa" exe="/usr/lib/python-exec/python-exec2-c"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Google Compute Engine Accounts Daemon.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[835]: INTEGRITY_RULE file="/usr/lib/python-exec/python2.7/google_accounts_daemon" hash="sha256:a9148892e69146c91ef07f7d798cc9d4f5c968dae06cd862d31908448d4f8df5" ppid=1 pid=835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_accounts" exe="/usr/lib/python-exec/python-exec2-c"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Google Compute Engine Shutdown Scripts.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: Cloud-init v. 0.7.6 running 'init-local' at Wed, 08 Aug 2018 13:03:38 +0000. Up 8.75 seconds.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.6 running 'init-local' at Wed, 08 Aug 2018 13:03:38 +0000. Up 8.75 seconds.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/log/cloud-init.log - ab: [420] 0 bytes
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /var/log/cloud-init.log to 202:4
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance/boot-finished
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Attempting to remove /var/lib/cloud/data/no-net
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/instance/obj.pkl (quiet=False)
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] __init__.py[DEBUG]: Looking for for data source in: ['GCE', 'NoCloud', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM']
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] __init__.py[DEBUG]: Searching for data source in: ['DataSourceNoCloud']
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceNoCloud.DataSourceNoCloud'>
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/cmdline (quiet=False)
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Read 665 bytes from /proc/cmdline
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/seed/nocloud/user-data (quiet=False)
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/seed/nocloud/meta-data (quiet=False)
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/seed/nocloud/vendor-data (quiet=False)
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/sr0'] with allowed return codes [0, 2] (shell=False, capture=True)
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[837]: INTEGRITY_RULE file="/sbin/blkid" hash="sha256:593e33b2072af976822d84ee99f8dd46563796fa8d1736e1f5b6d7d4ef84b002" ppid=658 pid=837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/sr1'] with allowed return codes [0, 2] (shell=False, capture=True)
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Running command ['blkid', '-tTYPE=vfat', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 startup-script[831]: INFO Starting startup scripts.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-clock-skew[829]: INFO Starting Google Clock Skew daemon.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-clock-skew[829]: INFO Clock drift token has changed: 0.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 startup-script[831]: INFO No startup scripts found in metadata.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 startup-script[831]: INFO Finished running startup scripts.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: INFO Starting Google IP Forwarding daemon.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[848]: INTEGRITY_RULE file="/sbin/hwclock" hash="sha256:7d0f1759735fb28e8fc9ee4d5c126f7b89f086c7ff875094e5da4ed9be53ca9c" ppid=829 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_clock_sk" exe="/usr/bin/python2.7"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Google Compute Engine Startup Scripts.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[849]: INTEGRITY_RULE file="/bin/ip" hash="sha256:2cae59e037399720e80989451cf6e8ec5fb115f47ee7ca35d234e09ceb0cf37f" ppid=833 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_ip_forwa" exe="/usr/bin/python2.7"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[850]: INTEGRITY_RULE file="/usr/sbin/groupadd" hash="sha256:ee26df34f4ce5b2d2f2e60c802d789a67896e6ef26ad7cd00dc3c44740418395" ppid=835 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_accounts" exe="/usr/bin/python2.7"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[850]: INTEGRITY_RULE file="/lib64/libpam_misc.so.0.82.1" hash="sha256:df24a3b7ac866971a523df50d380bb464364288fef94418cc8790298796c0e3d" ppid=835 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="groupadd" exe="/usr/sbin/groupadd"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[850]: INTEGRITY_RULE file="/lib64/security/pam_rootok.so" hash="sha256:9951f8c9c0df91c39531f33af12dfecde4aacf788eef2911fd693d029921efe0" ppid=835 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="groupadd" exe="/usr/sbin/groupadd"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[850]: INTEGRITY_RULE file="/lib64/security/pam_permit.so" hash="sha256:6684ab8aab891239caeac2c9fc98a370d037ee8021667b1afffcc0cbf867e34c" ppid=835 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="groupadd" exe="/usr/sbin/groupadd"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[850]: INTEGRITY_RULE file="/lib64/security/pam_unix.so" hash="sha256:99bf12b5da48f8f7277631dbce783a24af94f9f5ef7d46230563765ba5aa12c2" ppid=835 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="groupadd" exe="/usr/sbin/groupadd"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[850]: INTEGRITY_RULE file="/lib64/security/pam_deny.so" hash="sha256:5c40b56aaf896125ec1c10794f2b026dda8fa957d56d9e6e96f464528e905a55" ppid=835 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="groupadd" exe="/usr/sbin/groupadd"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 groupadd[850]: group added to /etc/group: name=google-sudoers, GID=1002
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 groupadd[850]: new group: name=google-sudoers, GID=1002
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Starting Google Accounts daemon.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for gke-68d111821db0e011e1fc.
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[855]: INTEGRITY_RULE file="/usr/sbin/useradd" hash="sha256:0ab7459c7d7bd8b5179681c26cb60b98edf1f1251576e0dbf05cc24d0d4c7258" ppid=835 pid=855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_accounts" exe="/usr/bin/python2.7"
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[855]: new group: name=gke-68d111821db0e011e1fc, GID=5000
Aug 08 13:03:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[855]: new user: name=gke-68d111821db0e011e1fc, UID=5000, GID=5000, home=/home/gke-68d111821db0e011e1fc, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account gke-68d111821db0e011e1fc.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[860]: INTEGRITY_RULE file="/usr/sbin/usermod" hash="sha256:2eff45395162d52740d89b6989f21bd83332329f8083a946029ae64ab1409478" ppid=835 pid=860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="google_accounts" exe="/usr/bin/python2.7"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[860]: add 'gke-68d111821db0e011e1fc' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[860]: add 'gke-68d111821db0e011e1fc' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[860]: add 'gke-68d111821db0e011e1fc' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[860]: add 'gke-68d111821db0e011e1fc' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for dkunin.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[865]: new group: name=dkunin, GID=5001
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[865]: new user: name=dkunin, UID=5001, GID=5001, home=/home/dkunin, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account dkunin.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[870]: add 'dkunin' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[870]: add 'dkunin' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[870]: add 'dkunin' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[870]: add 'dkunin' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for charlesc.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[875]: new group: name=charlesc, GID=5002
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[875]: new user: name=charlesc, UID=5002, GID=5002, home=/home/charlesc, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account charlesc.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[880]: add 'charlesc' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[880]: add 'charlesc' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[880]: add 'charlesc' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[880]: add 'charlesc' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for johnc.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[885]: new group: name=johnc, GID=5003
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[885]: new user: name=johnc, UID=5003, GID=5003, home=/home/johnc, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account johnc.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[891]: add 'johnc' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[891]: add 'johnc' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[891]: add 'johnc' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[891]: add 'johnc' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for maccum.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[896]: new group: name=maccum, GID=5004
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[896]: new user: name=maccum, UID=5004, GID=5004, home=/home/maccum, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account maccum.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[901]: add 'maccum' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[901]: add 'maccum' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[901]: add 'maccum' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[901]: add 'maccum' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for wang.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[906]: new group: name=wang, GID=5005
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[906]: new user: name=wang, UID=5005, GID=5005, home=/home/wang, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Running command ['blkid', '-tTYPE=iso9660', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account wang.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[912]: add 'wang' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[912]: add 'wang' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[912]: add 'wang' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[912]: add 'wang' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for jbloom.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[917]: new group: name=jbloom, GID=5006
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[917]: new user: name=jbloom, UID=5006, GID=5006, home=/home/jbloom, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account jbloom.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[922]: add 'jbloom' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[922]: add 'jbloom' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[922]: add 'jbloom' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[922]: add 'jbloom' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for jigold.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[927]: new group: name=jigold, GID=5007
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[927]: new user: name=jigold, UID=5007, GID=5007, home=/home/jigold, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account jigold.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[932]: add 'jigold' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[932]: add 'jigold' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[932]: add 'jigold' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[932]: add 'jigold' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for tpoterba.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[937]: new group: name=tpoterba, GID=5008
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[937]: new user: name=tpoterba, UID=5008, GID=5008, home=/home/tpoterba, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Running command ['blkid', '-tLABEL=cidata', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account tpoterba.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[943]: add 'tpoterba' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[943]: add 'tpoterba' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[943]: add 'tpoterba' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[943]: add 'tpoterba' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for dking.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[948]: new group: name=dking, GID=5009
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[948]: new user: name=dking, UID=5009, GID=5009, home=/home/dking, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account dking.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[886]: INTEGRITY_RULE file="/sbin/apparmor_parser" hash="sha256:1287c7457d3178011d7eaf321e70d85bae8325dfefe836418ae02098ca47b2d9" ppid=687 pid=886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dockerd" exe="/usr/bin/dockerd"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[953]: add 'dking' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[953]: add 'dking' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[953]: add 'dking' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[953]: add 'dking' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for cseed.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[723]: I0808 13:03:39.155886 723 main.go:310] Device policy initialized successfully!
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Initialize device policy.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[959]: new group: name=cseed, GID=5010
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[959]: new user: name=cseed, UID=5010, GID=5010, home=/home/cseed, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account cseed.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[965]: add 'cseed' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[965]: add 'cseed' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Chromium OS system update service.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[965]: add 'cseed' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[965]: add 'cseed' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for labbott.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] cloud-init[DEBUG]: No local datasource found
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: Read 11 bytes from /proc/uptime
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[658]: [CLOUDINIT] util.py[DEBUG]: cloud-init mode 'init' took 0.457 seconds (0.46)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[971]: new group: name=labbott, GID=5011
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[971]: new user: name=labbott, UID=5011, GID=5011, home=/home/labbott, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Metrics Daemon.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account labbott.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[978]: add 'labbott' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[978]: add 'labbott' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[978]: add 'labbott' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started GCI Device Policy Service.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[978]: add 'labbott' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for teamcity.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[985]: new group: name=teamcity, GID=5012
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[985]: new user: name=teamcity, UID=5012, GID=5012, home=/home/teamcity, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[981]: I0808 13:03:39.182903 981 main.go:317] Started in monitor mode
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account teamcity.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[981]: I0808 13:03:39.185575 981 main.go:335] Detected instance ID is: 5915659819976971506
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[996]: add 'teamcity' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[996]: add 'teamcity' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[996]: add 'teamcity' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[996]: add 'teamcity' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for pschulz.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[981]: I0808 13:03:39.190865 981 manager_impl.go:230] Skipping device policy update. Current: {metrics_enabled:false target_version_prefix:"10323.12.0" update_scatter_seconds:33 reboot_after_update:false }
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1001]: new group: name=pschulz, GID=5013
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1001]: new user: name=pschulz, UID=5013, GID=5013, home=/home/pschulz, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account pschulz.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[981]: E0808 13:03:39.196781 981 main.go:388] The name org.chromium.UpdateEngine was not provided by any .service files
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[977]: INTEGRITY_RULE file="/usr/bin/metrics_daemon" hash="sha256:f752c2e3b82adbbb6be07ab56b160f7574379fbea0a0ff05562f9b7daedd0c42" ppid=1 pid=977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(s_daemon)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1006]: add 'pschulz' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1006]: add 'pschulz' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1006]: add 'pschulz' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[977]: INTEGRITY_RULE file="/usr/lib64/libbrillo-http-395517.so" hash="sha256:ad2ad707334111f7eec904cf1868b466ec539d860b325911e63f4e3eb0ac1652" ppid=1 pid=977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="metrics_daemon" exe="/usr/bin/metrics_daemon"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1006]: add 'pschulz' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Initial cloud-init job (pre-networking).
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for ec2-user.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[977]: INTEGRITY_RULE file="/usr/lib64/libbrillo-streams-395517.so" hash="sha256:5ddee26cb2c355e130a5d2a6f0484a70daac57a82b51795bb55217591e7c2c89" ppid=1 pid=977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="metrics_daemon" exe="/usr/bin/metrics_daemon"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1012]: new group: name=ec2-user, GID=5014
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1012]: new user: name=ec2-user, UID=5014, GID=5014, home=/home/ec2-user, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account ec2-user.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1018]: add 'ec2-user' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/version.cycle for reading: No such file or directory
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/version.cycle for reading: No such file or directory
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1018]: add 'ec2-user' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1018]: add 'ec2-user' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1018]: add 'ec2-user' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for aganna.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1023]: new group: name=aganna, GID=5015
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1023]: new user: name=aganna, UID=5015, GID=5015, home=/home/aganna, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account aganna.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1029]: add 'aganna' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1029]: add 'aganna' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1029]: add 'aganna' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1029]: add 'aganna' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[954]: AVC apparmor="STATUS" operation="profile_load" name="docker-default" pid=954 comm="apparmor_parser"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for andrea.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1034]: new group: name=andrea, GID=5016
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1034]: new user: name=andrea, UID=5016, GID=5016, home=/home/andrea, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account andrea.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1040]: add 'andrea' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1040]: add 'andrea' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1040]: add 'andrea' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [INFO:metrics_daemon.cc(382)] uploader enabled
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1040]: add 'andrea' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [INFO:metrics_daemon.cc(382)] uploader enabled
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: WARNING Could not update the authorized keys file for user root. [Errno 30] Read-only file system: '/root/.ssh'.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Creating a new user account for cotton.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1045]: new group: name=cotton, GID=5017
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 useradd[1045]: new user: name=cotton, UID=5017, GID=5017, home=/home/cotton, shell=/bin/bash
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-accounts[835]: INFO Created user account cotton.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1050]: add 'cotton' to group 'docker'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1050]: add 'cotton' to group 'adm'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1050]: add 'cotton' to group 'video'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 usermod[1050]: add 'cotton' to group 'google-sudoers'
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[966]: INTEGRITY_RULE file="/usr/sbin/update_engine" hash="sha256:c08fe29a5e2df14d071f0ff481ce36d2b1b6734b09586c01023c0a65bf53b9fb" ppid=1 pid=966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(e_engine)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[966]: INTEGRITY_RULE file="/lib64/libbz2.so.1.0.6" hash="sha256:16fbedaffe40ef84cf6d7a0ab6b3bb273c1a8de483793a3000cf10173e7c8bde" ppid=1 pid=966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="update_engine" exe="/usr/sbin/update_engine"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[966]: INTEGRITY_RULE file="/usr/lib64/libbspatch.so" hash="sha256:2e06e67d1ea5c610887417a02d20ef17631c30c73befb1e7536cc7b09285db7c" ppid=1 pid=966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="update_engine" exe="/usr/sbin/update_engine"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[966]: INTEGRITY_RULE file="/usr/lib64/libpuffpatch.so" hash="sha256:adbec1f414192ba944cb459341100b0a73802e9ee6669b0095e3d26a068a93a9" ppid=1 pid=966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="update_engine" exe="/usr/sbin/update_engine"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[966]: INTEGRITY_RULE file="/usr/lib64/libbrotlidec.so.1.0.1" hash="sha256:8142657b133ba5686ffc8b7a0b0978bf00938a76ad15e1c880b1473706df16e6" ppid=1 pid=966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="update_engine" exe="/usr/sbin/update_engine"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[966]: INTEGRITY_RULE file="/usr/lib64/libbrotlicommon.so.1.0.1" hash="sha256:3761f9fd63054b38760372a4e922acaffe786aab587129c0016f4dd981aa0204" ppid=1 pid=966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="update_engine" exe="/usr/sbin/update_engine"
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:main.cc(113)] Chrome OS Update Engine starting
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:boot_control_chromeos.cc(127)] Booted from slot 0 (slot A) of 2 slots present on disk /dev/sda
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:real_system_state.cc(73)] Booted in dev mode.
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:prefs.cc(122)] boot-id not present in /var/lib/update_engine/prefs
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:omaha_request_params.cc(66)] Initializing parameters for this update attempt | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:omaha_request_params.cc(179)] Download channel for this attempt = beta-channel | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:omaha_request_params.cc(77)] Running from channel beta-channel | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:ERROR:object_proxy.cc(582)] Failed to call method: org.chromium.flimflam.Manager.GetProperties: object_path= /: org.freedesktop.DBus.Error.ServiceUnknown: The name org.chromium.flimflam was not provided by any .service files | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:ERROR:dbus_method_invoker.h(111)] CallMethodAndBlockWithTimeout(...): Domain=dbus, Code=org.freedesktop.DBus.Error.ServiceUnknown, Message=The name org.chromium.flimflam was not provided by any .service files | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(861)] Payload Attempt Number = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(869)] Full Payload Attempt Number = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(881)] Current URL Index = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(534)] Current download source: Unknown | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(925)] Current URL (Url0)'s Failure Count = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(914)] URL Switch Count = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1006)] Update Timestamp Start = 8/8/2018 13:03:39 GMT | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1081)] Update Duration Uptime = 0s | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1122)] Current bytes downloaded for HttpsServer = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1147)] Total bytes downloaded for HttpsServer = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1122)] Current bytes downloaded for HttpServer = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1147)] Total bytes downloaded for HttpServer = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1122)] Current bytes downloaded for HttpPeer = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1147)] Total bytes downloaded for HttpPeer = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(751)] Number of Reboots during current update attempt = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1159)] Num Responses Seen = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:prefs.cc(122)] rollback-version not present in /mnt/stateful_partition/unencrypted/preserve/update_engine/prefs | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1344)] p2p First Attempt Timestamp = 1/1/1601 0:00:00 GMT | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:payload_state.cc(1329)] p2p Num Attempts = 0 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:daemon.cc(89)] Waiting for DBus object to be registered. | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:update_manager-inl.h(52)] ChromeOSPolicy::UpdateCheckAllowed: START | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:WARNING:evaluation_context-inl.h(43)] Error reading Variable update_disabled: "No value set for update_disabled" | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:WARNING:evaluation_context-inl.h(43)] Error reading Variable release_channel_delegated: "No value set for release_channel_delegated" | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:chromeos_policy.cc(317)] Periodic check interval not satisfied, blocking until 8/8/2018 13:06:00 GMT | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:update_manager-inl.h(74)] ChromeOSPolicy::UpdateCheckAllowed: END | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:prefs.cc(122)] attempt-in-progress not present in /var/lib/update_engine/prefs | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:update_manager-inl.h(52)] ChromeOSPolicy::P2PEnabled: START | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:WARNING:evaluation_context-inl.h(43)] Error reading Variable au_p2p_enabled: "No value set for au_p2p_enabled" | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:WARNING:evaluation_context-inl.h(43)] Error reading Variable owner: "No value set for owner" | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:update_manager-inl.h(74)] ChromeOSPolicy::P2PEnabled: END | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:update_attempter.cc(1466)] Not starting p2p at startup since our application is not sharing any files. | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:update_manager-inl.h(52)] ChromeOSPolicy::P2PEnabledChanged: START | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:WARNING:evaluation_context-inl.h(43)] Error reading Variable au_p2p_enabled: "No value set for au_p2p_enabled" | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:WARNING:evaluation_context-inl.h(43)] Error reading Variable owner: "No value set for owner" | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130339:INFO:update_manager-inl.h(74)] ChromeOSPolicy::P2PEnabledChanged: END | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: Cloud-init v. 0.7.6 running 'init' at Wed, 08 Aug 2018 13:03:39 +0000. Up 9.46 seconds. | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.6 running 'init' at Wed, 08 Aug 2018 13:03:39 +0000. Up 9.46 seconds. | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/log/cloud-init.log - ab: [420] 0 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /var/log/cloud-init.log to 202:4 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Running command ['ifconfig', '-a'] with allowed return codes [0] (shell=False, capture=True) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1057]: INTEGRITY_RULE file="/bin/ifconfig" hash="sha256:3604095ac47042ed0513c656f36dac084e5c92180593c5fb2ad84aa3298a0717" ppid=1017 pid=1057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Running command ['netstat', '-rn'] with allowed return codes [0] (shell=False, capture=True) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1058]: INTEGRITY_RULE file="/bin/netstat" hash="sha256:3652c1afa2077decdfb1c4e85c102239f540a826f84dc24abbaf0e43d7844dc7" ppid=1017 pid=1058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7" | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: ++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++ | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: +--------+------+------------+-----------------+-------------------+ | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: | Device | Up | Address | Mask | Hw-Address | | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: +--------+------+------------+-----------------+-------------------+ | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: | lo: | True | 127.0.0.1 | 255.0.0.0 | . | | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: | eth0: | True | 10.240.0.7 | 255.255.255.255 | 42:01:0a:f0:00:07 | | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: +--------+------+------------+-----------------+-------------------+ | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: ++++++++++++++++++++++++++++++++Route info++++++++++++++++++++++++++++++++ | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: +-------+-------------+------------+-----------------+-----------+-------+ | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: +-------+-------------+------------+-----------------+-----------+-------+ | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: | 0 | 0.0.0.0 | 10.240.0.1 | 0.0.0.0 | eth0 | UG | | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: | 1 | 0.0.0.0 | 10.240.0.1 | 0.0.0.0 | eth0 | UG | | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: | 2 | 10.240.0.1 | 0.0.0.0 | 255.255.255.255 | eth0 | UH | | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: | 3 | 10.240.0.1 | 0.0.0.0 | 255.255.255.255 | eth0 | UH | | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: ci-info: +-------+-------------+------------+-----------------+-----------+-------+ | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] cloud-init[DEBUG]: Checking to see if files that we need already exist from a previous run that would allow us to stop early. | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/data/no-net (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/instance/obj.pkl (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] cloud-init[DEBUG]: Execution continuing, no previous run detected that would allow us to stop early. | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/instance/obj.pkl (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'> | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Looking for for data source in: ['GCE', 'NoCloud', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM', 'NETWORK'] | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Searching for data source in: ['DataSourceGCE', 'DataSourceNoCloudNet', 'DataSourceNone'] | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceGCE.DataSourceGCE'> | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: [0/6] open 'http://metadata.google.internal/computeMetadata/v1/instance/id' with {'url': 'http://metadata.google.internal/computeMetadata/v1/instance/id', 'headers': {'X-Google-Metadata-Request': True}, 'allow_redirects': True, 'method': 'GET'} configuration | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: Read from http://metadata.google.internal/computeMetadata/v1/instance/id (200, 19b) after 1 attempts | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: [0/6] open 'http://metadata.google.internal/computeMetadata/v1/instance/zone' with {'url': 'http://metadata.google.internal/computeMetadata/v1/instance/zone', 'headers': {'X-Google-Metadata-Request': True}, 'allow_redirects': True, 'method': 'GET'} configuration | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: Read from http://metadata.google.internal/computeMetadata/v1/instance/zone (200, 41b) after 1 attempts | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: [0/6] open 'http://metadata.google.internal/computeMetadata/v1/instance/hostname' with {'url': 'http://metadata.google.internal/computeMetadata/v1/instance/hostname', 'headers': {'X-Google-Metadata-Request': True}, 'allow_redirects': True, 'method': 'GET'} configuration | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: Read from http://metadata.google.internal/computeMetadata/v1/instance/hostname (200, 61b) after 1 attempts | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: [0/6] open 'http://metadata.google.internal/computeMetadata/v1/project/attributes/sshKeys' with {'url': 'http://metadata.google.internal/computeMetadata/v1/project/attributes/sshKeys', 'headers': {'X-Google-Metadata-Request': True}, 'allow_redirects': True, 'method': 'GET'} configuration | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: Read from http://metadata.google.internal/computeMetadata/v1/project/attributes/sshKeys (200, 10593b) after 1 attempts | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: [0/6] open 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/user-data' with {'url': 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/user-data', 'headers': {'X-Google-Metadata-Request': True}, 'allow_redirects': True, 'method': 'GET'} configuration | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] url_helper.py[DEBUG]: Read from http://metadata.google.internal/computeMetadata/v1/instance/attributes/user-data (200, 3812b) after 1 attempts | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[INFO]: Loaded datasource DataSourceGCE - DataSourceGCE | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/cmdline (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 665 bytes from /proc/cmdline | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 2338 bytes from /etc/cloud/cloud.cfg | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 2338 with allowed root types (<type 'dict'>,) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 1910 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 1910 with allowed root types (<type 'dict'>,) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Creating symbolic link from '/var/lib/cloud/instance' => '/var/lib/cloud/instances/5915659819976971506' | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/instances/5915659819976971506/datasource (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instances/5915659819976971506/datasource - wb: [420] 29 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-datasource - wb: [420] 29 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/data/instance-id (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/data/instance-id - wb: [420] 20 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-instance-id - wb: [420] 20 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] cloud-init[DEBUG]: init will now be targeting instance id: 5915659819976971506 | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/cmdline (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 665 bytes from /proc/cmdline | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 2338 bytes from /etc/cloud/cloud.cfg | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 2338 with allowed root types (<type 'dict'>,) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 1910 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 1910 with allowed root types (<type 'dict'>,) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instance/obj.pkl - wb: [256] 19012 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instances/5915659819976971506/user-data.txt - wb: [384] 3812 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 3812 with allowed root types (<type 'dict'>,) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instances/5915659819976971506/user-data.txt.i - wb: [384] 4154 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instances/5915659819976971506/vendor-data.txt - wb: [384] 4 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instances/5915659819976971506/vendor-data.txt.i - wb: [384] 345 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'> | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instances/5915659819976971506/sem/consume_data - wb: [420] 20 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] helpers.py[DEBUG]: Running consume_data using lock (<FileLock using file '/var/lib/cloud/instances/5915659819976971506/sem/consume_data'>) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Added default handler for set(['text/cloud-config-jsonp', 'text/cloud-config']) from CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Added default handler for set(['text/x-shellscript']) from ShellScriptPartHandler: [['text/x-shellscript']] | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Added default handler for set(['text/cloud-boothook']) from BootHookPartHandler: [['text/cloud-boothook']] | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Added default handler for set(['text/upstart-job']) from UpstartJobPartHandler: [['text/upstart-job']] | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler BootHookPartHandler: [['text/cloud-boothook']] (__begin__, None, 2) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler UpstartJobPartHandler: [['text/upstart-job']] (__begin__, None, 2) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (__begin__, None, 3) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler ShellScriptPartHandler: [['text/x-shellscript']] (__begin__, None, 2) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: {'Content-Type': 'text/cloud-config', 'Content-Disposition': 'attachment; filename="part-001"', 'MIME-Version': '1.0'} | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (text/cloud-config, part-001, 3) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 3812 with allowed root types (<type 'dict'>,) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] cloud_config.py[DEBUG]: Merging by applying [('dict', ['replace']), ('list', []), ('str', [])] | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler BootHookPartHandler: [['text/cloud-boothook']] (__end__, None, 2) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler UpstartJobPartHandler: [['text/upstart-job']] (__end__, None, 2) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (__end__, None, 3) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instances/5915659819976971506/cloud-config.txt - wb: [384] 4059 bytes | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] __init__.py[DEBUG]: Calling handler ShellScriptPartHandler: [['text/x-shellscript']] (__end__, None, 2) with frequency once-per-instance | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: no vendordata from datasource | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/cmdline (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 665 bytes from /proc/cmdline | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 2338 bytes from /etc/cloud/cloud.cfg | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 2338 with allowed root types (<type 'dict'>,) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 1910 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 1910 with allowed root types (<type 'dict'>,) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/instance/cloud-config.txt (quiet=False) | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 4059 bytes from /var/lib/cloud/instance/cloud-config.txt | |
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 4059 with allowed root types (<type 'dict'>,)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /var/lib/cloud/instance/cloud-config.txt (quiet=False)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 4059 bytes from /var/lib/cloud/instance/cloud-config.txt
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Attempting to load yaml from string of length 4059 with allowed root types (<type 'dict'>,)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Running module bootcmd (<module 'cloudinit.config.cc_bootcmd' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_bootcmd.pyc'>) with frequency once
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_bootcmd.once - wb: [420] 20 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] helpers.py[DEBUG]: Running config-bootcmd using lock (<FileLock using file '/var/lib/cloud/sem/config_bootcmd.once'>)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] cc_bootcmd.py[DEBUG]: Skipping module named bootcmd, no 'bootcmd' key in configuration
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Running module update_etc_hosts (<module 'cloudinit.config.cc_update_etc_hosts' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_update_etc_hosts.pyc'>) with frequency always
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] helpers.py[DEBUG]: Running config-update_etc_hosts using lock (<cloudinit.helpers.DummyLock object at 0x7fcd7729cc90>)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] cc_update_etc_hosts.py[DEBUG]: Configuration option 'manage_etc_hosts' is not set, not managing /etc/hosts in module update_etc_hosts
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Running module users-groups (<module 'cloudinit.config.cc_users_groups' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_users_groups.pyc'>) with frequency once
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_users_groups.once - wb: [420] 20 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] helpers.py[DEBUG]: Running config-users-groups using lock (<FileLock using file '/var/lib/cloud/sem/config_users_groups.once'>)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Running module write-files (<module 'cloudinit.config.cc_write_files' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_write_files.pyc'>) with frequency always
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] helpers.py[DEBUG]: Running config-write-files using lock (<cloudinit.helpers.DummyLock object at 0x7fcd7729c1d0>)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /etc/systemd/system/kube-node-installation.service - wb: [420] 887 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /etc/systemd/system/kube-node-installation.service to 0:-1
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /etc/systemd/system/kube-node-configuration.service - wb: [420] 284 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /etc/systemd/system/kube-node-configuration.service to 0:-1
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /etc/systemd/system/kube-docker-monitor.service - wb: [420] 338 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /etc/systemd/system/kube-docker-monitor.service to 0:-1
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /etc/systemd/system/kubelet-monitor.service - wb: [420] 340 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /etc/systemd/system/kubelet-monitor.service to 0:-1
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /etc/systemd/system/kube-logrotate.timer - wb: [420] 117 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /etc/systemd/system/kube-logrotate.timer to 0:-1
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /etc/systemd/system/kube-logrotate.service - wb: [420] 194 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /etc/systemd/system/kube-logrotate.service to 0:-1
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /etc/systemd/system/kubernetes.target - wb: [420] 68 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Changing the ownership of /etc/systemd/system/kubernetes.target to 0:-1
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] stages.py[DEBUG]: Running module rsyslog (<module 'cloudinit.config.cc_rsyslog' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_rsyslog.pyc'>) with frequency once
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_rsyslog.once - wb: [420] 20 bytes
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] helpers.py[DEBUG]: Running config-rsyslog using lock (<FileLock using file '/var/lib/cloud/sem/config_rsyslog.once'>)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] cc_rsyslog.py[DEBUG]: Skipping module named rsyslog, no 'rsyslog' key in configuration
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] cloud-init[DEBUG]: Ran 5 modules with 0 failures
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: Read 11 bytes from /proc/uptime
Aug 08 13:03:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1017]: [CLOUDINIT] util.py[DEBUG]: cloud-init mode 'init' took 0.336 seconds (0.34)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Time has been changed
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-clock-skew[829]: INFO Synced system time with hardware clock.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Initial cloud-init job (metadata service crawler).
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reached target Cloud-config availability.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Apply the settings specified in cloud-config...
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: Cloud-init v. 0.7.6 running 'modules:config' at Wed, 08 Aug 2018 13:03:40 +0000. Up 10.03 seconds.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.6 running 'modules:config' at Wed, 08 Aug 2018 13:03:40 +0000. Up 10.03 seconds.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] stages.py[DEBUG]: Running module emit_upstart (<module 'cloudinit.config.cc_emit_upstart' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_emit_upstart.pyc'>) with frequency always
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] helpers.py[DEBUG]: Running config-emit_upstart using lock (<cloudinit.helpers.DummyLock object at 0x7f9b52ec4ed0>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_emit_upstart.py[DEBUG]: Skipping module named emit_upstart, no /sbin/initctl located
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] stages.py[DEBUG]: Running module mounts (<module 'cloudinit.config.cc_mounts' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_mounts.pyc'>) with frequency once
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_mounts.once - wb: [420] 20 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] helpers.py[DEBUG]: Running config-mounts using lock (<FileLock using file '/var/lib/cloud/sem/config_mounts.once'>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_mounts.py[DEBUG]: Attempting to determine the real name of ephemeral0
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_mounts.py[DEBUG]: Ignoring nonexistant default named mount ephemeral0
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_mounts.py[DEBUG]: Attempting to determine the real name of swap
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_mounts.py[DEBUG]: Ignoring nonexistant default named mount swap
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_mounts.py[DEBUG]: no need to setup swap
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_mounts.py[DEBUG]: No modifications to fstab needed.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] stages.py[DEBUG]: Running module ssh-import-id (<module 'cloudinit.config.cc_ssh_import_id' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_ssh_import_id.pyc'>) with frequency once
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_ssh_import_id.once - wb: [420] 20 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] helpers.py[DEBUG]: Running config-ssh-import-id using lock (<FileLock using file '/var/lib/cloud/sem/config_ssh_import_id.once'>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] stages.py[DEBUG]: Running module timezone (<module 'cloudinit.config.cc_timezone' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_timezone.pyc'>) with frequency once
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_timezone.once - wb: [420] 20 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] helpers.py[DEBUG]: Running config-timezone using lock (<FileLock using file '/var/lib/cloud/sem/config_timezone.once'>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_timezone.py[DEBUG]: Skipping module named timezone, no 'timezone' specified
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] stages.py[DEBUG]: Running module disable-ec2-metadata (<module 'cloudinit.config.cc_disable_ec2_metadata' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_disable_ec2_metadata.pyc'>) with frequency always
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] helpers.py[DEBUG]: Running config-disable-ec2-metadata using lock (<cloudinit.helpers.DummyLock object at 0x7f9b52ec4d10>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cc_disable_ec2_metadata.py[DEBUG]: Skipping module named disable-ec2-metadata, disabling the ec2 route not enabled
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] stages.py[DEBUG]: Running module runcmd (<module 'cloudinit.config.cc_runcmd' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_runcmd.pyc'>) with frequency always
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] helpers.py[DEBUG]: Running config-runcmd using lock (<cloudinit.helpers.DummyLock object at 0x7f9b52ec4190>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: Shellified 9 commands.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instances/5915659819976971506/scripts/runcmd - wb: [448] 364 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] cloud-init[DEBUG]: Ran 6 modules with 0 failures
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: Read 12 bytes from /proc/uptime
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1062]: [CLOUDINIT] util.py[DEBUG]: cloud-init mode 'modules' took 0.100 seconds (0.10)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Apply the settings specified in cloud-config.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Execute cloud user/final scripts...
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Cloud-init v. 0.7.6 running 'modules:final' at Wed, 08 Aug 2018 13:03:40 +0000. Up 10.37 seconds.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.6 running 'modules:final' at Wed, 08 Aug 2018 13:03:40 +0000. Up 10.37 seconds.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module rightscale_userdata (<module 'cloudinit.config.cc_rightscale_userdata' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_rightscale_userdata.pyc'>) with frequency once
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_rightscale_userdata.once - wb: [420] 20 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-rightscale_userdata using lock (<FileLock using file '/var/lib/cloud/sem/config_rightscale_userdata.once'>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] cc_rightscale_userdata.py[DEBUG]: Failed to get raw userdata in module rightscale_userdata
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module scripts-vendor (<module 'cloudinit.config.cc_scripts_vendor' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_scripts_vendor.pyc'>) with frequency once
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_scripts_vendor.once - wb: [420] 20 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-scripts-vendor using lock (<FileLock using file '/var/lib/cloud/sem/config_scripts_vendor.once'>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module scripts-per-once (<module 'cloudinit.config.cc_scripts_per_once' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_scripts_per_once.pyc'>) with frequency once
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_scripts_per_once.once - wb: [420] 20 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-scripts-per-once using lock (<FileLock using file '/var/lib/cloud/sem/config_scripts_per_once.once'>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module scripts-per-boot (<module 'cloudinit.config.cc_scripts_per_boot' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_scripts_per_boot.pyc'>) with frequency once
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_scripts_per_boot.once - wb: [420] 20 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-scripts-per-boot using lock (<FileLock using file '/var/lib/cloud/sem/config_scripts_per_boot.once'>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module scripts-per-instance (<module 'cloudinit.config.cc_scripts_per_instance' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_scripts_per_instance.pyc'>) with frequency once
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_scripts_per_instance.once - wb: [420] 20 bytes
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-scripts-per-instance using lock (<FileLock using file '/var/lib/cloud/sem/config_scripts_per_instance.once'>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) with frequency always
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-scripts-user using lock (<cloudinit.helpers.DummyLock object at 0x7fba7161fe50>)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=False, capture=False)
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1071]: INTEGRITY_RULE file="/var/lib/cloud/instances/5915659819976971506/scripts/runcmd" hash="sha256:dc5f46fd2da1be76055280bd36c86767199027b7a24d2196873277128924e34e" ppid=1067 pid=1071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7"
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1072]: INTEGRITY_RULE file="/usr/bin/systemctl" hash="sha256:e74db6a4b86fd0ae50fd6429e9a6a52b58e2efe77612ea2798eb51656a5523ed" ppid=1071 pid=1072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runcmd" exe="/bin/bash"
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Created symlink /etc/systemd/system/kubernetes.target.wants/kube-node-installation.service → /etc/systemd/system/kube-node-installation.service.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Created symlink /etc/systemd/system/kubernetes.target.wants/kube-node-configuration.service → /etc/systemd/system/kube-node-configuration.service.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Created symlink /etc/systemd/system/kubernetes.target.wants/kube-docker-monitor.service → /etc/systemd/system/kube-docker-monitor.service.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Created symlink /etc/systemd/system/kubernetes.target.wants/kubelet-monitor.service → /etc/systemd/system/kubelet-monitor.service.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Created symlink /etc/systemd/system/kubernetes.target.wants/kube-logrotate.timer → /etc/systemd/system/kube-logrotate.timer.
Aug 08 13:03:40 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Created symlink /etc/systemd/system/kubernetes.target.wants/kube-logrotate.service → /etc/systemd/system/kube-logrotate.service.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:41.039310477Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:41.040018440Z" level=info msg="Loading containers: start."
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: Bridge firewalling registered
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:41.050517161Z" level=info msg="Firewalld running: false"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1171]: INTEGRITY_RULE file="/usr/lib64/xtables/libxt_conntrack.so" hash="sha256:6810701bc8440c97fb5fee7e3cb3332d2420f62e869d083371325bf1b712709c" ppid=687 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1177]: INTEGRITY_RULE file="/usr/lib64/xtables/libxt_addrtype.so" hash="sha256:6a4a2e53cea3bce81e06e339929e854cff4f562ba32a17ff034aaa052d806a81" ppid=687 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Created symlink /etc/systemd/system/multi-user.target.wants/kubernetes.target → /etc/systemd/system/kubernetes.target.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1218]: INTEGRITY_RULE file="/usr/lib64/xtables/libipt_MASQUERADE.so" hash="sha256:16ef5f1e2bdd01653fa4b75cc829ebac7603c8cdb6fe728ee2b3cd1e8f701f0f" ppid=687 pid=1218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Download and install k8s binaries and configurations...
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Hourly kube-logrotate invocation.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:41.169261345Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: EXT4-fs (sda1): re-mounted. Opts: commit=30,data=ordered
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:41.202300510Z" level=info msg="Loading containers: done."
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1302]: INTEGRITY_RULE file="/bin/chmod" hash="sha256:8ee704ee6e399f29d23b37223b4c80a1a5a39fd0752c6d913d9bcb176b4bb930" ppid=1 pid=1302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(chmod)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1305]: INTEGRITY_RULE file="/home/kubernetes/bin/configure.sh" hash="sha256:6491326cd80c6cc10d7ab7a4b6bb8497f6e3bbb7e79ffa46f00bf5bbeb823f7d" ppid=1 pid=1305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(igure.sh)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: Start to install kubernetes files
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1312]: INTEGRITY_RULE file="/usr/bin/python-wrapper" hash="sha256:3061290d55a156bfba1e5c054178d3ee87770d27b37a3d5bacdd7b95f1e3050a" ppid=1311 pid=1312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure.sh" exe="/bin/bash"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1316]: INTEGRITY_RULE file="/bin/tr" hash="sha256:e3bb9a0ff998a6452a2652ad9fe93913ed17426812718c0618b769c0a45859be" ppid=1314 pid=1316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure.sh" exe="/bin/bash"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: kubernetes-server-linux-amd64.tar.gz is preloaded.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: Downloading node problem detector.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]:                                  Dload  Upload   Total   Spent    Left  Speed
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1299]: INTEGRITY_RULE file="/usr/bin/runc" hash="sha256:d22360fba55a9eea94c7764b77e322d5922a91318f77b17fd60ef71f4008637f" ppid=687 pid=1299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dockerd" exe="/usr/bin/dockerd"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1325]: INTEGRITY_RULE file="/usr/bin/tini" hash="sha256:a3024637b24a5e1b8f2add0d3035c221ae5c860c4f6fbf288a857943562a363d" ppid=687 pid=1325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dockerd" exe="/usr/bin/dockerd"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:41.545364538Z" level=info msg="Daemon has completed initialization"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:41.545416072Z" level=info msg="Docker daemon" commit=f5ec1e2 graphdriver=overlay version=17.03.2-ce
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:41.551240693Z" level=info msg="API listen on /var/run/docker.sock"
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Docker Application Container Engine.
Aug 08 13:03:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[692]: INTEGRITY_RULE file="/sbin/agetty" hash="sha256:44c1c3f7a86ba67e1f16536fce141a081b28c6e48692051cb8fc203be984a13a" ppid=1 pid=692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(agetty)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: [1.3K blob data]
Aug 08 13:03:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1329]: INTEGRITY_RULE file="/usr/bin/sha1sum" hash="sha256:c5e5033951d2d051b6a8412aab455294f2ec529aec279e2bc1600d21dd242a7d" ppid=1328 pid=1329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure.sh" exe="/bin/bash"
Aug 08 13:03:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: == Downloaded https://storage.googleapis.com/kubernetes-release/node-problem-detector/node-problem-detector-v0.4.1.tar.gz (SHA1 = a57a3fe64cab8a18ec654f5cef0aec59dae62568) ==
Aug 08 13:03:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1332]: INTEGRITY_RULE file="/bin/tar" hash="sha256:499973a883c8a950118784b416b04bd39fd0c1d42da04fb22ef4f69e0e0c22f1" ppid=1305 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure.sh" exe="/bin/bash"
Aug 08 13:03:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1333]: INTEGRITY_RULE file="/bin/gzip" hash="sha256:3b02f615eb67506984eda582728fb7657070c339179113db184be8b75136f31a" ppid=1332 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tar" exe="/bin/tar"
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1334]: INTEGRITY_RULE file="/bin/mv" hash="sha256:6ee04af6a9560da8304d298561d76ecc42ce167cd37496cbbee5f7975b2b6c83" ppid=1305 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure.sh" exe="/bin/bash"
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz is preloaded.
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: kubernetes-manifests.tar.gz is preloaded.
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: mounter is preloaded.
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: EXT4-fs (sda1): re-mounted. Opts: commit=30,data=ordered
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure.sh[1305]: Done for installing kubernetes files
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Download and install k8s binaries and configurations.
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Configure kubernetes node...
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1356]: INTEGRITY_RULE file="/home/kubernetes/bin/configure-helper.sh" hash="sha256:af2cc0a67138ce7d7e0b90c7866c35240c45759e18dd02e373c61fb3464b0460" ppid=1 pid=1356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(elper.sh)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Start to configure instance for kubernetes
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1365]: INTEGRITY_RULE file="/bin/dd" hash="sha256:afc84e0b7f78721d72cc70e5207c0a948889401b1ae985f88fe5557010898d16" ppid=1361 pid=1365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure-helpe" exe="/bin/bash"
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1363]: INTEGRITY_RULE file="/usr/bin/base64" hash="sha256:fed1b291454a61812e605fd06b04f915ef7e5436cfc1ee17f96523f56c2fbebf" ppid=1361 pid=1363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure-helpe" exe="/bin/bash"
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Configuring IP firewall rules
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: Unsafe core_pattern used with suid_dumpable=2. Pipe handler or fully qualified core dump path required.
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: net.ipv4.conf.all.route_localnet = 1
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Add rules to accept all inbound TCP/UDP/ICMP packets
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Add rules to accept all forwarded TCP/UDP/ICMP packets
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Add rules for ip masquerade
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Creating required directories
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Making /var/lib/kubelet executable for kubelet
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: No local SSD disks found.
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Creating node pki files
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1402]: INTEGRITY_RULE file="/bin/ln" hash="sha256:085d5f728f31abf16f2e2b6f848b10e987c2ddcf160cb9a8f12378de2b6c6657" ppid=1356 pid=1402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure-helpe" exe="/bin/bash"
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Creating kubelet kubeconfig file
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Creating kube-proxy user kubeconfig file
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Creating node-problem-detector kubeconfig file
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: overriding kubectl
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Assemble docker command line flags
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Enable docker registry mirror at: https://mirror.gcr.io
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Extend the docker.service configuration to remove the network checkpiont
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Extend the docker.service configuration to set a higher pids limit
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reloading.
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Docker command line is updated. Restart docker to pick it up
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Stopping Docker Application Container Engine...
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:57.457492241Z" level=info msg="Processing signal 'terminated'"
Aug 08 13:03:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[687]: time="2018-08-08T13:03:57.467333923Z" level=info msg="stopping containerd after receiving terminated"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[981]: E0808 13:03:58.088093 981 main.go:396] SetInstanceStatus returned status 403 (PERMISSION_DENIED): Request had insufficient authentication scopes. | |
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[981]: I0808 13:03:58.088126 981 main.go:370] Using InstanceConfig: update_strategy:"update_disabled" | |
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 device_policy_manager[981]: I0808 13:03:58.088534 981 manager_impl.go:230] Skipping device policy update. Current: {metrics_enabled:false target_version_prefix:"10323.12.0" update_scatter_seconds:33 reboot_after_update:false } | |
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:common_service.cc(79)] Attempt update: app_version="" omaha_url="" flags=0x0 interactive=yes | |
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(768)] Forced update check requested. | |
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:object_proxy.cc(582)] Failed to call method: org.chromium.debugd.QueryDevFeatures: object_path= /org/chromium/debugd: org.freedesktop.DBus.Error.ServiceUnknown: The name org.chromium.debugd was not provided by any .service files | |
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:dbus_method_invoker.h(111)] CallMethodAndBlockWithTimeout(...): Domain=dbus, Code=org.freedesktop.DBus.Error.ServiceUnknown, Message=The name org.chromium.debugd was not provided by any .service files | |
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(1547)] Developer features disabled; disallowing custom update sources. | |
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_manager-inl.h(52)] ChromeOSPolicy::UpdateCheckAllowed: START
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:WARNING:evaluation_context-inl.h(43)] Error reading Variable update_disabled: "No value set for update_disabled"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:WARNING:evaluation_context-inl.h(43)] Error reading Variable release_channel_delegated: "No value set for release_channel_delegated"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:chromeos_policy.cc(280)] Forced update signaled (interactive), allowing update check.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_manager-inl.h(74)] ChromeOSPolicy::UpdateCheckAllowed: END
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(842)] Running interactive update.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(206)] Reporting daily metrics.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:metrics.cc(120)] Uploading 187d6h8m12.598928s for metric UpdateEngine.Daily.OSAgeDays
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(312)] Device policies/settings present
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(465)] Scattering disabled as this is an interactive update check
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(341)] Forcibly disabling use of p2p for downloading since this update attempt is interactive.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:payload_state.cc(534)] Current download source: Unknown
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(1483)] Ensuring that p2p is running.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:p2p_manager.cc(265)] Error spawning ["initctl", "start", "p2p"]
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:update_attempter.cc(1485)] Error starting p2p.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(375)] Forcibly disabling use of p2p since starting p2p or performing housekeeping failed.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:payload_state.cc(534)] Current download source: Unknown
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_request_params.cc(66)] Initializing parameters for this update attempt
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_request_params.cc(77)] Running from channel beta-channel
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(391)] No target channel mandated by policy.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(410)] target_version_prefix = 10323.12.0, scatter_factor_in_seconds = 33s
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(415)] Wall Clock Based Wait Enabled = 0, Update Check Count Wait Enabled = 0, Waiting Period = 0s
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(422)] Use p2p For Downloading = 0, Use p2p For Sharing = 0
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(429)] forced to obey proxies
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(432)] proxy manual checks: 1
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] delta-update-failures not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(1159)] Marking booted slot as good.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1433]: INTEGRITY_RULE file="/usr/sbin/chromeos-setgoodkernel" hash="sha256:3785680a993839eb0ad4d8f8dd4d7d945422d391c0d09e7c9d1c45f696506fec" ppid=966 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="update_engine" exe="/usr/sbin/update_engine"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1434]: INTEGRITY_RULE file="/usr/bin/id" hash="sha256:44cd8c4e4d7c0abda1cf8f4d3cb8c3eaa3a099e3583226a71b7b2717862293d3" ppid=1433 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="chromeos-setgoo" exe="/bin/bash"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1439]: INTEGRITY_RULE file="/bin/expr" hash="sha256:27afd110fdd08ce37ae374cdb19348dc82db58b2a299d601b1a202e524ffad7c" ppid=1438 pid=1439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="chromeos-setgoo" exe="/bin/bash"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1448]: INTEGRITY_RULE file="/usr/bin/cgpt" hash="sha256:1cb2245497e024bb92de99d595cb22800b0c7e900fb0cb6e728132c97b9b7e54" ppid=1433 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="chromeos-setgoo" exe="/bin/bash"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1450]: INTEGRITY_RULE file="/usr/bin/crossystem" hash="sha256:6beb996c940dbd7d7154745500bdfe50f985da7d9c602eecc038c4cee86c039e" ppid=1433 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="chromeos-setgoo" exe="/bin/bash"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:subprocess.cc(152)] Subprocess exited with si_status: 1
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:subprocess.cc(156)] Subprocess output:
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: Parameter kernel_max_rollforward is read-only
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(1298)] Scheduling an action processor start.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:action_processor.cc(46)] ActionProcessor: starting OmahaRequestAction
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] last-active-ping-day not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] last-roll-call-ping-day not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:hardware_chromeos.cc(258)] Failed to get vpd key for first_active_omaha_ping_sent with exit code: 127
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] install-date-days not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_request_action.cc(627)] Not generating Omaha InstallData as we have no prefs file and OOBE is not complete or not enabled.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] previous-version not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_request_action.cc(680)] Posting an Omaha request to https://tools.google.com/service/update2
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_request_action.cc(681)] Request: <?xml version="1.0" encoding="UTF-8"?>
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: <request protocol="3.0" version="ChromeOSUpdateEngine-0.1.0.0" updaterversion="ChromeOSUpdateEngine-0.1.0.0" installsource="ondemandupdate" ismachine="1">
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: <os version="Indy" platform="Chrome OS" sp="10323.12.0_x86_64"></os>
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: <app appid="{76E245CF-C0D0-444D-BA50-36739C18EB00}" version="10323.12.0" track="beta-channel" lang="en-US" board="lakitu-signed-mpkeys" hardware_class="LAKITU DEFAULT" delta_okay="true" fw_version="" ec_version="" >
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: <ping active="1" a="-1" r="-1"></ping>
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: <updatecheck targetversionprefix="10323.12.0"></updatecheck>
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: <event eventtype="54" eventresult="1" previousversion="0.0.0.0"></event>
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: </app>
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: </request>
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:object_proxy.cc(582)] Failed to call method: org.chromium.NetworkProxyServiceInterface.ResolveProxy: object_path= /org/chromium/NetworkProxyService: org.freedesktop.DBus.Error.ServiceUnknown: The name org.chromium.NetworkProxyService was not provided by any .service files
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:http_proxy.cc(27)] org.chromium.NetworkProxyService D-Bus call to ResolveProxy failed
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:libcurl_http_fetcher.cc(146)] Starting/Resuming transfer
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:libcurl_http_fetcher.cc(165)] Using proxy: no
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:libcurl_http_fetcher.cc(308)] Setting up curl options for HTTPS
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] update-server-cert-0-2 not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:metrics.cc(517)] Uploading 0 for metric UpdateEngine.CertificateCheck.UpdateCheck
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] update-server-cert-0-1 not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:metrics.cc(517)] Uploading 0 for metric UpdateEngine.CertificateCheck.UpdateCheck
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] update-server-cert-0-0 not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:metrics.cc(517)] Uploading 0 for metric UpdateEngine.CertificateCheck.UpdateCheck
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:libcurl_http_fetcher.cc(433)] HTTP response code: 200
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:libcurl_http_fetcher.cc(509)] Transfer completed (200), 458 bytes downloaded
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_request_action.cc(940)] Omaha request response: <?xml version="1.0" encoding="UTF-8"?><response protocol="3.0" server="prod"><daystart elapsed_days="4237" elapsed_seconds="21838"/><app appid="{76E245CF-C0D0-444D-BA50-36739C18EB00}" cohort="" cohortname="" status="ok"><ping status="ok"/><updatecheck _firmware_version_0="1.1" _firmware_version_1="1.1" _firmware_version_2="1.1" _kernel_version_0="1.1" _kernel_version_1="1.1" _kernel_version_2="1.1" status="noupdate"/><event status="ok"/></app></response>
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:hardware_chromeos.cc(258)] Failed to get vpd key for first_active_omaha_ping_sent with exit code: 127
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:ERROR:hardware_chromeos.cc(282)] Failed to set vpd key for first_active_omaha_ping_sent with exit code: 127 with error: [0808/130358:ERROR:process.cc(329)] Exec of vpd failed:: No such file or directory
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_request_action.cc(775)] Set the Omaha InstallDate from Omaha Response to 4235 days
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_request_action.cc(813)] No update.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:metrics.cc(142)] Sending 1 for metric UpdateEngine.Check.Result (enum)
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:prefs.cc(122)] metrics-check-last-reporting-time not present in /var/lib/update_engine/prefs
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:action_processor.cc(116)] ActionProcessor: finished OmahaRequestAction with code ErrorCode::kSuccess
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:action_processor.cc(143)] ActionProcessor: starting OmahaResponseHandlerAction
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:omaha_response_handler_action.cc(56)] There are no updates. Aborting.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:action_processor.cc(116)] ActionProcessor: finished OmahaResponseHandlerAction with code ErrorCode::kError
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:action_processor.cc(121)] ActionProcessor: Aborting processing due to failure.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(873)] Processing Done.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_attempter.cc(947)] No update.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_manager-inl.h(52)] ChromeOSPolicy::UpdateCheckAllowed: START
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:WARNING:evaluation_context-inl.h(43)] Error reading Variable update_disabled: "No value set for update_disabled"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:WARNING:evaluation_context-inl.h(43)] Error reading Variable release_channel_delegated: "No value set for release_channel_delegated"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:chromeos_policy.cc(317)] Periodic check interval not satisfied, blocking until 8/8/2018 13:45:13 GMT
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130358:INFO:update_manager-inl.h(74)] ChromeOSPolicy::UpdateCheckAllowed: END
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1495]: INTEGRITY_RULE file="/bin/echo" hash="sha256:5e2e65807f2d7416260cc04c62a37b6aa14c3336632709406a6eb5e20a25676e" ppid=1 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(echo)" exe="/usr/lib/systemd/systemd"
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 echo[1495]: docker daemon exited
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Stopped Docker Application Container Engine.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Closed Docker Socket for the API.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Stopping Docker Socket for the API.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Docker Socket for the API.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Listening on Docker Socket for the API.
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Docker Application Container Engine...
Aug 08 13:03:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 sh[1505]: + rm -rf /var/lib/docker/network
Aug 08 13:03:59 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Docker Application Container Engine.
Aug 08 13:03:59 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Start kubelet
Aug 08 13:03:59 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1546]: INTEGRITY_RULE file="/home/kubernetes/bin/kubelet" hash="sha256:eb264796937e0e95fb09cf805b28b6546cfedd49eb39ae70ab047d87ca46db3d" ppid=1545 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure-helpe" exe="/bin/bash"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Using kubelet binary at /home/kubernetes/bin/kubelet
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes kubelet.
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Start kube-proxy static pod
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1564]: INTEGRITY_RULE file="/bin/touch" hash="sha256:cc4165b10af3012c31ff266d74ec268c5f0ebf0521faf2596141d1b33b9c1309" ppid=1356 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure-helpe" exe="/bin/bash"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1573]: INTEGRITY_RULE file="/bin/chown" hash="sha256:497a658c90080afd7bba33da8297fdb1be37cf36a3465a344164a8cb14390b53" ppid=1356 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="configure-helpe" exe="/bin/bash"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.350663    1562 feature_gate.go:162] feature gates: map[ExperimentalCriticalPodAnnotation:true]
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.350752    1562 controller.go:114] kubelet config controller: starting controller
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.350759    1562 controller.go:118] kubelet config controller: validating combination of defaults and flags
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Start node problem detector
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Using node problem detector binary at /home/kubernetes/bin/node-problem-detector
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes node problem detector.
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1612]: INTEGRITY_RULE file="/usr/bin/systemd-run" hash="sha256:4235a331392aefb1b975d6fdda56a96a391012d7582d9019a22dd4f23f070d80" ppid=1562 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/home/kubernetes/bin/kubelet"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes systemd probe.
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.499677    1562 mount_linux.go:211] Detected OS with systemd
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.499758    1562 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.499803    1562 client.go:95] Start docker client with request timeout=2m0s
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:01.502823    1562 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.509469    1562 feature_gate.go:162] feature gates: map[ExperimentalCriticalPodAnnotation:true]
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.517626    1562 gce.go:913] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.521516    1562 gce.go:913] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(0xc4201aa240)}
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.521556    1562 gce.go:913] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(0xc4201aa240)}
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Prepare containerized mounter
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: EXT4-fs (sda1): re-mounted. Opts: commit=30,data=ordered
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 configure-helper.sh[1356]: Done for the configuration for kubernetes
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Configure kubernetes node.
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Kubernetes log rotation...
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Kubernetes health monitoring for kubelet...
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Starting Kubernetes health monitoring for docker...
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes health monitoring for kubelet.
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes health monitoring for docker.
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1641]: INTEGRITY_RULE file="/home/kubernetes/bin/health-monitor.sh" hash="sha256:c07fbbb1274f042084846b3a788d60648d6ee9692de6f8b31b3b632ade4e4128" ppid=1 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(nitor.sh)" exe="/usr/lib/systemd/systemd"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 health-monitor.sh[1641]: Start kubernetes health monitoring for kubelet
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 health-monitor.sh[1643]: Start kubernetes health monitoring for docker
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 health-monitor.sh[1641]: Wait for 2 minutes for kubelet to be functional
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1648]: INTEGRITY_RULE file="/bin/sleep" hash="sha256:96f0366d3535f1556981c0ccc68c160328eb6f2757971e15642d585b2cd1f344" ppid=1641 pid=1648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="health-monitor." exe="/bin/bash"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1649]: INTEGRITY_RULE file="/usr/bin/timeout" hash="sha256:a58c5c3de98fedf0d7199e4e51e6d241169678259b99c21c75e5bbc17b885662" ppid=1643 pid=1649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="health-monitor." exe="/bin/bash"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1632]: INTEGRITY_RULE file="/usr/sbin/logrotate" hash="sha256:550a7d538f388c0f8193506cc1a66068da45128d16a0332900132a1c6043e29d" ppid=1 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(ogrotate)" exe="/usr/lib/systemd/systemd"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1632]: INTEGRITY_RULE file="/usr/lib64/libpopt.so.0.0.0" hash="sha256:627fe84556d7d5f152b812449ede870c53f4a6e631d735ad0a52135853257317" ppid=1 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="logrotate" exe="/usr/sbin/logrotate"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes log rotation.
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reached target Kubernetes.
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module ssh-authkey-fingerprints (<module 'cloudinit.config.cc_ssh_authkey_fingerprints' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_ssh_authkey_fingerprints.pyc'>) with frequency once
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_ssh_authkey_fingerprints.once - wb: [420] 20 bytes
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-ssh-authkey-fingerprints using lock (<FileLock using file '/var/lib/cloud/sem/config_ssh_authkey_fingerprints.once'>)
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module keys-to-console (<module 'cloudinit.config.cc_keys_to_console' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_keys_to_console.pyc'>) with frequency once
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_keys_to_console.once - wb: [420] 20 bytes
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-keys-to-console using lock (<FileLock using file '/var/lib/cloud/sem/config_keys_to_console.once'>)
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Running command ['/usr/lib/cloud-init/write-ssh-key-fingerprints', '', 'ssh-dss'] with allowed return codes [0] (shell=False, capture=True)
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1652]: INTEGRITY_RULE file="/usr/lib/cloud-init/write-ssh-key-fingerprints" hash="sha256:1eebccc6b85cdc2f77b2102080d99a2465d19e018c772c2090799d6c5779375a" ppid=1067 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cloud-init" exe="/usr/bin/python2.7"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1654]: INTEGRITY_RULE file="/usr/bin/logger" hash="sha256:c323c2e3a25e6b7d1ca893a07e03c2dc44b563a6d4175aa588b0b2d96be91465" ppid=1652 pid=1654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="write-ssh-key-f" exe="/bin/bash"
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 ec2[1654]:
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 ec2[1654]: #############################################################
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 ec2[1654]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 ec2[1654]: -----END SSH HOST KEY FINGERPRINTS-----
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 ec2[1654]: #############################################################
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module phone-home (<module 'cloudinit.config.cc_phone_home' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_phone_home.pyc'>) with frequency once
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_phone_home.once - wb: [420] 20 bytes
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-phone-home using lock (<FileLock using file '/var/lib/cloud/sem/config_phone_home.once'>)
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] cc_phone_home.py[DEBUG]: Skipping module named phone-home, no 'phone_home' configuration found
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] stages.py[DEBUG]: Running module final-message (<module 'cloudinit.config.cc_final_message' from '/usr/lib64/python2.7/site-packages/cloudinit/config/cc_final_message.pyc'>) with frequency always
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] helpers.py[DEBUG]: Running config-final-message using lock (<cloudinit.helpers.DummyLock object at 0x7fba7161fd90>)
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Read 13 bytes from /proc/uptime | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: Cloud-init v. 0.7.6 finished at Wed, 08 Aug 2018 13:04:01 +0000. Datasource DataSourceGCE. Up 31.49 seconds | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.6 finished at Wed, 08 Aug 2018 13:04:01 +0000. Datasource DataSourceGCE. Up 31.49 seconds | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Writing to /var/lib/cloud/instance/boot-finished - wb: [420] 51 bytes | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] cloud-init[DEBUG]: Ran 10 modules with 0 failures | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/result.json' => '../../var/lib/cloud/data/result.json' | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime (quiet=False) | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: Read 13 bytes from /proc/uptime | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 cloud-init[1067]: [CLOUDINIT] util.py[DEBUG]: cloud-init mode 'modules' took 21.122 seconds (21.12) | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Execute cloud user/final scripts. | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.713950 1562 gce.go:506] Network "default" is type legacy - no subnetwork | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.714004 1562 plugins.go:71] Registered KMS plugin "gcp-cloudkms" | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.714027 1562 server.go:301] Successfully initialized cloud provider: "gce" from the config file: "" | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.714061 1562 server.go:534] cloud provider determined current node name to be gke-cs-test-dan-test-pool-bca3c3a7-m055 | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:01.714090 1562 bootstrap.go:57] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1597]: INTEGRITY_RULE file="/home/kubernetes/bin/node-problem-detector" hash="sha256:863e0d46faf3f88edc84f953f566fd50483210334844023a5848a960db5c1abf" ppid=1 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(detector)" exe="/usr/lib/systemd/systemd" | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.783548 1597 log_monitor.go:72] Finish parsing log monitor config file: {WatcherConfig:{Plugin:journald PluginConfig:map[source:kernel] LogPath:/var/log/journal Lookback:5m} BufferSize:10 Source:kernel-monitor DefaultConditions:[{Type:KernelDeadlock Status:false Transition:0001-01-01 00:00:00 +0000 UTC Reason:KernelHasNoDeadlock Message:kernel has no deadlock}] Rules:[{Type:temporary Condition: Reason:OOMKilling Pattern:Kill process \d+ (.+) score \d+ or sacrifice child\nKilled process \d+ (.+) total-vm:\d+kB, anon-rss:\d+kB, file-rss:\d+kB} {Type:temporary Condition: Reason:TaskHung Pattern:task \S+:\w+ blocked for more than \w+ seconds\.} {Type:temporary Condition: Reason:UnregisterNetDevice Pattern:unregister_netdevice: waiting for \w+ to become free. Usage count = \d+} {Type:temporary Condition: Reason:KernelOops Pattern:BUG: unable to handle kernel NULL pointer dereference at .*} {Type:temporary Condition: Reason:KernelOops Pattern:divide error: 0000 \[#\d+\] SMP} {Type:permanent Condition:KernelDeadlock Reason:AUFSUmountHung Pattern:task umount\.aufs:\w+ blocked for more than \w+ seconds\.} {Type:permanent Condition:KernelDeadlock Reason:DockerHung Pattern:task docker:\w+ blocked for more than \w+ seconds\.}]} | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.783673 1597 log_watchers.go:40] Use log watcher of plugin "journald" | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.783781 1597 log_monitor.go:72] Finish parsing log monitor config file: {WatcherConfig:{Plugin:journald PluginConfig:map[source:docker] LogPath:/var/log/journal Lookback:5m} BufferSize:10 Source:docker-monitor DefaultConditions:[] Rules:[{Type:temporary Condition: Reason:CorruptDockerImage Pattern:Error trying v2 registry: failed to register layer: rename /var/lib/docker/image/(.+) /var/lib/docker/image/(.+): directory not empty.*}]} | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.783795 1597 log_watchers.go:40] Use log watcher of plugin "journald" | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.784945 1597 log_monitor.go:81] Start log monitor | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.786155 1597 log_watcher.go:160] Lookback changed to system uptime: 32s | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.786201 1597 log_watcher.go:69] Start watching journald | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.786228 1597 log_monitor.go:81] Start log monitor | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.786293 1597 log_watcher.go:160] Lookback changed to system uptime: 32s | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.786336 1597 log_watcher.go:69] Start watching journald | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.786352 1597 problem_detector.go:74] Problem detector started | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.786416 1597 log_monitor.go:173] Initialize condition generated: [] | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: I0808 13:04:01.786375 1597 log_monitor.go:173] Initialize condition generated: [{Type:KernelDeadlock Status:false Transition:2018-08-08 13:04:01.786359658 +0000 UTC Reason:KernelHasNoDeadlock Message:kernel has no deadlock}] | |
Aug 08 13:04:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1650]: INTEGRITY_RULE file="/usr/bin/docker" hash="sha256:83e620b68ae3072298ce900b566b969b8999714c62a829abe8faa4be8b4b1c86" ppid=1649 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="timeout" exe="/usr/bin/coreutils" | |
Aug 08 13:04:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 node-problem-detector[1597]: E0808 13:04:02.794041 1597 manager.go:160] failed to update node conditions: nodes "gke-cs-test-dan-test-pool-bca3c3a7-m055" not found | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.821713 1562 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:04.836164 1562 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:04.836325 1562 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.850013 1562 fs.go:139] Filesystem UUIDs: map[24A1-9C9D:/dev/sda12 28b1f8b0-1bc8-438d-a380-bc6b23959db2:/dev/sda8 37e5c151-3f01-4c5f-98fe-2b4425b696b9:/dev/sda1] | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.850045 1562 fs.go:140] Filesystem partitions: map[/dev/root:{mountpoint:/ major:254 minor:0 fsType:ext2 blockSize:0} tmpfs:{mountpoint:/dev/shm major:0 minor:18 fsType:tmpfs blockSize:0} /dev/sda8:{mountpoint:/usr/share/oem major:8 minor:8 fsType:ext4 blockSize:0} /dev/sda1:{mountpoint:/var/lib/docker/overlay major:8 minor:1 fsType:ext4 blockSize:0}] | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.855661 1562 manager.go:216] Machine: {NumCores:8 CpuFrequency:2600000 MemoryCapacity:31629643776 HugePages:[{PageSize:2048 NumPages:0}] MachineID:651b6e0e5349cb2e139d1183caac3a1b SystemUUID:651B6E0E-5349-CB2E-139D-1183CAAC3A1B BootID:6bb0504e-004c-4982-a8e7-92bf95504030 Filesystems:[{Device:/dev/root DeviceMajor:254 DeviceMinor:0 Capacity:1279787008 Type:vfs Inodes:79360 HasInodes:true} {Device:tmpfs DeviceMajor:0 DeviceMinor:18 Capacity:15814819840 Type:vfs Inodes:3861040 HasInodes:true} {Device:/dev/sda8 DeviceMajor:8 DeviceMinor:8 Capacity:12042240 Type:vfs Inodes:4096 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:16684785664 Type:vfs Inodes:1036320 HasInodes:true}] DiskMap:map[254:0:{Name:dm-0 Major:254 Minor:0 Size:1300234240 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:21474836480 Scheduler:noop}] NetworkDevices:[{Name:eth0 MacAddress:42:01:0a:f0:00:07 Speed:0 Mtu:1460}] Topology:[{Id:0 Memory:31629643776 Cores:[{Id:0 Threads:[0 4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[1 5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[2 6] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[3 7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:20971520 Type:Unified Level:3}]}] CloudProvider:GCE InstanceType:n1-standard-8 InstanceID:5915659819976971506} | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.856889 1562 manager.go:222] Version: {KernelVersion:4.4.111+ ContainerOsVersion:Container-Optimized OS from Google DockerVersion:17.03.2-ce DockerAPIVersion:1.27 CadvisorVersion: CadvisorRevision:} | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860389 1562 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: / | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860446 1562 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:90 scale:-3} d:{Dec:<nil>} s:90m Format:DecimalSI} memory:{i:{value:3652190208 scale:0} d:{Dec:<nil>} s:3483Mi Format:BinarySI}] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s} | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860627 1562 container_manager_linux.go:288] Creating device plugin handler: false | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860716 1562 server.go:534] cloud provider determined current node name to be gke-cs-test-dan-test-pool-bca3c3a7-m055 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860736 1562 server.go:686] Using root directory: /var/lib/kubelet | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860793 1562 kubelet.go:349] cloud provider determined current node name to be gke-cs-test-dan-test-pool-bca3c3a7-m055 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860826 1562 kubelet.go:274] Adding manifest file: /etc/kubernetes/manifests | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860869 1562 file.go:52] Watching path "/etc/kubernetes/manifests" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.860885 1562 kubelet.go:284] Watching apiserver | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.866204 1562 kubelet.go:518] Hairpin mode set to "promiscuous-bridge" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1698]: INTEGRITY_RULE file="/usr/lib64/xtables/libxt_comment.so" hash="sha256:26aba6519907765db9439b32d2a9f8c6267daed484f9da5e463fd1a1f48aeb93" ppid=1562 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.874569 1562 plugins.go:187] Loaded network plugin "kubenet" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:04.876002 1562 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.883831 1562 plugins.go:187] Loaded network plugin "kubenet" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.883875 1562 docker_service.go:207] Docker cri networking managed by kubenet | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.900745 1562 docker_service.go:212] Docker Info: &{ID:QLFB:7BNB:FZIO:AARD:ZETH:RJV2:C5QD:SHJK:ZQJA:WING:23JS:56KB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:25 Driver:overlay DriverStatus:[[Backing Filesystem extfs] [Supports d_type true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:15 OomKillDisable:true NGoroutines:22 SystemTime:2018-08-08T13:04:04.893312668Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.4.111+ OperatingSystem:Container-Optimized OS from Google OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4201b4d20 NCPU:8 MemTotal:31629643776 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:gke-cs-test-dan-test-pool-bca3c3a7-m055 Labels:[] ExperimentalBuild:false ServerVersion:17.03.2-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc4200f8500} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:595e75c212d19a81d2b808a518fe1afc1391dad5 Expected:4ab9917febca54791c5f071a9d1f404867857fcc} RuncCommit:{ID:54296cf Expected:54296cf40ad8143b62dbcaa1d90e520a2136ddfe} InitCommit:{ID:v0.13.0 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=apparmor name=seccomp,profile=default]} | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.900871 1562 docker_service.go:225] Setting cgroupDriver to cgroupfs | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.902100 1562 docker_legacy.go:151] No legacy containers found, stop performing legacy cleanup. | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.902148 1562 kubelet.go:607] Starting the GRPC server for the docker CRI shim. | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.902171 1562 docker_server.go:51] Start dockershim grpc server | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.914221 1562 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.920841 1562 kuberuntime_manager.go:178] Container runtime docker initialized, version: 17.03.2-ce, apiVersion: 1.27.0 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921281 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/aws-ebs" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921295 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/empty-dir" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921305 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/gce-pd" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921314 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/git-repo" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921323 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/host-path" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921332 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/nfs" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921341 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/secret" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921352 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/iscsi" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921366 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/glusterfs" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921377 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/rbd" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921386 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/cinder" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921396 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/quobyte" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921420 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/cephfs" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921449 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/downward-api" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921461 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/fc" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921471 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/flocker" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921481 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/azure-file" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921491 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/configmap" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921501 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/vsphere-volume" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921510 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/azure-disk" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921519 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/photon-pd" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921528 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/projected" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921537 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/portworx-volume" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921553 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/scaleio" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921596 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/local-volume" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.921607 1562 plugins.go:420] Loaded volume plugin "kubernetes.io/storageos" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.922704 1562 server.go:718] Started kubelet v1.8.10-gke.0 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:04.922753 1562 kubelet.go:1235] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container / | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.922786 1562 server.go:128] Starting to listen on 0.0.0.0:10250 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.922830 1562 server.go:148] Starting to listen read-only on 0.0.0.0:10255 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.923530 1562 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.923862 1562 server.go:296] Adding debug handlers to kubelet server. | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.934168 1562 kubelet_node_status.go:336] Adding node label from cloud provider: beta.kubernetes.io/instance-type=n1-standard-8 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.934202 1562 kubelet_node_status.go:347] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-central1-a | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.934218 1562 kubelet_node_status.go:351] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-central1 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.941163 1562 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node gke-cs-test-dan-test-pool-bca3c3a7-m055 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.941203 1562 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node gke-cs-test-dan-test-pool-bca3c3a7-m055 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.941214 1562 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node gke-cs-test-dan-test-pool-bca3c3a7-m055 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.941730 1562 container_manager_linux.go:388] Updating kernel flag: kernel/panic, expected value: 10, actual value: -1 | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.953206 1562 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.953247 1562 status_manager.go:140] Starting to sync pod status with apiserver | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.953260 1562 kubelet.go:1768] Starting kubelet main sync loop. | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.953313 1562 kubelet.go:1779] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s] | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:04.953356 1562 container_manager_linux.go:603] [ContainerManager]: Fail to get rootfs information unable to find data for container / | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.953384 1562 volume_manager.go:244] The desired_state_of_world populator starts | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.953389 1562 volume_manager.go:246] Starting Kubelet Volume Manager | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.953407 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.954845 1562 container_manager_linux.go:446] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:04.955597 1562 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1723]: INTEGRITY_RULE file="/usr/lib64/xtables/libxt_MARK.so" hash="sha256:97ae4361d5b5a6a27c9fcf0a3c0e59622904e21098b3caf8be0f53690c5e47e1" ppid=1562 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.968389 1562 factory.go:355] Registering Docker factory | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:04.968458 1562 manager.go:265] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:04.968614 1562 manager.go:276] Registration of the crio container factory failed: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.968625 1562 factory.go:54] Registering systemd factory | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.968823 1562 factory.go:86] Registering Raw factory | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.969007 1562 manager.go:1140] Started watching for new ooms in manager | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:04.969663 1562 manager.go:311] Starting recovery of all containers | |
Aug 08 13:04:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1736]: INTEGRITY_RULE file="/usr/lib64/xtables/libxt_mark.so" hash="sha256:7be03785033666362caf3366876a4eb1cb920a3ec9a4ab75c25b2d43279de45e" ppid=1562 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.053563    1562 manager.go:316] Recovery completed
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.053583    1562 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.064745    1562 kubelet_node_status.go:336] Adding node label from cloud provider: beta.kubernetes.io/instance-type=n1-standard-8
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.065263    1562 kubelet_node_status.go:347] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-central1-a
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.065631    1562 kubelet_node_status.go:351] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-central1
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.071265    1562 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node gke-cs-test-dan-test-pool-bca3c3a7-m055
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.071785    1562 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node gke-cs-test-dan-test-pool-bca3c3a7-m055
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.072225    1562 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node gke-cs-test-dan-test-pool-bca3c3a7-m055
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.072269    1562 kubelet_node_status.go:83] Attempting to register node gke-cs-test-dan-test-pool-bca3c3a7-m055
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:05.077369    1562 kubelet_node_status.go:86] Successfully registered node gke-cs-test-dan-test-pool-bca3c3a7-m055
Aug 08 13:04:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:05.138652    1562 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node "gke-cs-test-dan-test-pool-bca3c3a7-m055" not found
Aug 08 13:04:07 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:04:08 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Wait for Google Container Registry (GCR) to be accessible.
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reached target Google Container Registry (GCR) is Online.
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Containers on GCE Setup.
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Reached target Multi-User System.
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Startup finished in 2.167s (kernel) + 36.870s (userspace) = 39.037s.
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1759]: INTEGRITY_RULE file="/usr/share/gce-containers/konlet-startup" hash="sha256:4aeba71f96496b76c5e42985679735a53ffceb84f5f1a5c4bf1952bc4dc4b831" ppid=1 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(-startup)" exe="/usr/lib/systemd/systemd"
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 konlet-startup[1759]: No metadata present - not running containers
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:metrics_daemon.cc(621)] cannot read /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:metrics_daemon.cc(621)] cannot read /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:09.953507    1562 kubelet.go:1837] SyncLoop (ADD, "api"): ""
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:09.953605    1562 kubelet.go:1837] SyncLoop (ADD, "file"): "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055_kube-system(2e818d8133a68e1221b16a322c7713be)"
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:09.953709    1562 kubelet.go:1837] SyncLoop (ADD, "api"): "fluentd-gcp-v2.0.9-6z6km_kube-system(8741af9f-9b0b-11e8-93a5-42010a8001a5)"
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:09.954079    1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:09.954818    1562 pod_workers.go:182] Error syncing pod 8741af9f-9b0b-11e8-93a5-42010a8001a5 ("fluentd-gcp-v2.0.9-6z6km_kube-system(8741af9f-9b0b-11e8-93a5-42010a8001a5)"), skipping: network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR]
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:09.956212    1562 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 8; ignoring extra CPUs
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:09.962183    1562 kubelet.go:1837] SyncLoop (ADD, "api"): "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055_kube-system(8a281dd0-9b0b-11e8-93a5-42010a8001a5)"
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 121.18.238.123 port 56041:11:  [preauth]
Aug 08 13:04:09 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 121.18.238.123 port 56041 [preauth]
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.053763    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "varlog" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-varlog") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.054370    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ssl-certs" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-etc-ssl-certs") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.054841    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "varlog" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-varlog") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.055246    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "run" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-run") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.055723    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "varlibdockercontainers" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-varlibdockercontainers") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.056134    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "libsystemddir" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-libsystemddir") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.056612    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8741af9f-9b0b-11e8-93a5-42010a8001a5-config-volume") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.057015    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "fluentd-gcp-token-zmcrm" (UniqueName: "kubernetes.io/secret/8741af9f-9b0b-11e8-93a5-42010a8001a5-fluentd-gcp-token-zmcrm") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.057408    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-ca-certs" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-usr-ca-certs") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.057908    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-kubeconfig") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.057947    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "iptableslock" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-iptableslock") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058089    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "etc-ssl-certs" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-etc-ssl-certs") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058121    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "varlog" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-varlog") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058145    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "run" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-run") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058169    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "varlog" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-varlog") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058207    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "fluentd-gcp-token-zmcrm" (UniqueName: "kubernetes.io/secret/8741af9f-9b0b-11e8-93a5-42010a8001a5-fluentd-gcp-token-zmcrm") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058235    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "usr-ca-certs" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-usr-ca-certs") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058279    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "varlibdockercontainers" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-varlibdockercontainers") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058305    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "libsystemddir" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-libsystemddir") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058329    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8741af9f-9b0b-11e8-93a5-42010a8001a5-config-volume") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.058911    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "etc-ssl-certs" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-etc-ssl-certs") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.059094    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "varlog" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-varlog") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.059160    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "run" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-run") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.059188    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "varlog" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-varlog") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.060704    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "libsystemddir" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-libsystemddir") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.060793    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "usr-ca-certs" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-usr-ca-certs") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.060907    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "varlibdockercontainers" (UniqueName: "kubernetes.io/host-path/8741af9f-9b0b-11e8-93a5-42010a8001a5-varlibdockercontainers") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.064290    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8741af9f-9b0b-11e8-93a5-42010a8001a5-config-volume") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/8741af9f-9b0b-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/fluentd-gcp-token-zmcrm.
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.086032    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "fluentd-gcp-token-zmcrm" (UniqueName: "kubernetes.io/secret/8741af9f-9b0b-11e8-93a5-42010a8001a5-fluentd-gcp-token-zmcrm") pod "fluentd-gcp-v2.0.9-6z6km" (UID: "8741af9f-9b0b-11e8-93a5-42010a8001a5")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:10.140296    1562 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.158632    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-kubeconfig") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.158678    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "iptableslock" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-iptableslock") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.158805    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "iptableslock" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-iptableslock") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.158881    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/2e818d8133a68e1221b16a322c7713be-kubeconfig") pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055" (UID: "2e818d8133a68e1221b16a322c7713be")
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.262519    1562 kuberuntime_manager.go:374] No sandbox for pod "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055_kube-system(2e818d8133a68e1221b16a322c7713be)" can be found. Need to start a new one
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1784]: INTEGRITY_RULE file="/usr/bin/containerd-shim" hash="sha256:4a88d07947f4ae48e74c00fd34ad9e64cc3e268fc1dd9a82ab20cc2b9203c327" ppid=1515 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="docker-containe" exe="/usr/bin/containerd"
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1803]: INTEGRITY_RULE file="/pause" hash="sha256:d1690439fe15c8bdd932010c525b183f515da011a8687a9b99636467df5066db" ppid=1784 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc:[2:INIT]" exe="/usr/bin/runc"
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1834]: INTEGRITY_RULE file="/usr/bin/nice" hash="sha256:05dd64c5d88a6308828a66bdf10fe43a1d762bdf9867eddd1613ed95abcd9eb8" ppid=1562 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/home/kubernetes/bin/kubelet"
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1834]: INTEGRITY_RULE file="/bin/du" hash="sha256:4f987dcb68ff9790a7da315f18dba495258e0689c9c145cba4ae33bbf1153ef1" ppid=1562 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="nice" exe="/usr/bin/coreutils"
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1835]: INTEGRITY_RULE file="/usr/bin/find" hash="sha256:59d498d5b19bad2c07d1d469414e9f880d0500778b8bf8f598f62d538042eba6" ppid=1562 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/home/kubernetes/bin/kubelet"
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1857]: INTEGRITY_RULE file="/bin/touch" hash="sha256:1b76456e69a30c72b4475cb6cfe84e9c71288763b8711d1d5afa70681e8a61ff" ppid=1838 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc:[2:INIT]" exe="/usr/bin/runc"
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:04:10.916866217Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 886bb5b0b7efbf6f8f8b5693bb099c6952e4a68c66051056971706d5d9784a3e"
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.973946    1562 kubelet.go:1871] SyncLoop (PLEG): "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055_kube-system(2e818d8133a68e1221b16a322c7713be)", event: &pleg.PodLifecycleEvent{ID:"2e818d8133a68e1221b16a322c7713be", Type:"ContainerDied", Data:"886bb5b0b7efbf6f8f8b5693bb099c6952e4a68c66051056971706d5d9784a3e"}
Aug 08 13:04:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:10.974062    1562 kubelet.go:1871] SyncLoop (PLEG): "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055_kube-system(2e818d8133a68e1221b16a322c7713be)", event: &pleg.PodLifecycleEvent{ID:"2e818d8133a68e1221b16a322c7713be", Type:"ContainerStarted", Data:"d7508c76cf05261de4e0cc9ade240556a4add7cb1d02c0106a055e88b122e325"}
Aug 08 13:04:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:11.274732    1562 kuberuntime_manager.go:503] Container {Name:kube-proxy Image:gcr.io/google_containers/kube-proxy:v1.8.10-gke.0 Command:[/bin/sh -c echo -998 > /proc/$$$/oom_score_adj && exec kube-proxy --master=https://35.232.241.146 --kubeconfig=/var/lib/kube-proxy/kubeconfig --cluster-cidr=10.56.0.0/14 --resource-container="" --v=2 --feature-gates=ExperimentalCriticalPodAnnotation=true --iptables-sync-period=1m --iptables-min-sync-period=10s 1>>/var/log/kube-proxy.log 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etc-ssl-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-ca-certs ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:varlog ReadOnly:false MountPath:/var/log SubPath: MountPropagation:<nil>} {Name:kubeconfig ReadOnly:false MountPath:/var/lib/kube-proxy/kubeconfig SubPath: MountPropagation:<nil>} {Name:iptableslock ReadOnly:false MountPath:/run/xtables.lock SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 08 13:04:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/bin/dash" hash="sha256:ef4fcb032b3628f8245ac42b58f210e007331388297c4a604a40b6baacf83182" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc:[2:INIT]" exe="/usr/bin/runc"
Aug 08 13:04:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/ld-2.19.so" hash="sha256:a3ae546f2fc2fc4bc946d4c9fd56ad853051b4f3223f69b780221b17af7ef337" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:04:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:04:11.611802362Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 08f90acf236f04695dbcaa8e7057ef0ce5cd7d74d6b10f0faf5199a249866570"
Aug 08 13:04:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libc-2.19.so" hash="sha256:00e1af30cda22dc21d2ee6b1b2bee7265b306ac815294c927283964b550ba281" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:04:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:11.984317    1562 kubelet.go:1871] SyncLoop (PLEG): "kube-proxy-gke-cs-test-dan-test-pool-bca3c3a7-m055_kube-system(2e818d8133a68e1221b16a322c7713be)", event: &pleg.PodLifecycleEvent{ID:"2e818d8133a68e1221b16a322c7713be", Type:"ContainerStarted", Data:"08f90acf236f04695dbcaa8e7057ef0ce5cd7d74d6b10f0faf5199a249866570"}
Aug 08 13:04:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/usr/local/bin/kube-proxy" hash="sha256:b174d5a27ed5db5e6d9034895e7115b523f65f66790df92e8e6bf5c3b1a690c6" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libpthread-2.19.so" hash="sha256:106055689a0d678b08edf4ab1f394bf888482f6bef6136492cf939b0aebf01bb" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnss_compat-2.19.so" hash="sha256:512a4fc0d268439035265c257dc2d9d063ab45bd5cbf0b63f6ff98d047ed6a8f" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnsl-2.19.so" hash="sha256:5fb1fe02c096cc5ef4b28fbdc97262d08f13c7a1293cd7bbe997806be9ef88aa" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnss_nis-2.19.so" hash="sha256:ad90088a9ad5f588b7eaf50f93409565da27745a5f08ad008025e6f5b988b00e" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1927]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnss_files-2.19.so" hash="sha256:2c793c47593a797c6744acff76b92bcaba3644d6d6070906c5d624f71ab0179c" ppid=1907 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1989]: INTEGRITY_RULE file="/sbin/xtables-multi" hash="sha256:adee44923e1f20d6cb168ea5f51e5abefc5d07f476fd82e79fb6fa7502e7c48e" ppid=1927 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1989]: INTEGRITY_RULE file="/lib/libip4tc.so.0.1.0" hash="sha256:7cfebd6aa92eac12c65440bf8d65ea0ef210973169bb164522e544c8d700986a" ppid=1927 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1989]: INTEGRITY_RULE file="/lib/libip6tc.so.0.1.0" hash="sha256:cedf4e6dc075f5670914ba40a3e9dd3074ce53da0ed850f614509e1917db9e1b" ppid=1927 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1989]: INTEGRITY_RULE file="/lib/libxtables.so.10.0.0" hash="sha256:3deede9d04871df717b2733aaf28c803da1b4b599de3c420db0a3f0ae26681ab" ppid=1927 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1989]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libm-2.19.so" hash="sha256:de7d0ae10381b137ff4c120b690d2e75c8c9c607d6b2d8dab50035e5e4fff818" ppid=1927 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1989]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libdl-2.19.so" hash="sha256:58d1b242ef829bb6977edc78aa370e2d9a6dd69fcd1927008e5431385de6ede2" ppid=1927 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1992]: INTEGRITY_RULE file="/bin/kmod" hash="sha256:1565141f22ced81707c13abebec93f3e201261d75d601b037f3307a80f2c8b8b" ppid=1927 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[1998]: INTEGRITY_RULE file="/lib/xtables/libxt_comment.so" hash="sha256:916218ca1d831b52f70c0ae5f45f3b897b5802068598db3c3c057d0b207c1103" ppid=1927 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2000]: INTEGRITY_RULE file="/lib/xtables/libxt_addrtype.so" hash="sha256:990f4d9d6388ff713577f99d41c72bcfb512fe8a44c8d59c6fce2f7cf54442b8" ppid=1927 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2008]: INTEGRITY_RULE file="/bin/ip" hash="sha256:dcec164996ec3ccd555743bf09483d24ae806356fedbb585f0d9d88f6a00b041" ppid=1927 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2015]: INTEGRITY_RULE file="/lib/xtables/libxt_standard.so" hash="sha256:6b15569eb684de2322cccf89ce6e78501e21beb8e33de7a4b417efb0ac747765" ppid=1927 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2019]: INTEGRITY_RULE file="/lib/xtables/libipt_MASQUERADE.so" hash="sha256:19694a06f74456c9e0e7722ca509567eeeb529702155a64414df012a7374883a" ppid=1927 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-save" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2019]: INTEGRITY_RULE file="/lib/xtables/libxt_MARK.so" hash="sha256:bc83855485686c193c74500f5c939396037c6e02e03b864a1bcb477c99a39f78" ppid=1927 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-save" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2019]: INTEGRITY_RULE file="/lib/xtables/libxt_mark.so" hash="sha256:fb0df602f36573df917f4610f72d0e38683fc3605a07092909c207aaaebe03ee" ppid=1927 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-save" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_tally2.so" hash="sha256:ed819121e8cea667f043a69e6b503fa04aee3c798d2418d3e15737a7d6cd4dde" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_shells.so" hash="sha256:7c26842d36b4b68d5a0b6b0c5bac881c63173b398823384ee2f8810da71390a5" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_nologin.so" hash="sha256:f956fa3f403e7e55d2967bbf845cfd4b5216830aa22c5e0d80c0be3be804a96b" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_env.so" hash="sha256:c2ae4e5bc75b17c3fc896f239076ab579011fc786401375c84737f421b2547b9" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_access.so" hash="sha256:5989d3e40bd58ff65d6a6a35f1642492de947d81a894d8e9e31f6b35bcfa739d" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_loginuid.so" hash="sha256:7d20b463a5ff54bdc4025099176092c58692b3854131f2cf3a3e3c0a70a38acb" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_lastlog.so" hash="sha256:01cb695860936713e9895d26bde03894664702748e18786209760d3c26ad9825" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_limits.so" hash="sha256:020fcf0a50cf9cd43f4fdb3b645df60c64afebcd010ece07a2baabf5559b8bcb" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_tty_audit.so" hash="sha256:c956a360f6243ac204ffa56adcf7e219f5141baaed7483f8d531ced45f35f7b1" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd" | |
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_motd.so" hash="sha256:46ce66fed667fab6cdbc226f50766547fbc3e3c1094ae6d6b2a4c04d46698f6e" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd" | |
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: INTEGRITY_RULE file="/lib64/security/pam_mail.so" hash="sha256:b365a7d4c7102af9f76d86d7d33c98a9eb2a84cfe0e727a182272ac841e538e7" ppid=707 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd" | |
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Accepted publickey for gke-68d111821db0e011e1fc from 35.232.241.146 port 47496 ssh2: RSA SHA256:CIh97OPb6r9tWpBvF6Em1eiTYkaTwufcFeHWNIb4LAU
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[2016]: pam_lastlog(sshd:session): file /var/log/lastlog created
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[2016]: pam_unix(sshd:session): session opened for user gke-68d111821db0e011e1fc by (uid=0)
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2016]: CONFIG_CHANGE pid=2016 uid=0 auid=5000 ses=1 op=tty_set old-enabled=0 new-enabled=1 old-log_passwd=0 new-log_passwd=0 res=1
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[2016]: pam_tty_audit(sshd:session): changed status from 0 to 1
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2040]: INTEGRITY_RULE file="/lib/xtables/libxt_conntrack.so" hash="sha256:56e28368b4f053c660d435ea37b108bba00744794b7c55079990a40945eacd26" ppid=1927 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-save" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2040]: INTEGRITY_RULE file="/lib/xtables/libxt_tcp.so" hash="sha256:e5cc8b78fc1537454cc4c0a5a57971ccf591a1abb1a967757e7ed22df6a81bd3" ppid=1927 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-save" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2042]: INTEGRITY_RULE file="/lib/xtables/libipt_DNAT.so" hash="sha256:dab2cf17fa48bc89ca8d6ba4d866156ca1e416f43c745de2d4cd1fc3dcaacce2" ppid=1927 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2042]: INTEGRITY_RULE file="/lib/xtables/libxt_udp.so" hash="sha256:6da8d8b785ab605583e63f0533dd9499487e851d4bc06d356c66c156539f942a" ppid=1927 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2042]: INTEGRITY_RULE file="/lib/xtables/libxt_statistic.so" hash="sha256:56a84d847a16d492b7d2e3508ff650948bf536829249e44cfa8f1edc88ba4db9" ppid=1927 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2042]: INTEGRITY_RULE file="/lib/xtables/libxt_recent.so" hash="sha256:eb9013737599d0c90445c4407cc11f6a659c33a1f30e39731874bc140a394893" ppid=1927 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/sbin/xtables-multi"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2051]: INTEGRITY_RULE file="/usr/sbin/conntrack" hash="sha256:9f366e7610c20403ce818c4b78a8a23d5f04adedb1cd6ead49ffe0ab5e37aea1" ppid=1927 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-proxy" exe="/usr/local/bin/kube-proxy"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2051]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/libnetfilter_conntrack.so.3.5.0" hash="sha256:dd78d692123923232568a000b58b07dea4b415b18abb21c1af37b05a14f7d0a7" ppid=1927 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="conntrack" exe="/usr/sbin/conntrack"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2051]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libmnl.so.0.1.0" hash="sha256:1e854d7d565a9583ebf31ae1b18a97ae49a04473ae331887a05d5f8452bd505a" ppid=1927 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="conntrack" exe="/usr/sbin/conntrack"
Aug 08 13:04:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2051]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/libnfnetlink.so.0.2.0" hash="sha256:863fe185065715c4d4ba54554841c7cfe8e6a6057de8b61e831e60e7cd8fd5bb" ppid=1927 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="conntrack" exe="/usr/sbin/conntrack"
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:15.113414    1562 kuberuntime_manager.go:902] updating runtime config through cri with podcidr 10.56.38.0/24
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:15.113652    1562 docker_service.go:307] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.56.38.0/24,},}
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:15.113723    1562 kubenet_linux.go:265] CNI network config set to {
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "cniVersion": "0.1.0",
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "name": "kubenet",
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "type": "bridge",
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "bridge": "cbr0",
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "mtu": 1460,
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "addIf": "eth0",
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "isGateway": true,
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "ipMasq": false,
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "hairpinMode": false,
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   "ipam": {
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:     "type": "host-local",
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:     "subnet": "10.56.38.0/24",
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:     "gateway": "10.56.38.1",
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:     "routes": [
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:       { "dst": "0.0.0.0/0" }
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:     ]
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]:   }
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: }
Aug 08 13:04:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:15.114048    1562 kubelet_network.go:276] Setting Pod CIDR:  -> 10.56.38.0/24
Aug 08 13:04:23 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:23.953928    1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:04:23 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:23.955991    1562 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 8; ignoring extra CPUs
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:24.259530    1562 kuberuntime_manager.go:374] No sandbox for pod "fluentd-gcp-v2.0.9-6z6km_kube-system(8741af9f-9b0b-11e8-93a5-42010a8001a5)" can be found. Need to start a new one
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2129]: INTEGRITY_RULE file="/home/kubernetes/bin/loopback" hash="sha256:27850f0048411030ab6b671cbdf4dff775dc8ee9691fedd87e628a3156fe509f" ppid=1562 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/home/kubernetes/bin/kubelet"
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130424:INFO:update_attempter.cc(1149)] Already updated boot flags. Skipping.
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2136]: INTEGRITY_RULE file="/home/kubernetes/bin/bridge" hash="sha256:9f903e9f68f0fe66f15d1b92a27856c9475081b3e2d79e52a9a82b185033a08b" ppid=1562 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/home/kubernetes/bin/kubelet"
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: cbr0: Gained carrier
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: vethad48ad21: Gained carrier
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device vethad48ad21 entered promiscuous mode
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 1(vethad48ad21) entered forwarding state
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 1(vethad48ad21) entered forwarding state
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2149]: INTEGRITY_RULE file="/home/kubernetes/bin/host-local" hash="sha256:24af6092969958ca552bacdb1c90373b11c26eddc84d3ffc1cf6cefaf17afe06" ppid=2136 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bridge" exe="/home/kubernetes/bin/bridge"
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device cbr0 entered promiscuous mode
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2156]: INTEGRITY_RULE file="/sbin/ebtables" hash="sha256:6dfa1b63e826583d4572b69602c4dbf4fc0ded44e8ba031a00345b9a99b1c567" ppid=1562 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/home/kubernetes/bin/kubelet"
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: Ebtables v2.0 registered
Aug 08 13:04:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:04:24.977604    1562 kubenet_linux.go:801] Failed to flush dedup chain: Failed to flush filter chain KUBE-DEDUP: exit status 255, output: Chain 'KUBE-DEDUP' doesn't exist.
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:25.023361    1562 kubelet.go:1871] SyncLoop (PLEG): "fluentd-gcp-v2.0.9-6z6km_kube-system(8741af9f-9b0b-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"8741af9f-9b0b-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"882a629fac366b019661e5f6a6731e182fb69011fee73818141d8e4445814c76"}
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2172]: INTEGRITY_RULE file="/sbin/tc" hash="sha256:b750c52ba0ab5cb5f42a169239d9adfdb1fd1807660e64a8741c4c48cdb0e88d" ppid=1562 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/home/kubernetes/bin/kubelet"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:25.135212    1562 kubelet_node_status.go:443] Recording NodeReady event message for node gke-cs-test-dan-test-pool-bca3c3a7-m055
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:25.170150    1562 conversion.go:110] Could not get instant cpu stats: different number of cpus
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2198]: INTEGRITY_RULE file="/bin/dash" hash="sha256:ef4fcb032b3628f8245ac42b58f210e007331388297c4a604a40b6baacf83182" ppid=2177 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc:[2:INIT]" exe="/usr/bin/runc"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2198]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/ld-2.19.so" hash="sha256:a3ae546f2fc2fc4bc946d4c9fd56ad853051b4f3223f69b780221b17af7ef337" ppid=2177 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:04:25.586803843Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container adee3c283e42e6120f44c4adfcca0214397f45e0fb15554d0298cbf86b0e663a"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2198]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/libjemalloc.so.1" hash="sha256:e23dc612a910f00771e9c5e0f39556ff76154e0161769fec48f0f02aaa352095" ppid=2177 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2198]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libc-2.19.so" hash="sha256:00e1af30cda22dc21d2ee6b1b2bee7265b306ac815294c927283964b550ba281" ppid=2177 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2198]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libpthread-2.19.so" hash="sha256:106055689a0d678b08edf4ab1f394bf888482f6bef6136492cf939b0aebf01bb" ppid=2177 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: cbr0: Gained IPv6LL
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2267]: INTEGRITY_RULE file="/run.sh" hash="sha256:b37836852534ce1506fa6469f7d45f061a015be88b1be98b90e89f8ed829fa42" ppid=2198 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2272]: INTEGRITY_RULE file="/bin/mkdir" hash="sha256:7116427442ac2403c6803350c73b1649babf4f046a3ee7e7ebe90601b71b57cb" ppid=2267 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="run.sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2272]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libselinux.so.1" hash="sha256:98b102ec6ed7dc3f90d55f222a26d06e466aed198fb5c8375350269fbed0c771" ppid=2267 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mkdir" exe="/bin/mkdir"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2272]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libpcre.so.3.13.1" hash="sha256:da7631bbfa129d94d338786d6dd0f3b0f477caae55a52acb926a11ac6afa0252" ppid=2267 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mkdir" exe="/bin/mkdir"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2272]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libdl-2.19.so" hash="sha256:58d1b242ef829bb6977edc78aa370e2d9a6dd69fcd1927008e5431385de6ede2" ppid=2267 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mkdir" exe="/bin/mkdir"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2289]: INTEGRITY_RULE file="/bin/ls" hash="sha256:ce14856cf5fce0b4401fac00d50e1ce82b641be19a2566121045c9bcd30b4664" ppid=2267 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="run.sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2289]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libacl.so.1.1.0" hash="sha256:d0322138477772a82e69d03a6ef626863ccded7cb0126dd2822b17df4234306b" ppid=2267 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ls" exe="/bin/ls"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2289]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libattr.so.1.1.0" hash="sha256:8a16c6dd3b6e17ddd48dae7d5ed5e5cd2891ccf4239ab3fce64d003445e97ee1" ppid=2267 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ls" exe="/bin/ls"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2290]: INTEGRITY_RULE file="/bin/rm" hash="sha256:cde420719b4641f05e50f820d153326ded82f97b7c232e83ea7f7932b36ac8c0" ppid=2267 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="run.sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2291]: INTEGRITY_RULE file="/bin/cp" hash="sha256:fcf1f8902dca028c45fcb287f984db0ddd94d96d026cfc3ad0c67d78361c366c" ppid=2267 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="run.sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/local/bin/fluentd" hash="sha256:9a7d22b7bd9eed370575417b410c8962dff3f6c745482f94b758d5a9cec30701" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="run.sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/bin/ruby2.1" hash="sha256:50cecb4f29372270452d8231fbd3da426b6df7bd31fd9991f6d0533f0136743a" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="run.sh" exe="/bin/dash"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/libruby-2.1.so.2.1.0" hash="sha256:cc4ccbc15407546e8dfa7c6507bcca505a41e67912cac2b311fc6d4c5cb0cbc1" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/libgmp.so.10.2.0" hash="sha256:e02b89368669d19c4cf03d38a4895b4be86f3ca1f3c8146df59f7b89b91716c1" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libcrypt-2.19.so" hash="sha256:2cfacda2f1963502d9954ce247b780496f095dd93dde5cec5ccb4dc50a488fd4" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:25 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2249]: INTEGRITY_RULE file="/monitor" hash="sha256:90196b22b124f1ae6345b1f4149607507fae290237e446918879742a70738764" ppid=2229 pid=2249 auid=4294967295 uid=65534 gid=65534 euid=65534 suid=65534 fsuid=65534 egid=65534 sgid=65534 fsgid=65534 tty=(none) ses=4294967295 comm="runc:[2:INIT]" exe="/usr/bin/runc"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:04:26.093098376Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 5e0fcc4a057ae101ae4cebb51877311980f9a74181d9a1478527f9d72693a816"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:26.112513    1562 kubelet.go:1871] SyncLoop (PLEG): "fluentd-gcp-v2.0.9-6z6km_kube-system(8741af9f-9b0b-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"8741af9f-9b0b-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"5e0fcc4a057ae101ae4cebb51877311980f9a74181d9a1478527f9d72693a816"}
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:04:26.112583    1562 kubelet.go:1871] SyncLoop (PLEG): "fluentd-gcp-v2.0.9-6z6km_kube-system(8741af9f-9b0b-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"8741af9f-9b0b-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"adee3c283e42e6120f44c4adfcca0214397f45e0fb15554d0298cbf86b0e663a"}
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libm-2.19.so" hash="sha256:de7d0ae10381b137ff4c120b690d2e75c8c9c607d6b2d8dab50035e5e4fff818" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: vethad48ad21: Gained IPv6LL
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/enc/encdb.so" hash="sha256:e19347a2f72c1717d8875e29786aeb836afbb4dd69524b0380e7e63b6936c64f" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/enc/trans/transdb.so" hash="sha256:e314b85eb8ee2ab97a67ec29f6b58679b45e64d94b175a029b784fe46cce1e4a" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/thread.so" hash="sha256:c914c6052e9abd73510c2660ea98c7f16db4f55dcf80650ad4945b920dce4d7c" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/etc.so" hash="sha256:88cf0206fb5d3a4ea402428a765ab66825bfe020dab1a8ba5ab98e2d075eeb9a" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/fcntl.so" hash="sha256:c3e26a4b06ea8e5015561e0f8fedf4facc7464bcfdfbc2cac8c9bddc1bfd71fe" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/stringio.so" hash="sha256:cfc72d7bc100b0075f0d0c91dab2ede16025219244a1c3d59b36ea7deeae6ef5" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/json-1.8.6/json/ext/parser.so" hash="sha256:85d41d954e5822c9f3c4a8176471642709121e128c50cd82183f3bec99f2ca27" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/enc/utf_16be.so" hash="sha256:02a6dccb2428a0a20dc8b6e108ee617c95430c3413cb883e09a1d00e40f5161e" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/enc/utf_16le.so" hash="sha256:d4c46289f45308bc6936cabd16abd3cdb66d0747aa2152cd52e4cb02128234d1" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/enc/utf_32be.so" hash="sha256:f4f77341c919bf36d2303d533c68c8cde35712722051b69169e712b876d987da" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/enc/utf_32le.so" hash="sha256:6260b302c48656cda5e028a1831c107812f4145b9951586a84f22a112a51f46b" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/json-1.8.6/json/ext/generator.so" hash="sha256:daf5193bd0659bda732088d159996b0fe9251a4bf0a1d612bfdfaee9ad62a9a9" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/yajl-ruby-1.3.0/yajl/yajl.so" hash="sha256:15d5ddd5d8ae46852ec8f664619ff65bf4964ddbaac5678f44d33b25be7b9592" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/strscan.so" hash="sha256:5475bdd1c596d4d15c97e9ee44d0f40f22842db7ad0c05df051586ad78631179" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:26 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/date_core.so" hash="sha256:4f3d1ae887fe2301ab77abed700c79f351ea6ceb5018a9ef9655b24e361e8b3e" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/socket.so" hash="sha256:7f047007be2de81c11261c306fe8171d40a061f6f50e94472949ac3607ab3d27" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/msgpack-1.1.0/msgpack/msgpack.so" hash="sha256:9ec7ab583c4dda1b4a54ebcfc22f9440dcd30fcad0aee03caca8cb3030410ea6" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/cool.io-1.5.1/iobuffer_ext.so" hash="sha256:cdc54e89c6c9ec031fb4bba070c4db278c16173a90f19e2f415e46b088edbedd" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/cool.io-1.5.1/cool.io_ext.so" hash="sha256:24041d13848fed55b9a67296352bd7288142ad819a84ad4fa0ebd137398fc5d5" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/librt-2.19.so" hash="sha256:f08358f58c33291093e80384b0e53a97b6b30ab93e78ce59445cf3057c46ef79" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/openssl.so" hash="sha256:905843599914935b1f82f28a21b633af012d1a03f8092cb81a381ef750e627ad" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0" hash="sha256:3952397c95651a043634d983de484faca3cd18bb4139e88ba64b2db6a72936f1" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0" hash="sha256:a21e5ecc64b39070c6cde5fdf05a20153565ad8c7502a82edcae13619c07c19c" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1"
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/digest.so" hash="sha256:f36c7bff322fe7f416a9e6a54fb06dff89bfdf3d58f08303105467c8097f9a3a" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnss_files-2.19.so" hash="sha256:2c793c47593a797c6744acff76b92bcaba3644d6d6070906c5d624f71ab0179c" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/digest/md5.so" hash="sha256:eb6fa6188e9c4935e0cc6246620f202ae9974db04bea81b2eadf263acfda59ff" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/digest/sha1.so" hash="sha256:2889cde0ab4c816958d9578d28d3826b87c52fdf9ee13b8a67951c66757542be" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/gems/http_parser.rb-0.6.0/lib/ruby_http_parser.so" hash="sha256:c6e20120234b0bcd3c3a91c543935ae2c42694759fbe926e2d77bd60849a0ded" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/zlib.so" hash="sha256:a3f300ff923e55d1d160e8a8d9e3b70c6f1dfc16d9ac4f2280bf364c4097a68d" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libz.so.1.2.8" hash="sha256:f84a533ad5970857dfa28e78157de5945e96314b8f4ce1549b3d6f194796da75" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/pathname.so" hash="sha256:7fbe81ab581e8fcc3d862a7757b8ed99768ba1a1f18bcf1103a576c1ce69a75c" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:27 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/gems/grpc-1.2.5-x86_64-linux/src/ruby/lib/grpc/2.1/grpc_c.so" hash="sha256:3022b218cf06c6398bdec4c74afa4ceeeb11a4d4200ae86501e32b8553c6c150" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:28 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/psych.so" hash="sha256:34b32d8fcd68d938fa5c9548e26e80f52b9a1fe71f023c5a02dc24e017627c68" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:28 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/libyaml-0.so.2.0.4" hash="sha256:7b9765dba199d3f94869c382224cd62731acee01e528926ecf5575f7114bf475" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:28 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2317]: INTEGRITY_RULE file="/bin/uname" hash="sha256:e4ea87520a0b7b2061e986e881cca0a9c9edac447c93531016ced8acc6e29800" ppid=2292 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:28 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/enc/trans/single_byte.so" hash="sha256:fc1b0a35ee0150de196183400a10a217b7a615b7847981a535ccbb7644ea2e8c" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:28 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/gems/google-protobuf-3.4.0.2-x86_64-linux/lib/google/2.1/protobuf_c.so" hash="sha256:1a25f8a8e0f0ce5a8e3f0919fa6dd00fce92e22ddc26cf266822c026260a1298" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/ruby/2.1.0/bigdecimal.so" hash="sha256:0c18c143b5de248f0c94f6ce8603acc420cb5124757403aa10aae3a1ad6fc8d9" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/oj-2.18.5/oj/oj.so" hash="sha256:d425dbec836eaec4e4a46811ff30117719eee1abdccaf7514652b7ab45597c37" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/ffi-1.9.18/ffi_c.so" hash="sha256:3e32da94615b986bdf7813b8af6d7c57970f12779e7286aeadfdbcba085c94ea" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libsystemd.so.0.17.0" hash="sha256:77c80672b0a928592e6486f10bdd22b39b84a464c665d9949f4761fff2d7a7bd" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libresolv-2.19.so" hash="sha256:8ede96134d1ee44ac323fd4e4f4de2d687886e287f71c364d2e07718ca8c9a30" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libcap.so.2.24" hash="sha256:195f33c1af2dd1f89c37c6c893b93498599d4c70e8180fc286a0babd3ff79f80" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/usr/lib/x86_64-linux-gnu/liblz4.so.1.3.0" hash="sha256:07f40c0b80ea8f00cec19d35021cdb13c0f6e3beb461d566ac463a6750132cb6" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libgcrypt.so.20.0.3" hash="sha256:a47b5b9f51c93c15fb412d3350c33eb55da5569ecd3c7f81b2489d0f839e6c9e" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2292]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libgpg-error.so.0.13.0" hash="sha256:741c19e9f7e87b81fb3805379f2fa6e78d5771af9fcf10b0f05e6f629aecd72e" ppid=2267 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:29 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2335]: INTEGRITY_RULE file="/bin/date" hash="sha256:65d08a5ed6d00356fd6ed2f9d8b821fdaf339f8ad39e2b2247374c6871fbbed5" ppid=2332 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash" | |
Aug 08 13:04:35 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[2321]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnss_dns-2.19.so" hash="sha256:5f84c03d30f4e451052b823b896e3a3f76ff8cfef94bbebf2d45d7051ea78ba6" ppid=2267 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fluentd" exe="/usr/bin/ruby2.1" | |
Aug 08 13:04:35 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:35.183706 1562 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Aug 08 13:04:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 1(vethad48ad21) entered forwarding state | |
Aug 08 13:04:45 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:45.204347 1562 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Aug 08 13:04:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key | |
Aug 08 13:04:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 221.194.47.239 port 42020:11: [preauth] | |
Aug 08 13:04:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 221.194.47.239 port 42020 [preauth] | |
Aug 08 13:04:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:04:55.220796 1562 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Aug 08 13:04:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: INFO Changing eth0 IPs from None to [u'35.224.116.159'] by adding [u'35.224.116.159'] and removing None. | |
Aug 08 13:05:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:05:04.955290 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:05:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:05:05.051072 1562 server.go:779] GET /stats/summary/: (21.901259ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048] | |
Aug 08 13:05:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:05:12.282694 1562 server.go:779] GET /healthz: (42.816µs) 200 [[Go-http-client/1.1] 127.0.0.1:60688] | |
Aug 08 13:05:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key | |
Aug 08 13:05:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 115.238.245.8 port 59324:11: [preauth] | |
Aug 08 13:05:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 115.238.245.8 port 59324 [preauth] | |
Aug 08 13:06:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:01.594499 1562 server.go:779] GET /healthz: (20.197µs) 200 [[curl/7.57.0] 127.0.0.1:60704] | |
Aug 08 13:06:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:04.955618 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:06:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:05.064591 1562 server.go:779] GET /stats/summary/: (27.273973ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048] | |
Aug 08 13:06:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local". | |
Aug 08 13:06:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:11.602688 1562 server.go:779] GET /healthz: (21.482µs) 200 [[curl/7.57.0] 127.0.0.1:60718] | |
Aug 08 13:06:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:12.277353 1562 server.go:779] GET /healthz: (26.389µs) 200 [[Go-http-client/1.1] 127.0.0.1:60720] | |
Aug 08 13:06:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:21.609978 1562 server.go:779] GET /healthz: (35.535µs) 200 [[curl/7.57.0] 127.0.0.1:60726] | |
Aug 08 13:06:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:31.617969 1562 server.go:779] GET /healthz: (20.67µs) 200 [[curl/7.57.0] 127.0.0.1:60736] | |
Aug 08 13:06:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:41.626123 1562 server.go:779] GET /healthz: (19.038µs) 200 [[curl/7.57.0] 127.0.0.1:60744] | |
Aug 08 13:06:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:06:51.634230 1562 server.go:779] GET /healthz: (36.094µs) 200 [[curl/7.57.0] 127.0.0.1:60750] | |
Aug 08 13:07:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:01.641616 1562 server.go:779] GET /healthz: (23.007µs) 200 [[curl/7.57.0] 127.0.0.1:60756] | |
Aug 08 13:07:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:04.955865 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:07:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:05.031927 1562 server.go:779] GET /stats/summary/: (18.265653ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048] | |
Aug 08 13:07:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:11.649074 1562 server.go:779] GET /healthz: (26.769µs) 200 [[curl/7.57.0] 127.0.0.1:60766] | |
Aug 08 13:07:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:12.272521 1562 server.go:779] GET /healthz: (38.706µs) 200 [[Go-http-client/1.1] 127.0.0.1:60768] | |
Aug 08 13:07:18 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key | |
Aug 08 13:07:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 122.226.181.164 port 41798:11: [preauth] | |
Aug 08 13:07:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 122.226.181.164 port 41798 [preauth] | |
Aug 08 13:07:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:21.656424 1562 server.go:779] GET /healthz: (28.408µs) 200 [[curl/7.57.0] 127.0.0.1:60776] | |
Aug 08 13:07:24 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local". | |
Aug 08 13:07:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:31.664642 1562 server.go:779] GET /healthz: (29.757µs) 200 [[curl/7.57.0] 127.0.0.1:60788] | |
Aug 08 13:07:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:41.671912 1562 server.go:779] GET /healthz: (19.206µs) 200 [[curl/7.57.0] 127.0.0.1:60796] | |
Aug 08 13:07:43 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key | |
Aug 08 13:07:44 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 221.194.47.239 port 35994:11: [preauth] | |
Aug 08 13:07:44 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 221.194.47.239 port 35994 [preauth] | |
Aug 08 13:07:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:07:51.679978 1562 server.go:779] GET /healthz: (19.075µs) 200 [[curl/7.57.0] 127.0.0.1:60802] | |
Aug 08 13:08:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:01.687719 1562 server.go:779] GET /healthz: (27.479µs) 200 [[curl/7.57.0] 127.0.0.1:60808] | |
Aug 08 13:08:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:04.956781 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:08:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:05.142499 1562 server.go:779] GET /stats/summary/: (27.629823ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048] | |
Aug 08 13:08:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:11.694957 1562 server.go:779] GET /healthz: (21.498µs) 200 [[curl/7.57.0] 127.0.0.1:60818] | |
Aug 08 13:08:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:12.267213 1562 server.go:779] GET /healthz: (27.217µs) 200 [[Go-http-client/1.1] 127.0.0.1:60820] | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.234248 1562 kubelet.go:1837] SyncLoop (ADD, "api"): "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)" | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.234898 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:08:13.237090 1562 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 8; ignoring extra CPUs | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.412186 1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-124-nvh88" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5") | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.412866 1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-124-nvh88" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5") | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.513549 1562 reconciler.go:257] operationExecutor.MountVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-124-nvh88" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5") | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.515837 1562 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-124-nvh88" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5") | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/1b2a26d5-9b0c-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/hail-ci-0-1-service-account-key. | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.532903 1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-124-nvh88" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5") | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/1b2a26d5-9b0c-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/default-token-szcn5. | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.548840 1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-124-nvh88" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5") | |
Aug 08 13:08:13 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:13.840865 1562 kuberuntime_manager.go:374] No sandbox for pod "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)" can be found. Need to start a new one | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth80959533: Gained carrier | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device veth80959533 entered promiscuous mode | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth80959533) entered forwarding state | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth80959533) entered forwarding state | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:08:14.065967440Z" level=error msg="Handler for GET /v1.27/images/google/cloud-sdk:alpine/json returned error: No such image: google/cloud-sdk:alpine" | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:14.066704 1562 provider.go:119] Refreshing cache for provider: *gcp_credentials.dockerConfigKeyProvider | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:14.069981 1562 config.go:191] body of failing http response: &{0x6eefe0 0xc4206d3300 0x6f8090} | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:08:14.070054 1562 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:14.070083 1562 provider.go:119] Refreshing cache for provider: *gcp_credentials.dockerConfigUrlKeyProvider | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:14.072493 1562 config.go:191] body of failing http response: &{0x6eefe0 0xc4206d3900 0x6f8090} | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:08:14.072547 1562 metadata.go:159] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:14.077624 1562 provider.go:119] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Aug 08 13:08:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:14.881168 1562 kubelet.go:1871] SyncLoop (PLEG): "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"1b2a26d5-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"bfa1c990f667b7058dc3d275c9380d60526ebd7016899647d4e695af0c63a023"} | |
Aug 08 13:08:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth80959533: Gained IPv6LL | |
Aug 08 13:08:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:08:15 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:19.454193 1562 kube_docker_client.go:333] Stop pulling image "google/cloud-sdk:alpine": "Status: Downloaded newer image for google/cloud-sdk:alpine" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3362]: INTEGRITY_RULE file="/bin/bash" hash="sha256:ba9dcb22f96e1288b3e786e6959c71a33f3620e55996ed792147a1ee753d63f7" ppid=3341 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc:[2:INIT]" exe="/usr/bin/runc" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3362]: INTEGRITY_RULE file="/lib/ld-musl-x86_64.so.1" hash="sha256:539a0c16c095d99dad2e0ac459a058e03e30c1f43ee1c1a94dae2bd3c0564a3c" ppid=3341 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3362]: INTEGRITY_RULE file="/usr/lib/libreadline.so.7.0" hash="sha256:09536d36cfe0720076cefb6b1b1b9c426c093c3b7b40809d9297b38b42349aac" ppid=3341 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3362]: INTEGRITY_RULE file="/usr/lib/libncursesw.so.6.0" hash="sha256:e5d4822713eebb7c0f7234b07f7fc0aa60dba73553e6d567f66dff83ddf0e19d" ppid=3341 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:08:19.610735939Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 40c085f291740004cf54bf6e66dd0d7070a9441468fa253534eed578ee8fb640" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3384]: INTEGRITY_RULE file="/usr/bin/git" hash="sha256:7919ccfb8558a42cc90abf2958dc1e0eaecb04e22b73e66502239187e43f7c0b" ppid=3362 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3384]: INTEGRITY_RULE file="/usr/lib/libpcre2-8.so.0.6.0" hash="sha256:f9c07bdaac1e668a84906eae8ff7c959c23cfb47419f9fcce1c5b9614d93662d" ppid=3362 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git" exe="/usr/bin/git" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3384]: INTEGRITY_RULE file="/lib/libz.so.1.2.11" hash="sha256:d9cee4b5986c8c6c6bd5285cf901afd8d00b8e7484aa2623f8181db5955e669f" ppid=3362 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git" exe="/usr/bin/git" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3396]: INTEGRITY_RULE file="/usr/libexec/git-core/git-remote-https" hash="sha256:27253598fbb3f134b2d493299d39b807010c01f1f2b93cb406722c2416ace98b" ppid=3384 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git" exe="/usr/bin/git" | |
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3396]: INTEGRITY_RULE file="/usr/lib/libcurl.so.4.5.0" hash="sha256:74079872b9cc9531f9e494b78dc3cd4d935f63ad4eff6dda69a8a450e11329a3" ppid=3384 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git-remote-http" exe="/usr/libexec/git-core/git-remote-https"
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3396]: INTEGRITY_RULE file="/usr/lib/libssh2.so.1.0.1" hash="sha256:e9677f96d233102a83c00010b61d39d4b1cc201f5df3d45a0afe80cfd419cf6f" ppid=3384 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git-remote-http" exe="/usr/libexec/git-core/git-remote-https"
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3396]: INTEGRITY_RULE file="/lib/libssl.so.44.0.1" hash="sha256:b251352d358f64f02ca3f3b2b195235be5057ac93ccddfc53f8a43b6670b3204" ppid=3384 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git-remote-http" exe="/usr/libexec/git-core/git-remote-https"
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3396]: INTEGRITY_RULE file="/lib/libcrypto.so.42.0.0" hash="sha256:d445c2ce8664a3325c0f590590b1d373d18ebb32d13c51f800f9e2d6de0239fa" ppid=3384 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git-remote-http" exe="/usr/libexec/git-core/git-remote-https"
Aug 08 13:08:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:19.903878    1562 kubelet.go:1871] SyncLoop (PLEG): "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"1b2a26d5-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"40c085f291740004cf54bf6e66dd0d7070a9441468fa253534eed578ee8fb640"}
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3429]: INTEGRITY_RULE file="/usr/libexec/git-core/git-rebase" hash="sha256:dab7be0a725ae4fb76ddbb084043651572816f76d1978c1bda4061a2c0c2f5cb" ppid=3428 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git" exe="/usr/bin/git"
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3429]: INTEGRITY_RULE file="/bin/busybox" hash="sha256:3e6431f91dfebfd6f2c0e3cc98b37fa1da6ccc5c7bc6208c3dda1c2f083944f9" ppid=3428 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git" exe="/usr/bin/git"
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3467]: INTEGRITY_RULE file="/usr/libexec/git-core/git-sh-i18n--envsubst" hash="sha256:9dad2049acc2e7a06cb3812c6528f38c5cbd1ee018df03b3c0aa42249278bb37" ppid=3466 pid=3467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="git" exe="/usr/libexec/git-core/git"
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3480]: INTEGRITY_RULE file="/repo/hail-ci-build.sh" hash="sha256:5d824f8534fd4efd58c46500b45d3846e089c231a1fd108ddb63106a1734b9fe" ppid=3362 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash"
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:20.912769    1562 kubelet.go:1871] SyncLoop (PLEG): "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"1b2a26d5-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"40c085f291740004cf54bf6e66dd0d7070a9441468fa253534eed578ee8fb640"}
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth80959533: Lost carrier
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth80959533) entered disabled state
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:20.936209    1562 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "1b2a26d5-9b0c-11e8-93a5-42010a8001a5" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:20.937678    1562 reconciler.go:186] operationExecutor.UnmountVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "1b2a26d5-9b0c-11e8-93a5-42010a8001a5" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device veth80959533 left promiscuous mode
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth80959533) entered disabled state
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:20.951805    1562 operation_generator.go:545] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5" (OuterVolumeSpecName: "default-token-szcn5") pod "1b2a26d5-9b0c-11e8-93a5-42010a8001a5" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5"). InnerVolumeSpecName "default-token-szcn5". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:20.951851    1562 operation_generator.go:545] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key" (OuterVolumeSpecName: "hail-ci-0-1-service-account-key") pod "1b2a26d5-9b0c-11e8-93a5-42010a8001a5" (UID: "1b2a26d5-9b0c-11e8-93a5-42010a8001a5"). InnerVolumeSpecName "hail-ci-0-1-service-account-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 08 13:08:20 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:20.962239    1562 kubelet_pods.go:1080] Killing unwanted pod "job-124-nvh88"
Aug 08 13:08:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:08:21.032683    1562 remote_runtime.go:115] StopPodSandbox "bfa1c990f667b7058dc3d275c9380d60526ebd7016899647d4e695af0c63a023" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin kubenet failed to teardown pod "job-124-nvh88_default" network: Error removing container from network: failed to Statfs "/proc/3204/ns/net": no such file or directory
Aug 08 13:08:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:08:21.032815    1562 kuberuntime_manager.go:784] Failed to stop sandbox {"docker" "bfa1c990f667b7058dc3d275c9380d60526ebd7016899647d4e695af0c63a023"}
Aug 08 13:08:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:08:21.032860    1562 kubelet_pods.go:1083] Failed killing the pod "job-124-nvh88": failed to "KillPodSandbox" for "1b2a26d5-9b0c-11e8-93a5-42010a8001a5" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin kubenet failed to teardown pod \"job-124-nvh88_default\" network: Error removing container from network: failed to Statfs \"/proc/3204/ns/net\": no such file or directory"
Aug 08 13:08:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:21.039625    1562 reconciler.go:290] Volume detached for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") on node "gke-cs-test-dan-test-pool-bca3c3a7-m055" DevicePath ""
Aug 08 13:08:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:21.039676    1562 reconciler.go:290] Volume detached for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/1b2a26d5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") on node "gke-cs-test-dan-test-pool-bca3c3a7-m055" DevicePath ""
Aug 08 13:08:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:21.702151    1562 server.go:779] GET /healthz: (26.819µs) 200 [[curl/7.57.0] 127.0.0.1:60868]
Aug 08 13:08:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:21.921887    1562 kubelet.go:1871] SyncLoop (PLEG): "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"1b2a26d5-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"bfa1c990f667b7058dc3d275c9380d60526ebd7016899647d4e695af0c63a023"}
Aug 08 13:08:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:08:21.922544    1562 pod_container_deletor.go:77] Container "bfa1c990f667b7058dc3d275c9380d60526ebd7016899647d4e695af0c63a023" not found in pod's containers
Aug 08 13:08:30 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:08:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:31.710331    1562 server.go:779] GET /healthz: (29.571µs) 200 [[curl/7.57.0] 127.0.0.1:60882]
Aug 08 13:08:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 115.238.245.4 port 57660:11: [preauth]
Aug 08 13:08:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 115.238.245.4 port 57660 [preauth]
Aug 08 13:08:34 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:08:35 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:35.949620    1562 kubelet.go:1837] SyncLoop (ADD, "api"): "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)"
Aug 08 13:08:35 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:35.950069    1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:08:35 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:08:35.952763    1562 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 8; ignoring extra CPUs
Aug 08 13:08:35 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:35.968909    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-125-z5fdz" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:35 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:35.968967    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-125-z5fdz" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:36.069261    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-125-z5fdz" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:36.070147    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-125-z5fdz" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/28b43ed5-9b0c-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/hail-ci-0-1-service-account-key.
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/28b43ed5-9b0c-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/default-token-szcn5.
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:36.091737    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-125-z5fdz" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:36.092881    1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-125-z5fdz" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:36.255174    1562 kuberuntime_manager.go:374] No sandbox for pod "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)" can be found. Need to start a new one
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth6eea6d2c: Gained carrier
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device veth6eea6d2c entered promiscuous mode
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth6eea6d2c) entered forwarding state
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth6eea6d2c) entered forwarding state
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 221.194.47.221 port 57042:11: [preauth]
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 221.194.47.221 port 57042 [preauth]
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:08:36.601454886Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container c94e1489c97175111774a206c6bc946fa89f12aaf6bae63ef2acfd235cf3bd0c"
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:36.989405    1562 kubelet.go:1871] SyncLoop (PLEG): "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"28b43ed5-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"c94e1489c97175111774a206c6bc946fa89f12aaf6bae63ef2acfd235cf3bd0c"}
Aug 08 13:08:36 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:36.990094    1562 kubelet.go:1871] SyncLoop (PLEG): "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"28b43ed5-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"73a83769a6582129a966f8a7212fea0aabf464c2ab3a46634c22b7f57ed68069"}
Aug 08 13:08:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[3862]: INTEGRITY_RULE file="/repo/hail-ci-build.sh" hash="sha256:5d824f8534fd4efd58c46500b45d3846e089c231a1fd108ddb63106a1734b9fe" ppid=3754 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash"
Aug 08 13:08:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth6eea6d2c: Gained IPv6LL
Aug 08 13:08:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:08:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:08:37 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:37.997338    1562 kubelet.go:1871] SyncLoop (PLEG): "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"28b43ed5-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"c94e1489c97175111774a206c6bc946fa89f12aaf6bae63ef2acfd235cf3bd0c"}
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth6eea6d2c: Lost carrier
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth6eea6d2c) entered disabled state
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device veth6eea6d2c left promiscuous mode
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth6eea6d2c) entered disabled state
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.076448    1562 reconciler.go:186] operationExecutor.UnmountVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "28b43ed5-9b0c-11e8-93a5-42010a8001a5" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.076530    1562 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "28b43ed5-9b0c-11e8-93a5-42010a8001a5" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.090816    1562 operation_generator.go:545] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key" (OuterVolumeSpecName: "hail-ci-0-1-service-account-key") pod "28b43ed5-9b0c-11e8-93a5-42010a8001a5" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5"). InnerVolumeSpecName "hail-ci-0-1-service-account-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.091830    1562 operation_generator.go:545] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5" (OuterVolumeSpecName: "default-token-szcn5") pod "28b43ed5-9b0c-11e8-93a5-42010a8001a5" (UID: "28b43ed5-9b0c-11e8-93a5-42010a8001a5"). InnerVolumeSpecName "default-token-szcn5". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.176908    1562 reconciler.go:290] Volume detached for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") on node "gke-cs-test-dan-test-pool-bca3c3a7-m055" DevicePath ""
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.176982    1562 reconciler.go:290] Volume detached for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/28b43ed5-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") on node "gke-cs-test-dan-test-pool-bca3c3a7-m055" DevicePath ""
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local".
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.957426    1562 kubelet_pods.go:1080] Killing unwanted pod "job-125-z5fdz"
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.961271    1562 kuberuntime_container.go:571] Killing container "docker://c94e1489c97175111774a206c6bc946fa89f12aaf6bae63ef2acfd235cf3bd0c" with 30 second grace period
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:08:38.961849152Z" level=error msg="Handler for POST /v1.27/containers/c94e1489c97175111774a206c6bc946fa89f12aaf6bae63ef2acfd235cf3bd0c/stop returned error: Container c94e1489c97175111774a206c6bc946fa89f12aaf6bae63ef2acfd235cf3bd0c is already stopped"
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:08:38.962860    1562 kuberuntime_container.go:66] Can't make a ref to pod "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)", container default: selfLink was empty, can't make reference
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:08:38.975794824Z" level=error msg="Handler for POST /v1.27/containers/73a83769a6582129a966f8a7212fea0aabf464c2ab3a46634c22b7f57ed68069/stop returned error: Container 73a83769a6582129a966f8a7212fea0aabf464c2ab3a46634c22b7f57ed68069 is already stopped"
Aug 08 13:08:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:38.976578    1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:39.007309    1562 kubelet.go:1871] SyncLoop (PLEG): "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"28b43ed5-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"73a83769a6582129a966f8a7212fea0aabf464c2ab3a46634c22b7f57ed68069"}
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:08:39.007408    1562 pod_container_deletor.go:77] Container "73a83769a6582129a966f8a7212fea0aabf464c2ab3a46634c22b7f57ed68069" not found in pod's containers
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.DailyUseTime for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.DailyUseTime for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UserCrashInterval for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.KernelCrashInterval for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/daily.cycle for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UserCrashInterval for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.AnyCrashesDaily for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UserCrashesDaily for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.KernelCrashesDaily for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UncleanShutdownsDaily for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/weekly.cycle for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.AnyCrashesWeekly for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UserCrashesWeekly for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.KernelCrashesWeekly for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UncleanShutdownsWeekly for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.KernelCrashInterval for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/daily.cycle for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.AnyCrashesDaily for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UserCrashesDaily for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.KernelCrashesDaily for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UncleanShutdownsDaily for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/weekly.cycle for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.AnyCrashesWeekly for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UserCrashesWeekly for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.KernelCrashesWeekly for reading: No such file or directory
Aug 08 13:08:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 metrics_daemon[977]: [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/Platform.UncleanShutdownsWeekly for reading: No such file or directory
Aug 08 13:08:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:41.718041    1562 server.go:779] GET /healthz: (31.694µs) 200 [[curl/7.57.0] 127.0.0.1:60904]
Aug 08 13:08:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:08:51.726200    1562 server.go:779] GET /healthz: (21.265µs) 200 [[curl/7.57.0] 127.0.0.1:60910]
Aug 08 13:08:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:08:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 221.194.44.232 port 39063:11: [preauth]
Aug 08 13:08:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 221.194.44.232 port 39063 [preauth]
Aug 08 13:08:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130858:INFO:update_manager-inl.h(52)] ChromeOSPolicy::UpdateCheckAllowed: START
Aug 08 13:08:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130858:WARNING:evaluation_context-inl.h(43)] Error reading Variable update_disabled: "No value set for update_disabled"
Aug 08 13:08:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130858:WARNING:evaluation_context-inl.h(43)] Error reading Variable release_channel_delegated: "No value set for release_channel_delegated"
Aug 08 13:08:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130858:INFO:chromeos_policy.cc(317)] Periodic check interval not satisfied, blocking until 8/8/2018 13:45:13 GMT
Aug 08 13:08:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/130858:INFO:update_manager-inl.h(74)] ChromeOSPolicy::UpdateCheckAllowed: END
Aug 08 13:09:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:01.733246    1562 server.go:779] GET /healthz: (17.933µs) 200 [[curl/7.57.0] 127.0.0.1:60916]
Aug 08 13:09:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:04.930020    1562 kubelet.go:1245] Image garbage collection succeeded
Aug 08 13:09:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:04.955175    1562 container_manager_linux.go:446] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Aug 08 13:09:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:04.957116    1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:09:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:05.128658    1562 server.go:779] GET /stats/summary/: (16.617866ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048]
Aug 08 13:09:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:11.741516    1562 server.go:779] GET /healthz: (22.375µs) 200 [[curl/7.57.0] 127.0.0.1:60926]
Aug 08 13:09:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:12.264766    1562 server.go:779] GET /healthz: (28.601µs) 200 [[Go-http-client/1.1] 127.0.0.1:60928]
Aug 08 13:09:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:21.749585    1562 server.go:779] GET /healthz: (22.503µs) 200 [[curl/7.57.0] 127.0.0.1:60934]
Aug 08 13:09:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:31.757267    1562 server.go:779] GET /healthz: (20.507µs) 200 [[curl/7.57.0] 127.0.0.1:60950]
Aug 08 13:09:33 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:33.695454    1562 kubelet.go:1853] SyncLoop (DELETE, "api"): "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)"
Aug 08 13:09:33 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:33.701310    1562 kubelet.go:1847] SyncLoop (REMOVE, "api"): "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)"
Aug 08 13:09:33 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:33.701357    1562 kubelet.go:2030] Failed to delete pod "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)", err: pod not found
Aug 08 13:09:33 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:09:33.704261    1562 status_manager.go:446] Failed to update status for pod "job-124-nvh88_default(1b2a26d5-9b0c-11e8-93a5-42010a8001a5)": Operation cannot be fulfilled on pods "job-124-nvh88": StorageError: invalid object, Code: 4, Key: /registry/pods/default/job-124-nvh88, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1b2a26d5-9b0c-11e8-93a5-42010a8001a5, UID in object meta:
Aug 08 13:09:33 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:33.746160    1562 kubelet.go:1853] SyncLoop (DELETE, "api"): "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)"
Aug 08 13:09:33 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:33.749135    1562 kubelet.go:1847] SyncLoop (REMOVE, "api"): "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)"
Aug 08 13:09:33 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:33.749184    1562 kubelet.go:2030] Failed to delete pod "job-125-z5fdz_default(28b43ed5-9b0c-11e8-93a5-42010a8001a5)", err: pod not found
Aug 08 13:09:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:41.765514    1562 server.go:779] GET /healthz: (20.136µs) 200 [[curl/7.57.0] 127.0.0.1:60958]
Aug 08 13:09:50 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:50.902215    1562 kubelet.go:1837] SyncLoop (ADD, "api"): "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)"
Aug 08 13:09:50 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:50.903301    1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:09:50 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:09:50.905405    1562 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 8; ignoring extra CPUs
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.033904    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-126-f4rt8" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.033956    1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-126-f4rt8" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.134213    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-126-f4rt8" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.134879    1562 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-126-f4rt8" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/55618b52-9b0c-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/hail-ci-0-1-service-account-key.
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/55618b52-9b0c-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/default-token-szcn5. | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.158577 1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-126-f4rt8" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5") | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.159194 1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-126-f4rt8" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5") | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.209071 1562 kuberuntime_manager.go:374] No sandbox for pod "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)" can be found. Need to start a new one | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.375594 1562 kubelet.go:1871] SyncLoop (PLEG): "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"55618b52-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"9ef9dc3e520e9e0f81a7885b7c96042d652fe810fba773691d43c479e03df40a"} | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: vethf569b8da: Gained carrier | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device vethf569b8da entered promiscuous mode | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(vethf569b8da) entered forwarding state | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(vethf569b8da) entered forwarding state | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:09:51.594189805Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 39b70c8dc4ab9906a4ceff15f3a5e7524d50e997edc7477204fff4271fc6e322" | |
Aug 08 13:09:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:51.773100 1562 server.go:779] GET /healthz: (27.744µs) 200 [[curl/7.57.0] 127.0.0.1:60966] | |
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[4439]: INTEGRITY_RULE file="/repo/hail-ci-build.sh" hash="sha256:5d824f8534fd4efd58c46500b45d3846e089c231a1fd108ddb63106a1734b9fe" ppid=4329 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash" | |
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:52.385126 1562 kubelet.go:1871] SyncLoop (PLEG): "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"55618b52-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"39b70c8dc4ab9906a4ceff15f3a5e7524d50e997edc7477204fff4271fc6e322"}
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: vethf569b8da: Lost carrier
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: vethf569b8da: Removing non-existent address: fe80::ecf6:beff:feec:655/64 (valid forever)
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(vethf569b8da) entered disabled state
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device vethf569b8da left promiscuous mode
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(vethf569b8da) entered disabled state
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:52.539707 1562 reconciler.go:186] operationExecutor.UnmountVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "55618b52-9b0c-11e8-93a5-42010a8001a5" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:52.539809 1562 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") pod "55618b52-9b0c-11e8-93a5-42010a8001a5" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5")
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:52.550801 1562 operation_generator.go:545] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key" (OuterVolumeSpecName: "hail-ci-0-1-service-account-key") pod "55618b52-9b0c-11e8-93a5-42010a8001a5" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5"). InnerVolumeSpecName "hail-ci-0-1-service-account-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:52.552843 1562 operation_generator.go:545] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5" (OuterVolumeSpecName: "default-token-szcn5") pod "55618b52-9b0c-11e8-93a5-42010a8001a5" (UID: "55618b52-9b0c-11e8-93a5-42010a8001a5"). InnerVolumeSpecName "default-token-szcn5". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:52.640260 1562 reconciler.go:290] Volume detached for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-default-token-szcn5") on node "gke-cs-test-dan-test-pool-bca3c3a7-m055" DevicePath ""
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:52.640962 1562 reconciler.go:290] Volume detached for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/55618b52-9b0c-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") on node "gke-cs-test-dan-test-pool-bca3c3a7-m055" DevicePath ""
Aug 08 13:09:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local".
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:53.393937 1562 kubelet.go:1871] SyncLoop (PLEG): "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"55618b52-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"9ef9dc3e520e9e0f81a7885b7c96042d652fe810fba773691d43c479e03df40a"}
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:09:53.394038 1562 pod_container_deletor.go:77] Container "9ef9dc3e520e9e0f81a7885b7c96042d652fe810fba773691d43c479e03df40a" not found in pod's containers
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:53.394171 1562 kuberuntime_manager.go:392] No ready sandbox for pod "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)" can be found. Need to start a new one
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:09:53.406320908Z" level=error msg="Handler for POST /v1.27/containers/9ef9dc3e520e9e0f81a7885b7c96042d652fe810fba773691d43c479e03df40a/stop returned error: Container 9ef9dc3e520e9e0f81a7885b7c96042d652fe810fba773691d43c479e03df40a is already stopped"
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:09:53.465967 1562 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 8; ignoring extra CPUs
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth5a6e7e54: Gained carrier
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device veth5a6e7e54 entered promiscuous mode
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth5a6e7e54) entered forwarding state
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth5a6e7e54) entered forwarding state
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:09:53 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:09:54 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:54.404710 1562 kubelet.go:1871] SyncLoop (PLEG): "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"55618b52-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"07fdfb53cead8b74149cb2c9d8b097140b71c5765018ea16d008e40cddaf9c2e"}
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth5a6e7e54: Lost carrier
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth5a6e7e54: Removing non-existent address: fe80::6406:8eff:fe4c:eac9/64 (valid forever)
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth5a6e7e54) entered disabled state
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device veth5a6e7e54 left promiscuous mode
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth5a6e7e54) entered disabled state
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection.
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:09:55.415393 1562 kubelet.go:1871] SyncLoop (PLEG): "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"55618b52-9b0c-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"07fdfb53cead8b74149cb2c9d8b097140b71c5765018ea16d008e40cddaf9c2e"}
Aug 08 13:09:55 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:09:55.415511 1562 pod_container_deletor.go:77] Container "07fdfb53cead8b74149cb2c9d8b097140b71c5765018ea16d008e40cddaf9c2e" not found in pod's containers
Aug 08 13:10:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:01.780837 1562 server.go:779] GET /healthz: (20.226µs) 200 [[curl/7.57.0] 127.0.0.1:60984]
Aug 08 13:10:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:04.957825 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:10:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:10:05.009159098Z" level=error msg="Handler for POST /v1.27/containers/73a83769a6582129a966f8a7212fea0aabf464c2ab3a46634c22b7f57ed68069/stop returned error: Container 73a83769a6582129a966f8a7212fea0aabf464c2ab3a46634c22b7f57ed68069 is already stopped"
Aug 08 13:10:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:10:05.033961089Z" level=error msg="Handler for POST /v1.27/containers/bfa1c990f667b7058dc3d275c9380d60526ebd7016899647d4e695af0c63a023/stop returned error: Container bfa1c990f667b7058dc3d275c9380d60526ebd7016899647d4e695af0c63a023 is already stopped"
Aug 08 13:10:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:05.034705 1562 server.go:779] GET /stats/summary/: (23.431452ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048]
Aug 08 13:10:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:11.788740 1562 server.go:779] GET /healthz: (31.82µs) 200 [[curl/7.57.0] 127.0.0.1:60998]
Aug 08 13:10:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:12.262857 1562 server.go:779] GET /healthz: (37.323µs) 200 [[Go-http-client/1.1] 127.0.0.1:32768]
Aug 08 13:10:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:21.795868 1562 server.go:779] GET /healthz: (25.903µs) 200 [[curl/7.57.0] 127.0.0.1:32774]
Aug 08 13:10:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:31.803171 1562 server.go:779] GET /healthz: (72.868µs) 200 [[curl/7.57.0] 127.0.0.1:32788]
Aug 08 13:10:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:41.810180 1562 server.go:779] GET /healthz: (19.922µs) 200 [[curl/7.57.0] 127.0.0.1:32798]
Aug 08 13:10:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:10:51.817882 1562 server.go:779] GET /healthz: (23.973µs) 200 [[curl/7.57.0] 127.0.0.1:32804]
Aug 08 13:11:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:01.825063 1562 server.go:779] GET /healthz: (28.449µs) 200 [[curl/7.57.0] 127.0.0.1:32810]
Aug 08 13:11:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:04.958616 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:11:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:05.129428 1562 server.go:779] GET /stats/summary/: (21.402561ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048]
Aug 08 13:11:06 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local".
Aug 08 13:11:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:11.832760 1562 server.go:779] GET /healthz: (30.17µs) 200 [[curl/7.57.0] 127.0.0.1:32822]
Aug 08 13:11:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:12.260989 1562 server.go:779] GET /healthz: (36.633µs) 200 [[Go-http-client/1.1] 127.0.0.1:32824]
Aug 08 13:11:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:21.840778 1562 server.go:779] GET /healthz: (26.346µs) 200 [[curl/7.57.0] 127.0.0.1:32830]
Aug 08 13:11:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:31.848638 1562 server.go:779] GET /healthz: (34.268µs) 200 [[curl/7.57.0] 127.0.0.1:32840]
Aug 08 13:11:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:41.856363 1562 server.go:779] GET /healthz: (19.037µs) 200 [[curl/7.57.0] 127.0.0.1:32848]
Aug 08 13:11:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:11:51.864106 1562 server.go:779] GET /healthz: (19.796µs) 200 [[curl/7.57.0] 127.0.0.1:32856]
Aug 08 13:11:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:11:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Accepted publickey for dking from 69.173.127.111 port 2282 ssh2: RSA SHA256:pNumoUGw+O1nUv6luhwH0kKx2qKzma1TUAvFI83iHok
Aug 08 13:11:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[5169]: pam_unix(sshd:session): session opened for user dking by (uid=0)
Aug 08 13:11:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5169]: CONFIG_CHANGE pid=5169 uid=0 auid=5009 ses=2 op=tty_set old-enabled=0 new-enabled=1 old-log_passwd=0 new-log_passwd=0 res=1
Aug 08 13:11:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[5169]: pam_tty_audit(sshd:session): changed status from 0 to 1
Aug 08 13:11:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5175]: INTEGRITY_RULE file="/usr/bin/dircolors" hash="sha256:db0b3071d1896d4cb925e2539b959071d3dd028501e989ae051a7cc2923a131d" ppid=5174 pid=5175 auid=5009 uid=5009 gid=5009 euid=5009 suid=5009 fsuid=5009 egid=5009 sgid=5009 fsgid=5009 tty=pts0 ses=2 comm="bash" exe="/bin/bash"
Aug 08 13:12:00 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:12:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:01.871994 1562 server.go:779] GET /healthz: (18.221µs) 200 [[curl/7.57.0] 127.0.0.1:32862]
Aug 08 13:12:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 221.194.47.205 port 50192:11: [preauth]
Aug 08 13:12:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 221.194.47.205 port 50192 [preauth]
Aug 08 13:12:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:04.958901 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:12:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:05.009981 1562 server.go:779] GET /stats/summary/: (17.422823ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048]
Aug 08 13:12:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5227]: INTEGRITY_RULE file="/usr/bin/sudo" hash="sha256:a2b9f820df24277f9e1b1a51caad7862512456b58eac0f4ef04c10379019a3c4" ppid=5173 pid=5227 auid=5009 uid=5009 gid=5009 euid=5009 suid=5009 fsuid=5009 egid=5009 sgid=5009 fsgid=5009 tty=pts0 ses=2 comm="bash" exe="/bin/bash"
Aug 08 13:12:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5227]: INTEGRITY_RULE file="/usr/libexec/sudo/libsudo_util.so.0.0.0" hash="sha256:fe683290e561336fba7115e6b6b6c3e3f00fbdddcc81e1d666cbd7db312ff382" ppid=5173 pid=5227 auid=5009 uid=5009 gid=5009 euid=0 suid=0 fsuid=0 egid=5009 sgid=5009 fsgid=5009 tty=pts0 ses=2 comm="sudo" exe="/usr/bin/sudo"
Aug 08 13:12:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5227]: INTEGRITY_RULE file="/usr/lib64/sudo/sudoers.so" hash="sha256:a889723b94c6e7d15fc2aabf49daff998825dbdb9f45a3cc17f149112eb1380d" ppid=5173 pid=5227 auid=5009 uid=5009 gid=5009 euid=0 suid=0 fsuid=0 egid=5009 sgid=5009 fsgid=5009 tty=pts0 ses=2 comm="sudo" exe="/usr/bin/sudo"
Aug 08 13:12:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[5227]: dking : TTY=pts/0 ; PWD=/home/dking ; USER=root ; COMMAND=/usr/bin/journalctl
Aug 08 13:12:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[5227]: pam_unix(sudo:session): session opened for user root by dking(uid=0)
Aug 08 13:12:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[5227]: pam_tty_audit(sudo:session): changed status from 1 to 1
Aug 08 13:12:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5231]: INTEGRITY_RULE file="/usr/bin/less" hash="sha256:6483973b5172964a5387c033105a49614635ad8045b0c10e8cbc02279b7fdc98" ppid=5230 pid=5231 auid=5009 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=2 comm="journalctl" exe="/usr/bin/journalctl"
Aug 08 13:12:08 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[5227]: pam_unix(sudo:session): session closed for user root
Aug 08 13:12:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[5232]: dking : TTY=pts/0 ; PWD=/home/dking ; USER=root ; COMMAND=/usr/bin/journalctl
Aug 08 13:12:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[5232]: pam_unix(sudo:session): session opened for user root by dking(uid=0)
Aug 08 13:12:10 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[5232]: pam_tty_audit(sudo:session): changed status from 1 to 1
Aug 08 13:12:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:11.879686 1562 server.go:779] GET /healthz: (27.499µs) 200 [[curl/7.57.0] 127.0.0.1:32872]
Aug 08 13:12:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:12.258338 1562 server.go:779] GET /healthz: (39.25µs) 200 [[Go-http-client/1.1] 127.0.0.1:32874]
Aug 08 13:12:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local".
Aug 08 13:12:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:21.886999 1562 server.go:779] GET /healthz: (19.554µs) 200 [[curl/7.57.0] 127.0.0.1:32882]
Aug 08 13:12:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:31.894635 1562 server.go:779] GET /healthz: (19.462µs) 200 [[curl/7.57.0] 127.0.0.1:32892]
Aug 08 13:12:38 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:12:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 221.194.47.221 port 46869:11: [preauth]
Aug 08 13:12:39 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 221.194.47.221 port 46869 [preauth]
Aug 08 13:12:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:41.902533 1562 server.go:779] GET /healthz: (40.729µs) 200 [[curl/7.57.0] 127.0.0.1:32900]
Aug 08 13:12:44 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key
Aug 08 13:12:45 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 115.238.245.4 port 43861:11: [preauth]
Aug 08 13:12:45 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 115.238.245.4 port 43861 [preauth]
Aug 08 13:12:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:12:51.909831 1562 server.go:779] GET /healthz: (24.316µs) 200 [[curl/7.57.0] 127.0.0.1:32906]
Aug 08 13:13:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:01.917130 1562 server.go:779] GET /healthz: (19.599µs) 200 [[curl/7.57.0] 127.0.0.1:32914]
Aug 08 13:13:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:04.959767 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:13:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:05.012831 1562 server.go:779] GET /stats/summary/: (17.755933ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048]
Aug 08 13:13:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:11.925408 1562 server.go:779] GET /healthz: (21.261µs) 200 [[curl/7.57.0] 127.0.0.1:32924]
Aug 08 13:13:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:12.255749 1562 server.go:779] GET /healthz: (25.055µs) 200 [[Go-http-client/1.1] 127.0.0.1:32926]
Aug 08 13:13:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:21.933351 1562 server.go:779] GET /healthz: (31.538µs) 200 [[curl/7.57.0] 127.0.0.1:32932]
Aug 08 13:13:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:31.940570 1562 server.go:779] GET /healthz: (23.572µs) 200 [[curl/7.57.0] 127.0.0.1:32942]
Aug 08 13:13:35 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local".
Aug 08 13:13:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:41.948470 1562 server.go:779] GET /healthz: (38.818µs) 200 [[curl/7.57.0] 127.0.0.1:32952]
Aug 08 13:13:51 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:13:51.956149 1562 server.go:779] GET /healthz: (22.954µs) 200 [[curl/7.57.0] 127.0.0.1:32958]
Aug 08 13:13:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/131358:INFO:update_manager-inl.h(52)] ChromeOSPolicy::UpdateCheckAllowed: START
Aug 08 13:13:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/131358:WARNING:evaluation_context-inl.h(43)] Error reading Variable update_disabled: "No value set for update_disabled"
Aug 08 13:13:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/131358:WARNING:evaluation_context-inl.h(43)] Error reading Variable release_channel_delegated: "No value set for release_channel_delegated"
Aug 08 13:13:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/131358:INFO:chromeos_policy.cc(317)] Periodic check interval not satisfied, blocking until 8/8/2018 13:45:13 GMT
Aug 08 13:13:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 update_engine[966]: [0808/131358:INFO:update_manager-inl.h(74)] ChromeOSPolicy::UpdateCheckAllowed: END
Aug 08 13:14:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:01.964076 1562 server.go:779] GET /healthz: (25.549µs) 200 [[curl/7.57.0] 127.0.0.1:32966]
Aug 08 13:14:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:04.955948 1562 container_manager_linux.go:446] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Aug 08 13:14:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:04.960617 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:14:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:05.015338 1562 server.go:779] GET /stats/summary/: (17.73356ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048]
Aug 08 13:14:11 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:11.971554 1562 server.go:779] GET /healthz: (26.464µs) 200 [[curl/7.57.0] 127.0.0.1:32976]
Aug 08 13:14:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:12.252494 1562 server.go:779] GET /healthz: (49.147µs) 200 [[Go-http-client/1.1] 127.0.0.1:32978]
Aug 08 13:14:21 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:21.978750 1562 server.go:779] GET /healthz: (24.331µs) 200 [[curl/7.57.0] 127.0.0.1:32984]
Aug 08 13:14:31 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:31.985934 1562 server.go:779] GET /healthz: (29.338µs) 200 [[curl/7.57.0] 127.0.0.1:32994]
Aug 08 13:14:41 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:41.993163 1562 server.go:779] GET /healthz: (26.246µs) 200 [[curl/7.57.0] 127.0.0.1:33002]
Aug 08 13:14:47 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:47.020360 1562 kubelet.go:1853] SyncLoop (DELETE, "api"): "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)"
Aug 08 13:14:47 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:47.025001 1562 kubelet.go:1847] SyncLoop (REMOVE, "api"): "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)"
Aug 08 13:14:47 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:47.025048 1562 kubelet.go:2030] Failed to delete pod "job-126-f4rt8_default(55618b52-9b0c-11e8-93a5-42010a8001a5)", err: pod not found
Aug 08 13:14:49 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local".
Aug 08 13:14:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:14:52.000421 1562 server.go:779] GET /healthz: (24.613µs) 200 [[curl/7.57.0] 127.0.0.1:33014]
Aug 08 13:14:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5917]: INTEGRITY_RULE file="/bin/sed" hash="sha256:018f830413a9a08e177a0a3dccc31cbc4d33ec0c363fa18b3d7a41846ca00fd9" ppid=5914 pid=5917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:14:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5915]: INTEGRITY_RULE file="/usr/bin/stat" hash="sha256:9ea8fbb8b1f3ca3e2dadfb92bfbcd5e0f2cfcb01fc0a0df5694a208d18758a11" ppid=5914 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:14:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5916]: INTEGRITY_RULE file="/bin/grep" hash="sha256:e9ed36d436c8fc63bb64933ab8a8a562f95751e1235a457fc9ca0dff9d45d6ab" ppid=5914 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:14:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5915]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnss_compat-2.19.so" hash="sha256:512a4fc0d268439035265c257dc2d9d063ab45bd5cbf0b63f6ff98d047ed6a8f" ppid=5914 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stat" exe="/usr/bin/stat"
Aug 08 13:14:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5915]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnsl-2.19.so" hash="sha256:5fb1fe02c096cc5ef4b28fbdc97262d08f13c7a1293cd7bbe997806be9ef88aa" ppid=5914 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stat" exe="/usr/bin/stat"
Aug 08 13:14:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5915]: INTEGRITY_RULE file="/lib/x86_64-linux-gnu/libnss_nis-2.19.so" hash="sha256:ad90088a9ad5f588b7eaf50f93409565da27745a5f08ad008025e6f5b988b00e" ppid=5914 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stat" exe="/usr/bin/stat"
Aug 08 13:14:56 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[5920]: INTEGRITY_RULE file="/usr/bin/expr" hash="sha256:4c056fa91af5e3781fbce9f3732ac5d51dad53213f2267254839c48e5003dbed" ppid=5907 pid=5920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sh" exe="/bin/dash"
Aug 08 13:15:00 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[5232]: pam_unix(sudo:session): session closed for user root
Aug 08 13:15:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:02.007503 1562 server.go:779] GET /healthz: (53.321µs) 200 [[curl/7.57.0] 127.0.0.1:33020]
Aug 08 13:15:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:04.960938 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Aug 08 13:15:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:04.994542 1562 server.go:779] GET /stats/summary/: (16.786642ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048]
Aug 08 13:15:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:15:05.104766267Z" level=error msg="Handler for POST /v1.27/containers/9ef9dc3e520e9e0f81a7885b7c96042d652fe810fba773691d43c479e03df40a/stop returned error: Container 9ef9dc3e520e9e0f81a7885b7c96042d652fe810fba773691d43c479e03df40a is already stopped"
Aug 08 13:15:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:15:05.133691666Z" level=error msg="Handler for POST /v1.27/containers/07fdfb53cead8b74149cb2c9d8b097140b71c5765018ea16d008e40cddaf9c2e/stop returned error: Container 07fdfb53cead8b74149cb2c9d8b097140b71c5765018ea16d008e40cddaf9c2e is already stopped" | |
Aug 08 13:15:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:12.015460 1562 server.go:779] GET /healthz: (27.664µs) 200 [[curl/7.57.0] 127.0.0.1:33036] | |
Aug 08 13:15:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:12.249376 1562 server.go:779] GET /healthz: (36.593µs) 200 [[Go-http-client/1.1] 127.0.0.1:33038] | |
Aug 08 13:15:16 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key | |
Aug 08 13:15:17 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 221.194.47.205 port 53420:11: [preauth] | |
Aug 08 13:15:17 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 221.194.47.205 port 53420 [preauth] | |
Aug 08 13:15:19 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[6083]: INTEGRITY_RULE file="/bin/date" hash="sha256:71fb38657d2a08ad7fa8a7d6d44b54b4a63b2b0ca654e7865c0993443fe36608" ppid=6082 pid=6083 auid=5009 uid=5009 gid=5009 euid=5009 suid=5009 fsuid=5009 egid=5009 sgid=5009 fsgid=5009 tty=pts0 ses=2 comm="bash" exe="/bin/bash" | |
Aug 08 13:15:22 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:22.022124 1562 server.go:779] GET /healthz: (21.587µs) 200 [[curl/7.57.0] 127.0.0.1:33044] | |
Aug 08 13:15:32 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:32.030004 1562 server.go:779] GET /healthz: (23.854µs) 200 [[curl/7.57.0] 127.0.0.1:33054] | |
Aug 08 13:15:42 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:42.037880 1562 server.go:779] GET /healthz: (18.755µs) 200 [[curl/7.57.0] 127.0.0.1:33062] | |
Aug 08 13:15:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:15:52.045717 1562 server.go:779] GET /healthz: (26.638µs) 200 [[curl/7.57.0] 127.0.0.1:33068] | |
Aug 08 13:15:57 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key | |
Aug 08 13:15:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 221.194.47.221 port 53692:11: [preauth] | |
Aug 08 13:15:58 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 221.194.47.221 port 53692 [preauth] | |
Aug 08 13:16:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:01.705516 1562 kubelet.go:1837] SyncLoop (ADD, "api"): "job-127-g7vrh_default(3268153a-9b0d-11e8-93a5-42010a8001a5)" | |
Aug 08 13:16:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:01.705997 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:16:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:16:01.707619 1562 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 8; ignoring extra CPUs | |
Aug 08 13:16:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:01.883855 1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-127-g7vrh" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5") | |
Aug 08 13:16:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:01.883917 1562 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-127-g7vrh" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5") | |
Aug 08 13:16:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:01.984354 1562 reconciler.go:257] operationExecutor.MountVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-127-g7vrh" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5") | |
Aug 08 13:16:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:01.985163 1562 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-127-g7vrh" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5") | |
Aug 08 13:16:01 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/3268153a-9b0d-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/default-token-szcn5. | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/3268153a-9b0d-11e8-93a5-42010a8001a5/volumes/kubernetes.io~secret/hail-ci-0-1-service-account-key. | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:02.006474 1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "job-127-g7vrh" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5") | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:02.006807 1562 operation_generator.go:485] MountVolume.SetUp succeeded for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-default-token-szcn5") pod "job-127-g7vrh" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5") | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:02.010876 1562 kuberuntime_manager.go:374] No sandbox for pod "job-127-g7vrh_default(3268153a-9b0d-11e8-93a5-42010a8001a5)" can be found. Need to start a new one | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:02.052947 1562 server.go:779] GET /healthz: (42.065µs) 200 [[curl/7.57.0] 127.0.0.1:33074] | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth2bb39aa1: Gained carrier | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device veth2bb39aa1 entered promiscuous mode | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth2bb39aa1) entered forwarding state | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth2bb39aa1) entered forwarding state | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:16:02.386509507Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container bdb5e2fbb6815237d1c417de2aab55018898dcabbd9c8866c7b173a2f924ab09" | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:02.868879 1562 kubelet.go:1871] SyncLoop (PLEG): "job-127-g7vrh_default(3268153a-9b0d-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"3268153a-9b0d-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"bdb5e2fbb6815237d1c417de2aab55018898dcabbd9c8866c7b173a2f924ab09"} | |
Aug 08 13:16:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:02.869525 1562 kubelet.go:1871] SyncLoop (PLEG): "job-127-g7vrh_default(3268153a-9b0d-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"3268153a-9b0d-11e8-93a5-42010a8001a5", Type:"ContainerStarted", Data:"17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d"} | |
Aug 08 13:16:03 gke-cs-test-dan-test-pool-bca3c3a7-m055 google-ip-forwarding[833]: WARNING Could not parse IP address: "local". | |
Aug 08 13:16:03 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth2bb39aa1: Gained IPv6LL | |
Aug 08 13:16:03 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:16:03 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 audit[6469]: INTEGRITY_RULE file="/repo/hail-ci-build.sh" hash="sha256:5d824f8534fd4efd58c46500b45d3846e089c231a1fd108ddb63106a1734b9fe" ppid=6352 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bash" exe="/bin/bash" | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:04.880922 1562 kubelet.go:1871] SyncLoop (PLEG): "job-127-g7vrh_default(3268153a-9b0d-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"3268153a-9b0d-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"bdb5e2fbb6815237d1c417de2aab55018898dcabbd9c8866c7b173a2f924ab09"} | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-networkd[644]: veth2bb39aa1: Lost carrier | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth2bb39aa1) entered disabled state | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: device veth2bb39aa1 left promiscuous mode | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kernel: cbr0: port 2(veth2bb39aa1) entered disabled state | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Network configuration changed, trying to establish connection. | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 systemd-timesyncd[612]: Synchronized to time server 169.254.169.254:123 (169.254.169.254). | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:04.963011 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:04.966302 1562 kubelet_pods.go:1080] Killing unwanted pod "job-127-g7vrh" | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:04.995466 1562 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-default-token-szcn5") pod "3268153a-9b0d-11e8-93a5-42010a8001a5" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5") | |
Aug 08 13:16:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:04.995542 1562 reconciler.go:186] operationExecutor.UnmountVolume started for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") pod "3268153a-9b0d-11e8-93a5-42010a8001a5" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5") | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:16:05.003710641Z" level=error msg="Handler for POST /v1.27/containers/17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d/stop returned error: Container 17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d is already stopped" | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:16:05.004274 1562 remote_runtime.go:115] StopPodSandbox "17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin kubenet failed to teardown pod "job-127-g7vrh_default" network: Error removing container from network: failed to Statfs "/proc/6266/ns/net": no such file or directory | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:16:05.004921 1562 kuberuntime_manager.go:784] Failed to stop sandbox {"docker" "17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d"} | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: E0808 13:16:05.005355 1562 kubelet_pods.go:1083] Failed killing the pod "job-127-g7vrh": failed to "KillPodSandbox" for "3268153a-9b0d-11e8-93a5-42010a8001a5" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin kubenet failed to teardown pod \"job-127-g7vrh_default\" network: Error removing container from network: failed to Statfs \"/proc/6266/ns/net\": no such file or directory" | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:05.008543 1562 operation_generator.go:545] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key" (OuterVolumeSpecName: "hail-ci-0-1-service-account-key") pod "3268153a-9b0d-11e8-93a5-42010a8001a5" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5"). InnerVolumeSpecName "hail-ci-0-1-service-account-key". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:05.009853 1562 operation_generator.go:545] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-default-token-szcn5" (OuterVolumeSpecName: "default-token-szcn5") pod "3268153a-9b0d-11e8-93a5-42010a8001a5" (UID: "3268153a-9b0d-11e8-93a5-42010a8001a5"). InnerVolumeSpecName "default-token-szcn5". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:05.013094 1562 server.go:779] GET /stats/summary/: (20.724608ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048] | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:05.095988 1562 reconciler.go:290] Volume detached for volume "hail-ci-0-1-service-account-key" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-hail-ci-0-1-service-account-key") on node "gke-cs-test-dan-test-pool-bca3c3a7-m055" DevicePath "" | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:05.096050 1562 reconciler.go:290] Volume detached for volume "default-token-szcn5" (UniqueName: "kubernetes.io/secret/3268153a-9b0d-11e8-93a5-42010a8001a5-default-token-szcn5") on node "gke-cs-test-dan-test-pool-bca3c3a7-m055" DevicePath "" | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:05.889552 1562 kubelet.go:1871] SyncLoop (PLEG): "job-127-g7vrh_default(3268153a-9b0d-11e8-93a5-42010a8001a5)", event: &pleg.PodLifecycleEvent{ID:"3268153a-9b0d-11e8-93a5-42010a8001a5", Type:"ContainerDied", Data:"17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d"} | |
Aug 08 13:16:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: W0808 13:16:05.890193 1562 pod_container_deletor.go:77] Container "17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d" not found in pod's containers | |
Aug 08 13:16:06 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:06.957202 1562 kubelet_pods.go:1080] Killing unwanted pod "job-127-g7vrh" | |
Aug 08 13:16:06 gke-cs-test-dan-test-pool-bca3c3a7-m055 dockerd[1509]: time="2018-08-08T13:16:06.971675513Z" level=error msg="Handler for POST /v1.27/containers/17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d/stop returned error: Container 17a7782f43606644eb920c3c75cd307cafef3281cf702166ac2f8678a2838d7d is already stopped" | |
Aug 08 13:16:06 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:06.972822 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:16:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:12.060332 1562 server.go:779] GET /healthz: (23.773µs) 200 [[curl/7.57.0] 127.0.0.1:33098] | |
Aug 08 13:16:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:12.246534 1562 server.go:779] GET /healthz: (28.278µs) 200 [[Go-http-client/1.1] 127.0.0.1:33100] | |
Aug 08 13:16:22 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:22.067838 1562 server.go:779] GET /healthz: (25.684µs) 200 [[curl/7.57.0] 127.0.0.1:33108] | |
Aug 08 13:16:32 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:32.074993 1562 server.go:779] GET /healthz: (23.3µs) 200 [[curl/7.57.0] 127.0.0.1:33122] | |
Aug 08 13:16:42 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:42.081959 1562 server.go:779] GET /healthz: (20.117µs) 200 [[curl/7.57.0] 127.0.0.1:33130] | |
Aug 08 13:16:52 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:16:52.089281 1562 server.go:779] GET /healthz: (25.096µs) 200 [[curl/7.57.0] 127.0.0.1:33136] | |
Aug 08 13:17:02 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:17:02.096419 1562 server.go:779] GET /healthz: (20.875µs) 200 [[curl/7.57.0] 127.0.0.1:33142] | |
Aug 08 13:17:04 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:17:04.963428 1562 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration | |
Aug 08 13:17:05 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:17:05.017705 1562 server.go:779] GET /stats/summary/: (16.936373ms) 200 [[Go-http-client/1.1] 10.56.0.64:58048] | |
Aug 08 13:17:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:17:12.103585 1562 server.go:779] GET /healthz: (34.414µs) 200 [[curl/7.57.0] 127.0.0.1:33152] | |
Aug 08 13:17:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 kubelet[1562]: I0808 13:17:12.247175 1562 server.go:779] GET /healthz: (31.95µs) 200 [[Go-http-client/1.1] 127.0.0.1:33154] | |
Aug 08 13:17:12 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Could not load host key: /mnt/stateful_partition/etc/ssh/ssh_host_dsa_key | |
Aug 08 13:17:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Received disconnect from 115.238.245.4 port 45092:11: [preauth] | |
Aug 08 13:17:14 gke-cs-test-dan-test-pool-bca3c3a7-m055 sshd[707]: Disconnected from 115.238.245.4 port 45092 [preauth] | |
Aug 08 13:17:16 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[6973]: dking : TTY=pts/0 ; PWD=/home/dking ; USER=root ; COMMAND=/usr/bin/journalctl | |
Aug 08 13:17:16 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[6973]: pam_unix(sudo:session): session opened for user root by dking(uid=0) | |
Aug 08 13:17:16 gke-cs-test-dan-test-pool-bca3c3a7-m055 sudo[6973]: pam_tty_audit(sudo:session): changed status from 1 to 1 |