@monder, created August 26, 2016 13:46
core@ip-10-0-1-60 ~ $ cat /etc/os-release
NAME=CoreOS
ID=coreos
VERSION=1122.0.0
VERSION_ID=1122.0.0
BUILD_ID=2016-07-27-0739
PRETTY_NAME="CoreOS 1122.0.0 (MoreOS)"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"
core@ip-10-0-1-60 ~ $ sudo rkt run --debug --insecure-options=image https://s3-eu-west-1.amazonaws.com/monder-cc/bugs/library-elasticsearch-2.2.aci
image: using image from local store for image name coreos.com/rkt/stage1-coreos:1.11.0
image: remote fetching from URL "https://s3-eu-west-1.amazonaws.com/monder-cc/bugs/library-elasticsearch-2.2.aci"
image: fetching image from https://s3-eu-west-1.amazonaws.com/monder-cc/bugs/library-elasticsearch-2.2.aci
Downloading ACI: [=============================================] 163 MB/163 MB
stage0: Preparing stage1
stage0: Writing image manifest
stage0: Loading image sha512-930cb2281df34ee71123134154b6118047c59e21c8bc50943182371297db762e
stage0: Writing image manifest
stage0: Writing pod manifest
stage0: Setting up stage1
stage0: Wrote filesystem to /var/lib/rkt/pods/run/9402756b-4555-48e4-bb8f-910b6d0d3dc8
stage0: Pivoting to filesystem /var/lib/rkt/pods/run/9402756b-4555-48e4-bb8f-910b6d0d3dc8
stage0: Execing /init
networking: loading networks from /etc/rkt/net.d
networking: loading network default with type ptp
stage1: warning: no volume specified for mount point "volume-usr-share-elasticsearch-data", implicitly creating an "empty" volume. This volume will be removed when the pod is garbage-collected.
stage1: Docker converted image, initializing implicit volume with data contained at the mount point "volume-usr-share-elasticsearch-data".
stage1: warning: no volume specified for mount point "volume-usr-share-elasticsearch-data", implicitly creating an "empty" volume. This volume will be removed when the pod is garbage-collected.
stage1: Docker converted image, initializing implicit volume with data contained at the mount point "volume-usr-share-elasticsearch-data".
stage1: creating an empty volume folder for sharing: "sharedVolumes/library-elasticsearch-volume-usr-share-elasticsearch-data"
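The two warnings above are the point of interest: the ACI was converted from a Docker image whose manifest declares a mount point, and because no volume was passed on the command line, rkt backs it with an implicit "empty" volume that is removed when the pod is garbage-collected, so anything Elasticsearch writes there is lost with the pod. A minimal sketch of the explicit alternative, assuming a writable host directory /var/lib/elasticsearch (the host path is illustrative; the volume name is taken from the warning above):

# Hypothetical host-backed volume for the declared mount point
sudo rkt run --insecure-options=image \
  --volume volume-usr-share-elasticsearch-data,kind=host,source=/var/lib/elasticsearch \
  https://s3-eu-west-1.amazonaws.com/monder-cc/bugs/library-elasticsearch-2.2.aci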
Spawning container rkt-9402756b-4555-48e4-bb8f-910b6d0d3dc8 on /var/lib/rkt/pods/run/9402756b-4555-48e4-bb8f-910b6d0d3dc8/stage1/rootfs.
Press ^] three times within 1s to kill container.
systemd 229 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT -GNUTLS -ACL +XZ -LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD -IDN)
Detected virtualization rkt.
Detected architecture x86-64.
Welcome to Linux!
Set hostname to <rkt-9402756b-4555-48e4-bb8f-910b6d0d3dc8>.
Initializing machine ID from container UUID.
[ OK ] Listening on Journal Socket.
[ OK ] Created slice system.slice.
[ OK ] Started Pod shutdown.
Starting Create /etc/passwd and /etc/group...
[ OK ] Created slice system-prepare\x2dapp.slice.
[ OK ] Started library-elasticsearch Reaper.
[ OK ] Listening on Journal Socket (/dev/log).
Starting Journal Service...
[ OK ] Started Create /etc/passwd and /etc/group.
[ OK ] Started Journal Service.
Starting Prepare minimum environment for chrooted applications...
[ OK ] Started Prepare minimum environment for chrooted applications.
[ OK ] Started Application=library-elasticsearch Image=library-elasticsearch.
[ OK ] Reached target rkt apps target.
[523660.104546] library-elasticsearch[5]: [2016-08-26 13:31:41,082][INFO ][node ] [Sean Cassidy] version[2.2.2], pid[5], build[fcc01dd/2016-03-29T08:49:35Z]
[523660.105165] library-elasticsearch[5]: [2016-08-26 13:31:41,086][INFO ][node ] [Sean Cassidy] initializing ...
[523661.053492] library-elasticsearch[5]: [2016-08-26 13:31:42,034][INFO ][plugins ] [Sean Cassidy] modules [lang-expression, lang-groovy], plugins [head], sites [head]
[523661.098586] library-elasticsearch[5]: [2016-08-26 13:31:42,079][INFO ][env ] [Sean Cassidy] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvda9)]], net usable_space [6.9gb], net total_space [17gb], spins? [unknown], types [ext4]
[523661.099132] library-elasticsearch[5]: [2016-08-26 13:31:42,080][INFO ][env ] [Sean Cassidy] heap size [1015.6mb], compressed ordinary object pointers [true]
[523661.099756] library-elasticsearch[5]: [2016-08-26 13:31:42,080][WARN ][env ] [Sean Cassidy] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[523664.043924] library-elasticsearch[5]: [2016-08-26 13:31:45,025][INFO ][node ] [Sean Cassidy] initialized
[523664.046159] library-elasticsearch[5]: [2016-08-26 13:31:45,027][INFO ][node ] [Sean Cassidy] starting ...
[523664.160972] library-elasticsearch[5]: [2016-08-26 13:31:45,142][INFO ][transport ] [Sean Cassidy] publish_address {172.16.28.7:9300}, bound_addresses {[::]:9300}
[523664.179199] library-elasticsearch[5]: [2016-08-26 13:31:45,160][INFO ][discovery ] [Sean Cassidy] elasticsearch/k3os6PNoRyaoqhkz4Bum9Q
[523667.246282] library-elasticsearch[5]: [2016-08-26 13:31:48,227][INFO ][cluster.service ] [Sean Cassidy] new_master {Sean Cassidy}{k3os6PNoRyaoqhkz4Bum9Q}{172.16.28.7}{172.16.28.7:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[523667.279904] library-elasticsearch[5]: [2016-08-26 13:31:48,261][INFO ][http ] [Sean Cassidy] publish_address {172.16.28.7:9200}, bound_addresses {[::]:9200}
[523667.280552] library-elasticsearch[5]: [2016-08-26 13:31:48,261][INFO ][node ] [Sean Cassidy] started
[523667.308360] library-elasticsearch[5]: [2016-08-26 13:31:48,289][INFO ][gateway ] [Sean Cassidy] recovered [0] indices into cluster_state
^C^C^C^]^]
Container rkt-9402756b-4555-48e4-bb8f-910b6d0d3dc8 terminated by signal KILL.
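Two hedged follow-ups suggested by the log, not part of the session above. First, Elasticsearch warned that max file descriptors [4096] is likely too low; one possible host-side workaround is to raise the soft limit in the shell that launches the pod, on the assumption that the rlimit is inherited through stage1 (this is version-dependent and untested here):

# Assumption: the raised nofile limit propagates into the pod
sudo sh -c 'ulimit -n 65536 && rkt run --insecure-options=image https://s3-eu-west-1.amazonaws.com/monder-cc/bugs/library-elasticsearch-2.2.aci'

Second, the node published 172.16.28.7:9200 on the default ptp network, so while the pod is running its health should be queryable from the host (address copied from the log above):

curl http://172.16.28.7:9200/_cluster/health?pretty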