@papagunit
Created April 13, 2016 16:44
Command failed: wait ${SUBPROC} on line 326.
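bdutil runs each gcloud invocation as a background subprocess and then waits on its PID; a nonzero status from that wait is what surfaces here as "Command failed" (line 326 refers to bdutil's own source, not this log). A minimal sketch of the pattern, with illustrative names rather than bdutil's actual code:

    # Launch a command in the background, block on it, and propagate failure.
    run_async_and_wait() {
      "$@" &                  # run the gcloud invocation in the background
      local SUBPROC=$!        # PID of the subprocess
      if ! wait "${SUBPROC}"; then
        echo "Command failed: wait ${SUBPROC} on line ${LINENO}." >&2
        return 1
      fi
    }

    # Example: create one of the worker disks seen below.
    run_async_and_wait gcloud compute disks create hadoop-w-0-pd \
        --project=hwxex-1276 --zone=us-central1-f --size=4GB --type=pd-standard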
******************* gcloud compute stdout *******************
NAME           ZONE           SIZE_GB  TYPE         STATUS
hadoop-w-0-pd  us-central1-f  4        pd-standard  READY
hadoop-w-3-pd  us-central1-f  4        pd-standard  READY
hadoop-w-2-pd  us-central1-f  4        pd-standard  READY
hadoop-w-1-pd  us-central1-f  4        pd-standard  READY
hadoop-m-pd    us-central1-f  4        pd-standard  READY
NAME        ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
hadoop-w-1  us-central1-f  n1-standard-4               10.128.0.2   104.197.114.4    RUNNING
hadoop-m    us-central1-f  n1-standard-4               10.128.0.4   104.197.193.135  RUNNING
hadoop-w-3  us-central1-f  n1-standard-4               10.128.0.5   130.211.116.182  RUNNING
hadoop-w-0  us-central1-f  n1-standard-4               10.128.0.3   107.178.219.36   RUNNING
hadoop-w-2  us-central1-f  n1-standard-4               10.128.0.6   104.197.172.90   RUNNING
******************* gcloud compute stderr *******************
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#pdperformance.
[... same warning repeated for each of the 5 persistent disks ...]
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/disks/hadoop-w-0-pd].
INFO: Explict Display.
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/disks/hadoop-w-3-pd].
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/disks/hadoop-w-2-pd].
INFO: Explict Display.
INFO: Explict Display.
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/disks/hadoop-w-1-pd].
INFO: Explict Display.
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/disks/hadoop-m-pd].
INFO: Explict Display.
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#pdperformance.
[... same warning repeated for each of the 5 instances ...]
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/instances/hadoop-w-1].
INFO: Explict Display.
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/instances/hadoop-m].
INFO: Explict Display.
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/instances/hadoop-w-0].
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/instances/hadoop-w-3].
INFO: Explict Display.
INFO: Explict Display.
Created [https://www.googleapis.com/compute/v1/projects/hwxex-1276/zones/us-central1-f/instances/hadoop-w-2].
INFO: Explict Display.
ssh: connect to host 104.197.193.135 port 22: Connection refused
ssh: connect to host 107.178.219.36 port 22: Connection refused
Warning: Permanently added '104.197.172.90' (RSA) to the list of known hosts.
Warning: Permanently added '130.211.116.182' (RSA) to the list of known hosts.
INFO: Display format "default".
Warning: Permanently added '104.197.193.135' (RSA) to the list of known hosts.
Warning: Permanently added '107.178.219.36' (RSA) to the list of known hosts.
INFO: Display format "default".
INFO: Display format "default".
Connection reset by 104.197.114.4 port 22
Connection reset by 104.197.172.90 port 22
Warning: Permanently added '104.197.114.4' (RSA) to the list of known hosts.
Warning: Permanently added '104.197.172.90' (RSA) to the list of known hosts.
INFO: Display format "default".
INFO: Display format "default".
Warning: Permanently added '104.197.172.90' (RSA) to the list of known hosts.
Warning: Permanently added '107.178.219.36' (RSA) to the list of known hosts.
Warning: Permanently added '130.211.116.182' (RSA) to the list of known hosts.
Warning: Permanently added '104.197.193.135' (RSA) to the list of known hosts.
Warning: Permanently added '104.197.114.4' (RSA) to the list of known hosts.
Connection to 107.178.219.36 closed.
INFO: Display format "default".
Connection to 104.197.193.135 closed.
INFO: Display format "default".
Connection to 104.197.114.4 closed.
INFO: Display format "default".
Connection to 130.211.116.182 closed.
INFO: Display format "default".
Connection to 104.197.172.90 closed.
INFO: Display format "default".
Warning: Permanently added '104.197.114.4' (RSA) to the list of known hosts.
Warning: Permanently added '107.178.219.36' (RSA) to the list of known hosts.
Warning: Permanently added '104.197.172.90' (RSA) to the list of known hosts.
Warning: Permanently added '104.197.193.135' (RSA) to the list of known hosts.
Warning: Permanently added '130.211.116.182' (RSA) to the list of known hosts.
Connection to 104.197.172.90 closed.
INFO: Display format "default".
Connection to 104.197.114.4 closed.
INFO: Display format "default".
Connection to 107.178.219.36 closed.
INFO: Display format "default".
Connection to 130.211.116.182 closed.
INFO: Display format "default".
Connection to 104.197.193.135 closed.
INFO: Display format "default".
Warning: Permanently added '104.197.193.135' (RSA) to the list of known hosts.
Connection to 104.197.193.135 closed.
INFO: Display format "default".
Warning: Permanently added '107.178.219.36' (RSA) to the list of known hosts.
Warning: Permanently added '104.197.114.4' (RSA) to the list of known hosts.
Warning: Permanently added '104.197.172.90' (RSA) to the list of known hosts.
Warning: Permanently added '130.211.116.182' (RSA) to the list of known hosts.
Warning: Permanently added '104.197.193.135' (RSA) to the list of known hosts.
Connection to 104.197.193.135 closed.
INFO: Display format "default".
Connection to 130.211.116.182 closed.
INFO: Display format "default".
Connection to 107.178.219.36 closed.
INFO: Display format "default".
Connection to 104.197.172.90 closed.
INFO: Display format "default".
Connection to 104.197.114.4 closed.
INFO: Display format "default".
Warning: Permanently added '104.197.193.135' (RSA) to the list of known hosts.
Connection to 104.197.193.135 closed.
************ ERROR logs from gcloud compute stderr ************
******************* Exit codes and VM logs *******************
Wed, Apr 13, 2016 09:15:11: Exited 255 : gcloud --project=hwxex-1276 --quiet --verbosity=info compute ssh hadoop-m --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-f
Wed, Apr 13, 2016 09:15:12: Exited 255 : gcloud --project=hwxex-1276 --quiet --verbosity=info compute ssh hadoop-w-0 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-f
Wed, Apr 13, 2016 09:15:33: Exited 255 : gcloud --project=hwxex-1276 --quiet --verbosity=info compute ssh hadoop-w-1 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-f
Wed, Apr 13, 2016 09:15:34: Exited 255 : gcloud --project=hwxex-1276 --quiet --verbosity=info compute ssh hadoop-w-2 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-f
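The four Exited 255 entries above are SSH readiness probes rather than deploy failures: ssh itself exits 255 when it cannot connect, which is expected while sshd on a freshly booted VM still refuses connections (the earlier "Connection refused" / "Connection reset" lines), and bdutil retries the probe until it exits 0. A hedged sketch of such a probe loop; the retry limit and sleep are illustrative, not bdutil's actual values:

    # Retry 'gcloud compute ssh' until the VM accepts connections.
    wait_for_ssh() {
      local vm="$1" zone="$2" attempts=0
      until gcloud --project=hwxex-1276 --quiet compute ssh "${vm}" \
            --command='exit 0' --zone="${zone}" \
            --ssh-flag=-oConnectTimeout=30; do
        attempts=$((attempts + 1))
        if (( attempts >= 20 )); then
          echo "${vm}: SSH never became reachable" >&2
          return 1
        fi
        sleep 10    # sshd usually starts within the first minute of boot
      done
    }

    for vm in hadoop-m hadoop-w-0 hadoop-w-1 hadoop-w-2 hadoop-w-3; do
      wait_for_ssh "${vm}" us-central1-f
    done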
Wed, Apr 13, 2016 09:29:34: Exited 1 : gcloud --project=hwxex-1276 --quiet --verbosity=info compute ssh hadoop-m --command=sudo su -l -c "cd ${PWD} && ./install-ambari-components.sh" 2>>install-ambari-components_deploy.stderr 1>>install-ambari-components_deploy.stdout --ssh-flag=-tt --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-f
hadoop-m: Wed, Apr 13, 2016 09:29:34: Running gcloud --project=hwxex-1276 --quiet --verbosity=info compute ssh hadoop-m --command=tail -vn 30 *.stderr --ssh-flag=-n --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-f
hadoop-m: ==> ambari-setup_deploy.stderr <==
hadoop-m: safe_format_and_mount: mke2fs 1.41.12 (17-May-2010)
hadoop-m: safe_format_and_mount: Discarding device blocks: 0/1048576 524288/1048576 done
hadoop-m: safe_format_and_mount: Filesystem label=
hadoop-m: safe_format_and_mount: OS type: Linux
hadoop-m: safe_format_and_mount: Block size=4096 (log=2)
hadoop-m: safe_format_and_mount: Fragment size=4096 (log=2)
hadoop-m: safe_format_and_mount: Stride=1 blocks, Stripe width=0 blocks
hadoop-m: safe_format_and_mount: 262144 inodes, 1048576 blocks
hadoop-m: safe_format_and_mount: 52428 blocks (5.00%) reserved for the super user
hadoop-m: safe_format_and_mount: First data block=0
hadoop-m: safe_format_and_mount: Maximum filesystem blocks=1073741824
hadoop-m: safe_format_and_mount: 32 block groups
hadoop-m: safe_format_and_mount: 32768 blocks per group, 32768 fragments per group
hadoop-m: safe_format_and_mount: 8192 inodes per group
hadoop-m: safe_format_and_mount: Superblock backups stored on blocks:
hadoop-m: safe_format_and_mount: 32768, 98304, 163840, 229376, 294912, 819200, 884736
hadoop-m: safe_format_and_mount:
hadoop-m: safe_format_and_mount: Writing inode tables: 0/32 1/32 2/32 ... 31/32 done
hadoop-m: safe_format_and_mount: Creating journal (32768 blocks): done
hadoop-m: safe_format_and_mount: Writing superblocks and filesystem accounting information: done
hadoop-m: safe_format_and_mount:
hadoop-m: safe_format_and_mount: This filesystem will be automatically checked every 23 mounts or
hadoop-m: safe_format_and_mount: 180 days, whichever comes first. Use tune2fs -c or -i to override.
hadoop-m: safe_format_and_mount: Running: mount -o discard,defaults /dev/disk/by-id/google-persistent-disk-1 /mnt/pd1
hadoop-m: which: no apt-get in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
hadoop-m: warning: rpmts_HdrFromFdno: Header V4 RSA/SHA1 Signature, key ID 07513cad: NOKEY
hadoop-m: Importing GPG key 0x07513CAD:
hadoop-m: Userid: "Jenkins (HDP Builds) <jenkin@hortonworks.com>"
hadoop-m: From : http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
hadoop-m: nohup: ignoring input
hadoop-m:
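The mke2fs output above comes from safe_format_and_mount, the Compute Engine helper that formats an attached persistent disk only on first use and then mounts it. Roughly equivalent steps, using the device and mount point from this log (the real script carries more safety checks):

    DEV=/dev/disk/by-id/google-persistent-disk-1
    MNT=/mnt/pd1

    # Format only when blkid finds no existing filesystem on the device,
    # so a re-run against a populated disk never destroys data.
    if ! blkid "${DEV}" >/dev/null 2>&1; then
      mkfs.ext4 -F "${DEV}"
    fi
    mkdir -p "${MNT}"
    mount -o discard,defaults "${DEV}" "${MNT}"    # matches the mount line above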
hadoop-m: ==> bootstrap.stderr <==
hadoop-m: Copying gs://hwxex-1276/bdutil-staging/hadoop-m/20160413-091354-i4y/* ...
hadoop-m: Downloading file://./public-hostname-gcloud.sh: 174 B/174 B
hadoop-m: Downloading file://./create_blueprint.py: 4.12 KiB/4.12 KiB
hadoop-m: Downloading file://./ambari_manual_env.sh: 3.69 KiB/3.69 KiB
hadoop-m: Downloading file://./deploy-ssh-master-setup.sh: 1.59 KiB/1.59 KiB
hadoop-m: Downloading file://./hadoop-env-setup.sh: 32.74 KiB/32.74 KiB
hadoop-m: Downloading file://./deploy-core-setup.sh: 28.77 KiB/28.77 KiB
hadoop-m: Downloading file://./deploy-master-nfs-setup.sh: 3.84 KiB/3.84 KiB
hadoop-m: Downloading file://./deploy-client-nfs-setup.sh: 1.53 KiB/1.53 KiB
hadoop-m: Downloading file://./deploy-ssh-worker-setup.sh: 1.37 KiB/1.37 KiB
hadoop-m: Downloading file://./install_connectors.sh: 6.19 KiB/6.19 KiB
hadoop-m: Downloading file://./deploy-start.sh: 1.27 KiB/1.27 KiB
hadoop-m: Downloading file://./deploy_start2.sh: 1.41 KiB/1.41 KiB
hadoop-m: Downloading file://./install-gcs-connector-on-ambari.sh: 1.47 KiB/1.47 KiB
hadoop-m: Downloading file://./update-ambari-config.sh: 3.3 KiB/3.3 KiB
hadoop-m: Downloading file://./ambari-setup.sh: 9.27 KiB/9.27 KiB
hadoop-m:
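The bootstrap.stderr block above shows bdutil staging its setup scripts onto the master: the scripts were first uploaded to a per-host, per-run GCS path, and the VM pulls them down with gsutil before executing them. Approximately, with the bucket and run id taken from this log (the exact invocation is an assumption):

    # Fetch the staged deployment scripts onto the VM.
    STAGING=gs://hwxex-1276/bdutil-staging/hadoop-m/20160413-091354-i4y
    gsutil cp "${STAGING}/*" .    # gsutil expands the gs:// wildcard itself
    chmod u+x ./*.sh              # bdutil then runs the stage scripts in order,
                                  # e.g. ambari-setup.sh ... install-ambari-components.sh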
hadoop-m: ==> deploy-client-nfs-setup_deploy.stderr <==
hadoop-m: which: no apt-get in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
hadoop-m: which: no apt-get in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
hadoop-m:
hadoop-m: ==> deploy-master-nfs-setup_deploy.stderr <==
hadoop-m: which: no systemctl in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
hadoop-m: which: no apt-get in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
hadoop-m:
hadoop-m: ==> install-ambari-components_deploy.stderr <==
hadoop-m: ambari_wait status: "IN_PROGRESS"+"PENDING"
[... same "IN_PROGRESS"+"PENDING" status line repeated 27 more times ...]
hadoop-m: ambari_wait status: "ABORTED"+"FAILED"+"PENDING"
hadoop-m: Ambari operiation failed with status: "ABORTED"+"FAILED"+"PENDING"
hadoop-m: .
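The repeated ambari_wait lines are bdutil polling the Ambari server's REST API after submitting the blueprint-based install: it aggregates the request_status of all outstanding requests and keeps looping while any are PENDING or IN_PROGRESS. Here the aggregate eventually turned to ABORTED/FAILED, so install-ambari-components.sh gave up and the deploy exited 1 (the Exited 1 entry at 09:29:34 above). A hedged sketch of that poll loop; the endpoint, credentials, and JSON scraping are illustrative, assuming Ambari's default port and admin account rather than anything shown in this log:

    AMBARI=http://localhost:8080/api/v1
    CLUSTER=hadoop    # illustrative cluster name, not taken from the log

    while true; do
      # Collect every request's status and join them, mirroring the log format.
      status=$(curl -s -u admin:admin \
            "${AMBARI}/clusters/${CLUSTER}/requests?fields=Requests/request_status" \
        | grep -o '"request_status" : "[A-Z_]*"' \
        | grep -o '"[A-Z_]*"$' | sort -u | paste -sd+ -)
      echo "ambari_wait status: ${status}" >&2
      case "${status}" in
        *FAILED*|*ABORTED*)
          echo "Ambari operation failed with status: ${status}" >&2
          exit 1 ;;
        '"COMPLETED"')
          break ;;    # every request finished cleanly
      esac
      sleep 30        # poll interval is illustrative
    done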