I hereby claim:
- I am nrvale0 on github.
- I am nrvale0 (https://keybase.io/nrvale0) on keybase.
- I have a public key whose fingerprint is 47E1 FCAD 2CB1 3B5F AB56 9447 466D 243F B57F 3F56
To claim this, I am signing this object:
echo 'hello there'
#!/bin/bash
# enable shell tracing when DEBUG is set in the environment
[ ! -n "$DEBUG" ] || set -x
set -ue

function onerr {
  echo 'Cleaning up after error...'
  exit 1
}
# run the cleanup handler on any command failure
trap onerr ERR
There are a couple of behaviors somewhat unique to aws_db_instance which might come into play here:
Changes to a DB instance can occur when you manually change a parameter, such as allocated_storage, and are reflected in the next maintenance window. Because of this, Terraform may report a difference in its planning phase because a modification has not yet taken place. You can use the apply_immediately flag to instruct the service to apply the change immediately (see documentation below).
So it would seem that changes to the security group/parameter group might not be instantaneous, depending on the configured "maintenance window".
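As a minimal sketch of what that looks like (the resource name and sizing below are hypothetical, not from your config), setting apply_immediately tells RDS to apply pending modifications right away instead of waiting for the maintenance window:

# Hypothetical example: apply_immediately forces pending modifications
# (e.g. an allocated_storage change) outside the maintenance window.
resource "aws_db_instance" "example" {
  identifier        = "example-db"
  engine            = "postgres"
  instance_class    = "db.t2.micro"
  allocated_storage = 20

  # Without this, changes may sit pending until the next maintenance
  # window, and Terraform will keep reporting a diff in its plan.
  apply_immediately = true
}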
Also, there's this:
$myaddr = $facts['networking']['interfaces']['eth0']['ip']
notice("Configuring Consul to listen on ${myaddr}...")

file { '/var/hashicorp':
  ensure => directory,
}

class { '::consul':
  config_hash => {
    # assumed from the notice above: bind Consul to the eth0 address
    'bind_addr' => $myaddr,
  },
}
Sushain,
The behavior you describe around the 0th instance is strange but, regardless, I think an element() expression is not the correct approach here. Mocking things up a bit...
$ mkdir -p /tmp/terraform && cd /tmp/terraform
$ cat << EOF > test.tf
variable "instances_as_output" {
default = "i-94834,i-98454,i-98342"
$ (vagrant up && vagrant destroy -f) > /tmp/log 2>&1
Bringing machine 'consul0' up with 'docker' provider...
Bringing machine 'vault0' up with 'docker' provider...
Bringing machine 'nomad-server0' up with 'docker' provider...
Bringing machine 'nomad-client0' up with 'docker' provider...
Bringing machine 'nomad-client1' up with 'docker' provider...
Bringing machine 'nomad-client2' up with 'docker' provider...
==> nomad-client2: Building the container from a Dockerfile...
nomad-client2: Sending build context to Docker daemon 152.6kB
nomad-client2: Step 1/18 : FROM python:3
This is the happy path: the node is ejected from the cluster in an orderly manner, and then kubectl is used to spawn a replacement Pod. Although not shown in the output, the new Pod has the same IP address as the old Pod. In this scenario, everything works as expected.
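For reference, a minimal sketch of the eject/replace steps (the node name and manifest below are hypothetical, not from the actual run):

$ kubectl drain mynode-0 --ignore-daemonsets   # orderly eviction of the node's Pods
$ kubectl delete pod mydb-0                    # remove the old Pod if it lingers
$ kubectl apply -f mydb.yaml                   # spawn the replacement Pod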
$ psql -W -h mydb-0.mydb -U arepuser -d mydb -c 'select * from bdr.bdr_nodes'
node_sysid | node_timeline | node_dboid | node_status | node_name | node_local_dsn | node_init_from_dsn | node_read_only
---------------------+---------------+------------+-------------+----------------------------+--------------------------------------------------------------------------+----------------------------------------------------------------------------+----------------
6395635580046348310 | 1 | 16387 | r | mydb-0-11ac0600-1489099948 | host=/var/run/postgresql dbname=mydb user=are
#!/usr/bin/env python3
# Scan setup: search for pattern A within bit-string S,
# tracking a cursor position in each.
S = '01000101010110011101010101'
N = len(S)   # length of the string being searched
posS = 0     # current position in S
A = '101'
k = len(A)   # length of the pattern
posA = 0     # current position in A
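The gist cuts off at this point. As a sketch of where it appears to be heading (this continuation is my assumption, not the original code), the paired posS/posA cursors suggest scanning S for A as a subsequence:

# Assumed continuation: advance through S, matching characters of A in
# order (subsequence match). Not part of the original gist.
while posS < N and posA < k:
    if S[posS] == A[posA]:
        posA += 1
    posS += 1

print('found' if posA == k else 'not found')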
#!/bin/sh
# send all stdout & stderr to rancherci-bootstrap.log
exec > /tmp/rancherci-bootstrap.log
exec 2>&1
set -uxe

###############################################################################
# figure out the OS family for our context
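The script is truncated here; a minimal sketch of how that detection might look, assuming /etc/os-release is available (this continuation is mine, not the original):

# Assumed continuation: derive the OS family from /etc/os-release.
if [ -r /etc/os-release ]; then
    . /etc/os-release
    case "${ID}" in
        ubuntu|debian) os_family='debian' ;;
        centos|rhel|fedora) os_family='redhat' ;;
        *) echo "unsupported OS: ${ID}" >&2; exit 1 ;;
    esac
fi
echo "OS family: ${os_family}"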