I hereby claim:
- I am phrawzty on github.
- I am phrawzty (https://keybase.io/phrawzty) on keybase.
- I have a public key whose fingerprint is 1204 786B D9B7 7FA0 618B 82A1 016F 9A65 192B FE03
To claim this, I am signing this object:
$ md5 boot2docker.iso
MD5 (boot2docker.iso) = e58b30593a2db15afc98f9a12293567d
$ md5 boot2docker
MD5 (boot2docker) = a73b6e6dec322393983ed34f640c31e5
$ ./docker version
Client version: 0.8.0
Go version (client): go1.2
Git commit (client): cc3a8c8
TEST: Mount disparate remote storage devices as if they were a single directory.
Ubuntu 14.04 "Daily cloud image" as obtained on 2014-07-31
Duplicati.CommandLine.exe backup \
    --passphrase=$PASSPHRASE \
    --aws_access_key_id=$AWS_KEY \
    --aws_secret_access_key=$AWS_SECRET \
    --s3-location-constraint=$LOC \
    --aes-encryption-dont-allow-fallback=true \
    $DIR \
    s3://$S3_BUCKET/$DIR
# Obviously the S3 DIR target can be tweaked; this is just a simple example.
#!/usr/bin/env python
import hashlib
import logging

import boto
import config
import happybase

logger = logging.getLogger(__name__)
#!/usr/bin/env bash

function techo {
    STAMP=`date '+%b %d %H:%M:%S'`
    echo "${STAMP} BOOTSTRAP: ${@}"
}

techo "start"

techo "install puppet yum repo"
rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
Managing multiple user accounts within the cloud-based Socorro infrastructure is a fool's errand; instead, the plan is to use a single login (role account) with multiple accepted SSH keys (one per user). These keys are managed from the Source of Truth and implanted during the node provisioning step.
In order to keep track of things, however, it will be helpful to tag the public SSH keys with an identifier of the user that possesses the associated private key. Normally this is what the "comment" field is for:
ssh-rsa <big_ol_key> [comment]
The issue here is that the "comment" section isn't exported, announced, or otherwise relevant at all from a system perspective. Instead, I propose adding a small environment variable that does the job:
environment="SSH_KEY=happyuser" ssh-rsa <big_ol_key> [comment]
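As a hypothetical illustration (the helper name and file layout are assumptions, not part of the proposal), an audit script could map tagged keys back to their owners by parsing the authorized_keys file. Note that sshd only exports the variable into the session if sshd_config sets "PermitUserEnvironment yes".

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the SSH_KEY tag for each tagged key in an
# authorized_keys file, so keys can be mapped back to users.
list_key_tags() {
    while read -r line; do
        case "$line" in
            environment=\"SSH_KEY=*)
                # Strip the prefix, then everything from the closing quote on.
                tag="${line#environment=\"SSH_KEY=}"
                echo "${tag%%\"*}"
                ;;
        esac
    done < "$1"
}
```

Untagged keys are simply skipped, so the helper degrades gracefully during a transition period.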
AWS ELBs have a series of "policies" which group different HTTPS (read: TLS and SSL) profiles together. It is possible that the "2011-08" policy would be appropriate for this purpose (this remains to be verified); otherwise we can define a custom policy that fits our needs. Unfortunately for us, these policies cannot currently be managed in Terraform, so this may end up being trickier than we'd first envisioned.
One possible workaround is to use local-exec to apply the policy manually, as suggested by t0m on IRC: http://paste.scsys.co.uk/488127
provisioner "local-exec" {
    command = "aws elb create-load-balancer-policy --region ${var.region} --profile ${var.account} --load-balancer-name ${aws_elb.extelb.name} --policy-name EnableProxyProtocol --policy-type-name ProxyProtocolPolicyType --policy-attributes AttributeName=ProxyProtocol,AttributeValue=True"
}
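Creating the policy does not by itself attach it to a backend port; a second local-exec along these lines could follow (a sketch only; the instance port 8080 is an assumed value, and `set-load-balancer-policies-for-backend-server` is the standard classic-ELB CLI call for this):

```hcl
provisioner "local-exec" {
    # Hypothetical follow-up: attach the policy to the assumed backend port.
    command = "aws elb set-load-balancer-policies-for-backend-server --region ${var.region} --profile ${var.account} --load-balancer-name ${aws_elb.extelb.name} --instance-port 8080 --policy-names EnableProxyProtocol"
}
```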
variable "environment" {}
variable "access_key" {}
variable "secret_key" {}
variable "secret_bucket" {}
variable "subnets" {}
variable "collector_cert" {
    default = {
        prod = ""
        stage = ""
    }
}
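The per-environment map pattern above is typically consumed with lookup(); as a sketch (the resource and listener values here are illustrative assumptions, not taken from the real config):

```hcl
resource "aws_elb" "extelb" {
    # ... other arguments elided ...
    listener {
        instance_port      = 80
        instance_protocol  = "http"
        lb_port            = 443
        lb_protocol        = "https"
        # Select the certificate ARN for the current environment.
        ssl_certificate_id = "${lookup(var.collector_cert, var.environment)}"
    }
}
```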