$ aws --region=ap-northeast-2 ec2 describe-spot-price-history --instance-types c4.large --start-time=$(date +%s) --product-descriptions="Linux/UNIX" --query 'SpotPriceHistory[*].{az:AvailabilityZone, price:SpotPrice}'
[
    {
        "price": "0.024900",
        "az": "ap-northeast-2a"
    },
    {
<?php
/**
 * UUID class
 *
 * The following class generates VALID RFC 4122 COMPLIANT
 * Universally Unique IDentifiers (UUID) version 3, 4 and 5.
 *
 * Generated UUIDs validate using the OSSP UUID Tool, and the output
 * for name-based UUIDs is exactly the same. This is a pure
 * PHP implementation.
This snippet is a sample showing how to implement CloudWatch Logs streaming to Elasticsearch using Terraform.
I wrote this gist because I couldn't find a clear, end-to-end example of how to achieve this task. In particular,
I understood the resource "aws_lambda_permission" "cloudwatch_allow"
part by reading a couple of bug reports plus
this Stack Overflow post.
The .js file is actually the Lambda function automatically created by AWS when creating this pipeline through the
web console; I only added an endpoint
variable so it is configurable from Terraform.
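For context, a minimal sketch of what that resource can look like; the function and log group references (logs_to_es, app) and the region in the principal are illustrative assumptions, not taken from the gist:

resource "aws_lambda_permission" "cloudwatch_allow" {
  statement_id  = "AllowExecutionFromCloudWatchLogs"
  action        = "lambda:InvokeFunction"
  # Illustrative reference to the streaming Lambda created for this pipeline.
  function_name = aws_lambda_function.logs_to_es.function_name
  # CloudWatch Logs invokes the function through a regional service principal.
  principal     = "logs.us-east-1.amazonaws.com"
  # Limit the grant to subscriptions on this log group (illustrative name).
  source_arn    = aws_cloudwatch_log_group.app.arn
}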
#!/bin/bash
# This script creates a docker config.json file with the auth section
# as an example of what can be passed into GitLab-CI and used in
# conjunction with DOCKER_CONFIG - the config file directory location.

# command line parameter default values
DOCKER_REGISTRY=""
DOCKER_USER=""
DOCKER_PASSWORD=""
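The preview stops at the defaults, but the file format docker expects is straightforward: config.json holds an auths map whose auth field is the base64 of user:password. A minimal sketch of that core step (not the original script body), assuming the variables above have been populated and DOCKER_CONFIG points at the target directory:

# Sketch: docker reads "auth" as base64("user:password").
# tr strips the newline/wrapping some base64 implementations add.
AUTH="$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASSWORD" | base64 | tr -d '\n')"
mkdir -p "$DOCKER_CONFIG"
cat > "$DOCKER_CONFIG/config.json" <<EOF
{
  "auths": {
    "$DOCKER_REGISTRY": {
      "auth": "$AUTH"
    }
  }
}
EOF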
# Add this snippet to the top of your playbook.
# It will install python2 if missing (but checks first, so no expensive repeated apt updates)
# gwillem@gmail.com

- hosts: all
  gather_facts: False
  tasks:
  - name: install python 2
    raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
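Because gather_facts: False skips fact collection, any later task that needs facts should gather them explicitly once python is in place. A common follow-up task (an illustrative addition, not part of the original snippet):

  # Gather facts now that python exists (illustrative follow-up task).
  - name: gather facts
    setup: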
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions\monokai]
"Colour21"="255,255,255"
"Colour20"="245,222,179"
"Colour19"="200,240,240"
"Colour18"="0,217,217"
"Colour17"="179,146,239"
"Colour16"="174,129,255"
"Colour15"="122,204,218"
# Login to AWS registry (must have docker running)
docker-login:
	$$(aws ecr get-login --no-include-email --region us-east-1 --profile=mycompany)

# Build docker target
docker-build:
	docker build -f Dockerfile --no-cache -t mycompany/myapp .

# Tag docker image
docker-tag:
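The docker-tag recipe is cut off in this preview. A hypothetical completion (the REGISTRY value is a placeholder, not from the original) would tag the image just built for the registry that docker-login authenticates against:

# REGISTRY is a placeholder account/region, not taken from the original Makefile.
REGISTRY = 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker-tag:
	docker tag mycompany/myapp $(REGISTRY)/mycompany/myapp:latest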
Protects a container instance that has containers running from scale-in. Uses the aws-cli set-instance-protection command. Inspired by: https://stackoverflow.com/questions/45020323/ecs-asg-scaling-down-policy-recommendations
It ignores ecs-agent and dd-agent when counting running containers; you can add more in containers_running
in the script below.
- awk
- awscli
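Since the script body is not shown in this preview, here is a minimal sketch of the approach it describes, assuming INSTANCE_ID and ASG_NAME are resolved elsewhere (e.g. from instance metadata); both names are illustrative:

#!/bin/bash
# Count running containers, ignoring the ECS and Datadog agents.
containers_running=$(docker ps --format '{{.Names}}' | awk '!/ecs-agent|dd-agent/' | wc -l)

# Protect the instance from ASG scale-in while tasks are running,
# and lift the protection once the instance is idle.
if [ "$containers_running" -gt 0 ]; then
  aws autoscaling set-instance-protection --instance-ids "$INSTANCE_ID" \
    --auto-scaling-group-name "$ASG_NAME" --protected-from-scale-in
else
  aws autoscaling set-instance-protection --instance-ids "$INSTANCE_ID" \
    --auto-scaling-group-name "$ASG_NAME" --no-protected-from-scale-in
fi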
{% block collection_widget %}
{% spaceless %}
    <div class="collection">
        {% if prototype is defined %}
            {% set attr = attr|merge({'data-prototype': block('collection_item_widget') }) %}
        {% endif %}
        <div {{ block('widget_container_attributes') }}>
            {{ form_errors(form) }}
            <ul>
                {% for rows in form %}
#!/usr/bin/env python
# -*- coding: utf-8 -*-

def archive_to_bytes(archive):
    def to_seconds(s):
        SECONDS_IN_A = {
            's': 1,
            'm': 1 * 60,
            'h': 1 * 60 * 60,