Redis cluster using Docker
Quick instructions for setting up a Redis cluster on a local machine for testing. I am going to use Bitnami Docker images.
Instructions
- Start the Redis cluster with one master and one slave:
docker-compose up -d
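The `docker-compose up -d` command above needs a `docker-compose.yml` in the working directory. A minimal sketch using the Bitnami Redis image; the service names and password here are assumptions, not part of the original:

```yaml
# Illustrative sketch only: service names and REDIS_PASSWORD are placeholders.
version: '3'
services:
  redis-master:
    image: bitnami/redis:latest
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=my_password
    ports:
      - '6379:6379'
  redis-slave:
    image: bitnami/redis:latest
    depends_on:
      - redis-master
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PASSWORD=my_password
      - REDIS_PASSWORD=my_password
```

The `REDIS_REPLICATION_MODE`, `REDIS_MASTER_HOST`, and password variables are the Bitnami image's documented replication settings; the slave discovers the master by its compose service name.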
2020/06/05 17:01:13 [WARN] Provider "registry.terraform.io/-/aws" produced an invalid plan for aws_batch_job_definition.batch-job-definition, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
batch-job-role\",\"memory\":4096,\"mountPoints\":[],\"resourceRequirements\":[],\"ulimits\":[],\"vcpus\":1,\"volumes\":[]}")
2020/06/05 17:01:13 [INFO] backend/local: plan operation completed

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  -/+ destroy and then create replacement

Terraform will perform the following actions:
'use strict';

const BbPromise = require('bluebird');
const _ = require('lodash');
const path = require('path');
const archiver = require('archiver');
const fs = require('fs');
const glob = require('glob');
const semver = require('semver');
aliases:
  - &restore_gem_cache
    keys:
      - v1-gemfile-{{ checksum "Gemfile.lock" }}
  - &save_gem_cache
    name: Saving gem cache
    key: v1-gemfile-{{ checksum "Gemfile.lock" }}
    paths:
      - ~/data/vendor/bundle
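The anchors above can then be referenced from a job's steps. A sketch of the usual pattern; the step names and `bundle install` path here are assumptions:

```yaml
# Illustrative usage of the aliases defined above.
steps:
  - restore_cache: *restore_gem_cache
  - run: bundle install --path ~/data/vendor/bundle
  - save_cache: *save_gem_cache
```

YAML aliases (`*name`) expand to the anchored mapping, so the cache keys and paths are defined once and reused across jobs.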
When you want to order Elasticsearch documents in the same order as the ids you provided in an [Ids][1] query, the following `function_score` query can be used. The id values here are illustrative, and the script (one common approach, assuming the `_id` field is accessible to scripts) scores each document by the position of its id in the supplied list:

{
  "query": {
    "function_score": {
      "query": { "ids": { "values": ["3", "1", "2"] } },
      "script_score": {
        "script": {
          "params": { "ids": ["3", "1", "2"] },
          "source": "params.ids.size() - params.ids.indexOf(doc['_id'].value)"
        }
      },
      "boost_mode": "replace"
    }
  }
}
#!/usr/bin/env bash
set -e

# Formats any staged *.tf files according to the HashiCorp convention
files=$(git diff --cached --name-only)
for f in $files
do
  if [ -e "$f" ] && [[ $f == *.tf ]]; then
    #terraform validate `dirname $f`
    terraform fmt "$f"
  fi
done
Every so often I have to restore my gpg keys and I'm never sure how best to do it. So, I've spent some time playing around with the various ways to export/import (backup/restore) keys.
To back up the keyrings (this is the GnuPG 1.x/2.0 file layout; GnuPG 2.1+ stores keys in pubring.kbx and private-keys-v1.d/ instead):

cp ~/.gnupg/pubring.gpg /path/to/backups/
cp ~/.gnupg/secring.gpg /path/to/backups/
cp ~/.gnupg/trustdb.gpg /path/to/backups/
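Restoring is the reverse copy. A minimal sketch wrapped in a function; the default paths are placeholders, so pass your real backup and GnuPG home directories:

```shell
#!/usr/bin/env bash
set -e

# Copy previously backed-up keyring files back into a GnuPG home directory.
# Both arguments are placeholders; point them at your real paths.
restore_gnupg() {
  local backup_dir="$1" gnupg_home="$2"
  mkdir -p "$gnupg_home"
  chmod 700 "$gnupg_home"   # gpg warns about a home directory with unsafe permissions
  local f
  for f in pubring.gpg secring.gpg trustdb.gpg; do
    cp "$backup_dir/$f" "$gnupg_home/"
  done
}

# Example (placeholder paths):
# restore_gnupg /path/to/backups "$HOME/.gnupg"
```

Since trustdb.gpg is copied back verbatim, the restored keys keep their trust levels without a separate `--import-ownertrust` step.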
import java.security.Key;
import java.util.Properties;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

import org.slf4j.Logger;
require 'fileutils'

OUT_DIR = 'public'

desc 'Prepare and upload the static site to S3'
task :upload, [:name] do |t, args|
  raise Exception.new('You must provide the name of the site to upload to, e.g., be rake upload[www]') unless args[:name]

  puts 'Removing existing output directory'
  FileUtils.rm_rf OUT_DIR if File.exist?(OUT_DIR)
end