To achieve this, I built a cluster of hosts using Amazon's EC2 Container Service (ECS) and run multiple standalone OpenVPN containers on each host in an Auto Scaling group, ensuring high availability. An Elastic Load Balancer routes traffic to the hosts, which then forward it to the appropriate containers. I mounted a shared volume on both of my ECS nodes, backed by Elastic File System, to store the configuration files for my OpenVPN containers, so no data is duplicated between the various containers. And... is it fast? You betcha! From the moment ECS detects that one of the containers has failed, a replacement is up and running within 5 seconds.
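As a rough sketch of how the shared configuration volume is wired up (the filesystem ID, region, and mount point below are illustrative placeholders, not the values from my setup), each ECS host mounts the EFS filesystem over NFSv4.1:

```shell
# Mount an EFS filesystem on an ECS host so every OpenVPN container can share
# one copy of the config data. fs-12345678 and us-east-1 are placeholder values.
sudo mkdir -p /mnt/efs/openvpn
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs/openvpn
```

Each OpenVPN container then bind-mounts a path under /mnt/efs/openvpn, so a replacement container started by ECS sees the same configuration immediately.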
#! python3
# downloadXkcd.py - Downloads every single XKCD comic.
# source: Automate the Boring Stuff with Python
import requests, os, bs4
url = 'http://xkcd.com'  # starting url
os.makedirs('xkcd', exist_ok=True)  # store comics in ./xkcd
while not url.endswith('#'):
    # Download the page.
    print('Downloading page %s...' % url)
    res = requests.get(url)
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    # Save the comic image into ./xkcd (some pages have no static image).
    comicElem = soup.select('#comic img')
    if comicElem != []:
        comicUrl = 'http:' + comicElem[0].get('src')
        with open(os.path.join('xkcd', os.path.basename(comicUrl)), 'wb') as imageFile:
            imageFile.write(requests.get(comicUrl).content)
    # Follow the Prev button to the previous comic.
    url = 'http://xkcd.com' + soup.select('a[rel="prev"]')[0].get('href')
print('Done.')
#!/bin/bash
# PLACEHOLDERS
# [STAGING_FOLDER] - staging directory on your server
# [STAGING_URL] - staging URL
# [STAGING_USER] - staging user on the server
# [STAGING_MYSQLUSER] - staging MySQL user
# [STAGING_MYSQLPASSWORD] - staging MySQL password
# [ROOTUSER] - MySQL root user
# [ROOTPASSWORD] - MySQL root password
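To make the placeholders concrete, here is a hedged sketch of the kind of commands such a staging script might run (the database names are hypothetical; the bracketed tokens are the placeholders defined above and must be replaced before running):

```shell
# Clone the production database into staging using the placeholder credentials.
# production_db and staging_db are hypothetical database names.
mysqldump -u [ROOTUSER] -p'[ROOTPASSWORD]' production_db > /tmp/production.sql
mysql -u [STAGING_MYSQLUSER] -p'[STAGING_MYSQLPASSWORD]' staging_db < /tmp/production.sql
```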
/**
* Authentication
* @namespace thinkster.authentication.services
* Make a file in static/javascripts/authentication/services/ called authentication.service.js
*/
(function () {
  'use strict';

  angular
    .module('thinkster.authentication.services')
    .factory('Authentication', Authentication);

  function Authentication() {
    // The service's methods are added here in later steps.
    return {};
  }
})();
Dear team,
I went through our AWS account, specifically the two instances xxx.xxx.com and ooo.ooo.com.
Here's what I noticed and what can be done to reduce the cost.
First:
For "xxx.xxx.com":
Instance type: On-Demand c4.large
We pay approximately $75/month for it.
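That figure lines up with a back-of-envelope check, assuming an on-demand rate of roughly $0.10/hour for a c4.large (the exact rate depends on region):

```shell
# ~730 hours in a month × assumed $0.10/hour on-demand rate
awk 'BEGIN { printf "$%.0f/month\n", 0.10 * 730 }'   # prints $73/month
```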
The deployment state of the Kubernetes cluster is stored in etcd. If you're concerned about backing up this information, you should look into backing up the etcd data directory for each etcd instance in your cluster. This can be done via an etcd-based backup strategy, or via snapshotting the underlying block device that backs the etcd data directory. Backing up Kubernetes clusters is not the purpose of this document.
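For example, with the v3 etcdctl client, a consistent snapshot of a single etcd member can be taken like this (the endpoint, TLS flags, and output path are illustrative and must match your cluster):

```shell
# Take a snapshot of one etcd member; add --cacert/--cert/--key flags
# if your etcd endpoints require client TLS.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  snapshot save /var/backups/etcd-snapshot.db
```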
This document's primary purpose is to show how to migrate the deployment state from one Kubernetes cluster to another. The clusters may have different versions, pod/service network CIDRs, numbers of nodes, and so on.
For the remainder of this document, the cluster that is being dumped will be referred to as the source cluster. The cluster that is being restored to will be called the target cluster. The goal is to migrate state from the source cluster to the target cluster.
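One simple way to capture the source cluster's API objects for such a migration is with kubectl. This is a sketch, not a complete migration tool (cluster-scoped objects, secrets, and persistent data need separate handling), and the context names are hypothetical:

```shell
# Dump all namespaced API objects from the source cluster as YAML...
kubectl --context source-cluster get all --all-namespaces -o yaml > cluster-state.yaml
# ...then recreate them in the target cluster.
kubectl --context target-cluster apply -f cluster-state.yaml
```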
I've been using the Fish shell for years, which is great and all, but one thing that has frustrated me is using it with .env files.
When I try to run "source .env" in a project, I usually hit this error:
.env (line 2): Unsupported use of '='. In fish, please use 'set KEY value'.
from sourcing file .env
source: Error while reading file '.env'
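One workaround is to let a POSIX shell parse the file and then hand the resulting environment over to fish. A sketch, assuming your .env contains plain KEY=value lines:

```shell
# 'set -a' makes bash auto-export every variable defined by 'source .env',
# and 'exec fish' replaces bash with a fish shell that inherits that environment.
bash -c 'set -a; source .env; exec fish'
```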
---
AWSTemplateFormatVersion: '2010-09-09'
Description: some-sftp-server
Parameters:
  HostedZoneIdParam:
    Type: String
    Description: Hosted Zone ID
  SFTPHostnameParam:
    Type: String