This is a hands-on way to pull a set of MySQL dumps down from Amazon S3 and restore your database from them.
Sister document: Backup MySQL to Amazon S3 (read that first).
# Set our variables
export mysqlpass="ROOTPASSWORD"
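With that variable set, a minimal restore sketch might look like the following. The bucket and dump file names are hypothetical, and it assumes the aws CLI is installed and configured with access to the bucket:

```shell
export mysqlpass="ROOTPASSWORD"
export bucket="my-mysql-backups"       # hypothetical bucket name
export dumpfile="alldatabases.sql.gz"  # hypothetical dump file name

# Pull the dump down from S3
aws s3 cp "s3://$bucket/$dumpfile" .

# Feed the decompressed dump back into MySQL as root
gunzip < "$dumpfile" | mysql -u root -p"$mysqlpass"
```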
import java.math.BigInteger;

public final class IbanTest {
    public static final int IBANNUMBER_MIN_SIZE = 15;
    public static final int IBANNUMBER_MAX_SIZE = 34;
    public static final BigInteger IBANNUMBER_MAGIC_NUMBER = new BigInteger("97");

    // Standard mod-97 check: move the first four characters to the end, expand
    // letters to digits (A=10 .. Z=35), and test that the number mod 97 equals 1.
    public static boolean ibanTest(String accountNumber) {
        String newAccountNumber = accountNumber.trim();
        if (newAccountNumber.length() < IBANNUMBER_MIN_SIZE || newAccountNumber.length() > IBANNUMBER_MAX_SIZE) return false;
        newAccountNumber = newAccountNumber.substring(4) + newAccountNumber.substring(0, 4);
        StringBuilder numeric = new StringBuilder();
        for (char c : newAccountNumber.toCharArray()) numeric.append(Character.getNumericValue(c));
        return new BigInteger(numeric.toString()).mod(IBANNUMBER_MAGIC_NUMBER).intValue() == 1;
    }
}
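As a self-contained sketch of the mod-97 IBAN check this fragment is building toward (the class and method names here are illustrative, and GB82WEST12345698765432 is the commonly cited example IBAN):

```java
import java.math.BigInteger;

public class IbanCheckDemo {
    // Mod-97 IBAN check: move the first four characters to the end, expand
    // letters to digits (A=10 .. Z=35), and test that the number mod 97 is 1.
    static boolean isValidIban(String iban) {
        String s = iban.trim();
        if (s.length() < 15 || s.length() > 34) return false;
        s = s.substring(4) + s.substring(0, 4);
        StringBuilder digits = new StringBuilder();
        for (char c : s.toCharArray()) digits.append(Character.getNumericValue(c));
        return new BigInteger(digits.toString()).mod(BigInteger.valueOf(97)).intValue() == 1;
    }

    public static void main(String[] args) {
        System.out.println(isValidIban("GB82WEST12345698765432")); // valid example IBAN
        System.out.println(isValidIban("GB82WEST12345698765431")); // altered digit fails
    }
}
```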
(by @andrestaltz)
If you prefer to watch video tutorials with live coding, check out this series I recorded, which covers the same content as this article: Egghead.io - Introduction to Reactive Programming.
// BSD License (http://lemurproject.org/galago-license)
package org.lemurproject.galago.utility.json;

public class JSONUtil {
  public static String escape(String input) {
    StringBuilder output = new StringBuilder();
    for (int i = 0; i < input.length(); i++) {
      char ch = input.charAt(i);
      int chx = (int) ch;
      // Escape backslash, double quote, and control characters for JSON output.
      if (ch == '\\' || ch == '"') output.append('\\').append(ch);
      else if (chx < 0x20) output.append(String.format("\\u%04x", chx));
      else output.append(ch);
    }
    return output.toString();
  }
}
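A self-contained sketch of this escaping approach follows. The class name and the exact set of escape choices are illustrative, not Galago's actual implementation:

```java
public class JsonEscapeDemo {
    // Escape backslash, double quote, and ASCII control characters so the
    // result is safe inside a JSON string literal.
    static String escape(String input) {
        StringBuilder output = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char ch = input.charAt(i);
            if (ch == '\\' || ch == '"') output.append('\\').append(ch);
            else if (ch < 0x20) output.append(String.format("\\u%04x", (int) ch));
            else output.append(ch);
        }
        return output.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("line1\nline2 \"quoted\"")); // prints line1\u000aline2 \"quoted\"
    }
}
```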
On Tue Oct 27, 2015, history.state.gov began buckling under load, intermittently issuing 500 errors. Nginx's error log was sprinkled with the following errors:
2015/10/27 21:48:36 [crit] 2475#0: accept4() failed (24: Too many open files)
2015/10/27 21:48:36 [alert] 2475#0: *7163915 socket() failed (24: Too many open files) while connecting to upstream...
An article at http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ provided directions that mostly worked. Below are the steps we followed. The steps that diverged from the article's directions are marked with an *.
Instead of using su to run ulimit as the nginx user, use ps aux | grep nginx to locate nginx's process IDs, then query each process's file handle limits with cat /proc/pid/limits (where pid is a process ID retrieved from ps). (Note: sudo may be necessary for the cat command, depending on your system.)
Add fs.file-max = 70000 to /etc/sysctl.conf.

Picking the right architecture = Picking the right battles + Managing trade-offs
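On a Linux host, the diagnosis steps above can be sketched as follows; the grep pattern and the use of /proc/self are illustrative, so substitute a real nginx PID when inspecting a live server:

```shell
# Locate nginx's process IDs; no output simply means nginx isn't running here
ps aux | grep '[n]ginx' || true

# Query a process's file handle limits; /proc/self stands in for an nginx PID
grep 'open files' /proc/self/limits

# To raise the system-wide ceiling, as root:
#   echo 'fs.file-max = 70000' >> /etc/sysctl.conf
#   sysctl -p
```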
/*
 * See LICENSE for licensing and NOTICE for copyright.
 */
package edu.vt.middleware.app;

import java.io.File;
import java.security.*;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;