Reference: How to conduct a full code review on GitHub
./review.sh [prefix-]
{
  "exclude_repos": ["oh_my_fortune", "gotgit", "GithubBotRepo", "dlpkuhole2"],
  "exclude_terms": ["Garbage Man", "roomservice", "fluent in silence"],
  "queries": [
    "i wish",
    "i just wish",
    "i really wish",
    "i seriously wish",
    "i seriously hope",
    "i really hope"
  ]
}
Hi all,

At Harvard DCE we've been developing a few strategies for automated horizontal scaling of workers. Some involve analysis of AWS CloudWatch metrics; others look at data from Matterhorn itself, e.g. queued jobs. All of them, to a greater or lesser degree, are hindered by the behavior of the job dispatching mechanisms. I'll give a quick example of a typical scenario and why it causes problems for both types of strategies.

Say we have a cluster with 10 workers at its disposal but only 1 currently online. A burst of producer activity creates a bunch of new workflows, and quickly a dozen media inspection and compose jobs are generated. What we would expect is that the `max_jobs: n` setting on the worker host would prevent more than n of those jobs from being assigned to the single worker, with the rest put into some sort of queued state. What actually happens is that the 1 worker happily accepts all of them, because the dispatching logic in `matterhorn-serviceregistry/.../ServiceRegistryJpaIm`
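The queue-depth heuristic such a scaling strategy relies on can be sketched roughly as follows. All names here are illustrative, not a real Matterhorn API; the queued-job count would come from the service registry, and the comment at the end shows why the dispatching behavior above defeats the heuristic:

```python
# Minimal sketch of a queue-depth scaling heuristic (hypothetical names).
MAX_JOBS_PER_WORKER = 4  # mirrors the worker host's `max_jobs: n` setting


def desired_worker_count(queued_jobs, online_workers):
    """Target enough workers to drain the queue without any worker
    exceeding MAX_JOBS_PER_WORKER. Never scale below 1 worker."""
    needed = -(-queued_jobs // MAX_JOBS_PER_WORKER)  # ceiling division
    return max(online_workers, needed, 1)


# Intended behavior: 12 queued jobs, 1 worker online -> scale to 3 workers.
# Actual behavior in the scenario above: the lone worker accepts all 12
# jobs, the registry reports 0 queued, and the heuristic never scales up.
```

This is why both the CloudWatch-based and the queued-jobs-based strategies stall: the signal they watch for (a backlog) never appears.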
import sys
import time
import boto3
import logging
import argparse
from faker import Factory
from random import choice, shuffle
from multiprocessing import Process, JoinableQueue
from mysql.connector import connect
Starting storage, including a deploy
Starting db, including a deploy
Waiting on "bash /usr/bin/vagrant up storage" to finish . . .
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["setextradata", "07be0611-9249-468c-b3ee-96b6b6894db8", "VBoxInternal2/SharedFoldersEnableSymlinksCreate/tmp_vagrant-opsworks", "1"]
Stderr: VBoxManage: error: The machine 'mh-opsworks_storage_1459356738934_15160' already has a lock request pending
VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component MachineWrap, interface IMachine, callee nsISupports
// put this in your custom json
"elk": {
  "kibana_auth": {
    "user": "foo",
    "pass": "bar"
  },
  "harvester_release": "v0.1.0"
}
import os
import boto3
from yaml import safe_load  # safe_load avoids executing arbitrary YAML tags

os.environ.setdefault('AWS_DEFAULT_REGION', 'us-east-1')

if __name__ == '__main__':
    with open('/etc/aws/opsworks/instance-agent.yml') as f:
        agent_config = safe_load(f)
// Use Gists to store code you would like to remember later on | |
console.log(window); // log the "window" object to the console |
* {
  font-family: monospace; /* unquoted: the generic keyword, not a family named "monospace" */
}

a.name {
  color: red;
}