Brian Stack (imbstack)

@imbstack
imbstack / gist:1201901
Created September 7, 2011 21:52
Basic Bash Intro

Introduction to Interacting with the Shell

for the CWRU hacker society

  • being able to use the Unix shell is an important skill
  • it is also genuinely helpful to know, because
    • it provides a consistent way to interact with all of your tools
    • it makes automation easy
    • being able to pipe and redirect output (defined later!) is immensely powerful (see the example below)
    • job interviews will often include shell questions
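
To make the piping and redirection point concrete, here is a minimal sketch (the file names are made up): grep's matches are piped through sort and uniq, and the final output is redirected into a file.

# count distinct error lines in a log (hypothetical file names)
grep -i error app.log | sort | uniq -c > error_counts.txt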
@imbstack
imbstack / README.md
Created September 15, 2012 02:55
test of line rendering for firefly

Issue 34 on Firefly may be caused by us passing a strange [null, value, null, value, ...] sequence of values for each data source to d3. This gist tests that hypothesis; the sketch below illustrates the data shape in question.
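
As an illustration (the array contents here are made up), the suspect per-source series looks like raw below; the test is whether d3 behaves correctly once the interleaved nulls are filtered out.

// Hypothetical sketch of the suspect shape passed for each data source.
let raw = [null, 3.2, null, 7.1, null, 2.4];

// Filtering the interleaved nulls out gives the shape d3 presumably expects.
let cleaned = raw.filter(v => v !== null); // [3.2, 7.1, 2.4]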

@imbstack
imbstack / wordsearch.py
Created November 24, 2015 02:50
wordsearch solver
#!/usr/bin/env python3

def find(puzzle, words, direction):
    """Count and print the words that appear in this rendering of the puzzle."""
    count = 0
    for line in puzzle:
        for word in words:
            if word in line:
                count += 1
                print(direction, word)
    return count
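
A minimal usage sketch (the puzzle and word list here are made up): a word search is normally scanned in several renderings of the grid, calling find once per direction.

puzzle = ["catxq", "odogs", "vwxyz"]
words = ["cat", "dog"]
found = find(puzzle, words, "across")  # prints "across cat" and "across dog"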

Keybase proof

I hereby claim:

  • I am imbstack on github.
  • I am imbstack (https://keybase.io/imbstack) on keybase.
  • I have a public key whose fingerprint is DE31 5A20 A083 3559 A3FC EF2E 9EE9 8FE0 1284 997C

To claim this, I am signing this object:

@imbstack
imbstack / index.js
Last active November 15, 2016 21:22
Finding Old Try Artifacts (list of artifacts: https://keybase.pub/imbstack/artifacts.gz)
require('babel-register');
let script = require('./script');
let fs = require('fs');
let _ = require('lodash');
// Each of these "sections" was run in order, with the previous ones commented out.
// I know this is super gross, but that's how my brain works!
// The script code is in the next file here.
// The "completed" variables are unimportant other than letting me start again from where I left off
// when something breaks. They are populated by redirecting error output to a log file.
@imbstack
imbstack / update.js
Created February 17, 2017 19:46
Update Taskcluster Azure Access Scopes
let Promise = require('promise');
let _ = require('lodash');
let taskcluster = require('taskcluster-client');
let auth = new taskcluster.Auth({
  credentials: {
    clientId: '...',
    accessToken: '...',
  },
});
@imbstack
imbstack / Workflow.md
Created September 29, 2017 18:35
Workflow

You'll have three repositories:

  1. Your local one checked out on your computer
  2. Your fork in github
  3. The "canonical" repo (this is just the main one under the taskcluster org; I call it canon)

On your local repo, you should have two remotes:

  1. origin: which points to your fork (e.g. git@github.com:imbstack/ec2-manager.git)
  2. canon: which points to the canonical repo (e.g. https://github.com/taskcluster/ec2-manager.git)

Notice that my origin is set up with SSH so I can push to it more easily, and canon is HTTPS, which I think is the sort of access you will have. The commands below sketch this setup.
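
Assuming the example repositories above, the standard git commands for this layout are:

git clone git@github.com:imbstack/ec2-manager.git
cd ec2-manager
git remote add canon https://github.com/taskcluster/ec2-manager.git
git fetch canon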

taskcluster-units.json

{
  "version": 0,
  "title": "Unit Tests",
  "description": "...",
  "units": {
    "set allowed key (twice)": {"format": "tap", "taskid": "<task_id>", "artifact": "report.tap"},
    "set disallowed key": {"format": "tap", "taskid": "<task_id>", "artifact": "report.tap"},
@imbstack
imbstack / .taskcluster.yml
Last active August 2, 2018 22:04
Example version-control-tools .taskcluster.yml
version: 1
tasks:
  - metadata:
      name: "version-control-tools Tests"
      description: "Builds the test environment and runs tests (without Docker)"
      owner: ${push.owner}
      source: ${repository.url}
    taskId: '${as_slugid("decision")}'
    taskGroupId: '${as_slugid("decision")}'
    schedulerId: '${repository.level}'
@imbstack
imbstack / 0_Explanation.md
Last active April 4, 2019 05:46
Worker Manager Configuration

Worker Manager Plan

The following is my thinking on how we can strike a good balance between sharing configuration and keeping it understandable.

One provider per workertype

Unfortunately, I think the current plan of having the provisioner compare bids across clouds is too aggressive for now. The time and money units that different clouds expose aren't easily comparable (and in GCP's case, it is hard to get that information in an understandable way at all). We can still allow bidding within a provider, but each workertype will need to specify which provider is responsible for creating its workers.

This has the side benefit of considerably simplifying both the provisioning loop and the provider interfaces. Each provider will still need to expose a standardised way to list/terminate/etc. its instances, but that should be doable. A hypothetical sketch of such a configuration follows.
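
As a purely illustrative sketch (the field names are made up, not a final schema), each workertype's configuration would name exactly one provider:

workertype: gecko-t-linux-large
provider: ec2            # the single provider responsible for this workertype's instances
config:
  minCapacity: 0
  maxCapacity: 50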