- being able to use the unix shell is an important and genuinely useful skill
- it provides a consistent way to interact with all of your tools
- it makes automation easy
- being able to pipe and redirect output (defined later!) is immensely powerful
- job interviews often include shell questions
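As a small taste of the piping and redirection mentioned above (the file names here are made up for illustration):

```shell
# A pipe ('|') feeds one command's stdout into the next command's stdin:
printf 'b\na\nb\n' | sort | uniq -c

# Redirection: '>' writes stdout to a file; '2>>' appends stderr to a log.
echo "hello" > greeting.txt
ls /no/such/dir 2>> errors.log || true
```

The `|| true` just keeps the failing `ls` (whose error message went to `errors.log`) from aborting a script run with `set -e`.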
Issue 34 on Firefly may be caused by us passing a strange [null, value, null, value, ...] set of values for each data source to d3. This is a test of that hypothesis.
#!/usr/bin/env python3

def find(puzzle, words, direction):
    # Count (and print) occurrences of each word in the puzzle lines.
    # Note: `words` was a free variable in the original and `dir` shadowed
    # the builtin; both are now explicit parameters.
    count = 0
    for line in puzzle:
        for word in words:
            if word in line:
                count += 1
                print(direction, word)
    return count
I hereby claim:
- I am imbstack on github.
- I am imbstack (https://keybase.io/imbstack) on keybase.
- I have a public key whose fingerprint is DE31 5A20 A083 3559 A3FC EF2E 9EE9 8FE0 1284 997C
To claim this, I am signing this object:
require('babel-register');
let script = require('./script');
let fs = require('fs');
let _ = require('lodash');

// Each of these "sections" was run in order, with the previous ones commented out.
// I know this is super gross, but that's how my brain works!
// The script code is in the next file here.
// The "completed" variables are unimportant other than allowing me to start again from where I left off
// when something breaks. They are populated by redirecting error output to a log file.
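The log-file trick described in the comments above can be sketched like this (the "completed" marker format is made up for illustration):

```shell
# Run a step whose progress markers go to stderr; '2>>' appends stderr to a
# log file, so a later run can read the log to see what already finished.
sh -c 'echo "working on item-42"; echo "completed: item-42" >&2' 2>> completed.log

# Recover the completed items from the log
grep 'completed:' completed.log
```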
let Promise = require('promise');
let _ = require('lodash');
let taskcluster = require('taskcluster-client');

let auth = new taskcluster.Auth({
  credentials: {
    clientId: '...',
    accessToken: '...',
  },
});
You'll have three repositories:
- Your local one checked out on your computer
- Your fork in github
- The "canonical" repo (this is just the main one under the taskcluster org. I call it canon)
On your local repo, you should have two remotes:
- origin: which points to your fork (e.g. git@github.com:imbstack/ec2-manager.git)
- canon: which points to the canonical repo (e.g. https://github.com/taskcluster/ec2-manager.git)

Notice that my origin is set up over ssh so I can push to it more easily, while canon is https, which is the sort of access you will likely have.
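The two-remote setup above can be created like this (a sketch; in practice you would run `git remote add` inside your existing clone of your fork rather than a fresh `git init`):

```shell
# Stand-in for your local clone of the fork
git init -q ec2-manager && cd ec2-manager

# origin: your fork, over ssh; canon: the canonical taskcluster repo, over https
git remote add origin git@github.com:imbstack/ec2-manager.git
git remote add canon https://github.com/taskcluster/ec2-manager.git

# List the configured remotes and their URLs
git remote -v
```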
version: 1
tasks:
  - metadata:
      name: "version-control-tools Tests"
      description: "Builds the test environment and runs tests (without Docker)"
      owner: ${push.owner}
      source: ${repository.url}
    taskId: '${as_slugid("decision")}'
    taskGroupId: '${as_slugid("decision")}'
    schedulerId: '${repository.level}'
The following is my thinking on how we can strike a good balance between sharing configuration and keeping that configuration understandable.
Unfortunately, I think the current plan of having the provisioner compare bids across clouds is too aggressive for now. The time and money units that different clouds make available aren't easily comparable (and in gcp's case, it is hard to get the information in an understandable form at all). We can still allow bidding within a provider, but each workertype will need to specify which provider is responsible for creating its workers.
This has the side benefit of considerably simplifying the provisioning loop and the provider interfaces. Each provider will still need to offer a standardised way to list/terminate/etc its instances, but that should be doable.