Chelsea Komlo chelseakomlo

2018/04/26 20:02:14.447545 [DEBUG] client.fingerprint_manager: detected drivers [qemu rkt mock_driver docker exec raw_exec java]
2018/04/26 20:02:14.447762 [INFO] client: Node ID "6a37a793-2dd6-d849-c20a-77da1b9ad278"
2018/04/26 20:02:14.449133 [INFO] client: node registration complete
2018/04/26 20:02:14.449171 [ERR] client.consul: error discovering nomad servers: client.consul: unable to query Consul datacenters: Get http://127.0.0.1:8500/v1/catalog/datacenters: dial tcp 127.0.0.1:8500: getsockopt: connection refused
2018/04/26 20:02:14.449233 [DEBUG] client: updated allocations at index 1 (total 0) (pulled 0) (filtered 0)
2018/04/26 20:02:14.449796 [DEBUG] client: state updated to ready
2018/04/26 20:02:14.449831 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2018/04/26 20:02:15.449610 [DEBUG] client: state changed, updating node and re-registering.
==================
WARNING: DATA RACE
Read at 0x00c420308a60 by goroutine 100:
  reflect.typedmemmove()
      /usr/local/go/src/runtime/mbarrier.go:259 +0x0
==================
WARNING: DATA RACE
Write at 0x00c4204331a0 by goroutine 80:
  runtime.mapdelete_faststr()
      /usr/local/go/src/runtime/hashmap_fast.go:801 +0x0
Places to add tagged metrics in Nomad client:

- Task runner
  - Emit resource usage stats of tasks
    - key: "client", "allocs", alloc.Job.Name, alloc.TaskGroup, alloc.ID, task.Name, "memory", "rss"
      value: ResourceUsage.MemoryStats.RSS
      proposed tag: "memory"
    - key: "client", "allocs", alloc.Job.Name, alloc.TaskGroup, alloc.ID, task.Name, "memory", "cache"
      value: ResourceUsage.MemoryStats.Cache
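The key/tag layout above can be sketched as plain Go. This is a dependency-free illustration, not Nomad's actual telemetry code: `metricKey` and `emitGauge` are hypothetical helpers standing in for a sink such as go-metrics, and the job/group/alloc/task names are made up.

```go
package main

import (
	"fmt"
	"strings"
)

// metricKey joins the key parts listed in the notes above, e.g.
// ["client", "allocs", job, group, allocID, task, "memory", "rss"].
func metricKey(parts ...string) string {
	return strings.Join(parts, ".")
}

// emitGauge stands in for a real telemetry sink; it just prints the
// flattened key, the proposed tag, and the sampled value.
func emitGauge(key, tag string, value float32) {
	fmt.Printf("%s{tag=%q} = %g\n", key, tag, value)
}

func main() {
	rss := float32(104857600) // e.g. ResourceUsage.MemoryStats.RSS, in bytes
	key := metricKey("client", "allocs", "example-job", "web",
		"alloc-1", "server", "memory", "rss")
	emitGauge(key, "memory", rss)
}
```

Tagging lets a metrics backend aggregate across allocations (by job or task) instead of treating every alloc ID as a distinct series.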
extern crate libc;
use libc::c_char;
use std::ffi::CString;

fn get_string() -> String {
    String::from("hello world!")
}

// The exported wrapper below completes the truncated snippet; the name
// get_c_string is illustrative. into_raw transfers ownership of the
// allocation to the C caller, which must hand the pointer back to Rust
// to be freed.
#[no_mangle]
pub extern "C" fn get_c_string() -> *mut c_char {
    CString::new(get_string()).unwrap().into_raw()
}

Existing Tor Guard Selection Algorithm

  • ALL_GUARD_LIST = guard information from latest consensus
  • GUARD_LIST = guards persisted to our state file
  • DIRECTORY_GUARD = whether we select guards with the V2Dir flag. Guards with the V2Dir flag can serve both as directory guards (for fetching information from directories) and as standard entry guards.

ON_BOOTSTRAP (no existing guards)

  1. RECEIVE_NEW_CONSENSUS
  2. From listed guards in ALL_GUARD_LIST with DIRECTORY_GUARD=true:
  3. 3 times do (default guard value on startup):
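The bootstrap steps above can be sketched as follows. This is a simplified illustration, not Tor's implementation: the `Guard` struct and `bootstrapGuards` are hypothetical, and the selection here just takes the first three V2Dir guards, whereas Tor samples weighted by bandwidth.

```go
package main

import "fmt"

// Guard is a minimal stand-in for a consensus router entry.
type Guard struct {
	Nickname string
	IsV2Dir  bool // the V2Dir flag from the consensus
}

// bootstrapGuards sketches ON_BOOTSTRAP: from ALL_GUARD_LIST, keep only
// entries usable as directory guards (DIRECTORY_GUARD=true), then take
// n of them (default 3 on startup) for the new GUARD_LIST.
func bootstrapGuards(allGuards []Guard, n int) []Guard {
	guardList := make([]Guard, 0, n)
	for _, g := range allGuards {
		if !g.IsV2Dir {
			continue // skip guards that cannot serve directory requests
		}
		guardList = append(guardList, g)
		if len(guardList) == n {
			break
		}
	}
	return guardList
}

func main() {
	consensus := []Guard{
		{"relayA", true}, {"relayB", false}, {"relayC", true}, {"relayD", true},
	}
	for _, g := range bootstrapGuards(consensus, 3) {
		fmt.Println(g.Nickname)
	}
}
```

Requiring V2Dir up front means every selected guard can answer both directory fetches and standard entry-guard traffic, so a single small guard set covers both roles.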

@chelseakomlo
chelseakomlo / gist:5493176
Created May 1, 2013 01:23
Refactoring with Franklin

class PopularityCounterProtocol
  def self.increase_popularity ; end
  def self.popular ; end
end

class RedisPopularityCounter
  #
@chelseakomlo
chelseakomlo / gist:5397911
Created April 16, 2013 17:39
SendConfirmationEmail class
class SendConfirmationEmail
  @queue = :confirmation_email

  def self.perform(order_user_email, confirmation_code, confirmation_hash)
    UserMailer.order_confirmation(order_user_email, confirmation_code, confirmation_hash)
  end
end
@chelseakomlo
chelseakomlo / gist:5035826
Created February 26, 2013 04:16
For some reason, the stub for remaining_slices wouldn't stick, and it kept coming back as a random number.
describe "#slice" do
  context "when given no parameters" do
    it "removes one slice of the apple" do
      apple = Fruit::Apple.new
      apple.stub(:remaining_slices).and_return(5)
      expect(apple.slice).to eq 1
    end

    it "reports the correct number of remaining slices" do
      apple = Fruit::Apple.new