
@wapiflapi
wapiflapi / binglide2_ipc.md
Last active August 29, 2015 14:15
IPC architecture for binglide V2

OUTDATED

This document is now outdated, thanks for the feedback everyone. This is basically what I will be trying out next:

The rest of this document is available for reference.

Goals

@wapiflapi
wapiflapi / granularity.md
Created February 11, 2015 16:30
Notes about binglide v2 architecture

Remarks about distributed computing.

  • Right now we need to compute histogram data on large files with cache support. This can work well by having a reducer process aggregate results from the mappers. This works because we have one type of reducer and one type of mapper, with a minimum map/reduce ratio among the workers.

  • What happens if in the future we need something like A <- B <- C? This will require at least three processes, with two of them waiting. We need to make
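The histogram case in the first bullet can be sketched as plain map/reduce with a chunk-level cache (the chunk size, cache layout, and function names here are illustrative, not binglide's actual code):

```python
import collections

CHUNK = 4  # tiny chunk size for the example; real chunks would be much larger
cache = {}  # (offset, length) -> partial histogram from a previous run

def map_chunk(data, offset, length):
    """Mapper: histogram one chunk, served from the cache when possible."""
    key = (offset, length)
    if key not in cache:
        cache[key] = collections.Counter(data[offset:offset + length])
    return cache[key]

def reduce_histograms(partials):
    """Reducer: aggregate the partial histograms produced by the mappers."""
    total = collections.Counter()
    for partial in partials:
        total += partial
    return total

data = b"aabbaacc"
partials = [map_chunk(data, off, CHUNK) for off in range(0, len(data), CHUNK)]
histogram = reduce_histograms(partials)
# histogram[ord("a")] == 4: both chunks contribute two 'a' bytes each
```

Because the reducer only consumes finished partial results, one reducer can serve any number of mappers, which is what makes the single-mapper-type/single-reducer-type case work out.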

@wapiflapi
wapiflapi / binglide2_ipc2.md
Created February 16, 2015 00:10
More rambling about binglide's IPC architecture

The purpose of this document is to explain the design decisions being taken for the multiprocessing and IPC architecture of binglide v2.

Synopsis

The main idea that this document ends up advocating is to have N workers for each type of service. In this scenario each worker knows how to do only one task. Those workers receive tasks from the broker, who is responsible for splitting jobs into smaller tasks according to the cache. This ends up being hard
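One way to picture that cache-aware splitting step (the `split_job` helper and its byte-range representation are illustrative assumptions, not the broker's actual code):

```python
def split_job(job_range, cache, task_size):
    """Split a job into task-sized byte ranges, skipping ranges the
    cache already holds, so only the missing work is dispatched."""
    start, stop = job_range
    tasks = []
    for off in range(start, stop, task_size):
        piece = (off, min(off + task_size, stop))
        if piece not in cache:
            tasks.append(piece)
    return tasks

cache = {(0, 10): "previously computed result"}
tasks = split_job((0, 30), cache, 10)
# tasks == [(10, 20), (20, 30)]: the cached (0, 10) slice is not re-sent
```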

@wapiflapi
wapiflapi / broker.py
Created February 16, 2015 10:43
Binglide v2 IPC BXMDP00 / Draft POC
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import collections
import zmq
import ipc
@wapiflapi
wapiflapi / venv
Created April 7, 2015 11:46
my virtualenv wrapper
#!/bin/sh
# `.` instead of `source`: portable under /bin/sh (dash has no `source`)
. ./venv/bin/activate
"$@"
{
  repositoryOwner(login: "github") {
    login
    avatarUrl
    repositories(first: 10, isFork: false, orderBy: {field: CREATED_AT, direction: DESC}) {
      edges {
        node {
          description
          createdAt
          stargazers {
            totalCount
          }
        }
      }
    }
  }
}
@wapiflapi
wapiflapi / tracing.py
Created September 12, 2018 08:51
GraphQL Tracing Middleware for Apollo Engine.
from graphql.execution import ExecutionResult
from .schema import schema
import datetime
import dateutil.parser
import dateutil.tz
import monotonic
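The preview above stops at the imports. A minimal sketch of what such a timing middleware can look like, recording nanosecond offsets in roughly the shape Apollo's tracing extension expects (`TracingMiddleware` and the `FakeInfo` stand-in are illustrative, not the gist's actual code):

```python
import time

class TracingMiddleware:
    """Record per-resolver timings as nanosecond offsets and durations,
    roughly the shape the apollo-tracing extension expects."""

    def __init__(self):
        self.start = time.monotonic()
        self.resolvers = []

    def resolve(self, next_, root, info, **args):
        begin = time.monotonic()
        result = next_(root, info, **args)
        self.resolvers.append({
            "path": list(getattr(info, "path", []) or []),
            "startOffset": int((begin - self.start) * 1e9),
            "duration": int((time.monotonic() - begin) * 1e9),
        })
        return result

class FakeInfo:  # stand-in for graphql-core's ResolveInfo in this sketch
    path = ["hello"]

mw = TracingMiddleware()
value = mw.resolve(lambda root, info: "world", None, FakeInfo())
# value == "world" and mw.resolvers now holds one timing entry
```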
@wapiflapi
wapiflapi / fish_prompt.fish
Created February 20, 2015 21:11
my fish prompt
# name: wapiflapi
function _git_branch_name
command git symbolic-ref HEAD ^/dev/null | sed -e 's|^refs/heads/||'
end
function _git_repo_name
basename (git rev-parse --show-toplevel ^/dev/null) ^/dev/null
end
<!DOCTYPE html>
<html lang="en">
  <head><title>Self Share Screen</title></head>
  <body><video id="video" autoplay></video></body>
  <script>
    navigator.mediaDevices.getDisplayMedia({
      video: { cursor: "never" },
      audio: false
    }).then(function (stream) {
      document.getElementById("video").srcObject = stream;
    }).catch(function (err) {
      console.error("getDisplayMedia failed:", err);
    });
  </script>
</html>
# @wapiflapi gqldl early draft.
#
# The goal of this is to manage dataloaders for multiple types while
# at the same time providing an easy integration for relay-compliance.
# - There will be full utility support for the relay spec.
# - Integration with ariadne WILL be easy and documented.
# - Integration with graphene MIGHT be documented.
#
# Feel free to comment, but documentation, tests and a proper release
# are coming soon(tm).
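A rough sketch of the per-type loader idea described above (`LoaderRegistry` and its methods are hypothetical names for illustration, not gqldl's actual API):

```python
class LoaderRegistry:
    """One batch-loading function per type, with a per-type cache,
    so resolvers can request keys without triggering N+1 fetches."""

    def __init__(self):
        self.batch_fns = {}
        self.cache = {}

    def register(self, typename, batch_fn):
        """batch_fn(keys) must return one value per key, in order."""
        self.batch_fns[typename] = batch_fn

    def load_many(self, typename, keys):
        cached = self.cache.setdefault(typename, {})
        # dedupe and drop already-cached keys before hitting the backend
        missing = [k for k in dict.fromkeys(keys) if k not in cached]
        if missing:
            for key, value in zip(missing, self.batch_fns[typename](missing)):
                cached[key] = value
        return [cached[k] for k in keys]

def fetch_users(keys):
    # stand-in backend fetch; a real one would run a single batched query
    return [f"user-{k}" for k in keys]

registry = LoaderRegistry()
registry.register("User", fetch_users)
users = registry.load_many("User", [1, 2, 1])
# users == ["user-1", "user-2", "user-1"]; keys 1 and 2 were fetched once
```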