I hereby claim:
- I am d4rky-pl on github.
- I am nerdblogpl (https://keybase.io/nerdblogpl) on keybase.
- I have a public key whose fingerprint is 4DE7 F67D 3A8D 0C5F 04EC 8AAC DF1C E039 57AB 7520
To claim this, I am signing this object:
#!/bin/bash

# Print the last webpack build status recorded in /tmp/webpack-status
if [[ -f /tmp/webpack-status ]]; then
  STATUS=$(cat /tmp/webpack-status)
  if [[ "$STATUS" -eq -1 ]]; then
    echo "Build failed!"
  elif [[ "$STATUS" -eq 1 ]]; then
    echo "Build success."
  else
    echo "Build in progress..."
  fi
fi
#!/usr/bin/env ruby

require 'net/http'
require 'json'
require 'time'

# Replace YOUR_API_KEY with your Codeship API key
API_KEY = 'YOUR_API_KEY'

# Change this to ['repository/name', 'repository/name2'] if you want to
# include only specific projects
PROJECTS = nil
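The gist is cut off after the configuration. For context, a hedged sketch of what the fetching step could look like against Codeship's since-retired v1 REST API; the endpoint path and JSON keys are assumptions from memory of the old docs, not taken from the gist:

# Hedged sketch only: the endpoint and response keys below are assumptions.
uri  = URI("https://codeship.com/api/v1/projects.json?api_key=#{API_KEY}")
data = JSON.parse(Net::HTTP.get(uri))

projects = data['projects'] || []
projects.select! { |p| PROJECTS.include?(p['repository_name']) } if PROJECTS

projects.each do |project|
  build = (project['builds'] || []).first
  puts "#{project['repository_name']}: #{build ? build['status'] : 'no builds'}"
end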
require 'mobx'

Mobx.init

# Declare observable attributes on a plain Ruby class via the Mobx mixin
class Session
  extend Mobx::Extension

  observable :user
  observable :company
end
// This script is ugly and simple but it works
// Sharing in case someone else finds it useful
// (assumes lodash's `map` and `reduce` are in scope)
window.fieldConverter = (obj) => {
  let fields   = map(obj, (values, field_name) => field_name)
  let labels   = reduce(obj, (acc, values, field_name) => { acc[field_name] = values.label;   return acc }, {})
  let rules    = reduce(obj, (acc, values, field_name) => { acc[field_name] = values.rules;   return acc }, {})
  let defaults = reduce(obj, (acc, values, field_name) => { acc[field_name] = values.default; return acc }, {})
  let types    = reduce(obj, (acc, values, field_name) => { acc[field_name] = values.type;    return acc }, {})
  let values   = reduce(obj, (acc, values, field_name) => { acc[field_name] = values.value;   return acc }, {})
  return { fields, labels, rules, defaults, types, values }
}
# Define a class in an anonymous local and export it from this file
ModuleExporter.module(__FILE__) do
  _Bar = Class.new do
    def bar
      "Bar!"
    end
  end

  export _Bar
end
I'm trying to run an ffmpeg process with a timeout without resorting to Ruby's Timeout class or spawning a new Thread with a counter inside (that's how streamio-ffmpeg does it, and it's error-prone and feels wrong).

Unfortunately, when doing non-blocking reads with nio4r, the size of the returned chunks is pretty much random. While synchronous reads yield the block once per line, with nio4r the polling makes chunk boundaries unpredictable.

Is there a better way, or should I just glue the chunks back together whenever I detect that a response is incomplete?
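One way out (a minimal sketch, not an official nio4r pattern): keep a per-stream string buffer, append every chunk the poller hands you, and only emit complete lines, i.e. everything up to the last separator. Note that ffmpeg terminates its progress updates with carriage returns rather than newlines, so the separator may need adjusting.

# Minimal line-buffering sketch for reassembling arbitrary-size chunks
# from non-blocking reads into complete lines. The callback-based shape
# is an illustrative assumption, not part of nio4r's API.
class LineBuffer
  def initialize(separator = "\n", &on_line)
    @buffer    = +''
    @separator = separator
    @on_line   = on_line
  end

  # Append a chunk of any size; fire the callback once per complete line
  def <<(chunk)
    @buffer << chunk
    while (index = @buffer.index(@separator))
      line = @buffer.slice!(0..index)
      @on_line.call(line.chomp(@separator))
    end
  end
end

buffer = LineBuffer.new { |line| puts "ffmpeg: #{line}" }
buffer << "frame=  10 fps=0.0 size=     256kB time=00:00:0"
buffer << "0.40\nframe=  20"  # first line completes; "frame=  20" stays buffered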
def self.extend_printouts_constants
  const = 'PRINTOUT_PROCESS_STATES'
  if ProductPrintingSupport.const_defined?(const)
    old = ProductPrintingSupport.const_get(const)
    # Remove the constant before redefining it so Ruby does not emit an
    # "already initialized constant" warning
    ProductPrintingSupport.send(:remove_const, const)
    ProductPrintingSupport.const_set(const, old + [:job_ganging_printout_in_progress])
  end
end
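For context, this remove-then-set dance exists because plain reassignment of a constant triggers a Ruby warning. A toy illustration (the names here are made up):

STATES = [:a, :b]
STATES = [:a, :b, :c]             # warning: already initialized constant STATES

Object.send(:remove_const, :STATES)
STATES = [:a, :b, :c]             # silent: the constant no longer existed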
Calculating -------------------------------------
              method    56.995k i/100ms
                proc    49.213k i/100ms
            proc new    57.562k i/100ms
-------------------------------------------------
              method     1.797M (±10.7%) i/s -    8.891M
                proc     1.522M (± 8.3%) i/s -    7.579M
            proc new     1.867M (± 7.9%) i/s -    9.267M

Comparison:
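The above is benchmark-ips output, truncated before the comparison table. A sketch of the kind of script that produces it; only the three report labels come from the output, the measured bodies are assumptions:

require 'benchmark/ips'

# Hypothetical bodies: the gist only preserves the labels
def add_method
  1 + 1
end
add_proc     = proc { 1 + 1 }
add_proc_new = Proc.new { 1 + 1 }

Benchmark.ips do |x|
  x.report('method')   { add_method }
  x.report('proc')     { add_proc.call }
  x.report('proc new') { add_proc_new.call }
  x.compare!
end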
require 'benchmark'
require 'execjs'
require 'therubyracer'

js_code = '2+2'
STDOUT.sync = true

ExecJS.runtime = ExecJS::Runtimes::Node
Benchmark.bmbm(20) do |benchmark|
  benchmark.report('execjs (node)') { ExecJS.eval(js_code) }
  benchmark.report('therubyracer')  { V8::Context.new.eval(js_code) }
end