Verifying that +ebakan is my Bitcoin username. You can send me #bitcoin here: https://onename.io/ebakan
### Keybase proof
I hereby claim:
* I am ebakan on github.
* I am ebakan (https://keybase.io/ebakan) on keybase.
* I have a public key whose fingerprint is 3419 FDB4 1E03 8E08 7DF1 F16E 0B9A C1B4 E0BF F2F9
To claim this, I am signing this object:
@ebakan
ebakan / Popper.js
Last active January 24, 2018 19:25
React Component for Popper.js that takes the reference as its first child and the popper as its second child (a la react-tether)
import React, { Component, PropTypes } from 'react';
import popperJS from 'popper.js';

export default class Popper extends Component {
  constructor(props) {
    super(props);
    this.state = {};
    this.update = this.update.bind(this);
  }
ebakan / Rakefile
Created December 21, 2016 14:44 — forked from schickling/Rakefile
Activerecord without Rails
require "active_record"

namespace :db do
  db_config = YAML::load(File.open('config/database.yml'))
  db_config_admin = db_config.merge({'database' => 'postgres', 'schema_search_path' => 'public'})

  desc "Create the database"
  task :create do
    ActiveRecord::Base.establish_connection(db_config_admin)
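The key move in this Rakefile is the `db_config_admin` merge: to issue `CREATE DATABASE` you must connect to a database other than the one you are creating, so the task swaps in the maintenance `postgres` database. A minimal sketch of that merge in plain Ruby (the YAML contents and the `myapp_development` name are illustrative assumptions, not part of the forked gist):

```ruby
require "yaml"

# Simulated config/database.yml contents (assumption: a typical Postgres setup).
raw = <<~YAML
  adapter: postgresql
  database: myapp_development
  host: localhost
YAML

db_config = YAML.load(raw)

# Connect as the maintenance "postgres" database to run CREATE DATABASE,
# since Postgres will not let you create the database you are connected to.
db_config_admin = db_config.merge(
  "database" => "postgres",
  "schema_search_path" => "public"
)

puts db_config_admin["database"] # admin connection target
puts db_config["database"]       # the database db:create would create
```

The original config hash is left untouched by `merge`, so it can still be passed to `establish_connection` after the database exists.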
ebakan / includes_test.rb
Last active March 2, 2017 01:28
If I have an n-element array and want to test element inclusion n times, is it faster to use Array#include? or to build a Set and use Set#include?
require 'benchmark'
require 'set'

[1, 10, 100, 110, 125, 150, 200, 250, 500, 1000, 10000].each do |i|
  arr = (0..i).to_a.shuffle
  set_time = Benchmark.realtime do
    set = Set.new arr
    i.times do |j|
      set.include? j
    end
  end
  arr_time = Benchmark.realtime do
    i.times { |j| arr.include? j }
  end
  puts "n=#{i}: set=#{set_time} array=#{arr_time}"
end
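For intuition about what this benchmark measures: Array#include? scans the array linearly, while Set#include? is a hash lookup, O(1) on average. A minimal sketch of the asymmetry (the sizes and query counts are illustrative):

```ruby
require 'set'
require 'benchmark'

n = 50_000
arr = (0...n).to_a
set = arr.to_set

# Both answer the same membership question.
raise unless arr.include?(n - 1) && set.include?(n - 1)
raise if arr.include?(n) || set.include?(n)

# Array#include? compares up to n elements per query (worst case: the
# last element); Set#include? hashes the key once. Repeated queries
# make the gap obvious.
arr_time = Benchmark.realtime { 1_000.times { arr.include?(n - 1) } }
set_time = Benchmark.realtime { 1_000.times { set.include?(n - 1) } }
raise unless set_time < arr_time
```

The one-time cost of `Set.new arr` is itself O(n), which is why the gist times set construction inside the benchmark: for very few queries the array can still win.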
ebakan / ridge_bias_variance.py
Created May 24, 2019 23:30 — forked from oliverangelil/ridge_bias_variance.py
generate dummy data and apply ridge with varying levels of regularisation. Plots train and test results. This code goes with my blog post: http://snoek.ddns.net/~oliver/mysite/the-bias-variance-tradeoff.html
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
import numpy as np
import matplotlib.pyplot as plt
# generate data
X, y, w = make_regression(n_samples=1000, n_features=200, coef=True,
                          random_state=1, bias=0, noise=3,
                          tail_strength=0.9, effective_rank=10)