yukoga
  • Japan
@yukoga
yukoga / eventemitter.js
Created June 26, 2023 12:58 — forked from mudge/eventemitter.js
A very simple EventEmitter in pure JavaScript (suitable for both node.js and browsers).
/* Polyfill indexOf. */
var indexOf;
if (typeof Array.prototype.indexOf === 'function') {
    indexOf = function (haystack, needle) {
        return haystack.indexOf(needle);
    };
} else {
    indexOf = function (haystack, needle) {
        var i = 0, length = haystack.length, idx = -1, found = false;
        // Linear-search fallback for environments without Array.prototype.indexOf.
        while (i < length && !found) {
            if (haystack[i] === needle) { idx = i; found = true; }
            i += 1;
        }
        return idx;
    };
}
-- Author: Krisjan Oldekamp
-- https://stacktonic.com/article/enrich-a-single-customer-view-with-google-analytics-4-big-query-data
declare lookback_window int64 default 365; -- how many days to look back into the ga4 dataset to calculate profiles

-- udf: channel grouping (you could put this in a permanent function)
-- also see https://stacktonic.com/article/google-analytics-4-and-big-query-create-custom-channel-groupings-in-a-reusable-sql-function
create temporary function channel_grouping(tsource string, medium string, campaign string) as (
  case
    when (tsource = '(direct)' or tsource is null)
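The preview stops inside the channel_grouping CASE expression. As a rough illustration of how a profile-building script like this might be executed from Python, here is a minimal sketch using the google-cloud-bigquery client; the project name and the trailing placeholder SELECT are assumptions, not part of the original gist.

# Minimal sketch: run a GA4 profile-building script with the BigQuery client.
# "your-gcp-project" and the placeholder SELECT are assumptions, not gist code.
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")  # uses default credentials

sql = """
declare lookback_window int64 default 365;
-- ... remainder of the profile-building script from the gist ...
select 'placeholder' as note;
"""

rows = client.query(sql).result()  # submits the script and waits for the final result
for row in rows:
    print(dict(row))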
@yukoga
yukoga / GLM-hierarchical.ipynb
Created October 2, 2020 15:27 — forked from twiecki/GLM-hierarchical-jax.ipynb
notebooks/GLM-hierarchical.ipynb
(Notebook preview not available.)
@yukoga
yukoga / real_time.ipynb
Created April 26, 2020 08:12 — forked from nikhilkumarsingh/real_time.ipynb
Plotting real time data using Python
(Notebook preview not available.)
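The notebook body does not survive in this listing, only the title. As a loose sketch of what plotting real-time data in Python usually looks like, here is a minimal matplotlib FuncAnimation example fed by random placeholder data; it is an assumption, not the contents of the forked notebook.

# Sketch only: real-time line plot with matplotlib; the data feed is a placeholder.
import random
from collections import deque

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

xs, ys = deque(maxlen=100), deque(maxlen=100)  # keep the last 100 samples
fig, ax = plt.subplots()
line, = ax.plot([], [])

def update(frame):
    # A real notebook would read a sensor, API or stream here.
    xs.append(frame)
    ys.append(random.random())
    line.set_data(xs, ys)
    ax.relim()
    ax.autoscale_view()
    return line,

ani = FuncAnimation(fig, update, interval=200)  # redraw every 200 ms
plt.show()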
@yukoga
yukoga / twitter-streaming-pubsub.py
Created April 26, 2020 08:01 — forked from kaxil/twitter-streaming-pubsub.py
Twitter Streaming API to PubSub
import base64
import json

def publish(client, pubsub_topic, data_lines):
    """Publish to the given pubsub topic."""
    # Wrap each line in a Pub/Sub message, then send the whole batch in one request.
    messages = []
    for line in data_lines:
        messages.append({'data': line})
    body = {'messages': messages}
    str_body = json.dumps(body)
    data = base64.urlsafe_b64encode(bytearray(str_body, 'utf8'))
    client.publish(topic=pubsub_topic, data=data)
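A hedged sketch of how publish() might be driven by a line-based stream of tweets; the topic path and batch size are placeholders, not values from the original gist.

# Sketch only: batch incoming lines and push them to Pub/Sub via publish().
# PUBSUB_TOPIC is a placeholder; the real script builds it from its own settings.
PUBSUB_TOPIC = 'projects/your-project/topics/your-topic'

def stream_to_pubsub(client, line_iterator, batch_size=50):
    batch = []
    for line in line_iterator:
        batch.append(line)
        if len(batch) >= batch_size:
            publish(client, PUBSUB_TOPIC, batch)
            batch = []
    if batch:  # flush any remaining lines
        publish(client, PUBSUB_TOPIC, batch)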
@yukoga
yukoga / pubsub-to-bokeh.py
Created April 26, 2020 08:01 — forked from kaxil/pubsub-to-bokeh.py
Google Cloud Pub/Sub to Bokeh Dashboard - Streaming Dashboard
# User module to receive tweets
from recevie_tweets_pubsub import receive_tweets
import pandas
from bokeh.io import curdoc
from bokeh.models import ColumnDataSource
from bokeh.models import DatetimeTickFormatter
from bokeh.plotting import figure, output_file
import sys
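Only the import block of the dashboard survives in this preview. As a rough illustration of the streaming pattern those imports suggest, here is a minimal Bokeh sketch that streams new points into a ColumnDataSource on a periodic callback; the column names and the random data source are assumptions, not the gist's actual code.

# Minimal Bokeh streaming sketch (assumed pattern, not the original dashboard).
# Run with: bokeh serve --show pubsub-to-bokeh.py
from datetime import datetime
import random

from bokeh.io import curdoc
from bokeh.models import ColumnDataSource, DatetimeTickFormatter
from bokeh.plotting import figure

source = ColumnDataSource(data=dict(time=[], count=[]))

fig = figure(x_axis_type='datetime', title='Tweets per interval (placeholder data)')
fig.line(x='time', y='count', source=source)
fig.xaxis.formatter = DatetimeTickFormatter()

def update():
    # A real dashboard would pull counts from Pub/Sub here (e.g. receive_tweets()).
    new_row = dict(time=[datetime.now()], count=[random.randint(0, 10)])
    source.stream(new_row, rollover=200)  # keep only the last 200 points

curdoc().add_root(fig)
curdoc().add_periodic_callback(update, 1000)  # update once per second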
@yukoga
yukoga / python-convert-dictionary-to-object
Created October 23, 2019 10:46 — forked from typerandom/python-convert-dictionary-to-object
Convert a dictionary to an object (recursive).
class DictionaryUtility:
    """
    Utility methods for dealing with dictionaries.
    """
    @staticmethod
    def to_object(item):
        """
        Convert a dictionary to an object (recursive).
        """
        def convert(item):
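The preview ends at the inner convert helper. For reference, here is a hedged sketch of how such a recursive dictionary-to-object conversion is commonly finished, using types.SimpleNamespace; it is an assumption, not the forked gist's exact implementation.

# Assumed completion sketch: recursive dict -> attribute-access object.
from types import SimpleNamespace

def to_object(item):
    """Convert dicts (including dicts nested in lists/tuples) to objects."""
    if isinstance(item, dict):
        return SimpleNamespace(**{k: to_object(v) for k, v in item.items()})
    if isinstance(item, (list, tuple)):
        return type(item)(to_object(v) for v in item)
    return item

# Usage
obj = to_object({'user': {'name': 'yukoga', 'tags': ['a', 'b']}})
print(obj.user.name)  # -> yukoga
print(obj.user.tags)  # -> ['a', 'b']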
@yukoga
yukoga / demo.py
Created October 19, 2019 13:23 — forked from joelthchao/demo.py
Keras uses TensorBoard Callback with train_on_batch
import numpy as np
import tensorflow as tf
from keras.callbacks import TensorBoard
from keras.layers import Input, Dense
from keras.models import Model

def write_log(callback, names, logs, batch_no):
    # Write each named scalar to the TensorBoard callback's summary writer.
    for name, value in zip(names, logs):
        summary = tf.Summary()
        summary_value = summary.value.add()
        summary_value.simple_value = value
        summary_value.tag = name
        callback.writer.add_summary(summary, batch_no)
        callback.writer.flush()
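A hedged sketch of how write_log is typically combined with the TensorBoard callback and a manual train_on_batch loop; the layer sizes, log directory and random batches below are placeholders rather than the gist's actual values.

# Sketch only: wire write_log() into a manual train_on_batch loop.
log_dir = './logs'  # placeholder log directory
callback = TensorBoard(log_dir)

inputs = Input(shape=(10,))
outputs = Dense(1)(inputs)
model = Model(inputs, outputs)
model.compile(optimizer='sgd', loss='mse')
callback.set_model(model)  # gives the callback a summary writer for this model

for batch_no in range(100):
    X = np.random.rand(32, 10)
    y = np.random.rand(32, 1)
    loss = model.train_on_batch(X, y)
    write_log(callback, ['train_loss'], [loss], batch_no)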

Stevey's Google Platforms Rant

I was at Amazon for about six and a half years, and now I've been at Google for that long. One thing that struck me immediately about the two companies -- an impression that has been reinforced almost daily -- is that Amazon does everything wrong, and Google does everything right. Sure, it's a sweeping generalization, but a surprisingly accurate one. It's pretty crazy. There are probably a hundred or even two hundred different ways you can compare the two companies, and Google is superior in all but three of them, if I recall correctly. I actually did a spreadsheet at one point but Legal wouldn't let me show it to anyone, even though recruiting loved it.

I mean, just to give you a very brief taste: Amazon's recruiting process is fundamentally flawed by having teams hire for themselves, so their hiring bar is incredibly inconsistent across teams, despite various efforts they've made to level it out. And their operations are a mess; they don't really ...

from keras.preprocessing.text import Tokenizer

# Declare the texts you want to vectorize as a list.
texts = ["I am a student. He is a student, too.", "She is not a student."]

# Instantiate a Tokenizer and fit it on the texts prepared above.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)

# Get the number of texts that were given.
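The snippet is cut off right after that last comment. As a hedged continuation of where the example appears to be heading, the sketch below prints the fitted Tokenizer's counters and then vectorizes the texts; it is an assumption, not the original code.

# Assumed continuation: inspect the fitted Tokenizer and vectorize the texts.
print(tokenizer.document_count)  # number of texts seen by fit_on_texts -> 2
print(tokenizer.word_index)      # e.g. {'a': 1, 'student': 2, 'is': 3, ...}

# Turn each text into a sequence of word indices.
sequences = tokenizer.texts_to_sequences(texts)
print(sequences)

# Or build a document-term matrix ('binary', 'count', 'tfidf' and 'freq' modes exist).
matrix = tokenizer.texts_to_matrix(texts, mode='count')
print(matrix.shape)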