jadechip / classification.sql
Created November 16, 2019 07:31
Create a rudimentary classification model in BigQuery ML
CREATE OR REPLACE MODEL `ecommerce.classification_model`
OPTIONS
(
model_type='logistic_reg',
labels = ['will_buy_on_return_visit']
)
AS
#standardSQL
SELECT
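
The preview cuts off at the SELECT. A minimal sketch of how the training query might continue, assuming the public `data-to-insights.ecommerce.web_analytics` sample table used in Google's BigQuery ML labs; the feature columns chosen here are illustrative, not necessarily the gist's:

-- Hypothetical continuation of the truncated query
SELECT
  * EXCEPT(fullVisitorId)
FROM (
  -- illustrative features, taken from each visitor's first visit
  SELECT
    fullVisitorId,
    IFNULL(totals.bounces, 0) AS bounced,
    IFNULL(totals.timeOnSite, 0) AS time_on_site
  FROM `data-to-insights.ecommerce.web_analytics`
  WHERE totals.newVisits = 1)
JOIN (
  -- label: did the visitor transact on a later, non-first visit?
  SELECT
    fullVisitorId,
    IF(COUNTIF(totals.transactions > 0 AND totals.newVisits IS NULL) > 0, 1, 0)
      AS will_buy_on_return_visit
  FROM `data-to-insights.ecommerce.web_analytics`
  GROUP BY fullVisitorId)
USING (fullVisitorId);

Evaluation of such a model would then typically go through ML.EVALUATE over a held-out date range.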
jadechip / model.py
Created November 14, 2019 12:52
PySpark job on GCP
#!/usr/bin/env python
"""
Copyright Google Inc. 2016
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
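
The preview ends inside the Apache license header, before any of the job's code. A minimal sketch of what a Dataproc PySpark job of this kind looks like; the app name, bucket path, and column name are placeholders, not from the gist:

#!/usr/bin/env python
# Hypothetical skeleton of a Dataproc PySpark job; the gist's preview ends
# inside the license header, so none of its actual logic is visible.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sample-job").getOrCreate()

# gs://my-bucket/input.csv and some_column are placeholders, not from the gist
df = spark.read.csv("gs://my-bucket/input.csv", header=True, inferSchema=True)
df.groupBy("some_column").count().show()

spark.stop()

A job like this is typically submitted with gcloud dataproc jobs submit pyspark model.py --cluster=<cluster> --region=<region>.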
jadechip / agent.py
Created November 6, 2019 05:26
Reinforcement learning - collaboration and competition companion code
class Agent():
    """Interacts with and learns from the environment."""

    def __init__(self, state_size, action_size, random_seed, memory):
        """Initialize an Agent object.

        Params
        ======
            state_size (int): dimension of each state
            action_size (int): dimension of each action
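
The constructor preview ends inside the docstring; the extra memory argument suggests a replay buffer shared across agents in this multi-agent setting. A sketch of the action-selection step a DDPG-style agent typically exposes; this standalone function is a stand-in, not the gist's method, and the noise object and [-1, 1] clipping are assumptions:

import numpy as np
import torch

def act(actor, state, noise=None, device="cpu"):
    """Deterministic action from the actor, optionally perturbed for exploration (sketch)."""
    state_t = torch.from_numpy(state).float().to(device)
    actor.eval()
    with torch.no_grad():
        action = actor(state_t).cpu().numpy()
    actor.train()
    if noise is not None:
        action += noise.sample()  # e.g. an Ornstein-Uhlenbeck process (assumed)
    return np.clip(action, -1.0, 1.0)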
jadechip / actor-critic.py
Created November 6, 2019 05:24
Reinforcement learning - collaboration and competition companion code
import numpy as np
import torch.nn as nn

# Simple function approximators
def hidden_init(layer):
    fan_in = layer.weight.data.size()[0]
    lim = 1. / np.sqrt(fan_in)
    return (-lim, lim)

class Actor(nn.Module):
    """Actor (Policy) Model."""
jadechip / agent.py
Created November 6, 2019 05:15
Reinforcement learning - continuous control companion code
class Agent():
    """Interacts with and learns from the environment."""

    def __init__(self, state_size, action_size, random_seed):
        """Initialize an Agent object.

        Params
        ======
            state_size (int): dimension of each state
            action_size (int): dimension of each action
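
The constructor preview cuts off early. One piece such agents reliably contain is the soft target update from the DDPG paper; this version is the standard formulation, though the gist's tau value and attribute names are not visible:

def soft_update(local_model, target_model, tau):
    """Soft-update target weights: theta_target <- tau*theta_local + (1 - tau)*theta_target."""
    for target_param, local_param in zip(target_model.parameters(), local_model.parameters()):
        target_param.data.copy_(tau * local_param.data + (1.0 - tau) * target_param.data)

With a small tau (e.g. 1e-3) the target networks drift slowly toward the learned networks, which stabilizes training.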
jadechip / actor-critic.py
Created November 6, 2019 05:13
Reinforcement learning - continuous control companion code
import numpy as np
import torch.nn as nn

def hidden_init(layer):
    fan_in = layer.weight.data.size()[0]
    lim = 1. / np.sqrt(fan_in)
    return (-lim, lim)

class Actor(nn.Module):
    """Actor (Policy) Model."""

    def __init__(self, state_size, action_size, seed, fc1_units=400, fc2_units=300):
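
Only the Actor's signature survives the preview. The Critic that usually pairs with it concatenates the action after the first hidden layer, following the DDPG paper; the widths below mirror the Actor's 400/300 defaults but are still assumptions:

import torch
import torch.nn.functional as F

class Critic(nn.Module):
    """Critic (Value) Model - hypothetical sketch, not the gist's exact code."""

    def __init__(self, state_size, action_size, seed, fcs1_units=400, fc2_units=300):
        super().__init__()
        self.seed = torch.manual_seed(seed)
        self.fcs1 = nn.Linear(state_size, fcs1_units)
        # the action joins the network after the first hidden layer
        self.fc2 = nn.Linear(fcs1_units + action_size, fc2_units)
        self.fc3 = nn.Linear(fc2_units, 1)

    def forward(self, state, action):
        xs = F.relu(self.fcs1(state))
        x = torch.cat((xs, action), dim=1)
        x = F.relu(self.fc2(x))
        return self.fc3(x)  # scalar estimate of Q(s, a)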
jadechip / q-network-agent.py
Created November 6, 2019 04:41
Reinforcement learning - Navigation companion code
import random
import torch

# assumed module-level device, as in the companion q-network code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

class Agent():
    def __init__(self, state_size, action_size, seed):
        self.state_size = state_size
        self.action_size = action_size
        self.seed = random.seed(seed)  # seeds Python's global RNG (random.seed returns None)
        # Q-Network
        self.qnetwork_local = QNetwork(state_size, action_size, seed).to(device)
        self.qnetwork_target = QNetwork(state_size, action_size, seed).to(device)
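
The preview ends at the target network. A sketch of the epsilon-greedy action selection that typically comes next in DQN agents; this standalone function is a stand-in, and the eps handling is an assumption:

import random
import numpy as np
import torch

def act(qnetwork, state, eps=0.0, device="cpu"):
    """Epsilon-greedy action from a Q-network (sketch, not the gist's code)."""
    state_t = torch.from_numpy(state).float().unsqueeze(0).to(device)
    qnetwork.eval()
    with torch.no_grad():
        action_values = qnetwork(state_t)
    qnetwork.train()
    if random.random() > eps:
        return int(np.argmax(action_values.cpu().numpy()))  # exploit
    return random.randrange(action_values.shape[-1])        # explore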
jadechip / q-network.py
Created November 6, 2019 04:40
Reinforcement learning - Navigation companion code
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Actor (Policy) Model."""

    def __init__(self, state_size, action_size, seed, fc1_units=64, fc2_units=64):
        super(QNetwork, self).__init__()
        self.seed = torch.manual_seed(seed)
        self.fc1 = nn.Linear(state_size, fc1_units)
        self.fc2 = nn.Linear(fc1_units, fc2_units)
        self.fc3 = nn.Linear(fc2_units, action_size)
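
The preview stops before the forward pass. The usual completion is a two-ReLU MLP; the method below is assumed, indented to sit inside the class above:

    def forward(self, state):
        """Map a state to action values (assumed body of the truncated method)."""
        # requires: import torch.nn.functional as F
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)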
jadechip / env.sh
Last active October 30, 2019 16:45
Environment variables for Drone CI
#!/bin/bash
# Set environment variables
export DRONE_SERVER_HOST=${DRONE_SERVER_HOST} # e.g. drone.mydomain.io
export DRONE_RPC_SECRET=${DRONE_RPC_SECRET} # e.g. correct-horse-battery-staple
export DRONE_GITHUB_CLIENT_ID=${DRONE_GITHUB_CLIENT_ID} # e.g. di2450-3huchuy-fdhu378-k24556892
export DRONE_GITHUB_CLIENT_SECRET=${DRONE_GITHUB_CLIENT_SECRET} # e.g. dx19x3-3o44h0y-id6uf7h-q4fg5hkp2
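
How these variables are typically consumed, assuming the start-drone.yaml playbook below; inventory.ini is a placeholder:

#!/bin/bash
# Hypothetical usage: the variables must already be set in the shell,
# since env.sh only re-exports them for child processes.
source ./env.sh
ansible-playbook -i inventory.ini start-drone.yaml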
jadechip / start-drone.yaml
Created October 30, 2019 06:51
Sample Ansible Playbook
---
- hosts: droneci
  become: true
  vars:
    ansible_python_interpreter: "/usr/bin/env python3"
  environment:
    DRONE_SERVER_HOST: '{{ lookup("env", "DRONE_SERVER_HOST") }}'
    DRONE_RPC_SECRET: '{{ lookup("env", "DRONE_RPC_SECRET") }}'
    DRONE_GITHUB_CLIENT_ID: '{{ lookup("env", "DRONE_GITHUB_CLIENT_ID") }}'
    DRONE_GITHUB_CLIENT_SECRET: '{{ lookup("env", "DRONE_GITHUB_CLIENT_SECRET") }}'
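
The preview ends at the play's environment block. A sketch of a tasks section such a playbook might continue with, assuming the official drone/drone image and Ansible's docker_container module; the ports and module arguments are illustrative:

  tasks:
    - name: Run the Drone server container (illustrative, not the gist's actual tasks)
      docker_container:
        name: drone-server
        image: drone/drone:1
        restart_policy: always
        published_ports:
          - "80:80"
          - "443:443"
        env:
          DRONE_SERVER_HOST: '{{ lookup("env", "DRONE_SERVER_HOST") }}'
          DRONE_RPC_SECRET: '{{ lookup("env", "DRONE_RPC_SECRET") }}'
          DRONE_GITHUB_CLIENT_ID: '{{ lookup("env", "DRONE_GITHUB_CLIENT_ID") }}'
          DRONE_GITHUB_CLIENT_SECRET: '{{ lookup("env", "DRONE_GITHUB_CLIENT_SECRET") }}'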