Masatoshi Hidaka (milhidaka)
@milhidaka
milhidaka / Dockerfile
Last active August 24, 2022 10:50
Dockerfile for a Docker image running a Jupyter notebook with Stable Diffusion image-generation tools installed (CPU only)
FROM python:3.9-buster
ENV PYTHONUNBUFFERED=1
RUN mkdir app
WORKDIR /app
RUN pip install --upgrade diffusers transformers scipy jupyter matplotlib
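Since this Dockerfile defines no CMD or EXPOSE, the notebook command has to be passed at run time. A possible build-and-run sequence (the tag name and port are illustrative, not part of the gist):

```shell
# Build the image from the directory containing the Dockerfile (tag is illustrative)
docker build -t sd-cpu-jupyter .

# Start Jupyter inside the container; no CMD is set, so pass the command here
docker run --rm -p 8888:8888 sd-cpu-jupyter \
  jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root
```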
milhidaka / index.js
Last active July 20, 2022 02:55
Development HTTP server with COEP headers (for testing SharedArrayBuffer and the like)
const express = require('express');
const app = express();
const PORT = 8080;
app.use(express.static('public', {
setHeaders: function (res, path) {
// disable caching
res.set("Cache-Control", "no-cache");
// CORS
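The snippet above is cut off, but the idea is a static file server that attaches the headers a page needs to become crossOriginIsolated (a requirement for SharedArrayBuffer). The same idea can be sketched with Python's standard library alone (my sketch, not the gist's code):

```python
import threading
import urllib.request
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class COEPHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # The two headers required for crossOriginIsolated pages (SharedArrayBuffer)
        self.send_header("Cross-Origin-Opener-Policy", "same-origin")
        self.send_header("Cross-Origin-Embedder-Policy", "require-corp")
        # Mirror the gist: disable caching while developing
        self.send_header("Cache-Control", "no-cache")
        super().end_headers()

server = ThreadingHTTPServer(("127.0.0.1", 0), COEPHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen("http://127.0.0.1:%d/" % server.server_port) as resp:
    coop = resp.headers["Cross-Origin-Opener-Policy"]
    coep = resp.headers["Cross-Origin-Embedder-Policy"]
server.shutdown()
print(coop, coep)  # same-origin require-corp
```

Overriding `end_headers` is the least invasive hook: every response, including directory listings, gets the headers.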
milhidaka / convert_float32.c
Created March 13, 2019 04:05
float16 -> float32 conversion in C
#include <stdio.h>
#include <stdint.h>
#include <assert.h>
#define DATA_SIZE 2052
float decode(uint16_t float16_value)
{
// MSB -> LSB
// float16=1bit: sign, 5bit: exponent, 10bit: fraction
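The decoding logic the C gist starts here (1 sign bit, 5 exponent bits, 10 fraction bits) can be written out in Python and cross-checked against the interpreter's own half-precision codec (the `'e'` struct format). The function name is mine, not the gist's:

```python
import struct

def float16_to_float32(h: int) -> float:
    """Decode a raw 16-bit half-float pattern (1 sign, 5 exponent, 10 fraction bits)."""
    sign = -1.0 if (h >> 15) & 1 else 1.0
    exponent = (h >> 10) & 0x1F
    fraction = h & 0x3FF
    if exponent == 0:                      # zero / subnormal: fraction * 2**-10 * 2**-14
        return sign * fraction * 2.0 ** -24
    if exponent == 0x1F:                   # all-ones exponent: inf or NaN
        return sign * float("inf") if fraction == 0 else float("nan")
    return sign * (1.0 + fraction / 1024.0) * 2.0 ** (exponent - 15)  # normal, bias 15

# Cross-check against struct's half-float round trip
for value in (0.0, 1.0, -2.5, 65504.0, 6.1e-5):
    packed, = struct.unpack("<H", struct.pack("<e", value))
    assert float16_to_float32(packed) == struct.unpack("<e", struct.pack("<e", value))[0]
```

The three branches (subnormal, inf/NaN, normal with exponent bias 15) are the full case split a C implementation also needs.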
milhidaka / imdb_lstm_run.py
Created June 19, 2017 09:01
runs model trained by imdb_lstm.py and gets input-output pair
# runs model trained by imdb_lstm.py and gets input-output pair
import numpy as np
import keras
from keras.preprocessing import sequence
from keras.datasets import imdb
max_features = 20000
maxlen = 80 # cut texts after this number of words (among top max_features most common words)
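The `maxlen = 80` cut is applied via Keras' `sequence.pad_sequences`, whose default behaviour (pad on the left with 0, keep the last `maxlen` items) can be sketched in plain Python; this is an illustration of the defaults, not the gist's code:

```python
def pad_sequences(seqs, maxlen, value=0):
    """Keras-style defaults: 'pre' padding and 'pre' truncation."""
    out = []
    for seq in seqs:
        seq = list(seq)[-maxlen:]                        # 'pre' truncation keeps the tail
        out.append([value] * (maxlen - len(seq)) + seq)  # 'pre' padding on the left
    return out

print(pad_sequences([[1, 2, 3], [4, 5, 6, 7, 8]], maxlen=4))
# [[0, 1, 2, 3], [5, 6, 7, 8]]
```

Left-padding matters for this model: the informative tokens end up adjacent to the final LSTM step, whose state feeds the classifier.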
milhidaka / imdb_lstm_reproduce.py
Created June 19, 2017 09:00
reproduce same result as imdb_lstm model using numpy
# reproduce lstm prediction by basic numpy operations
# model trained on imdb_lstm.py
# based on https://github.com/fchollet/keras/blob/master/keras/layers/recurrent.py#L1130
import numpy as np
from scipy.special import expit # logistic function
import h5py
"""
{'class_name': 'Sequential',
'config': [{'class_name': 'Embedding',
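The recurrence this gist reproduces is the standard Keras LSTM step: gates i, f, o via the logistic function (hence `expit`), candidate via tanh, then c = f*c_prev + i*tanh(...) and h = o*tanh(c). A scalar-sized pure-Python sketch of one step (parameter layout and values are illustrative, not the trained weights):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))  # same function as scipy.special.expit

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step for 1-d input/state; W, U, b hold (i, f, c, o) parameters."""
    i = sigmoid(x * W["i"] + h_prev * U["i"] + b["i"])   # input gate
    f = sigmoid(x * W["f"] + h_prev * U["f"] + b["f"])   # forget gate
    c = f * c_prev + i * math.tanh(x * W["c"] + h_prev * U["c"] + b["c"])
    o = sigmoid(x * W["o"] + h_prev * U["o"] + b["o"])   # output gate
    return o * math.tanh(c), c

# With all-zero parameters every gate is sigmoid(0) = 0.5 and the cell state halves:
zeros = {k: 0.0 for k in "ifco"}
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=2.0, W=zeros, U=zeros, b=zeros)
print(h, c)  # h = 0.5 * tanh(1.0), c = 1.0
```

The real gist does the same arithmetic with matrices: W, U become weight matrices sliced out of the HDF5 file, and the step loops over the 80 timesteps.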
milhidaka / imdb_lstm.py
Created June 19, 2017 08:59
lstm training example with model save
'''Trains an LSTM on the IMDB sentiment classification task.
The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.
Notes:
- RNNs are tricky. Choice of batch size is important,
choice of loss and optimizer is critical, etc.
Some configurations won't converge.
- LSTM loss decrease patterns during training can be quite different
milhidaka / mul.cpp
Created May 5, 2017 02:15
matrix multiplication using provided memory area
#include <iostream>
#include <Eigen/Dense>
using namespace std;
using namespace Eigen;
// matrix multiplication using provided memory area
// compile with -std=c++11
int main()
{
const int size = 2;
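The point of this gist (presumably via Eigen::Map over a caller-provided buffer) is that the product is written into memory the caller owns, with no allocation inside the multiply. The same idea in Python, using a preallocated `array.array` as the output buffer (my sketch, not the gist's code; values match the 2x2 `size` above):

```python
from array import array

def matmul_into(a, b, out, n):
    """Multiply two n*n row-major matrices, writing into the caller-provided `out`."""
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i * n + k] * b[k * n + j]
            out[i * n + j] = s  # no allocation: the result lands in the given buffer

a = array("d", [1.0, 2.0, 3.0, 4.0])   # [[1, 2], [3, 4]]
b = array("d", [5.0, 6.0, 7.0, 8.0])   # [[5, 6], [7, 8]]
out = array("d", [0.0] * 4)            # caller-owned memory, reusable across calls
matmul_into(a, b, out, 2)
print(list(out))  # [19.0, 22.0, 43.0, 50.0]
```

Reusing one output buffer across many calls is the same optimization Eigen::Map enables in C++: the hot loop performs no heap allocation.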