Jackie Loong (dragen1860), Sydney
name: "ResNet-101"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 224
input_dim: 224
layer {
bottom: "data"
top: "conv1"
This Week:
1. improved the CSML-TF version and found it hard to debug
2. re-developed CSML-PyTorch
3. followed the latest reasoning paper: Compositional Attention Networks for Machine Reasoning
Next Week:
1. work on CSML-PyTorch
2. rethink the mechanism of machine reasoning
3. prepare for the meta-learning presentation
1. finish debugging CSML-PyTorch
2. review meta-learning
3. read papers on memory-related topics
This week
1. NIPS
MOV SI, Table_A ;POINTER TO TABLE_A.
MOV DI, Table_B ;POINTER TO TABLE_B.
MOV CX, 3 ;ARRAY LENGTH.
REPEAT:
MOV AL, [SI]
MOV [DI], AL
INC SI
INC DI
LOOP REPEAT ;CX-1. IF CX>0 JUMP TO REPEAT.
1. Presentation
2. re-run the code from memory-related papers and understand its limitations
3. propose an idea based on memory: learning to rethink
Next:
1. research the new idea
dragen1860 / copy_.cpp
// Copy a null-terminated byte sequence from src to dest.
// void* cannot be dereferenced, so cast to char*; the terminator itself is not
// copied, and dest must already point to a large enough buffer.
bool copy_(void* src, void* dest)
{
    char* psrc = static_cast<char*>(src);
    char* pdest = static_cast<char*>(dest);
    while (*psrc != 0)
    {
        *pdest++ = *psrc++;
    }
    return true;
}
import torch
import torch.nn as nn
from torch.autograd import Variable  # only needed on PyTorch < 0.4; later versions use tensors directly
import numpy as np
import torch.utils.data as data_utils
import pandas as pd
import os
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import pickle as pkl
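The gist preview stops at the imports. As a rough illustration only, not the author's code, here is a minimal sketch of how these pieces are typically wired together: read a CSV with pandas, scale the features with MinMaxScaler, split with train_test_split, and wrap the arrays in a PyTorch DataLoader. The file name data.csv, the target column name, and all parameters below are hypothetical.

# Minimal sketch (hypothetical file/column names and parameters, not the author's pipeline)
df = pd.read_csv("data.csv")                    # hypothetical input file
features = df.drop(columns=["target"]).values   # hypothetical target column
labels = df["target"].values

scaler = MinMaxScaler()                         # scale each feature to [0, 1]
features = scaler.fit_transform(features)

x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

train_set = data_utils.TensorDataset(
    torch.from_numpy(x_train).float(),
    torch.from_numpy(y_train).float())
train_loader = data_utils.DataLoader(train_set, batch_size=32, shuffle=True)

From there, each (x, y) batch drawn from train_loader can be fed to an nn.Module model; the scaler would typically be saved with pickle so the same transform can be reapplied at inference time.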