xunge
xunge / pytorch_imagenet.py
Last active Sep 28, 2020
Using PyTorch to train and validate on the ImageNet dataset
import time
import shutil
import os
import torch
import torch.nn as nn
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import torchvision.models as models
import torch.backends.cudnn as cudnn
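The imports above are the standard skeleton of an ImageNet training script: model zoo, dataset/transform utilities, and cuDNN autotuning. As a minimal sketch (an assumption, not the gist's actual code), one training step looks like the following, with a tiny CNN and random tensors standing in for the real model and the ImageNet DataLoader:

```python
import torch
import torch.nn as nn

# Tiny stand-in model; a real script would use torchvision.models (e.g. resnet50).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Random tensors in place of a DataLoader batch over ImageNet.
images = torch.randn(4, 3, 32, 32)
targets = torch.randint(0, 10, (4,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), targets)  # forward pass + loss
loss.backward()                           # backpropagate
optimizer.step()                          # SGD parameter update
```

A full script wraps this in epoch loops over `datasets.ImageFolder` loaders and checkpoints with `shutil.copyfile`, which is what the remaining imports are for.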
xunge / anaconda_sync.py
Created Jul 6, 2019
Anaconda repository sync script
#!/usr/bin/env python3
import os
import json
import hashlib
import tempfile
import shutil
import logging
import subprocess as sp
from pathlib import Path
from email.utils import parsedate_to_datetime
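The imports suggest a conda mirror script that verifies package checksums and parses HTTP date headers. A hedged sketch of those two pieces (the function name and sample values are assumptions, not the gist's code):

```python
import hashlib
from email.utils import parsedate_to_datetime

def md5_matches(data: bytes, expected: str) -> bool:
    # Conda repodata publishes an MD5 per package; a mirror compares it
    # against the hash of the downloaded bytes before keeping the file.
    return hashlib.md5(data).hexdigest() == expected

blob = b"fake-package-bytes"
ok = md5_matches(blob, hashlib.md5(blob).hexdigest())

# Last-Modified headers parse into timezone-aware datetimes,
# useful for deciding whether a remote file is newer than the local copy.
ts = parsedate_to_datetime("Sat, 06 Jul 2019 12:00:00 GMT")
```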
xunge / deep_feedforward_networks_03.py
Last active Jul 3, 2019
Defines the training samples X and Y for a single-hidden-layer feedforward network, along with input x and output y. The hidden-layer parameters are w1 and b1 with ReLU as the hidden activation; the output-layer parameters are w2 and b2 with sigmoid as the output activation. Defines the network output out, a mean-squared-error loss, and an Adam optimizer.
import tensorflow as tf
# Training inputs; a Python list here, but a NumPy ndarray also works.
x_data = [[1., 0.], [0., 1.], [0., 0.], [1., 1.]]
x = tf.placeholder(tf.float32, shape=[None, 2])  # placeholders must be fed when the graph runs
y_data = [[1], [1], [0], [0]]  # training labels; note the shape
y = tf.placeholder(tf.float32, shape=[None, 1])
# Variables are updated and saved as the graph is optimized.
weights = {'w1': tf.Variable(tf.random_normal([2, 16])),
           'w2': tf.Variable(tf.random_normal([16, 1]))}
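The excerpt cuts off before the forward pass. A NumPy re-creation of what the description specifies (an assumption, not the gist's own TensorFlow graph): a ReLU hidden layer, a sigmoid output, and an MSE loss.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([[1., 0.], [0., 1.], [0., 0.], [1., 1.]])
y = np.array([[1.], [1.], [0.], [0.]])

# Same shapes as the gist's weights dict; biases are an assumption.
w1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

h = np.maximum(0.0, x @ w1 + b1)              # ReLU hidden layer
out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))    # sigmoid output
loss = float(np.mean((out - y) ** 2))         # mean-squared-error loss
```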
xunge / deep_feedforward_networks_02.py
Last active Jul 3, 2019
Training loop for the linear regression model
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        for j in range(4):
            sess.run(train, feed_dict={x: np.expand_dims(X[j], 0), y: np.expand_dims(Y[j], 0)})
        loss_ = sess.run(loss, feed_dict={x: X, y: Y})
        print("step: %d, loss: %.3f" % (i, loss_))
    print("X: %r" % X)
    print("pred: %r" % sess.run(out, feed_dict={x: X}))
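The same loop can be re-created in plain NumPy (an assumption, not the gist's own graph): feed one sample per step, as the per-sample `feed_dict` does, then record the full-batch loss each epoch. Note that XOR is not linearly separable, so a linear model's MSE bottoms out near 0.25 rather than 0.

```python
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 1))
b = np.zeros(1)
lr = 0.05

losses = []
for i in range(1000):
    for j in range(4):                      # mirrors the per-sample feed_dict
        xj, yj = X[j:j + 1], Y[j:j + 1]
        grad = 2 * (xj @ w + b - yj)        # d(MSE)/d(prediction)
        w -= lr * xj.T @ grad               # plain SGD stands in for the optimizer
        b -= lr * grad.sum(axis=0)
    losses.append(float(np.mean((X @ w + b - Y) ** 2)))
# The best a linear model can do on XOR is predict 0.5 everywhere (loss 0.25).
```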
xunge / deep_feedforward_networks_01.py
Last active Jul 2, 2019
Defines the training samples X and Y, the input x and output y, the weight w and bias b, the linear-regression output out, a mean-squared-error loss, and an Adam optimizer.
import tensorflow as tf
import numpy as np
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = np.array([[0], [1], [1], [0]])
x = tf.placeholder(tf.float32, [None, 2])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.random_normal([2, 1]))
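The gist continues past this excerpt; per its description, the remaining lines define out, an MSE loss, and an Adam optimizer. As a hedged NumPy sketch of what a single Adam update on w would compute (variable names and the bias init are assumptions):

```python
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]], dtype=float)
Y = np.array([[0.], [1.], [1.], [0.]])
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 1))
b = np.zeros(1)
w0 = w.copy()

out = X @ w + b                              # linear-regression output
grad_w = 2 * X.T @ (out - Y) / len(X)        # d(MSE)/dw

# One Adam step (t = 1) with TensorFlow's default hyperparameters.
lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8
m = (1 - beta1) * grad_w                     # first-moment estimate
v = (1 - beta2) * grad_w ** 2                # second-moment estimate
m_hat = m / (1 - beta1)                      # bias correction, t = 1
v_hat = v / (1 - beta2)
w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # parameter update
```

After bias correction at t = 1, the step is roughly lr times the sign of the gradient, which is why Adam's early steps have nearly uniform magnitude.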