// https://www.zhihu.com/question/33084689/answer/58994758
#include <stdio.h>
#include <string.h>
typedef struct inst
{
    unsigned char code; // instruction opcode
    unsigned char cond; // condition under which the instruction executes
    short p1, p2;       // parameters 1 and 2
@roachsinai
roachsinai / lisp.cpp
Created May 16, 2019 05:14 — forked from ofan/lisp.cpp
Lisp interpreter in 90 lines of C++
I've enjoyed reading Peter Norvig's recent articles on Lisp. He implements a Scheme interpreter in 90 lines of Python in the first, and develops it further in the second.
Just for fun I wondered if I could write one in C++. My goals would be
1. A Lisp interpreter that would complete Peter's Lis.py test cases correctly...
2. ...in no more than 90 lines of C++.
Although I've been thinking about this for a few weeks, as I write this I have not written a line of the code. I'm pretty sure I will achieve 1, and 2 will be... a piece of cake!
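As a rough illustration (my own sketch, not ofan's lisp.cpp; the tokenize helper and its names are mine), the first step such an interpreter needs is a tokenizer. A classic trick is to pad parentheses with spaces and then split on whitespace:

// Minimal Scheme tokenizer sketch: pad '(' and ')' with spaces, then split.
#include <iostream>
#include <list>
#include <sstream>
#include <string>

std::list<std::string> tokenize(const std::string& src) {
    std::string padded;
    for (char c : src) {
        if (c == '(' || c == ')') { padded += ' '; padded += c; padded += ' '; }
        else                      { padded += c; }
    }
    std::istringstream in(padded);
    std::list<std::string> tokens;
    std::string tok;
    while (in >> tok) tokens.push_back(tok);
    return tokens;
}

int main() {
    for (const auto& t : tokenize("(define (sq x) (* x x))"))
        std::cout << t << "\n";
}

Running this prints one token per line; an interpreter would then parse the token list into a tree of cells and evaluate it.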
@roachsinai
roachsinai / event.cpp
Created May 24, 2019 09:37 — forked from darkf/event.cpp
Simple event system in C++
#include <functional>
#include <map>
#include <typeinfo>
#include <iostream>
struct Event {
    virtual ~Event() {}
};
struct TestEvent : Event {
    std::string msg;
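The preview cuts off here. Below is a self-contained sketch (my own, not necessarily darkf's implementation; EventBus and its methods are illustrative names) of how such an event system might dispatch handlers, keyed on typeid to match the includes above:

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <typeinfo>
#include <vector>

struct Event {
    virtual ~Event() {}
};

struct TestEvent : Event {
    std::string msg;
};

// Registry mapping a dynamic type name (from typeid) to its handlers.
struct EventBus {
    std::map<std::string, std::vector<std::function<void(Event&)>>> handlers;

    template <typename E>
    void subscribe(std::function<void(E&)> fn) {
        handlers[typeid(E).name()].push_back(
            [fn](Event& e) { fn(static_cast<E&>(e)); });
    }

    void emit(Event& e) {
        // Only handlers registered for e's dynamic type are invoked.
        for (auto& fn : handlers[typeid(e).name()]) fn(e);
    }
};

int main() {
    EventBus bus;
    bus.subscribe<TestEvent>([](TestEvent& e) { std::cout << e.msg << "\n"; });
    TestEvent e;
    e.msg = "hello";
    bus.emit(e);
}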

A Tour of PyTorch Internals (Part I)

The fundamental unit in PyTorch is the Tensor. This post will serve as an overview of how we implement Tensors in PyTorch, such that the user can interact with them from the Python shell. In particular, we want to answer four main questions:

  1. How does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code?
  2. How does PyTorch wrap the C libraries that actually define the Tensor's properties and methods?
  3. How does PyTorch cwrap work to generate code for Tensor methods?
  4. How does PyTorch's build system take all of these components to compile and generate a workable application?

Extending the Python Interpreter

PyTorch defines a new package torch. In this post we will consider the ._C module. This module is known as an "extension module" - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the Tensor) and to call C/C++ functions.
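To make that concrete, here is a minimal sketch of a CPython extension module, a toy module I'm calling spam (illustrative only, not PyTorch's actual _C), showing how a C function, a method table, and an init function are wired together:

// spam.cpp -- a toy CPython extension module (sketch, not torch._C)
#include <Python.h>

// A C function exposed to Python: spam.square(x) -> x * x
static PyObject* spam_square(PyObject* self, PyObject* args) {
    long x;
    if (!PyArg_ParseTuple(args, "l", &x))  // parse one Python int
        return NULL;
    return PyLong_FromLong(x * x);
}

// Method table: maps Python-visible names to C functions
static PyMethodDef SpamMethods[] = {
    {"square", spam_square, METH_VARARGS, "Return the square of an integer."},
    {NULL, NULL, 0, NULL}  // sentinel
};

// Module definition
static struct PyModuleDef spammodule = {
    PyModuleDef_HEAD_INIT, "spam", "A toy extension module.", -1, SpamMethods
};

// Module init function, called when Python imports `spam`
PyMODINIT_FUNC PyInit_spam(void) {
    return PyModule_Create(&spammodule);
}

Compiled into a shared library, this lets Python run import spam; spam.square(3). torch._C follows the same pattern, except that code for many Tensor methods is generated by cwrap rather than written by hand (question 3 above).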

WORK IN PROGRESS

PyTorch Internals Part II - The Build System

In the first post I explained how we generate a torch.Tensor object that you can use in your Python interpreter. Next, I will explore the build system for PyTorch. The PyTorch codebase has a variety of components:

  • The core Torch libraries: TH, THC, THNN, THCUNN
  • Vendor libraries: CuDNN, NCCL
  • Python Extension libraries
  • Additional third-party libraries: NumPy, MKL, LAPACK
name: "VGG_coco_SSD_300x300_train"
layer {
name: "data"
type: "AnnotatedData"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
name: "YOLONET"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 1 dim: 3 dim: 416 dim: 416 } }
}
layer {
name: "conv1"
type: "Convolution"
name: "cornernet"
input: "blob1"
input_dim: 1
input_dim: 3
input_dim: 511
input_dim: 511
layer {
name: "conv1"
type: "Convolution"
bottom: "blob1"
@roachsinai
roachsinai / n.sh
Created July 15, 2020 06:45 — forked from dagelf/n.sh
Netspeed 2 - gets Linux network interface throughput speed from /proc/net/dev; busybox bash/awk/sed compatible, good for embedded OpenWRT or UBNT / Ubiquiti, etc routers
#!/bin/sh
# Copy the contents of this file to the clipboard, then get a terminal open on your device and enter:
# $ cat > n.sh
# [Ctrl+V] or Right Click, Paste. Then [Ctrl+D].
# $ chmod +x n.sh
# To run: ./n.sh eth0
SLP=1 # display / sleep interval
DEVICE=$1
IS_GOOD=0
for GOOD_DEVICE in `grep \: /proc/net/dev | awk -F: '{print $1}'`; do
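The preview above cuts off mid-loop. As a hedged companion (not dagelf's script; read_counters is an illustrative helper), here is a small C++ sketch of the same idea: sample the interface's RX/TX byte counters from /proc/net/dev twice, a second apart, and report the difference.

// Sketch: read RX/TX byte counters for one interface from /proc/net/dev
// twice, one second apart, and print the throughput in KiB/s.
#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <thread>

// Returns RX/TX byte counters for `dev`, or false if the interface is absent.
bool read_counters(const std::string& dev, unsigned long long& rx, unsigned long long& tx) {
    std::ifstream f("/proc/net/dev");
    std::string line;
    while (std::getline(f, line)) {
        auto colon = line.find(':');
        if (colon == std::string::npos) continue;  // skip header lines
        std::string name = line.substr(0, colon);
        name.erase(0, name.find_first_not_of(' '));
        if (name != dev) continue;
        std::istringstream fields(line.substr(colon + 1));
        unsigned long long v[16] = {0};
        for (int i = 0; i < 16 && (fields >> v[i]); ++i) {}
        rx = v[0];  // receive bytes
        tx = v[8];  // transmit bytes
        return true;
    }
    return false;
}

int main(int argc, char** argv) {
    std::string dev = argc > 1 ? argv[1] : "eth0";
    unsigned long long rx1 = 0, tx1 = 0, rx2 = 0, tx2 = 0;
    if (!read_counters(dev, rx1, tx1)) { std::cerr << dev << " not found\n"; return 1; }
    std::this_thread::sleep_for(std::chrono::seconds(1));
    read_counters(dev, rx2, tx2);
    std::cout << dev << "  rx " << (rx2 - rx1) / 1024.0 << " KiB/s"
              << "  tx " << (tx2 - tx1) / 1024.0 << " KiB/s\n";
}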
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <windows.h> // Windows bitmap data structures
class Converter
{
public:
    Converter() : pixels_(NULL), width_(0), height_(0) {}