A Tour of PyTorch Internals (Part I)

The fundamental unit in PyTorch is the Tensor. This post will serve as an overview of how Tensors are implemented in PyTorch, such that the user can interact with them from the Python shell. In particular, we want to answer four main questions:

  1. How does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code?
  2. How does PyTorch wrap the C libraries that actually define the Tensor's properties and methods?
  3. How does PyTorch cwrap work to generate code for Tensor methods?
  4. How does PyTorch's build system combine all of these components and compile them into a workable application?

Extending the Python Interpreter

PyTorch defines a new package torch. In this post we will consider the ._C module. This module is known as an "extension module" - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the Tensor) and to call C/C++ functions.
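
To make this concrete, the sketch below shows what a minimal C extension module looks like in general. The module and type names (`_C`, `Example`) are illustrative placeholders, not PyTorch's actual definitions: the point is only the shape of the machinery — a C struct backing each instance, a `PyTypeObject` describing the new built-in type, and an initialization function the interpreter calls on import. PyTorch defines its Tensor types with this same mechanism, just at a much larger scale.

#include <Python.h>

/* C struct backing each instance of the new built-in type. */
typedef struct {
    PyObject_HEAD
    double value;              /* example payload; a real Tensor holds far more */
} ExampleObject;

/* Type object that describes the new type to the interpreter. */
static PyTypeObject ExampleType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name      = "_C.Example",
    .tp_basicsize = sizeof(ExampleObject),
    .tp_flags     = Py_TPFLAGS_DEFAULT,
    .tp_new       = PyType_GenericNew,
    .tp_doc       = "A toy built-in object type",
};

/* Definition of the extension module itself. */
static struct PyModuleDef examplemodule = {
    PyModuleDef_HEAD_INIT,
    "_C",                      /* name used when importing from Python */
    "Toy extension module",
    -1,
    NULL
};

/* Entry point the interpreter invokes on `import _C`. */
PyMODINIT_FUNC PyInit__C(void) {
    if (PyType_Ready(&ExampleType) < 0)
        return NULL;
    PyObject *m = PyModule_Create(&examplemodule);
    if (m == NULL)
        return NULL;
    Py_INCREF(&ExampleType);
    if (PyModule_AddObject(m, "Example", (PyObject *)&ExampleType) < 0) {
        Py_DECREF(&ExampleType);
        Py_DECREF(m);
        return NULL;
    }
    return m;
}

Once compiled and placed on the Python path, such a module can be used as `import _C; x = _C.Example()`, giving Python code a handle to an object whose storage and methods live entirely in C.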

dusty-nv / pytorch_jetson_install.sh
Install procedure for PyTorch on NVIDIA Jetson TX1/TX2 with JetPack <= 3.2.1. For JetPack 4.2 and Xavier/Nano/TX2, see https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/
#!/bin/bash
#
# EDIT: this script is outdated, please see https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-6-0-now-available
#
sudo apt-get install python-pip
# upgrade pip
pip install -U pip
pip --version
# pip 9.0.1 from /home/ubuntu/.local/lib/python2.7/site-packages (python 2.7)