Lukas Wieg (cglukas)

🦕 Having fun with autoencoders
@dridk
dridk / RangeSlider.py
Last active February 1, 2024 08:59
The following code creates a range slider as a Qt widget with a native look and feel
from PySide2.QtWidgets import *
from PySide2.QtCore import *
from PySide2.QtGui import *
import sys


class RangeSlider(QWidget):
    """Slider widget with two handles for picking a value range, drawn with the native style."""

    def __init__(self, parent=None):
        super().__init__(parent)
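
A minimal usage sketch, not part of the gist itself, assuming the class above is completed: the widget is shown inside a standard QApplication event loop.

# Hypothetical usage example: create the application, show the slider, start the event loop.
if __name__ == "__main__":
    app = QApplication(sys.argv)
    slider = RangeSlider()
    slider.resize(300, 50)
    slider.show()
    sys.exit(app.exec_())
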
@doedotdev
doedotdev / lint.py
Created December 10, 2019 01:56
Python Pylint Runner to Pass (Exit 0) or Fail (Exit 1) Based on Pylint Score Threshold
import argparse
import logging

from pylint.lint import Run

logging.getLogger().setLevel(logging.INFO)

parser = argparse.ArgumentParser(prog="LINT")
parser.add_argument('-p',
                    # the excerpt is truncated here; see the sketch below for the complete idea
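
The excerpt above stops before the part that actually enforces the threshold. Below is a minimal sketch of the idea, not the original gist: the argument names (--path, --threshold) and the stats access are assumptions, and the exact stats layout differs between pylint versions.

import argparse
import sys

from pylint.lint import Run

parser = argparse.ArgumentParser(prog="LINT")
parser.add_argument('-p', '--path', required=True, help='module or package to lint')
parser.add_argument('-t', '--threshold', type=float, default=7.0,
                    help='minimum pylint score required to pass')
args = parser.parse_args()

# Run pylint programmatically without letting it exit the interpreter itself
# (newer pylint releases call this parameter `exit` instead of `do_exit`).
results = Run([args.path], do_exit=False)

# stats is a dict in older pylint versions and a LinterStats object in newer ones.
stats = results.linter.stats
score = stats.global_note if hasattr(stats, 'global_note') else stats['global_note']

# Exit 0 (pass) if the score meets the threshold, otherwise exit 1 (fail).
sys.exit(0 if score >= args.threshold else 1)
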
@ZijiaLewisLu
ZijiaLewisLu / Tricks to Speed Up Data Loading with PyTorch.md
Last active June 6, 2024 08:31
Tricks to Speed Up Data Loading with PyTorch

In most deep learning projects, the training script starts with lines that load data, which can easily take a handful of minutes. Only after the data is ready can I start testing my buggy code. Frustratingly often, I wait ten minutes just to find I made a stupid typo, and then I have to restart and wait another ten minutes, hoping there are no other typos.

To make my life easier, I have put a lot of effort into reducing the overhead of data loading. Here I list some useful tricks I found, and I hope they save you some time as well.

  1. Use NumPy memmap to load arrays and say goodbye to HDF5.

    I used to rely on HDF5 to read/write data, especially when loading only a sub-part of all the data. That was before I realized how fast and convenient a NumPy memmap file is. In short, a memmap file does not load the whole array when it is opened; it only "lazily" loads the parts that are required for actual operations. A quick sketch is shown below.
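
    A minimal sketch of the trick, with made-up file names and shapes: convert the dataset to a flat binary file once, then open it with np.memmap so that only the slices you actually index are read from disk.

    import numpy as np

    # One-time conversion: dump the array to a raw binary file (name and shape are illustrative).
    features = np.random.rand(10_000, 512).astype(np.float32)
    features.tofile('features.dat')

    # Training time: open the file as a memory-mapped array; nothing is read yet.
    features_mmap = np.memmap('features.dat', dtype=np.float32, mode='r', shape=(10_000, 512))

    # Only the rows copied here are actually read from disk.
    batch = np.array(features_mmap[:32])

    # For .npy files, np.load('features.npy', mmap_mode='r') gives the same lazy behavior.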

Sometimes I may want to copy the full array to memory at once, as it makes later operations