Célian Raimbault (Cc618) 💚 https://celian.dev
@Cc618
Cc618 / invoke-manage-system.md
Last active June 15, 2024 11:22
Use Invoke to manage your system(s)!

How did I use Invoke to manage my system(s)?

What is Invoke?

Invoke is a Python package for defining tasks that can be run from the command line.

Your Python code looks like this:
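
(The gist's snippet is cut off in this preview; the sketch below is a minimal, illustrative tasks.py in Invoke's usual style, with made-up task names and commands, not the original code.)

# tasks.py -- illustrative sketch; task names and shell commands are made up
from invoke import task

@task
def update(c):
    """Update system packages (adjust the command for your distro)."""
    c.run("sudo apt update && sudo apt upgrade -y")

@task
def clean(c):
    """Remove temporary build artifacts."""
    c.run("rm -rf build/ __pycache__/")

Tasks defined this way are then run from the shell with invoke update (or the inv shortcut).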

@harubaru
harubaru / wd1-4.md
Last active September 11, 2023 04:12

Waifu Diffusion 1.4 Overview

An image generated at resolution 512x512 then upscaled to 1024x1024 with Waifu Diffusion 1.3 Epoch 7.

Goals

  • Improving image generation at different aspect ratios by using conditional masking during training. This lets the entire image be seen during training instead of center-cropped images, which should give better results when generating full-body images and portraits, and improve overall composition.
  • Expanding the input context from 77 tokens to 231 tokens, or perhaps to an unlimited number of tokens. Of the 77 input tokens, only 75 are usable, which does not leave nearly enough room for complex prompting that requires a lot of detail (see the sketch after this list for one common way to get past the limit).
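
One common way to get past the 77-token window is to encode the prompt in 77-token CLIP chunks and concatenate the resulting embeddings, so three chunks give the 231-token context mentioned above; each chunk reserves 2 of its 77 slots for BOS/EOS, hence only 75 usable tokens. The sketch below is illustrative only, not the Waifu Diffusion 1.4 implementation, and the model name and helper function are assumptions:

import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_prompt(prompt, chunk_len=75):
    # Tokenize without BOS/EOS, then split into 75-token chunks (75 + BOS + EOS = 77).
    ids = tokenizer(prompt, add_special_tokens=False).input_ids
    chunks = [ids[i:i + chunk_len] for i in range(0, len(ids), chunk_len)] or [[]]
    embeddings = []
    for chunk in chunks:
        padded = ([tokenizer.bos_token_id] + chunk + [tokenizer.eos_token_id]
                  + [tokenizer.pad_token_id] * (chunk_len - len(chunk)))
        with torch.no_grad():
            out = text_encoder(input_ids=torch.tensor([padded]))
        embeddings.append(out.last_hidden_state)
    # Concatenate along the sequence axis: three full chunks -> a (1, 231, 768) tensor.
    return torch.cat(embeddings, dim=1)

cond = encode_long_prompt("a very long, highly detailed prompt " * 30)
print(cond.shape)  # (1, 77 * num_chunks, 768) for this text encoder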
@fnky
fnky / ANSI.md
Last active June 30, 2024 12:35
ANSI Escape Codes

ANSI Escape Sequences

Standard escape codes are prefixed with Escape, which can be written in any of the following ways (a short usage sketch follows the list):

  • Ctrl-Key: ^[
  • Octal: \033
  • Unicode: \u001b
  • Hexadecimal: \x1B
  • Decimal: 27
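
For example (an illustrative Python sketch, not part of the original gist), every notation below produces the same ESC byte, here used to print bold red text and then reset the attributes:

# Each string is the same ESC character (0x1B), written four different ways.
for esc in ("\x1b", "\033", "\u001b", chr(27)):
    print(f"{esc}[1;31mbold red{esc}[0m")  # ESC[1;31m = bold red, ESC[0m = reset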
@ZijiaLewisLu
ZijiaLewisLu / Tricks to Speed Up Data Loading with PyTorch.md
Last active June 26, 2024 02:45
Tricks to Speed Up Data Loading with PyTorch

In most deep learning projects, the training script starts by loading data, which can easily take several minutes; only once the data is ready can I start testing my buggy code. Frustratingly often, I wait ten minutes just to find I made a stupid typo, then I have to restart and wait another ten minutes while hoping there are no other typos.

To make my life easier, I have put a lot of effort into reducing I/O overhead. Here I list some useful tricks I found, and I hope they save you some time as well.

  1. Use NumPy memmap to load arrays and say goodbye to HDF5.

    I used to rely on HDF5 to read/write data, especially when loading only a sub-part of all the data. Yet that was before I realized how fast and convenient NumPy's memmap is. In short, a memmapped file does not load the whole array when opened; it only "lazily" loads the parts that are actually required for real operations.

    Sometimes I may want to copy the full array into memory at once, as it makes later operations faster.
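
    As an illustration (a minimal sketch, not code from the original gist; the file name, dtype, and shape are made-up assumptions):

    # Opening a memmap is cheap: nothing is read from disk yet.
    import numpy as np

    arr = np.memmap("features.dat", dtype="float32", mode="r", shape=(100_000, 2048))

    # Slicing reads only the pages backing these 32 rows.
    batch = np.asarray(arr[:32])

    # Copy the whole array into RAM when repeated random access makes that worthwhile.
    arr_in_mem = np.array(arr)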

@BertrandBordage
BertrandBordage / cat.asm
Created April 16, 2014 19:04
Command-line read file in X86_64 intel assembly for Linux
; Compile with nasm -felf64 cat.asm && ld cat.o -o cat
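; Linux x86-64 syscall numbers, plus file descriptor and buffer constants, used below.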
%define SYS_EXIT 60
%define SYS_READ 0
%define SYS_WRITE 1
%define SYS_OPEN 2
%define SYS_CLOSE 3
%define STDOUT 1
%define BUFFER_SIZE 2048
@eliben
eliben / gist:5797351
Created June 17, 2013 14:36
Generic regex-based lexer in Python
#-------------------------------------------------------------------------------
# lexer.py
#
# A generic regex-based Lexer/tokenizer tool.
# See the if __main__ section at the bottom for an example.
#
# Eli Bendersky (eliben@gmail.com)
# This code is in the public domain
# Last modified: August 2010
#-------------------------------------------------------------------------------
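# (The file is truncated in this preview. As a rough illustration of the general
#  technique, not eliben's actual API, a regex-based tokenizer joins one pattern per
#  token type into a single master regex with named groups and scans with finditer.)
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(text):
    for match in MASTER_RE.finditer(text):
        if match.lastgroup != "SKIP":          # drop whitespace tokens
            yield match.lastgroup, match.group()

print(list(tokenize("x = 3 + y42")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '3'), ('OP', '+'), ('IDENT', 'y42')]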
@codebrainz
codebrainz / c99.l
Created June 14, 2012 23:49
C99 Lex/Flex & YACC/Bison Grammars
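/* Flex name definitions used below: D = digit, L = identifier start character,
   H = hex digit, E = decimal exponent, P = hex (binary) exponent,
   FS = floating-point suffix, IS = integer suffix. */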
D [0-9]
L [a-zA-Z_]
H [a-fA-F0-9]
E ([Ee][+-]?{D}+)
P ([Pp][+-]?{D}+)
FS (f|F|l|L)
IS ((u|U)|(u|U)?(l|L|ll|LL)|(l|L|ll|LL)(u|U))
%{
#include <stdio.h>