
Anthony Mercurio (harubaru)

  • CoreWeave, ASU
  • Phoenix, AZ
harubaru / sim.c
Created August 30, 2023 07:00
simulate nbody problem on a lot of solidified sand called a computer
// simulate nbody problem on a lot of solidified sand called a computer
// $ mpicc -o nbody_dist sim.c -lm -O3
// $ mpirun -np 16 ./nbody_dist --steps 2 --n 100000 --frequency 0.01 --info
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>
#include <math.h>
#include <time.h>
harubaru / wd1-3-release.md
Last active April 20, 2024 04:35
Official Release Notes for Waifu Diffusion 1.3
harubaru / wd1-4.md
Last active September 11, 2023 04:12

Waifu Diffusion 1.4 Overview

An image generated at 512x512 and then upscaled to 1024x1024 with Waifu Diffusion 1.3 Epoch 7.

Goals

  • Improve image generation at different aspect ratios by using conditional masking during training. This lets the model see the entire image during training instead of a center crop, which should give better results for full-body images and portraits and improve overall composition.
  • Expand the input context from 77 tokens to 231 tokens, or perhaps to an unlimited number of tokens. Of the 77 input tokens, only 75 are usable, which does not leave nearly enough room for complex prompts that require a lot of detail (one possible chunking approach is sketched after this list).
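
The snippet below is not taken from the gist; it is a minimal sketch of the chunked-prompt approach popularized by community Stable Diffusion front-ends, written against the Hugging Face transformers CLIP classes. It splits a long prompt into 75-token chunks, encodes each chunk in its own 77-token window, and concatenates the per-chunk embeddings, which is one way to reach the 231-token figure (3 x 77) mentioned above; Waifu Diffusion 1.4 may well implement this differently.

# Hypothetical sketch, not the WD 1.4 implementation: chunked CLIP encoding for long prompts.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_prompt(prompt: str, chunk_size: int = 75) -> torch.Tensor:
    # Tokenize without BOS/EOS so the prompt can be split on exact 75-token boundaries.
    ids = tokenizer(prompt, add_special_tokens=False).input_ids
    chunks = [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)] or [[]]
    embeddings = []
    for chunk in chunks:
        # Rebuild a full 77-token window per chunk: BOS + tokens + EOS used as padding.
        padded = [tokenizer.bos_token_id] + chunk + [tokenizer.eos_token_id] * (chunk_size + 1 - len(chunk))
        with torch.no_grad():
            embeddings.append(text_encoder(torch.tensor([padded])).last_hidden_state)
    # Three chunks yield a (1, 231, 768) conditioning tensor instead of the usual (1, 77, 768).
    return torch.cat(embeddings, dim=1)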

Data Labeling - A How-To

How do I...

Confused? You're in the right place! Here's a list of tips on how to use this bot.

  1. Type in the command /label


harubaru / tensorizer.py
Created April 14, 2022 04:26
Tensorizer
# Code is still a WIP!
from typing import Dict, List, Tuple
from mmappickle import mmapdict
import copy
import torch
import time
import psutil
import os
harubaru / context.py
Last active February 21, 2022 17:01
Context manager
import json
import re
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('gpt2')
# Trim-direction flags: drop text from the top of the context, the bottom, or not at all.
TRIM_DIR_TOP = 0
TRIM_DIR_BOTTOM = 1
TRIM_DIR_NONE = 2
# Trim-granularity flag: trim on newline boundaries.
TRIM_TYPE_NEWLINE = 3

Keybase proof

I hereby claim:

  • I am tone121 on github.
  • I am amd64 (https://keybase.io/amd64) on keybase.
  • I have a public key whose fingerprint is DBF7 4ACC 0E39 9AD5 93DF C660 A42A 85B3 47A8 50BC

To claim this, I am signing this object:

harubaru / dither.glsl
Last active February 5, 2019 18:11
glsl dithering
/* Special thanks to Hugh Kennedy! */
/* Rec. 601 luma weights: convert an RGB colour to perceived brightness. */
float luma(vec3 color)
{
    return dot(color, vec3(0.299, 0.587, 0.114));
}
float dither2x2_imp(vec2 pos, float brightness)
{
    int x = int(mod(pos.x, 2.0));
harubaru / sse.h
Last active August 7, 2018 18:15
a wrapper made in C for SSE
/*
* sse.h - a wrapper made in C for SSE
*/
#ifndef INTRINSICS_H
#define INTRINSICS_H
#if (__STDC_VERSION__ > 199409L) /* C99 or later: the inline keyword is available */
#define USE_INLINE inline
#else