navjack / readme.md
Last active April 8, 2023 18:28
Stable Diffusion GameDev Pixel Art Generation
navjack / 1gpt-j 8bit readme.md
Last active June 10, 2023 14:10
Local GPT-J 8-Bit on WSL 2

Local GPT-J 8-Bit on WSL 2

This should work on GPUs with as little as 8 GB of VRAM, but in practice I've seen usage go up to 9-10 GB.

I have only personally tested this as functional in WSL 2 on Windows 11's latest Dev channel preview build. Attempts to run it natively in Windows didn't work, but I won't stop you from trying.

I have personally backed up any remote files that could one day be lost to time. I can provide those if needed.
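For a sense of what this looks like in practice, here is a minimal sketch of loading GPT-J 6B in 8-bit via Hugging Face transformers plus bitsandbytes and accelerate. This is my own illustration, not necessarily the exact route this guide takes, and the checkpoint name and generation settings are assumptions:

# Minimal sketch (my assumption, not necessarily this guide's exact setup):
# GPT-J 6B loaded in 8-bit via transformers + bitsandbytes + accelerate.
# VRAM use should land in the same roughly 8-10 GB ballpark described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # assumed checkpoint; a pre-quantized one would also work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # let accelerate place the layers on the GPU
    load_in_8bit=True,   # bitsandbytes 8-bit quantization
)

prompt = "The neat thing about running a 6B-parameter model locally is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))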

Now, why is this neat? Why is this cool?

How to set up GPT-2 locally for Nvidia 20-series GPU usage

Like it says in the heading, this is a guide for 20-series Nvidia GPUs only. This project uses TensorFlow 1, which is incompatible with 30-series and 40-series GPUs.

  • Download and install the latest miniconda
  • Download and install CUDA Toolkit 10.0 (cuda_10.0.130_411.31_win10.exe). NOTE: When installing, you can uncheck GeForce Experience and the drivers, but make sure everything else is still checked to install
  • The next step requires an Nvidia Developer account and accepting any agreements that entails
  • Download and extract cuDNN v7.6.5 (November 5th, 2019) for CUDA 10.0 (cudnn-10.0-windows10-x64-v7.6.5.32.zip)
  • Copy the folders inside the cuDNN archive (bin, include, and lib) into your C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0 folder (a quick GPU sanity check is sketched right after this list)
  • To put it si
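Once CUDA 10.0 and cuDNN are in place, here is a quick sanity check to run from inside the conda environment. This is my own addition rather than one of the original steps, and it assumes tensorflow-gpu==1.15 is what ends up installed:

# Hypothetical sanity check (my addition, not one of the guide's steps),
# assuming tensorflow-gpu==1.15 inside the conda environment.
# If CUDA 10.0 and cuDNN 7.6.5 are wired up correctly, the GPU check prints True.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)                        # expect a 1.x version
print("GPU available:", tf.test.is_gpu_available(cuda_only=True))   # TF1-era API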
navjack / win11gametesting.md
Last active January 19, 2022 00:09
Windows 11 Game Compatibility Testing

Results from my testing. Note that each game had some funny business related to being fullscreen: if a game randomly decides to disappear on you but is still showing up in Task Manager, use Alt+Tab or Win+Tab to try to get focus again, OR disable fullscreen optimizations (though that's an iffy fix). Usually this just resolves itself after you set your correct resolution in game.

With each game I used Nvidia Inspector to force 4x SSAA and 8x Sparse Grid Supersampling.

I also used a mix of the built-in ISO mounting and WinCDEmu to mount the games' media.


My computer (the specs that matter):

  • AMD Ryzen 5900X
navjack / navjack_dldsr.md
Last active January 15, 2022 01:59
deep learning dynamic super resolution compare

Unreal Engine 5 Testing

Today (January 14th, 2022) Nvidia released driver version 511.23 with a new feature drop: Deep Learning Dynamic Super Resolution (DLDSR). This is a big post where I'll throw in screenshots of things using it.

Unreal Engine 5.1

Native 1080P with no in-engine AA

(screenshot)

Native 1080P with TSR

navjack / Benchmarking_In_Progress.md
Last active November 22, 2021 21:25
NavJack27's Testing Stuff

Benchmarking Data For Upcoming Alder Lake Testing (due to popular demand)

So, let me explain myself here. I am curious about Alder Lake, and I'd love to have some benchmarking data that I could add to my existing previous data from other processors. There are many curiosities about Alder Lake as a whole that lead to a ballooning iteration count for the tests. I have existing data for 3.2GHz clock speed tests, so I'd need to run these CPUs at that speed. The L3 cache is interesting as heck in how the different core types might utilize it. I've gotten a request for 4GHz testing too. I'd also have to run these at stock. There is also the DDR4/DDR5 performance difference. I'm sure there are more things too… I'm going to try to list everything here that I'd like to eventually get data for, but when it comes down to it, I'm going to have to self-manage and prune the testing back quite a bit. You'll notice that this doesn't even cover power or temperature or overclocking. It won't and I won't ever. This is already e

navjack / part1.md
Created August 2, 2021 06:42
Jack's thoughts on game engines and ray tracing and nvidia and unreal engine 5 PART 1

i apologize for the misspellings and basically just everything here LOL. i've been wanting to write this for weeks and fuck it, here, i wrote it on here, deal with it

ok hear me out. i've been wanting to write a thing about unreal engine 5 and my thoughts about nvidia and their rtx branded ray tracing.

first of all nvidia is responsible for kick starting all of this. without them we wouldn't be doing all of this... i think... maybe it is actually the advent of DXR making it possible and nvidia kick starting developer imaginations? either way, they do deserve credit, obviously.

so let's get into the meat of things.

lumen fucking rocks! it's universal. you don't need ray tracing hardware to use it. you just need a modern GPU that can do shader model 5/6 and dx12 at this point. if you have ray tracing hardware then you just get more fidelity! lumen will cover essentially infinite-pass global illumination along with real-time reflections. this is the way for the future and it has me so excited!

navjack / main.c
Last active December 3, 2020 21:02
thing to run stuff in a single core? maybe? i still don't know!
//
// main.c
// imfucked
//
// Created by Jack Mangano on 12/2/20.
// Code Stolen from - https://gist.github.com/Coneko/4234842
#include <stdio.h>
#import <pthread.h>
#import <mach/thread_act.h>
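// Note (added for context): mach/thread_act.h declares thread_policy_set(), the
// macOS call for tagging a thread with an affinity hint so the scheduler keeps
// its work on one core rather than hard-pinning it; my assumption is that's what
// the gist linked above uses here.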