- AbdBarho's Stable Diffusion WebUI Docker
- PublicPrompt's All-In-One-Pixel-Model - You need a Hugging Face account. Click on "Files and Versions" and then click on "Public-Prompts-Pixel-Model.ckpt" and then click "Download".
This should work on GPUs with as little as 8GB of VRAM, but in practice I've seen usage go up to 9-10GB.
I have only personally tested this to be functional in WSL 2 and Windows 11's latest Dev preview build. Attempts to run natively in Windows didn't work but I won't stop you from trying.
I have personally backed up any remote files that could possibly be lost to time one day. I can provide those if needed.
Now, why is this neat? Why is this cool?
Results from my testing. Note that each game had some funny business related to running fullscreen. If a game just randomly decides to disappear on you but is still showing up in Task Manager, use Alt+Tab or Win+Tab to try to regain focus, or disable fullscreen optimizations (though that's an iffy fix). Usually this resolves itself once you set the correct resolution in game.
With each game I used Nvidia Inspector to force 4x SSAA and 8x Sparse Grid Supersampling.
I also used a mix of the built-in ISO mounting and WinCDEmu to mount the games' media.
My computer (the specs that matter):
- AMD Ryzen 5900X
So, let me explain myself here. I am curious about Alder Lake, and I'd love to have some benchmarking data that I could add to my existing data from other processors. There are many curiosities about Alder Lake as a whole that lead to a ballooning iteration count for the tests. I have existing data for 3.2GHz clock speed tests, so I'd need to run these CPUs at that speed. The L3 cache is interesting as heck in how the different core types might utilize it. I've gotten a request for 4GHz testing too. I'd also have to run these at stock. There is also the DDR4/DDR5 performance difference. I'm sure there are more things too… I'm going to try to list everything here that I'd like to eventually get data for, but when it comes down to it, I'm going to have to self-manage and prune the testing back quite a bit. You'll notice that this doesn't even cover power, temperature, or overclocking. It won't, and I won't, ever. This is already e
- I'm playing an Italian Bud Spencer game
- I dunno what a Voodoo Lounge is but it sounds awesome
- The Game of the Artist Formerly Known as Prince
- Alone in the Dark - First time playing
- Friday the 13th on NES
- Bioforge - The conclusion!? - The cybernetic intrigue continues
- Bioforge - A Cybernetic Nightmare Awaits
- The X-Files Conclusion - Will we find the truth!?
- THE X-FILES GAME
- Steven Spielberg helping me direct a movie with Quentin Tarantino and Jennifer Aniston in it
i apologize for misspelling basically everything here LOL. i've been wanting to write this for weeks, and fuck it, i wrote it here, deal with it
ok hear me out. i've been wanting to write a thing about unreal engine 5 and my thoughts about nvidia and their rtx branded ray tracing.
first of all, nvidia is responsible for kick-starting all of this. without them we wouldn't be doing any of this... i think... or maybe it's actually the advent of DXR making it possible, with nvidia kick-starting developer imaginations? either way, they deserve credit, obviously.
so let's get into the meat of things.
lumen fucking rocks! it's universal. you don't need ray tracing hardware to use it. you just need a modern GPU that can do shader model 5/6 and dx12 at this point. if you have ray tracing hardware then you just get more fidelity! lumen covers essentially infinite-bounce global illumination along with real-time reflections. this is the way of the future and it has me so excited!
//
//  main.c
//  imfucked
//
//  Created by Jack Mangano on 12/2/20.
//  Code stolen from - https://gist.github.com/Coneko/4234842
//
#include <stdio.h>
#include <pthread.h>
#include <mach/thread_act.h>
07-03-2020 7:00am EST
NavJack27 — Jack.Mangano@Gmail.com @ https://TheChipCollective.com