Zackary Lowery xcjs

@nitschis
nitschis / tunneling.md
Last active June 4, 2024 09:21 — forked from tombowditch/tunneling.md
Tunneling a whole docker container through wireguard

Certain company blocking a certain hosting provider? No problem, just tunnel the Docker container through a small VPS with WireGuard.

Consider server A your blocked server and server B your VPS.

Step 1: Generate a keypair on server A and server B

Server A:
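
The actual commands are cut off in this preview. As a rough sketch (assuming the standard wireguard-tools package; these lines are illustrative, not necessarily the gist's exact commands), generating a keypair looks like this:

# Run on server A, then repeat on server B (requires wireguard-tools)
umask 077                                          # keep the private key readable only by you
wg genkey | tee privatekey | wg pubkey > publickey
cat privatekey     # goes into this server's [Interface] PrivateKey
cat publickey      # goes into the other server's [Peer] PublicKey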

@osy
osy / tpm-rant.md
Last active October 5, 2024 16:39
TPM provides zero practical security

TPM (Trusted Platform Module) is about as effective at stopping real attackers as the TSA is at stopping real terrorists. The architecture is fundamentally flawed and most existing implementations are completely broken. I thought this argument was settled decades ago[1], when "trusted computing" was introduced mostly as a way to provide DRM and ownership capabilities to organizations. It largely failed to make an impact on the consumer market when it was introduced back in the early 2000s. Recently, however, certain parties seem intent on reintroducing this failed product to the market. Microsoft argues that in order to use Windows 11, you need TPM 2.0 compatible hardware because[2]:

The Trusted Platform Module (TPM) requirement ena…

@rain-1
rain-1 / llama-home.md
Last active November 9, 2024 03:49
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text-prediction model similar to GPT-2, and to the version of GPT-3 that has not yet been fine-tuned. It should also be possible to run fine-tuned versions (like Alpaca or Vicuna, which are more focused on answering questions) with this.

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B on a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM; a sketch of the build-and-run flow follows the step below.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
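
As a sketch of the overall build-and-run flow referenced above (the model filename and the -ngl value are placeholders, and the LLAMA_CUBLAS flag reflects llama.cpp as of mid-2023, so check the current README):

# Build llama.cpp with cuBLAS so transformer layers can be offloaded to the GPU
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_CUBLAS=1

# Run a quantized 13B model, offloading some layers to the 6GB card;
# raise or lower -ngl (--n-gpu-layers) until it fits in your VRAM
./main -m ./models/llama-13b-q4_0.bin -ngl 32 -p "Building a website can be done in 10 simple steps:"
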
@luzhuomi
luzhuomi / pocketchip_debian10.md
Last active November 14, 2024 23:21
A tutorial to upgrade NXT pocket C.H.I.P to Debian Buster

Pocket Chip Debian 10 Upgrade Guide

The purpose of this tutorial is to walk through the steps required to upgrade the NXT chip (or PocketCHIP) from Debian Jessie to Debian Buster.

If you would like to start your Chip from scratch, follow the steps in the Preparation section.

Preparation (Optional)

You will need a Linux host machine; Ubuntu 18.04 is recommended, though I managed to do it with 20.10 after some tweaks.
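
The upgrade steps themselves are not shown in this preview. For orientation only, the core of a Jessie-to-Buster upgrade usually amounts to repointing APT sources and dist-upgrading, normally one release at a time (jessie to stretch, then stretch to buster). This is a rough sketch, not the gist's exact procedure, and the C.H.I.P. needs the extra kernel/bootloader handling the full guide covers:

# Repoint APT at the next release (repeat per step, e.g. jessie -> stretch -> buster)
sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
# Refresh package lists and perform the full distribution upgrade
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
# Remove packages that are no longer needed
sudo apt-get autoremove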

@MichaelPote
MichaelPote / himawari.ps1
Created February 3, 2016 19:11
Windows PowerShell script to download the latest image from the Himawari-8 satellite, combine the tiles into a single image, convert it to JPG, and then set it as the desktop background.
#
# Himawari-8 Downloader
#
# This script will scrape the latest image from the Himawari-8 satellite, recombining the tiled image,
# converting it to a JPG which is saved in My Pictures\Himawari\ and then set as the desktop background.
#
# http://himawari8.nict.go.jp/himawari8-image.htm
#
@ramiabraham
ramiabraham / rom_suffix_codes.md
Last active October 23, 2024 20:25
Video game rom suffix codes (decoded)

Video game ROM codes

You wouldn't download a car...


Primary ROM codes

Probably what you're looking for

  • [a] Alternate (alternate version of the game, usually trying a different output method)
  • [p] Pirate
@gubatron
gubatron / compiling_building_c_cpp_notes.md
Last active October 2, 2024 03:02
Things to remember when compiling and linking C/C++ programs

Things to remember when compiling/linking C/C++ software

by Angel Leon. March 17, 2015.

Last updated December 14, 2023; previously updated February 27, 2023 and August 29, 2019.
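
The notes themselves are truncated in this preview, but the recurring theme is keeping compile-time flags (headers) and link-time flags (libraries) straight; a minimal illustration with made-up paths and a hypothetical libfoo:

# Compile only: -I adds header search paths, -c stops before linking
g++ -std=c++17 -Wall -I/usr/local/include -c main.cpp -o main.o

# Link: -L adds library search paths, -lfoo picks up libfoo.so or libfoo.a;
# with some linkers order matters, so list objects before the libraries they need
g++ main.o -L/usr/local/lib -lfoo -o myprog

# At runtime the dynamic linker must also find libfoo.so, e.g. via an rpath
g++ main.o -L/usr/local/lib -Wl,-rpath,/usr/local/lib -lfoo -o myprog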

@adunning
adunning / epub.css
Created August 12, 2014 02:51
Better default epub.css for Pandoc (tested with iBooks)
/* This defines styles and classes used in the book */
body { margin: 0; text-align: justify; font-size: medium; font-family: Athelas, Georgia, serif; }
code { font-family: monospace; }
h1 { text-align: left; }
h2 { text-align: left; }
h3 { text-align: left; }
h4 { text-align: left; }
h5 { text-align: left; }
h6 { text-align: left; }
h1.title { }
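
A hedged usage sketch for the stylesheet above (book.md is a placeholder; older pandoc releases used --epub-stylesheet, while current ones accept --css for EPUB output):

# Build an EPUB that embeds this stylesheet
pandoc book.md -o book.epub --css epub.css
# Equivalent on older pandoc 1.x versions:
# pandoc book.md -o book.epub --epub-stylesheet epub.css
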
@nolanlawson
nolanlawson / completion-for-gradle.md
Last active September 25, 2024 12:51
Gradle tab completion for Bash. Works on both Mac and Linux.

Gradle tab completion script for Bash

A tab-completion script for Bash. It relies on the BSD md5 command on Mac and md5sum on Linux, so as long as you have one of those two commands, it should work.

Usage

$ gradle [TAB]
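
Installation is not shown in this preview; the usual pattern for any Bash completion script is roughly the following (the saved filename is a placeholder, and you would copy the script from the gist itself):

# Save the completion script somewhere stable, then source it from your shell startup file
cp gradle-completion.bash ~/.gradle-completion.bash   # filename is a placeholder
echo 'source ~/.gradle-completion.bash' >> ~/.bashrc
source ~/.bashrc    # reload; "gradle <TAB>" should now complete tasks
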
@killercup
killercup / pandoc.css
Created July 3, 2013 11:31
Add this to your Pandoc HTML documents using `--css pandoc.css` to make them look more awesome. (Tested with Markdown and LaTeX.)
/*
* I add this to html files generated with pandoc.
*/
html {
font-size: 100%;
overflow-y: scroll;
-webkit-text-size-adjust: 100%;
-ms-text-size-adjust: 100%;
}
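
A quick usage sketch matching the description above (input and output filenames are placeholders):

# Standalone HTML page with the stylesheet applied via --css
pandoc input.md -s --css pandoc.css -o output.html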