Hou Tianze houtianze

@q3k
q3k / hashes.txt
Last active May 16, 2024 16:49
liblzma backdoor strings extracted from 5.6.1 (from a built-in trie)
0810 b' from '
0678 b' ssh2'
00d8 b'%.48s:%.48s():%d (pid=%ld)\x00'
0708 b'%s'
0108 b'/usr/sbin/sshd\x00'
0870 b'Accepted password for '
01a0 b'Accepted publickey for '
0c40 b'BN_bin2bn\x00'
06d0 b'BN_bn2bin\x00'
0958 b'BN_dup\x00'
@jauderho
jauderho / gist:6b7d42030e264a135450ecc0ba521bd8
Last active July 26, 2024 06:40
HOWTO: Upgrade Raspberry Pi OS from Bullseye to Bookworm
### WARNING: READ CAREFULLY BEFORE ATTEMPTING ###
#
# Officially, this is not recommended. YMMV
# https://www.raspberrypi.com/news/bookworm-the-new-version-of-raspberry-pi-os/
#
# This mostly works if you are on 64-bit. You are on your own if you are on 32-bit or mixed 64/32-bit
#
# Credit to anfractuosity and fgimenezm for figuring out additional details for kernels
#
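The gist's actual steps are not shown in this preview. As a rough sketch only, assuming the standard Debian-style in-place upgrade (not necessarily the gist's exact procedure):

# back up first, then point apt at bookworm and upgrade
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
sudo apt update && sudo apt full-upgrade
sudo apt autoremove --purge
sudo reboot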
@rain-1
rain-1 / llama-home.md
Last active June 19, 2024 03:05
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text prediction model similar to GPT-2 and to the version of GPT-3 that has not yet been fine-tuned. It is also possible to run fine-tuned versions with this (such as Alpaca or Vicuna, which are more focused on answering questions).

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
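As a sketch of the build-and-run steps matching the cuBLAS change described above (the model path, layer count, and prompt are placeholders; flag names are those used by llama.cpp around this commit):

# build with cuBLAS so transformer layers can be offloaded to the GPU
make LLAMA_CUBLAS=1
# offload 32 layers to the GPU; adjust -ngl to fit your VRAM, model path is a placeholder
./main -m ./models/13B/ggml-model-q4_0.bin -ngl 32 -p "Hello"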
@alkampfergit
alkampfergit / winget-upgrade.ps1
Last active January 14, 2024 19:38
Winget upgrade output parsed into a real Powershell Object
class Software {
    [string]$Name
    [string]$Id
    [string]$Version
    [string]$AvailableVersion
}
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
$upgradeResult = winget upgrade | Out-String
@brunerd
brunerd / maclTackle.command
Last active June 25, 2024 20:29
A hacky example to clean the com.apple.macl attribute from a file using zip to sidestep SIP on Catalina
#!/bin/bash
#clean the com.apple.macl attribute from a file or folders using zip to sidestep SIP on Catalina
#WARNING: This will overwrite the original file/folders with the zipped version - DO NOT use on production data
#hold down command key at launch or touch /tmp/debug to enable xtrace command expansion
commandKeyDown=$(/usr/bin/python -c 'import Cocoa; print Cocoa.NSEvent.modifierFlags() & Cocoa.NSCommandKeyMask > 1')
[ "$commandKeyDown" = "True" -o -f /tmp/debug ] && set -x && xtraceFlag=1
#hacky example to clean the com.apple.macl attribute from a file using zip to sidestep SIP
: <<-EOL
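The preview cuts off above; as a minimal illustration of the zip round-trip idea the description refers to (the file name is a placeholder, and this overwrites the original, so try it on a scratch copy only):

zip -r /tmp/nomacl.zip ./somefile      # archive the file (zip does not carry com.apple.macl along)
unzip -o /tmp/nomacl.zip               # overwrite the original with the attribute-free copy
xattr -l ./somefile                    # verify com.apple.macl is gone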
@IanColdwater
IanColdwater / twittermute.txt
Last active July 2, 2024 02:25
Here are some terms to mute on Twitter to clean your timeline up a bit.
Mute these words in your settings here: https://twitter.com/settings/muted_keywords
ActivityTweet
generic_activity_highlights
generic_activity_momentsbreaking
RankedOrganicTweet
suggest_activity
suggest_activity_feed
suggest_activity_highlights
suggest_activity_tweet
@kislayverma
kislayverma / steve-yegge-google-platform-rant.md
Created December 26, 2019 07:11
A copy (for posterity) of Steve Yegge's internal memo in Google about what platforms are and how Amazon learnt to build them

I was at Amazon for about six and a half years, and now I've been at Google for that long. One thing that struck me immediately about the two companies -- an impression that has been reinforced almost daily -- is that Amazon does everything wrong, and Google does everything right. Sure, it's a sweeping generalization, but a surprisingly accurate one. It's pretty crazy. There are probably a hundred or even two hundred different ways you can compare the two companies, and Google is superior in all but three of them, if I recall correctly. I actually did a spreadsheet at one point but Legal wouldn't let me show it to anyone, even though recruiting loved it.

I mean, just to give you a very brief taste: Amazon's recruiting process is fundamentally flawed by having teams hire for themselves, so their hiring bar is incredibly inconsistent across teams, despite various efforts they've made to level it out. And their operations are a mess; they don't really have SREs and they make engineers pretty much do everything,

@weimeng23
weimeng23 / 24-bit-truecolor.sh
Last active November 23, 2023 08:07
Test if your terminal supports 24-bit truecolor
#!/bin/bash
#
# This file echoes four gradients with 24-bit color codes
# to the terminal to demonstrate their functionality.
# The foreground escape sequence is ^[38;2;<r>;<g>;<b>m
# The background escape sequence is ^[48;2;<r>;<g>;<b>m
# <r> <g> <b> range from 0 to 255 inclusive.
# The escape sequence ^[0m returns output to default
SEPARATOR=':'
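Independently of the gradient loop in the gist, a single 24-bit sequence is enough for a quick manual check:

# orange text on a dark blue background, then reset; garbled colors mean no truecolor support
printf '\x1b[38;2;255;128;0m\x1b[48;2;0;0;95m TRUECOLOR \x1b[0m\n'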
@jamesmacfie
jamesmacfie / README.md
Created October 22, 2019 02:53
iTerm 2 - script to change theme depending on Mac OS dark mode

How to use

In iTerm2, in the menu bar go to Scripts > Manage > New Python Script

Select Basic, then select Long-Running Daemon.

Give the script a decent name (I chose auto_dark_mode.py)

Save and open the script in your editor of choice.
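The script body itself is not included in this preview. As a side note (not the gist's code), macOS exposes the current appearance to the shell, which is handy for testing:

# prints "Dark" in dark mode; the key does not exist in light mode, so the command errors instead
defaults read -g AppleInterfaceStyle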

@FradSer
FradSer / iterm2_switch_automatic.md
Last active June 20, 2024 14:14
Switch iTerm2 color presets automatically based on macOS dark mode.

The latest beta (3.5) includes separate color settings for light & dark mode. Toggling dark mode automatically switches colors.

Visit the iTerm2 homepage or use brew install iterm2-beta to download the beta. Thanks @stefanwascoding.


  1. Add switch_automatic.py to ~/Library/ApplicationSupport/iTerm2/Scripts/AutoLaunch with: