Skip to content

Instantly share code, notes, and snippets.

View ShivamKumar2002's full-sized avatar
🎯
Focusing

Shivam Kumar ShivamKumar2002

🎯
Focusing
  • India
View GitHub Profile
@rain-1
rain-1 / llama-home.md
Last active June 19, 2024 03:05
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

llama is a text prediction model similar to GPT-2, and the version of GPT-3 that has not been fine tuned yet. It is also possible to run fine tuned versions (like alpaca or vicuna with this. I think. Those versions are more focused on answering questions)

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is possible to run LLama 13B with a 6GB graphics card now! (e.g. a RTX 2060). Thanks to the amazing work involved in llama.cpp. The latest change is CUDA/cuBLAS which allows you pick an arbitrary number of the transformer layers to be run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git, I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
@rain-1
rain-1 / Prompt Injection and AutoGPT.md
Last active September 11, 2023 11:12
Prompt Injection and AutoGPT

Does prompt injection matter to AutoGPT?

Executive summary: If you use AutoGPT, you need to be aware of prompt injection. This is a serious problem that can cause your AutoGPT agent to perform unexpected and unwanted tasks. Unfortunately, there isn't a perfect solution to this problem available yet.

Prompt injection can derail agents

If you set up an AutoGPT agent to perform task A, a prompt injection could 'derail' it into performing task B instead. Task B could be anything. Even something unwanted like deleting your personal files or sending all your bitcoins to some crooks wallet.

Docker helps limit the file system access that agents have. Measures like this are extremely useful. It's important to note that the agent can still be derailed.

@ShMcK
ShMcK / StableDiffusionUI_SageMaker.ipynb
Last active July 12, 2024 14:25
Config for running Automatic1111 Stable Diffusion WebUI on an AWS SageMaker Notebook
Loading
Sorry, something went wrong. Reload?
Sorry, we cannot display this file.
Sorry, this file is invalid so it cannot be displayed.
@rain-1
rain-1 / GPT-4 Reverse Turing Test.md
Last active May 28, 2024 17:40
GPT-4 Reverse Turing Test

The reverse turing test

I asked GPT-4 to come up with 10 questions to determine if the answerer was AI or human.

I provided my own answers for these questions and I also asked ChatGPT to answer them.

The result is that GPT-4 was able to correctly differentiate between AI and Human.

@rain-1
rain-1 / LLM.md
Last active July 25, 2024 18:44
LLM Introduction: Learn Language Models

Purpose

Bootstrap knowledge of LLMs ASAP. With a bias/focus to GPT.

Avoid being a link dump. Try to provide only valuable well tuned information.

Prelude

Neural network links before starting with transformers.

@akii-i
akii-i / lora_convert.py
Last active July 18, 2023 19:13
LORA format conversion from cloneofsimo lora format to AUTOMATIC1111 webui format (kohya's format)
# Convert cloneofsimo lora format to AUTOMATIC1111 webui format (kohya's format)
# Will generate extra embedding files along with the converted lora
# Usage: python lora_convert.py path_to_lora output_folder [--overwrite]
# path_to_lora_file : path to lora safetensors in cloneofsimo lora format
# output_folder : path to folder for results in AUTOMATIC1111 webui format
# overwrite : overwrite the results in output_folder
# Example: python lora_convert.py .\lora_krk.safetensors .\
from safetensors import safe_open
from safetensors.torch import save_file
@TheArchivistsDomain
TheArchivistsDomain / CrackingBitcoin.md
Created August 29, 2022 10:40
Tutorial on how to break into bitcoin accounts with poorly censored private keys.

image Imagine you find a screenshot of a private key for a bitcoin wallet with money in it, but some of the characters are censored. Here is how you crack it.

First, note down the public address and private key: 1KtcmtacFsN5SR2Lt5rn1gfHe9enEHaBoZ L43kTaWVaQGZNfsCrRtZv9n7JUsXimR3x7pqKn??...??1km

Remember, private keys (WIF's) usually have 52 characters in them and only occasionally 51. Use this to figure out how many characters you are missing in the private key.

@kepano
kepano / obsidian-web-clipper.js
Last active July 22, 2024 07:29
Obsidian Web Clipper Bookmarklet to save articles and pages from the web (for Safari, Chrome, Firefox, and mobile browsers)
javascript: Promise.all([import('https://unpkg.com/turndown@6.0.0?module'), import('https://unpkg.com/@tehshrike/readability@0.2.0'), ]).then(async ([{
default: Turndown
}, {
default: Readability
}]) => {
/* Optional vault name */
const vault = "";
/* Optional folder name such as "Clippings/" */
@sindresorhus
sindresorhus / esm-package.md
Last active July 25, 2024 04:47
Pure ESM package

Pure ESM package

The package that linked you here is now pure ESM. It cannot be require()'d from CommonJS.

This means you have the following choices:

  1. Use ESM yourself. (preferred)
    Use import foo from 'foo' instead of const foo = require('foo') to import the package. You also need to put "type": "module" in your package.json and more. Follow the below guide.
  2. If the package is used in an async context, you could use await import(…) from CommonJS instead of require(…).
  3. Stay on the existing version of the package until you can move to ESM.