@adtac
adtac / Dockerfile
Last active April 13, 2024 22:33
#!/usr/bin/env docker run
#!/usr/bin/env -S bash -c "docker run -p 8080:8080 -it --rm \$(docker build --progress plain -f \$0 . 2>&1 | tee /dev/stderr | grep -oP 'sha256:[0-9a-f]*')"
# syntax = docker/dockerfile:1.4.0
FROM node:20
WORKDIR /root
RUN npm install sqlite3
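To use a self-building Dockerfile like this, you would typically mark the file executable and run it directly; the env -S shebang then has bash build the file ($0), pull the image id out of the build output, and docker run it with port 8080 published. A minimal usage sketch, assuming Docker is installed and env supports -S:

chmod +x Dockerfile
./Dockerfile
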
@rain-1
rain-1 / llama-home.md
Last active April 28, 2024 18:42
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

llama is a text prediction model similar to GPT-2, or to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions (like Alpaca or Vicuna) with this, I think; those versions are more focused on answering questions.

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
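A sketch of the build and run commands such a setup typically used at the time (exact flags depend on the llama.cpp version and model file, and the layer count of 18 is illustrative, not a figure from this guide):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d
make clean && LLAMA_CUBLAS=1 make -j                            # cuBLAS build; requires the CUDA toolkit
./main -m models/13B/ggml-model-q4_0.bin --n-gpu-layers 18 -p "Hello"   # offload some transformer layers to the 6GB GPU
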
@lizthegrey
lizthegrey / attributes.rb
Last active February 24, 2024 14:11
Hardening SSH with 2fa
default['sshd']['sshd_config']['AuthenticationMethods'] = 'publickey,keyboard-interactive:pam'
default['sshd']['sshd_config']['ChallengeResponseAuthentication'] = 'yes'
default['sshd']['sshd_config']['PasswordAuthentication'] = 'no'
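For reference, attributes like these typically render to sshd_config lines such as the following (the exact output depends on the sshd cookbook's template, and keyboard-interactive:pam assumes a PAM module such as pam_google_authenticator is configured to supply the second factor):

AuthenticationMethods publickey,keyboard-interactive:pam
ChallengeResponseAuthentication yes
PasswordAuthentication no
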
@swalkinshaw
swalkinshaw / tutorial.md
Last active November 13, 2023 08:40
Designing a GraphQL API
@depp
depp / problem.cpp
Last active December 21, 2021 19:04
A Faster Solution
// Faster solution for:
// http://www.boyter.org/2017/03/golang-solution-faster-equivalent-java-solution/
// With threading.
// g++ -std=c++11 -Wall -Wextra -O3 -pthread
// On my computer (i5-6600K, 3.50 GHz, 4 cores), this takes about 160 ms after the CPU
// has warmed up, or about 80 ms if the CPU is cold (due to Turbo Boost).
// How it works: Start by generating a list of losing states -- states where the
// game can end in one turn. Generate a new list of states by running the game
# Add this file to `spec/support/rails_log_splitter.rb`
require "stringio"
require "logger"
require "fileutils"
class RailsLogSplitter
def initialize(all: false)
@io = StringIO.new
@logger = Logger.new(@io)
@danielrw7
danielrw7 / replify
Last active October 24, 2023 12:03
replify - Create a REPL for any command
#!/bin/sh
command="${*}"
printf "Initialized REPL for `%s`\n" "$command"
printf "%s> " "$command"
read -r input
while [ "$input" != "" ];
do
eval "$command $input"
printf "%s> " "$command"
@sj26
sj26 / buildkite-agent.service
Created May 27, 2016 04:57
Starting multiple buildkite-agents per-machine with systemd
[Unit]
Description=Buildkite Agents
Documentation=https://buildkite.com/agent
Wants=buildkite-agent@1.service
Wants=buildkite-agent@2.service
# ...
[Service]
Type=oneshot
ExecStart=/bin/true
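This wrapper unit does nothing itself; it only pulls in the per-instance units. A sketch of the matching template unit (buildkite-agent@.service) such a setup assumes; the user, paths, and options here are illustrative, not taken from this gist:

[Unit]
Description=Buildkite Agent %i
After=network.target

[Service]
User=buildkite-agent
ExecStart=/usr/bin/buildkite-agent start
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target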

Creating a redis Module in 15 lines of code!

A quick guide to writing a very, very simple "ECHO"-style module for Redis and loading it. It's not really useful of course, but the idea is to illustrate how little boilerplate it takes.

Step 1: open your favorite editor and write/paste the following code in a file called module.c

#include "redismodule.h"
/* ECHO <string> - Echo back a string sent from the client */
int EchoCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
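  /* The excerpt cuts off here. A hedged sketch of how the rest of such a
   * module typically continues, using the standard Redis modules API; this is
   * not necessarily the guide's exact code. */
  if (argc != 2) return RedisModule_WrongArity(ctx);
  RedisModule_ReplyWithString(ctx, argv[1]);
  return REDISMODULE_OK;
}

/* Entry point Redis calls when the module is loaded: register the command. */
int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
  if (RedisModule_Init(ctx, "example", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
    return REDISMODULE_ERR;
  if (RedisModule_CreateCommand(ctx, "example.echo", EchoCommand, "readonly", 0, 0, 0) == REDISMODULE_ERR)
    return REDISMODULE_ERR;
  return REDISMODULE_OK;
}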