Jumping Qu (jumping)

🎯 Focusing
Shanghai, China
@jumping
jumping / gh-check
Created May 29, 2022 23:42 — forked from lilydjwg/gh-check
gh-check: speed test to known GitHub IPs
#!/usr/bin/python3
import asyncio
import time
import socket
import argparse
import aiohttp
class MyConnector(aiohttp.TCPConnector):
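The preview cuts off at the connector subclass; judging from the imports, the forked script customizes aiohttp's TCPConnector so requests are directed at specific GitHub IPs, but that logic is not shown here. As a rough standalone sketch of the same speed-test idea (the IP list, port, and timeout below are placeholders, not values from the gist), one could simply time plain TCP connects with asyncio:

#!/usr/bin/python3
# Hypothetical sketch, not the forked gist's code: time TCP connects to known GitHub IPs.
import asyncio
import time

GITHUB_IPS = ["140.82.112.3", "140.82.113.3", "140.82.114.3"]  # placeholder list

async def check(ip: str, port: int = 443, timeout: float = 5.0) -> None:
    start = time.monotonic()
    try:
        _, writer = await asyncio.wait_for(asyncio.open_connection(ip, port), timeout)
    except (OSError, asyncio.TimeoutError):
        print(f"{ip}\tunreachable")
        return
    print(f"{ip}\t{(time.monotonic() - start) * 1000:.1f} ms")
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    # Test all candidate IPs concurrently.
    await asyncio.gather(*(check(ip) for ip in GITHUB_IPS))

if __name__ == "__main__":
    asyncio.run(main())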
@jumping
jumping / sender.go
Created January 31, 2023 07:30 — forked from douglasmakey/sender.go
Golang - send an email with attachments.
package main
import (
"bytes"
"encoding/base64"
"fmt"
"io/ioutil"
"mime/multipart"
"net/smtp"
"os"
@jumping
jumping / llama2-mac-gpu.sh
Created September 12, 2023 14:32 — forked from adrienbrault/llama2-mac-gpu.sh
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
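The preview stops right after the MODEL export; the remaining steps of the script (fetching the quantized .bin and launching inference) are cut off here. As a rough sketch of that final step, assuming the model file has already been downloaded next to the freshly built binary, one can drive llama.cpp's main executable with Metal offload from a few lines of Python (the flags and prompt are illustrative, not the gist's exact invocation):

# Hypothetical wrapper around the built llama.cpp binary;
# paths, flags, and prompt are illustrative, not taken from the gist.
import os
import subprocess

model = os.environ.get("MODEL", "llama-2-13b-chat.ggmlv3.q4_0.bin")

subprocess.run(
    [
        "./main",
        "-m", model,       # quantized GGML weights, downloaded beforehand
        "-ngl", "1",       # offload layers to the Metal GPU
        "-n", "256",       # number of tokens to generate
        "-p", "Tell me a joke about llamas.",
    ],
    check=True,
)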