Cedric Chee (cedrickchee)

cedrickchee / programming-vs-software-engineering.md — Created September 9, 2023

Distinction Between Programming and Software Engineering

"Programming" differs from "software engineering" in dimensionality: programming is about producing code. Software engineering extends that to include the maintenance of that code for its useful life span.

Programming is certainly a significant part of software engineering.

With this distinction, we might need to delineate between programming tasks (development) and software engineering tasks (development, modification, maintenance). The addition of time adds an important new dimension to programming. Software engineering isn't programming.

cedrickchee / bun-1.0.md — Last active September 9, 2023

A First Look at Bun 1.0

Blog post: https://bun.sh/blog/bun-v1.0

Video: Bun 1.0 is here.

What is Bun?

Bun is a fast, all-in-one toolkit for running, building, testing, and debugging JavaScript and TypeScript, from a single file to a full-stack application. Today, Bun is stable and production-ready.

cedrickchee / vim-thoughts.md — Last active August 11, 2023
VIM is more than just a code editor

VIM Thoughts and Software Enshittification

I recently learned a new meme (word?), "enshittification".

Quoting the article:

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

The writer called this enshittification.

cedrickchee / leaked-sys-prompts.md — Last active August 11, 2023
Leaked System Prompts
cedrickchee / llama2c-micro-llm.md — Last active August 11, 2023

Llama2.c — The rise of micro-LLMs (Large Language Models)

Status: Draft

A glimpse into the future of smaller, better models.

I like llama2.c's contributing guide. It motivated me to write this post.

Below is a copy of it.

cedrickchee / llama-home.md — Created July 12, 2023, forked from rain-1/llama-home.md
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text-prediction model similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions (like Alpaca or Vicuna) with this, I think; those versions are more focused on answering questions.

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is possible to run LLaMA 13B with a 6GB graphics card now (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low VRAM.
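The layer-offloading idea can be sanity-checked with a back-of-envelope calculation. The sketch below is illustrative only — the layer count, bits per weight, and VRAM overhead are assumptions, not measurements from llama.cpp — but it shows why a 6GB card can hold most of a 4-bit 13B model's layers. The result is the kind of number you would pass to llama.cpp's `--n-gpu-layers` flag.

```python
# Back-of-envelope estimate of how many quantized transformer layers fit in VRAM.
# All constants below are illustrative assumptions, not measured values.

def layers_that_fit(vram_gb, n_layers=40, n_params=13e9, bits_per_weight=4.5,
                    overhead_gb=1.0):
    """LLaMA 13B has ~40 layers; q4-style quantization costs ~4.5 bits/weight."""
    params_per_layer = n_params / n_layers
    gb_per_layer = params_per_layer * bits_per_weight / 8 / 1e9
    usable = max(vram_gb - overhead_gb, 0)  # reserve some VRAM for KV cache etc.
    return min(n_layers, int(usable / gb_per_layer))

print(layers_that_fit(6))   # a plausible value for llama.cpp's --n-gpu-layers
```

With these assumed numbers, a 6GB card fits roughly two-thirds of the layers; a 24GB card fits all 40, at which point the whole model runs on the GPU.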

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
cedrickchee / software-craftmanship-wisdom.md — Created May 30, 2023

What I've Learned in 45 Years in the Software Industry

BTI360 teammate Joel Goldberg recently retired after working in the software industry for over four decades. When he left, he shared with our team some of the lessons he learned over his career. With his permission, we reshare his wisdom here.

Looking back on four decades in the software industry, I’m struck by how much has changed. I started my career with punch cards and I am ending in the era of cloud computing. Despite all this change, many principles that have helped me throughout my career haven’t changed and continue to be relevant. As I step away from the keyboard, I want to share six ideas I’ve learned from my career as a software engineer.

1. Beware of the Curse of Knowledge

When you know something, it is almost impossible to imagine what it is like not to know that thing. This is the curse of knowledge, and it is the root of countless misunderstandings and inefficiencies. Smart people who are comfortable with complexity can be especially prone to it.

cedrickchee / ai-plugin.json — Created March 25, 2023, forked from danielgross/ai-plugin.json
ChatGPT Plugin for Twilio
{
  "schema_version": "v1",
  "name_for_model": "twilio",
  "name_for_human": "Twilio Plugin",
  "description_for_model": "Plugin for integrating the Twilio API to send SMS messages and make phone calls. Use it whenever a user wants to send a text message or make a call using their Twilio account.",
  "description_for_human": "Send text messages and make phone calls with Twilio.",
  "auth": {
    "type": "user_http",
    "authorization_type": "basic"
  },
cedrickchee / alpaca-native-langchain-chatbot-tutorial.md — Last active October 20, 2023

Creating a chatbot using Alpaca native and LangChain

Let's talk to an Alpaca-7B model using LangChain with a conversational chain and a memory window.

Setup and installation

Install Python packages using pip. Note that, currently, you need to install Hugging Face Transformers from source (GitHub):

$ pip install git+https://github.com/huggingface/transformers
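The "memory window" mentioned above simply keeps the last k exchanges of the conversation in the prompt, so the model sees recent context without the prompt growing forever. Here is a minimal pure-Python sketch of that idea — an illustration only, not LangChain's actual `ConversationBufferWindowMemory` implementation; the `MemoryWindow` class and its methods are made up for this example:

```python
from collections import deque

class MemoryWindow:
    """Keep only the last k (human, ai) exchanges — a sketch of the
    'memory window' idea, not LangChain's real implementation."""
    def __init__(self, k=2):
        self.turns = deque(maxlen=k)  # older exchanges fall out automatically

    def add(self, human, ai):
        self.turns.append((human, ai))

    def as_prompt(self):
        # Render the retained window as conversation history for the prompt.
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

mem = MemoryWindow(k=2)
mem.add("Hi", "Hello!")
mem.add("Name?", "I'm Alpaca.")
mem.add("Weather?", "Sunny.")
print(mem.as_prompt())  # only the last two exchanges remain
```

In LangChain this windowing is handled for you when you wire a conversational chain to a window-style memory with a small `k`; the point is just that older turns silently drop out of the prompt.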
cedrickchee / alpaca-inference.py — Last active March 22, 2023
HuggingFace Transformers inference for Stanford Alpaca (fine-tuned LLaMA)
# Based on: Original Alpaca Model/Dataset/Inference Code by Tatsu-lab
import time, torch
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

tokenizer = LlamaTokenizer.from_pretrained("./checkpoint-1200/")

def generate_prompt(instruction, input=None):
    if input:
        return f"""The following is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.