
@vdt
vdt / is_fine_tuning_valuable.md
Created March 27, 2024 04:53 — forked from hamelsmu/is_fine_tuning_valuable.md
My thoughts re: Is fine tuning still valuable?

Here is my personal opinion about the questions I posed in this tweet:


I think that fine-tuning is still very valuable in many situations. I’ve done some more digging and I find that people who say that fine-tuning isn't useful are indeed often working on products where fine-tuning isn't likely to be useful:

  • They are making developer tools - foundation models have been trained extensively on coding tasks.
  • They are building foundation models and testing for the most general cases. But the foundation models themselves are also being trained for the most general cases.
  • They are building a personal assistant that isn't scoped to any particular domain or use case, which puts them in essentially the same position as the folks building foundation models.
@vdt
vdt / Mock Exam 1.md
Created February 27, 2024 13:47 — forked from luckylittle/Mock Exam 1.md
Prometheus Certified Associate (PCA)

Mock Exam 1

Q1. The metric node_cpu_temp_celsius reports the current temperature of a node's CPUs in Celsius. What query will return the average temperature across all CPUs on a per-node basis? The query should return:

{instance="node1"} 23.5 // average temp across all CPUs on node1
{instance="node2"} 33.5 // average temp across all CPUs on node2

node_cpu_temp_celsius{instance="node1", cpu="0"} 28
node_cpu_temp_celsius{instance="node1", cpu="1"} 19
node_cpu_temp_celsius{instance="node2", cpu="0"} 36
node_cpu_temp_celsius{instance="node2", cpu="1"} 31

Over the past few days, this great article by Sam Pruden has been making the rounds in the gamedev community. While the article provides an in-depth analysis, it is easy to miss its point and draw the wrong conclusions from it. As such, users unfamiliar with Godot internals have in many cases used it to support points such as the following:

  • Godot C# support is inefficient
  • Godot API and binding system is designed around GDScript
  • Godot is not production ready

In this brief article, I will shed a bit more light on how the Godot binding system works and give some detail on the Godot

@vdt
vdt / CoC.ml
Created August 8, 2023 03:40 — forked from Hirrolot/CoC.ml
How to implement dependent types in 80 lines of code
type term =
| Lam of (term -> term)       (* lambda abstraction, encoded with higher-order abstract syntax *)
| Pi of term * (term -> term) (* dependent function type: domain plus codomain family *)
| Appl of term * term         (* application *)
| Ann of term * term          (* a term annotated with its type *)
| FreeVar of int              (* free variable, identified by its de Bruijn level *)
| Star                        (* the sort of types *)
| Box                         (* the sort of Star *)
let unfurl lvl f = f (FreeVar lvl) (* open a binder by applying it to a fresh variable at level lvl *)
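
As a quick illustration of how these constructors compose (my own example, not code from the gist), the polymorphic identity function annotated with its Pi type can be written directly as an OCaml value:

(* \A. \x. x, annotated with the type Pi (A : Star). Pi (_ : A). A *)
let id_term =
  Ann
    ( Lam (fun _a -> Lam (fun x -> x)),
      Pi (Star, fun a -> Pi (a, fun _ -> a)) )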
@vdt
vdt / finetune_llama_v2.py
Created July 20, 2023 05:32 — forked from younesbelkada/finetune_llama_v2.py
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
@vdt
vdt / llama-home.md
Created May 14, 2023 14:43 — forked from rain-1/llama-home.md
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text prediction model similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).

It is possible to run LLaMA 13B with a 6GB graphics card now (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which allows you to pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low VRAM; a sketch of what that looks like on the command line follows the setup steps below.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
    • git clone https://github.com/ggerganov/llama.cpp.git
  • cd llama.cpp
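
As a rough illustration of the layer-offloading step described above (once llama.cpp has been built with cuBLAS), the command looks something like the following; the model path and layer count here are placeholders, not values from the gist:

./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 18 -p "Hello"

The --n-gpu-layers (also -ngl) option is the knob in question: the more layers you offload, the faster generation gets, up to whatever fits in the 6GB of VRAM.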
@vdt
vdt / macOS Internals.md
Created May 9, 2023 15:03 — forked from kconner/macOS Internals.md
macOS Internals

macOS Internals

Understand your Mac and iPhone more deeply by tracing the evolution of Mac OS X from prerelease to Swift. John Siracusa delivers the details.

Starting Points

How to use this gist

You've got two main options:

@vdt
vdt / GPT-4 Reverse Turing Test.md
Created March 26, 2023 12:09 — forked from rain-1/GPT-4 Reverse Turing Test.md
GPT-4 Reverse Turing Test

The reverse Turing test

I asked GPT-4 to come up with 10 questions to determine if the answerer was AI or human.

I provided my own answers for these questions and I also asked ChatGPT to answer them.

The result is that GPT-4 was able to correctly differentiate between the AI and the human.

20 million digits of pi in under a minute with Julia

I recently discovered a relatively obscure algorithm for calculating the digits of pi: https://en.wikipedia.org/wiki/Gauss–Legendre_algorithm. Well, at least obscure compared to Chudnovsky's. Wikipedia notes that it is "memory-intensive" but is it really? Let's compare to the MPFR pi function:

function gauss_legendre(prec)
    setprecision(BigFloat, prec, base=10)
    GC.enable(false)
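
The gist's gauss_legendre function is cut off in this preview. For context, here is a separate, simplified sketch of the Gauss-Legendre (AGM) recurrence in Julia, my own illustration rather than the gist's code:

# Simplified Gauss-Legendre (AGM) iteration for pi at BigFloat precision.
# Converges quadratically: each iteration roughly doubles the number of correct digits.
function gauss_legendre_pi(iters::Integer)
    a = BigFloat(1)
    b = 1 / sqrt(BigFloat(2))
    t = BigFloat(1) / 4
    p = BigFloat(1)
    for _ in 1:iters
        a_next = (a + b) / 2
        b = sqrt(a * b)
        t -= p * (a - a_next)^2
        a = a_next
        p *= 2
    end
    return (a + b)^2 / (4t)
end

setprecision(BigFloat, 4096)   # bits of working precision
gauss_legendre_pi(10)          # ~10 iterations already give over a thousand digits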