I rarely see the classical three-tier architecture in the wild; I frequently see a different architecture.
I don't know this architecture's name. Do you?
The "three-tier architecture" has been the reference pattern for Internet services:
VERSION = \"1.0.0\" | |
PREFIX ?= out | |
INCDIR = inc | |
SRCDIR = src | |
LANG = c | |
OBJDIR = .obj | |
MODULE = binary_name | |
CC = gcc |
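The fragment stops at the variable definitions. A minimal sketch of the rules such a Makefile typically pairs with these variables; the pattern rule, the mkdir calls, and the flags are my assumptions, not from the original:

    # Assumed completion: compile each $(SRCDIR)/*.$(LANG) into $(OBJDIR),
    # then link the objects into $(PREFIX)/$(MODULE).
    SRCS := $(wildcard $(SRCDIR)/*.$(LANG))
    OBJS := $(SRCS:$(SRCDIR)/%.$(LANG)=$(OBJDIR)/%.o)

    $(PREFIX)/$(MODULE): $(OBJS)
    	@mkdir -p $(PREFIX)
    	$(CC) -o $@ $^

    $(OBJDIR)/%.o: $(SRCDIR)/%.$(LANG)
    	@mkdir -p $(OBJDIR)
    	$(CC) -I$(INCDIR) -c -o $@ $<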
I was one of the people who didn't vote for ThePhd's keynote. Originally that was simply because I preferred another promising candidate; later, after hearing technical concerns from an expert (concerns I mostly agreed with), it was also because of the topic itself, although I must admit I am no expert on that topic.
The candidate I originally voted for got a few approving comments at first but ended up mostly ignored, largely because of our lack of process and proper voting.
When it was brought up in the leadership chat that 'there are concerns', I focused on the talk itself rather than on the process failure. That was a mistake, for which I apologize. I am not an expert on this topic, and I should not have rushed into talking about things outside my expertise, even under pressure.
In another situation this could have been a minor mistake with no consequences, just one of several opinions in a discussion, but in this situation it became part
This worked on 14/May/23. The instructions will probably require updating in the future.
LLaMA is a text-prediction model, similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It should also be possible to run fine-tuned versions (like Alpaca or Vicuna, which are more focused on answering questions) with this, I think.
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change adds CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.
08737ef720f0510c7ec2aa84d7f70c691073c35d
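For reference, the steps looked roughly like this. The build flag and option names are the ones llama.cpp used around that commit and may have changed since, and the model path is my assumption:

    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d
    make LLAMA_CUBLAS=1    # build with CUDA/cuBLAS support

    # Offload some transformer layers to the GPU; tune --n-gpu-layers
    # up or down until the model fits in your 6GB of VRAM.
    ./main -m models/13B/ggml-model-q4_0.bin --n-gpu-layers 18 -p "Hello"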
package main

import (
	"bufio"
	"fmt"
	"io"
	"math"
	"os"
	"runtime"
	"sort"
)
I always gripe about Python not having a useful (i.e. performant and widely adopted) built-in array type, and about NumPy not distinguishing a "vector of vectors" from a "matrix", but this still surprised me.
It seems that NumPy uses intersection logic to check a in b:
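A minimal reconstruction of the surprise (the array values are my own, made up for illustration):

    import numpy as np

    a = np.array([1, 99])
    b = np.array([[1, 2], [3, 4]])

    # Python's `a in b` calls b.__contains__(a), which NumPy implements
    # as roughly (b == a).any(): broadcast an elementwise comparison,
    # then reduce with any().
    print(a in b)  # True, even though [1, 99] is not a row of b

So membership is true as soon as any element matches anywhere, which behaves like an intersection test rather than row containment.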
This gist contains a sorted list of programming-related Discord invite links. If you want to say hi, this is my current hangout.
I've been debating for weeks whether or not I was going to write any of this down. I'm a dad with two kids and a house to take care of and a business to run. Adding story-telling like this to my plate is exhausting.
I had decided to forget about the whole thing, until yesterday, when I received the email that broke the camel's back, as it were.
The best way I can describe why I'm writing this is that it's the same reason you might spend two hours on the phone with an uncooperative mobile carrier to get them to remove a $5 charge that shouldn't be on your bill: some combination of frustration and injustice that really pushes my proverbial buttons.
In this particular case, the "$5 charge on my phone bill" turned out to be literally hundreds of recurring subscription invoices that Stripe disabled collection for because, apparently, those subscriptions required "location inputs".
Generally speaking, I don't blog much anymore, and the last thing I wa
Question                Real      Guess     Correct?  RelErr  Digits
What is 18857 - 592?    18265     18265     Yes         0.0%  5/5
What is 30752 - 3087?   27665     27365     No          1.1%  4/5
What is 2241 + 19873?   22114     22114     Yes         0.0%  5/5
What is 5412 + 10169?   15581     15581     Yes         0.0%  5/5
What is 11831 - 9178?   2653      3153      No         18.8%  2/4
What is 1701 * 19933?   33906033  33953373  No          0.1%  4/8
What is 11648 + 17851?  29499     30509     No          3.4%  1/5
What is 29253 - 6202?   23051     22151     No          3.9%  3/5
What is 27365 + 24989?  52354     53554     No          2.3%  3/5
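Reading the columns: Real is the true answer and Guess is the model's output. RelErr appears to be |Guess - Real| / Real (e.g. |27365 - 27665| / 27665 ≈ 1.1%), and Digits seems to count the digit positions where the guess matches the true answer.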