I have moved this over to the Tech Interview Cheat Sheet repo, where it has been expanded and even has code challenges you can run and practice against!
// for an updated version see https://github.com/jsdf/react-native-refreshable-listview
var React = require('react-native')
var {
  ListView,
  ActivityIndicatorIOS,
  StyleSheet,
  View,
  Text,
} = React
Here's my own list of the interesting stuff announced during this year's WWDC, collected from the keynotes, various Apple docs, blog posts and tweets.
If you're planning to watch the videos, I really recommend this Mac app that helps you download and watch them: https://github.com/insidegui/WWDC.
http://www.apple.com/osx/elcapitan-preview/
- Split View: two apps side by side in full screen
import android.text.TextUtils;
import io.realm.Case;
import io.realm.Realm;
import io.realm.RealmObject;
import io.realm.RealmResults;

public class RealmFullTextSearch {
    // Case-insensitive search over a single String field of a Realm model class
    // (one possible implementation using Realm's built-in query API).
    public static <T extends RealmObject> RealmResults<T> search(Realm realm, Class<T> modelClass, String query, String fieldName, boolean partialSearch) {
        if (TextUtils.isEmpty(query)) return realm.where(modelClass).findAll();
        // partialSearch: match the query anywhere in the field; otherwise require an exact (case-insensitive) match
        if (partialSearch) return realm.where(modelClass).contains(fieldName, query, Case.INSENSITIVE).findAll();
        return realm.where(modelClass).equalTo(fieldName, query, Case.INSENSITIVE).findAll();
    }
}
Disclaimer: This piece is written anonymously. The names of a few particular companies are mentioned, but as common examples only.
This is a short write-up on things that I wish I'd known and considered before joining a private company (aka startup, aka unicorn in some cases). I'm not trying to make the case that you should never join a private company, but the power imbalance between founder and employee is extreme, and potential candidates would do well to weigh it before signing on.
# Taken from https://johanwind.github.io/2023/03/23/rwkv_details.html.
# I've added additional comments and restructured it a tiny bit, which makes it clearer for me.
import numpy as np
from torch import load as torch_load # Only for loading the model weights
from tokenizers import Tokenizer
exp = np.exp
# Layer norm over the full vector, with learned scale w and bias b.
layer_norm = lambda x, w, b : (x - np.mean(x)) / np.std(x) * w + b
sigmoid = lambda x : 1/(1 + exp(-x))
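To show how these helpers get used, here is the channel-mixing step roughly as it appears in the linked post (the names mix_k, mix_r, Wk, Wr, Wv and last_x follow that write-up; treat this as a sketch of one sub-block, not the full model):

def channel_mixing(x, last_x, mix_k, mix_r, Wk, Wr, Wv):
    # Mix the current input with the previous token's input ("token shift").
    k = Wk @ (x * mix_k + last_x * (1 - mix_k))
    r = Wr @ (x * mix_r + last_x * (1 - mix_r))
    # Squared ReLU, then a gate built from the sigmoid helper above.
    vk = Wv @ np.maximum(k, 0) ** 2
    # Return the gated output and x, which becomes last_x for the next token.
    return sigmoid(r) * vk, x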
This worked on 14/May/23. The instructions will probably require updating in the future.
LLaMA is a text-prediction model similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna, which, I think, are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is possible to run LLaMA 13B with a 6GB graphics card now (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which allows you to pick an arbitrary number of the transformer layers to be run on the GPU. This is perfect for low VRAM.
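If you would rather drive this from Python than from the CLI steps below, the llama-cpp-python bindings expose the same layer-offloading knob. A minimal sketch, assuming you have llama-cpp-python installed with cuBLAS support and a quantized model file (the path and layer count here are just examples):

# Sketch: offload some transformer layers to the GPU via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/13B/ggml-model-q4_0.bin",  # example path to a quantized 13B model
    n_gpu_layers=20,  # number of transformer layers to run on the GPU; tune to your VRAM
)
out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])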
- Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.