To start working with Read/Write Context, create a pair of Provider/Consumer components, just as you would with the React Context API:
let CounterContext = createContext();
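The context object can then be used much like a React Context. A minimal sketch, assuming the library's Provider/Consumer mirror React's API (the value shape and components below are illustrative, not from the library's docs):

function App() {
  return (
    <CounterContext.Provider value={0}>
      <CounterContext.Consumer>
        {count => <span>Count: {count}</span>}
      </CounterContext.Consumer>
    </CounterContext.Provider>
  );
}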
System:    Host: vit-HP-ProBook-650-G2 Kernel: 4.4.0-75-generic x86_64 (64 bit gcc: 5.4.0)
           Desktop: Cinnamon 3.2.8 (Gtk 3.18.9) dm: lightdm Distro: Ubuntu 16.04 xenial
Machine:   System: HP (portable) product: HP ProBook 650 G2 Chassis: type: 10
           Mobo: HP model: 8125 v: KBC Version 23.37 Bios: HP v: N87 Ver. 01.01 date: 01/05/2016
CPU:       Quad core Intel Core i5-6440HQ (-MCP-) cache: 6144 KB
           flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 20702
           clock speeds: min/max: 800/3500 MHz 1: 876 MHz 2: 862 MHz 3: 893 MHz 4: 871 MHz
Graphics:  Card: Intel Skylake Integrated Graphics bus-ID: 00:02.0 chip-ID: 8086:191b
           Display Server: X.Org 1.18.4 drivers: intel (unloaded: fbdev,vesa)
           Resolution: 1920x1080@60.01hz, 1920x1080@60.00hz
import com.datastax.driver.core.*;

public class GettingStartedTwo {
    public static void main(String[] args) {
        // connect to the cluster and open a session (contact point is illustrative)
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // run a simple query and read a single row back
        ResultSet results = session.execute("SELECT release_version FROM system.local");
        Row row = results.one();
        System.out.println(row.getString("release_version"));
        cluster.close();
    }
}
# Output JSON-esque logs for logstash to parse easily.
http {
  # ...
  log_format logstash_json '{"@timestamp": "$time_iso8601", '
                           '"remote_addr": "$remote_addr", '
                           '"remote_user": "$remote_user", '
                           '"body_bytes_sent": "$body_bytes_sent", '
                           '"request_time": "$request_time", '
                           '"status": "$status", '
                           '"request": "$request", '
                           '"http_user_agent": "$http_user_agent"}';

  # reference the format from an access_log directive (log path is illustrative)
  access_log /var/log/nginx/access.json logstash_json;
}
// default exports | |
export default 42; | |
export default {}; | |
export default []; | |
export default foo; | |
export default function () {} | |
export default class {} | |
export default function foo () {} | |
export default class foo {} |
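On the importing side all of these forms are consumed the same way; the local binding name is the importer's choice. A quick sketch (the module path is hypothetical):

// a default export is imported without braces, under any name you like
import anything from './some-module';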
# put this in your .bash_profile
pull_request() {
  to_branch=$1
  if [ -z "$to_branch" ]; then
    to_branch="master"
  fi
  # try the upstream remote if possible, otherwise origin will do
  upstream=$(git config --get remote.upstream.url)
  origin=$(git config --get remote.origin.url)
  remote_url=${upstream:-$origin}
  # build a compare URL for the current branch (assumes a GitHub remote
  # and the macOS `open` command)
  repo=$(echo "$remote_url" | sed -e 's#^git@github.com:#https://github.com/#' -e 's#\.git$##')
  branch=$(git rev-parse --abbrev-ref HEAD)
  open "$repo/compare/$to_branch...$branch"
}
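With the function defined, opening a pull request against a given branch is a single command (the branch name here is just an example):

pull_request develop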
Rules for publishing job vacancies in Frontend UA Jobs / Dev-UA Jobs
This worked on 14/May/23; the instructions will probably require updating in the future.
LLaMA is a text-prediction model, similar to GPT-2 and to the version of GPT-3 that has not yet been fine-tuned. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs; it can only use a single GPU.
It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work that has gone into llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low VRAM.
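As a rough sketch of what that looks like in practice (the --n-gpu-layers flag comes with the cuBLAS build; the model path, layer count, and prompt are illustrative):

# build llama.cpp with cuBLAS support, then offload 20 transformer layers to the GPU
make clean && make LLAMA_CUBLAS=1
./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 20 -p "Hello"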
Tested at llama.cpp commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.