@MohamedAlaa
MohamedAlaa / tmux-cheatsheet.markdown
Last active July 4, 2024 17:58
tmux shortcuts & cheatsheet

start new:

tmux

start new with session name:

tmux new -s myname
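
The preview cuts the cheatsheet off here; the next few entries continue in the same pattern (standard tmux commands, not shown in this preview):

attach to named session:

tmux attach -t myname

list sessions:

tmux ls

kill session:

tmux kill-session -t myname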
@roachhd
roachhd / README.md
Last active July 4, 2024 03:12
EMOJI cheatsheet 😛😳😗😓🙉😸🙈🙊😽💀💢💥✨💏👫👄👃👀👛👛🗼🔮🔮🎄🎅👻

EMOJI CHEAT SHEET

Emoji emoticons listed on this page are supported on Campfire, GitHub, Basecamp, Redbooth, Trac, Flowdock, Sprint.ly, Kandan, Textbox.io, Kippt, Redmine, JabbR, Trello, Hall, plug.dj, Qiita, Zendesk, Ruby China, Grove, Idobata, NodeBB Forums, Slack, Streamup, OrganisedMinds, Hackpad, Cryptbin, Kato, Reportedly, Cheerful Ghost, IRCCloud, Dashcube, MyVideoGameList, Subrosa, Sococo, Quip, And Bang, Bonusly, Discourse, Ello, and Twemoji Awesome. However, some of the emoji codes are not super easy to remember, so here is a little cheat sheet. ✈ Got Flash enabled? Click the emoji code and it will be copied to your clipboard.

People

:bowtie: 😄

@anvaka
anvaka / 00.Intro.md
Last active July 4, 2024 09:20
npm rank

This gist is updated daily via a cron job and lists stats for npm packages:

  1. Top 1,000 most depended-upon packages
  2. Top 1,000 packages with largest number of dependencies
  3. Top 1,000 packages with the highest PageRank score (a toy PageRank sketch follows this list)
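
For readers unfamiliar with item 3: PageRank treats each dependency edge as a vote, so a package ranks highly when many (themselves highly ranked) packages depend on it. Below is a toy power-iteration sketch, not anvaka's actual pipeline; the graph and all names are illustrative:

# Hypothetical sketch: PageRank over an npm-style dependency graph.
# `graph` maps each package to the packages it depends on.
def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, deps in graph.items():
            if deps:
                # A package "votes" for each package it depends on
                share = damping * rank[node] / len(deps)
                for dep in deps:
                    new_rank[dep] += share
            else:
                # Dangling node: spread its rank evenly across the graph
                for other in nodes:
                    new_rank[other] += damping * rank[node] / n
        rank = new_rank
    return rank

deps = {
    "express": ["debug"],
    "request": ["lodash", "debug"],
    "debug": [],
    "lodash": [],
}
top = sorted(pagerank(deps).items(), key=lambda kv: -kv[1])
print(top)  # debug ranks highest: two packages depend on it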
@wzed
wzed / guzzle_pool_example.php
Created February 8, 2017 08:57
GuzzleHttp\Pool example: identifying responses to concurrent async requests
<?php
/*
* Using a key => value pair with the yield keyword is
* the cleanest method I could find to add identifiers or tags
* to asynchronous concurrent requests in Guzzle,
* so you can identify which response is from which request!
*/
$client = new GuzzleHttp\Client(['base_uri' => 'http://httpbin.org']);
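
The preview truncates the gist after the client is created. A minimal sketch of how the yielded key => value pattern is typically used with GuzzleHttp\Pool; the 'first'/'second' tags and httpbin endpoints below are illustrative:

$requests = function () use ($client) {
    // The yielded key becomes the $index argument in the callbacks below
    yield 'first' => function () use ($client) {
        return $client->getAsync('/get');
    };
    yield 'second' => function () use ($client) {
        return $client->getAsync('/delay/1');
    };
};

$pool = new GuzzleHttp\Pool($client, $requests(), [
    'concurrency' => 5,
    'fulfilled' => function ($response, $index) {
        // $index is the tag yielded above ('first' or 'second')
        echo $index . ' => ' . $response->getStatusCode() . PHP_EOL;
    },
    'rejected' => function ($reason, $index) {
        echo $index . ' failed: ' . $reason->getMessage() . PHP_EOL;
    },
]);

$pool->promise()->wait();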
@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active May 14, 2024 05:46
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
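
The preview shows only the license header. For context, the gist's core is a QLoRA fine-tune of Llama 2 on the Guanaco dataset via trl's SFTTrainer. A condensed sketch of that recipe, assuming the 2023-era trl/peft/bitsandbytes APIs (newer trl releases move dataset_text_field and max_seq_length into SFTConfig); the hyperparameters below are illustrative:

# Condensed sketch (not the full gist): QLoRA fine-tuning of Llama 2
# on Guanaco. 2023-era APIs assumed; hyperparameters are illustrative.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# Load the base model quantized to 4-bit so it fits on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Train small LoRA adapters on top of the frozen 4-bit base weights
peft_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM"
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # Guanaco stores the full prompt in "text"
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,
        learning_rate=2e-4,
        max_steps=500,
        fp16=True,
    ),
)
trainer.train()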
@adrienbrault
adrienbrault / llama2-mac-gpu.sh
Last active July 1, 2024 05:32
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
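The preview stops at the MODEL export. A plausible continuation under the gist's setup, assuming TheBloke's GGML conversion is still hosted at this Hugging Face path and the 2023-era ./main binary (newer llama.cpp builds use GGUF files and a llama-cli binary instead):

# Download the 4-bit quantized weights (URL is an assumption; verify it)
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
# Run an interactive chat with Metal GPU inference (-ngl 1 offloads to the GPU)
./main -m "./${MODEL}" -t 8 -n 256 -ngl 1 --color -ins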