GIT
States
- Modified
- Staged (index)
- Committed
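A quick bash walkthrough of how a change moves through the three states (the file name is just an example):

echo "docs" >> README.md     # edit the working tree   -> modified
git add README.md            # stage it in the index   -> staged
git commit -m "Update docs"  # record the snapshot     -> committed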
patch:
  # Menu
  menu:
    page_size: 8  # number of candidates per page
    # alternative_select_labels: [ ①, ②, ③, ④, ⑤, ⑥, ⑦, ⑧, ⑨, ⑩ ]  # change the candidate labels
    # alternative_select_keys: ASDFGHJKL  # if the encoding scheme uses the number keys, set separate selection keys

  # For settings such as ascii_mode, inline, no_inline, vim_mode, etc., see
  # /Library/Input Methods/Squirrel.app/Contents/SharedSupport/squirrel.yaml

  # Chinese/Western input switching
  #
  # 【good_old_caps_lock】 CapsLock switches to uppercase or toggles between Chinese and English.
Let's say we're trying to load a LLaMA model via AutoModelForCausalLM.from_pretrained
with 4-bit quantization in order to run inference with it:
python generate.py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, LlamaTokenizerFast, LlamaForCausalLM
import transformers
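Continuing the snippet, a minimal sketch of such a 4-bit load. The model id is a placeholder, and the BitsAndBytesConfig fields shown are the standard options from the transformers bitsandbytes integration:

model_id = "huggyllama/llama-7b"  # placeholder; substitute your own checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits on load
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate; places layers on available devices
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))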
// Disable smooth scrolling for users who have set `prefers-reduced-motion` in their operating system
// 1. Place this snippet before the end of the <body> tag,
//    NOT in the <head> tag!
// 2. Make sure it's inside $(function() {})!
$(function() {
  const mediaQuery = window.matchMedia('(prefers-reduced-motion: reduce)');
  if (mediaQuery.matches) $(document).off('click.wf-scroll');
});
nvidia-smi reported that this required 11181 MiB, at least when training on the prompt lengths that occur early in the alpaca dataset (prompts ~337 tokens long).
You can get this down to about 10.9 GB if (by modifying qlora.py) you run torch.cuda.empty_cache()
after PEFT has been applied to your loaded model and before you begin training, as sketched below.
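A sketch of where that call might sit, assuming qlora.py's PEFT wrapping looks roughly like this (peft's get_peft_model and LoraConfig are real APIs; the LoRA hyperparameters and surrounding variable names are illustrative):

import torch
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(task_type="CAUSAL_LM", r=64, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(model, lora_config)  # attach LoRA adapters to the loaded base model

# Release allocator blocks cached while loading/wrapping the model,
# before any training allocations happen.
torch.cuda.empty_cache()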
All instructions are written assuming your command-line shell is bash.
Clone the repository:
Originally, for Python 3.7 and PythonNet 2.4.0, I wrote a snippet of code to transform a NumPy ndarray into a System.Array from the CLR and back again, using pure Python and the ctypes package's memmove function:
https://github.com/pythonnet/pythonnet/issues/514
https://github.com/pythonnet/pythonnet/issues/652
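A minimal sketch of the ndarray-to-System.Array direction in the spirit of those issues (the function name and the float64-only handling are illustrative assumptions; the GCHandle pinning plus ctypes.memmove is the actual technique):

import ctypes
import numpy as np
from System import Array, Double
from System.Runtime.InteropServices import GCHandle, GCHandleType

def ndarray_to_net(arr):
    # Illustrative: handles float64 only; a real version would map dtypes.
    arr = np.ascontiguousarray(arr, dtype=np.float64)
    net_array = Array.CreateInstance(Double, arr.size)
    handle = GCHandle.Alloc(net_array, GCHandleType.Pinned)  # pin so the GC can't move it
    try:
        dest = handle.AddrOfPinnedObject().ToInt64()
        src = arr.__array_interface__['data'][0]
        ctypes.memmove(dest, src, arr.nbytes)  # one raw byte copy, no per-element marshalling
    finally:
        handle.Free()
    return net_array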
However, after the release of PythonNet 2.5.0, some changes to the PythonNet interface introduced small breaks in my code snippet:
// Override in your touch-enabled view (this can be different from the view you use for displaying the cam preview)
@Override
public boolean onTouch(View view, MotionEvent motionEvent) {
    final int actionMasked = motionEvent.getActionMasked();
    if (actionMasked != MotionEvent.ACTION_DOWN) {
        return false;
    }
    if (mManualFocusEngaged) {
        Log.d(TAG, "Manual focus already engaged");
        return true;
    }
    // ... compute the focus/metering region from the touch coordinates
    //     and trigger the autofocus request here
    return true;  // consume the event
}
{"lastUpload":"2021-03-29T14:30:37.960Z","extensionVersion":"v3.4.3"} |
import asyncio
from typing import Any

import openai

async def dispatch_openai_requests(
    messages_list: list[list[dict[str, Any]]],
    model: str,
    temperature: float,
    max_tokens: int,
    top_p: float,
) -> list[Any]:
    # Assumes the pre-1.0 openai SDK, where ChatCompletion.acreate is the
    # async API: create one coroutine per conversation, await them concurrently.
    requests = [
        openai.ChatCompletion.acreate(
            model=model, messages=m, temperature=temperature,
            max_tokens=max_tokens, top_p=top_p,
        )
        for m in messages_list
    ]
    return await asyncio.gather(*requests)
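A usage sketch (the model name, sampling parameters, and prompts are placeholders):

responses = asyncio.run(
    dispatch_openai_requests(
        messages_list=[
            [{"role": "user", "content": "What is 2 + 2?"}],
            [{"role": "user", "content": "Name a prime number."}],
        ],
        model="gpt-3.5-turbo",
        temperature=0.3,
        max_tokens=64,
        top_p=1.0,
    )
)
for r in responses:
    print(r["choices"][0]["message"]["content"])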