Cheng Li (cli99), GitHub gists
device_map = {
    "model.embed_tokens": "cpu",
    "model.layers.0": "cpu",
    "model.layers.1": "cpu",
    "model.layers.2": "cpu",
    "model.layers.3": "cpu",
    "model.layers.4": 1,
    "model.layers.5": 1,
    "model.layers.6": 1,
    "model.layers.7": 1,
from vllm import LLM, SamplingParams
model_id = "/mnt/workdisk/chengli/models/llama3.1/llama-70b-instruct"
tensor_parallel_size = 4
llm = LLM(
    model=model_id,
    tensor_parallel_size=tensor_parallel_size,
)
prompts = [
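The preview stops at the prompt list; a minimal sketch of how the rest of such a script typically looks (the prompt and sampling settings are assumptions):

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    # each RequestOutput carries the prompt and its generated completions
    print(output.outputs[0].text)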

Problem

I have two GitHub accounts: oanhnn (personal) and superman (for work). I want to use both accounts on the same computer, without typing a password every time I do a git push or pull.

Solution

Use SSH keys and define host aliases in the SSH config file, one alias per account; a short sketch follows the steps below.

How to?

  1. Generate an SSH key pair for each account and add the public keys to the corresponding GitHub accounts.
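A minimal sketch of the key generation and the host-alias config, assuming ed25519 keys; the key file names and the example repo path are placeholders:

# generate one key per account and add each .pub file to the matching GitHub account
ssh-keygen -t ed25519 -C "personal" -f ~/.ssh/id_ed25519_oanhnn
ssh-keygen -t ed25519 -C "work" -f ~/.ssh/id_ed25519_superman

# ~/.ssh/config: one host alias per account
Host github.com
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519_oanhnn

Host github-superman
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519_superman

# work repos are then cloned through the alias, so the right key is picked up, e.g.
# git clone git@github-superman:company/project.git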
@cli99
cli99 / private_fork.md
Created April 12, 2024 18:22 — forked from 0xjac/private_fork.md
Create a private fork of a public repository

The repository for the assignment is public and GitHub does not allow the creation of private forks of public repositories.

The correct way of creating a private fork by duplicating the repo is documented here.

For this assignment the commands are:

  1. Create a bare clone of the repository. (This is temporary and will be removed so just do it wherever.)

git clone --bare git@github.com:usi-systems/easytrace.git
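The preview stops at step 1; the rest of the forked gist follows the usual mirror-push pattern. A sketch, assuming the new private repository is created as <your-username>/easytrace-private:

# create a new private repository on GitHub, then mirror-push the bare clone into it
cd easytrace.git
git push --mirror git@github.com:<your-username>/easytrace-private.git

# remove the temporary bare clone and work from the private copy instead
cd ..
rm -rf easytrace.git
git clone git@github.com:<your-username>/easytrace-private.git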

@cli99
cli99 / mixex_te_torch_chkpt_faulty.py
Last active March 12, 2024 16:34
mixex_te_torch_chkpt_faulty.py
import os
from functools import partial
import torch
from torch.distributed.fsdp import FullyShardedDataParallel, MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    apply_activation_checkpointing,
    checkpoint_wrapper,
)
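The preview only shows the imports; a sketch of how they normally fit together, assuming a hypothetical TransformerBlock layer class and bf16 throughout:

# TransformerBlock is a placeholder for whatever layer class the real model uses;
# model is the un-wrapped transformer (not shown in the preview)
auto_wrap_policy = partial(
    transformer_auto_wrap_policy, transformer_layer_cls={TransformerBlock}
)
mixed_precision = MixedPrecision(
    param_dtype=torch.bfloat16,
    reduce_dtype=torch.bfloat16,
    buffer_dtype=torch.bfloat16,
)
model = FullyShardedDataParallel(
    model, auto_wrap_policy=auto_wrap_policy, mixed_precision=mixed_precision
)
# apply activation checkpointing to the same layer class after FSDP wrapping
apply_activation_checkpointing(
    model,
    checkpoint_wrapper_fn=checkpoint_wrapper,
    check_fn=lambda module: isinstance(module, TransformerBlock),
)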
@cli99
cli99 / databricks_chkpt.py
Last active March 11, 2024 16:00
no-te-fsdp test
# Copyright (c) 2022-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# See LICENSE for license information.
import os
import argparse
import warnings
warnings.filterwarnings("ignore")
import torch
from composer.utils import get_device
from omegaconf import OmegaConf as om
from llmfoundry.models.mpt.modeling_mpt import ComposerMPTCausalLM
from composer.core import Precision
from composer import Trainer
import transformer_engine.pytorch as te
from transformer_engine.common import recipe
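The preview again shows only the setup; a minimal, standalone sketch of the Transformer Engine pieces it imports, separate from the Composer wiring (layer size, dtype, and recipe settings are assumptions):

# run a single TE layer under FP8 autocast with a delayed-scaling recipe
layer = te.Linear(768, 768, bias=True, params_dtype=torch.bfloat16).cuda()
x = torch.randn(16, 768, device="cuda", dtype=torch.bfloat16)
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)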
@cli99
cli99 / vscode_tunnel.sh
Created February 15, 2024 17:12
vscode tunnel
# unregister the tunnel when the script exits
trap '/tmp/code tunnel unregister' EXIT
# download and unpack the standalone VS Code CLI
cd /tmp && curl -Lk 'https://code.visualstudio.com/sha/download?build=stable&os=cli-alpine-x64' --output vscode_cli.tar.gz
tar -xf vscode_cli.tar.gz
# start the tunnel (blocks until interrupted)
/tmp/code tunnel --accept-server-license-terms --name mml-dev-01
@cli99
cli99 / torchdynamo_example.py
Last active February 3, 2024 05:52
torchdynamo example
import torch
from typing import List

def my_fn(x):
    x = x * 2
    x = x.tolist()
    x += [1, 2]
    # back to a torch tensor
    x = torch.tensor(x)
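The .tolist() round-trip is what makes this an interesting TorchDynamo example; a short usage sketch (the input shape is an assumption):

compiled_fn = torch.compile(my_fn)
# .tolist() forces a graph break: Dynamo falls back to eager for that part of the function
compiled_fn(torch.randn(4))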