fancyerii / make
Created November 16, 2023 04:42
make output
Making all in config
make[1]: Entering directory '/home/lili/openmpi-5.0.0/config'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/home/lili/openmpi-5.0.0/config'
Making all in contrib
make[1]: Entering directory '/home/lili/openmpi-5.0.0/contrib'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/home/lili/openmpi-5.0.0/contrib'
Making all in 3rd-party
make[1]: Entering directory '/home/lili/openmpi-5.0.0/3rd-party'
fancyerii / pmix_info
Created November 16, 2023 04:41
pmix_info output
$ pmix_info
Package: PMIx lili@u-gpu-a100-01.ai.test.ten Distribution
PMIX: 4.2.7a1
PMIX repo revision: v4.2.7
PMIX release date: Unreleased developer copy
PMIX Standard: 4.2
PMIX Standard ABI: Stable (0.0), Provisional (0.0)
Prefix: /usr/local
Configured architecture: pmix.arch
Configure host: u-gpu-a100-01.ai.test.ten
gist:842b1f2700865e464ae33157acc4d2d7
============================================================================
== Configuring Open MPI
============================================================================
*** Prerequisites
checking for a sed that does not truncate output... /usr/bin/sed
checking for perl... perl
*** Startup tests
gist:d89d4d887b020eec1d756f606c3bc8bf
diff --git a/src/llama_recipes/configs/datasets.py b/src/llama_recipes/configs/datasets.py
index 62230e4..3e4c76f 100644
--- a/src/llama_recipes/configs/datasets.py
+++ b/src/llama_recipes/configs/datasets.py
@@ -15,8 +15,8 @@ class samsum_dataset:
 @dataclass
 class grammar_dataset:
     dataset: str = "grammar_dataset"
-    train_split: str = "src/llama_recipes/datasets/grammar_dataset/gtrain_10k.csv"
-    test_split: str = "src/llama_recipes/datasets/grammar_dataset/grammar_validation.csv"
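The hunk above replaces the `grammar_dataset` split paths, but the preview is truncated before the `+` lines, so the new values are not visible. As a minimal sketch of the config pattern being edited (field names and the original paths taken from the `-`/context lines; everything else is standard `dataclass` boilerplate, not from the source):

```python
from dataclasses import dataclass

@dataclass
class grammar_dataset:
    # Field names come from the diff; the replacement paths are not shown
    # in the truncated hunk, so the pre-change values are kept here.
    dataset: str = "grammar_dataset"
    train_split: str = "src/llama_recipes/datasets/grammar_dataset/gtrain_10k.csv"
    test_split: str = "src/llama_recipes/datasets/grammar_dataset/grammar_validation.csv"

cfg = grammar_dataset()
print(cfg.dataset, cfg.train_split)
```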
gist:feb49e0191f5fc36802f424a91f9d475
import tensorflow as tf
from tensor2tensor import models
from tensor2tensor import problems
from tensor2tensor.utils import trainer_lib
from tensor2tensor.utils import t2t_model
from tensor2tensor.utils import registry
from tensor2tensor.utils.trainer_lib import create_hparams
import os
import numpy as np
train.py
import os
import sys
import numpy as np
import collections
import matplotlib.pyplot as plt
# Colab-only TensorFlow version selector
import tensorflow as tf
from tensor2tensor import models
from tensor2tensor import problems
from tensor2tensor.layers import common_layers
from tensor2tensor.utils import trainer_lib
gist:fa04cea4e94cf9408c5d6091697fd9fa
diff --git a/infer.py b/infer.py
index 3f31004..ec15d47 100644
--- a/infer.py
+++ b/infer.py
@@ -63,7 +63,8 @@ def infer(args):
     dev_count = 1
     gpu_id = 0
     phase = "test"
-    place = fluid.CUDAPlace(gpu_id)
+    #place = fluid.CUDAPlace(gpu_id)
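The hunk above comments out the hard-coded `fluid.CUDAPlace(gpu_id)` so inference no longer assumes a GPU. A minimal, framework-agnostic sketch of the same fallback pattern (the helper name `pick_place` and the string return type are hypothetical, not from the source or the PaddlePaddle API):

```python
def pick_place(use_cuda: bool, gpu_id: int = 0) -> str:
    # Hypothetical helper mirroring the diff's intent: select a GPU device
    # only when CUDA is explicitly requested, otherwise fall back to CPU.
    return f"gpu:{gpu_id}" if use_cuda else "cpu"

print(pick_place(False))      # CPU fallback when CUDA is disabled
print(pick_place(True, 2))    # explicit GPU selection
```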
gist:f2bbd19a5025ad8649e85b6da70d6526
2020-07-17 08:57:45,864 [INFO ] main org.pytorch.serve.ModelServer -
Torchserve version: 0.1.1
TS Home: /home/lili/env-huggface/lib/python3.6/site-packages
Current directory: /home/lili/codes/huggface-transformer/test2
Temp directory: /tmp
Number of GPUs: 1
Number of CPUs: 8
Max heap size: 7938 M
Python executable: /home/lili/env-huggface/bin/python3.6
Config file: N/A
gist:a2ecc6d1696a6c03542a9c42e3b9083e
from abc import ABC
import json
import logging
import os
import torch
from transformers import BertModel, BertTokenizer
from torch import nn
from ts.torch_handler.base_handler import BaseHandler
gist:d4cbf64151a0a80b0da196fd1e23cd1b
import transformers
from transformers import BertModel, BertTokenizer, AdamW, get_linear_schedule_with_warmup
import numpy as np
import torch
import random
from torch.utils.data import Dataset, DataLoader
from torch import nn, optim
from collections import defaultdict
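The imports above pull in `get_linear_schedule_with_warmup`, which scales the learning rate by a factor that ramps linearly from 0 to 1 over the warmup steps and then decays linearly back to 0 by the end of training. A pure-Python sketch of that multiplier's shape (my own re-derivation of the documented behavior, not the transformers implementation):

```python
def linear_warmup_factor(step: int, warmup_steps: int, total_steps: int) -> float:
    # Warmup phase: factor grows linearly from 0 to 1.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    # Decay phase: factor shrinks linearly from 1 to 0, clamped at 0.
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

The scheduler multiplies each parameter group's base learning rate by this factor at every optimizer step.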