Mark Saroufim (msaroufim)
# Clean debug build of PyTorch from source; ccache and ninja speed up rebuilds
git clone https://github.com/pytorch/pytorch
conda install ccache -c conda-forge
pip install ninja
cd pytorch
git submodule deinit -f .   # reset submodules to a pristine state
git clean -xdf              # remove all untracked files, including build artifacts
python setup.py clean
git submodule update --init --recursive
# Debug build with optional backends disabled for a faster compile
DEBUG=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 BUILD_TEST=0 USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 USE_FLASH_ATTENTION=0 USE_MEM_EFF_ATTENTION=0 python setup.py develop
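The flags on the last line are environment variables read by PyTorch's setup.py. When scripting the build (e.g. from CI), the same switches can be set programmatically; a minimal sketch using only the flags from the command above:

```python
import os
import subprocess

# Build-time switches read by PyTorch's setup.py; values mirror the
# command above: a debug build with optional backends compiled out.
BUILD_FLAGS = {
    "DEBUG": "1",
    "USE_DISTRIBUTED": "0",
    "USE_MKLDNN": "0",
    "BUILD_TEST": "0",
    "USE_FBGEMM": "0",
    "USE_NNPACK": "0",
    "USE_QNNPACK": "0",
    "USE_XNNPACK": "0",
    "USE_FLASH_ATTENTION": "0",
    "USE_MEM_EFF_ATTENTION": "0",
}

build_env = {**os.environ, **BUILD_FLAGS}

# Uncomment to actually run the build from inside the checkout:
# subprocess.run(["python", "setup.py", "develop"], env=build_env, cwd="pytorch", check=True)
```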
# Generate a UML class diagram of the TorchServe frontend (pyreverse ships with pylint)
sudo apt-get install pylint
pyreverse serve/ts
dot -Tpng classes.dot -o serve.png

Notes from reading the generated diagram:
  • The DenseNet handler should be moved out of core and into an example
  • The model loader takes a max pixel size argument but should be generic over any model type
  • The context class is very useful to look at
  • The Argmax model seems out of place
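The context class called out above is the object TorchServe hands to a handler's initialize. A dependency-free sketch of that interaction, with the context stubbed out (the attribute names system_properties and manifest follow TorchServe's Context class; the handler and stub values are illustrative, not from the source):

```python
from types import SimpleNamespace

class DummyModelHandler:
    """Skeleton of a TorchServe-style handler; initialize receives the context."""

    def __init__(self):
        self.model_dir = None
        self.device = None
        self.initialized = False

    def initialize(self, context):
        # TorchServe's Context exposes system_properties (paths, gpu id)
        # and manifest (model metadata from the .mar archive).
        props = context.system_properties
        self.model_dir = props.get("model_dir")
        self.device = "cuda" if props.get("gpu_id") is not None else "cpu"
        self.initialized = True

# Stub standing in for the real ts.context.Context object
stub_context = SimpleNamespace(
    system_properties={"model_dir": "/tmp/model", "gpu_id": None},
    manifest={"model": {"modelName": "densenet_example"}},
)

handler = DummyModelHandler()
handler.initialize(stub_context)
```

Keeping handlers generic over the context like this is also what makes them easy to unit-test outside a running TorchServe instance.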

Title

Hello all

Here is some code

def hello():
  print("hello")
import os
import json
from pathlib import Path

import torch
import transformers
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          AutoModelForQuestionAnswering,
                          AutoModelForTokenClassification, AutoConfig)
# note: transformers' AdamW is deprecated upstream; torch.optim.AdamW is the replacement
from transformers import set_seed, AdamW, get_scheduler
from datasets import load_dataset
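These imports suggest a fine-tuning script; the get_scheduler("linear", optimizer, num_warmup_steps=..., num_training_steps=...) call scales the base learning rate by a warmup-then-decay factor. A dependency-free sketch of that factor (the warmup and total step counts are arbitrary example values):

```python
def linear_schedule_factor(step, num_warmup_steps, num_training_steps):
    """Multiplier applied to the base learning rate at a given step:
    ramps linearly 0 -> 1 during warmup, then decays linearly back to 0."""
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    return max(
        0.0,
        (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps),
    )

# 10 warmup steps out of 50 total: peak multiplier of 1.0 at the end of warmup
factors = [linear_schedule_factor(s, 10, 50) for s in range(51)]
```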
> uname -m && cat /etc/*release
x86_64
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
ubuntu@ip-172-31-1-60:~/model_analyzer$ docker run -it --rm --gpus all \
> -v /var/run/docker.sock:/var/run/docker.sock \
> -v $HOME/model_analyzer/examples/quick-start:/quick_start_repository \
> --net=host --name model-analyzer \
> model-analyzer /bin/bash
root@ip-172-31-1-60:/opt/triton-model-analyzer# mkdir analysis_results
root@ip-172-31-1-60:/opt/triton-model-analyzer# model-analyzer -m /quick_start_repository -n add_sub --triton-launch-mode=local --export-path=analysis_results
2021-05-03 18:08:04.55 INFO[entrypoint.py:210] Triton Model Analyzer started: config={'model_repository': '/quick_start_repository', 'model_names': [{'model_name': 'add_sub', 'objectives': {'perf_throughput': 10}, 'parameters': {'batch_sizes': [1], 'concurrency': []}}], 'objectives': {'perf_throughput': 10}, 'constraints': {}, 'batch_sizes': [1], 'concurrency': [], 'perf_analyzer_timeout': 600, 'perf_analyzer_cpu_util': 80.0, 'run_config_search_max_concurrency': 1024, 'run_config_search_max_instanc
root@ip-172-31-1-60:/opt/triton-model-analyzer# rm -r output_model_repository/
root@ip-172-31-1-60:/opt/triton-model-analyzer# ls
CONTRIBUTING.md Dockerfile LICENSE README.md VERSION analysis_results build_wheel.sh docs examples helm-chart model_analyzer qa requirements.txt setup.py tests wheels
root@ip-172-31-1-60:/opt/triton-model-analyzer# model-analyzer -m /quick_start_repository -n add_sub --triton-launch-mode=local --export-path=analysis_results
2021-04-30 22:06:10.93 INFO[entrypoint.py:210] Triton Model Analyzer started: config={'model_repository': '/quick_start_repository', 'model_names': [{'model_name': 'add_sub', 'objectives': {'perf_throughput': 10}, 'parameters': {'batch_sizes': [1], 'concurrency': []}}], 'objectives': {'perf_throughput': 10}, 'constraints': {}, 'batch_sizes': [1], 'concurrency': [], 'perf_analyzer_timeout': 600, 'perf_analyzer_cpu_util': 80.0, 'run_config_search_max_concurrency': 1024, 'run_config_search_max_instance_count': 5, 'run_config_search_disable': False, '
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Grid
{
    private int width;
    private int height;
    private float cellSize;
    private float cellDistance;

    public Grid(int width, int height, float cellSize)
    {
        this.width = width;
        this.height = height;
        this.cellSize = cellSize;
    }
}
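A class like Grid above typically exists to convert between world positions and cell indices via cellSize; a Python sketch of that arithmetic (the function names and the origin-at-zero assumption are mine, not from the snippet):

```python
import math

def world_to_cell(x, y, cell_size):
    """Map a world-space position to integer grid indices (grid origin at 0,0)."""
    return math.floor(x / cell_size), math.floor(y / cell_size)

def cell_to_world(cx, cy, cell_size):
    """World-space center of a grid cell."""
    return (cx + 0.5) * cell_size, (cy + 0.5) * cell_size
```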
""" Full assembly of the parts to form the complete network """
import torch
import torch.nn as nn
import torch.nn.functional as F
class UNet(nn.Module):
def __init__(self, n_channels, n_classes, bilinear=False):
super(UNet, self).__init__()
self.n_channels = n_channels