Thananop Kobchaisawat (sapjunior), GitHub Gists
sapjunior / set-access-point.sh
Created March 27, 2019 13:08 — forked from archy-bold/set-access-point.sh
Script that finds the access points for a given network SSID and pins the network's BSSID to the MAC address of the access point with the highest signal strength
#!/bin/bash
# Usage: ./set-access-point.sh [network SSID] [network interface, default wlan0]
# Read in the arguments
ssid=$1
interface=${2:-wlan0}
# SSID is required
if [ -z "$ssid" ]; then
  echo "Usage: $0 <network SSID> [interface]" >&2
  exit 1
fi
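The heart of the forked script is "pick the BSSID with the strongest signal". A minimal Python sketch of that selection step; the scan data here is made up, and the real script parses live wireless scan output rather than a hardcoded list:

```python
def strongest_bssid(scan_results):
    """Given (bssid, signal) pairs, return the BSSID with the
    highest signal strength (dBm: values closer to 0 are stronger)."""
    return max(scan_results, key=lambda entry: entry[1])[0]

# Hypothetical scan results for one SSID seen from three access points
scan = [
    ("aa:bb:cc:dd:ee:01", -70),
    ("aa:bb:cc:dd:ee:02", -48),
    ("aa:bb:cc:dd:ee:03", -60),
]
print(strongest_bssid(scan))  # -> aa:bb:cc:dd:ee:02
```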
sapjunior / idle-shutdown.sh
Created January 13, 2020 05:29 — forked from JustinShenk/idle-shutdown.sh
Google Cloud Platform (GCP) instance idle shutdown
#!/bin/bash
# Add to instance metadata with `gcloud compute instances add-metadata \
# instance-name --metadata-from-file startup-script=idle-shutdown.sh` and reboot
# NOTE: requires `bc`, e.g. sudo apt-get install bc
# Modified from https://stackoverflow.com/questions/30556920/how-can-i-automatically-kill-idle-gce-instances-based-on-cpu-usage
threshold=0.1
count=0
wait_minutes=60
while true
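The logic the truncated loop implements: sample CPU usage once a minute, count consecutive samples below the threshold, and shut down once the count reaches wait_minutes. A Python sketch of that counting rule under those assumptions (the real gist does this in bash with `uptime` and `bc`):

```python
def should_shutdown(cpu_samples, threshold=0.1, wait_minutes=60):
    """Return True once CPU usage stays below `threshold` for
    `wait_minutes` consecutive one-minute samples; any busy
    sample resets the counter."""
    count = 0
    for usage in cpu_samples:
        count = count + 1 if usage < threshold else 0
        if count >= wait_minutes:
            return True
    return False

print(should_shutdown([0.05] * 60))                        # -> True
print(should_shutdown([0.05] * 30 + [0.9] + [0.05] * 30))  # -> False
```

A single busy minute in the middle resets the counter, so the instance needs a full idle hour in a row before it powers off.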
nvidia-docker run -it 0743c2de0ada
sudo apt update
sudo apt install xfonts-thai ffmpeg libgl1-mesa-glx qv4l2 -y
pip install pip --upgrade
pip install opencv-python-headless pyside2 qimage2ndarray scikit-learn scipy tqdm pydub
exit
docker ps -a
### Check the container ID
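After `exit`, `docker ps -a` lists the stopped container, and the ID to note is the first column of each data row. A Python sketch of pulling it out of sample output (the table contents here are illustrative):

```python
# Hypothetical `docker ps -a` output; real output has more columns
sample = """CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS
2fb9c7802a3a   0743c2de0ada   "/bin/bash"   10 minutes ago   Exited (0)"""

# Skip the header row, then take the first whitespace-separated field
container_ids = [line.split()[0] for line in sample.splitlines()[1:]]
print(container_ids)  # -> ['2fb9c7802a3a']
```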
### Frame Diff ###
import cv2
import numpy as np

diffThreshold = 50
inputStream = cv2.VideoCapture(0)
_, currentFrame = inputStream.read()
previousFrame = currentFrame
while inputStream.isOpened():
    ret, currentFrame = inputStream.read()
    if not ret:
        break
    # Per-pixel change mask: max channel difference above the threshold
    diffMask = cv2.absdiff(currentFrame, previousFrame).max(axis=2) > diffThreshold
    previousFrame = currentFrame
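The Frame Diff gist compares consecutive frames against diffThreshold. The same masking step can be checked without a camera on synthetic frames; a self-contained numpy sketch (frame contents are made up):

```python
import numpy as np

diffThreshold = 50
previousFrame = np.zeros((2, 2, 3), dtype=np.uint8)
currentFrame = previousFrame.copy()
currentFrame[0, 0] = (200, 200, 200)  # this pixel changes a lot
currentFrame[1, 1] = (10, 10, 10)     # this pixel changes only slightly

# Same idea as cv2.absdiff followed by a threshold on the max channel diff;
# cast to int16 first so the subtraction cannot wrap around in uint8
frameDiff = np.abs(currentFrame.astype(np.int16) - previousFrame.astype(np.int16))
diffMask = frameDiff.max(axis=2) > diffThreshold
print(diffMask.tolist())  # -> [[True, False], [False, False]]
```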
root@2fb9c7802a3a:/models# polygraphy run sample.onnx --trt --onnxrt --onnx-outputs mark all --trt-outputs mark all --input-shapes input:[1,1,32,512]
[I] Will generate inference input data according to provided TensorMetadata: {input [shape=(1, 1, 32, 512)]}
[I] trt-runner-N0-04/09/21-17:26:01 | Activating and starting inference
[I] Loading bytes from /models/sample.onnx
[TensorRT] WARNING: /home/jenkins/workspace/OSS/L0_MergeRequest/oss/parsers/onnx/onnx2trt_utils.cpp:226: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[W] Loop detected. Please ensure the network is topologically sorted so that layers within the loop body are not marked as network outputs in layerwise mode
[I] Configuring with profiles: [Profile([('input', ShapeTuple(min=[1, 1, 32, 512], opt=[1, 1, 32, 512], max=[1, 1, 32, 512]))])]
[I] Building engine with configuration: max_workspace_size=16777216 (16.00 MB) | tf32=False, fp16=False, int8=False, s
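The INT64 warning in the log above is usually harmless: ONNX exporters emit INT64 shape and index tensors, and TensorRT casts them down to INT32. The cast only loses information when a value falls outside the INT32 range, which a quick numpy check can illustrate (the values here are arbitrary):

```python
import numpy as np

# Hypothetical INT64 values as they might appear in an ONNX model
values = np.array([5, 512, 2**31 + 1], dtype=np.int64)
info = np.iinfo(np.int32)
fits_in_int32 = (values >= info.min) & (values <= info.max)
print(fits_in_int32.tolist())  # -> [True, True, False]
```

Only the last value would be corrupted by the downcast; typical shape values like 512 are safe, which is why the warning can normally be ignored.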
sapjunior / driver-cuda-cudnn.sh
Last active December 9, 2023 21:01
[eikonnex] To install Nvidia driver + CUDA on Ubuntu 22.04
# To install driver
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt install nvidia-driver-515 nvidia-modprobe --no-install-recommends -y
# To install the CUDA 11.7 toolkit
sudo apt install cuda-toolkit-11-7 --no-install-recommends -y
# Docker & Nvidia-docker
curl https://get.docker.com | sh \
&& sudo systemctl --now enable docker