@abulka
abulka / fncache.py
Last active April 24, 2024 11:03 — forked from kwarrick/fncache.py
Redis-backed LRU cache decorator in Python.
#!/usr/bin/env python
__author__ = 'Kevin Warrick'
__email__ = 'kwarrick@uga.edu, abulka@gmail.com'
__version__ = '2.0.0'
import pickle
from collections import namedtuple
from functools import wraps
import inspect
from icecream import ic
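The preview above stops at the imports. As a rough illustration of what the description promises, here is a minimal sketch of a Redis-backed LRU cache decorator, assuming the redis-py client; the decorator name, key layout, and capacity parameter are illustrative, and this is not the gist's actual implementation.

import pickle
import time
from functools import wraps

import redis

def redis_lru(capacity=1000, conn=None):
    """Cache results in Redis, evicting the least-recently-used entries."""
    conn = conn or redis.Redis()

    def decorator(func):
        keys = f'lru:{func.__name__}:keys'   # sorted set scored by last access time
        vals = f'lru:{func.__name__}:vals'   # hash mapping call key -> pickled result

        @wraps(func)
        def wrapper(*args, **kwargs):
            key = pickle.dumps((args, sorted(kwargs.items())))
            cached = conn.hget(vals, key)
            if cached is not None:
                conn.zadd(keys, {key: time.time()})   # refresh recency on a hit
                return pickle.loads(cached)
            result = func(*args, **kwargs)
            conn.hset(vals, key, pickle.dumps(result))
            conn.zadd(keys, {key: time.time()})
            overflow = conn.zcard(keys) - capacity    # evict oldest entries beyond capacity
            if overflow > 0:
                oldest = conn.zrange(keys, 0, overflow - 1)
                conn.zrem(keys, *oldest)
                conn.hdel(vals, *oldest)
            return result

        return wrapper

    return decorator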
@julio-kim
julio-kim / CovidStat.js
Last active February 27, 2021 06:34
[Scriptable] COVID-19 confirmed case status
const source = 'http://ncov.mohw.go.kr'
let webView = new WebView()
await webView.loadURL(source)
let covid = await webView.evaluateJavaScript(`
  const baseSelector = 'div.mainlive_container div.liveboard_layout '
  let date = document.querySelector(baseSelector + 'h2 span.livedate').innerText
  let domestic = document.querySelector(baseSelector + 'div.liveNum_today_new ul li:nth-child(1) span.data').innerText
  let overseas = document.querySelector(baseSelector + 'div.liveNum_today_new ul li:nth-child(2) span.data').innerText;
  // Preview cuts off here; presumably the scraped values are handed back to
  // Scriptable as the script's result, e.g.:
  [date, domestic, overseas]
`)
@SuperShinyEyes
SuperShinyEyes / f1_score.py
Created October 15, 2019 10:16
F1 score in PyTorch
def f1_loss(y_true: torch.Tensor, y_pred: torch.Tensor, is_training=False) -> torch.Tensor:
    '''Calculate F1 score. Can work with GPU tensors.

    The original implementation is written by Michal Haltuf on Kaggle.

    Returns
    -------
    torch.Tensor
        `ndim` == 1. 0 <= val <= 1
    '''
    # Preview cuts off here; a sketch of the computation follows below.
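Since the body is missing from the preview, here is a sketch of how such an F1 computation typically looks (epsilon-guarded precision and recall from TP/FP/FN counts); it reconstructs the idea rather than the gist's exact code, and the is_training flag from the signature above is omitted.

import torch

def f1_score_sketch(y_true: torch.Tensor, y_pred: torch.Tensor,
                    epsilon: float = 1e-7) -> torch.Tensor:
    # y_true: 0/1 labels of shape (N,); y_pred: hard 0/1 predictions of shape (N,)
    # or per-class scores of shape (N, 2) reduced with argmax.
    if y_pred.ndim == 2:
        y_pred = y_pred.argmax(dim=1)

    tp = (y_true * y_pred).sum().float()          # true positives
    fp = ((1 - y_true) * y_pred).sum().float()    # false positives
    fn = (y_true * (1 - y_pred)).sum().float()    # false negatives

    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)
    return 2 * precision * recall / (precision + recall + epsilon)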
@XinDongol
XinDongol / profile_pyt.md
Last active March 28, 2022 11:26
How to profile your PyTorch code

Inside profiler

import torch
import torchvision.models as models

model = models.densenet121(pretrained=True)
x = torch.randn((1, 3, 224, 224), requires_grad=True)

with torch.autograd.profiler.profile(use_cuda=True) as prof:
    model(x)  # profile a single forward pass, including CUDA kernel times
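The preview stops after the forward pass. With the same prof object, the results are typically printed as an aggregated table or exported for chrome://tracing; both calls below are part of torch.autograd.profiler.

# Continuing from the `prof` object produced above.
print(prof.key_averages().table(sort_by="cuda_time_total"))
prof.export_chrome_trace("trace.json")   # inspect in chrome://tracing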
@shagunsodhani
shagunsodhani / CurriculumLearning.md
Created May 8, 2016 17:14
Notes for Curriculum Learning paper

Curriculum Learning

Introduction

  • Curriculum Learning - When training machine learning models, start with easier subtasks and gradually increase the difficulty level of the tasks.
  • Motivation comes from the observation that humans and animals seem to learn better when trained with a curriculum-like strategy.
  • Link to the paper.

Contributions of the paper

@mitmul
mitmul / cuda_ubuntu_memo.sh
Last active September 6, 2016 14:51
Set up a CUDA, Anaconda, and Caffe environment on an EC2 g2.2xlarge instance
#!/bin/bash
# Upgrade existing packages and reboot so the new kernel is running.
sudo aptitude update
sudo aptitude full-upgrade -y
sudo reboot

# Add NVIDIA's CUDA 6.5 apt repository for Ubuntu 14.04, then install the
# kernel extras (needed for the NVIDIA driver module) and the CUDA toolkit.
wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_6.5-14_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1404_6.5-14_amd64.deb
sudo aptitude update
sudo aptitude install -y linux-image-extra-virtual
sudo aptitude install -y cuda
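Once the install finishes, a quick sanity check from Python (my own addition, assuming the Caffe Python bindings end up on PYTHONPATH) is to switch pycaffe into GPU mode:

import caffe

caffe.set_device(0)   # the single GPU on a g2.2xlarge
caffe.set_mode_gpu()  # fails if the CUDA install is broken
print('Caffe is using the GPU')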
@gschizas
gschizas / requestsprogress.py
Created September 16, 2012 11:04
Requests with a progress bar in Python
# Method fragment: `self` is placed in the widget list (presumably rendering the
# bytes downloaded so far), and `name`/`file_url` come from the enclosing class.
r = requests.get(file_url)
size = int(r.headers['Content-Length'].strip())
self.bytes = 0
widgets = [name, ": ", Bar(marker="|", left="[", right=" "),
           Percentage(), " ", FileTransferSpeed(), "] ",
           self,
           " of {0}MB".format(str(round(size / 1024 / 1024, 2))[:4])]
pbar = ProgressBar(widgets=widgets, maxval=size).start()
file = []
for buf in r.iter_content(1024):  # preview cuts off here; see the sketch below
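Because the fragment above depends on its enclosing class, here is a self-contained variant of the same idea, assuming the requests and progressbar2 packages (progressbar2 renamed maxval to max_value); the function name and widget layout are my own.

import progressbar
import requests

def download(url, chunk_size=1024):
    r = requests.get(url, stream=True)            # stream so chunks arrive lazily
    size = int(r.headers['Content-Length'].strip())
    widgets = [
        progressbar.Percentage(), ' ',
        progressbar.Bar(marker='|', left='[', right=']'), ' ',
        progressbar.FileTransferSpeed(),
    ]
    pbar = progressbar.ProgressBar(widgets=widgets, max_value=size).start()
    data, done = [], 0
    for buf in r.iter_content(chunk_size):
        data.append(buf)
        done += len(buf)
        pbar.update(done)
    pbar.finish()
    return b''.join(data)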