Hsin-lin Cheng (lancetw)

Original post: "Becoming unable to write loops (or: recursion addiction), level 10"
http://d.hatena.ne.jp/yuki_neko_nyan/20090217/1234850409

level 0
Can't write recursion and can't think recursively. Figures loops are all you ever need.

level 1
Has started learning recursion, but finds thinking recursively a chore and sometimes forgets the termination condition. Decides it's all too much trouble and goes back to writing loops.
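
For concreteness, a minimal Python sketch (not from the original post) of the two styles these levels contrast; the recursive version is where the forgotten termination condition bites:

# Summing 0..n with a loop (the level-0 comfort zone)...
def sum_loop(n):
    total = 0
    for i in range(n + 1):
        total += i
    return total

# ...and with recursion. Drop the n == 0 base case (the level-1
# mistake) and the function recurses forever.
def sum_rec(n):
    if n == 0:
        return 0
    return n + sum_rec(n - 1)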
lancetw / hch.py
Last active December 4, 2016 13:42 — forked from lanfon72/hch.py
NTU Hospital Hsin-Chu branch ER board.
#!/usr/bin/env python
# coding: UTF-8
import requests, json, re
from datetime import datetime

# Fetch the ER status board page (verify=False skips TLS certificate checks).
html = requests.get('http://reg.ntuh.gov.tw/EmgInfoBoard/NTUHEmgInfoT4.aspx', verify=False)

# Field names for the four pending counts shown on the board.
keys = ['pending_doctor', 'pending_ward', 'pending_icu', 'pending_bed']

# Extract every <td> cell as an (attributes, text) tuple.
pending0 = re.findall(r"<td(.*?)>(.+?)</td>", html.text)
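
The gist preview is cut off here. A hypothetical continuation might zip the first few cell texts with the keys above; the cell-to-key pairing is an assumption for illustration, not the original code:

# Hypothetical continuation: pair the first four <td> texts with the
# key names, assuming the board lists the counts in that order.
values = [text.strip() for _, text in pending0[:len(keys)]]
info = dict(zip(keys, values))
info['updated_at'] = datetime.now().isoformat()
print(json.dumps(info, ensure_ascii=False))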

Scaling your API with rate limiters

The following are examples of the four types of rate limiters discussed in the accompanying blog post. In the examples below I've used pseudocode-like Ruby, so even if you're unfamiliar with Ruby you should be able to translate this approach to other languages easily. Complete examples in Ruby are also provided later in this gist.

In most cases you'll want all these examples to be classes, but I've used simple functions here to keep the code samples brief.

Request rate limiter

This uses a basic token bucket algorithm and relies on the fact that Redis scripts execute atomically. No other operations can run between fetching the count and writing the new count.
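
A minimal sketch of that idea in Python with redis-py (the gist's own Ruby examples are not in this excerpt); the key names, refill rate, and bucket capacity are illustrative assumptions, not values from the post:

import time
import redis

r = redis.Redis()

# Lua script: refill the bucket based on elapsed time, then try to take
# one token. Redis executes the whole script atomically, so nothing can
# interleave between reading and writing the bucket state.
TOKEN_BUCKET = """
local tokens_key = KEYS[1]
local ts_key     = KEYS[2]
local rate       = tonumber(ARGV[1])  -- tokens added per second
local capacity   = tonumber(ARGV[2])  -- maximum bucket size
local now        = tonumber(ARGV[3])

local tokens = tonumber(redis.call('get', tokens_key)) or capacity
local last   = tonumber(redis.call('get', ts_key)) or now
tokens = math.min(capacity, tokens + (now - last) * rate)

local allowed = 0
if tokens >= 1 then
  tokens = tokens - 1
  allowed = 1
end
redis.call('set', tokens_key, tokens)
redis.call('set', ts_key, now)
return allowed
"""

take_token = r.register_script(TOKEN_BUCKET)

def allow_request(user_id, rate=10, capacity=100):
    """True if the request fits in the user's bucket, False if it should be rejected."""
    keys = ['rl:%s:tokens' % user_id, 'rl:%s:ts' % user_id]
    return take_token(keys=keys, args=[rate, capacity, time.time()]) == 1

Because the whole read-modify-write runs inside one script, two concurrent requests can never both observe the same token count.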

const actions = [
  () => Promise.resolve(0),
  () => Promise.resolve(1),
  () => Promise.resolve(2),
  () => Promise.resolve(3),
  () => Promise.reject(new Error('4')),
  () => Promise.resolve(5),
  () => Promise.reject(new Error('6')),
  () => Promise.reject(new Error('7'))
]

// A larger, all-resolving batch of tasks for exercising the same pattern at scale.
const manyActions = []
for (let i = 0; i < 10000; i++) {
  manyActions.push(() => Promise.resolve(i))
}

// Run the tasks strictly one after another, accumulating results in order.
const process = (tasks) =>
  tasks.reduce(
    (promised, task) => promised.then(acc => task().then(value => [...acc, value])),
    Promise.resolve([])
  )

process(manyActions)
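
Because reduce chains each task onto the promise returned by the one before it, the tasks run strictly in sequence and the results accumulate in order. Note that a single rejection (like action 4 in the first array) rejects the entire chain, which is why the 10,000-task run only uses resolving tasks.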
lancetw / longest_chinese_tokens_gpt4o.py
Created May 14, 2024 13:36 — forked from ctlllll/longest_chinese_tokens_gpt4o.py
Longest Chinese tokens in gpt4o
import tiktoken
import langdetect

# Measure the decoded length of every token in GPT-4o's o200k_base vocabulary.
T = tiktoken.get_encoding("o200k_base")
length_dict = {}
for i in range(T.n_vocab):
    try:
        length_dict[i] = len(T.decode([i]))
    except:
        pass  # some token ids cannot be decoded on their own
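
The preview ends mid-loop. Given the otherwise unused langdetect import, a plausible continuation (an assumption, not the original gist) sorts tokens by decoded length and keeps the ones detected as Chinese:

# Hypothetical continuation: rank token ids by decoded length and print
# the longest ones that langdetect classifies as Chinese.
ranked = sorted(length_dict.items(), key=lambda kv: kv[1], reverse=True)
for token_id, length in ranked[:100]:
    text = T.decode([token_id])
    try:
        if langdetect.detect(text) in ('zh-cn', 'zh-tw'):
            print(token_id, length, repr(text))
    except Exception:
        pass  # langdetect raises on strings with no detectable language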