@Nana2929
Ching Wen Yang
This file has been truncated; an excerpt follows.
阿爸 a1'ba4 18137
阿昌族 a1'chang1'zu2 50849
阿斗 a1'dou3 42632
阿飞 a1'fei1 48603
阿富汗 a1'fu4'han4 3461
阿訇 a1'hong1 34432
阿拉伯数字 a1'la1'bo2'shu4'zi4 35937
阿拉伯语 a1'la1'bo2'yu3 30476
阿妈 a1'ma1 16220
阿门 a1'men2 47913
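Each entry is a headword, an apostrophe-joined pinyin transcription with tone numbers, and a frequency count. A minimal Python sketch for loading such a file (the function name, file name, and malformed-line handling are assumptions, not part of the gist):

def load_dictionary(path):
    """Parse lines of '<word> <pinyin> <frequency>' into (word, syllables, count) tuples."""
    entries = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue                    # Skip malformed or truncated lines
            word, pinyin, freq = parts
            entries.append((word, pinyin.split("'"), int(freq)))
    return entries

# Example: load_dictionary("dict.txt")[0] -> ('阿爸', ['a1', 'ba4'], 18137)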
@xirixiz
xirixiz / Set up GitHub push with SSH keys.md
Last active July 22, 2024 10:35 — forked from developius/README.md
Set up GitHub push with SSH keys

SSH keypair setup for GitHub (or GitLab, Bitbucket, etc.)

Create a repo.

Make sure there is at least one file in it (even just a README.md).

Generate an SSH key pair (private/public):

ssh-keygen -t rsa -C "your_email@example.com"
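
The gist is cut off here; the usual remaining steps, sketched under the assumption of OpenSSH defaults (the key path follows from the ssh-keygen call above):

# Start the SSH agent and load the new private key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Print the public key, then add it under GitHub Settings → SSH and GPG keys
cat ~/.ssh/id_rsa.pub

# Verify that GitHub accepts the key
ssh -T git@github.com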
@thomwolf
thomwolf / gradient_accumulation.py
Last active January 16, 2024 02:38
PyTorch gradient accumulation training loop
model.zero_grad()                                   # Reset gradient tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i + 1) % accumulation_steps == 0:           # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradient tensors
        if (i + 1) % evaluation_steps == 0:         # Evaluate the model when we...
            evaluate_model()                        # ...have no gradients accumulated
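
Dividing the loss by accumulation_steps keeps the update equivalent to one large batch of accumulation_steps × batch_size examples when the loss is batch-averaged. A self-contained sketch with a dummy model and data (the model, sizes, and hyperparameters here are illustrative, not from the original gist):

import torch
from torch import nn

model = nn.Linear(10, 2)                            # Toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_function = nn.CrossEntropyLoss()               # Batch-averaged loss
accumulation_steps = 4

# Dummy training set: 16 mini-batches of 8 examples each
training_set = [(torch.randn(8, 10), torch.randint(0, 2, (8,)))
                for _ in range(16)]

model.zero_grad()
for i, (inputs, labels) in enumerate(training_set):
    loss = loss_function(model(inputs), labels) / accumulation_steps
    loss.backward()                                 # Gradients add up across mini-batches
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()                            # One update per accumulation_steps batches
        model.zero_grad()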
def parse_category(self, url, depth):
    """
    Collects the links from a category page and downloads/parses them.
    :param url: URL of the category page to crawl
    :param depth: remaining recursion depth for the crawl
    :return: list of links collected from the page (empty on download failure)
    """
    page_content = self.download_page(url)
    if page_content is None:
        return []
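
The method relies on a download_page helper that returns None on failure; a minimal sketch using requests (the original scraper's helper is not shown, so its exact behavior is an assumption):

import requests

def download_page(self, url):
    """Fetch a page and return its HTML text, or None if the request fails."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()         # Treat HTTP errors as failures
        return response.text
    except requests.RequestException:
        return None                         # parse_category turns this into an empty link list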