
@gruber
gruber / Liberal Regex Pattern for All URLs
Last active May 29, 2024 00:03
Liberal, Accurate Regex Pattern for Matching All URLs
The regex patterns in this gist are intended to match any URLs,
including "mailto:foo@example.com", "x-whatever://foo", etc. For a
pattern that attempts only to match web URLs (http, https), see:
https://gist.github.com/gruber/8891611
# Single-line version of pattern:
(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’]))
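A usage sketch, not part of the gist (the sample text and variable names are made up): the single-line pattern compiles as-is with Python's re module, since the leading (?i) already makes it case-insensitive.

import re

URL_RE = re.compile(
    r"""(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)"""
    r"""(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+"""
    r"""(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’]))"""
)

text = "See https://example.com/docs (or mailto:foo@example.com) for details."
print([m.group(0) for m in URL_RE.finditer(text)])
# ['https://example.com/docs', 'mailto:foo@example.com']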
import networkx as nx
import numpy as np
import itertools
## We define each S* motif as a directed graph in networkx
motifs = {
    'S1': nx.DiGraph([(1, 2), (2, 3)]),
    'S2': nx.DiGraph([(1, 2), (1, 3), (2, 3)]),
    'S3': nx.DiGraph([(1, 2), (2, 3), (3, 1)]),
    'S4': nx.DiGraph([(1, 2), (3, 2)]),
    # remaining S* motifs are cut off in the gist preview
}
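As an illustration of how the motif graphs might be used (a sketch that is not part of the original snippet; G is an arbitrary directed graph built here just for the example), induced occurrences can be counted by testing every node triple for isomorphism:

def count_motifs(G):
    """Count induced occurrences of each S* motif in the directed graph G."""
    counts = {name: 0 for name in motifs}
    for nodes in itertools.combinations(G.nodes(), 3):
        sub = G.subgraph(nodes)
        for name, motif in motifs.items():
            if nx.is_isomorphic(sub, motif):
                counts[name] += 1
    return counts

G = nx.gnp_random_graph(20, 0.2, directed=True)
print(count_motifs(G))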
@myusuf3
myusuf3 / delete_git_submodule.md
Created November 3, 2014 17:36
How to effectively delete a git submodule.

To remove a submodule you need to:

  • Delete the relevant section from the .gitmodules file.
  • Stage the .gitmodules changes: git add .gitmodules
  • Delete the relevant section from .git/config.
  • Run git rm --cached path_to_submodule (no trailing slash).
  • Run rm -rf .git/modules/path_to_submodule (no trailing slash).
  • Commit: git commit -m "Removed submodule"
  • Delete the now-untracked submodule files: rm -rf path_to_submodule
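Taken together, and swapping the hand edits of .gitmodules and .git/config for git config --remove-section (a sketch; path_to_submodule stands in for the real path, and the submodule's section name is assumed to match that path):

git config -f .gitmodules --remove-section submodule.path_to_submodule
git add .gitmodules
git config --remove-section submodule.path_to_submodule
git rm --cached path_to_submodule
rm -rf .git/modules/path_to_submodule
git commit -m "Removed submodule"
rm -rf path_to_submodule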
@melvincabatuan
melvincabatuan / latex install
Created August 25, 2015 23:45
CentOS 7 latex install
yum -y install texlive texlive-latex texlive-xetex
yum -y install texlive-collection-latex
yum -y install texlive-collection-latexrecommended
yum -y install texlive-xetex-def
yum -y install texlive-collection-xetex
Only if needed:
yum -y install texlive-collection-latexextra
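To verify the installation afterwards (a hypothetical check, not part of the original gist), compile a trivial document with xelatex:

echo '\documentclass{article}\begin{document}Hello XeTeX\end{document}' > test.tex
xelatex test.tex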
@basaundi
basaundi / multi_bleu.py
Last active September 20, 2020 07:28
python rewrite of Moses' multi-bleu.perl; usable as a library
#!/usr/bin/env python
# Ander Martinez Sanchez
from __future__ import division, print_function
from math import exp, log
from collections import Counter
def ngram_count(words, n):
    if n <= len(words):
        # reconstructed body (the gist preview is truncated here): count every n-gram as a tuple
        return Counter(zip(*[words[i:] for i in range(n)]))
    return Counter()
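A quick usage sketch (the sentence is made up): counting the bigrams of a tokenized sentence.

print(ngram_count("the cat sat on the mat".split(), 2))
# Counter({('the', 'cat'): 1, ('cat', 'sat'): 1, ('sat', 'on'): 1, ('on', 'the'): 1, ('the', 'mat'): 1})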
@heiswayi
heiswayi / repo-reset.md
Created February 5, 2017 01:32
GitHub - Delete commit history with git commands

First Method

Deleting the .git folder may cause problems in our git repository. If we want to delete all of our commit history but keep the code in its current state, try this:

# Check out to a temporary branch:
git checkout --orphan TEMP_BRANCH

# Add all the files:
git add -A
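The preview stops here; the remaining steps of this method (a sketch assuming the default branch is named master) go along these lines:

# Commit the changes:
git commit -am "Initial commit"

# Delete the old branch:
git branch -D master

# Rename the temporary branch to master:
git branch -m master

# Finally, force-push to the remote to overwrite its history:
git push -f origin master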
anonymous
anonymous / demo_gp.ipynb
Created February 8, 2018 07:57
github_inside_dropbox/advanced-tensorflow/gp/demo_gp.ipynb
@MikulasZelinka
MikulasZelinka / pytorch_pad_pack_minimal.py
Last active May 30, 2024 01:47
pytorch: handling sentences of arbitrary length (dataset, data_loader, padding, embedding, packing, lstm, unpacking)
"""
sort-of minimal end-to-end example of handling input sequences (sentences) of variable length in pytorch
the sequences are considered to be sentences of words, meaning we then want to use embeddings and an RNN
using pytorch stuff for basically everything in the pipeline of:
dataset -> data_loader -> padding -> embedding -> packing -> lstm -> unpacking (~padding)
based mostly on: https://github.com/HarshTrivedi/packing-unpacking-pytorch-minimal-tutorial
pytorch version 1.4.0
gist url: https://gist.github.com/MikulasZelinka/9fce4ed47ae74fca454e88a39f8d911a
"""
@yanqd0
yanqd0 / dl_requests_tqdm.py
Last active July 12, 2024 09:40
Python requests download file with a tqdm progress bar
import requests
from tqdm import tqdm
def download(url: str, fname: str, chunk_size=1024):
    resp = requests.get(url, stream=True)
    total = int(resp.headers.get('content-length', 0))
    # remainder reconstructed (the gist preview is truncated here): stream to disk with a byte-sized progress bar
    with open(fname, 'wb') as file, tqdm(
        desc=fname,
        total=total,
        unit='iB',
        unit_scale=True,
        unit_divisor=1024,
    ) as bar:
        for data in resp.iter_content(chunk_size=chunk_size):
            size = file.write(data)
            bar.update(size)
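Hypothetical usage (the URL and file name are placeholders):

download('https://example.com/archive.zip', 'archive.zip')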
@cedrickchee
cedrickchee / llama-7b-m1.md
Last active July 13, 2024 04:59
4 Steps in Running LLaMA-7B on an M1 MacBook with `llama.cpp`

4 Steps in Running LLaMA-7B on an M1 MacBook

The usability of large language models

The problem with large language models is that you can’t run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta’s LLaMA on a single computer without a dedicated GPU.

Running LLaMA

There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights.
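The preview cuts off before the steps themselves. As a rough outline (the exact script names and flags changed across llama.cpp versions, so treat these as assumptions rather than the gist's actual commands), the workflow at the time was: build llama.cpp, convert the downloaded weights to the ggml format, quantize them to 4 bits, then run inference.

# 1. Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# 2. Convert the downloaded 7B weights to the ggml FP16 format
python convert-pth-to-ggml.py models/7B/ 1

# 3. Quantize the model to 4 bits
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

# 4. Run inference
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -p "The first man on the moon was"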