
@iMilnb
iMilnb / boto3_hands_on.md
Last active October 19, 2022 09:15
Programmatically manipulate AWS resources with boto3 - a quick hands on

boto3 quick hands-on

This documentation aims to be a quick, straight-to-the-point, hands-on guide to manipulating AWS resources with [boto3][0].

First of all, you'll need to install [boto3][0]. Installing [awscli][1] alongside it is probably a good idea, because:

  • [awscli][1] is boto-based
  • [awscli][1] usage is really close to boto's
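Both install straight from PyPI (`pip install boto3 awscli`). As a first taste, here is a minimal sketch; the S3 bucket listing is an illustration of mine, not part of this hands-on, and it assumes credentials are already configured (for instance via `aws configure`):

```python
import boto3

# boto3 finds credentials in the usual places:
# environment variables, ~/.aws/credentials, IAM instance roles, ...
s3 = boto3.resource('s3')

# print the name of every bucket these credentials can see
for bucket in s3.buckets.all():
    print(bucket.name)
```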
@alwerner
alwerner / retroactively add items to .gitignore
Last active March 21, 2022 23:14
retroactively add items to .gitignore
# make change to .gitignore
git rm --cached <filename>
# or, for all files
git rm -r --cached .
git add .
# then commit the change
git commit -m "stop tracking files covered by .gitignore"
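To confirm a path is now ignored, `git check-ignore` (standard git, not part of the original note) prints the .gitignore rule that matches it:

git check-ignore -v <filename>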
@0xjjpa
0xjjpa / chrome.md
Created December 9, 2012 04:37
Understanding Google Chrome Extensions

# Introduction

Developing Chrome Extensions is REALLY fun if you are a Front End engineer. If you struggle with visualizing the architecture of an application, however, developing a Chrome Extension is going to bite your butt multiple times due to the sheer number of components an extension works with. Here are some pointers on how to start, the problems I encountered, and how to avoid them.

Note: I'm not covering Chrome packaged apps, which, although similar, work in a different way. I also won't cover the options page API nor the brand new event pages. What I explain covers most basic Chrome extensions and should be enough to get you started.

Table of Contents

  1. Understand the Chrome Architecture
  2. Understand the Tabs-Extension Relationship
  3. Picking the right interface for the job
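Before working through these, it helps to know that every extension starts from a manifest.json declaring its components. A minimal sketch of my own (using the manifest version 2 format that was current at the time, not taken from this post):

```json
{
  "manifest_version": 2,
  "name": "Hello Extension",
  "version": "0.1",
  "browser_action": {
    "default_popup": "popup.html"
  },
  "permissions": ["tabs"]
}
```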
@MohamedAlaa
MohamedAlaa / tmux-cheatsheet.markdown
Last active May 11, 2024 07:10
tmux shortcuts & cheatsheet

tmux shortcuts & cheatsheet

start new:

tmux

start new with session name:

tmux new -s myname
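
attach to that named session later (standard tmux usage, not shown in this excerpt):

tmux attach -t myname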
@onyxfish
onyxfish / example1.py
Created March 5, 2010 16:51
Basic example of using NLTK for named entity extraction.
import nltk

# read the raw text to analyze
with open('sample.txt', 'r') as f:
    sample = f.read()

# split into sentences, then tokenize and part-of-speech tag each one
sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]

# chunk named entities; batch_ne_chunk was renamed ne_chunk_sents in NLTK 3
chunked_sentences = nltk.ne_chunk_sents(tagged_sentences, binary=True)
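The chunked trees still have to be walked to pull the entity names out. A minimal sketch of that last step (the helper `extract_entity_names` is my name, not part of the excerpt; with binary=True every entity subtree is labeled 'NE'):

```python
def extract_entity_names(tree):
    # entity subtrees carry the 'NE' label when chunking with binary=True
    names = []
    if hasattr(tree, 'label') and tree.label() == 'NE':
        names.append(' '.join(token for token, tag in tree))
    else:
        # children are either Tree objects (recurse) or (token, tag) leaves (skip)
        for child in tree:
            if hasattr(child, 'label'):
                names.extend(extract_entity_names(child))
    return names

entity_names = []
for tree in chunked_sentences:
    entity_names.extend(extract_entity_names(tree))

print(set(entity_names))
```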