Joe Reddington (joereddington)

@joereddington
joereddington / gist:358ea23802c5a2e2388049dc7693dc85
Created June 8, 2021 11:17
Working out what encode/decode really do.
#!/usr/bin/python
# -*- coding: utf-8 -*-
from unittest import TestCase
import unittest
import io


class unicodeTest(TestCase):

    def test_ascii_start(self):
        uni_str = u'hello'
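The listing truncates the gist after its first few lines. As a hedged illustration of what encode and decode do (my own sketch, not the gist's remaining tests): encode turns a text string into bytes under a named codec, and decode reverses it.

# Illustrative sketch, not part of the gist: round-tripping text through UTF-8.
text = u'héllo'
data = text.encode('utf-8')            # bytes, e.g. b'h\xc3\xa9llo'
assert isinstance(data, bytes)
assert data.decode('utf-8') == text    # decode recovers the original string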
@joereddington
joereddington / index.html
Last active February 1, 2021 08:04 — forked from CodeMyUI/index.html
Responsive Comic Book Layout
<html>
<!-- this might be useful https://codemyui.com/hand-drawn-border-buttons-css/ -->
<head>
  <link rel="stylesheet" type="text/css" href="style.css" media="screen"/><!-- could be "print" -->
</head>
<body>
  <img src="test.png" class="panel">
  <div class="panel">
    <img src="test.png">
    <p class="text top-left">Suddenly...</p>
import requests
from bs4 import BeautifulSoup
import argparse
def check(word):
    # Google result counts for the verbatim phrases "she was <word>" and
    # "he was <word>" ("tbs": "li:1" asks for verbatim-mode results).
    gr = requests.get('http://www.google.com/search',
                      params={'q': '"she was ' + word + '"', "tbs": "li:1"})
    br = requests.get('http://www.google.com/search',
                      params={'q': '"he was ' + word + '"', "tbs": "li:1"})
    girlsoup = BeautifulSoup(gr.text, "lxml")
    girltext = girlsoup.find('div', {'id': 'resultStats'}).text[6:-8]
    boysoup = BeautifulSoup(br.text, "lxml")
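The preview cuts off before the two result counts are compared. A hypothetical continuation of the function (mine, not the gist's) would extract the count for the "he was ..." search in the same way and report both:

    # Hypothetical continuation, not from the truncated gist: extract the
    # matching count for the "he was ..." query and report both figures.
    boytext = boysoup.find('div', {'id': 'resultStats'}).text[6:-8]
    print(word, "she was:", girltext, "he was:", boytext)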
@joereddington
joereddington / norepeatingletters
Created September 11, 2019 09:58
Finds the longest sections in a text file that don't repeat letters.
import sys

input_file = []
with open(sys.argv[1]) as f:
    while True:
        c = f.read(1)
        if not c:
            print("End of file")
            break
        input_file.append(c)
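The preview stops once the file has been read in character by character. As a sketch of the stated goal, the longest stretch without a repeated character can be found with a standard sliding-window pass; this outline is my own, not the gist's remaining code:

# Sketch of the stated goal, not the gist's own continuation: slide a window
# over the characters and keep the longest span with no repeated character.
best_start, best_len = 0, 0
seen = {}   # character -> index of its most recent occurrence
start = 0
for i, ch in enumerate(input_file):
    if ch in seen and seen[ch] >= start:
        start = seen[ch] + 1          # repeat inside the window: move past it
    seen[ch] = i
    if i - start + 1 > best_len:
        best_start, best_len = start, i - start + 1
print("".join(input_file[best_start:best_start + best_len]))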
1, (A) 1. due:TODAY Looking for urgent: Check Calendar, reminders, and notebooks @singleton +PlanningAndTracking
1, (A) 1a. due:TODAY 1L of Water @singleton +Health
1, (B) 1. Exercise @singleton +Exercise
1, (C) 1. due:TODAY Process Email @singleton +Overhead
1, (C) 2. due:TODAY Switch to live account and start streaming +EQT @singleton
1, (E) Fill out fear/passion table +Health
3, (E) Floss +Health
3, (E) Take Vitamin Tablet +Health
4, (E) Check Desktop Tracker is working +Overhead
4, (E) Process Voicemail +Overhead
.headers on
.mode csv
.output data.csv
SELECT datetime(moz_historyvisits.visit_date/1000000,'unixepoch'), moz_places.url FROM moz_places, moz_historyvisits WHERE moz_places.id = moz_historyvisits.place_id;
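The same export can be driven from Python rather than the sqlite3 shell. A minimal sketch, assuming a local copy of Firefox's places.sqlite that isn't locked by a running browser; the query mirrors the SELECT above and writes the same two columns to data.csv:

# Minimal sketch, my own rather than part of the snippet above: run the same
# history query against places.sqlite and write the rows to data.csv.
import csv
import sqlite3

conn = sqlite3.connect("places.sqlite")
rows = conn.execute(
    "SELECT datetime(moz_historyvisits.visit_date/1000000,'unixepoch'), moz_places.url "
    "FROM moz_places, moz_historyvisits "
    "WHERE moz_places.id = moz_historyvisits.place_id"
)
with open("data.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["visit_date", "url"])   # equivalent of .headers on
    writer.writerows(rows)
conn.close()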

Joe's Public To-do List

As part of a commitment to personal transparency, I’m making my personal to-do list publicly and permanently accessible here.

You can read about the reasons why, and the benefits, in the blog post I wrote.

I'm using the todo.txt format; you can read about it here.
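As a rough illustration of the format (my own sketch, using one of the entries below as the sample line), each todo.txt line carries an optional (A)-style priority followed by the task text, with +project and @context tags mixed in:

# Rough sketch of parsing one todo.txt-style line; the regular expression is my
# own illustration, not part of the published list.
import re

line = "(A) 1. due:TODAY Process Email @singleton +Overhead"
match = re.match(r"(?:x )?(?:\((?P<priority>[A-Z])\) )?(?P<text>.+)", line)
priority = match.group("priority")                      # "A"
projects = re.findall(r"\+(\S+)", match.group("text"))  # ["Overhead"]
contexts = re.findall(r"@(\S+)", match.group("text"))   # ["singleton"]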

Priority is set based on integrity. You can read more about my prioritisation in this blog post.

As a shorthand, the levels are these:

  • 1, y, 10, "Brainstorm on books into issues", 17-05-03 14:29
  • 1, 0, 0, "Text the other volunteer about her expenses", 17-05-03 16:02
  • 3, h, 10, "Pick a WWW invoice, find all the receipts for it, and export it", 17-04-22 18:32
  • 3, y, 10, "Design the tinkerbell for Flowers for Turing", 17-04-24 15:25
  • 3, t, 10, "Screenshot all the VA expenses (including folio 71) from scratch", 17-04-25 00:00
  • 3, t, 30, "Write blog about the 'postcodes of your usual delivery locations'", 17-04-25 00:00
  • 3, y, 20, "Do last VA invoice of 2016-2017", 17-04-25 10:10
  • 3, y, 20, "Make a list of all the things to change about the Flowers for Turing site", 17-05-01 00:00
  • 3, t, 30, "Plan finance for 2017/18", 17-05-01 00:00
  • 3, g, 4, "Make sure all issues have a priority in both equality time and personal projects", 17-05-02 00:00
@joereddington
joereddington / export_repo_issues_to_csv.py
Created February 10, 2017 14:46 — forked from unbracketed/export_repo_issues_to_csv.py
Export Issues from Github repo to CSV (API v3)
"""
Exports Issues from a specified repository to a CSV file
Uses basic authentication (Github username + password) to retrieve Issues
from a repository that username has access to. Supports Github API v3.
"""
import csv
import requests
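The preview ends at the imports. Continuing from them, here is a hedged sketch of the approach the docstring describes (GitHub API v3 issues fetched with basic authentication and written to CSV); REPO, USER, and PASSWORD are placeholders of mine, not values from the gist:

# Hedged sketch continuing from the imports above; REPO, USER, and PASSWORD
# are placeholders, not values from the original gist.
REPO = "owner/repository"
AUTH = ("USER", "PASSWORD")

response = requests.get(
    "https://api.github.com/repos/{}/issues?state=open".format(REPO), auth=AUTH)
with open("{}-issues.csv".format(REPO.replace("/", "-")), "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["number", "title", "created_at", "state"])
    for issue in response.json():
        writer.writerow([issue["number"], issue["title"],
                         issue["created_at"], issue["state"]])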
@joereddington
joereddington / tweet_dumper.py
Created October 3, 2015 11:47 — forked from yanofsky/LICENSE
A script to download all of a user's tweets into a csv
#!/usr/bin/env python
# encoding: utf-8
import tweepy #https://github.com/tweepy/tweepy
import csv
#Twitter API credentials
consumer_key = ""
consumer_secret = ""
access_key = ""
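The preview stops partway through the credential placeholders. What follows is my own sketch of the rest of the approach (Tweepy's user_timeline paged backwards with max_id, then written out with csv), continuing from the lines above; access_secret and the screen name are placeholders:

# Hedged sketch continuing from the placeholders above; the function body is my
# own outline of the max_id paging approach, not the gist's exact code.
access_secret = ""

def get_all_tweets(screen_name):
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    all_tweets = []
    batch = api.user_timeline(screen_name=screen_name, count=200)
    while batch:
        all_tweets.extend(batch)
        oldest = all_tweets[-1].id - 1   # page backwards through the timeline
        batch = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

    with open("%s_tweets.csv" % screen_name, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["id", "created_at", "text"])
        writer.writerows([[t.id_str, t.created_at, t.text] for t in all_tweets])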