

Amy Durcan durcana

durcana / answers.txt
Created Jun 6, 2016
Wiki Crawler Answers
8a. 94.6%
8b. {9: 78, 10: 52, 12: 50, 11: 45, 13: 42, 17: 35, 16: 34, 15: 28, 18: 28,
14: 26, 8: 21, 19: 14, 20: 10, 6: 3, 21: 3, 22: 2, 23: 2}
8c. To reduce the number of HTTP requests, we can check whether the current page in the search has
already been visited on a previous path. Crawling from the same page always gives the same
result, so it never needs to be repeated. When a previously crawled page turns up on a new path,
we can simply add its recorded path length from the earlier search to the length of the current
path up to that page.
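The caching idea in 8c can be sketched as follows. This is a minimal illustration, not the gist's actual crawler: `first_link` stands in for whatever function fetches a page and returns the next link (one HTTP request per call), and `path_lengths` and `crawl_length` are hypothetical names.

```python
# path_lengths caches, for each page already crawled, the number of
# remaining steps from that page to the target article.
path_lengths = {}

def crawl_length(page, first_link, target, max_steps=50):
    """Follow links from `page` toward `target`, reusing cached suffixes.

    `first_link` is a stand-in for the page-fetching function; each call
    represents one HTTP request we would like to avoid repeating.
    """
    path = []
    current = page
    while current != target and len(path) < max_steps:
        if current in path_lengths:
            # A previous crawl already measured the rest of this path,
            # so no further requests are needed from here on.
            total = len(path) + path_lengths[current]
            for i, p in enumerate(path):
                path_lengths[p] = total - i
            return total
        path.append(current)
        current = first_link(current)
    total = len(path)
    for i, p in enumerate(path):
        path_lengths[p] = total - i
    return total
```

A second crawl that joins an already-measured path stops fetching as soon as it hits a cached page.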
durcana /
Created Feb 22, 2016
Trying to generate a random tree using Node objects. Code not working yet, but here's my progress: it skips every other node in node_children, and dfs is not working yet.
import random
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.visited = False

    def add_child(self, new_node):
        self.children.append(new_node)
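As a sketch of what the description is aiming for, here is one way the random tree and the depth-first search might be completed. The `random_tree` and `dfs` helpers are illustrative names of my own, not from the gist, and the Node class is restated so the example is self-contained.

```python
import random

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.visited = False

    def add_child(self, new_node):
        self.children.append(new_node)

def random_tree(n, seed=None):
    """Build a random tree of n nodes: each new node is attached
    to a uniformly chosen existing node, so no node is skipped."""
    rng = random.Random(seed)
    root = Node(0)
    nodes = [root]
    for name in range(1, n):
        child = Node(name)
        rng.choice(nodes).add_child(child)
        nodes.append(child)
    return root

def dfs(node, target):
    """Depth-first search for a node with the given name;
    returns the Node, or None if it is absent."""
    if node.name == target:
        return node
    for child in node.children:
        found = dfs(child, target)
        if found is not None:
            return found
    return None
```

Attaching every new node to some existing node guarantees the result is a single connected tree, so `dfs` from the root can reach every name that was generated.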
durcana /
Created Feb 21, 2016
Both breadth first search and depth first search scripts after speaking with David from Recurse Center
import networkx as nx
Since append() adds each node to the end of check_list, all of the nodes at one level are checked
before moving on to the next. To verify this without stepping through pdb, you can uncomment the
line in the code and watch how check_list changes. Done this way, we do not modify the actual graph,
yet still do not need a 'visited' attribute.
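The level-order behavior described above can be shown with a small self-contained sketch. It uses a plain dict of successor lists in place of the gist's networkx graph, and a `seen` set instead of a 'visited' attribute, so the graph itself is never modified; the function name and signature are my own.

```python
from collections import deque  # a plain list with pop(0) works too, just slower

def bfs(graph, start):
    """Breadth-first traversal returning nodes in level order.

    `graph` maps each node to its successors. Because new nodes are
    appended to the end of check_list, every node at one depth is
    processed before any node at the next.
    """
    check_list = deque([start])
    seen = {start}          # replaces the 'visited' attribute
    order = []
    while check_list:
        node = check_list.popleft()
        order.append(node)
        for succ in graph.get(node, []):
            if succ not in seen:
                seen.add(succ)
                check_list.append(succ)
        # print(list(check_list))  # uncomment to watch the frontier change
    return order
```

The commented print mirrors the uncommentable line mentioned in the gist: it lets you watch check_list hold exactly one level's worth of nodes at a time.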
durcana /
Created Feb 21, 2016
Breadth first search code written before interview with David for Recurse Center. Will also change this as I change dfs
import networkx as nx
def bfs(node, graph):
    roots = [n for n, d in graph.in_degree().items() if d == 0]
    for root in roots:
        if node in graph.successors(root):
            return node
durcana /
Created Feb 18, 2016
Depth first search code for pair programming with the Recurse Center
import networkx as nx
def dfs(node, graph, start):
    leaves = [n for n, d in graph.out_degree().items() if d == 0]
    root = [n for n, d in graph.in_degree().items() if d == 0]
    if start == "":
        start = root[0]
    if start == node:
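The snippet above cuts off mid-function, so here is one hedged way the search might be completed. To stay self-contained it uses a dict of successor lists instead of the gist's networkx DiGraph, and it finds the root the same way the gist does: as the node with no incoming edges.

```python
def dfs(graph, node, start=None):
    """Depth-first search for `node`, beginning at the graph's root
    when no start is given. `graph` maps each node to its successors
    (a stand-in for the networkx DiGraph in the gist).
    """
    if start is None:
        # the root is the node with in-degree zero
        has_parent = {c for children in graph.values() for c in children}
        start = next(n for n in graph if n not in has_parent)
    if start == node:
        return start
    for child in graph.get(start, []):
        found = dfs(graph, node, child)
        if found is not None:
            return found
    return None
```

Recursing into each child before moving to the next sibling is what makes this depth-first rather than level-order.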
durcana /
Last active Aug 31, 2016
The main script streams live filtered tweets that contain the words birth and conviction. It returns the tweets along with their sentiment value, evaluated by the naive_bayes module. I'm looking to add a script that will use all the collected tweets to return the average sentiment for the words birth and conviction.
from json import loads
from tweepy import OAuthHandler
import os
from tweepy import Stream
from tweepy.streaming import StreamListener
import naive_bayes
ckey = os.environ.get('CKEY')
csecret = os.environ.get('CSECRET')
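The averaging script the description mentions could look roughly like this. It assumes, purely for illustration, that the streaming script appends each tweet as one JSON object per line with 'text' and 'sentiment' fields; the file layout, the function name, and the field names are all assumptions, not part of the gist.

```python
import json
from collections import defaultdict

def average_sentiment(path, keywords=("birth", "conviction")):
    """Average the stored sentiment score per keyword.

    Assumes `path` is a JSON-lines file where each record has a 'text'
    string and a numeric 'sentiment' (an assumed storage format).
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(path) as f:
        for line in f:
            tweet = json.loads(line)
            for word in keywords:
                if word in tweet["text"].lower():
                    totals[word] += tweet["sentiment"]
                    counts[word] += 1
    # only report words that actually appeared in the collected tweets
    return {w: totals[w] / counts[w] for w in keywords if counts[w]}
```

A tweet containing both keywords counts toward both averages, which matches the goal of reporting one figure per word rather than per tweet.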