@tcramm0nd
tcramm0nd / boundary_file_downloader.py
Last active July 13, 2023 01:12
Downloads US Census Bureau Cartographic Boundary files to a dedicated folder
import os
import io
import requests
import zipfile
def boundary_file_downloader(year=2019, state='us', entity='state', resolution='500k', filetype='shp', path=None):
"""Downloads US Census Bureau Cartographoc Boundary files to a dedicated folder.
Args:
year (int, optional): Year the data should be pulled from. Defaults to 2019.
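The preview above is cut off; what follows is a minimal sketch of how such a downloader could work, not the original implementation. The GENZ URL template, the derived filename, the function name, and the default output folder are assumptions.

import io
import os
import zipfile
import requests

def download_boundary_file(year=2019, state='us', entity='state',
                           resolution='500k', path=None):
    """Sketch: download and extract a cartographic boundary shapefile."""
    # Assumed Census Bureau URL pattern, e.g. .../GENZ2019/shp/cb_2019_us_state_500k.zip
    url = (f'https://www2.census.gov/geo/tiger/GENZ{year}/shp/'
           f'cb_{year}_{state}_{entity}_{resolution}.zip')
    path = path or os.path.join('data', f'cb_{year}_{state}_{entity}_{resolution}')
    os.makedirs(path, exist_ok=True)
    response = requests.get(url)
    response.raise_for_status()
    # Extract the downloaded archive straight from memory into the target folder
    with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
        archive.extractall(path)
    return path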
@tcramm0nd
tcramm0nd / amortize.py
Last active October 9, 2022 19:55
Create an Amortization Table using Python
import pandas as pd
# Initialize the parameters of the loan
loan_amount = 18000
apr = 5.29
loan_term = 60
# Get a monthly percentage rate
apr /= 100
mpr = apr / 12
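The snippet stops after computing the monthly rate; below is a minimal sketch of how the table itself might be built with pandas, continuing from the variables above. The standard annuity payment formula and the column names are assumptions, not necessarily the original gist's approach.

# Sketch (assumption): monthly payment from the standard annuity formula
payment = loan_amount * mpr / (1 - (1 + mpr) ** -loan_term)

rows = []
balance = loan_amount
for month in range(1, loan_term + 1):
    interest = balance * mpr              # interest accrued this month
    principal = payment - interest        # portion of the payment toward principal
    balance -= principal
    rows.append({'month': month, 'payment': payment, 'interest': interest,
                 'principal': principal, 'balance': balance})

amortization_table = pd.DataFrame(rows)
print(amortization_table.head())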
@tcramm0nd
tcramm0nd / shp_to_geopandas.py
Last active May 25, 2020 17:15
How to import an SHP file into a Geopandas DataFrame
# A brief overview of how to import an SHP file into a GeoPandas DataFrame. For a more detailed breakdown of the
# process, you can find the original post here: http://timcrammond.com/blog/creating-a-geopandas-dataframe-from-a-shp-file
import shapefile
import geopandas as gpd
from shapely.geometry import shape
import osr
tracts = shapefile.Reader('data/cb_2018_42_tract_500k.shp')
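The preview ends after opening the shapefile; below is a minimal sketch of turning the pyshp reader into a GeoDataFrame using only the modules imported above (osr is left out here). The CRS string is an assumption (cartographic boundary files are published in NAD83, EPSG:4269); see the linked post for the original approach.

# Sketch: build a GeoDataFrame from the pyshp reader above
fields = [f[0] for f in tracts.fields[1:]]                 # skip the DeletionFlag field
records = [dict(zip(fields, rec)) for rec in tracts.records()]
geometries = [shape(s.__geo_interface__) for s in tracts.shapes()]

tracts_gdf = gpd.GeoDataFrame(records, geometry=geometries, crs='EPSG:4269')  # CRS assumed
print(tracts_gdf.head())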
@tcramm0nd
tcramm0nd / sklearn_random_forest.py
Created May 25, 2020 17:00
Quick overview of how to build a Random Forest Classifier using Scikit-Learn
# A brief overview of how to create a Random Forest Classifier using Scikit-Learn. For a more detailed breakdown and
# an overview of what a Random Forest is, you can find the original post here: http://timcrammond.com/blog/what-is-random-forest/
from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
wine = datasets.load_wine()
X = wine.data
y = wine.target
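The preview stops after loading the wine data; below is a minimal sketch of the remaining steps (split, fit, evaluate). The split ratio, number of trees, and random seed are assumptions, not the original post's values.

# Sketch: hold out a test set, fit the classifier, and score it
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print('Accuracy:', metrics.accuracy_score(y_test, y_pred))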
@tcramm0nd
tcramm0nd / text_preprocessing.py
Last active April 25, 2020 03:52
Basic NLP Text Preprocessing
import re

import nltk

def text_preprocessing(text):
    '''This is a pretty basic NLP text cleaner that takes in a corpus/text,
    applies some cleaning functions to remove undesirable characters,
    and returns the text in the same format.

    This requires the following to be imported:
        re
        nltk
    '''
    # text cleaning
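The function body is truncated here; below is a minimal sketch of one possible cleaning pipeline using re and nltk, the modules the docstring names. The specific steps (lowercasing, stripping non-letter characters, removing English stopwords) and the function name are assumptions, not the original gist's code.

import re
import nltk
from nltk.corpus import stopwords

def clean_text(text):
    '''Sketch: lowercase, strip non-letter characters, drop English stopwords.'''
    text = text.lower()
    text = re.sub(r'[^a-z\s]', ' ', text)             # keep letters and whitespace only
    tokens = nltk.word_tokenize(text)                 # requires nltk.download('punkt')
    stop_words = set(stopwords.words('english'))      # requires nltk.download('stopwords')
    tokens = [t for t in tokens if t not in stop_words]
    return ' '.join(tokens)

print(clean_text('The quick, brown fox jumps over 2 lazy dogs!'))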