GitHub gists by Will Jobs (willjobs)
willjobs / wifi-passwords.ps1
Created October 2, 2017 03:33
PowerShell script to show all wifi passwords saved in Windows
(netsh wlan show profiles) |
    Select-String "\:(.+)$" |
    %{ $name = $_.Matches.Groups[1].Value.Trim(); $_ } |
    %{ (netsh wlan show profile name="$name" key=clear) } |
    Select-String "Key Content\W+\:(.+)$" |
    %{ $pass = $_.Matches.Groups[1].Value.Trim(); $_ } |
    %{ [PSCustomObject]@{ PROFILE_NAME = $name; PASSWORD = $pass } } |
    Format-Table -AutoSize
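The heart of the one-liner is two regex extractions over `netsh` output. A minimal Python sketch of that parsing step, using illustrative sample output (the `parse_profile_names` and `parse_key_content` helpers and the sample text are assumptions for demonstration, not part of the gist):

```python
import re

# Illustrative sample of `netsh wlan show profiles` output (assumed shape)
SAMPLE_PROFILES = """\
Profiles on interface Wi-Fi:

User profiles
-------------
    All User Profile     : HomeWiFi
    All User Profile     : CoffeeShop
"""

# Illustrative sample of `netsh wlan show profile ... key=clear` output
SAMPLE_DETAIL = """\
Security settings
-----------------
    Key Content            : hunter2
"""

def parse_profile_names(text):
    # Mirrors Select-String "\:(.+)$": capture whatever follows a colon
    return [m.group(1).strip()
            for m in re.finditer(r":(.+)$", text, re.MULTILINE)]

def parse_key_content(text):
    # Mirrors Select-String "Key Content\W+\:(.+)$"
    m = re.search(r"Key Content\W+:(.+)$", text, re.MULTILINE)
    return m.group(1).strip() if m else None

print(parse_profile_names(SAMPLE_PROFILES))  # ['HomeWiFi', 'CoffeeShop']
print(parse_key_content(SAMPLE_DETAIL))      # hunter2
```

The PowerShell version streams each profile name straight into a second `netsh` call; the sketch above only shows the text-extraction half of that pipeline.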
willjobs / ynab-google-sheets.js
Last active June 2, 2023 06:16 — forked from notself/ynab-google-sheets.js
Revisions to the original gist to add columns, show split subtransactions on multiple rows by default, fix time discrepancies, and add an indicator of the last data pull
function YNABAccounts(accessToken, budgetId) {
  const accounts = _getBudgetAccounts(accessToken, budgetId);
  if (accounts == null) {
    return null;
  }

  const columns = ["Name", "Type", "Budget", "Closed", "Balance"];
  const rows = accounts.map(function (acc) {
    return [
willjobs / exiftool snippets.txt
Created November 26, 2022 22:18
exiftool snippets
#### Rename all images in current directory according to the CreateDate date and time, adding a copy number with leading '-' if the file already exists (%-c), and preserving the original file extension (%e).
# Note the extra '%' necessary to escape the filename codes (%c and %e) in the date format string.
exiftool '-FileName<CreateDate' -d '%Y%m%d %H%M%S%%-c.%%e' .
#### Shift values of CreateDate forward by 2 years, 9 months, and 14 days (0 hours, 0 minutes, 0 seconds)
exiftool "-CreateDate+=2:9:14 0:0:0" -m .
#### View all time info about a given file:
## System:
# FileModifyDate
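The "2:9:14 0:0:0" shift above adds calendar components rather than a fixed number of seconds. A small Python sketch of that arithmetic for the common case (the `shift_date` helper is hypothetical, not part of exiftool, and ignores edge cases like landing on a shorter month):

```python
from datetime import date, timedelta

def shift_date(d, years, months, days):
    # Add years and months first (carrying month overflow into the year),
    # then add days, mimicking a "Y:M:D" shift in the common case.
    total_months = (d.month - 1) + months
    year = d.year + years + total_months // 12
    month = total_months % 12 + 1
    shifted = d.replace(year=year, month=month)
    return shifted + timedelta(days=days)

print(shift_date(date(2010, 1, 1), 2, 9, 14))  # 2012-10-15
```

Note that exiftool only rewrites the metadata tag; nothing about the image data changes.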
willjobs / medium-public-comments-08.py
Created July 25, 2021 16:11
medium-public-comments-08
docket_ids = downloader.get_ids_from_csv("EPA_water_dockets.csv", data_type="dockets")

for docket_id in docket_ids:
    print(f"\n********************************\nSTARTING {docket_id}\n********************************")
    downloader.gather_comments_by_docket(docket_id, csv_filename="EPA_water_comments.csv")
willjobs / medium-public-comments-07.py
Created July 25, 2021 16:10
medium-public-comments-07
# get the docket headers for these criteria
params = {"filter[lastModifiedDate][ge]": "2017-01-01 00:00:00",  # API for dockets doesn't have a postedDate filter
          "filter[lastModifiedDate][le]": "2020-12-31 23:59:59",  # also, these times are in the Eastern time zone
          "filter[agencyId]": "EPA",
          "filter[searchTerm]": "water"}

# this will download the headers 250 dockets at a time and save them into a CSV
# (you could also save them into a SQLite database)
downloader.gather_headers("dockets", params, csv_filename="EPA_water_dockets.csv")
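Fetching "250 at a time" suggests the downloader pages through the Regulations.gov v4 API by merging pagination parameters into the filters. A minimal sketch of that merge (the `paged_params` helper is an assumption for illustration, not the downloader's actual code):

```python
def paged_params(base_params, page_number, page_size=250):
    """Merge filter params with v4-style page[size]/page[number] parameters."""
    params = dict(base_params)  # leave the caller's filters untouched
    params["page[size]"] = page_size
    params["page[number]"] = page_number
    return params

p = paged_params({"filter[agencyId]": "EPA"}, page_number=3)
print(p["page[size]"], p["page[number]"])  # 250 3
```

Each page of results would then be requested with these params and appended to the CSV until the API reports no further pages.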
willjobs / medium-public-comments-06.py
Created July 25, 2021 16:10
medium-public-comments-06
# download the comments
comment_ids = downloader.get_ids_from_csv("EPA_water_comments_header.csv", data_type="comments")
downloader.gather_details("comments", comment_ids, csv_filename="EPA_water_comments.csv")
willjobs / medium-public-comments-05.py
Created July 25, 2021 16:09
medium-public-comments-05
for object_id in object_ids:  # taken from EPA_water_documents.csv
    params = {"filter[commentOnId]": object_id}
    downloader.gather_headers("comments", params, csv_filename="EPA_water_comments_header.csv")
willjobs / medium-public-comments-04.py
Created July 25, 2021 16:09
medium-public-comments-04
for docket_id in docket_ids:
    params = {"filter[docketId]": docket_id}
    downloader.gather_headers("documents", params, csv_filename="EPA_water_documents.csv")
willjobs / medium-public-comments-02.py
Last active July 25, 2021 16:07
medium-public-comments-02
downloader.gather_comments_by_docket("FDA-2021-N-0270", db_filename="my_database.db", csv_filename="my_csv.csv")
willjobs / medium-public-comments-03.py
Created July 25, 2021 16:07
medium-public-comments-03
my_dockets = ["FDA-2009-N-0501", "EERE-2019-BT-STD-0036", "NHTSA-2019-0121"]

for docket_id in my_dockets:
    print(f"\n********************************\nSTARTING {docket_id}\n********************************")
    downloader.gather_comments_by_docket(docket_id, db_filename="my_database2.db")

print("\nDONE")