Nico (nicobrenner)

@nicobrenner
nicobrenner / ask_gpt_job_listings.py
Created February 28, 2024 06:30
Allows quick personalized filtering of hundreds of job listings to find the best jobs to apply to (uses GPT to match resume to listings)
# This allows you to easily filter hundreds of job listings
# to quickly determine which are the best fit for your resume
# It enriches job listing descriptions by doing two things:
# 1) extracting structured data from the description (e.g. company name)
# 2) getting answers about selection criteria (e.g. is the job remote?)
# It first gets job listing descriptions from a job_listings table,
# then assembles a prompt using a template, a resume, and the
# job listing
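
The preview above stops at the comments; below is a minimal sketch of the described flow, not the gist's actual code. It assumes a job_listings table with id and text columns, a gpt_interactions table for the answers, a local resume.txt, and the OpenAI Python client; the table names, prompt template, and model are assumptions.

# Minimal sketch of the enrichment loop described above (assumptions noted below)
import sqlite3

from openai import OpenAI

# Assumed prompt template; the answer keys match the fields queried in
# filtered_job_listings.sql (fit_for_resume, company_name, how_to_apply, ...)
PROMPT_TEMPLATE = """You are screening job listings against a resume.

Resume:
{resume}

Job listing:
{listing}

Answer as JSON with keys: company_name, available_positions, remote_positions,
how_to_apply, fit_for_resume, fit_justification."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def enrich_listings(db_path="job_listings.db", resume_path="resume.txt"):
    with open(resume_path) as f:
        resume = f.read()
    conn = sqlite3.connect(db_path)
    # Assumed table for storing the raw JSON answers
    conn.execute(
        "CREATE TABLE IF NOT EXISTS gpt_interactions (job_id INTEGER, answer TEXT)"
    )
    listings = conn.execute("SELECT id, text FROM job_listings").fetchall()
    for job_id, listing in listings:
        prompt = PROMPT_TEMPLATE.format(resume=resume, listing=listing)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # model name is an assumption
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        conn.execute(
            "INSERT INTO gpt_interactions (job_id, answer) VALUES (?, ?)",
            (job_id, answer),
        )
        conn.commit()
    conn.close()


if __name__ == "__main__":
    enrich_listings()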
@nicobrenner
nicobrenner / filtered_job_listings.sql
Created February 28, 2024 06:03
SQLite query to get filtered job listings according to JSON data stored in a table
-- SQLite
-- Sometimes the JSON output from GPT is not valid
-- -> This query excludes invalid JSON answers (about 1 in 150 in the data tested)
SELECT gi.job_id,
json_extract(gi.answer, '$.fit_for_resume') AS fit_for_resume,
json_extract(gi.answer, '$.company_name') AS company_name,
json_extract(gi.answer, '$.how_to_apply') AS how_to_apply,
json_extract(gi.answer, '$.fit_justification') AS fit_justification,
json_extract(gi.answer, '$.available_positions') AS available_positions,
json_extract(gi.answer, '$.remote_positions') AS remote_positions,
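
The SELECT above is cut off in the gist preview; below is a minimal sketch of how the invalid-JSON filter can be applied from Python, using SQLite's json_valid(). The gpt_interactions table name is an assumption (the table behind the gi alias is not shown).

# Minimal sketch, not the gist's actual query
import sqlite3

conn = sqlite3.connect("job_listings.db")
query = """
SELECT gi.job_id,
       json_extract(gi.answer, '$.company_name')      AS company_name,
       json_extract(gi.answer, '$.fit_for_resume')    AS fit_for_resume,
       json_extract(gi.answer, '$.fit_justification') AS fit_justification
FROM gpt_interactions gi          -- assumed table name behind the gi alias
WHERE json_valid(gi.answer)       -- drop answers that are not valid JSON
"""
for row in conn.execute(query):
    print(row)
conn.close()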
@nicobrenner
nicobrenner / get_ask_hn_jobs.py
Created February 28, 2024 05:48
Scrapes "Ask HN: Who is hiring? (February 2024)" page and saves job listings to local sqlite3 db
# This connects to an "Ask HN: Who is hiring?" page
# e.g. https://news.ycombinator.com/item?id=39217310&p=1
# Gets every top-level job listing and saves it to a sqlite3 db (job_listings.db)
# Every entry has an id, listing text, listing html, and source url
# (the html is saved to preserve link urls, which are usually truncated in the displayed text)
# It handles pagination (follows the More link at the bottom of the page)
import requests
from bs4 import BeautifulSoup
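
The preview stops at the imports; below is a minimal sketch of the scraping flow described in the comments, not the gist's actual code. It assumes a job_listings table with text, html, and source columns; the HN selectors used (tr.comtr, the td.ind indent attribute, span.commtext, a.morelink) are assumptions about the page markup and may need adjusting.

# Minimal sketch of the flow described above (selectors are assumptions)
import sqlite3

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://news.ycombinator.com/"


def scrape(start_url, db_path="job_listings.db"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS job_listings "
        "(id INTEGER PRIMARY KEY AUTOINCREMENT, text TEXT, html TEXT, source TEXT)"
    )
    url = start_url
    while url:
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        for row in soup.select("tr.comtr"):
            indent = row.select_one("td.ind")
            # keep only top-level comments (indent level 0)
            if indent is None or indent.get("indent") not in (None, "0"):
                continue
            comment = row.select_one("span.commtext")
            if comment is None:
                continue
            conn.execute(
                "INSERT INTO job_listings (text, html, source) VALUES (?, ?, ?)",
                (comment.get_text("\n"), str(comment), url),
            )
        conn.commit()
        more = soup.select_one("a.morelink")  # pagination: follow the More link
        url = BASE_URL + more["href"] if more else None
    conn.close()


if __name__ == "__main__":
    scrape("https://news.ycombinator.com/item?id=39217310&p=1")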