Ed Tee (etewiah)
@kieranklaassen
kieranklaassen / chat_gpt_service.rb
Created December 5, 2022 14:07
Unofficial ChatGPT API Wrapper in Ruby
# The ChatGptService class provides an easy way to integrate with the OpenAI ChatGPT API.
# It allows you to send and receive messages in a conversation thread, and reset the thread
# if necessary.
#
# Example usage:
# chat_gpt_service = ChatGptService.new(BEARER_TOKEN)
# response = chat_gpt_service.chat("Hello, how are you?")
# puts response
# # => {"message"=>
# # {"id"=>"8e78691a-1fde-4bd5-ad68-46b23ea65d8f",
@pgruener
pgruener / master_page.html.erb
Created January 7, 2022 22:49
SpinaCMS Extension for modular master pages as lightweight pagebuilder
<%# Corresponding to the sample theme with the master view_template "master_page":
    this template iterates the sub_pages / descendants of the master_page and
    renders them concatenated. %>
<% current_page.children.sorted.active.live.each do |page| %>
  <%= render_spina_part(page) %>
<% end %>
@enzienza
enzienza / index.html
Created October 5, 2018 15:30
Vue2 page transitions with GSAP
<!-- App -->
<div id="app">
  <component :is="state.view">
    <h1>{{ state.view }}</h1>
  </component>
  <controls></controls>
</div>
<!-- Controls -->
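Only the markup above survives in this listing; the controls template and the script that drives the transitions are cut off. As a rough sketch, Vue 2's JavaScript transition hooks can hand the animation to GSAP like this (the tween values and durations are assumptions, not the gist's actual code):

<!-- Sketch: wrap the dynamic component in a <transition> with JS hooks -->
<transition :css="false" mode="out-in" @enter="enter" @leave="leave">
  <component :is="state.view"></component>
</transition>

// In the root component's methods (GSAP 2 / TweenMax syntax):
methods: {
  enter: function (el, done) {
    TweenMax.fromTo(el, 0.5, { autoAlpha: 0, y: 20 }, { autoAlpha: 1, y: 0, onComplete: done })
  },
  leave: function (el, done) {
    TweenMax.to(el, 0.5, { autoAlpha: 0, y: -20, onComplete: done })
  }
}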
// Untitled snippet: scan Hacker News comment rows ('athing comtr') for
// salary mentions such as '$100k', '50-60k', or '100k EUR'.
var r1 = /(\$|£|€)\d+(k|K)/m
var r2 = /\d+-\d+(k|K)/m
var r3 = /\d+(k|K)\s?(€|EUR)/m
var candidates = [r1, r2, r3];
var linkQueue = []
var comments = document.getElementsByTagName("tr")
for (var i in comments) {
  var comment = comments[i];
  if (comment.className && comment.className.indexOf("athing comtr") != -1) {
    // Assumed completion (the listing cuts the snippet off here): queue rows
    // whose text matches any of the salary patterns.
    var text = comment.textContent || "";
    if (candidates.some(function (r) { return r.test(text); })) {
      linkQueue.push(comment);
    }
  }
}
@pooot
pooot / uppy-vue-example
Created January 2, 2018 15:54
Very basic Vue.js usage of Uppy
<template>
  <div :id="uppyId">
    <div class="ThumbnailContainer" v-if="collection === 'thumbnail'">
      <button id="open-thumbnail-modal" class="button">Select file</button>
    </div>
    <div class="DashboardContainer" v-else></div>
  </div>
</template>
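The component's script section isn't shown in this listing. A minimal sketch of what it might contain, using the current @uppy package names and Dashboard options (trigger, inline, and target are real Uppy options, but the wiring here is an assumption, not the gist's actual code):

import Uppy from '@uppy/core'
import Dashboard from '@uppy/dashboard'

export default {
  props: ['uppyId', 'collection'],
  mounted () {
    const uppy = new Uppy({ id: this.uppyId })
    // Modal Dashboard opened by the button for thumbnails, inline otherwise
    uppy.use(Dashboard, this.collection === 'thumbnail'
      ? { trigger: '#open-thumbnail-modal' }
      : { inline: true, target: '.DashboardContainer' })
  }
}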
@scrapehero
scrapehero / zillow.py
Last active December 13, 2023 16:05
Python 3 script to find real estate listings of properties up for sale on zillow.com
from lxml import html
import requests
import unicodecsv as csv
import argparse
import json
def clean(text):
    # Join lxml's list of text fragments and collapse runs of whitespace
    if text:
        return ' '.join(' '.join(text).split())
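For context, clean() flattens the list of text fragments that an lxml xpath query returns into one whitespace-normalized string. A small usage sketch with the imports above (the URL and selector are illustrative, not the gist's actual ones):

headers = {'User-Agent': 'Mozilla/5.0'}  # illustrative browser-like UA
response = requests.get('https://www.zillow.com/homes/for_sale/', headers=headers)
tree = html.fromstring(response.content)
raw_address = tree.xpath('//address//text()')  # e.g. ['123 Main St', '\n  ', 'Anytown']
print(clean(raw_address))                      # '123 Main St Anytown'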
@patpohler
patpohler / Big List of Real Estate APIs.md
Last active June 26, 2024 21:36
Evolving list of Real Estate APIs by Category

Big List of Real Estate APIs

Listings / Property Data

Rets Rabbit (http://www.retsrabbit.com)

Rets Rabbit removes the nightmare of importing thousands of real estate listings and photos from RETS or ListHub, and gives you an easy-to-use import and Web API server so you can focus on building your listing-search-powered website or app.

@paulirish
paulirish / how-to-view-source-of-chrome-extension.md
Last active June 29, 2024 22:54
How to view-source of a Chrome extension

Option 1: Command-line download extension as zip and extract

extension_id=jifpbeccnghkjeaalbbjmodiffmgedin   # change this ID
curl -L -o "$extension_id.zip" "https://clients2.google.com/service/update2/crx?response=redirect&os=mac&arch=x86-64&nacl_arch=x86-64&prod=chromecrx&prodchannel=stable&prodversion=44.0.2403.130&x=id%3D$extension_id%26uc" 
unzip -d "$extension_id-source" "$extension_id.zip"

Thx to crxviewer for the magic download URL.

@henrik
henrik / dokku_on_digital_ocean.md
Last active May 12, 2022 14:38
Notes from running Dokku on Digital Ocean.

My notes for Dokku on Digital Ocean.

These may be a bit outdated: since I originally wrote them, I've reinstalled on a newer Dokku and may not have updated every section below.

Commands

Install dokku-cli (gem install dokku-cli) for a more Heroku-like CLI experience (dokku config:set FOO=bar).

# List/run commands when not on Dokku server (assuming a "henroku" ~/.ssh/config alias)

ssh henroku dokku
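For instance, a config command can run either remotely or through dokku-cli (the app name here is hypothetical):

# Run a single Dokku command over SSH:
ssh henroku dokku config:set myapp FOO=bar

# Or locally with dokku-cli, which resolves the app from your git remote:
dokku config:set FOO=bar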

@hijonathan
hijonathan / readme.md
Last active September 16, 2019 11:43
Fun in-browser web scraper.

What is this?

This little script lets you easily define a few things you want from a web page (using CSS selectors), then crawl the site until you get them all.

It's based on the way Kimono works, but it's much simpler and has no limit on the number of results you can get. It also uses your auth tokens from the browser, so it's just as secure as your browser (which you should still be suspicious of).

How do I use it?

Put that script into your browser console and run it. If you use Chrome, I highly recommend saving it as a snippet for easy reuse. To start scraping a site, create a Scraper instance with your desired options:
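The gist's actual example is cut off in this listing; a hypothetical sketch of what such a call might look like (all option names here are invented for illustration):

// Hypothetical options — not the gist's real API.
var scraper = new Scraper({
  item: '.result',                              // CSS selector for each item
  fields: { title: 'h2 a', price: '.price' },   // what to extract from each item
  nextPage: 'a.next'                            // pagination link the crawler follows
});
scraper.start();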