Louis Maddox lmmx

lmmx / sp_session.py
Created June 8, 2023 15:32
Data from EC2 instance
```python
from datetime import datetime, timedelta
import json
import os
import time

import boto3
from boto3.dynamodb.conditions import Key, Attr
from dateutil import parser
import requests


CLIENT_ID = os.environ.get('SPOTIFY_CLIENT_ID')
CLIENT_SECRET = os.environ.get('SPOTIFY_CLIENT_SECRET')

if not (CLIENT_ID and CLIENT_SECRET):
    aws_ssm = boto3.client('ssm')
    CLIENT_ID = aws_ssm.get_parameter(
        Name='beatchain-spotify-client-id',
        WithDecryption=True)['Parameter']['Value']

    CLIENT_SECRET = aws_ssm.get_parameter(
        Name='beatchain-spotify-secret',
        WithDecryption=True)['Parameter']['Value']

assert (CLIENT_ID and CLIENT_SECRET)


class SpotifySession(object):

    def __init__(self):
```
lmmx / prompt-1-input.md
Last active June 4, 2023 10:54
GPT-4 explains datamodel-code-generator

I'm working with a library called datamodel-code-generator that has a command datamodel-codegen which can be used to access its internal generate function as a command line tool. The library has 2 main supported formats: jsonschema and openapi, for which it has separate parsers. Following 'raw' parsing by these specific parsers, the base parser does a bunch of intricate routines that I find quite inscrutable, and was wondering if I could get some advice on by showing them to you. These routines have method names beginning with two underscores (I call these 'private' methods). Please read the following code and explain it in summary to me.

```python
def parse(
    self,
    with_import: Optional[bool] = True,
    format_: Optional[bool] = True,
    settings_path: Optional[Path] = None,
) -> Union[str, Dict[Tuple[str, ...], Result]]:
    self.parse_raw()
```
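As an aside on the "private" naming mentioned in the prompt: Python mangles double-underscore method names onto the class, which is why they read as private. A minimal sketch, where the `Parser` class is a hypothetical stand-in and not datamodel-code-generator's real parser:

```python
class Parser:
    """Hypothetical stand-in, not the library's actual base parser."""

    def parse(self):
        # Public entry point delegating to a "private" helper
        return self.__parse_raw()

    def __parse_raw(self):
        # Double leading underscores trigger name mangling: this method
        # is stored on the class under the name _Parser__parse_raw
        return "parsed"


print(Parser().parse())  # parsed
print(hasattr(Parser, "_Parser__parse_raw"))  # True
```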
lmmx / json_field_demo.py
Created May 25, 2023 07:53
Ingesting a DTO with reserved word keys using dataclasses (JSONWizard helper class from the dataclass-wizard package). One way is to name all the fields manually, but better is to set the letter case to 'PASCAL' which will 'upper camel case' them. The only cases this won't work for are if "True", "False", or "None" are keys in the DTO.
```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

from dataclass_wizard import JSONWizard, json_field


@dataclass
class Lesson(JSONWizard):
    _def: str = json_field('def', all=True)
    _if: date = json_field('if', all=True)
```
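The reserved-word problem the description refers to can be shown with the stdlib alone. This sketch does not use dataclass-wizard: it remaps keyword keys to safe identifiers the way `json_field` does manually, and shows why "True", "False", and "None" remain edge cases even after upper camel casing:

```python
import keyword

# DTO keys that collide with Python keywords cannot be field names directly,
# so they need remapping to safe identifiers like "_def" and "_if"
dto = {"def": "definition text", "if": "2023-05-25"}
safe = {f"_{k}" if keyword.iskeyword(k) else k: v for k, v in dto.items()}
print(safe)  # {'_def': 'definition text', '_if': '2023-05-25'}

# Upper camel casing fixes most keys, but these three are still keywords:
print([keyword.iskeyword(w) for w in ("True", "False", "None")])  # [True, True, True]
```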
lmmx / chatgpt_summary.md

The talk at Google I/O 2023 covered several advancements in AI and new features across Google's products and services. Here's a summary:

  1. Help Me Write in Gmail: Introducing a feature that uses AI to help users draft emails. Users can request assistance in crafting emails, such as asking for a full refund, and Help Me Write generates a draft using prompts and relevant information from previous emails.

  2. Magic Eraser and Magic Editor in Google Photos: AI-powered computational photography tools for photo editing. Magic Eraser removes unwanted distractions, while Magic Editor allows users to make advanced edits like removing objects and adjusting elements in photos.

  3. PaLM 2 and Med-PaLM 2: New models in Google's AI lineup. PaLM 2 is a highly capable model for various tasks, and Med-PaLM 2 is fine-tuned on medical knowledge, performing at an expert level on medical licensing exam-style questions.

  4. Gemini and Bard: Google's next-generation foundation model, Gemini, designed to be multimodal and enable futu

lmmx / gpt4_tour_check.py
```python
import folium
import numpy as np
import pandas as pd
from python_tsp.exact import solve_tsp_dynamic_programming
from sklearn.metrics import DistanceMetric

cities = ["London", "Paris", "Madrid", "Berlin", "Rome"]
print("Cities:", cities, "\n")

# Source: https://www.kaggle.com/datasets/juanmah/world-cities
df = pd.read_csv("worldcities.csv")
```
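The preview cuts off before the TSP step, so here is a stdlib-only sketch of what an exact solver like `solve_tsp_dynamic_programming` computes, brute-forcing a toy 4-city distance matrix (the distances are made up, not real inter-city distances):

```python
from itertools import permutations

# Toy symmetric distance matrix for 4 cities (made-up values)
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)


def tour_length(order):
    # Length of the closed tour visiting cities in the given order
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))


best = min(permutations(range(n)), key=tour_length)
print(list(best), tour_length(best))  # [0, 1, 3, 2] 18
```

Brute force is factorial-time, which is why the gist reaches for the dynamic-programming solver instead.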
lmmx / pseudo-bar-chart.mermaid
Last active March 31, 2023 11:36
Using a Mermaid diagram Gantt chart to display a frequency table of values as a pseudo-bar chart. Syntax: https://mermaid.js.org/syntax/gantt.html
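Since the file preview did not survive, here is a minimal sketch of the technique the description outlines, following the linked Gantt syntax: each section is a category, and each task bar runs from 0 to that category's frequency. The labels and values here are made up, not the gist's own data.

```mermaid
gantt
    title Value frequencies (made-up data)
    dateFormat X
    axisFormat %s
    section apple
    4 : 0, 4
    section banana
    6 : 0, 6
    section cherry
    2 : 0, 2
```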
lmmx / gist:bc65be66de8bbbe09b6fa5671f19847b
Created March 29, 2023 09:26
GPT-4 AsyncRun custom handler function for sending output to the buffer rather than the QuickFix window (seems hallucinated). Kind of close to a "custom runner" in https://github.com/skywind3000/asyncrun.vim/wiki/Customize-Runner but I can't see which ones get data from the process output, so unsure which type of runner to adapt
You can customize the behavior of the AsyncRun plugin to insert the output into the current buffer by using a custom handler function.
Follow these steps to set up the custom handler:
- In your .vimrc, add the following function:
```vim
function! MyAsyncHandler(job, data) abort
  let l:lines = split(a:data, "\n")
  call append(line('$'), l:lines[:-2])
endfunction
```
lmmx / dict_lcp_trie_gpt.py
Created March 26, 2023 16:54
LCP trie (trie with multi-character prefixes so nodes are 'common prefix' branches rather than common ancestor) adapted from dict trie in https://ychai.uk/notes/2019/03/03/Programming/Tricks-of-Python/ and mostly generated with GPT-3.5 by describing the algorithm as pseudo-code comments
```python
class TrieNode(object):
    def __init__(self):
        self.data = {}
        self.is_word = False


class Trie(object):
    def __init__(self):
        self.root = TrieNode()
```
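The "common prefix" branching the description refers to can be illustrated with the stdlib; this is not the gist's own insert logic (which the preview cuts off), just the prefix computation the nodes branch on:

```python
import os.path

# An LCP trie branches on the longest common prefix shared by a group of
# words, rather than one character per node
words = ["interview", "internet", "internal", "interval"]
print(os.path.commonprefix(words))  # inter
```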
lmmx / algo_profiling.py
Last active March 26, 2023 21:27
Grouping strings by longest common prefix
```python
import cProfile
import pstats
from pathlib import Path

from dict_lcp_trie_gpt import Trie
from lcp_groups import lcp_trie

n_words = 23
words = Path("wordlist_google-10000-english.txt").read_text().splitlines()[:n_words]
```
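The preview stops before the profiling itself; here is a minimal sketch of the cProfile/pstats pairing the imports suggest, where `work` is a placeholder workload rather than the gist's trie code:

```python
import cProfile
import pstats


def work():
    # Placeholder workload standing in for the trie construction
    return sum(i * i for i in range(10_000))


# Profile the call and report the five most expensive entries
with cProfile.Profile() as prof:
    work()

pstats.Stats(prof).sort_stats("cumulative").print_stats(5)
```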
lmmx / split_string.py
Created March 19, 2023 09:28
Split the string "/a/b/c" into 2 parts: "a" and "/b/c" (first simple task I found both GPT-3.5 and GPT-4 unable to achieve)
```python
# Strip the leading slash, double the first remaining slash, then split on it:
# "/a/b/c" -> "a/b/c" -> "a//b/c" -> ("a", "/b/c")
first, second = "/a/b/c".lstrip("/").replace("/", "//", 1).split("/", 1)
```
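For comparison, a plainer route via `str.partition` that yields the same result without the replace trick:

```python
s = "/a/b/c"
# Drop the leading slash, cut at the first remaining slash,
# then restore the slash onto the tail
first, _, rest = s.lstrip("/").partition("/")
second = "/" + rest
print(first, second)  # a /b/c
```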