@levelsio
levelsio / gist:5bc87fd1b1ffbf4a705047bebd9b4790
Last active May 28, 2024 17:40
Secret of Monkey Island: Amsterdam (by @levelsio) or how to create your own ChatGPT image+text-based adventure game
# 2023-11-27 MIT LICENSE
Here's the open-source version of my ChatGPT game MonkeyIslandAmsterdam.com.
It's an unofficial image+text-based adventure game edition of Monkey Island set in Amsterdam, my hometown.
Please use it however you want. It'd be nice to see more ChatGPT-based games come out of this. If you get inspired by it, please link back to my X https://x.com/levelsio or this Gist so more people can do the same!
Send me your ChatGPT text adventure game on X, I'd love to try it!
@veekaybee
veekaybee / chatgpt.md
Last active June 18, 2024 13:49
Everything I understand about chatgpt

ChatGPT Resources

Context

ChatGPT appeared like an explosion on all my social media timelines in early December 2022. While I keep up with machine learning as an industry, I wasn't focused so much on this particular corner, and all the screenshots seemed like they came out of nowhere. What was this model? How did the chat prompting work? What was the context of OpenAI doing this work and collecting my prompts for training data?

I decided to do a quick investigation. Here's all the information I've found so far. I'm aggregating and synthesizing it as I go, so it's currently changing pretty frequently.

Model Architecture

| Rank | Type   | Prefix/Suffix | Length |
|------|--------|---------------|--------|
| 1    | Prefix | my+           | 2      |
| 2    | Suffix | +online       | 6      |
| 3    | Prefix | the+          | 3      |
| 4    | Suffix | +web          | 3      |
| 5    | Suffix | +media        | 5      |
| 6    | Prefix | web+          | 3      |
| 7    | Suffix | +world        | 5      |
| 8    | Suffix | +net          | 3      |
| 9    | Prefix | go+           | 2      |
@spektom
spektom / generate_hive_schema.scala
Created May 3, 2018 13:47
Generate Hive schema from Spark Dataframe
import org.apache.spark.sql.DataFrame

// Render a DataFrame's schema as a Hive CREATE TABLE statement
def dataFrameToDDL(dataFrame: DataFrame, tableName: String): String = {
  val columns = dataFrame.schema.map { field =>
    " " + field.name + " " + field.dataType.simpleString.toUpperCase
  }
  s"CREATE TABLE $tableName (\n${columns.mkString(",\n")}\n)"
}

import spark.sqlContext.implicits._
@kittinan
kittinan / client.py
Last active June 6, 2024 10:29
Python OpenCV webcam send image frame over socket
import cv2
import io
import socket
import struct
import time
import pickle
import zlib
# open a TCP connection to the receiving server (adjust IP and port for your network)
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(('192.168.1.124', 8485))
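The preview ends right after the connection is opened. Judging from the imports (cv2, pickle, struct, zlib), the rest of the script presumably grabs webcam frames, JPEG-encodes them, and sends each one with a length prefix. A minimal sketch of that send loop, assuming a '>L' length prefix and JPEG quality 90 (these details are guesses, not necessarily the author's exact code; zlib/io/time would cover optional compression and throttling not shown here):

# assumed continuation of client.py -- a sketch, not the gist's exact code
cam = cv2.VideoCapture(0)                                 # open the default webcam
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]        # JPEG quality for the wire format

while True:
    ret, frame = cam.read()
    if not ret:
        break
    ok, jpeg = cv2.imencode('.jpg', frame, encode_param)  # compress the raw frame
    data = pickle.dumps(jpeg, 0)                          # serialize the encoded buffer
    # length-prefix each message so the receiver knows how many bytes to read
    client_socket.sendall(struct.pack(">L", len(data)) + data)

cam.release()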
@hakanilter
hakanilter / setup.sh
Last active April 9, 2021 19:56
AWS EMR Examples Master Setup
#!/bin/bash
# install git
sudo yum install -y git
# maven
sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
sudo yum install -y apache-maven
mvn --version
@pimatco
pimatco / all-angular-material-components-imports.txt
Last active December 6, 2023 15:24
All Angular Material Components Imports from app.module.ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
// Angular Material Components
import {BrowserAnimationsModule} from '@angular/platform-browser/animations';
import {MatCheckboxModule} from '@angular/material';
import {MatButtonModule} from '@angular/material';
@apeletz512
apeletz512 / build_hive_ddl.py
Created March 12, 2017 21:41
Generate Hive DDL string from pyspark.sql.DataFrame.schema object
def build_hive_ddl(
        table_name, object_schema, location, file_format, partition_schema=None, verbose=False):
    """
    :param table_name: the name of the table you want to register in the Hive metastore
    :param object_schema: an instance of pyspark.sql.DataFrame.schema
    :param location: the storage location for this data (an S3 or HDFS filepath)
    :param file_format: a string compatible with the 'STORED AS <format>' Hive DDL syntax
    :param partition_schema: an optional instance of pyspark.sql.DataFrame.schema that holds the
        columns used for partitioning on disk
    :param verbose:
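The preview stops inside the docstring, so the function body isn't shown. For context, here is a minimal sketch of how a DDL builder like this could render a pyspark StructType into Hive DDL (CREATE EXTERNAL TABLE, PARTITIONED BY, STORED AS, LOCATION); the helper name and every detail below are assumptions, not the gist's actual implementation:

# Hypothetical sketch -- not the gist's actual body
def schema_to_columns(schema):
    # render '  name TYPE' lines from a pyspark StructType
    return ',\n'.join(
        '  {} {}'.format(f.name, f.dataType.simpleString().upper()) for f in schema.fields)

def build_hive_ddl_sketch(table_name, object_schema, location, file_format,
                          partition_schema=None):
    ddl = 'CREATE EXTERNAL TABLE IF NOT EXISTS {} (\n{}\n)'.format(
        table_name, schema_to_columns(object_schema))
    if partition_schema:
        ddl += '\nPARTITIONED BY (\n{}\n)'.format(schema_to_columns(partition_schema))
    ddl += "\nSTORED AS {}\nLOCATION '{}'".format(file_format, location)
    return ddl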
@colophonemes
colophonemes / create_triggers
Last active February 17, 2024 15:15
Postgres TRIGGER to call NOTIFY with a JSON payload
CREATE TRIGGER person_notify AFTER INSERT OR UPDATE OR DELETE ON person
FOR EACH ROW EXECUTE PROCEDURE notify_trigger(
  'id',
  'email',
  'username'
);

CREATE TRIGGER income_notify AFTER INSERT OR UPDATE OR DELETE ON income
FOR EACH ROW EXECUTE PROCEDURE notify_trigger(
  'id',
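The preview cuts off mid-statement, and the notify_trigger() function itself isn't shown; per the gist title it issues NOTIFY with a JSON payload describing the changed row. For the consuming side, a minimal Python sketch with psycopg2, assuming a hypothetical channel name 'db_notifications' and placeholder connection details (the real channel is whatever the gist's trigger function passes to pg_notify):

# Hypothetical listener sketch; the channel name and DSN are placeholders
import json
import select
import psycopg2

conn = psycopg2.connect('dbname=mydb user=me')
conn.autocommit = True                      # LISTEN/NOTIFY works outside explicit transactions

cur = conn.cursor()
cur.execute('LISTEN db_notifications;')     # subscribe to the NOTIFY channel

while True:
    # block until the connection's socket is readable, then drain pending notifications
    if select.select([conn], [], [], 5) != ([], [], []):
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            payload = json.loads(note.payload)   # the trigger sends a JSON payload
            print(note.channel, payload)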