start new:
tmux
start new with session name:
tmux new -s myname
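a few more everyday commands (standard tmux CLI):
list sessions:
tmux ls
attach to a named session:
tmux attach -t myname
kill a named session:
tmux kill-session -t myname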
#!/usr/bin/env python
"""Simple HTTP Server With Upload.

This module builds on BaseHTTPServer by implementing the standard GET
and HEAD requests in a fairly straightforward manner.
"""
#!/bin/sh
# Install brew
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
# Apple hides old versions of stuff at https://developer.apple.com/download/more/
# Install the latest Xcode (8.0).
# We used to install the Xcode Command Line Tools 7.3 here, but that would just upset the most recent versions of brew.
# So we're going to install all our brew dependencies first, and then downgrade the tools. You can switch back after
# you have installed caffe.
# Install CUDA toolkit 8.0 release candidate
# Register and download from https://developer.nvidia.com/cuda-release-candidate-download
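# The "downgrade the tools / switch back" step above is usually done with
# xcode-select; the paths below are the standard install locations (a sketch,
# not taken from the original script):
#   sudo xcode-select --switch /Library/Developer/CommandLineTools
#   sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer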
const SECRET_KEY = "ENTER YOUR SECRET KEY HERE";
const MAX_TOKENS = 200;

// For more cool AI snippets and demos, follow me on Twitter: https://twitter.com/_abi_

/**
 * Completes your prompt with GPT-3
 *
 * @param {string} prompt Prompt
 * @param {number} temperature (Optional) Temperature. 1 is super creative while 0 is very exact and precise. Defaults to 0.4.
 */
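// NOTE: the function body is cut off above. Below is a hedged sketch of what
// such a completion call could look like, assuming this runs as a Google Apps
// Script function (UrlFetchApp). The function name, endpoint, model name, and
// response shape are assumptions based on OpenAI's legacy completions API,
// not the original snippet.
function GPT3(prompt, temperature = 0.4) {
  const response = UrlFetchApp.fetch("https://api.openai.com/v1/completions", {
    method: "post",
    contentType: "application/json",
    headers: { Authorization: "Bearer " + SECRET_KEY },
    payload: JSON.stringify({
      model: "text-davinci-003", // assumed completions-era model
      prompt: prompt,
      temperature: temperature,
      max_tokens: MAX_TOKENS,
    }),
  });
  // Pull the first completion's text out of the JSON response.
  return JSON.parse(response.getContentText()).choices[0].text.trim();
}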
# Setup:
# conda create -n whisper python=3.9
# conda activate whisper
# https://github.com/openai/whisper
# pip install git+https://github.com/openai/whisper.git
# Usage:
# python whisper-audio-to-text.py --audio_dir my_files --out_dir texts
import argparse
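# What follows is a hedged sketch of the rest of such a script, consistent
# with the usage line above; the model size, file iteration, and output
# naming are assumptions, not the original code.
import os

import whisper

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--audio_dir", required=True)
    parser.add_argument("--out_dir", required=True)
    args = parser.parse_args()

    os.makedirs(args.out_dir, exist_ok=True)
    model = whisper.load_model("base")  # model size is an assumption
    for name in os.listdir(args.audio_dir):
        # Transcribe each audio file and write the text to out_dir.
        result = model.transcribe(os.path.join(args.audio_dir, name))
        out_name = os.path.splitext(name)[0] + ".txt"
        with open(os.path.join(args.out_dir, out_name), "w") as f:
            f.write(result["text"])

if __name__ == "__main__":
    main()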
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much repeats his argument.
from gradio_client import Client

API_URL = "https://sanchit-gandhi-whisper-jax.hf.space/"

# set up the Gradio client
client = Client(API_URL)

def transcribe_audio(audio_path, task="transcribe", return_timestamps=False):
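    # The body is cut off above; this completion is a hedged sketch. The
    # api_name and the (text, runtime) return shape are assumptions about
    # the Space's endpoint, not the original code.
    text, runtime = client.predict(audio_path, task, return_timestamps, api_name="/predict_1")
    return text

# Hypothetical usage (the file name is a placeholder):
# print(transcribe_audio("sample.mp3"))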
/*
the twitter api is stupid. it is stupid and bad and expensive. hence, this.

Literally just paste this in the JS console on the bookmarks tab and the script will automatically scroll to the bottom of your bookmarks and keep track of them as it goes.

When finished, it downloads a JSON file containing the raw text content of every bookmark.

for now it stores just the text inside the tweet itself, but if you're reading this why don't you go ahead and try to also store other information (author, tweetLink, pictures, everything). come on. do it. please?
*/
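// A hedged sketch of the scroll-and-collect loop described above. The
// data-testid selector and the stop condition are assumptions about
// Twitter's DOM, not the original script.
const collected = new Set();
let lastHeight = 0;
let stableRounds = 0;

const timer = setInterval(() => {
  // Grab the text of every tweet currently rendered on the page.
  document.querySelectorAll('[data-testid="tweetText"]').forEach((el) => {
    collected.add(el.innerText);
  });
  window.scrollTo(0, document.body.scrollHeight);

  // Stop once the page height stops growing for a few rounds in a row.
  if (document.body.scrollHeight === lastHeight) {
    stableRounds += 1;
  } else {
    stableRounds = 0;
    lastHeight = document.body.scrollHeight;
  }

  if (stableRounds >= 3) {
    clearInterval(timer);
    // Download everything collected as a JSON file.
    const blob = new Blob([JSON.stringify([...collected], null, 2)], { type: "application/json" });
    const link = document.createElement("a");
    link.href = URL.createObjectURL(blob);
    link.download = "bookmarks.json";
    link.click();
  }
}, 2000);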
/* Enhancements to the Twitter Scraping Script:

This update to the script introduces a more robust mechanism for extracting detailed interaction data from tweets as they are scraped from Twitter. Previously, the script focused on collecting basic content such as the tweet's text. Now, it has been augmented to include a comprehensive extraction of interaction metrics, including replies, reposts, likes, bookmarks, and views, for each tweet.

Key Changes:
1. Improved Data Extraction:
   - The script now searches through all elements within a tweet that have an `aria-label` attribute, filtering for labels that contain key interaction terms (replies, reposts, likes, bookmarks, views). This ensures that only relevant `aria-label` values are considered for data extraction.
2. Flexible Interaction Data Parsing: