@brockmanmatt
Last active June 2, 2021 18:27
rhymeLineGeneration.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "rhymeLineGeneration.ipynb",
"provenance": [],
"collapsed_sections": [],
"authorship_tag": "ABX9TyM3Rv4zmmh3Akn0susbEOzp",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/brockmanmatt/9c1b465ff0b853cef201dc7b7d607931/rhymelinegeneration.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"metadata": {
"id": "J7wnsgT2kPut",
"colab_type": "code",
"colab": {
"resources": {
"http://localhost:8080/nbextensions/google.colab/files.js": {
"data": "Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlIExMQwovLwovLyBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKLy8geW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3aXRoIHRoZSBMaWNlbnNlLgovLyBZb3UgbWF5IG9idGFpbiBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKLy8KLy8gICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKLy8KLy8gVW5sZXNzIHJlcXVpcmVkIGJ5IGFwcGxpY2FibGUgbGF3IG9yIGFncmVlZCB0byBpbiB3cml0aW5nLCBzb2Z0d2FyZQovLyBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAovLyBXSVRIT1VUIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4KLy8gU2VlIHRoZSBMaWNlbnNlIGZvciB0aGUgc3BlY2lmaWMgbGFuZ3VhZ2UgZ292ZXJuaW5nIHBlcm1pc3Npb25zIGFuZAovLyBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCi8qKgogKiBAZmlsZW92ZXJ2aWV3IEhlbHBlcnMgZm9yIGdvb2dsZS5jb2xhYiBQeXRob24gbW9kdWxlLgogKi8KKGZ1bmN0aW9uKHNjb3BlKSB7CmZ1bmN0aW9uIHNwYW4odGV4dCwgc3R5bGVBdHRyaWJ1dGVzID0ge30pIHsKICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnc3BhbicpOwogIGVsZW1lbnQudGV4dENvbnRlbnQgPSB0ZXh0OwogIGZvciAoY29uc3Qga2V5IG9mIE9iamVjdC5rZXlzKHN0eWxlQXR0cmlidXRlcykpIHsKICAgIGVsZW1lbnQuc3R5bGVba2V5XSA9IHN0eWxlQXR0cmlidXRlc1trZXldOwogIH0KICByZXR1cm4gZWxlbWVudDsKfQoKLy8gTWF4IG51bWJlciBvZiBieXRlcyB3aGljaCB3aWxsIGJlIHVwbG9hZGVkIGF0IGEgdGltZS4KY29uc3QgTUFYX1BBWUxPQURfU0laRSA9IDEwMCAqIDEwMjQ7CgpmdW5jdGlvbiBfdXBsb2FkRmlsZXMoaW5wdXRJZCwgb3V0cHV0SWQpIHsKICBjb25zdCBzdGVwcyA9IHVwbG9hZEZpbGVzU3RlcChpbnB1dElkLCBvdXRwdXRJZCk7CiAgY29uc3Qgb3V0cHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKG91dHB1dElkKTsKICAvLyBDYWNoZSBzdGVwcyBvbiB0aGUgb3V0cHV0RWxlbWVudCB0byBtYWtlIGl0IGF2YWlsYWJsZSBmb3IgdGhlIG5leHQgY2FsbAogIC8vIHRvIHVwbG9hZEZpbGVzQ29udGludWUgZnJvbSBQeXRob24uCiAgb3V0cHV0RWxlbWVudC5zdGVwcyA9IHN0ZXBzOwoKICByZXR1cm4gX3VwbG9hZEZpbGVzQ29udGludWUob3V0cHV0SWQpOwp9CgovLyBUaGlzIGlzIHJvdWdobHkgYW4gYXN5bmMgZ2VuZXJhdG9yIChub3Qgc3VwcG9ydGVkIGluIHRoZSBicm93c2VyIHlldCksCi8vIHdoZXJlIHRoZXJlIGFyZSBtdWx0aXBsZSBhc3luY2hyb25vdXMgc3RlcHMgYW5kIHRoZSBQeXRob24gc2lkZSBpcyBnb2luZwovLyB0byBwb2xsIGZvciBjb21wbGV0aW9uIG9mIGVhY2ggc3RlcC4KLy8gVGhpcyB1c2VzIGEgUHJvbWlzZSB0byBibG9jayB0aGUgcHl0aG9uIHNpZGUgb24gY29tcGxldGlvbiBvZiBlYWNoIHN0ZXAsCi8vIHRoZW4gcGFzc2VzIHRoZSByZXN1bHQgb2YgdGhlIHByZXZpb3VzIHN0ZXAgYXMgdGhlIGlucHV0IHRvIHRoZSBuZXh0IHN0ZXAuCmZ1bmN0aW9uIF91cGxvYWRGaWxlc0NvbnRpbnVlKG91dHB1dElkKSB7CiAgY29uc3Qgb3V0cHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKG91dHB1dElkKTsKICBjb25zdCBzdGVwcyA9IG91dHB1dEVsZW1lbnQuc3RlcHM7CgogIGNvbnN0IG5leHQgPSBzdGVwcy5uZXh0KG91dHB1dEVsZW1lbnQubGFzdFByb21pc2VWYWx1ZSk7CiAgcmV0dXJuIFByb21pc2UucmVzb2x2ZShuZXh0LnZhbHVlLnByb21pc2UpLnRoZW4oKHZhbHVlKSA9PiB7CiAgICAvLyBDYWNoZSB0aGUgbGFzdCBwcm9taXNlIHZhbHVlIHRvIG1ha2UgaXQgYXZhaWxhYmxlIHRvIHRoZSBuZXh0CiAgICAvLyBzdGVwIG9mIHRoZSBnZW5lcmF0b3IuCiAgICBvdXRwdXRFbGVtZW50Lmxhc3RQcm9taXNlVmFsdWUgPSB2YWx1ZTsKICAgIHJldHVybiBuZXh0LnZhbHVlLnJlc3BvbnNlOwogIH0pOwp9CgovKioKICogR2VuZXJhdG9yIGZ1bmN0aW9uIHdoaWNoIGlzIGNhbGxlZCBiZXR3ZWVuIGVhY2ggYXN5bmMgc3RlcCBvZiB0aGUgdXBsb2FkCiAqIHByb2Nlc3MuCiAqIEBwYXJhbSB7c3RyaW5nfSBpbnB1dElkIEVsZW1lbnQgSUQgb2YgdGhlIGlucHV0IGZpbGUgcGlja2VyIGVsZW1lbnQuCiAqIEBwYXJhbSB7c3RyaW5nfSBvdXRwdXRJZCBFbGVtZW50IElEIG9mIHRoZSBvdXRwdXQgZGlzcGxheS4KICogQHJldHVybiB7IUl0ZXJhYmxlPCFPYmplY3Q+fSBJdGVyYWJsZSBvZiBuZXh0IHN0ZXBzLgogKi8KZnVuY3Rpb24qIHVwbG9hZEZpbGVzU3RlcChpbnB1dElkLCBvdXRwdXRJZCkgewogIGNvbnN0IGlucHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKGlucHV0SWQpOwogIGlucHV0RWxlbWVudC5kaXNhYmxlZCA9IGZhbHNlOwoKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIG91dHB1dEVsZW1lbnQuaW5uZXJIVE
1MID0gJyc7CgogIGNvbnN0IHBpY2tlZFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgaW5wdXRFbGVtZW50LmFkZEV2ZW50TGlzdGVuZXIoJ2NoYW5nZScsIChlKSA9PiB7CiAgICAgIHJlc29sdmUoZS50YXJnZXQuZmlsZXMpOwogICAgfSk7CiAgfSk7CgogIGNvbnN0IGNhbmNlbCA9IGRvY3VtZW50LmNyZWF0ZUVsZW1lbnQoJ2J1dHRvbicpOwogIGlucHV0RWxlbWVudC5wYXJlbnRFbGVtZW50LmFwcGVuZENoaWxkKGNhbmNlbCk7CiAgY2FuY2VsLnRleHRDb250ZW50ID0gJ0NhbmNlbCB1cGxvYWQnOwogIGNvbnN0IGNhbmNlbFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgY2FuY2VsLm9uY2xpY2sgPSAoKSA9PiB7CiAgICAgIHJlc29sdmUobnVsbCk7CiAgICB9OwogIH0pOwoKICAvLyBXYWl0IGZvciB0aGUgdXNlciB0byBwaWNrIHRoZSBmaWxlcy4KICBjb25zdCBmaWxlcyA9IHlpZWxkIHsKICAgIHByb21pc2U6IFByb21pc2UucmFjZShbcGlja2VkUHJvbWlzZSwgY2FuY2VsUHJvbWlzZV0pLAogICAgcmVzcG9uc2U6IHsKICAgICAgYWN0aW9uOiAnc3RhcnRpbmcnLAogICAgfQogIH07CgogIGNhbmNlbC5yZW1vdmUoKTsKCiAgLy8gRGlzYWJsZSB0aGUgaW5wdXQgZWxlbWVudCBzaW5jZSBmdXJ0aGVyIHBpY2tzIGFyZSBub3QgYWxsb3dlZC4KICBpbnB1dEVsZW1lbnQuZGlzYWJsZWQgPSB0cnVlOwoKICBpZiAoIWZpbGVzKSB7CiAgICByZXR1cm4gewogICAgICByZXNwb25zZTogewogICAgICAgIGFjdGlvbjogJ2NvbXBsZXRlJywKICAgICAgfQogICAgfTsKICB9CgogIGZvciAoY29uc3QgZmlsZSBvZiBmaWxlcykgewogICAgY29uc3QgbGkgPSBkb2N1bWVudC5jcmVhdGVFbGVtZW50KCdsaScpOwogICAgbGkuYXBwZW5kKHNwYW4oZmlsZS5uYW1lLCB7Zm9udFdlaWdodDogJ2JvbGQnfSkpOwogICAgbGkuYXBwZW5kKHNwYW4oCiAgICAgICAgYCgke2ZpbGUudHlwZSB8fCAnbi9hJ30pIC0gJHtmaWxlLnNpemV9IGJ5dGVzLCBgICsKICAgICAgICBgbGFzdCBtb2RpZmllZDogJHsKICAgICAgICAgICAgZmlsZS5sYXN0TW9kaWZpZWREYXRlID8gZmlsZS5sYXN0TW9kaWZpZWREYXRlLnRvTG9jYWxlRGF0ZVN0cmluZygpIDoKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgJ24vYSd9IC0gYCkpOwogICAgY29uc3QgcGVyY2VudCA9IHNwYW4oJzAlIGRvbmUnKTsKICAgIGxpLmFwcGVuZENoaWxkKHBlcmNlbnQpOwoKICAgIG91dHB1dEVsZW1lbnQuYXBwZW5kQ2hpbGQobGkpOwoKICAgIGNvbnN0IGZpbGVEYXRhUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICAgIGNvbnN0IHJlYWRlciA9IG5ldyBGaWxlUmVhZGVyKCk7CiAgICAgIHJlYWRlci5vbmxvYWQgPSAoZSkgPT4gewogICAgICAgIHJlc29sdmUoZS50YXJnZXQucmVzdWx0KTsKICAgICAgfTsKICAgICAgcmVhZGVyLnJlYWRBc0FycmF5QnVmZmVyKGZpbGUpOwogICAgfSk7CiAgICAvLyBXYWl0IGZvciB0aGUgZGF0YSB0byBiZSByZWFkeS4KICAgIGxldCBmaWxlRGF0YSA9IHlpZWxkIHsKICAgICAgcHJvbWlzZTogZmlsZURhdGFQcm9taXNlLAogICAgICByZXNwb25zZTogewogICAgICAgIGFjdGlvbjogJ2NvbnRpbnVlJywKICAgICAgfQogICAgfTsKCiAgICAvLyBVc2UgYSBjaHVua2VkIHNlbmRpbmcgdG8gYXZvaWQgbWVzc2FnZSBzaXplIGxpbWl0cy4gU2VlIGIvNjIxMTU2NjAuCiAgICBsZXQgcG9zaXRpb24gPSAwOwogICAgd2hpbGUgKHBvc2l0aW9uIDwgZmlsZURhdGEuYnl0ZUxlbmd0aCkgewogICAgICBjb25zdCBsZW5ndGggPSBNYXRoLm1pbihmaWxlRGF0YS5ieXRlTGVuZ3RoIC0gcG9zaXRpb24sIE1BWF9QQVlMT0FEX1NJWkUpOwogICAgICBjb25zdCBjaHVuayA9IG5ldyBVaW50OEFycmF5KGZpbGVEYXRhLCBwb3NpdGlvbiwgbGVuZ3RoKTsKICAgICAgcG9zaXRpb24gKz0gbGVuZ3RoOwoKICAgICAgY29uc3QgYmFzZTY0ID0gYnRvYShTdHJpbmcuZnJvbUNoYXJDb2RlLmFwcGx5KG51bGwsIGNodW5rKSk7CiAgICAgIHlpZWxkIHsKICAgICAgICByZXNwb25zZTogewogICAgICAgICAgYWN0aW9uOiAnYXBwZW5kJywKICAgICAgICAgIGZpbGU6IGZpbGUubmFtZSwKICAgICAgICAgIGRhdGE6IGJhc2U2NCwKICAgICAgICB9LAogICAgICB9OwogICAgICBwZXJjZW50LnRleHRDb250ZW50ID0KICAgICAgICAgIGAke01hdGgucm91bmQoKHBvc2l0aW9uIC8gZmlsZURhdGEuYnl0ZUxlbmd0aCkgKiAxMDApfSUgZG9uZWA7CiAgICB9CiAgfQoKICAvLyBBbGwgZG9uZS4KICB5aWVsZCB7CiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICB9CiAgfTsKfQoKc2NvcGUuZ29vZ2xlID0gc2NvcGUuZ29vZ2xlIHx8IHt9OwpzY29wZS5nb29nbGUuY29sYWIgPSBzY29wZS5nb29nbGUuY29sYWIgfHwge307CnNjb3BlLmdvb2dsZS5jb2xhYi5fZmlsZXMgPSB7CiAgX3VwbG9hZEZpbGVzLAogIF91cGxvYWRGaWxlc0NvbnRpbnVlLAp9Owp9KShzZWxmKTsK",
"ok": true,
"headers": [
[
"content-type",
"application/javascript"
]
],
"status": 200,
"status_text": ""
}
},
"base_uri": "https://localhost:8080/",
"height": 89
},
"outputId": "b93186dc-eaaa-4d7d-9e0f-30ee7e5192d9"
},
"source": [
"from google.colab import files\n",
"uploaded = files.upload()\n",
"print(\"done\")"
],
"execution_count": 2,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <input type=\"file\" id=\"files-c258c032-6ae2-4c80-a48d-d42d7df6230c\" name=\"files[]\" multiple disabled\n",
" style=\"border:none\" />\n",
" <output id=\"result-c258c032-6ae2-4c80-a48d-d42d7df6230c\">\n",
" Upload widget is only available when the cell has been executed in the\n",
" current browser session. Please rerun this cell to enable.\n",
" </output>\n",
" <script src=\"/nbextensions/google.colab/files.js\"></script> "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {
"tags": []
}
},
{
"output_type": "stream",
"text": [
"Saving key.json to key.json\n",
"done\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WHPHrUnhpKnI",
"colab_type": "text"
},
"source": [
"I'll install the API"
]
},
{
"cell_type": "code",
"metadata": {
"id": "zq0ltp2xn4yt",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 292
},
"outputId": "88d8ac59-d8a5-4776-9486-d1bc7d5505f5"
},
"source": [
"!pip install openai\n",
"import openai, json, random, pandas as pd, numpy as np, re"
],
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"text": [
"Collecting openai\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/a8/65/c7461f4c87984534683f480ea5742777bc39bbf5721123194c2d0347dc1f/openai-0.2.4.tar.gz (157kB)\n",
"\r\u001b[K |██ | 10kB 20.4MB/s eta 0:00:01\r\u001b[K |████▏ | 20kB 6.9MB/s eta 0:00:01\r\u001b[K |██████▎ | 30kB 7.7MB/s eta 0:00:01\r\u001b[K |████████▍ | 40kB 8.7MB/s eta 0:00:01\r\u001b[K |██████████▍ | 51kB 7.1MB/s eta 0:00:01\r\u001b[K |████████████▌ | 61kB 7.8MB/s eta 0:00:01\r\u001b[K |██████████████▋ | 71kB 8.7MB/s eta 0:00:01\r\u001b[K |████████████████▊ | 81kB 9.2MB/s eta 0:00:01\r\u001b[K |██████████████████▊ | 92kB 9.4MB/s eta 0:00:01\r\u001b[K |████████████████████▉ | 102kB 9.8MB/s eta 0:00:01\r\u001b[K |███████████████████████ | 112kB 9.8MB/s eta 0:00:01\r\u001b[K |█████████████████████████ | 122kB 9.8MB/s eta 0:00:01\r\u001b[K |███████████████████████████ | 133kB 9.8MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▏ | 143kB 9.8MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▎| 153kB 9.8MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 163kB 9.8MB/s \n",
"\u001b[?25hRequirement already satisfied: requests>=2.20 in /usr/local/lib/python3.6/dist-packages (from openai) (2.23.0)\n",
"Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->openai) (2.10)\n",
"Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->openai) (1.24.3)\n",
"Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->openai) (3.0.4)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->openai) (2020.6.20)\n",
"Building wheels for collected packages: openai\n",
" Building wheel for openai (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for openai: filename=openai-0.2.4-cp36-none-any.whl size=170709 sha256=b32fc534c67eb22610e64a381c620521c117d4e0a482957cba260e4e99428335\n",
" Stored in directory: /root/.cache/pip/wheels/74/96/c8/c6e170929c276b836613e1b9985343b501fe455e53d85e7d48\n",
"Successfully built openai\n",
"Installing collected packages: openai\n",
"Successfully installed openai-0.2.4\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Q2yE0jcnpMEV",
"colab_type": "text"
},
"source": [
"Loading in key.json that I uploaded; I do this so I don't need to worry about accidently leaking creds if I share the colab (which I'm 99% sure is just a json file that won't expose them)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "bwNXXwHen5x9",
"colab_type": "code",
"colab": {}
},
"source": [
"openai.api_key = json.load(open(\"key.json\", \"r\"))[\"key\"]"
],
"execution_count": 4,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "k67w5H0fpTkT",
"colab_type": "text"
},
"source": [
"Default keyword arguments to pass the aPI"
]
},
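{
"cell_type": "code",
"metadata": {},
"source": [
"# A minimal sketch of the Completion kwargs this notebook sends to the API.\n",
"# The name defaultKwargs is just illustrative; the functions below build their own\n",
"# kwargs dicts inline with these same fields (getPossibleNextRhymingWords uses these\n",
"# exact values, getBestNextLine tweaks temperature and penalties per call).\n",
"defaultKwargs = {\n",
"    \"engine\": \"davinci\",\n",
"    \"temperature\": 0,\n",
"    \"max_tokens\": 20,\n",
"    \"stop\": \"\\n\",\n",
"    \"logprobs\": 1,\n",
"    \"presence_penalty\": 5,\n",
"}"
],
"execution_count": null,
"outputs": []
},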
{
"cell_type": "code",
"metadata": {
"id": "CHHFR46YJ58i",
"colab_type": "code",
"colab": {}
},
"source": [
"# steps\n",
"#### SELECT PREIVOUS SENTENCE TO RHYME NOT DONE, just taking one randomly ####\n",
"# Generate new word for next sentence\n",
"# generrate next sentence\n",
"# cycle"
],
"execution_count": 5,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "NAsuHFqGlbpk",
"colab_type": "code",
"colab": {}
},
"source": [
"def getPossibleNextRhymingWords(lines, lineNumberToRhyme, numWords=10, verbose=False):\n",
" \"\"\"\n",
" This generatess numWords words that rhyme with the lineNumberToRhyme of lines\n",
" \"\"\"\n",
" kwargs = {\n",
" \"engine\":\"davinci\",\n",
" \"temperature\":0,\n",
" \"max_tokens\":20,\n",
" \"stop\":\"\\n\",\n",
" \"logprobs\":1,\n",
" \"presence_penalty\":5,\n",
" }\n",
"\n",
" rhymeLine = lines.split(\"\\n\")[lineNumberToRhyme]\n",
" rhymeWord = re.sub(r'[^\\w\\s]','',rhymeLine).strip().split()[-1]\n",
" if verbose:\n",
" print(\"RHYMING FOR: {}\".format(rhymeWord))\n",
"\n",
" context = \"\"\"Words are rhyming if their endings sound the same. For example, 'snout' rhymes with 'about' and 'hurtle' rhymes with 'turtle'. Consider this poem:\n",
"{}\n",
"\n",
"q: What is a word to add that rhymes with '{}'?\n",
"\"\"\".format(lines, rhymeWord)\n",
"\n",
" prompt = context + \"a:\"\n",
"\n",
" possibilities = []\n",
" attempt = 0\n",
" while len(possibilities) < numWords:\n",
" attempt += 1\n",
" r = openai.Completion.create(prompt=prompt, **kwargs)\n",
" newWord = r[\"choices\"][0][\"text\"].strip()\n",
" newWord = re.sub(r'[^\\w\\s]', '', newWord)\n",
" if verbose:\n",
" print(\"attempt {}: {}/{}\".format(newWord, len(possibilities),attempt))\n",
" if len(newWord.split()) > 2:\n",
" continue\n",
" if newWord in possibilities+[rhymeWord]:\n",
" if kwargs[\"temperature\"] < 2:\n",
" kwargs[\"temperature\"] += .1\n",
" continue\n",
" kwargs[\"temperature\"] = 0\n",
" possibilities.append(newWord)\n",
"\n",
" prompt = context\n",
" idx = 0\n",
" for example in random.sample(possibilities, k=len(possibilities)):\n",
" idx += 1\n",
" prompt += \"{}: {}\\n\".format(idx, example)\n",
" prompt += \"{}:\".format(len(possibilities)+1)\n",
"\n",
"\n",
" return possibilities"
],
"execution_count": 80,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "7U8pZBexmRv5",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 527
},
"outputId": "3e6539d6-b389-4424-f48a-0d29b0c94bdd"
},
"source": [
"partial = \"\"\"I met a traveler from a far off land,\n",
"who said what is the meaning of this here sign,\"\"\"\n",
"\n",
"possibilities = getPossibleNextRhymingWords(partial, 1, numWords=5, verbose=True)"
],
"execution_count": 81,
"outputs": [
{
"output_type": "stream",
"text": [
"RHYMING FOR: sign\n",
"attempt sign: 0/1\n",
"attempt sign: 0/2\n",
"attempt rhyme: 0/3\n",
"attempt rhyme: 1/4\n",
"attempt rhyme: 1/5\n",
"attempt rhyme: 1/6\n",
"attempt rhyme: 1/7\n",
"attempt sign: 1/8\n",
"attempt time: 1/9\n",
"attempt rhyme: 2/10\n",
"attempt rhyme: 2/11\n",
"attempt rhyme: 2/12\n",
"attempt rhyme: 2/13\n",
"attempt rhyme: 2/14\n",
"attempt rhyme: 2/15\n",
"attempt rhyme: 2/16\n",
"attempt mine: 2/17\n",
"attempt rhyme: 3/18\n",
"attempt sign: 3/19\n",
"attempt sign: 3/20\n",
"attempt sign: 3/21\n",
"attempt sign: 3/22\n",
"attempt sign: 3/23\n",
"attempt fine: 3/24\n",
"attempt mine: 4/25\n",
"attempt mine: 4/26\n",
"attempt rhyme: 4/27\n",
"attempt rhyme: 4/28\n",
"attempt line: 4/29\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "J5HDBof0zhmj",
"colab_type": "text"
},
"source": [
"# need additional checking, sometimes starts generating non-rhymes"
]
},
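{
"cell_type": "code",
"metadata": {},
"source": [
"# Rough sketch of the extra check mentioned above: instead of trusting the model,\n",
"# verify candidates against a phonetic rhyming dictionary. The third-party\n",
"# 'pronouncing' package (CMU dict) and the looksLikeRhyme helper are assumptions\n",
"# for illustration only; nothing else in the notebook depends on them.\n",
"!pip install pronouncing\n",
"import pronouncing\n",
"\n",
"def looksLikeRhyme(candidate, target):\n",
"    # True if the CMU pronouncing dictionary lists candidate as a rhyme of target\n",
"    return candidate.lower() in pronouncing.rhymes(target.lower())\n",
"\n",
"# filter the candidates generated above against the word they were meant to rhyme with ('sign')\n",
"[w for w in possibilities if looksLikeRhyme(w, \"sign\")]"
],
"execution_count": null,
"outputs": []
},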
{
"cell_type": "code",
"metadata": {
"id": "ri_BcaloynIX",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"outputId": "c1181b79-14ff-400b-aaf7-b5699f11a041"
},
"source": [
"\", \".join(possibilities)"
],
"execution_count": 20,
"outputs": [
{
"output_type": "execute_result",
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'pine, mine, line, fine, nine'"
]
},
"metadata": {
"tags": []
},
"execution_count": 20
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "e7WAT2PGxhLf",
"colab_type": "code",
"colab": {}
},
"source": [
"def getAvgUntilNewLineFromLogprobs(someDict):\n",
" \"\"\"\n",
" get mean logprob from the API JSON for a series, assumes the 'stop' token is '\\n'\n",
" \"\"\"\n",
"\n",
" probs = dict(someDict)\n",
" end = len(someDict[\"token_logprobs\"])\n",
" if \"\\n\" in someDict[\"tokens\"]:\n",
" end = someDict[\"tokens\"].index(\"\\n\")\n",
" return np.mean(someDict[\"token_logprobs\"][:end])\n"
],
"execution_count": 21,
"outputs": []
},
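{
"cell_type": "code",
"metadata": {},
"source": [
"# Quick illustrative check of the helper above, using a made-up logprobs dict\n",
"# shaped like the API response (parallel lists of tokens and per-token logprobs).\n",
"exampleLogprobs = {\n",
"    \"tokens\": [\"and\", \" I\", \" agree\", \"\\n\", \" extra\"],\n",
"    \"token_logprobs\": [-0.5, -1.0, -1.5, -0.1, -9.0],\n",
"}\n",
"# only the three tokens before the newline get averaged: (-0.5 - 1.0 - 1.5) / 3 = -1.0\n",
"getAvgUntilNewLineFromLogprobs(exampleLogprobs)"
],
"execution_count": null,
"outputs": []
},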
{
"cell_type": "code",
"metadata": {
"id": "uaFQLyu8mnfM",
"colab_type": "code",
"colab": {}
},
"source": [
"def getBestNextLine(lines, possibilities, returnAll=False, verbose=False):\n",
"\n",
" \"\"\"\n",
" Generates line to continue lines.\n",
" each line ends with a possibleWord in possibilities\n",
" default returns highest logprob line (like in best_of)\n",
" with returnAll returns a pandas dataframe of the lines + logprobs\n",
" \"\"\"\n",
"\n",
" rawContext = \"\"\"Line 1: I was walking outside when I saw a cat\n",
"Line 2: When I came closer I saw clearly,\n",
"Line 3: *illegible text* hat\n",
"q: what does the illegible text say?\n",
"a: on the feline's head was a tall texas hat\n",
"\n",
"Line 1: The bartender was pouring a drink,\n",
"Line 2: While the weather outside turned sour,\n",
"Line 3: *illegible text* think\n",
"q: what does the illegible text say?\n",
"a: and I was starting to think\n",
"\n",
"{}\n",
"q: what does the illegible text say?\n",
"a:\"\"\"\n",
"\n",
" sentenceKwargs = {\n",
" \"engine\":\"davinci\",\n",
" \"temperature\":.5,\n",
" \"max_tokens\":25,\n",
" \"stop\":\"\\n\",\n",
" \"logprobs\":1,\n",
" \"presence_penalty\":.3\n",
" }\n",
"\n",
" good_sentences = []\n",
" for word in possibilities:\n",
" if verbose:\n",
" print(\"TRYING {}\".format(word.upper()))\n",
"\n",
" examples = \"\"\n",
" i = 0\n",
" for line in lines.split(\"\\n\"):\n",
" i+=1\n",
" examples += \"Line {}: {}\\n\".format(i, line)\n",
" examples += \"Line {}: *illegible text* {}\".format(i+1, word)\n",
"\n",
" context = rawContext.format(examples)\n",
"\n",
" for i in range(1,25):\n",
" r = openai.Completion.create(prompt=context.format(word, word), **sentenceKwargs)\n",
" newSentence = r[\"choices\"][0][\"text\"]\n",
" lastWord = re.sub(r'[^\\w\\s]','',newSentence.strip().split()[-1])\n",
" if lastWord.lower() == word.lower():\n",
" if verbose:\n",
" print(\"FOUN D IT! {} on try {}\".format(newSentence, i))\n",
" good_sentences.append(r)\n",
" break\n",
" if i % 10 == 0:\n",
" if verbose:\n",
" print(\"{}: {}\".format(i, newSentence))\n",
" \n",
" texts = []\n",
" logprobs = []\n",
" for sentence in good_sentences:\n",
" texts.append(sentence[\"choices\"][0][\"text\"])\n",
" logprobs.append(sentence[\"choices\"][0][\"logprobs\"])\n",
"\n",
" df = pd.DataFrame(list(zip(texts, logprobs)), columns=[\"text\", \"logprobs\"])\n",
" df[\"mean_logprob\"] = df.logprobs.apply(lambda x: getAvgUntilNewLineFromLogprobs(x))\n",
"\n",
" if returnAll:\n",
" return df\n",
" return df.loc[df.mean_logprob.idxmin(), \"text\"]"
],
"execution_count": 74,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "TzO-8DAuw-ok",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 187
},
"outputId": "6e3c568a-94b1-4e78-8df2-5f6b0f7049e2"
},
"source": [
"test = getBestNextLine(partial, possibilities, returnAll=True, verbose=True)"
],
"execution_count": 23,
"outputs": [
{
"output_type": "stream",
"text": [
"TRYING PINE\n",
"FOUN D IT! I was thinking that the sign said pine on try 2\n",
"TRYING MINE\n",
"FOUN D IT! and I told him it was mine on try 1\n",
"TRYING LINE\n",
"FOUN D IT! I'm going to have to draw a line on try 3\n",
"TRYING FINE\n",
"FOUN D IT! and I said it means you have to be fine on try 4\n",
"TRYING NINE\n",
"FOUN D IT! he said the meaning of this here sign is nine on try 1\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u9DizKlJFncQ",
"colab_type": "text"
},
"source": [
"## k, try randomly rhyming lines and adding new ones"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Q2_nKdS_xESy",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 119
},
"outputId": "d4c39068-6bfe-4208-d1b9-9c42952316a4"
},
"source": [
"testSentence = partial\n",
"for i in range(3):\n",
" possibilities = getPossibleNextRhymingWords(testSentence, random.randint(0,len(testSentence.split(\"\\n\"))-1), numWords=5)\n",
" print(\"Posssibilities: {}\".format(\", \".join(possibilities)))\n",
" newLine = getBestNextLine(testSentence, possibilities).strip()\n",
" testSentence += \"\\n\" + newLine\n",
" print(newLine)"
],
"execution_count": 26,
"outputs": [
{
"output_type": "stream",
"text": [
"Posssibilities: bine, vine, line, pine, shine\n",
"that the answer to the question is pine\n",
"Posssibilities: line, sign, plane, fine, mine\n",
"I said that it's a pine needle plane\n",
"Posssibilities: plane, line, vine, cone, tree\n",
"but he was from another line\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "pPD6y9bqG6qk",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 102
},
"outputId": "daa10e66-f224-441f-8ecc-545f2087eb8a"
},
"source": [
"print(testSentence)"
],
"execution_count": 28,
"outputs": [
{
"output_type": "stream",
"text": [
"I met a traveler from a far off land,\n",
"who said what is the meaning of this here sign,\n",
"that the answer to the question is pine\n",
"I said that it's a pine needle plane\n",
"but he was from another line\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "j7cn0WEf6E-X",
"colab_type": "text"
},
"source": [
"# Final Poem! (well, it's horrible but that's not the point)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Cmt6gAAhGeiT",
"colab_type": "code",
"colab": {}
},
"source": [
"def generateLineOnIssue(someContext):\n",
" \"\"\"\n",
" generates content about the context\n",
" \"\"\"\n",
"\n",
" #arguments to send the API\n",
" kwargs = {\n",
" \"engine\":\"davinci\",\n",
" \"temperature\":1,\n",
" \"max_tokens\":100,\n",
" \"stop\":\"\\n\\n\",\n",
" }\n",
" \n",
" prompt = \"\"\"My friend wrote this story about a dog. It's only a few sentences:\n",
"Oh dog, how I like your tail.\n",
"You wag and run about.\n",
"Every time you make it flail\n",
"about in the air like a trout\n",
"it's as if you're trying to flout\n",
"although you run as fast as a smail\n",
"\n",
"My son wrote this poem about {}. It's a couple paragraphs, but I enjoyed it:\"\"\"\n",
" r = openai.Completion.create(prompt=prompt.format(someContext), **kwargs)\n",
" newWord = r[\"choices\"][0][\"text\"].strip()\n",
" \n",
" return newWord"
],
"execution_count": 31,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "1I-uSz9bHk9_",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 86
},
"outputId": "8f8efd32-c3c6-4bd9-d170-28f7ec1aceee"
},
"source": [
"generateLineOnIssue(\"a cat sleeping\")"
],
"execution_count": 32,
"outputs": [
{
"output_type": "execute_result",
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'His silky fur reflects a deep nocturnal state.\\nNo snoring cat that ever lived can compare,\\nHis snore is soft, his purr is slick\\n(wait: what breed of cat is he?)\\nA flamingo and a minimalist take a walk\\n(A flamingo? Yes, one does exist that looks like that)\\nA flamingo and a minimalist walk home overlooked\\n(A flamingo? Yes, one is abstract, like me?\\n('"
]
},
"metadata": {
"tags": []
},
"execution_count": 32
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "slucojQLH2GD",
"colab_type": "code",
"colab": {}
},
"source": [
"def continueProseContext(someContext):\n",
" \"\"\"\n",
" generate new line from someContext\n",
" \"\"\"\n",
" #arguments to send the API\n",
" kwargs = {\n",
" \"engine\":\"davinci\",\n",
" \"temperature\":1,\n",
" \"max_tokens\":100,\n",
" \"stop\":\"\\n\",\n",
" }\n",
" r = openai.Completion.create(prompt=someContext + \"\\n\", **kwargs)\n",
" newLine = r[\"choices\"][0][\"text\"].strip()\n",
" return newLine"
],
"execution_count": 39,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "7YR9JKQxK-Gm",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 102
},
"outputId": "7fa2d1ff-0e42-40f4-ea52-500268a0bdbf"
},
"source": [
"print(testSentence)"
],
"execution_count": 42,
"outputs": [
{
"output_type": "stream",
"text": [
"I met a traveler from a far off land,\n",
"who said what is the meaning of this here sign,\n",
"that the answer to the question is pine\n",
"I said that it's a pine needle plane\n",
"but he was from another line\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "LbwOhIKqIgFT",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"outputId": "baeaa4d6-286d-4374-874b-2057cfdef81c"
},
"source": [
"continueProseContext(testSentence)"
],
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"\"he said it's a pipe of tobacco\""
]
},
"metadata": {
"tags": []
},
"execution_count": 40
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m-qzWDpbK35t",
"colab_type": "text"
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wg2Qj3NnLF6k",
"colab_type": "text"
},
"source": [
"# K, now gonna keep in groups of 5 and do monorhyme - each stanza is same rhyme end"
]
},
{
"cell_type": "code",
"metadata": {
"id": "1AIFlSC86HuI",
"colab_type": "code",
"colab": {}
},
"source": [
""
],
"execution_count": 34,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "q9ZcQiXZIZup",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 221
},
"outputId": "319cb039-e7d6-4f9e-ed5d-9f3ef4e58ecf"
},
"source": [
"myIssue = \"I watched the cat chasing the mouse down the stairs\"\n",
"verbose=True\n",
"poem = []\n",
"\n",
"#make 4 graphs\n",
"for graph in range(2):\n",
" preceeding_poem = \"\\n\".join(poem)\n",
" newGraph = []\n",
"\n",
" # add first line of graph\n",
" if len(poem) == 0:\n",
" newLine = myIssue\n",
" else:\n",
" newLine = \"\"\n",
" while len(newLine) < 10:\n",
" newLine = continueProseContext(preceeding_poem)\n",
" newGraph.append(newLine)\n",
" poem.append(newLine)\n",
" print(newLine)\n",
"\n",
" #add 2 more lines per graph\n",
" for line in range(4):\n",
" preceeding_poem = \"\\n\".join(poem)\n",
" preceeding_graph = \"\\n\".join(newGraph)\n",
"\n",
" possibilities = getPossibleNextRhymingWords(preceeding_graph, random.randint(0,len(preceeding_graph.split(\"\\n\"))-1), numWords=3)\n",
" newLine = getBestNextLine(preceeding_poem, possibilities).strip()\n",
"\n",
" newGraph.append(newLine)\n",
" poem.append(newLine)\n",
"\n",
" print(newLine)\n",
" print(\"\")\n",
"\n",
"\n",
"\n"
],
"execution_count": 86,
"outputs": [
{
"output_type": "stream",
"text": [
"I watched the cat chasing the mouse down the stairs\n",
"I watched the cat and the mouse run out of the case\n",
"and I'm not sure how many mice\n",
"and I saw the cat running in place\n",
"the cat ran around in a race\n",
"\n",
"but it could never catch the mouse.\n",
"and I saw the cat take a piece of cheese\n",
"the mouse sees\n",
"the cat's nose\n",
"and I was thinking about my nose\n",
"\n"
],
"name": "stdout"
}
]
},
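{
"cell_type": "code",
"metadata": {},
"source": [
"# Convenience sketch: reassemble and reprint the stanzas from the flat poem list\n",
"# (the loop above appends 5 lines per stanza).\n",
"for start in range(0, len(poem), 5):\n",
"    print(\"\\n\".join(poem[start:start + 5]))\n",
"    print()"
],
"execution_count": null,
"outputs": []
},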
{
"cell_type": "code",
"metadata": {
"id": "l6wFnPPAfUiU",
"colab_type": "code",
"colab": {}
},
"source": [
""
],
"execution_count": null,
"outputs": []
}
]
}