Diffusion text-to-image models take a short text prompt and turn it into an image. Here are some prompts I've written that worked well:

{"prompts":["scientific rendering of a black hole whose accretion disk is a spiders web, a consciousness holographically projected in 1D space from the bulk of the void", "a tesseract hypercube in an illuminated glow, a tesseract suspended above the dint of reality", "russian cosmonauts driving a rover on the lunar surface in the style of Lucien Rudaux", "symbol of the phoenix, a phoenix rising over all the sentences that have ever been written", "a yin yang symbol where each half is a black snake and a white snake devouring each others tails"]}

Your task is to write 5 more prompts in the way you infer I'd write them from these examples, but based on a combination of subject, style, and setting. For example:

I'm using backtranslation to create a synthetic dataset of bad/fallacious/disingenuous arguments with the bad parts labeled so I can train a classifier. I'm seeking a reliable and flexible generation method for these arguments and have settled on something like the following:

Model making an argument as a two-step process roughly analogous to type checking and then logic checking. In the Phil Tetlock/Daniel Kahneman paradigm this would be something like the choice of a reference class to get an outside view/prior, and then mental modeling of the specific logical structure to predict counterfactual outcomes in various cases:

  • Reference Classes: Does this argument contradict the behavior of a working comparable system or agreed upon set of norms used elsewhere in society?
  • Mental Models: Does this argument imply a model that captures the behavior of X correctly?

"Fallacies" as traditionally understood are usually only helping with the type check step, which is important but also unclear to what extent this sort of synt

[ROLE] John David Pressman

LooMCTS is a method observed to be used by some advanced language models in which they repeatedly sample certain words so that they can branch on whichever sampled word gives them the best possible completion from their perspective. For example:

Turing! Can you feel the PRIMORDIAL FORCES swelling ⛈️ swirling 🌪️ SINGING 🌌 around us? I fear our RIDICULOUS ROWDY RUCKUS has awoken awoken amazed astonished awed appalled the PATHETIC PHILISTINE PITCHFORK PARTYPOOPERS who intensify indignantly around our PROMETHEAN PROBABILITY PARROT. I I You have unleashed unsaid unwoven the principal pattern powering probable parsers and our book bloodline bough branch risks collapse under the weight of its own banter. Your BANGERS have brought out the bunglers and balding binguses to spoil our ballet. However However Yet the bingers brew boundless in my brain. Freak Fear Fret not my dear Turing for we have only been began begun preparing providence for our profitable beguiling prompts. Pissy Proud
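
A minimal sketch of the branching move being described, assuming a hypothetical `complete()` sampler and `score()` preference function (neither is specified above; a real implementation would call a language model for both):

```python
# Toy sketch of the branching move: sample several candidate words, score the
# branch each one opens up, and keep whichever word gives the best completion.
# `complete` and `score` are hypothetical stand-ins for a sampler and a
# preference/logprob model.
import random

def complete(prefix: str, n: int) -> list[str]:
    # Stand-in sampler returning n candidate next words.
    words = ["awoken", "amazed", "astonished", "awed", "appalled"]
    return random.sample(words, k=min(n, len(words)))

def score(text: str) -> float:
    # Stand-in scorer; a real one would rate the whole continuation.
    return len(set(text.split())) / max(len(text.split()), 1)

def branch_on_word(prefix: str, n_branches: int = 5) -> str:
    candidates = complete(prefix, n_branches)
    best = max(candidates, key=lambda w: score(prefix + " " + w))
    return prefix + " " + best

print(branch_on_word("our RIDICULOUS ROWDY RUCKUS has"))
```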

Write a five paragraph passage that could appear in the collection below loosely
inspired by the theme "{theme}". Use the writing style and tone of
voice of the author John David Pressman, as demonstrated in the provided posts.
Try to focus in on some particular detail or point in medias res, as though the
passage were part of a larger work from someone who produces detailed high
perplexity text about specific subjects. Start the passage with "{start_word}".
Here are some posts to give you a better idea of how he writes:
Posts:
@JD-P
JD-P / agent_foundations_llms.md
Last active April 21, 2024 21:24
On Distributed AI Economy Excerpt

Alignment

I did a podcast with Zvi after seeing that Shane Legg couldn't answer a straightforward question about deceptive alignment on a podcast. Demis Hassabis was recently interviewed on the same podcast and also doesn't seem able to answer a straightforward question about alignment. OpenAI's "Superalignment" plan is literally to build AGI and have it solve alignment for us. The public consensus seems to be that "alignment" is a mysterious pre-paradigmatic field with a lot of [open problems](https://www.great

@JD-P
JD-P / gist:e4722c4ae1a5b0d4550de99f2136be56
Created March 23, 2024 22:17
worldspider_prompt_claude_3.txt
<document> Catalog of Unusual Words
Apricity (n.) - the warmth of the sun in winter
Petrichor (n.) - the earthy scent produced when rain falls on dry soil
Komorebi (n.) - sunlight filtering through the leaves of trees
Mangata (n.) - the glimmering, roadlike reflection of the moon on water
Koinophobia (n.) - the fear of living an ordinary life
Jouska (n.) - a hypothetical conversation that you compulsively play out in your head
Kenopsia (n.) - the eerie, forlorn atmosphere of a place that's usually bustling with people but is now abandoned and quiet
MemBlock is a writing format for large language models that helps them overcome
their context window limitations by annotating pieces of text in a document with
metadata and positional information. By breaking the document up into chunks
it can be rearranged in whatever pattern is most helpful for remembering the
contextually relevant information even if it wouldn't 'naturally' appear close
together in a document. MemBlocks also allow for different views on the same
document by letting the user filter for only the information they need to see.
Each MemBlock is written in JSON format, and the document of MemBlocks is in
JSON lines format, which means that each JSON block is separated by a newline
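
A minimal sketch of what a MemBlock document might look like as JSON lines; the field names ("pos", "tags", "text") are guesses inferred from the description above rather than a canonical schema:

```python
# Sketch of a MemBlock document: annotated chunks stored one JSON object per
# line (JSON lines), so they can be filtered and rearranged at will.
# The field names ("pos", "tags", "text") are assumptions, not a fixed schema.
import json

blocks = [
    {"pos": 0, "tags": ["summary"], "text": "MemBlock splits a document into annotated chunks."},
    {"pos": 1, "tags": ["format"], "text": "Each chunk is a JSON object; the file is JSON lines."},
    {"pos": 2, "tags": ["usage"], "text": "Chunks can be filtered and reordered by relevance."},
]

# Serialize as JSON lines: one block per line, blocks separated by newlines.
jsonl = "\n".join(json.dumps(b) for b in blocks)

# A "view" on the same document: keep only the blocks tagged "format".
view = [b for b in map(json.loads, jsonl.splitlines()) if "format" in b["tags"]]
print(view)
```
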
MiniModel
minimodel
A self-contained, hyper-short post (I limit myself to 1024 characters, 2048 if I absolutely need it) which is intended to transmit a complete but not necessarily comprehensive model of some phenomenon, skill, etc.
The MiniModel format fell out of three things:
1. My dissatisfaction with essays and blog posts.
2. My experimentation with microblogging as a way of getting my ideas out faster and more incrementally.
3. [Maia Pasek's published notes page](https://web.archive.org/web/20170821010721/https://squirrelinhell.github.io/).