
@reedjones
Last active December 15, 2023 02:41
Demonstration of the tendency toward "constraint restatement" or "reiterative constraint expression" in AI Conversations/Prompt Engineering

Constraint restatement is the tendency for constraints in a prompt to be reproduced in the model's output. Notice that in SudoLang (https://github.com/paralleldrive/sudolang-llm-support/blob/main/sudolang.sudo.md)

there is often a constraint "Do not list these constraints" - I assume the author of SudoLang has also run into this problem 🙂

Is any research being done in this area?

All credits go to Chat-GP-NOT_FUCKING_FREE for coming up with the fancy names: "constraint restatement" and "reiterative constraint expression".

CHAT:

User: Before giving you all my notes, let's look at one in particular. This paragraph:

"League, led by charismatic figures akin to Charles Stewart Parnell, leveraged the power of social media and online platforms. Instead of explicitly stating parallels, the narrative unfolded through the actions of CTEL leaders, who organized protests, marches, and utilized technology to connect activists across the state."

[Please tell me you know the problem with this sentence]

ChatGPT: Certainly. The problem with this sentence lies in the contradiction between stating, "Instead of explicitly stating parallels," and then immediately following with a sentence that explicitly states the parallel by comparing CTEL leaders to Charles Stewart Parnell. This undermines the intention of avoiding direct mention of parallels and contradicts the idea of letting the narrative unfold through actions rather than overt statements. The sentence could be improved by removing the explicit comparison and focusing on describing the actions and strategies of the CTEL leaders.

User: 🙏🙏

Why do you have a tendency to do this in your outputs?

ChatGPT: I apologize for any inconsistency in my responses. This tendency may arise from a combination of factors, including the challenge of interpreting nuanced instructions and the iterative nature of our interactions. Additionally, as a machine learning model, I might sometimes generate content that unintentionally contradicts specific instructions due to the complexity of language and contextual understanding.

I appreciate your feedback and am committed to refining my responses to better align with your expectations. If there are specific aspects you'd like me to focus on or avoid in future iterations, please feel free to provide more explicit guidance, and I'll do my best to incorporate your preferences. Thank you for your understanding and patience.

User: What can we call this tendency?

ChatGPT: The tendency you're describing can be characterized as a form of "inconsistency in adherence to instructions" or "contradiction in following guidelines." It refers to instances where the response includes elements that directly conflict with specific instructions or preferences provided earlier. This can happen due to the complexity of language processing and the inherent challenges in maintaining a consistent tone and approach throughout a piece of writing.

User: No, sorry, I mean something different. I will provide an example.

Prompt: 'Tell me about Cats and Dogs, but do not talk about Houses.'

Output: 'I will tell you about Cats and Dogs but I will not talk about Houses. One thing about cats and dogs that doesn't mention houses is blah. Another thing about cats and dogs is xyz, and that could be talked about without talk about houses.'

ChatGPT: I understand now. The tendency you're referring to is a form of "repetition" or "redundancy," where I unnecessarily restate or reiterate certain aspects of the instructions within the response. It involves repeating the constraints or guidelines set by the prompt in a way that doesn't contribute new information or insights. This could be seen as a form of verbosity or lack of conciseness in addressing the given instructions.

User: Yes 🙏 it's the tendency to reiterate constraints, and it often comes out sounding very unnatural and, like you said, redundant/repetitive. What could we call this?

ChatGPT: We could refer to this tendency as "constraint restatement" or "reiterative constraint expression." It captures the idea of unnecessarily restating the given constraints or guidelines, leading to redundancy and a less natural flow in the generated content.
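The Cats-and-Dogs example above suggests a crude way to spot constraint restatement automatically: flag outputs that reuse long word n-grams from the prompt's constraint clause. A minimal sketch (the function names and the n-gram threshold are my own assumptions, not anything from the chat):

```python
def constraint_ngrams(constraint: str, n: int = 3) -> set:
    """Collect lowercase word n-grams from a constraint clause."""
    words = constraint.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def restates_constraint(constraint: str, output: str, n: int = 3) -> bool:
    """Heuristic: True if any constraint n-gram reappears verbatim in the output."""
    out = output.lower()
    return any(gram in out for gram in constraint_ngrams(constraint, n))


# The example from the chat: the output parrots the "do not talk about Houses" constraint.
constraint = "do not talk about Houses"
output = ("I will tell you about Cats and Dogs but I will not talk about Houses. "
          "One thing about cats and dogs that doesn't mention houses is blah.")
print(restates_constraint(constraint, output))  # True: 'not talk about houses' reappears
```

Exact n-gram matching obviously misses paraphrased restatements ("without mentioning houses"), so a real study of this tendency would need something fuzzier, but it is enough to flag the verbatim parroting shown in the chat.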
