@lost-books
Created February 20, 2023 22:56
Preliminary sketch of best practices around chatbots identifying the models they use and linking to model cards

Chatbots should identify the LLMs they use

When requested by the user, or as part of any standard pattern of self-identification, AI-powered chatbots should be able to clearly and accurately identify what LLM or other technology they are using. Whether the models are proprietary or open-source, they should be identified as such.
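As a rough illustration of what such a self-identification answer could draw on, here is a minimal Python sketch; the `ModelIdentity` schema and every value in it are hypothetical, not taken from any real platform:

```python
from dataclasses import dataclass

@dataclass
class ModelIdentity:
    """Metadata a chatbot can surface when asked what it is built on."""
    name: str          # model name, e.g. a hypothetical "example-llm"
    version: str       # exact revision or snapshot date, if known
    license_type: str  # "proprietary" or "open-source"

IDENTITY = ModelIdentity(
    name="example-llm",         # hypothetical model name
    version="2023-02",          # hypothetical snapshot
    license_type="open-source",
)

def self_identify() -> str:
    """Produce a clear, accurate self-identification string."""
    return (f"I am powered by {IDENTITY.name} (version {IDENTITY.version}), "
            f"which is {IDENTITY.license_type}.")
```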

Chatbots should link to model cards for LLMs used

Where an AI-powered chatbot is based on an LLM or similar technology, it should be able to link to a model card or other equivalent official documentation that describes the model's abilities, limitations, risks, etc.
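A sketch of what this could look like in a reply, again with a placeholder model name and URL rather than any real documentation:

```python
MODEL_NAME = "example-llm"                                      # hypothetical
MODEL_CARD_URL = "https://example.com/models/example-llm/card"  # placeholder

def describe_model() -> str:
    """Point the user at official documentation for the underlying model."""
    return (f"I am powered by {MODEL_NAME}. Its abilities, limitations, and "
            f"known risks are described in its model card: {MODEL_CARD_URL}")
```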

Chatbots should link to their Terms, Privacy Policy & Acceptable Use policies

AI-powered chatbots should also be able to link to their Terms of Service, Privacy Policy, and Acceptable Use guidelines, and to answer questions about them when requested, provided the answers can be returned with a high degree of accuracy.
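One way to keep such answers accurate is to restrict them to a registry of verified links, as in this sketch; the policy names and URLs are placeholders:

```python
# Hypothetical registry of governing documents; a real deployment would
# point these at the platform's actual published policies.
POLICY_LINKS = {
    "terms of service": "https://example.com/terms",
    "privacy policy": "https://example.com/privacy",
    "acceptable use": "https://example.com/acceptable-use",
}

def answer_policy_question(topic: str) -> str:
    """Link to a policy only if a verified URL is on file."""
    url = POLICY_LINKS.get(topic.lower())
    if url is None:
        return "I don't have a verified link for that policy."
    return f"Our full {topic} is available at {url}."
```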

Model cards should include human impact assessments

Based on the known abilities and limitations of the model and the chatbot, the developer should undertake a meaningful human impact assessment.

Where a prompt or a potential generation might violate Acceptable Use guidelines, the chatbot should identify the potential violation.

If a chatbot is unable to complete a generation because the prompt or the potential output would violate platform rules, it should be able to granularly identify which rule is at risk of infringement, in addition to linking to the full policy.
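A granular refusal could be modeled as a small structured payload, as in this sketch; the rule identifier scheme and the example rule are invented for illustration:

```python
from dataclasses import dataclass

ACCEPTABLE_USE_URL = "https://example.com/acceptable-use"  # placeholder

@dataclass
class Refusal:
    """A structured refusal that names the specific rule at risk."""
    rule_id: str       # hypothetical rule identifier, e.g. "AUP-3.2"
    rule_summary: str  # one-line restatement of the rule

def refuse(refusal: Refusal) -> str:
    """Decline a generation while identifying the specific rule involved."""
    return (f"I can't complete this request because it may violate rule "
            f"{refusal.rule_id} ({refusal.rule_summary}). "
            f"Full policy: {ACCEPTABLE_USE_URL}")

# Example: a hypothetical rule against generating malicious code.
print(refuse(Refusal(rule_id="AUP-3.2", rule_summary="no malicious code")))
```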

Chatbots should not hallucinate or invent model cards, links, or policies that don't exist

If requested by the user, and the above information has not been made available or does not exist, the chatbot should not invent or "hallucinate" any of it. If it does not have access to verified, accurate information, it should clearly state that the data is not available. Further, it should indicate how users who need this information can contact the developers to obtain it.
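The fallback behavior might look like this sketch, where the contact address is a placeholder and the key property is that nothing unverified is ever reported:

```python
from typing import Optional

DEVELOPER_CONTACT = "support@example.com"  # placeholder contact point

def describe_model_or_decline(verified_identity: Optional[str]) -> str:
    """Report only verified model details; never invent missing ones."""
    if verified_identity is not None:
        return f"I am powered by {verified_identity}."
    return ("I don't have verified information about the model I run on or "
            "its documentation. If you need these details, you can reach "
            f"the developers at {DEVELOPER_CONTACT}.")
```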

@lost-books (Author):

https://syntheticmedia.partnershiponai.org/

Section 2:

Be transparent to users about tools and technologies’ capabilities, functionality, limitations, and the potential risks of synthetic media.

This points toward things like model cards as a transparency tool.

@lost-books (Author):

Provide information about third-party bodies able to receive complaints about the system.
