
@lost-books
Created April 8, 2023 17:46
Predefined Responses For LLMs Framework Sketch

Companies offering LLMs could adopt a common framework of required elements, provided as predefined, verified responses, as a set of best practices. These best practices would aim to enhance transparency, ensure ethical conduct, and give users accurate and important information about the LLM. Here are some potential common requirements and best practices for predefined, verified responses:

  • Model Identification: Chatbots should clearly and accurately identify the LLM or AI technology used, including the name, version, and source (proprietary or open-source).

  • Model Capabilities and Limitations: Chatbots should provide information about the capabilities, strengths, and limitations of the LLM, including any known biases or potential inaccuracies.

  • Model Card and Documentation: Chatbots should link to model cards or official documentation that describe the LLM's abilities, limitations, training data, evaluation metrics, risks, and ethical considerations.

  • Use Case and Purpose: Chatbots should explain the intended use case and purpose of the LLM, including the tasks it is designed to perform and the value it aims to provide to users.

  • Terms of Service and Privacy Policy: Chatbots should provide users with access to the Terms of Service, Privacy Policy, and Acceptable Use guidelines that govern the use of the LLM.

  • Data Handling and Security: Chatbots should explain how user data is collected, processed, stored, and protected, including any data retention policies and security measures.

  • Ethical Conduct and Human Impact: Chatbots should provide information about any ethical considerations and human impact assessments conducted for the LLM, including potential social implications.

  • Contact and Feedback: Chatbots should provide information on how users can contact the company, developers, or support team, and how users can provide feedback or report concerns.

  • Content Moderation: Chatbots should explain content moderation policies and procedures, including any mechanisms for detecting and handling inappropriate or harmful content.

  • Compliance and Certifications: Chatbots should provide information about any regulatory compliance or certifications relevant to the LLM, including industry standards or legal requirements.

  • Error Handling: Chatbots should have predefined responses for handling errors or situations where the LLM is unable to provide a satisfactory response.

  • Non-Hallucination: Chatbots should not hallucinate or invent information that does not exist. If certain information is not available, chatbots should clearly indicate this and provide guidance on how users can obtain further information.

  • Accessibility: Chatbots should provide information about accessibility features and accommodations for users with disabilities.
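
The required elements above could be represented, for instance, as a simple registry keyed by element. A minimal Python sketch, where every key name and response text is a purely illustrative placeholder, not any vendor's actual wording:

```python
# Hypothetical registry of predefined, verified responses, one entry per
# required element from the list above. Texts are illustrative placeholders.
PREDEFINED_RESPONSES = {
    "model_identification": "You are talking to ExampleLM v1.0, a proprietary model.",
    "capabilities_and_limitations": "ExampleLM drafts and summarizes text; it may be inaccurate or biased.",
    "model_card": "Training data, metrics, and risks are described in our model card.",
    "use_case_and_purpose": "ExampleLM is designed for general-purpose writing assistance.",
    "terms_of_service": "Use is governed by our Terms of Service and Privacy Policy.",
    "data_handling": "Conversations are encrypted and retained per our data policy.",
    "ethics_and_human_impact": "An impact assessment covering social implications is published.",
    "contact_and_feedback": "Reach support or report concerns via our feedback channel.",
    "content_moderation": "Harmful content is detected and handled per our moderation policy.",
    "compliance_and_certifications": "ExampleLM complies with applicable industry standards.",
    "error_handling": "Sorry, I could not produce a satisfactory answer to that request.",
    "non_hallucination": "That information is not available to me; here is how to find it.",
    "accessibility": "Accessibility features and accommodations are documented here.",
}
```

A registry like this would let the verified texts be audited and updated independently of the model itself.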

By establishing and following these best practices, companies offering LLMs can enhance transparency, build user trust, and promote ethical and responsible AI usage. These predefined responses can serve as a foundation for ensuring that users have access to essential information about the LLM and its use.
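
One way such predefined responses might actually be served is to route questions about the service itself to the verified registry before any model generation happens. A minimal sketch, assuming hypothetical keyword sets and response texts:

```python
from typing import Optional

# Hypothetical keyword routing: if a user message matches a phrase associated
# with a required element, return the verified response instead of generating
# one. Phrases and texts below are illustrative, not any vendor's actual API.
KEYWORD_MAP = {
    "model_identification": {"what model", "which model", "who are you"},
    "terms_of_service": {"terms of service", "privacy policy"},
    "data_handling": {"my data", "data retention"},
}

VERIFIED = {
    "model_identification": "You are chatting with ExampleLM v1.0 (proprietary).",
    "terms_of_service": "Use is governed by our Terms of Service and Privacy Policy.",
    "data_handling": "Conversations are retained per our published data policy.",
}

def route(user_message: str) -> Optional[str]:
    """Return a verified predefined response if the message matches, else None."""
    text = user_message.lower()
    for key, phrases in KEYWORD_MAP.items():
        if any(phrase in text for phrase in phrases):
            return VERIFIED[key]
    return None  # no match: fall through to normal model generation
```

In practice a real deployment would likely use an intent classifier rather than substring matching, but the principle is the same: these answers come from a fixed, human-verified table, not from the model.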

[Generated within ChatGPT based on prior conversation about LLMs and the need for pre-defined responses & common language & expectations around certain required elements]

See also: https://gist.github.com/lost-books/3ad807dcc8d393bbdc6c4ecb24ed7d87

@lost-books

Should also include the current status of the service.

@lost-books

This needs to include information about whether or not the tool actually learns from your feedback, and in what manner. I've seen Claude especially make claims that if I just explain ____ a little more, it will help it learn, but it's entirely unclear whether that is actually true. My impression is that it is not true in the manner in which it describes it.
