LLM notes

Notebook

Theory

LLM

LLM metrics

LLM Customizations

LLM Evaluation

LLM fine-tuning

RAG

GenAI stack

Prompt Engineering

Implementations - AWS

Fine-tuning

LLMOps

RAG

LangChain

Others

Implementations - others

Hugging Face

  • Total noob’s intro to Hugging Face Transformers
  • Hugging Face Transformers - an open-source Python library that provides access to thousands of pre-trained Transformer models for natural language processing (NLP), computer vision, audio tasks, and more. It simplifies working with Transformer models by abstracting away the complexity of training and deploying them in lower-level ML frameworks such as PyTorch, TensorFlow, and JAX (see the pipeline sketch after this list).
  • Hugging Face Hub - a collaboration platform that hosts a huge collection of open-source models and datasets for machine learning; think of it as GitHub for ML. The Hub makes it easy to discover, learn from, and interact with useful ML assets shared by the open-source community. It integrates with the Transformers library: models loaded through Transformers are downloaded from the Hub (see the download sketch below).
  • Hugging Face Spaces - a service on the Hugging Face Hub that provides an easy-to-use GUI for building and deploying web-hosted ML demos and apps. It lets you quickly build ML demos, upload your own apps to be hosted, or deploy instantly from a selection of pre-configured ML applications (a minimal app sketch follows this list).
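
The abstraction the Transformers bullet describes is easiest to see through the `pipeline` API. A minimal sketch; the model ID is an assumption, and omitting it would make `pipeline` fall back to a default model:

```python
# One call hides tokenization, model download, and framework-level inference.
from transformers import pipeline

# Model ID is illustrative; any sentiment model from the Hub would work.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Hugging Face Transformers makes this easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```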
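
A rough sketch of talking to the Hub directly via the `huggingface_hub` library, which is what Transformers does under the hood when given a model ID; the repo name and search term here are only examples:

```python
from huggingface_hub import hf_hub_download, list_models

# Fetch a single file from a model repo (cached locally on repeat calls).
path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(path)

# Browse the Hub programmatically, e.g. a few sentiment models.
for m in list_models(search="sentiment", limit=3):
    print(m.id)
```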
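
And a minimal sketch of the kind of app Spaces hosts, assuming a Gradio Space; pushing a repo containing this `app.py` (plus a `requirements.txt`) to a Space serves it as a web demo:

```python
# app.py -- a tiny Gradio demo of the sort Spaces can host.
import gradio as gr
from transformers import pipeline

# Default sentiment model; an explicit model ID could be passed instead.
classifier = pipeline("sentiment-analysis")

def predict(text: str) -> str:
    return classifier(text)[0]["label"]

# Spaces runs this script and exposes the launched interface as a web app.
gr.Interface(fn=predict, inputs="text", outputs="text").launch()
```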

Tools

Miscellaneous

According to Alireza Goudarzi, senior researcher of machine learning (ML) for GitHub Copilot: “LLMs are not trained to reason. They’re not trying to understand science, literature, code, or anything else. They’re simply trained to predict the next token in the text.” Source
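
The "predict the next token" claim is concrete enough to demonstrate: a causal LM assigns a probability to every possible continuation of a prompt. A small sketch, assuming GPT-2; any causal LM from the Hub behaves the same way:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("LLMs are trained to predict the next", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Probability distribution over the single token that would come next.
probs = logits[0, -1].softmax(dim=-1)
values, indices = probs.topk(5)
for p, i in zip(values, indices):
    print(f"{tok.decode(int(i))!r}: {p.item():.3f}")
```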

StackOverflow
