@nheingit
Last active March 22, 2024 21:34

Here are the key topics and posts from the given Reddit data, organized into categories:

Fine-Tuning and Training LLMs:

  • Learning how to fine-tune (first time), I've provided links to tutorials I found, but would anybody else recommend further material. A user is trying to learn how to fine-tune models and has compiled reading material from Reddit and DuckDuckGo. They have questions about training models on specific topics like Cyberpunk 2077 and business data, and are looking for tips on using llama.cpp for fine-tuning. Link
  • Can LLM trained on a Dictionary? If yes, how to do it? A user wants to train a multi-language model like Gemma on a local language dictionary and is looking for steps for a non-tech layman. Link
  • How to generate large-scale synthetic data. A blog post on how to build large-scale synthetic datasets with 25B+ tokens like those used for training the Phi models from Microsoft, using a Mixtral model. Link
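
For readers curious what "building a synthetic dataset with a Mixtral model" looks like in practice, here is a minimal, hedged sketch of the general pattern (not code from the linked post): loop over seed topics and ask a strong open model, served behind any OpenAI-compatible endpoint, to write textbook-style passages. The endpoint URL, model id, and prompt template are placeholder assumptions.

```python
# Hedged sketch of the synthetic-data pattern: prompt a strong open model
# (e.g. a Mixtral served behind any OpenAI-compatible endpoint) once per
# seed topic. Endpoint, model id, and prompt are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

seed_topics = ["photosynthesis", "binary search", "supply and demand"]
synthetic = []
for topic in seed_topics:
    out = client.chat.completions.create(
        model="mixtral-8x7b-instruct",  # placeholder model id
        messages=[{
            "role": "user",
            "content": f"Write a short textbook-style passage that teaches {topic} to a beginner.",
        }],
        temperature=0.8,
    )
    synthetic.append({"topic": topic, "text": out.choices[0].message.content})

print(f"Generated {len(synthetic)} synthetic documents")
```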

Retrieval-Augmented Generation (RAG) and Embeddings:

  • [question] Query in RAG returning no chunks and no results ? A user is building a RAG pipeline based on a Mistral 7B model, Chroma DB, and markdown files as the input data source. They do custom chunking and embedding, but a general query returns no chunks and no response. They provide sample code and the markdown file; a minimal pipeline sketch follows this list. Link
  • Has anyone worked on generating embeddings on brain activity? A user is working with EEG data and wants to match similar EEG signal patterns. They reference a paper and are wondering if anyone has had success in this space. Link
  • Great video on understanding why your RAG/LLM combo isn't working. A user recommends a highly researched video that discusses the reason why finetuning and RAG are better than RAG alone, the differences between larger and smaller parameter models, and how to contextualize biases in RAG queries. Link
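
As a starting point for debugging the "no chunks returned" issue above, here is a minimal sketch of a chunk-embed-query loop with Chroma. It is not the poster's code: it assumes Chroma's default embedding function rather than their custom one, and the file name and chunking rule are placeholders. Two common causes of empty results are an empty collection and embedding the query with a different function than the documents, hence the count() check.

```python
# Minimal RAG-indexing sketch with Chroma's default embedding function.
# The markdown file name and the blank-line chunking rule are placeholders.
import chromadb

client = chromadb.Client()
collection = client.create_collection("markdown_docs")

# naive chunking: split the markdown file on blank lines
with open("notes.md", encoding="utf-8") as f:
    chunks = [c.strip() for c in f.read().split("\n\n") if c.strip()]

collection.add(
    documents=chunks,
    ids=[f"chunk-{i}" for i in range(len(chunks))],
)

# sanity check: an empty collection is one common reason queries return nothing
assert collection.count() > 0, "nothing was indexed - check the chunking step"

results = collection.query(query_texts=["What does this document cover?"], n_results=3)
print(results["documents"][0])
```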

Deploying and Optimizing LLMs:

  • hardware suggestion for llama 2 70b. A user's boss is asking them to build a suitable workstation rack to run a llama model locally, aiming to get query time under 10s from the current 3 mins on a 7b model. They have a budget of under 15k euros and are looking for suggestions. Link
  • a script to measure tokens per second of your ollama models (measured 80t/s on llama2:13b on Nvidia 4090). A user shares a script they made to measure tokens per second of ollama models. On an Nvidia 4090, they got 80 t/s on llama2:13b and 127 t/s on llama2:7b; a minimal sketch of this kind of measurement follows the list. Link
  • Speed and Memory Benchmarks for Qwen1.5 models. A link to benchmarks for Qwen1.5 models in terms of speed and memory usage. Link
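
For anyone who wants to reproduce this kind of throughput number, here is a minimal sketch (not the poster's script) that asks a local ollama server for one completion and derives tokens per second from the eval_count and eval_duration fields ollama reports. The model name and prompt are placeholders, and the server is assumed to be on its default port.

```python
# Minimal tokens-per-second probe against a local ollama server
# (default port 11434). Assumes the named model is already pulled.
import requests

MODEL = "llama2:13b"  # swap in any model you have pulled locally

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": MODEL,
        "prompt": "Explain mixture-of-experts in one paragraph.",
        "stream": False,
    },
    timeout=600,
)
data = resp.json()

# ollama reports eval_count (generated tokens) and eval_duration (nanoseconds)
tokens = data["eval_count"]
seconds = data["eval_duration"] / 1e9
print(f"{MODEL}: {tokens} tokens in {seconds:.2f}s -> {tokens / seconds:.1f} t/s")
```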

Extending LLMs:

  • Is it possible to turn LLaMA into LLaVA. A user has fine-tuned a LLaMA 2 7B model and is wondering if it's possible to add vision to it without needing to fine-tune LLaVA separately. Link
  • Model "memory". A user is asking if it's possible to improve the "memory" of a model so it can remember what it wrote at least 5 messages back. They know context size matters but are wondering if there's anything else. They also ask if there are any 13b models that support CS 8K. Link
  • Depth upscaling at inference time. A user shares an experiment that implements depth upscaling at inference time, without actually making the model bigger, so it's GPU-poor friendly. It needs fine-tuning as the model is currently a bit repetitive. Link
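
On the "memory" question above, the usual workaround is application-side: the model itself keeps no state between requests, so the client re-sends the most recent turns (bounded by the context window) with every prompt. A minimal sketch of that rolling history, with placeholder names and no actual model call:

```python
# Hedged sketch of a rolling conversation buffer: keep the last N turns and
# rebuild the prompt from them on every request. MAX_TURNS is a placeholder
# and should be tuned to fit the model's context size (e.g. 8K tokens).
from collections import deque

MAX_TURNS = 10
history = deque(maxlen=MAX_TURNS)

def build_prompt(user_message: str) -> str:
    # append the new user turn, then serialize the whole buffer as the prompt
    history.append(("user", user_message))
    lines = [f"{role}: {text}" for role, text in history]
    lines.append("assistant:")
    return "\n".join(lines)

def record_reply(reply: str) -> None:
    # store the model's answer so it stays visible for the next MAX_TURNS turns
    history.append(("assistant", reply))
```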

Applications and Use Cases:

  • Let's get real: is there anybody who's running agents that actually make money? A user is asking if anyone runs LLM agents that make money autonomously, even if it's just a few dollars a day. They are looking for vague information about the architecture and models used if people are willing to share. Link
  • What is an efficient way to create your own writing assistant with LLM and training from your own words writing style? A user is asking for a quick yet efficient way to train an installed LLM or chat.ml to write like the user, as prompting alone still results in writing like chatGPT. Link
  • interacting with a large PDF library. A user has thousands of scientific papers stored as PDFs and would like a chatbot that could answer questions about the content of the whole library, retrieving info from multiple PDFs without the user having to specify which ones. They are asking if such a tool exists. Link
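
For the PDF-library use case, here is one hedged sketch of how such a tool is typically assembled, reusing the Chroma pattern from the RAG section above. pypdf, the folder name, and the fixed-size chunking are illustrative choices, not a reference to any specific existing tool.

```python
# Hedged sketch: extract text from every PDF in a folder, chunk it, index the
# chunks in Chroma, then query across the whole library at once.
from pathlib import Path

import chromadb
from pypdf import PdfReader

client = chromadb.Client()
collection = client.create_collection("papers")

for pdf_path in Path("papers").glob("*.pdf"):  # placeholder folder
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    # fixed-size character chunks keep each embedded passage small
    chunks = [text[i:i + 1500] for i in range(0, len(text), 1500)]
    if not chunks:
        continue
    collection.add(
        documents=chunks,
        ids=[f"{pdf_path.stem}-{i}" for i in range(len(chunks))],
    )

hits = collection.query(query_texts=["Which papers discuss attention scaling?"], n_results=5)
for doc in hits["documents"][0]:
    print(doc[:200], "...")
```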

Memes and Humor:

  • Nvidia NIM, it remind me something else. An image post joking about Nvidia's NIM reminding the poster of something else. Link
  • I wrote a little script for Ultima Online that monitors the txt file in the JournalLogs, and then generate responses using LM Studio's local inference server. Here's the demo using Qwen 0.5B gguf model (yes tiny and dumb). A video demonstrating a script that monitors a text file from Ultima Online and generates responses through a local inference server running a Qwen 0.5B model; a sketch of the monitor-and-reply loop follows this list. Link
  • Microsoft CEO on owning OpenAI, from Elon vs OpenAI lawsuit. An image of a quote from Microsoft's CEO about owning OpenAI, taken from the Elon Musk vs OpenAI lawsuit. Link
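
The Ultima Online demo above boils down to a tail-and-reply loop. Here is a hedged sketch of that idea, assuming LM Studio's local server is running with its default OpenAI-compatible endpoint at http://localhost:1234/v1; the journal path, model name, and prompt are placeholders.

```python
# Hedged sketch of a tail-and-reply loop: watch a game journal file for new
# lines and send each one to LM Studio's local OpenAI-compatible server.
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
JOURNAL = "JournalLogs/journal.txt"  # placeholder path

with open(JOURNAL, encoding="utf-8", errors="ignore") as f:
    f.seek(0, 2)  # start at the end of the file so only new lines are handled
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.5)
            continue
        reply = client.chat.completions.create(
            model="qwen-0.5b",  # whatever model is currently loaded in LM Studio
            messages=[{"role": "user", "content": f"Reply in character to: {line.strip()}"}],
        )
        print(reply.choices[0].message.content)
```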

Here is a narrative summary weaving together the key themes from the Reddit posts:

The world of local LLMs is rapidly evolving, with AI engineers and enthusiasts exploring various aspects of fine-tuning, training, deploying, and extending these powerful models. One major theme is the desire to learn how to fine-tune models effectively, with users compiling tutorials and asking for recommendations on training models for specific domains like Cyberpunk 2077 and business data. There is also interest in training multi-language models on local language dictionaries for non-tech users.

Another key area of focus is Retrieval-Augmented Generation (RAG), with users working on developing RAG systems using models like mistral 7b, custom chunking, and embeddings. However, challenges arise when queries return no chunks or results, prompting the need for troubleshooting and optimization. Some are even exploring generating embeddings on brain activity data like EEG signals.

Deploying and optimizing LLMs is another critical aspect, with users seeking hardware suggestions for running large models like llama 2 70b locally and sharing scripts to measure token throughput on high-end GPUs. Benchmarks for models like Qwen1.5 provide valuable insights into performance.

Extending the capabilities of LLMs is also a hot topic, with questions about adding vision to fine-tuned LLaMA models, improving model "memory", and implementing depth upscaling at inference time.

As these models mature, attention turns to practical applications and use cases. Some wonder if anyone is running LLM agents that generate revenue autonomously, while others seek ways to create personalized writing assistants or chatbots that can interact with large PDF libraries.

Amidst the technical discussions, the community also enjoys a bit of humor, with memes about Nvidia's NIM, playful scripts for generating responses in games, and amusing quotes from the Elon Musk vs OpenAI lawsuit.

As the local LLM landscape continues to evolve at a rapid pace, AI engineers and enthusiasts are pushing the boundaries of what's possible, sharing knowledge, and working together to unlock the full potential of these transformative technologies.

** OLD VERSION BELOW **

Model Releases & Updates

  • Grok-1 from xAI: 314B parameter Mixture-of-Experts (MoE) model, 8x33B MoE, released under Apache 2.0 license (191k views)
  • Grok weights available for download via huggingface-cli download xai-org/grok-1 (19k views)
  • Grok code details: Attention scaled by 30/tanh(x/30), approx GELU, 4x Layernorms, RoPE in float32, vocab size 131072 (146k views). A soft-cap sketch follows this list.
  • Open-Sora 1.0: Open-source text-to-video model, full training process, data, and checkpoints available (100k views)
  • InternLM2-1.8B: Powerful, accessible LLM for role-playing tasks, runs on 4GB VRAM in FP16 (20 upvotes)
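
The "30/tanh(x/30)" note in the Grok code details is usually read as tanh soft-capping of the attention logits, i.e. 30 * tanh(x / 30): roughly linear for small values and saturating near ±30, which keeps the softmax numerically tame. A minimal sketch of that reading (an interpretation, not code from the Grok release):

```python
# Hedged sketch of tanh soft-capping of attention logits: approximately the
# identity for small values, saturating near +/- cap for large ones.
import numpy as np

def softcap_logits(logits: np.ndarray, cap: float = 30.0) -> np.ndarray:
    return cap * np.tanh(logits / cap)

raw = np.array([0.5, 5.0, 50.0, 500.0])
print(softcap_logits(raw))  # ~[0.5, 4.95, 27.9, 30.0] - large values saturate near the cap
```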

Model Performance & Benchmarking

Compute, Hardware & Optimization

Fine-tuning, RAG & Prompting

Applications & Use Cases

Memes & Humor


Summary & Key Themes

This batch of tweets and Reddit posts covers a wide range of topics related to large language models (LLMs), with a focus on new model releases, performance benchmarks, hardware optimization, fine-tuning techniques, and practical applications.

One of the major themes is the release of new powerful models, such as xAI's Grok-1, a 314B parameter Mixture-of-Experts (MoE) model, and Open-Sora 1.0, an open-source text-to-video model. These releases demonstrate the rapid progress in the field and the increasing accessibility of state-of-the-art models.

Another key topic is model performance and benchmarking, with discussions around how Grok compares to Mixtral and Miqu 70B, as well as Chatbot Arena ratings that provide insights into the relative strengths of different models.

The importance of compute and hardware optimization is also highlighted, with Sam Altman's belief that compute will be the most important currency in the future and the potential impact of running Grok on Groq hardware. Practical tips for optimizing performance, such as measuring tokens per second and benchmarking Qwen1.5 models, are also shared.

Fine-tuning, retrieval-augmented generation (RAG), and prompting techniques are another major focus, with posts on learning how to fine-tune, understanding why RAG/LLM combos may not work, and structuring multimodal data for RAG projects. Specific issues, such as Mistral's sliding window with long context and models for conversation segmentation, are also discussed.

Finally, the practical applications and use cases of LLMs are explored, with questions around running profitable AI agents, interacting with large PDF libraries, creating personalized writing assistants, and extracting memories from text. The development of open-source tools, such as OpenInterpreter 01, showcases the potential for LLMs to power innovative applications.

Throughout the discussions, there is a sense of excitement about the rapid progress in the field, tempered by the recognition of the challenges ahead, such as the increasing demand for compute and the need for robust privacy and security measures. The memes and humorous posts, such as the Nvidia NIM joke and the speculation about Meta's AI training capabilities, reflect the community's engagement with these topics and the broader cultural impact of AI development.
