->Written by Alpin<-
->Inspired by /hdg/'s LoRA train rentry<-

!!!warning This guide is being slowly updated. We've already moved to the axolotl trainer.
[TOC2]
A minimal GRPO fine-tuning script with TRL; the dataset, model, and reward below are placeholder examples from the TRL quickstart, so swap in your own:

```python
# train_grpo.py
import re
import torch
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# Load and prep dataset -- the TLDR prompts from the TRL quickstart;
# replace with your own data
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 20 characters.
# Replace with a task-specific reward function.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # example model; any causal LM works
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO"),
    train_dataset=dataset,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # LoRA instead of a full finetune
)
trainer.train()
```
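To try it out, install the dependencies and launch the script; a model this small trains on a single GPU. The package list is just the minimum implied by the imports above:

```sh
pip install trl peft datasets transformers torch
python train_grpo.py
```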
A small Go helper that gives a stop callback a bounded grace period:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"
)

// gracefulStop invokes stop and waits at most graceDuration for it to finish.
func gracefulStop(ctx context.Context, graceDuration time.Duration, stop func() error) error {
	ctx, cancel := context.WithTimeout(ctx, graceDuration)
	defer cancel()
	done := make(chan error, 1)
	go func() { done <- stop() }()
	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		return fmt.Errorf("graceful stop: %w", ctx.Err())
	}
}

func main() {
	// Placeholder stop callback that finishes immediately.
	if err := gracefulStop(context.Background(), 5*time.Second, func() error { return nil }); err != nil {
		log.Fatal(err)
	}
}
```
Add a swap file first so memory-hungry jobs on small instances don't get OOM-killed:

```sh
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# A plain `sudo echo ... >> /etc/fstab` won't work: the redirect runs as your
# user, not root. Use tee to make the swap file persist across reboots:
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
```
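A quick sanity check that the swap area is active:

```sh
swapon --show
free -h   # the Swap row should now show 4.0Gi
```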
Mass-convert large media files into smaller ones on a cloud VM instance or cluster. Output is generally 10x+ smaller for 1080p or higher sources and almost visually lossless. The results can be shrunk further with UHARC if you have many files. It's also far cheaper if you're already using cloud VMs for scraping.

Quad-core CPU-optimized instances work best in terms of resources versus cost. I've experimented with octa-core instances, but the gains aren't linear, and they rank lower on cost versus time saved.

Use snap to get the latest ffmpeg release (newer builds may have improved encoders). If you installed ffplay through apt for testing, the apt ffmpeg may clash with the `ffmpeg` command used here.

```sh
sudo snap install ffmpeg
```
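Before kicking off a batch, it's worth spot-checking the settings on a single file. This is a sketch; the CRF value is an assumption to tune (28 is the libx265 default):

```sh
ffmpeg -i sample.mp4 -c:v libx265 -preset fast -crf 28 -c:a copy sample_x265.mp4
ffplay sample_x265.mp4   # eyeball the quality before committing the whole batch
```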
echo "#!/bin/bash" >> Processor.sh
echo "for file in ./*.mp4; do ffmpeg -i \"\$file\" -c:v libx265 -preset fast -c:a copy \"H265/\$(basename \"\$file\" .mp4)_x265.mp4\" done" >> Processor.sh