Mark Wolfe (wolfeidau)
andif888 / playbook.yml
Created April 13, 2019 04:17
Ansible Playbook for Ubuntu Desktop minimal
---
- hosts: localhost
  tasks:
    - name: Add Apt-Keys
      apt_key:
        url: '{{ item.name }}'
        state: present
      with_items:
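
The listing truncates the playbook at its loop, so the actual with_items entries are not shown. A minimal sketch of how the list could be completed, with hypothetical key URLs standing in for the originals (each item carries a name field because the task above reads '{{ item.name }}' as the key URL):

      with_items:
        - { name: 'https://dl.google.com/linux/linux_signing_key.pub' }  # hypothetical key URL
        - { name: 'https://download.docker.com/linux/ubuntu/gpg' }       # hypothetical key URL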
dongri / uninstall-netskope.sh
Last active February 16, 2024 15:23
uninstall netskope
#!/bin/sh
# Kill every running Netskope process by PID.
sudo ps aux | grep Netskope | grep -v grep | awk '{ print "kill -9", $2 }' | sudo sh
echo '[✓] Killed Netskope processes'
# Remove the bundled "Remove Netskope Client" uninstaller app.
sudo rm -rf /Applications/Remove\ Netskope\ Client.app
echo '[✓] Removed Remove Netskope Client.app'
# Remove the Netskope client's support files.
sudo rm -rf /Library/Application\ Support/Netskope
echo '[✓] Removed Netskope Client support files'
lewtun / sft_trainer.py
Last active April 27, 2024 21:28
Fine-tuning Mistral 7B with TRL & DeepSpeed ZeRO-3
# This is a modified version of TRL's `SFTTrainer` example (https://github.com/huggingface/trl/blob/main/examples/scripts/sft_trainer.py),
# adapted to run with DeepSpeed ZeRO-3 and Mistral-7B-v0.1. The settings below were run on 1 node of 8 x A100 (80GB) GPUs.
#
# Usage:
# - Install the latest transformers & accelerate versions: `pip install -U transformers accelerate`
# - Install deepspeed: `pip install deepspeed==0.9.5`
# - Install TRL from main: `pip install git+https://github.com/huggingface/trl.git`
# - Clone the repo: `git clone https://github.com/huggingface/trl.git`
# - Copy this gist into `trl/examples/scripts`
# - Run from the root of the trl repo with: `accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml --gradient_accumulation_steps 8 examples/scripts/sft_trainer.py`
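
Only the gist's header comments appear in this listing; the script body itself is not shown. A minimal sketch of what a TRL SFTTrainer fine-tuning script along these lines could look like, assuming the SFTTrainer API of that TRL era; the dataset, output directory, and hyperparameters here are hypothetical placeholders, not the gist's actual values:

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

def main():
    # Hypothetical dataset with a "text" column; the original gist's dataset is not shown.
    dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

    training_args = TrainingArguments(
        output_dir="sft-mistral-7b",      # hypothetical output path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,    # mirrors the launch flag above
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    )

    # SFTTrainer accepts a model id string and loads the model internally;
    # ZeRO-3 partitioning comes from the accelerate/DeepSpeed config passed on the command line.
    trainer = SFTTrainer(
        model="mistralai/Mistral-7B-v0.1",
        args=training_args,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=512,
        packing=True,
    )
    trainer.train()

if __name__ == "__main__":
    main()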