trinib / repo-reset.md
Created March 16, 2022 12:01 — forked from heiswayi/repo-reset.md
GitHub - Delete commits history with git commands

First Method

Deleting the .git folder may cause problems in your git repository. If you want to delete your entire commit history but keep the code in its current state, try this:

# Check out to a temporary branch:
git checkout --orphan TEMP_BRANCH

# Add all the files and commit them:
git add -A
git commit -am "Initial commit"

# Delete the old branch, rename TEMP_BRANCH, and force-push:
git branch -D main
git branch -m main
git push -f origin main
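As a sketch, the whole method can be exercised end to end in a throwaway repository (the branch name `main` and the `TEMP_BRANCH` name are assumptions; a real repository would finish with a force-push, which is skipped here because the demo has no remote):

```shell
# Demo in a throwaway repo: make two commits, then squash all history into one.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main .                  # -b needs git >= 2.28
git config user.email demo@example.com
git config user.name demo
echo a > file.txt && git add -A && git commit -qm "first"
echo b >> file.txt && git add -A && git commit -qm "second"

git checkout -q --orphan TEMP_BRANCH   # new branch with no history
git add -A
git commit -qm "Initial commit"
git branch -D main                     # drop the old branch
git branch -m main                     # rename TEMP_BRANCH to main
git rev-list --count HEAD              # prints 1: old history is gone
# (a real repository would now run: git push -f origin main)
```

The orphan commit has no parent, so `rev-list --count` drops from 2 to 1 while the working tree is untouched.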
1. Download the latest apktool version.
2. Download the batch file and aapt.exe.
3. Create a folder anywhere on your PC and put apktool.jar, aapt.exe and the batch script in that folder.
4. Open a command prompt.
5. Navigate to the folder containing apktool.jar, the batch script and aapt.exe.
6. Now, install the framework using the `if` (install framework) command.
7. Type the following command:
apktool if name-of-the-app.apk
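Put together, the session at the prompt looks roughly like this (the folder path is a hypothetical placeholder for whatever you created in step 3):

```
cd path\to\apktool-folder        REM folder holding apktool.jar, aapt.exe, the batch script
apktool if name-of-the-app.apk   REM "if" = install framework resources from this APK
```

Once the framework is installed, apktool can correctly resolve the app's resources in later decode/rebuild steps.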
@trinib
trinib / llm_papers.txt
Created February 23, 2024 12:47 — forked from masta-g3/llm_papers.txt
Updated 2024-02-10
Cedille: A large autoregressive French language model
The Wisdom of Hindsight Makes Language Models Better Instruction Followers
ChatGPT: A Study on its Utility for Ubiquitous Software Engineering Tasks
Query2doc: Query Expansion with Large Language Models
The Internal State of an LLM Knows When It's Lying
Structured information extraction from complex scientific text with fine-tuned large language models
TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models
Large Language Models Encode Clinical Knowledge
PoET: A generative model of protein families as sequences-of-sequences
Fine-Grained Human Feedback Gives Better Rewards for Language Model Training