DuckyBlender

@santosh
santosh / printflush.py
Created March 25, 2013 09:18
This script demonstrates the `flush` argument of the print() function.
#!/usr/bin/env python
#-*- coding: utf-8 -*-
from __future__ import print_function
from time import sleep
string = "The words in this sentence should appear letter by letter."
print("Please wait if you don't see another sentence appearing below.", end="\n\n")
# Likely continuation (a sketch, not the gist's exact code): flush stdout after each
# character so it appears immediately; the flush keyword of print() requires Python 3.3+.
for character in string:
    print(character, end="", flush=True)
    sleep(0.2)
print()
@HaleTom
HaleTom / print256colours.sh
Last active June 29, 2024 16:16
Print a 256-colour test pattern in the terminal
#!/bin/bash
# Tom Hale, 2016. MIT Licence.
# Print out 256 colours, with each number printed in its corresponding colour
# See http://askubuntu.com/questions/821157/print-a-256-color-test-pattern-in-the-terminal/821163#821163
set -eu # Fail on errors or undeclared variables
printable_colours=256
# Likely continuation (a sketch, not the gist's exact code): print each number in its own colour, 16 per row.
for (( colour = 0; colour < printable_colours; colour++ )); do
    printf "\e[38;5;%dm%3d " "$colour" "$colour"
    (( (colour + 1) % 16 == 0 )) && printf "\e[0m\n"
done
@Anubhav1603
Anubhav1603 / examples.md
Last active October 22, 2023 03:02
Code examples of discord.py rewrite

discord.py code examples

Preface

Introduction

I am making this gist to show some examples of stuff you can do with discord.py, and also because the number of up-to-date examples available online is rather limited.

This is not a guide to the basics of the wrapper; it merely shows some code to give a better understanding of some of the things discord.py can do. I will therefore assume that anybody looking at this understands the basics of both Python and the wrapper in question.

I will also assume that asyncio, discord.ext.commands, and discord are installed and imported, and that the commands.Bot instance is stored in the variable bot; a minimal setup sketch follows below.
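
For reference, here is a minimal sketch of that setup, assuming discord.py 2.x; the prefix, intents, and the ping command are illustrative choices, not taken from the gist.

import asyncio  # imported per the preface; this particular sketch does not use it directly

import discord
from discord.ext import commands

# discord.py 2.x: prefix commands need the message_content intent enabled
# (it must also be turned on in the Discord developer portal).
intents = discord.Intents.default()
intents.message_content = True

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
async def ping(ctx):
    """Reply with the bot's current websocket latency."""
    await ctx.send("Pong! {}ms".format(round(bot.latency * 1000)))

bot.run("YOUR_BOT_TOKEN")

Run it with a real bot token and send !ping in a channel the bot can read to check that the setup works.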

@Artefact2
Artefact2 / README.md
Last active July 24, 2024 09:17
GGUF quantizations overview

Which GGUF is right for me? (Opinionated)

Good question! I am collecting human data on how quantization affects outputs. See here for more information: ggerganov/llama.cpp#5962

In the meantime, use the largest quant that fully fits in your GPU's VRAM. If you can comfortably fit Q4_K_S, try a model with more parameters instead.
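
As a rough back-of-the-envelope check for "fully fits" (a sketch of mine, not from the gist: the bits-per-weight and overhead figures are assumptions; the real number is the GGUF file size plus headroom):

def fits_in_vram(params_billions, bits_per_weight, vram_gb, overhead_gb=1.5):
    """Rough check that a quantized model fits on the GPU.

    bits_per_weight and overhead_gb are assumptions; compare against the actual
    GGUF file size and leave headroom for the KV cache and runtime buffers.
    """
    weight_gb = params_billions * bits_per_weight / 8  # parameters * bits, divided by 8 bits per byte
    return weight_gb + overhead_gb <= vram_gb

# Example with illustrative numbers: a 7B model at ~4.5 bits per weight on an 8 GB card.
print(fits_in_vram(7, 4.5, 8))  # True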

llama.cpp feature matrix

See the wiki upstream: https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix