Taher Ali badnawarwala (tahercoolguy)
💭 Developing and training AI models 🔥
markasoftware / enterprise_token.rb
Last active November 3, 2025 13:17
OpenProject Enterprise mode for free
############ If you are using DOCKER all-in-one image, create Dockerfile like: ################
############ FROM openproject/openproject:16 ################
############ COPY ./enterprise_token.rb app/models/enterprise_token.rb ################
############ If you are running a manual installation: ################
############ REPLACE app/models/enterprise_token.rb in the source code with this file! ################
############ also be sure to RESTART OpenProject after replacing the file. ################
############ If using some other setup (e.g., docker-compose), read the comments at ################
############ https://gist.github.com/markasoftware/f5b2e55a2c2e3abb1f9eefcdf0bfff45 ################
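For the Docker all-in-one case, the comment lines above amount to a minimal two-line Dockerfile (image tag taken from the comments; adjust it to the OpenProject version you run):

FROM openproject/openproject:16
# Overwrite the stock model with the patched enterprise_token.rb from this gist
COPY ./enterprise_token.rb app/models/enterprise_token.rb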
tahercoolguy / top-k-top-p.py
Created July 18, 2022 11:27 — forked from thomwolf/top-k-top-p.py
Sample the next token from a probability distribution using top-k and/or nucleus (top-p) sampling
import torch
import torch.nn.functional as F

def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')):
    """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
        Args:
            logits: logits distribution shape (vocabulary size)
            top_k > 0: keep only top k tokens with highest probability (top-k filtering).
            top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
                Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
    """
    assert logits.dim() == 1  # batch size 1 for now - could be updated for more but the code would be less clear
    top_k = min(top_k, logits.size(-1))  # Safety check
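    # NOTE: the gist preview cuts off here. The lines below are a sketch of how this
    # widely used filtering function typically continues (following the docstring),
    # not the verbatim remainder of the file.
    if top_k > 0:
        # Remove all tokens with a probability less than the last token of the top-k
        indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
        logits[indices_to_remove] = filter_value

    if top_p > 0.0:
        # Sort the logits, compute the cumulative softmax, and drop the tail beyond top_p
        sorted_logits, sorted_indices = torch.sort(logits, descending=True)
        cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        sorted_indices_to_remove = cumulative_probs > top_p
        # Shift right so the first token above the threshold is always kept
        sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
        sorted_indices_to_remove[..., 0] = 0
        indices_to_remove = sorted_indices[sorted_indices_to_remove]
        logits[indices_to_remove] = filter_value
    return logits

# Example usage (illustrative): next_token_logits is a 1-D tensor over the vocabulary
filtered = top_k_top_p_filtering(next_token_logits.clone(), top_k=50, top_p=0.9)
next_token = torch.multinomial(F.softmax(filtered, dim=-1), num_samples=1)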
ighoshsubho / fast_NAG_with_cache.py
Last active July 7, 2025 11:44
NAG implementation with TeaCache and SageAttention on a 4090, enjoy!
from typing import Any, Dict, Optional, Tuple, Union
import gc
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
from diffusers.utils import USE_PEFT_BACKEND, logging, scale_lora_layers, unscale_lora_layers
from diffusers.models.modeling_outputs import Transformer2DModelOutput
from diffusers.models.transformers.transformer_flux import FluxTransformer2DModel
from diffusers import AutoencoderTiny
from transformers import T5EncoderModel
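The preview stops at the imports. A minimal loading sketch using the imports shown above, assuming the script pairs a 4-bit quantized Flux transformer and T5 encoder with a tiny VAE; the model ids and parameters below are illustrative, not taken from the truncated file:

# Illustrative: 4-bit quantization config for the Flux transformer (diffusers side)
quant_config = DiffusersBitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer",  # assumed model id
    quantization_config=quant_config, torch_dtype=torch.bfloat16,
)
# Illustrative: 4-bit quantized T5 text encoder (transformers side)
text_encoder_2 = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2",
    quantization_config=TransformersBitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.bfloat16,
)
# Illustrative: tiny autoencoder to cut VRAM use during decoding
vae = AutoencoderTiny.from_pretrained("madebyollin/taef1", torch_dtype=torch.bfloat16)
gc.collect(); torch.cuda.empty_cache()  # reclaim host and GPU memory after loading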