
Ibrahim H. (bitsnaps)

🌍
Working @ CorpoSense
bartowski1182 / calibration_datav3.txt
Last active July 15, 2024 05:48
Calibration data provided by Dampf, combining his own efforts with Kalomaze's. Used for calibrating GGUF imatrix files.
In addition to a significant decrease in hepatic lipid accumulation in the IOE group, which inhibited energy intake by propionate enrichment, hepatic lipids were also significantly reduced in the mice in the IOP group, which was largely enriched with butyrate. Compared with the IOE group, IOP had a stronger regulatory effect on hepatic metabolism and triglyceride metabolism and higher levels of TCA cycle in the host. In addition, butyrate has the ability to promote browning of white adipose tissue (WAT) to brown adipose tissue (BAT).^[@ref39],[@ref40]^ WAT stores energy, whereas BAT uses energy for heating and consequently host energy expenditure increases.^[@ref41],[@ref42]^ However, adipose tissue weight does not change after WAT browning.^[@ref43]^ Therefore, the weight of adipose tissue of mice in the IOP group dominated by butyrate was greater than that of the mice in the IOE group dominated by propionate.
In conclusion ([Figure [7](#fig7){ref-type="fig"}](#fig7){ref-type="fig"}C), the improvement of ob
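The excerpt above is a truncated sample of the raw text inside calibration_datav3.txt. As a hedged illustration of how such a file is typically consumed, the sketch below drives llama.cpp's imatrix and quantize tools from Python; the binary names, flags, and file paths reflect mid-2024 llama.cpp builds and are assumptions, not part of the gist.

import subprocess

# Hedged sketch: compute an importance matrix over the calibration text, then quantize
# with it. Paths and flags are assumptions for a mid-2024 llama.cpp build.
MODEL = "model-f16.gguf"            # hypothetical full-precision GGUF model
CALIB = "calibration_datav3.txt"    # the calibration data from this gist
IMATRIX = "imatrix.dat"

# Measure activation importance while the model reads the calibration text.
subprocess.run(["./llama-imatrix", "-m", MODEL, "-f", CALIB, "-o", IMATRIX], check=True)

# Quantize using the imatrix so important weights are preserved more accurately.
subprocess.run(
    ["./llama-quantize", "--imatrix", IMATRIX, MODEL, "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)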
niklasmtj / ci.yaml
Created March 7, 2024 13:12
GitHub Action for running CI tasks in Deno: run tests, check formatting, and lint the code base. Fails if tests, formatting (fmt), or linting report irregularities.
name: "CI"
on:
  - pull_request
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: "Checkout"
ashhadulislam / streamChat.py
Created January 10, 2024 17:50
Streamlit chatbot connected to MemGPT (OpenAI backend)
import streamlit as st
import requests
base_url = "http://localhost:8283/"
headers = {"accept": "application/json"}
st.title("MemGPT Connected Bot")
# check if memgpt server and agents are available
agent_name = "agent_1"
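The preview stops at the agent name. A minimal, hedged sketch of how such a Streamlit chat loop might continue is shown below, reusing the names defined above (st, requests, base_url, headers, agent_name); the Streamlit chat calls are standard, but the MemGPT endpoint path and payload shape are assumptions that depend on the server version.

# Hedged continuation sketch (not the gist's code): render the chat history, send the
# user's message to the local MemGPT server, and display the reply.
if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    # Hypothetical endpoint; check your MemGPT server's API docs for the real route.
    resp = requests.post(
        base_url + "api/agents/message",
        headers=headers,
        json={"agent_name": agent_name, "message": prompt},
    )
    reply = resp.json().get("response", f"HTTP {resp.status_code}")

    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)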
langecrew / OAI_CONFIG_LIST
Created January 3, 2024 00:08
Autogen Autobuild
[
  {
    "model": "gpt-4",
    "api_key": "PASTE_YOUR_API_KEY_HERE"
  },
  {
    "model": "gpt-4-1106-preview",
    "api_key": "PASTE_YOUR_API_KEY_HERE"
  },
  {
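The preview above is cut off mid-list. For context, here is a hedged sketch of how an OAI_CONFIG_LIST like this is typically fed to AutoGen's AutoBuild; the AgentBuilder import path, its arguments, and the example task are assumptions based on late-2023 pyautogen and are not taken from the gist.

import autogen
from autogen.agentchat.contrib.agent_builder import AgentBuilder  # assumed late-2023 path

# Load the JSON config list shown above and build a default LLM config from it.
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list, "temperature": 0}

# Let AutoBuild propose a team of agents for a placeholder task (argument names assumed).
builder = AgentBuilder(config_file_or_env="OAI_CONFIG_LIST",
                       builder_model="gpt-4", agent_model="gpt-4")
agent_list, agent_configs = builder.build(
    "Find a recent paper about GPT-4 and discuss its applications.", llm_config)

# Run the generated agents in a group chat.
group_chat = autogen.GroupChat(agents=agent_list, messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=group_chat, llm_config=llm_config)
agent_list[0].initiate_chat(manager, message="Start working on the task above.")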
langecrew / OAI_CONFIG_LIST
Last active June 3, 2024 15:49
Taking the Autogen Teachable Agent one step further with some customization
[
  {
    "model": "gpt-4",
    "api_key": "PASTE_YOUR_API_KEY_HERE"
  },
  {
    "model": "gpt-4-1106-preview",
    "api_key": "PASTE_YOUR_API_KEY_HERE"
  },
  {
import os
import autogen
import memgpt.autogen.memgpt_agent as memgpt_autogen
import memgpt.autogen.interface as autogen_interface
import memgpt.agent as agent
import memgpt.system as system
import memgpt.utils as utils
import memgpt.presets as presets
import memgpt.constants as constants
import memgpt.personas.personas as personas
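Given the OAI_CONFIG_LIST and the MemGPT imports above, the gist presumably wires a teachable agent (and a MemGPT-backed variant) into an AutoGen chat. A minimal hedged sketch of the standard teachable-agent wiring follows; the TeachableAgent import path, teach_config keys, and method names reflect pyautogen ~0.2 and are assumptions, and the MemGPT customization itself is omitted here.

# Hedged sketch (not the gist's full code): a TeachableAgent plus a user proxy, driven
# by the OAI_CONFIG_LIST above. Paths and argument names are assumptions for pyautogen ~0.2.
from autogen.agentchat.contrib.teachable_agent import TeachableAgent

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list, "timeout": 120}

teachable_agent = TeachableAgent(
    name="teachable_agent",
    llm_config=llm_config,
    teach_config={"reset_db": False, "path_to_db_dir": "./tmp/teachable_db"},  # assumed keys
)
user = autogen.UserProxyAgent(
    name="user", human_input_mode="ALWAYS", code_execution_config=False
)

user.initiate_chat(teachable_agent, message="Remember that my favorite editor is Vim.")
teachable_agent.learn_from_user_feedback()  # persist what it was taught (assumed API)
teachable_agent.close_db()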
datasciencemonkey / gptq_lora.py
Created September 13, 2023 21:57
Train a GPTQ-quantized model using PEFT/LoRA
# %%
# this is run from /notebooks on paperspace
from huggingface_hub import login
from dotenv import load_dotenv
load_dotenv("/notebooks/.env")
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
login(token=os.getenv("HUGGINGFACE_TOKEN"))
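The preview ends after the Hugging Face login. A hedged sketch of the GPTQ + LoRA setup such a script typically continues with is below; the model id, target modules, and hyperparameters are placeholders, and disable_exllama is needed to fine-tune a GPTQ checkpoint with PEFT.

# Hedged sketch (not the gist's code): load a GPTQ-quantized checkpoint and attach LoRA
# adapters with PEFT. Model id, target modules, and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "TheBloke/Llama-2-7B-GPTQ"  # hypothetical pre-quantized model
tokenizer = AutoTokenizer.from_pretrained(model_id)

# disable_exllama=True is required to train on top of GPTQ weights.
quant_config = GPTQConfig(bits=4, disable_exllama=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", quantization_config=quant_config
)

model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    task_type="CAUSAL_LM", target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()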
Sharktheone / Arch-Mojo.md
Last active December 7, 2023 15:44
Install the new Mojo programming language on Arch Linux. This will be obsolete once Mojo adds official Arch support.
younesbelkada / finetune_llama_v2.py
Last active July 12, 2024 06:54
Fine-tune Llama v2 models on the Guanaco dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
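Only the license header of this widely shared script is shown above; its core pairs 4-bit QLoRA with TRL's SFTTrainer on the Guanaco dataset. A compressed, hedged sketch of that pattern follows; argument names track mid-2023 trl/peft releases and the hyperparameters are placeholders, so expect differences from the original.

# Hedged sketch of the QLoRA + SFTTrainer pattern (not a copy of the gist); argument
# names follow mid-2023 trl/peft and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # Guanaco samples expose a single "text" field
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="./results", per_device_train_batch_size=4,
                           gradient_accumulation_steps=4, learning_rate=2e-4, max_steps=500),
)
trainer.train()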
sadasant / _README.md
Last active August 30, 2023 17:33
Bash function for piping command outputs to OpenAI's GPT-4 model

This gpt() function reads piped input from standard input and sends a formatted JSON object to the OpenAI API for the GPT-4 model. It properly escapes special characters and checks for a .env file containing the OpenAI API key.

Usage:

  • Load the function into your environment: source gpt.sh.
  • Make sure to have an .env file in the current directory with the OPENAI_API_KEY.
  • Pipe anything into it and add a prompt, for example:
    git diff main | gpt "As a programmer, review this diff. Provide feedback only if necessary. Be brief"
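The gist itself is a Bash function (not reproduced above). As a hedged illustration of the same pipeline in Python, the sketch below reads piped text from stdin, prepends the prompt argument, and calls the OpenAI Chat Completions API with a key taken from the environment or a local .env file; it is an equivalent sketch, not the gist's code.

# Hedged Python equivalent of the described gpt() pipeline; the gist's actual Bash code
# is not shown here. json.dumps handles the escaping the Bash version does by hand.
import json
import os
import sys
import urllib.request

prompt = sys.argv[1] if len(sys.argv) > 1 else ""
piped = sys.stdin.read()

# Minimal .env handling; assumes a line like OPENAI_API_KEY=sk-...
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key and os.path.exists(".env"):
    for line in open(".env"):
        if line.startswith("OPENAI_API_KEY="):
            api_key = line.split("=", 1)[1].strip()

body = json.dumps({
    "model": "gpt-4",
    "messages": [{"role": "user", "content": f"{prompt}\n\n{piped}"}],
}).encode()

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])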