
barron9 / gist:6ae6e6ee86384c02f1863cd76889f995
Last active October 17, 2023 05:59
bufferbloat_fix_workaround, bufferbloat fix for the TP-Link VC220-G3u
Verified on a TP-Link VC220-G3u.
Disable the DNS cache in your OS (on Windows, disable it via the DNS Client service).
Set the DHCP IP lease time to 1, 5, or 8 minutes, not 60.
Delete the gateway.
Set QoS for [ssid] to (max download) minus ~1000.
backpressure.cpp
#include <iostream>
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>
std::queue<int> dataQueue;
std::mutex mtx;
std::condition_variable cv;
const int maxQueueSize = 10; // Maximum queue size before backpressure is applied
queensatttack_hacekrrank_solution_alternative.cpp
#include <iostream>
#include <vector>
#include <cmath>
#include <string>
int main(){
std::vector<std::pair<int, int>> vec;
std::vector<std::pair<int, int>> vecquen;
barron9 / attention.cpp
Created September 10, 2023 05:33
#include <iostream>
#include <vector>
#include <cmath>
#include "attention.h"
// Function to compute the attention weights
std::vector<double> computeAttentionWeights(const std::vector<double>& query, const std::vector< std::vector<double> >& keys) {
int numKeys = keys.size();
std::vector<double> attentionWeights(numKeys, 0.0);
double totalWeight = 0.0;
ThreadInterruptionSimulator.swift
import Foundation
class ThreadInterruptionSimulator {
var threads: [Thread] = []
var interrupterThread: Thread?
// Function to simulate work done by threads
func worker(threadNum: Int) {
// for _ in 0..<5 {
barron9 / elf.h
Created August 12, 2023 08:13 — forked from mlafeldt/elf.h
elf.h for OSX
/* This is the original elf.h file from the GNU C Library; I only removed
the inclusion of feature.h and added definitions of __BEGIN_DECLS and
__END_DECLS as documented in
On macOS, simply copy the file to /usr/local/include/.
Mathias Lafeldt <> */
/* This file defines standard ELF types, structures, and macros.
barron9 /
Created July 28, 2023 18:03 — forked from cedrickchee/
4 Steps in Running LLaMA-7B on a M1 MacBook with `llama.cpp`

4 Steps in Running LLaMA-7B on a M1 MacBook

Large language model usability

The problem with large language models is that you can’t run these locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta’s LLaMA on a single computer without a dedicated GPU.

Running LLaMA

There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights.

barron9 / wakefix.service
Last active May 5, 2023 13:33
ubuntu 22.04 a4tech fstyler wireless keyboard wake fix service
Description=Run script after screen unlock/wake
barron9 /
Last active September 10, 2023 20:12
generative network with dense layers
from keras.models import Model
from keras.layers import Input, Dense
from keras.optimizers import Adam
# Define the input layer
inputs = Input(shape=(784,))
# Define the hidden layers
hidden1 = Dense(128, activation='relu')(inputs)
hidden2 = Dense(64, activation='relu')(hidden1)
barron9 / deadlock.c
Last active August 10, 2023 06:26
deadlock example
#include <stdio.h>
#include <pthread.h>
void* someprocess(void* arg);
void *anotherprocess(void* arg);
pthread_mutex_t lock;
pthread_t tid[2];
int counter;
int main(int argc, const char * argv[]) {