ForceBru / base_n.py
Created March 10, 2015 17:41
Python base converter
class MathExc(Exception):
    def __init__(self, msg):
        # Print a formatted error message when the exception is created
        print("\nError: " + str(msg) + " !")

class Converter(object):
    def __init__(self, base):
        self._BaseN_alpha = None
        self._BaseN_num = None
        self._BaseN_base = None
        base = int(base)
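The rest of the class is cut off in this preview. As an illustration of the kind of base conversion such a class performs, here is a minimal sketch of the standard divide-and-remainder algorithm; this is my reconstruction, not the gist's actual implementation:

```python
import string

DIGITS = string.digits + string.ascii_uppercase  # digit alphabet for bases up to 36

def to_base(num: int, base: int) -> str:
    """Convert an integer to its string representation in the given base."""
    if not 2 <= base <= len(DIGITS):
        raise ValueError("base must be between 2 and 36")
    if num == 0:
        return "0"
    digits = []
    n = abs(num)
    while n:
        # Peel off the least significant digit on each iteration
        n, r = divmod(n, base)
        digits.append(DIGITS[r])
    sign = "-" if num < 0 else ""
    return sign + "".join(reversed(digits))

print(to_base(255, 16))  # FF
print(to_base(10, 2))    # 1010
```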
ForceBru / SHSH_saver.py
Created March 11, 2019 19:37
Save iOS 12.1.1~b3 SHSH blobs automatically when Apple randomly opens the signing window.
import socket
import requests
from bs4 import BeautifulSoup
import time
import datetime
import subprocess
import re
import os
# This script works on macOS and Linux only!
#
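The rest of the script is truncated in this preview. As a rough sketch of the approach the description implies (poll until Apple signs the firmware, then save blobs), something like the following would work; the use of tsschecker, the device model, the ECID, and the assumption that its exit status reflects signing state are all mine, not the gist's:

```python
import subprocess
import time

# Hypothetical placeholders -- substitute your own device model and ECID
DEVICE = "iPhone10,3"
ECID = "0x1234567890ABCD"
VERSION = "12.1.1"  # a beta build would need its own BuildManifest

while True:
    # Ask tsschecker to save SHSH blobs; assumed here to exit 0 only when
    # the firmware is currently being signed
    result = subprocess.run(
        ["tsschecker", "-d", DEVICE, "-e", ECID, "-i", VERSION, "-s"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print("Signing window open, blobs saved!")
        break
    time.sleep(300)  # poll every five minutes
```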
ForceBru / Docker_output.md
Last active April 11, 2022 13:59
Weird encoding of the BL instruction on ARM Thumb

Result of assembling on different platforms

Apparently, both GCC and Clang encode `bl label` in such a way that the resulting machine code ends up jumping to itself, not to `label`. However, when assembled on an actual thumbv7 machine, the machine code suddenly becomes correct. What's more, in many cases of "incorrect" encoding that should jump to itself, `objdump` somehow recognizes that it jumps to the correct label.

armv7-alpine-linux-musleabihf

Put `Dockerfile` and `mve_docker.s` in the same directory and run:

ForceBru / Makefile
Last active August 21, 2020 15:07
Higher half kernel either doesn't link or triple faults
# brew tap nativeos/i386-elf-toolchain
# brew install i386-elf-binutils i386-elf-gcc i386-elf-grub
CC=i386-elf-gcc
CFLAGS=-ffreestanding -nostdlib -fno-pic -Wall -Wextra -I .
.PHONY: all test clean
all: kernel.elf
ForceBru / README.md
Last active October 16, 2020 08:41
How (not to) speed up your Python code

I've recently encountered an article that showed some ways to speed up one's Python code. Some of these methods seemed rather peculiar, so I decided to run some benchmarks of my own to check whether they actually work.

Python version: 3.7.1

Here's a quick summary of the methods proposed in that article that I found odd.

Decreasing the use of for loops

The article said that, since for loops are "dynamic" (not sure what this means), they're slower than while loops. I compared the following two loops and found that, on average, the for loop was about 2.5 times faster than the corresponding while loop:
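The loops themselves are truncated in this preview; a minimal reconstruction of the kind of comparison described (my sketch, not the gist's exact code):

```python
import timeit

N = 1_000_000

def with_for():
    s = 0
    for i in range(N):  # the counter is advanced by range, in C
        s += i
    return s

def with_while():
    s = 0
    i = 0
    while i < N:        # the counter is advanced in Python bytecode
        s += i
        i += 1
    return s

print("for:  ", timeit.timeit(with_for, number=10))
print("while:", timeit.timeit(with_while, number=10))
```

The gap is unsurprising: `range` does its iteration in C, while the `while` loop executes the increment and comparison as interpreted bytecode on every pass.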

ForceBru / SampleRun.md
Last active January 25, 2021 21:12
NumPy vs Julia

Results:

forcebru ~/t/julia_numpy_bench> ipython bench.ipy 
Python with Nx=50000, Np=500...
9.84 ms ± 73 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
forcebru ~/t/julia_numpy_bench> julia-1.6 bench.jl 
Julia with Nx=50000, Np=500...
Trial(32.285 ms)
forcebru ~/t/julia_numpy_bench> julia-1.6 -O3 bench.jl
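The benchmark script itself isn't shown in this preview. The Python timing line above is a standard `%timeit` report, which an IPython script could produce like this (a sketch with a placeholder workload, not the gist's actual benchmark):

```python
# bench.ipy -- run with: ipython bench.ipy
import numpy as np

Nx, Np = 50_000, 500
print(f"Python with Nx={Nx}, Np={Np}...")

x = np.random.rand(Nx)
p = np.random.rand(Np)

# Placeholder workload: any vectorized computation over x and p
%timeit np.exp(-(x[:, None] - p[None, :]) ** 2).sum()
```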
ForceBru / hessian_code
Created December 1, 2021 11:01
Hessian generated by Symbolics.jl
function (ˍ₋out, ˍ₋arg1, b)
#= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:301 =#
#= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:302 =#
let var"p[1]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[1]), var"p[2]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[2]), var"p[3]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[3]), var"μ[1]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[4]), var"μ[2]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[5]), var"μ[3]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[6]), var"σ2[1]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[7]), var"σ2[2]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[8]), var"σ2[3]" = #= ~/.julia/packages/SymbolicUtils/zaGjS/src/code.jl:188 =# @inbounds(ˍ₋arg1[9])
#= ~
ForceBru / README.md
Last active August 10, 2022 07:54
Benchmark of Julia autodiff

Benchmark of several Julia autodiff packages (including Symbolics.jl)

The goal is to differentiate a log-likelihood function, the workhorse of probability theory, mathematical statistics and machine learning.

Here, it's the log-likelihood of a Gaussian mixture model:

normal_pdf(x::Real, mean::Real, var::Real) =
    exp(-(x - mean)^2 / (2var)) / sqrt(2π * var)
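The mixture itself is truncated in this preview; for context, the Gaussian-mixture log-likelihood being differentiated has this textbook form (a NumPy sketch mirroring the Julia `normal_pdf` above, not the gist's code):

```python
import numpy as np

def normal_pdf(x, mean, var):
    # Same density as the Julia normal_pdf above
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def gmm_loglikelihood(x, weights, means, variances):
    # log L = sum_i log( sum_k w_k * N(x_i; mu_k, var_k) )
    mix = sum(w * normal_pdf(x, m, v)
              for w, m, v in zip(weights, means, variances))
    return np.log(mix).sum()

x = np.array([0.5, -1.2, 3.0])
print(gmm_loglikelihood(x, [0.3, 0.7], [0.0, 2.0], [1.0, 0.5]))
```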
ForceBru / CODE.jl
Created January 7, 2023 23:19
JuMP can use IPOPT to optimize a function, but OptimizationMOI.jl and NLPModelsIpopt.jl cannot
import Pkg
Pkg.activate(temp=true)
@info "Installing Ipopt, Optimization, ADNLPModels and JuMP. Output is suppressed."
Pkg.add([
    Pkg.PackageSpec(name="Ipopt", version="1.1.0"),
    Pkg.PackageSpec(name="Optimization", version="3.10.0"),
    Pkg.PackageSpec(name="OptimizationMOI", version="0.1.5"),
    Pkg.PackageSpec(name="ADNLPModels", version="0.4.0"),
    Pkg.PackageSpec(name="NLPModelsIpopt", version="0.10.0"),
    Pkg.PackageSpec(name="JuMP", version="1.6.0"),