Charles Patel (acharles7)
@acharles7
acharles7 / iphone12_iOS_keys.txt
Created February 24, 2023 19:03 — forked from yarshure/iphone12_iOS_keys.txt
gestalt_query keys
Key Name Description
======== ===========
3GProximityCapability Whether the device has a 3G proximity sensor
3GVeniceCapability Whether the device supports FaceTime over cellular
720pPlaybackCapability Whether the device supports 720p video (identical to kMGQDeviceSupports720p)
APNCapability
ARM64ExecutionCapability Whether the device supports executing arm64 binaries
ARMV6ExecutionCapability Whether the device supports executing armv6 binaries
ARMV7ExecutionCapability Whether the device supports executing armv7 binaries
ARMV7SExecutionCapability Whether the device supports executing armv7s binaries
{
"citation": "@article{raecompressive2019,\nauthor = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and\n Hillier, Chloe and Lillicrap, Timothy P},\ntitle = {Compressive Transformers for Long-Range Sequence Modelling},\njournal = {arXiv preprint},\nurl = {https://arxiv.org/abs/1911.05507},\nyear = {2019},\n}",
"description": "This dataset contains the PG-19 language modeling benchmark. It includes a set\nof books extracted from the Project Gutenberg books project\n(https://www.gutenberg.org), that were published before 1919. It also contains\nmetadata of book titles and publication dates.\nPG-19 is over double the size of the Billion Word benchmark and contains\ndocuments that are 20X longer, on average, than the WikiText long-range\nlanguage modelling benchmark.\n\nBooks are partitioned into a train, validation, and test set. Books metadata is\nstored in metadata.csv which contains\n(book_id, short_book_title, publication_date, book_link).",
"location": {
"urls": [
"http
{
"citation": "@misc{Dua:2019 ,\nauthor = \"Janosi, Steinbrunn and Pfisterer, Detrano\",\nyear = \"1988\",\ntitle = \"{UCI} Machine Learning Repository\",\nurl = \"http://archive.ics.uci.edu/ml/datasets/Heart+Disease\",\ninstitution = \"University of California, Irvine, School of Information and Computer Sciences\"\n}",
"description": "This data set contain 13 attributes and labels of heart disease from 303 participants from Cleveland since Cleveland data was most commonlyused in modern research.\n\nAttribute by column index\n1. age : age in years\n2. sex : sex (1 = male; 0 = female)\n3. cp : chest pain type\n (1 = typical angina; 2 = atypical angina; 3 = non-anginal pain; 4 = asymptomatic)\n4. trestbps : resting blood pressure (in mm Hg on admission to the hospital)\n5. chol : serum cholestoral in mg/dl\n6. fbs : (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)\n7. restecg : resting electrocardiographic results\n8. thalach : maximum heart rate achieved\n9. exang :
import numpy as np
import matplotlib.pyplot as plt
# generate random data and plot them
data = np.random.randint(10, 21, 100)  # randint's upper bound is exclusive; random_integers is deprecated
num, count = np.unique(data, return_counts=True)
plt.plot(num, count)
each_sample_size = 10
sample_size = 500
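The `each_sample_size` and `sample_size` variables above suggest this gist goes on to demonstrate a sampling distribution (a central-limit-theorem demo), but the preview is cut off. A minimal sketch of what such a demo typically looks like; everything beyond those two variable names is my own assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
each_sample_size = 10
sample_size = 500

# Draw `sample_size` samples of `each_sample_size` uniform integers in [10, 20]
# and record each sample's mean; by the CLT these means cluster near 15.
samples = rng.integers(10, 21, size=(sample_size, each_sample_size))
sample_means = samples.mean(axis=1)
```

Plotting a histogram of `sample_means` would show an approximately normal shape centered near the population mean of 15.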
@acharles7
acharles7 / global_threshold_segmentation.py
Created August 2, 2019 19:16
Threshold segmentation classifies each pixel as object or background according to whether its value falls above or below a chosen threshold.
from skimage.color import rgb2gray
import matplotlib.pyplot as plt
img = plt.imread('cat.jpg')
gray = rgb2gray(img)
gray_r = gray.reshape(gray.shape[0]*gray.shape[1])
global_threshold = gray_r.mean()
for i in range(gray_r.shape[0]):
    # classify each pixel against the global threshold: 1 = object, 0 = background
    gray_r[i] = 1 if gray_r[i] > global_threshold else 0
gray = gray_r.reshape(gray.shape[0], gray.shape[1])
plt.imshow(gray, cmap='gray')
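The per-pixel loop in the gist can be collapsed into a single vectorized comparison. A sketch on a synthetic image, since the gist's `cat.jpg` is not available here:

```python
import numpy as np

rng = np.random.default_rng(0)
gray = rng.random((4, 4))  # stand-in grayscale image with values in [0, 1)

# global thresholding: compare every pixel to the mean at once
global_threshold = gray.mean()
segmented = (gray > global_threshold).astype(float)  # 1 = object, 0 = background
```

The boolean comparison produces the same binary mask as the loop, without reshaping to 1-D and back.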
@acharles7
acharles7 / l2Distance.py
Created July 20, 2019 04:18
Compute squared L2 distance matrix
import numpy as np

# dataset [n x d]
X = np.array(data_set)
Y = np.array([[93, 7], [50, 34], [51, 79]])

def l2_distance(X, MU):  # "l2-Distance" is not a valid Python identifier
    x = np.sum(X**2, axis=1, keepdims=True)  # ||x_i||^2, shape [n, 1]
    y = np.sum(MU**2, axis=1)                # ||mu_j||^2, shape [k]
    xy = np.dot(X, MU.T)                     # x_i . mu_j, shape [n, k]
    return x + y - 2 * xy                    # ||x_i - mu_j||^2 via the expansion
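A quick check of the identity the gist relies on, ||a − b||² = ||a||² + ||b||² − 2a·b, against a direct broadcast computation; `l2_distance` here is a corrected, self-contained version of the gist's function:

```python
import numpy as np

def l2_distance(X, MU):
    # squared L2 distance matrix via the norm expansion, shape [n, k]
    x = np.sum(X**2, axis=1, keepdims=True)
    y = np.sum(MU**2, axis=1)
    return x + y - 2 * np.dot(X, MU.T)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
MU = np.array([[0.0, 0.0], [1.0, 1.0]])
D = l2_distance(X, MU)

# direct pairwise computation for comparison
naive = ((X[:, None, :] - MU[None, :, :]) ** 2).sum(axis=-1)
```

The expansion form avoids materializing the [n, k, d] difference tensor, which matters when both n and k are large (e.g. in k-means assignment steps).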
@acharles7
acharles7 / information_gain.py
Created July 18, 2019 21:17
calculate information gain in decision tree algorithm
import numpy as np

# x = np.random.rand(10,2) * 10
# y = np.random.rand(10,1) * 10
def calculate_entropy(pi):
    total = 0
    for p in pi:
        p = p / sum(pi)
        if p != 0:
            total -= p * np.log2(p)  # entropy = -sum(p * log2 p)
        else:
            total += 0               # zero-probability classes contribute nothing
    return total
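The gist is titled `information_gain.py` but the preview only shows the entropy helper. A hedged sketch of how information gain is usually computed from it (parent entropy minus the size-weighted entropy of the child splits); the `information_gain` function and its signature are my own assumption:

```python
import numpy as np

def entropy(pi):
    # pi: per-class counts; returns entropy in bits
    pi = np.asarray(pi, dtype=float)
    p = pi / pi.sum()
    p = p[p > 0]  # zero-probability classes contribute nothing
    return -(p * np.log2(p)).sum()

def information_gain(parent, children):
    # parent: class counts before the split; children: list of per-branch counts
    n = sum(sum(c) for c in children)
    weighted = sum(sum(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted
```

A perfect split of a balanced two-class node, e.g. `information_gain([2, 2], [[2, 0], [0, 2]])`, yields a gain of 1 bit.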
@acharles7
acharles7 / entropy.py
Created July 18, 2019 21:14
calculate entropy in decision tree algorithm
import numpy as np

# x = np.random.rand(10,2) * 10
# y = np.random.rand(10,1) * 10
def calculate_entropy(pi):
    total = 0
    for p in pi:
        p = p / sum(pi)
        if p != 0:
            total -= p * np.log2(p)  # entropy = -sum(p * log2 p)
        else:
            total += 0               # zero-probability classes contribute nothing
    return total
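A short sanity check of the entropy function, with the truncated else-branch filled in (zero probabilities contribute nothing): a balanced two-class node gives 1 bit, a pure node gives 0.

```python
import numpy as np

def calculate_entropy(pi):
    # pi: per-class counts; returns entropy in bits
    total = 0
    for p in pi:
        p = p / sum(pi)
        if p != 0:
            total -= p * np.log2(p)
    return total

fair = calculate_entropy([1, 1])   # balanced split: maximum uncertainty
pure = calculate_entropy([4, 0])   # single class: no uncertainty
```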
@acharles7
acharles7 / Neuralnet.py
Created June 29, 2019 00:22
Simple two layer NN
import numpy as np
w1 = 0.01 * np.random.randn(1, 3)
b1 = np.zeros((1,3))
w2 = 0.01 * np.random.randn(1, 4)
b2 = np.zeros((1,4))
w3 = 0.01 * np.random.randn(1, 1)
f = lambda x: 1/(1 + np.exp(-x))  # sigmoid function; must depend on x, not the constant -1
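The gist stops before the forward pass, and its `(1, k)` weight shapes do not chain under matrix multiplication. A hedged sketch of how the two hidden layers might compose, with the shapes corrected to a 1 → 3 → 4 → 1 network (the corrected shapes, `b3`, and the input value are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 1 / (1 + np.exp(-x))  # sigmoid

# 1 -> 3 -> 4 -> 1 network; shapes chosen so the layers compose
w1, b1 = 0.01 * rng.standard_normal((1, 3)), np.zeros((1, 3))
w2, b2 = 0.01 * rng.standard_normal((3, 4)), np.zeros((1, 4))
w3, b3 = 0.01 * rng.standard_normal((4, 1)), np.zeros((1, 1))

x = np.array([[0.5]])    # single scalar input
h1 = f(x @ w1 + b1)      # (1, 3) hidden layer
h2 = f(h1 @ w2 + b2)     # (1, 4) hidden layer
out = h2 @ w3 + b3       # (1, 1) linear output
```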