Meet Shah (meetps)

IDDQD
@meetps
meetps / matrix.sh
Created July 16, 2015 06:01
Matrix Effect for Linux Terminal [ MELT ]
#!/bin/bash
### Customization:
# ANSI escape sequences, format \033[<style>;<color>m (style 0 = normal, 1 = bright)
blue="\033[0;34m"
brightblue="\033[1;34m"
cyan="\033[0;36m"
brightcyan="\033[1;36m"
green="\033[0;32m"
brightgreen="\033[1;32m"
red="\033[0;31m"
brightred="\033[1;31m"
@meetps
meetps / The Technical Interview Cheat Sheet.md
Last active October 16, 2017 02:55 — forked from tsiege/The Technical Interview Cheat Sheet.md
This is my technical interview cheat sheet. Feel free to fork it or do whatever you want with it. PLEASE let me know if there are any errors or if anything crucial is missing. I will add more links soon.

Studying for a Tech Interview Sucks, so Here's a Cheat Sheet to Help

This list is meant to be both a quick guide and a reference for further research into these topics. It's basically a summary of that comp sci course you never took or forgot about, so there's no way it can cover everything in depth. It also will be available as a gist on GitHub for everyone to edit and add to.

Data Structure Basics

### Array

#### Definition:

  • Stores data elements based on a sequential, most commonly 0-based, index (see the short indexing sketch below).
  • Based on tuples from set theory.
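A minimal illustration of sequential, 0-based indexing; the list contents and variable name are my own example, not part of the original cheat sheet:

arr = ["a", "b", "c", "d"]     # four elements at indices 0, 1, 2, 3
print(arr[0])                  # "a"  -> the first element lives at index 0
print(arr[len(arr) - 1])       # "d"  -> the last element is at index len(arr) - 1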
@meetps
meetps / README.md
Created March 24, 2016 18:48 — forked from dannguyen/README.md
Using Google Cloud Vision API to OCR scanned documents to extract structured data

Using Google Cloud Vision API's OCR to extract text from photos and scanned documents

Just a quickie test in Python 3 (using Requests) to see if Google Cloud Vision can be used to effectively OCR a scanned data table and preserve its structure, in the way that products such as ABBYY FineReader can OCR an image and provide Excel-ready output.
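The kind of request that test involves looks roughly like the sketch below. The v1 images:annotate endpoint and the TEXT_DETECTION feature are Cloud Vision's public REST API, while the input file name and the GOOGLE_VISION_API_KEY environment variable are placeholders of mine, not part of the original write-up.

import base64
import os
import requests

API_KEY = os.environ["GOOGLE_VISION_API_KEY"]           # placeholder for a real API key
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

with open("scanned_table.jpg", "rb") as f:               # placeholder input image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "TEXT_DETECTION"}],
    }]
}

resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=payload)
resp.raise_for_status()
annotations = resp.json()["responses"][0].get("textAnnotations", [])
if annotations:
    # The first annotation is the full detected text; later entries carry
    # per-word bounding polygons.
    print(annotations[0]["description"])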

The short answer: No. While Cloud Vision provides bounding polygon coordinates in its output, it doesn't provide them at the word or region level, which would be needed to calculate the data delimiters.

On the other hand, the OCR quality is pretty good if you just need to identify text anywhere in an image, without regard to its physical coordinates. I've included two examples:

#### 1. A low-resolution photo of road signs

import numpy as np
import matplotlib.pyplot as plt

x = range(1, 15)
x1 = range(2, 16)
x2 = range(3, 17)
# 14 measurements in [0, 1] and their per-point variability
y = np.asarray([0.8219895422, 0.8403141689, 0.9581152138, 0.9921465969, 0.9921465969, 0.9895288058, 0.9973822383, 0.9895288158, 0.997382199, 0.997382199, 0.997382199, 0.9921466362, 0.9947644372, 0.9947643979])
variability = [0.0264397719, 0.0228848261, 0.0287958109, 0.0157067863, 0.0081151739, 0.0104711742, 0.0026177617, 0.0078533832, 0.002617801, 0.00011, 0.002346846, 0.002617801, 0.0052355628, 0.0052356021]
# example data
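# Hypothetical continuation (the preview cuts off here): the arrays above look
# like inputs for an errorbar plot, e.g.
plt.errorbar(x, y, yerr=variability, fmt='-o')
plt.ylim(0.0, 1.05)
plt.show()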
@meetps
meetps / computeIoU.py
Created February 11, 2017 14:02
Intersection over Union for Python [ Keras ]
import numpy as np

def computeIoU(y_pred_batch, y_true_batch):
    return np.mean(np.asarray([pixelAccuracy(y_pred_batch[i], y_true_batch[i]) for i in range(len(y_true_batch))]))

def pixelAccuracy(y_pred, y_true):
    # N_CLASSES_PASCAL, img_rows and img_cols are assumed to be globals defined elsewhere in the gist.
    y_pred = np.argmax(np.reshape(y_pred, [N_CLASSES_PASCAL, img_rows, img_cols]), axis=0)
    y_true = np.argmax(np.reshape(y_true, [N_CLASSES_PASCAL, img_rows, img_cols]), axis=0)
    y_pred = y_pred * (y_true > 0)
# def predict_id(id, model, trs):
#     img = utils.M(id)
#     x = utils.stretch_n(img)
#     cnv = np.zeros((960, 960, 8)).astype(np.float32)
#     prd = np.zeros((n_classes, 960, 960)).astype(np.float32)
#     cnv[:img.shape[0], :img.shape[1], :] = x
#     for i in range(0, 6):
#         line = []
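The visible preview of this gist computes a pixel-accuracy-style score and cuts off before pixelAccuracy returns. For comparison with the title, here is a self-contained per-class IoU sketch in plain NumPy; the function name, signature, and the choice to skip classes absent from both maps are mine, not the gist's.

import numpy as np

def mean_iou(y_pred, y_true, n_classes):
    """Mean per-class IoU for two integer label maps of identical shape."""
    ious = []
    for c in range(n_classes):
        pred_c = (y_pred == c)
        true_c = (y_true == c)
        union = np.logical_or(pred_c, true_c).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        intersection = np.logical_and(pred_c, true_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))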
@meetps
meetps / chogadiya.py
Last active February 12, 2024 07:17
Chogadiya py3status external_script
import requests
from bs4 import BeautifulSoup

# Choghadiya page on drikpanchang.com; the geoname-id parameter selects the location.
url = 'https://www.drikpanchang.com/muhurat/choghadiya.html?geoname-id=5375480'
# CSS selectors for the choghadiya name and its time range on the page.
chogadiya_selector = "body > div.dpPageWrapper > div.dpInnerWrapper > div.dpPHeaderWrapper > div.dpPHeaderContent.dpFlex > div.dpPHeaderLeftWrapper > div.dpPHeaderLeftContent.dpFlex > div:nth-child(2) > div.dpPHeaderLeftTitle"
time_selector = "body > div.dpPageWrapper > div.dpInnerWrapper > div.dpPHeaderWrapper > div.dpPHeaderContent.dpFlex > div.dpPHeaderLeftWrapper > div.dpPHeaderLeftContent.dpFlex > div:nth-child(2) > div:nth-child(2)"

response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
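# Hypothetical continuation (the preview stops here): apply the two selectors
# defined above; whether they still match drikpanchang.com's current markup is
# an assumption on my part.
chogadiya_node = soup.select_one(chogadiya_selector)
time_node = soup.select_one(time_selector)
if chogadiya_node and time_node:
    print(f"{chogadiya_node.get_text(strip=True)} ({time_node.get_text(strip=True)})")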