Venkata Sai Krishna Vanama saiplanner

saiplanner / spk.js
Created Nov 26, 2019 — forked from saketkunwar/spk.js
Google Earth Engine Speckle Noise Reduction
var sentinel1 = ee.ImageCollection('COPERNICUS/S1_GRD');
var poly = ee.Geometry.Polygon(
    [[[-95.83648681640625, 29.561512529746743],
      [-95.042724609375, 29.57345707301757],
      [-95.02899169921875, 30.099989515377835],
      [-95.82275390625, 30.10711788709236]]]);
var spatialFiltered = sentinel1
    .filter(ee.Filter.eq('instrumentMode', 'IW'))
    .filterBounds(poly)
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'));
var image = spatialFiltered.filterDate('2017-08-25', '2017-09-05').mosaic().clip(poly);
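The Earth Engine snippet above only selects and mosaics the Sentinel-1 VV imagery; the speckle reduction the gist is named for is typically a focal averaging of the backscatter. As a purely illustrative offline sketch (not the gist's actual Earth Engine filter), a boxcar (focal-mean) speckle filter in plain NumPy looks like this:

```python
import numpy as np

def boxcar(img, size=3):
    # Simple boxcar speckle filter: replace each pixel by the mean of its
    # size x size neighborhood; edges are handled by edge-padding.
    pad = size // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)
```

Averaging trades spatial resolution for variance reduction, which is why adaptive filters (Lee, Frost, etc.) are preferred for real SAR work.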
saiplanner /
Created May 16, 2019 — forked from al42and/
Kittler-Illingworth Thresholding
import numpy as np

def Kittler(im, out):
    """
    The reimplementation of the Kittler-Illingworth thresholding algorithm by Bob Pepin.
    Works on 8-bit images only.
    Original Matlab code:
    Paper: Kittler, J. & Illingworth, J. Minimum error thresholding. Pattern Recognit. 19, 41-47 (1986).
    """
    h, g = np.histogram(im.ravel(), 256, [0, 256])
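The preview cuts off after the histogram. The rest of the minimum-error scheme can be sketched as follows — an independent reimplementation of the Kittler-Illingworth criterion (not Bob Pepin's exact code), assuming 8-bit input:

```python
import numpy as np

def kittler_threshold(im):
    # Minimum error thresholding (Kittler & Illingworth, 1986) for 8-bit images.
    h, edges = np.histogram(im.ravel(), 256, [0, 256])
    h = h.astype(np.float64)
    g = (edges[:-1] + edges[1:]) / 2.0          # bin centers
    c = np.cumsum(h)                            # foreground counts up to each bin
    m = np.cumsum(h * g)                        # foreground first moments
    s = np.cumsum(h * g ** 2)                   # foreground second moments
    cb = c[-1] - c                              # background counts
    mb = m[-1] - m
    sb = s[-1] - s
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_f = np.sqrt(s / c - (m / c) ** 2)
        sigma_b = np.sqrt(sb / cb - (mb / cb) ** 2)
        p = c / c[-1]
        # Criterion J(t): classification error under two-Gaussian model; lower is better
        v = (p * np.log(sigma_f) + (1 - p) * np.log(sigma_b)
             - p * np.log(p) - (1 - p) * np.log(1 - p))
    v[~np.isfinite(v)] = np.inf                 # skip degenerate splits
    return g[np.argmin(v)]
```

The criterion models the histogram as a mixture of two Gaussians and picks the split minimizing the estimated misclassification error.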
saiplanner / naivebayes
Created May 7, 2019 — forked from vickyqian/naivebayes
Naive Bayes Classifier
import nltk

# `tweets` is the training data: a list of (word-list, sentiment-label) pairs,
# and `extract_features` maps a word list to a feature dict (both defined elsewhere in the gist)
training_set = nltk.classify.util.apply_features(extract_features, tweets)

# Train the Naive Bayes classifier
NBClassifier = nltk.NaiveBayesClassifier.train(training_set)

# ua is a dataframe containing all the United Airlines tweets
ua['sentiment'] = ua['tweets'].apply(
    lambda tweet: NBClassifier.classify(extract_features(getFeatureVector(processTweet2(tweet)))))
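Under the hood, nltk's classifier is doing roughly the following. This is a dependency-free sketch of the same train/classify flow on boolean word features (illustrative only — not nltk's actual implementation):

```python
import math
from collections import defaultdict

def train_nb(samples):
    # samples: list of (feature_dict, label) pairs; feature dicts use boolean
    # values like {'contains(delay)': True}, mirroring extract_features' shape
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    features = set()
    for feats, label in samples:
        label_counts[label] += 1
        for f, v in feats.items():
            features.add(f)
            if v:
                feat_counts[label][f] += 1
    return label_counts, feat_counts, features, len(samples)

def classify_nb(model, feats):
    label_counts, feat_counts, features, n = model
    best, best_lp = None, float('-inf')
    for label, lc in label_counts.items():
        lp = math.log(lc / n)                       # log prior
        for f in features:
            # Laplace-smoothed P(feature present | label)
            p = (feat_counts[label][f] + 1) / (lc + 2)
            lp += math.log(p if feats.get(f) else 1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

The classifier multiplies (in log space) the class prior by per-feature likelihoods, assuming feature independence given the label.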
preprocesstweet.txt
import re

### Preprocess tweets
def processTweet2(tweet):
    # Convert to lower case
    tweet = tweet.lower()
    # Convert www.* or https?://* to URL
    tweet = re.sub(r'((www\.[^\s]+)|(https?://[^\s]+))', 'URL', tweet)
    # Convert @username to AT_USER
    tweet = re.sub(r'@[^\s]+', 'AT_USER', tweet)
    return tweet
getfeaturevector
def getFeatureVector(tweet):
    featureVector = []
    # Split the tweet into words
    words = tweet.split()
    for w in words:
        # Replace runs of repeated letters with two occurrences
        w = replaceTwoOrMore(w)
        # Strip punctuation
        w = w.strip('\'"?,.')
        # Check if the word starts with a letter
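The loop above calls replaceTwoOrMore, which the preview doesn't show. A common implementation of that helper — an assumption here, not necessarily the gist's — collapses runs of three or more identical characters down to two:

```python
import re

def replaceTwoOrMore(s):
    # Collapse 3+ repeated characters to two, e.g. "coooool" -> "cool",
    # so elongated words normalize to one token
    return re.sub(r'(.)\1{2,}', r'\1\1', s)
```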
saiplanner / twitter crawler.txt
Created May 7, 2019 — forked from vickyqian/twitter crawler.txt
A Python script to download all the tweets of a hashtag into a CSV
import tweepy
import csv
import pandas as pd

#### Input your credentials here
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)