TL;DR
Install Postgres 9.5, and then:
sudo pg_dropcluster 9.5 main --stop
sudo pg_upgradecluster 9.3 main
sudo pg_dropcluster 9.3 main
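The order matters: installing the postgresql-9.5 package auto-creates a fresh 9.5/main cluster, and pg_upgradecluster will not upgrade into an existing target cluster of the same name, so the empty new cluster is dropped before the upgrade and the old 9.3 cluster only afterwards. To check which clusters exist (version, port, status) before and after, postgresql-common also provides:
pg_lsclusters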
A2 = Temperature in Celsius
B2 = Relative humidity (%)
Heat index in Celsius (NWS Rothfusz regression, computed in degrees Fahrenheit and converted back; the humidity adjustment terms are applied in Fahrenheit before the conversion):
=IF((A2*9/5+32)<=80,A2,IF(AND(B2<13,((A2*9/5+32)>80),((A2*9/5+32)<112)),(((-42.379+2.04901523*(A2*9/5+32)+10.14333127*B2-0.22475541*(A2*9/5+32)*B2-0.00683783*(A2*9/5+32)*(A2*9/5+32)-0.05481717*B2*B2+0.00122874*(A2*9/5+32)*(A2*9/5+32)*B2+0.00085282*(A2*9/5+32)*B2*B2-0.00000199*(A2*9/5+32)*(A2*9/5+32)*B2*B2)-(((13-B2)/4)*SQRT((17-ABS((A2*9/5+32)-95))/17))-32)*5/9),IF(AND(B2>85,((A2*9/5+32)>80),((A2*9/5+32)<87)),(((-42.379+2.04901523*(A2*9/5+32)+10.14333127*B2-0.22475541*(A2*9/5+32)*B2-0.00683783*(A2*9/5+32)*(A2*9/5+32)-0.05481717*B2*B2+0.00122874*(A2*9/5+32)*(A2*9/5+32)*B2+0.00085282*(A2*9/5+32)*B2*B2-0.00000199*(A2*9/5+32)*(A2*9/5+32)*B2*B2)+((B2-85)/10)*((87-(A2*9/5+32))/5)-32)*5/9),(((-42.379+2.04901523*(A2*9/5+32)+10.14333127*B2-0.22475541*(A2*9/5+32)*B2-0.00683783*(A2*9/5+32)*(A2*9/5+32)-0.05481717*B2*B2+0.00122874*(A2*9/5+32)*(A2*9/5+32)*B2+0.00085282*(A2*9/5+32)*B2*B2-0.00000199*(A2*9/5+32)*(A2*9/5+32)*B2*B2)-32)*5/9))))
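A single-cell formula this size is hard to audit, so here is the same computation as a Python sketch; the function name and structure are mine rather than anything from the original sheet.

import math

def heat_index_celsius(temp_c, rh):
    """Heat index in Celsius via the NWS Rothfusz regression.

    temp_c: air temperature in Celsius; rh: relative humidity in percent.
    Below 80 F the regression does not apply, so the air temperature is
    returned unchanged, mirroring the spreadsheet above.
    """
    t = temp_c * 9.0 / 5 + 32  # the regression works in Fahrenheit
    if t <= 80:
        return temp_c
    hi = (-42.379 + 2.04901523 * t + 10.14333127 * rh
          - 0.22475541 * t * rh - 0.00683783 * t * t
          - 0.05481717 * rh * rh + 0.00122874 * t * t * rh
          + 0.00085282 * t * rh * rh - 0.00000199 * t * t * rh * rh)
    if rh < 13 and t < 112:
        # low-humidity adjustment, applied in Fahrenheit
        hi -= ((13 - rh) / 4.0) * math.sqrt((17 - abs(t - 95)) / 17)
    elif rh > 85 and t < 87:
        # high-humidity adjustment, applied in Fahrenheit
        hi += ((rh - 85) / 10.0) * ((87 - t) / 5)
    return (hi - 32) * 5.0 / 9  # back to Celsius

# e.g. heat_index_celsius(32, 60) comes out at roughly 37 C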
import numpy
from scipy.ndimage.interpolation import map_coordinates
from scipy.ndimage.filters import gaussian_filter

def elastic_transform(image, alpha, sigma, random_state=None):
    """Elastic deformation of images as described in [Simard2003]_.

    .. [Simard2003] Simard, Steinkraus and Platt, "Best Practices for
       Convolutional Neural Networks applied to Visual Document Analysis",
       in Proc. of the International Conference on Document Analysis and
       Recognition, 2003.
    """
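    # The snippet circulates without its body; what follows is a sketch of
    # the commonly seen completion, assumed rather than verified against the
    # original source.
    if random_state is None:
        random_state = numpy.random.RandomState(None)
    shape = image.shape
    # Random displacement fields in [-1, 1], smoothed with a Gaussian of
    # width sigma and scaled by alpha, then used to resample the image.
    dx = gaussian_filter(random_state.rand(*shape) * 2 - 1, sigma, mode="constant", cval=0) * alpha
    dy = gaussian_filter(random_state.rand(*shape) * 2 - 1, sigma, mode="constant", cval=0) * alpha
    x, y = numpy.meshgrid(numpy.arange(shape[0]), numpy.arange(shape[1]), indexing="ij")
    indices = numpy.reshape(x + dx, (-1, 1)), numpy.reshape(y + dy, (-1, 1))
    return map_coordinates(image, indices, order=1).reshape(shape)

# Illustrative call on a 28x28 array; alpha and sigma here are values often
# paired with this function for MNIST-sized inputs, not prescribed by the paper.
img = numpy.random.rand(28, 28)
distorted = elastic_transform(img, alpha=34, sigma=4)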
# original example from Digg Data website (Takashi J. OZAKI, Ph. D.)
# http://diggdata.in/post/58333540883/k-fold-cross-validation-in-r
library(plyr)
library(randomForest)
data <- iris
# in this cross validation example, we use the iris data set to
# predict the Sepal Length from the other variables in the dataset
# with the random forest model
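Since the R listing stops before the actual fold loop, here is a sketch of the same idea in Python with scikit-learn: k-fold cross-validation of a random forest predicting sepal length on iris. It follows the shape of the Digg Data example but none of the code is from that post, and it omits the species column that the R formula interface would include.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

iris = load_iris()
X, y = iris.data[:, 1:], iris.data[:, 0]  # predict sepal length from the rest

errors = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    errors.append(np.mean(np.abs(pred - y[test_idx])))  # per-fold mean absolute error

print(np.mean(errors))  # average test error across the 10 folds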
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import scipy.stats as stats
import sys

def read_data(filename):
    """Reads a data file assumed to have at least 2 columns: 1) lat, 2) lng."""
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# This is an implementation of a denoising autoencoder as
# described in the following paper:
# http://www.jmlr.org/papers/volume11/vincent10a/vincent10a.pdf
#
import numpy as np
import os
#
# You will still need to add the code that fetches the
# data from the database, and then either run this as a
# script in the background or as a cron job
# every minute.
#
data = "get from local database"
[operating-hadoop]
HBase is used widely at Facebook, and one of its biggest use cases is Facebook Messages. With a billion users there are a lot of reliability and performance challenges on both HBase and HDFS. HDFS was originally designed for batch processing systems like MapReduce/Hive. A real-time use case like Facebook Messages, where the p99 latency can't exceed a couple hundred milliseconds, poses many challenges for HDFS. In this talk we will share the work the HDFS team at Facebook has done to support a real-time use case like Facebook Messages: (1) using system calls to tune performance; (2) inline checksums to reduce IOPS by 40%; (3) reducing the p99 read and write latencies by about 10x; (4) tools used to determine the root cause of outliers. We will discuss the details of each technique, the challenges we faced, lessons learned, and results showing the impact of each improvement.
speaker: Pritam Damania