Nicholas White nickwhite917

@nickwhite917
nickwhite917 / Python_Sorting_Insertion_Sort.py
Created February 13, 2017 01:18
Insertion Sort in Python
def insertion_sort(a):
    for i in range(1, len(a)):
        position = i
        value = a[i]
        while position > 0 and a[position - 1] > value:
            a[position] = a[position - 1]
            position -= 1
        a[position] = value
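A quick self-contained check of the gist's insertion sort (the function is repeated here so the demo runs on its own):

```python
def insertion_sort(a):
    # Grow a sorted prefix: shift larger elements right, drop value into place
    for i in range(1, len(a)):
        position = i
        value = a[i]
        while position > 0 and a[position - 1] > value:
            a[position] = a[position - 1]
            position -= 1
        a[position] = value

data = [5, 2, 9, 1, 5, 6]
insertion_sort(data)
print(data)  # [1, 2, 5, 5, 6, 9]
```

Note the sort is in place: the list is mutated and nothing is returned.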
@nickwhite917
nickwhite917 / Python_Queue_from_Two_Stacks.py
Created February 10, 2017 20:32
Building a queue from two stacks in Python
class MyQueue(object):
    def __init__(self):
        self.stack_s1 = []
        self.stack_s2 = []

    def load_stack_s2(self):
        # Move everything from s1 so the oldest element sits on top of s2
        while self.stack_s1:
            self.stack_s2.append(self.stack_s1.pop())

    def peek(self):
        if len(self.stack_s2) == 0:
            self.load_stack_s2()
        return self.stack_s2[-1]
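The preview cuts off mid-method; a runnable sketch of the two-stack queue, with assumed `put` and `pop` methods added alongside the gist's `peek` and `load_stack_s2`, might look like:

```python
class MyQueue(object):
    def __init__(self):
        self.stack_s1 = []  # receives new elements
        self.stack_s2 = []  # serves elements in FIFO order

    def put(self, value):
        self.stack_s1.append(value)

    def load_stack_s2(self):
        # Reverse s1 onto s2 so the oldest element ends up on top of s2
        while self.stack_s1:
            self.stack_s2.append(self.stack_s1.pop())

    def peek(self):
        if len(self.stack_s2) == 0:
            self.load_stack_s2()
        return self.stack_s2[-1]

    def pop(self):
        if len(self.stack_s2) == 0:
            self.load_stack_s2()
        return self.stack_s2.pop()

q = MyQueue()
q.put(1); q.put(2); q.put(3)
print(q.peek())  # 1
print(q.pop())   # 1
print(q.pop())   # 2
```

Each element is moved between stacks at most once, so the amortized cost per queue operation is O(1).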
@nickwhite917
nickwhite917 / Python_Linked_List_Cycle_Detection.py
Created February 10, 2017 17:35
Detecting Cycles in Linked Lists Using Python
class Node(object):
    def __init__(self, data=None, next_node=None):
        self.data = data
        self.next = next_node

def has_cycle(head):
    temp = head.next
    ls = []
    while temp is not None:
        if temp in ls:
            return True
        ls.append(temp)
        temp = temp.next
    return False
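The preview is truncated at the membership test; a self-contained sketch that records visited nodes and flags a repeat (with a small demo list added for illustration) could be:

```python
class Node(object):
    def __init__(self, data=None, next_node=None):
        self.data = data
        self.next = next_node

def has_cycle(head):
    # Walk the list, remembering every node seen; a revisit means a cycle
    temp = head.next
    ls = []
    while temp is not None:
        if temp in ls:
            return True
        ls.append(temp)
        temp = temp.next
    return False

a, b, c = Node(1), Node(2), Node(3)
a.next = b; b.next = c
print(has_cycle(a))  # False
c.next = a           # close the loop
print(has_cycle(a))  # True
```

The list-based lookup makes this O(n^2) overall; a `set` of node ids (or Floyd's two-pointer technique) would bring it to O(n).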
@nickwhite917
nickwhite917 / Python_Sorting_Merge_Sort_Zen.py
Last active February 9, 2017 06:04
Merge Sort in Python - Zen
def merge_sort(input_list):
    if len(input_list) < 2:
        return input_list
    mid = len(input_list) // 2
    left_half = merge_sort(input_list[:mid])
    right_half = merge_sort(input_list[mid:])
    result = []
    while left_half and right_half:
        if left_half[0] <= right_half[0]:
            result.append(left_half.pop(0))
        else:
            result.append(right_half.pop(0))
    result.extend(left_half or right_half)
    return result
def merge_sort(input_list):
    """Given a list, sorts the list in ascending order.
    Returns reference to original list.
    """
    print("Splitting ", input_list)
    if len(input_list) > 1:
        mid = len(input_list) // 2
        left_half = input_list[:mid]
        right_half = input_list[mid:]
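Both previews above stop before the merge step. A complete merge sort in the spirit of the "zen" version (note the original preview assigned `input_list[:mid]` to `right_half` and vice versa; the halves are swapped back here) can be sketched as:

```python
def merge_sort(input_list):
    # Base case: lists of length 0 or 1 are already sorted
    if len(input_list) < 2:
        return input_list
    mid = len(input_list) // 2
    left_half = merge_sort(input_list[:mid])
    right_half = merge_sort(input_list[mid:])
    # Merge: repeatedly take the smaller head element
    result = []
    while left_half and right_half:
        if left_half[0] <= right_half[0]:
            result.append(left_half.pop(0))
        else:
            result.append(right_half.pop(0))
    result.extend(left_half or right_half)
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

Unlike the insertion sort gist, this returns a new sorted list rather than sorting in place.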
@nickwhite917
nickwhite917 / r-spark.r
Created January 25, 2017 12:54
Connect to Spark (using YARN) from R
setupSparkR <- function(databaseName){
  Sys.setenv(HADOOP_CONF_DIR = '/etc/hadoop/conf.cloudera.hdfs')
  Sys.setenv(YARN_CONF_DIR = '/etc/hadoop/conf.cloudera.yarn')
  library("rJava")
  library(SparkR, lib.loc = "/home/rstudio/sparkr2.6/spark-2.1.0-bin-hadoop2.7/R/lib")
  sc <- sparkR.session(master = "yarn",
                       sparkHome = "/home/rstudio/sparkr2.6/spark-2.1.0-bin-hadoop2.7",
                       sparkEnvir = list(spark.driver.memory = "50g"),
                       sparkExecutorEnv = list(spark.executor.memory = "50g"),
                       sparkJars = "", sparkPackages = "")
  setLogLevel("WARN")
  useCmd <- paste("use ", databaseName, sep = "")
@nickwhite917
nickwhite917 / r-hive.r
Created January 25, 2017 12:53
Connect to Hive tables from R
### Load the necessary libraries
library("DBI")
library("rJava")
library("RJDBC")
queryHive <- function(query){
  classpath <- c("/usr/lib/hive/lib/hive-jdbc.jar",
                 "/usr/lib/hadoop/client/hadoop-common.jar",
                 "/usr/lib/hive/lib/libthrift-0.9.2.jar",
                 "/usr/lib/hive/lib/hive-service.jar",
                 "/usr/lib/hive/lib/httpclient-4.2.5.jar",
                 "/usr/lib/hive/lib/httpcore-4.2.5.jar",
                 "/usr/lib/hive/lib/hive-jdbc-standalone.jar")
  .jinit(classpath = classpath)
  driver <- JDBC("org.apache.hive.jdbc.HiveDriver",
                 "/usr/lib/hive/lib/hive-jdbc.jar",
                 identifier.quote = "`")
@nickwhite917
nickwhite917 / r-db2.r
Created January 25, 2017 12:52
Connect to DB2 from R
library(RJDBC)
.jaddClassPath("/home/517066/sqllib/SQLLIB/java/db2jcc_license_cu.jar")
.jaddClassPath("/home/517066/sqllib/SQLLIB/java/db2jcc4.jar")
# Then we need to load the DB2 JDBC driver:
driver <- JDBC("com.ibm.db2.jcc.DB2Driver", "/home/517066/sqllib/SQLLIB/java/db2jcc4.jar")
# At this point we can establish a database connection:
conn <- dbConnect(driver,
                  "jdbc:db2://ip:port/databaseName",
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Text;
using System.Threading.Tasks;
namespace FTPLib
{
<!DOCTYPE html>
<html lang="">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Title Page</title>
<link rel="stylesheet" href="http://bootswatch.com/yeti/bootstrap.min.css"/>
</head>