Nikolay Novik jettify

@jettify
jettify / gist:4354883
Last active December 10, 2015 00:58
My codingforinterviews task

Explain what a binary search tree is and what to consider when implementing one.

A binary search tree (BST) can be represented as a linked data structure: each node contains a key, associated data, and references to its left and right subtrees. A BST has the following properties:

  1. The left subtree of a node contains only nodes with keys less than the node's key.
  2. The right subtree of a node contains only nodes with keys greater than the node's key.
  3. The left and right subtrees are themselves binary search trees.

When implementing one, consider how duplicate keys are handled and how the tree is kept balanced: on a degenerate (list-like) tree, search and insertion degrade to O(n), while a balanced tree keeps them at O(log n).
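As an illustration, the three properties can be sketched in Python (the `Node`, `insert`, and `search` names are illustrative, not from the gist):

```python
class Node:
    """A BST node: a key, its associated data, and two subtree links."""
    def __init__(self, key, data=None):
        self.key = key
        self.data = data
        self.left = None
        self.right = None


def insert(root, key, data=None):
    """Insert a key into the subtree rooted at `root`; return the root."""
    if root is None:
        return Node(key, data)
    if key < root.key:
        root.left = insert(root.left, key, data)
    elif key > root.key:
        root.right = insert(root.right, key, data)
    else:
        root.data = data  # duplicate key: overwrite the payload
    return root


def search(root, key):
    """Return the node holding `key`, or None if it is absent."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root
```

Note that nothing here rebalances the tree, so insertion order determines its shape and therefore its performance.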

Explain what merge sort is and what to consider when implementing it.

Merge sort is a recursive sorting algorithm that uses O(n log n) comparisons in the worst case. To sort an array of n elements, we perform the following three steps in sequence:

  1. Divide the unsorted list into two sublists of roughly equal size.
  2. Sort each of the two sublists recursively.
  3. Merge the two sorted sublists back into one sorted list.

There are two common implementations: top-down (recursive) and bottom-up (iterative). The bottom-up variant avoids recursion overhead, but both perform the same O(n log n) work; either way, merging arrays needs O(n) auxiliary space.
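The three steps translate directly into a top-down implementation; a minimal sketch in Python (function names are illustrative):

```python
def merge_sort(items):
    """Top-down merge sort: divide, sort each half, merge."""
    if len(items) <= 1:          # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # step 1 + 2: split and sort halves
    right = merge_sort(items[mid:])
    return merge(left, right)        # step 3: merge sorted halves


def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these two is already empty
    merged.extend(right[j:])
    return merged
```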

@jettify
jettify / calc.py
Created May 7, 2014 21:07 — forked from ascv/calc.py
"""
exp ::= term | exp + term | exp - term
term ::= factor | factor * term | factor / term
factor ::= number | ( exp )
"""
class Calculator:
    def __init__(self, tokens):
        self._tokens = tokens
        self._current = tokens[0]
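The gist is truncated here. As an illustration, the grammar in the docstring can be evaluated with a small recursive-descent parser; all names below are illustrative, the `tokenize` helper is an assumption, and the left-recursive rules are unrolled into loops for left associativity:

```python
import re


def tokenize(text):
    """Split an arithmetic expression into number and operator tokens."""
    return re.findall(r"\d+|[+\-*/()]", text)


def evaluate(tokens):
    """Evaluate tokens against: exp / term / factor, as in the grammar."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1

    def factor():
        # factor ::= number | ( exp )
        if peek() == "(":
            advance()
            value = exp()
            advance()  # consume ")"
            return value
        value = int(peek())
        advance()
        return value

    def term():
        # term ::= factor | factor * term | factor / term
        value = factor()
        while peek() in ("*", "/"):
            op = peek()
            advance()
            rhs = factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def exp():
        # exp ::= term | exp + term | exp - term
        value = term()
        while peek() in ("+", "-"):
            op = peek()
            advance()
            rhs = term()
            value = value + rhs if op == "+" else value - rhs
        return value

    return exp()
```

For example, `evaluate(tokenize("2+3*4"))` respects precedence because `exp` only ever adds or subtracts whole `term`s.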
  • Update HISTORY.rst
  • Update version number in my_project/__init__.py
  • Update version number in setup.py
  • Install the package again for local development, but with the new version number:
python setup.py develop
  • Run the tests:
python setup.py test
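The version-bump steps above might be scripted as follows. This is only a sketch: the stand-in package file and version numbers are assumptions created so the example is self-contained, and the `develop`/`test` steps are left to run as in the checklist.

```shell
# Stand-in for my_project/__init__.py, created only for this sketch.
mkdir -p my_project
echo "__version__ = '0.1.0'" > my_project/__init__.py

# Bump the version string in place (GNU sed assumed).
NEW_VERSION='0.2.0'
sed -i "s/__version__ = .*/__version__ = '$NEW_VERSION'/" my_project/__init__.py

# Confirm the bump before reinstalling and running the tests.
grep '__version__' my_project/__init__.py
```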
@jettify
jettify / aiohttp_injections.py
Last active August 13, 2018 13:15
aiohttp dependency injection example
import asyncio
import injections
import aiopg.sa
from aiohttp import web


@injections.has
class SiteHandler:
    # this is only a placeholder, actual connection

    @asyncio.coroutine
    def do_select(pool, i):
        with (yield from pool) as conn:
            cur = yield from conn.cursor()
            yield from cur.execute("SELECT 10")
            yield from cur.close()

@asyncio.coroutine
@jettify
jettify / OptimizedSparkInnerJoin.scala
Created August 9, 2016 20:50 — forked from mkolod/OptimizedSparkInnerJoin.scala
Optimized Inner Join in Spark
/** Hive/Pig/Cascading/Scalding-style inner join which will perform a map-side/replicated/broadcast
 * join if the "small" relation has fewer than maxNumRows, and a reduce-side join otherwise.
 * @param big the large relation
 * @param small the small relation
 * @param maxNumRows the maximum number of rows that the small relation can have to be a
 *                   candidate for a map-side/replicated/broadcast join
 * @return a joined RDD with a common key and a tuple of values from the two
 *         relations (the big relation value first, followed by the small one)
 */
private def optimizedInnerJoin[A : ClassTag, B : ClassTag, C : ClassTag]
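The Scala snippet is truncated here, but the strategy it documents can be illustrated in plain Python: when the small relation fits under the row threshold, build an in-memory hash map from it and stream the big relation past it (the analogue of a broadcast join). The function name, threshold default, and error fallback below are assumptions for the sketch, not Spark's API.

```python
def broadcast_inner_join(big, small, max_num_rows=10_000):
    """Inner-join two lists of (key, value) pairs via a hash join,
    provided the small relation is under max_num_rows."""
    if len(small) > max_num_rows:
        raise ValueError("small relation too large for a map-side join")
    # "Broadcast" step: materialize the small relation as a hash map.
    lookup = {}
    for key, value in small:
        lookup.setdefault(key, []).append(value)
    # Stream the big relation, emitting (key, (big_value, small_value)).
    return [(key, (big_value, small_value))
            for key, big_value in big
            for small_value in lookup.get(key, [])]
```

In Spark the same effect comes from broadcasting the small side to every executor so the join needs no shuffle of the big side.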
@jettify
jettify / script.py
Created January 24, 2018 22:42 — forked from saiteja09/script.py
Glue Job Script for reading data from DataDirect Salesforce JDBC driver and write it to S3
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
@jettify
jettify / CMakeLists.txt
Created October 24, 2020 01:40 — forked from zeryx/CMakeLists.txt
minimal pytorch 1.0 pytorch -> C++ full example demo image at: https://i.imgur.com/hiWRITj.jpg
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(cpp_shim)
set(CMAKE_PREFIX_PATH ../libtorch)
find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)
add_executable(testing main.cpp)
message(STATUS "OpenCV library status:")
message(STATUS " config: ${OpenCV_DIR}")