Nikolay Novik (jettify)
@jettify
jettify / vim-shortcuts.md
Created January 30, 2023 02:57 — forked from tuxfight3r/vim-shortcuts.md
VIM SHORTCUTS

VIM KEYBOARD SHORTCUTS

MOVEMENT

h        -   Move left
j        -   Move down
k        -   Move up
l        -   Move right
$        -   Move to end of line
0        -   Move to beginning of line (including whitespace)
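
These motions take counts and combine with operators, for example:

5j       -   Move down five lines
3l       -   Move right three characters
d$       -   Delete from the cursor to the end of the line
y0       -   Yank from the cursor back to the start of the line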
jettify / CMakeLists.txt
Created October 24, 2020 01:40 — forked from zeryx/CMakeLists.txt
Minimal PyTorch 1.0 Python -> C++ full example. Demo image at: https://i.imgur.com/hiWRITj.jpg
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(cpp_shim)
set(CMAKE_PREFIX_PATH ../libtorch)
find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)
add_executable(testing main.cpp)
message(STATUS "OpenCV library status:")
message(STATUS " config: ${OpenCV_DIR}")
jettify / script.py
Created January 24, 2018 22:42 — forked from saiteja09/script.py
Glue Job script for reading data via the DataDirect Salesforce JDBC driver and writing it to S3
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
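
The preview cuts off after the imports. A sketch of how such a job typically continues; the JDBC URL, driver class, table, credentials, and S3 path below are placeholder assumptions, not values from the gist:

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read from Salesforce through the DataDirect JDBC driver; URL, table,
# and credentials are placeholders.
df = spark.read.format("jdbc") \
    .option("url", "jdbc:datadirect:sforce://login.salesforce.com") \
    .option("driver", "com.ddtek.jdbc.sforce.SForceDriver") \
    .option("dbtable", "SFORCE.OPPORTUNITY") \
    .option("user", "USERNAME") \
    .option("password", "PASSWORD") \
    .load()

# Convert to a DynamicFrame and write it to S3 (bucket path is a placeholder).
dyf = DynamicFrame.fromDF(df, glueContext, "salesforce_data")
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/salesforce/"},
    format="parquet",
)
job.commit()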
jettify / OptimizedSparkInnerJoin.scala
Created August 9, 2016 20:50 — forked from mkolod/OptimizedSparkInnerJoin.scala
Optimized Inner Join in Spark
/** Hive/Pig/Cascading/Scalding-style inner join which will perform a map-side/replicated/broadcast
* join if the "small" relation has fewer than maxNumRows, and a reduce-side join otherwise.
* @param big the large relation
* @param small the small relation
* @param maxNumRows the maximum number of rows that the small relation can have to be a
*                   candidate for a map-side/replicated/broadcast join
* @return a joined RDD with a common key and a tuple of values from the two
* relations (the big relation value first, followed by the small one)
*/
private def optimizedInnerJoin[A : ClassTag, B : ClassTag, C : ClassTag]
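
For comparison, the same size-based dispatch expressed in PySpark, as a minimal sketch (the function and variable names here are illustrative, not from the gist):

def optimized_inner_join(sc, big, small, max_num_rows):
    """Inner join two pair RDDs, broadcasting `small` when it is small enough."""
    if small.count() <= max_num_rows:
        # Map-side (broadcast) join: ship the small relation to every executor.
        # dict() assumes keys in `small` are unique.
        lookup = sc.broadcast(dict(small.collect()))
        return big.flatMap(
            lambda kv: [(kv[0], (kv[1], lookup.value[kv[0]]))]
            if kv[0] in lookup.value else []
        )
    # Reduce-side join otherwise; join() also yields the big value first.
    return big.join(small)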
  • Update HISTORY.rst
  • Update version number in my_project/__init__.py
  • Update version number in setup.py (it must match __init__.py; see the check below)
  • Install the package again for local development, but with the new version number:
        python setup.py develop
  • Run the tests:
        python setup.py test
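
The two version bumps are easy to get out of sync. A small hypothetical checker (file names assumed from the checklist above) can verify the strings match before releasing:

import re

VERSION_RE = r"(?:__version__|version)\s*=\s*['\"]([^'\"]+)['\"]"

def read_version(path):
    """Extract the first version string assignment found in a file."""
    with open(path) as f:
        match = re.search(VERSION_RE, f.read())
    return match.group(1) if match else None

# Paths follow the checklist above; adjust for the real project layout.
assert read_version("my_project/__init__.py") == read_version("setup.py"), \
    "version numbers in __init__.py and setup.py disagree"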
jettify / calc.py
Created May 7, 2014 21:07 — forked from ascv/calc.py
"""
exp ::= term | exp + term | exp - term
term ::= factor | factor * term | factor / term
factor ::= number | ( exp )
"""
class Calculator:
    def __init__(self, tokens):
        self._tokens = tokens
        self._current = tokens[0]
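
The preview ends at the constructor. Continuing the class, a sketch of how the grammar above maps onto one method per rule (method names are illustrative; the loops give conventional left-to-right evaluation):

    def _advance(self):
        # Consume the current token and move to the next one (None at the end).
        self._tokens = self._tokens[1:]
        self._current = self._tokens[0] if self._tokens else None

    def exp(self):
        # exp ::= term | exp + term | exp - term  (left recursion becomes a loop)
        value = self.term()
        while self._current in ('+', '-'):
            op = self._current
            self._advance()
            rhs = self.term()
            value = value + rhs if op == '+' else value - rhs
        return value

    def term(self):
        # term ::= factor | factor * term | factor / term
        value = self.factor()
        while self._current in ('*', '/'):
            op = self._current
            self._advance()
            rhs = self.factor()
            value = value * rhs if op == '*' else value / rhs
        return value

    def factor(self):
        # factor ::= number | ( exp )
        if self._current == '(':
            self._advance()          # consume '('
            value = self.exp()
            self._advance()          # consume ')'
            return value
        value = float(self._current)
        self._advance()
        return value

With string tokens, Calculator(['2', '+', '3', '*', '4']).exp() evaluates to 14.0.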