Ivan Trusov renardeinside

@dead
dead / README.md
Last active April 23, 2023 17:44
Databricks Integration Test Coverage POC

How to use

First, rename testcicd to the name of your project. In deployment.json, also change pkg-testcicd to pkg-{your-project-name}. (This is a small hack to keep dbx from uploading the entire project folder.)

Then run the usual dbx deploy/launch commands to execute the integration test and copy the coverage report out of DBFS:

dbx deploy --jobs=cov-sample-integration-test --files-only
dbx launch --job=cov-sample-integration-test --as-run-submit --trace
databricks fs cp dbfs:/tmp/coverage.xml .
@davideicardi
davideicardi / README.md
Last active June 22, 2020 04:39
Alpakka Kafka connector (akka-stream-kafka) example. Produce and consume Kafka messages using Akka Streams.

Alpakka Kafka connector (akka-stream-kafka) example

A simple solution using the Alpakka Kafka connector to produce and consume Kafka messages.

I assume you have two Scala apps: a producer and a consumer.

Producer

Add the following dependencies:

@loilo
loilo / split-pull-requests.md
Last active June 18, 2025 12:15
Split a large pull request into two
@480
480 / gist:3b41f449686a089f34edb45d00672f28
Last active September 15, 2025 21:47
MacOS X + oh my zsh + powerline fonts + visual studio code terminal settings

MacOS X + oh my zsh + powerline fonts + visual studio code (vscode) terminal settings

Thank you, everybody; your comments make it better.

Install oh my zsh

http://ohmyz.sh/

sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
@dmyersturnbull
dmyersturnbull / groupyby_parallel.py
Last active February 6, 2024 00:43
Performs a Pandas groupby operation in parallel
import multiprocessing
from typing import Callable, Tuple, Union

import pandas as pd

def groupby_parallel(
    groupby_df: pd.core.groupby.DataFrameGroupBy,
    func: Callable[[Tuple[str, pd.DataFrame]], Union[pd.DataFrame, pd.Series]],
    num_cpus: int = multiprocessing.cpu_count() - 1,
) -> pd.DataFrame:
    """Apply func to each (name, group) pair in parallel and concatenate the results."""
    with multiprocessing.Pool(num_cpus) as pool:
        results = pool.map(func, list(groupby_df))
    return pd.concat(results)
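The gist preview cuts off here; as a self-contained sketch of the same idea (the `double_values` worker and the sample frame are illustrative assumptions, not part of the gist), mapping a function over the `(name, group)` pairs of a groupby and concatenating the results looks like this:

```python
import multiprocessing
from typing import Tuple

import pandas as pd

def double_values(item: Tuple[str, pd.DataFrame]) -> pd.DataFrame:
    # Worker: receives one (group_name, group_frame) pair, returns a new frame.
    name, group = item
    return group.assign(value=group["value"] * 2)

if __name__ == "__main__":
    df = pd.DataFrame({"key": ["a", "a", "b"], "value": [1, 2, 3]})
    # Iterating a DataFrameGroupBy yields (name, group) tuples, which is
    # exactly the shape the worker above expects.
    with multiprocessing.Pool(processes=2) as pool:
        result = pd.concat(pool.map(double_values, list(df.groupby("key"))))
    print(result["value"].tolist())  # [2, 4, 6]
```

Note that the worker must be a top-level function so it can be pickled and sent to the pool's child processes; a lambda or nested function will fail with multiprocessing.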
@didip
didip / tornado-nginx-example.conf
Created January 30, 2011 05:19
Nginx config example for Tornado
worker_processes 2;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
    use epoll;
}