```
// MongoDB shell (mongosh): list all databases on the server
show dbs
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-debug
spec:
  containers:
    - name: ubuntu
      image: ubuntu:latest
      # Just spin & wait forever
      command: ["/bin/bash", "-c", "--"]
      args: ["while true; do sleep 30; done;"]
```
```python
import tensorflow as tf

def set_gpu_memory_limit(memory_limit=6096):
    """Set limit to GPU memory usage.

    from https://github.com/tensorflow/tensorflow/issues/43174#issuecomment-782222166

    Parameters
    ----------
    memory_limit : int, optional
        Amount of memory to use in megabytes, by default 6096 MB (~6 GB)
    """
    # Cap each visible GPU at `memory_limit` MB via a logical device config
    for gpu in tf.config.list_physical_devices('GPU'):
        tf.config.set_logical_device_configuration(
            gpu,
            [tf.config.LogicalDeviceConfiguration(memory_limit=memory_limit)])
```
title | subtitle | author | date | source
---|---|---|---|---
Docker Compose Cheatsheet | Quick reference for Docker Compose commands and config files | Jon LaBelle | April 7, 2019 |
```python
#!/usr/bin/env python3
'''
Thanks to Andres Torres
Source: https://www.pythoncentral.io/introduction-to-sqlite-in-python/
'''
import sqlite3

# Create a database in RAM
db = sqlite3.connect(':memory:')
```
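A minimal end-to-end sketch of the in-memory pattern described above; the `users` table and its columns are made up for illustration:

```python
import sqlite3

# Create a database in RAM; it disappears when the connection is closed
db = sqlite3.connect(':memory:')
cursor = db.cursor()

# Create a throwaway table, insert a row, and read it back
cursor.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')
cursor.execute('INSERT INTO users (name) VALUES (?)', ('alice',))
db.commit()
rows = cursor.execute('SELECT id, name FROM users').fetchall()
# rows == [(1, 'alice')]
db.close()
```

Because the database lives entirely in RAM, this is handy for tests and scratch work but nothing survives past `db.close()`.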
```python
import tensorflow as tf
import argh

# see: https://github.com/tensorflow/tensorflow/issues/46107
def quantize_model(
        original_model_path='models/best-model.h5',
        quantized_model_path='models/quantized_model.tflite'):
    """Converts .h5 model to .tflite by applying quantization."""
    model = tf.keras.models.load_model(original_model_path)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    # Optimize.DEFAULT enables post-training quantization
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    with open(quantized_model_path, 'wb') as f:
        f.write(converter.convert())

if __name__ == '__main__':
    argh.dispatch_command(quantize_model)
```
```python
# pip install func-timeout
from func_timeout import func_timeout, FunctionTimedOut

# func, arg1 and arg2 are the target function and its arguments
try:
    func_result = func_timeout(10, func, args=(arg1, arg2))
except FunctionTimedOut:
    print("The function could not complete within 10 seconds and was terminated.\n")
except Exception as e:
    print(f"ERROR: {e} on executing the function")
```
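If you would rather avoid the extra dependency, here is a stdlib-only sketch of the same timeout pattern using `concurrent.futures`; the `slow_add` function is a made-up stand-in for your own work:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def slow_add(a, b, delay=0.1):
    # Stand-in for a potentially slow function
    time.sleep(delay)
    return a + b

def run_with_timeout(timeout, func, args=()):
    # Run func in a worker thread; give up if it exceeds `timeout` seconds
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, *args)
        try:
            return future.result(timeout=timeout)
        except FutureTimeout:
            return None

result = run_with_timeout(1.0, slow_add, args=(2, 3))
# result == 5
```

Note one difference from `func-timeout`: the worker thread is not killed on timeout, it simply keeps running until it finishes on its own, so this suits functions that are slow but not runaway.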
```python
from pandas.testing import assert_frame_equal

# Compare two DataFrames, ignoring dtype differences (e.g. int64 vs float64)
assert_frame_equal(df1, df2, check_dtype=False)
```
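As a quick illustration of what `check_dtype=False` buys you (the column values here are made up):

```python
import pandas as pd
from pandas.testing import assert_frame_equal

df_int = pd.DataFrame({'a': [1, 2, 3]})          # dtype: int64
df_float = pd.DataFrame({'a': [1.0, 2.0, 3.0]})  # dtype: float64

# With the default check_dtype=True this comparison raises AssertionError;
# relaxing it lets numerically equal frames with different dtypes pass
assert_frame_equal(df_int, df_float, check_dtype=False)
```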
```python
import pyspark.sql.functions as F

def count_missings(spark_df, sort=True):
    """
    Counts number of nulls and nans in each column
    """
    df = spark_df.select(
        [
            F.count(F.when(F.isnan(c) | F.isnull(c), c)).alias(c)
            # isnan() is only defined for numeric columns
            for c, t in spark_df.dtypes if t in ('double', 'float', 'int', 'bigint')
        ]
    ).toPandas()
    if sort:
        return df.rename(index={0: 'count'}).T.sort_values('count', ascending=False)
    return df
```
```python
from pyspark.ml import Pipeline
from pyspark.ml.regression import GBTRegressor
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.evaluation import RegressionEvaluator

# Get the names of the input features
input_cols = df.columns[:-1]

# Rename target col and split the dataset
df = df.withColumnRenamed('target_column_original_name', 'label')
train, test = df.randomSplit([0.8, 0.2], seed=42)

# Assemble features, scale them, and fit a gradient-boosted tree regressor
assembler = VectorAssembler(inputCols=input_cols, outputCol='features')
scaler = StandardScaler(inputCol='features', outputCol='scaled_features')
gbt = GBTRegressor(featuresCol='scaled_features', labelCol='label')
pipeline = Pipeline(stages=[assembler, scaler, gbt])
model = pipeline.fit(train)
rmse = RegressionEvaluator(metricName='rmse').evaluate(model.transform(test))
```