Saloua Litayem (slitayem)
slitayem / postgres_sheet_cheat.md
Last active August 7, 2023 07:55
Postgres Cheat Sheet

Note: the commands were tested on Postgres 9.5.4

PSQL

Connect to REMOTE_SERVER_ADDRESS as the user USER_NAME:

psql -h REMOTE_SERVER_ADDRESS -U USER_NAME
slitayem / classification_report_as_df.py
Created June 2, 2022 16:20
Get the sklearn classification report as a DataFrame object
from sklearn import metrics
import pandas as pd


def get_classification_as_df(
        y_test: list, y_pred: list, sort_by: list = None) -> pd.DataFrame:
    """Get the classification report as a DataFrame."""
    report = metrics.classification_report(y_test, y_pred, output_dict=True)
    df_classification_report = pd.DataFrame(report).transpose()
    if sort_by:
        df_classification_report = df_classification_report.sort_values(
            by=sort_by, ascending=False)
    return df_classification_report
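For context, transposing the `output_dict` report gives one row per class plus the accuracy, macro-avg and weighted-avg rows; a minimal sketch with made-up labels (assumes scikit-learn and pandas are installed):

```python
from sklearn import metrics
import pandas as pd

# Toy labels, for illustration only.
y_test = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]

report = metrics.classification_report(y_test, y_pred, output_dict=True)
df_report = pd.DataFrame(report).transpose()
print(df_report)  # columns: precision, recall, f1-score, support
```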
slitayem / bash_strict_mode.md
Created December 25, 2021 10:30 — forked from mohanpedala/bash_strict_mode.md
set -e, -u, -o, -x pipefail explanation
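The gist explains bash strict mode; a small self-contained sketch of the `pipefail` part, which is the least obvious of the flags:

```shell
#!/usr/bin/env bash
set -euo pipefail  # -e: exit on error, -u: unset vars are errors, pipefail: see below

# With pipefail, a pipeline's exit status is the last nonzero status,
# so `false | cat` fails even though `cat` (the last command) succeeds.
with_pipefail=0
(false | cat) || with_pipefail=$?

# Without pipefail, only the last command's status counts, so the
# same pipeline reports success and the failure is hidden.
without_pipefail=0
(set +o pipefail; false | cat) || without_pipefail=$?

echo "with pipefail: $with_pipefail, without: $without_pipefail"
```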
slitayem / spark2.2_hadoop2.7.4.md
Last active January 30, 2021 07:36
Install Spark 2.2 and Hadoop 2.7.4 with Jupyter and Zeppelin on macOS Sierra
  • Install Homebrew if you don't have it yet
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

The script explains what changes it will make and prompts you before the installation begins. Once you've installed Homebrew, put the Homebrew directory at the front of your PATH environment variable by adding the following line at the bottom of your ~/.bash_profile file:

export PATH=/usr/local/bin:/usr/local/sbin:$PATH
  • Install Python 3:
slitayem / Spark Dataframe Cheat Sheet.py
Created January 7, 2021 17:18 — forked from evenv/Spark Dataframe Cheat Sheet.py
Cheat sheet for Spark Dataframes (using Python)
# A simple cheat sheet of Spark Dataframe syntax
# Current for Spark 1.6.1
# import statements
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
#creating dataframes
df = sqlContext.createDataFrame([(1, 4), (2, 5), (3, 6)], ["A", "B"]) # from manual data
slitayem / models.py
Created April 20, 2016 09:11 — forked from kageurufu/models.py
PostgreSQL JSON data type support for SQLAlchemy, with nested MutableDicts for data-change notifications. To use, include it somewhere in your project and import JSON. Also monkey-patches pg.ARRAY to be Mutable. @zzzeek, wanna tell me what's terrible about this?
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Integer, Column
from postgresql_json import JSON

Base = declarative_base()


class Document(Base):
    __tablename__ = 'documents'  # required by declarative mappings; name assumed
    id = Column(Integer(), primary_key=True)
    data = Column(JSON)
# do whatever other work

import contextlib
import os
import tempfile

# Half of the Lambda function's memory, in bytes.
half_lambda_memory = 10**6 * (
    int(os.getenv('AWS_LAMBDA_FUNCTION_MEMORY_SIZE', '0')) // 2)
@contextlib.contextmanager
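The preview cuts off at the decorator. A hypothetical sketch of how such a context manager is commonly finished: a SpooledTemporaryFile buffers in memory until the data outgrows the size cap, then spills to disk (on Lambda, /tmp is the only writable path). The function name and the 1 MB fallback for non-Lambda environments are assumptions, not the gist's actual code:

```python
import contextlib
import os
import tempfile

# Half of the Lambda function's memory, in bytes; falls back to 1 MB when
# the Lambda environment variable is unset (assumed fallback).
half_lambda_memory = 10**6 * (
    int(os.getenv('AWS_LAMBDA_FUNCTION_MEMORY_SIZE', '0')) // 2) or 10**6


@contextlib.contextmanager
def spooled_tempfile():
    # Keep data in memory until it exceeds the cap, then spill to disk.
    with tempfile.SpooledTemporaryFile(max_size=half_lambda_memory) as f:
        yield f
```

Usage: `with spooled_tempfile() as f: f.write(b"...")`.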
slitayem / Simple-S3Bucket-SNS
Created October 29, 2020 14:50 — forked from austoonz/Simple-S3Bucket-SNS
A CloudFormation template sample for creating an S3 Bucket with an SNS Trigger.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 Bucket with SNS Trigger
Parameters:
  BucketName:
    Type: String
    Description: The name of the S3 Bucket to create
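The preview stops at the Parameters block. A hedged sketch of how the Resources section of such a template typically continues: an SNS topic, a topic policy letting S3 publish to it, and the bucket wired to the topic via a notification configuration. The logical names and the `s3:ObjectCreated:*` event filter are assumptions, not the gist's actual template:

```yaml
Resources:
  NotificationTopic:
    Type: AWS::SNS::Topic

  TopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      Topics:
        - !Ref NotificationTopic
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: sns:Publish
            Resource: !Ref NotificationTopic

  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: TopicPolicy
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        TopicConfigurations:
          - Event: s3:ObjectCreated:*
            Topic: !Ref NotificationTopic
```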
slitayem / get_docker_tf_version.sh
Last active October 2, 2020 10:36
Get tensorflow package version in the latest available GPU Docker image
#!/bin/bash
set -e
docker run -dit --rm --name test tensorflow/tensorflow:latest-gpu-py3-jupyter
# docker exec -t allocates a pseudo-TTY, which emits '\r'; strip it below
TF_VERSION=$(docker exec -it test bash -c "pip freeze | grep tensorflow-gpu | cut -d'=' -f3")
TF_VERSION=$(echo "$TF_VERSION" | tr -d '\r')
echo "tensorflow-gpu version: $TF_VERSION"
docker stop test