Mehmet Öner Yalçın oneryalcin

@oneryalcin
oneryalcin / description.md
Created April 4, 2017 13:28 — forked from mangecoeur/description.md
Pandas PostgreSQL support for loading to the DB using the fast COPY FROM method

This small subclass of Pandas' sqlalchemy-based SQL support for reading/storing tables uses the Postgres-specific "COPY FROM" method to insert large amounts of data into the database. It is much faster than using INSERT. To achieve this, the table is created in the normal way using sqlalchemy, but no data is inserted. Instead, the data is saved to a temporary CSV file (using Pandas' mature CSV support) and then read back into Postgres using psycopg2's support for COPY FROM STDIN.
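The gist itself subclasses Pandas' SQL layer; as a minimal sketch of the underlying technique it describes (the function name, table name, and sample data here are illustrative, and the cursor is assumed to be a psycopg2 cursor against an existing table):

```python
import io
import pandas as pd

def copy_from_dataframe(df, cursor, table_name):
    """Stream a DataFrame into an existing Postgres table via
    COPY FROM STDIN, staging the rows as CSV in memory."""
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)
    buf.seek(0)
    # psycopg2's copy_expert reads the CSV directly from the buffer
    cursor.copy_expert(
        f"COPY {table_name} FROM STDIN WITH (FORMAT csv)", buf)

# The CSV staging step itself needs no database connection:
df = pd.DataFrame({"id": [1, 2], "name": ["alice", "bob"]})
staged = df.to_csv(index=False, header=False)
```

Staging through CSV is what makes this fast: COPY bypasses per-row INSERT parsing and constraint-checking overhead on the wire.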

@oneryalcin
oneryalcin / add_deploy_user.yml
Created February 22, 2019 14:17
Ansible 2.7 compatible playbook that deploys a new user with sudo rights and passwordless login to a remote server. It also disables the root user and password authentication.
######### ansible.cfg FILE ############
[defaults]
inventory = ./dev
######### DEV FILE ############
# Dev file
[servers]
@oneryalcin
oneryalcin / corporate_messaging.py
Last active August 6, 2019 11:27
Example NLTK (Corporate messaging - Udacity example)
import nltk
nltk.download(['punkt', 'wordnet'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
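The preview cuts off before the `tokenize` function the imports build toward. A simplified sketch of that step (the original gist uses `nltk.word_tokenize` and `WordNetLemmatizer`, which require the downloaded `punkt` and `wordnet` corpora; this version substitutes a plain whitespace split so it runs standalone):

```python
import re

def tokenize(text):
    """Normalize, then split text into clean tokens.

    In the gist, the split and a lemmatization pass are done with
    NLTK; here a bare str.split stands in for word_tokenize.
    """
    # lowercase and replace anything non-alphanumeric with a space
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
    return [tok.strip() for tok in text.split()]

tokens = tokenize("Barclays CEO stresses the importance of regulatory compliance!")
```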
@oneryalcin
oneryalcin / custom_transformer.py
Last active August 6, 2019 12:09
sklearn's FeatureUnion in Action. We use additional custom defined StartingVerbExtractor to use in our pipeline in parallel.
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.base import BaseEstimator, TransformerMixin
url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'  # raw string avoids invalid-escape warnings
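The preview stops before the transformer itself. As a sketch of the pattern the description names, a custom transformer only needs `fit` and `transform` to slot into a `FeatureUnion` alongside standard text features (the `TextLengthExtractor` below is a toy stand-in for the gist's `StartingVerbExtractor`, which uses NLTK POS tags):

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer

class TextLengthExtractor(BaseEstimator, TransformerMixin):
    """Emits one numeric feature (character count) per document."""

    def fit(self, X, y=None):
        # stateless transformer: nothing to learn
        return self

    def transform(self, X):
        # one row per document, one column for the new feature
        return np.array([[len(text)] for text in X])

# FeatureUnion runs both extractors and concatenates their columns
union = FeatureUnion([
    ("bow", CountVectorizer()),
    ("length", TextLengthExtractor()),
])
features = union.fit_transform(["short text", "a somewhat longer text"])
```

Because both branches see the same raw documents, the union yields the bag-of-words columns plus the custom column side by side, ready for any downstream estimator.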
@oneryalcin
oneryalcin / gridsearchcv_pipeline.py
Last active August 6, 2019 13:19
Sklearn's GridSearchCV with Pipelines
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
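The preview again ends at the imports. A minimal sketch of the pattern the title names, tuning pipeline steps with `GridSearchCV` (the parameter values and toy data here are illustrative, not from the gist):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

pipeline = Pipeline([
    ("vect", CountVectorizer()),
    ("tfidf", TfidfTransformer()),
    ("clf", RandomForestClassifier(random_state=0)),
])

# "step__param" keys reach through the pipeline into each named step
parameters = {
    "vect__ngram_range": [(1, 1), (1, 2)],
    "clf__n_estimators": [5, 10],
}

cv = GridSearchCV(pipeline, param_grid=parameters, cv=2)

X = ["good service", "bad service", "great product",
     "terrible product", "happy customer", "angry customer"]
y = [1, 0, 1, 0, 1, 0]
cv.fit(X, y)
```

Searching over the whole pipeline rather than the final classifier alone means vectorizer settings are cross-validated too, so text preprocessing cannot leak information from the held-out folds.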
@oneryalcin
oneryalcin / sparkify_1_import_libs.py
Created September 23, 2019 20:42
1 Sparkify Import libs
# import libraries
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import count, when, isnan, isnull, desc_nulls_first, desc, \
from_unixtime, col, dayofweek, dayofyear, hour, to_date, month
import pyspark.sql.functions as F
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler, StandardScaler, MinMaxScaler
from pyspark.ml.classification import DecisionTreeClassifier, RandomForestClassifier
# sc = SparkContext(appName="Project_workspace")