This small subclass of the Pandas sqlalchemy-based SQL support for reading/storing tables uses the Postgres-specific "COPY FROM" method to insert large amounts of data into the database. It is much faster than using INSERT. To achieve this, the table is created in the normal way using sqlalchemy, but no data is inserted. Instead, the data is saved to a temporary CSV file (using Pandas' mature CSV support) and then read back into Postgres using psycopg2's support for COPY FROM STDIN.
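The same idea can be sketched without subclassing, as a callable passed to `DataFrame.to_sql(method=...)`. This is a minimal sketch, not the gist's actual implementation: it streams the rows through an in-memory CSV buffer rather than a temporary file, and the function names here (`rows_to_csv_buffer`, `psql_insert_copy`) are illustrative.

```python
import csv
from io import StringIO

def rows_to_csv_buffer(data_iter):
    """Serialize an iterable of row tuples into an in-memory CSV buffer,
    rewound to the start so it can be fed to COPY FROM STDIN."""
    buf = StringIO()
    writer = csv.writer(buf)
    writer.writerows(data_iter)
    buf.seek(0)
    return buf

def psql_insert_copy(table, conn, keys, data_iter):
    """Callable for pandas `to_sql(method=...)` that bulk-loads rows via
    the Postgres COPY FROM STDIN path (psycopg2's `copy_expert`)."""
    buf = rows_to_csv_buffer(data_iter)
    columns = ', '.join('"{}"'.format(k) for k in keys)
    table_name = ('{}.{}'.format(table.schema, table.name)
                  if table.schema else table.name)
    sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(table_name, columns)
    # conn is a sqlalchemy connection; .connection exposes the raw DBAPI one
    with conn.connection.cursor() as cur:
        cur.copy_expert(sql=sql, file=buf)
```

Usage would then be something like `df.to_sql('my_table', engine, method=psql_insert_copy, index=False)`, assuming a running Postgres instance and a sqlalchemy engine.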
{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "collapsed": true
      },
      "outputs": [],
      "source": [
######### ansible.cfg FILE ############
[defaults]
inventory = ./dev

######### DEV FILE ############
# Dev file
[servers]
import nltk
nltk.download(['punkt', 'wordnet'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.base import BaseEstimator, TransformerMixin
# Raw string avoids invalid-escape-sequence warnings for \( and \)
url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
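A typical first step in a pipeline built on this pattern is to replace URLs with a fixed placeholder before tokenizing and lemmatizing. This is a minimal sketch of that step only; the function name `replace_urls` and the placeholder token are illustrative, not part of the snippet above.

```python
import re

# URL pattern from the snippet above, as a raw string
url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'

def replace_urls(text, placeholder='urlplaceholder'):
    """Swap every URL in the text for a fixed token, so that
    word_tokenize/WordNetLemmatizer see a single stable word."""
    return re.sub(url_regex, placeholder, text)
```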
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
# import libraries
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import count, when, isnan, isnull, desc_nulls_first, desc, \
    from_unixtime, col, dayofweek, dayofyear, hour, to_date, month
import pyspark.sql.functions as F
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler, StandardScaler, MinMaxScaler
from pyspark.ml.classification import DecisionTreeClassifier, RandomForestClassifier
# sc = SparkContext(appName="Project_workspace")