This small subclass of Pandas' SQLAlchemy-based SQL support for reading/storing tables uses the Postgres-specific COPY FROM method to insert large amounts of data into the database. It is much faster than using INSERT. To achieve this, the table is created in the normal way using SQLAlchemy, but no data is inserted. Instead, the data is saved to a temporary CSV file (using Pandas' mature CSV support) and then read back into Postgres using psycopg2's support for COPY FROM STDIN.
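A minimal sketch of the CSV-then-COPY step described above (the helper names and the exact COPY options are assumptions, not the gist's actual code; it also streams an in-memory CSV buffer rather than a temporary file):

```python
import io

import pandas as pd


def df_to_csv_buffer(df):
    """Serialize a DataFrame to an in-memory CSV buffer (no header, no index)."""
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)
    buf.seek(0)
    return buf


def copy_from_buffer(engine, df, table_name):
    """Stream a DataFrame into an existing Postgres table via COPY FROM STDIN.

    Assumes `engine` is a SQLAlchemy engine backed by psycopg2 and that the
    target table was already created (e.g. by pandas/SQLAlchemy) with columns
    in the same order as the DataFrame.
    """
    buf = df_to_csv_buffer(df)
    raw = engine.raw_connection()  # underlying psycopg2 connection
    try:
        cur = raw.cursor()
        cur.copy_expert(f'COPY "{table_name}" FROM STDIN WITH (FORMAT csv)', buf)
        cur.close()
        raw.commit()
    finally:
        raw.close()
```

COPY bypasses per-row INSERT overhead on the server side, which is where the speedup comes from.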
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
def diff_df(df1, df2, how="left"):
    """
    Find the difference of rows between two given dataframes.
    This function is not symmetric, i.e.
        diff(x, y) != diff(y, x)
    however
        diff(x, y, how='left') == diff(y, x, how='right')
    Ref: https://stackoverflow.com/questions/18180763/set-difference-for-pandas/40209800#40209800
    """
######### ansible.cfg FILE ############
[defaults]
inventory = ./dev

######### DEV FILE ############
# Dev file
[servers]
import nltk
nltk.download(['punkt', 'wordnet'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
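The gist's function definitions are cut off after the imports. A typical first step with these imports is a text-normalization helper along the following lines (the helper name and behaviour are assumptions; in the full pipeline the result would then go through word_tokenize and WordNetLemmatizer, which need the downloaded nltk data):

```python
import re


def normalize(text):
    # Lower-case and replace anything that is not a letter or digit with a
    # space, then split on whitespace. In the full pipeline the normalized
    # text would instead be passed to word_tokenize and each token
    # lemmatized with WordNetLemmatizer.
    return re.sub(r"[^a-zA-Z0-9]", " ", text.lower()).split()
```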
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.base import BaseEstimator, TransformerMixin
url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
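A pattern like url_regex is typically used to swap URLs for a fixed placeholder before tokenization, so URLs do not pollute the vocabulary. A small self-contained illustration (the function name and placeholder string are assumptions, not the gist's code):

```python
import re

url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'


def replace_urls(text, placeholder="urlplaceholder"):
    # Substitute every match of url_regex with a single placeholder token.
    return re.sub(url_regex, placeholder, text)
```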
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix