The error when installing via pip install psycopg2 looks like this:
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option (...)
Reference to the solution here.
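The two usual fixes can be sketched as shell commands. The PostgreSQL bin directory below is only an example; locate yours with pg_config --bindir or your package manager.

```shell
# The error means pip is building psycopg2 from source and cannot find
# the pg_config binary that ships with PostgreSQL.

# Option 1: skip the source build by installing the pre-built wheel.
pip install psycopg2-binary

# Option 2: put the directory containing pg_config on the PATH and retry.
# (/usr/local/pgsql/bin is an example path; yours may differ)
export PATH="/usr/local/pgsql/bin:$PATH"
pip install psycopg2
```

Note that psycopg2-binary is convenient for development, while the psycopg2 source build is what the maintainers recommend for production.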
select s3l.message, s3l.*, sle.*
from stl_load_errors sle
left join svl_s3log s3l
on sle.query = s3l.query
order by sle.starttime desc
limit 10;
import pandas as pd

# Show full dataframes when printing, instead of pandas' truncated view
pd.set_option("display.max_rows", 500)
pd.set_option("display.max_columns", 500)
pd.set_option("display.width", 1000)
pd.set_option("display.max_colwidth", None)
variable = "This is the value"
print(f"{variable=}")
# >>> variable='This is the value'
Definition: combining multiple commits into one. This is more about keeping a tidy history than fixing a technical problem.
First figure out how many commits you need to squash. You can check with:
git log
Say you want to combine the last 3 commits into one. You do a soft reset to HEAD
minus 3 commits:
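The steps above can be sketched end to end in a throwaway repo (the file name and commit messages are made up for the demo):

```shell
# Create a temporary repo with an initial commit plus three commits to squash
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo base > file.txt && git add file.txt && git commit -qm "initial commit"
for i in 1 2 3; do
  echo "change $i" >> file.txt
  git commit -qam "commit $i"
done

# --soft moves HEAD back 3 commits but leaves their changes staged,
# so one new commit captures all of them
git reset --soft HEAD~3
git commit -qm "three commits squashed into one"

git rev-list --count HEAD   # 2: the initial commit plus the squashed one
```

The soft reset is the key choice here: a mixed or hard reset would unstage or discard the work, while --soft keeps everything staged and ready to re-commit in one go.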
-- This setting disables the results cache, so we can see the full processing runtime each time we run the query
SET enable_result_cache_for_session TO OFF;
from inspect import currentframe, getframeinfo

print(getframeinfo(currentframe()).lineno)  # prints the line number this call sits on
-- Problem: you don't see all the schemas when querying PG_TABLE_DEF
-- Solution:
-- 1. First check whether the schema you're trying to query is on the search path
show search_path;
-- 2. Add the missing one(s) to the search path (imagine the result was only public and you're missing data_warehouse and matching)
set search_path to '$user', public, data_warehouse, matching; -- Keep '$user' literally, regardless of your username
import pandas as pd
import numpy as np

df = pd.DataFrame({"A": [1, 2, 3], "B": [1.2, np.nan, 3.4]})

# Replace NaN with None so the resulting dicts hold real Python None values
# (np.nan, lowercase, is the spelling that survives in NumPy 2.x)
result = (
    df
    .replace([np.nan], [None], regex=False)
    .to_dict(orient="records")
)