# create two new screens, sumpTin and nutTin
$ screen -S sumpTin
$ screen -S nutTin
# detach from sumpTin, leaving it running
# press Ctrl-a, then d
# list existing screens
$ screen -ls
```python
import numpy as np

def press_statistic(y_true, y_pred, xs):
    """
    Calculation of the `PRESS statistic <https://www.otexts.org/1580>`_
    """
    res = y_pred - y_true
    # hat matrix H = X (X'X)^-1 X', computed via the pseudoinverse
    hat = xs.dot(np.linalg.pinv(xs))
    den = 1 - np.diagonal(hat)
    sqr = np.square(res / den)
    return sqr.sum()
```
# `conda` env vars

## Quick `gist` to setup `conda` env variables

I try to keep `virtualenv` and `conda` going to ensure I stay versed in both virtual environment
methodologies. In `virtualenv` I can simply modify the script located in `$PATH/bin/activate`, where
`$PATH` is the path to the virtual environment I'm working on. However, the process in `conda` is a
little more involved (but also straightforward, as all things `conda` seem to be).

## Create `activate / deactivate` directories
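A minimal sketch of what this step sets up, following conda's documented `activate.d` / `deactivate.d` hook convention (the variable name `MY_VAR` and its value are just examples):

```shell
# use the active env's prefix if there is one; fall back to a temp dir for demonstration
CONDA_PREFIX="${CONDA_PREFIX:-$(mktemp -d)}"

# conda sources every script in these directories on activate / deactivate
mkdir -p "$CONDA_PREFIX/etc/conda/activate.d"
mkdir -p "$CONDA_PREFIX/etc/conda/deactivate.d"

# set the variable when the env activates
cat > "$CONDA_PREFIX/etc/conda/activate.d/env_vars.sh" <<'EOF'
export MY_VAR="some-value"
EOF

# unset it when the env deactivates
cat > "$CONDA_PREFIX/etc/conda/deactivate.d/env_vars.sh" <<'EOF'
unset MY_VAR
EOF
```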
## Turning The Billion Prices Data Series into Actual Inflation Data

They keep the data in tough-to-reach nooks... but I gotcha..

```python
import pandas as pd

data_url = 'https://globalmarkets.statestreet.com/Proxy/Public/csv/US_monthly_series.csv'
# DataFrame.from_csv is long deprecated; read_csv with a date index does the same job
bpp = pd.read_csv(data_url, index_col=0, parse_dates=True)

# data is "monthly" and in points; scale it into an annualized rate
apr = bpp / 100. * 12
```
# Intro

If you're dealing with categorical variables, it's highly valuable to have:

- Number representations for the categories (so you can feed them to `sklearn` algorithms)
- A dictionary mapping of the numbers to the categories (for `groupby` and other such methods)

I couldn't find something built-in, but the solve is pretty easy (and sexy): a one-liner.
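A minimal sketch of one way to get both pieces at once, assuming a `pandas` Series (the column values here are made up); `pd.factorize` hands back the integer codes and the categories they map to:

```python
import pandas as pd

s = pd.Series(['cat', 'dog', 'cat', 'bird'])  # assumed example data

# integer codes plus the categories they map to, in one call
codes, categories = pd.factorize(s)

print(codes)                         # [0 1 0 2]
print(dict(enumerate(categories)))   # {0: 'cat', 1: 'dog', 2: 'bird'}
```

The `dict(enumerate(...))` bit is the one-liner for the reverse lookup: code back to category name.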
Disclaimer: Although I'm a Python developer, I'm not someone who has built libraries from source. Use this gist at your own peril.
## Intro

If you're running Linux, you may have installed Anaconda to replace your package management needs, and loved it. Except when you fire up IPython you get an annoying "extra space" in your tab completion... that hasn't been patched yet.
When looking here and here, it appears to still be an issue.
With some hand-holding of @cpcloud (as in, "he held mine"), he was able to fix that problem for me today with very little effort.
# Parsing API XML data using BeautifulSoup

I had a difficult time extracting data from an XML object retrieved using the `requests` library. Simply, I had done something like:

[In] 1: import requests
[In] 2: socket = requests.get('https://the?xml?shitting&api?url')

Most of the documents I found pointed me towards `xml` and using `ElementTree`. After several attempts, with no success at all, I was able to get the desired result by using BeautifulSoup. Specifically the following:
## Step 1:
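The step's code didn't survive here; a minimal sketch of the BeautifulSoup approach described above (the sample XML is made up, and the `'xml'` parser requires `lxml` to be installed):

```python
from bs4 import BeautifulSoup

# stand-in for socket.text from the requests call above (assumed sample payload)
xml_text = '<rates><rate currency="USD">1.00</rate><rate currency="EUR">0.92</rate></rates>'

# hand the raw text to BeautifulSoup with the XML parser
soup = BeautifulSoup(xml_text, 'xml')

# find_all returns every matching tag; attributes and text come out cleanly
for rate in soup.find_all('rate'):
    print(rate['currency'], rate.text)
```

With a live response you would pass `socket.text` instead of `xml_text`.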
# Motivation

I was getting fed up with the usual: a `list` of keys, looping through, and assigning a value to each key... I wanted something a little sexier using Python's `map` functionality.

Here's what I came up with

## Makin' a Sexy Dict
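The gist's code is cut off here; a minimal sketch of the `map`-based pattern described above (the keys and the value function, `len`, are made-up examples):

```python
# assumed example: a list of keys and a function that computes each key's value
keys = ['a', 'bb', 'ccc']

# the one-liner: map the value function over the keys and zip back into a dict
sexy = dict(zip(keys, map(len, keys)))

print(sexy)  # {'a': 1, 'bb': 2, 'ccc': 3}
```

Swap `len` for whatever per-key computation you actually need; no explicit loop, no empty-dict boilerplate.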