# Why
I haven't seen any visualizations of "exploding bar charts" done with matplotlib
like this one:
So I put together some very quick code to show how it could be done.
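The gist's original plotting code isn't in this excerpt, so here is my own minimal sketch of one way to read "exploding": a total bar broken out into its stacked components in a neighboring column. All data, labels, and the output filename are made up for illustration.

```python
# Hypothetical sketch of an "exploding" bar chart: a total bar
# "exploded" into its stacked components (all values illustrative).
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

total = 10.
parts = [4., 3.5, 2.5]  # components that sum to the total

fig, ax = plt.subplots()
ax.bar(0, total, width=0.8, color='steelblue')

# stack the components in a second column next to the total
bottom = 0.
for p in parts:
    ax.bar(1, p, width=0.8, bottom=bottom)
    bottom += p

ax.set_xticks([0, 1])
ax.set_xticklabels(['total', 'breakdown'])
fig.savefig('exploding_bar.png')
```

Connecting lines between the edges of the total bar and the stacked column (e.g. via `ax.plot`) would complete the "exploded" look.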
# WIP

import numpy

def gen_tickers(num_tickers, num_new):
    """
    Generates `num_tickers` random, length-3 tickers and
    `num_new` random, length-3 tickers.
    """
    s = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    # create the list of tickers; randint(0, 26) draws valid indices into s
    ticks = [''.join(s[j] for j in numpy.random.randint(0, 26, 3))
             for _ in range(num_tickers)]
    # create the list of "new" tickers the same way
    new = [''.join(s[j] for j in numpy.random.randint(0, 26, 3))
           for _ in range(num_new)]
    return ticks, new
# Motivation
I was getting fed up with the usual pattern: make a list of keys, loop through it, and assign a value to each key... I wanted something a little sexier using Python's map functionality.
Here's what I came up with.
## Makin' a Sexy Dict
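The gist's actual code isn't in this excerpt, but a sketch of the map-based approach it describes might look like the following (the keys and the value function are illustrative):

```python
# Instead of looping over keys and assigning one by one, map a
# value-producing function over the keys and zip the result into a dict.
keys = ['AAPL', 'IBM', 'GOOG']

sexy_dict = dict(zip(keys, map(len, keys)))
print(sexy_dict)  # {'AAPL': 4, 'IBM': 3, 'GOOG': 4}
```

Any function can stand in for `len` here; the point is that the loop-and-assign boilerplate collapses into one line.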
# Parsing API XML data using BeautifulSoup
I had a difficult time extracting data from an XML object retrieved with the requests library. Simply put, I had done something like:

[In] 1: import requests
[In] 2: socket = requests.get('https://the?xml?shitting&api?url')

Most of the documents I found pointed me towards `xml` and using ElementTree. After several attempts, with no success at all, I was able to get the desired result by using BeautifulSoup. Specifically, the following:
## Step 1:
Disclaimer: Although I'm a Python developer, I'm not someone who has built libraries from source. Use this gist at your own peril.
## Intro
If you're running Linux, you may have installed Anaconda to replace your package-management needs, and loved it. Except when you fire up IPython you get an annoying "extra space" in your tab completion... which hasn't been patched yet.
Looking here and here, it appears to still be an issue.
With some hand-holding from @cpcloud (as in, "he held mine"), he was able to fix that problem for me today with very little effort.
# Intro
If you're dealing with categorical variables, it's highly valuable to have:

- a numeric representation (for `sklearn` algorithms)
- the original category labels (for `groupby` and other such methods)

I couldn't find something built-in, but the solve is pretty easy (and sexy), and a one-liner.
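The gist's actual one-liner isn't in this excerpt; a sketch of the kind of mapping it describes, with illustrative labels, might be:

```python
# Hypothetical one-liner: map each category label to an integer code.
labels = ['red', 'green', 'blue', 'green', 'red']

# enumerate the unique labels and build a label -> code lookup in one line
codes = {label: i for i, label in enumerate(sorted(set(labels)))}

numeric = [codes[label] for label in labels]
print(codes)    # {'blue': 0, 'green': 1, 'red': 2}
print(numeric)  # [2, 1, 0, 1, 2]
```

The dict keeps the original labels for `groupby`-style work, while `numeric` is ready for `sklearn`.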
## Turning The Billion Prices Data Series into Actual Inflation Data
They keep the data in tough-to-reach nooks... but I gotcha.

import pandas

data_url = 'https://globalmarkets.statestreet.com/Proxy/Public/csv/US_monthly_series.csv'
bpp = pandas.read_csv(data_url, index_col=0)  # DataFrame.from_csv is deprecated
# data is "monthly" and in points; divide by 100 and multiply by 12 to annualize
apr = bpp / 100. * 12
# `conda` env vars
## Quick `gist` to setup `conda` env variables
I try to keep `virtualenv` and `conda` going to ensure I stay versed in both virtual-environment
methodologies. In `virtualenv` I can simply modify the script located at `$PATH/bin/activate`, where
`$PATH` is the path to the virtual environment I'm working on. However, the process in `conda` is a
little more involved (but also straightforward, as all things `conda` seem to be).
## Create `activate / deactivate` directories
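The rest of the steps aren't in this excerpt, but conda's standard mechanism for per-environment variables is scripts under `etc/conda/activate.d` and `etc/conda/deactivate.d` inside the env. A sketch, where `ENV_PREFIX` and `MY_VAR` are illustrative:

```shell
# ENV_PREFIX is illustrative -- point it at your env
# (e.g. ~/miniconda3/envs/myenv); CONDA_PREFIX is set when an env is active.
ENV_PREFIX="${CONDA_PREFIX:-./demo_env}"

mkdir -p "$ENV_PREFIX/etc/conda/activate.d" "$ENV_PREFIX/etc/conda/deactivate.d"

# scripts in activate.d are sourced on activate, deactivate.d on deactivate
echo 'export MY_VAR=value' > "$ENV_PREFIX/etc/conda/activate.d/env_vars.sh"
echo 'unset MY_VAR' > "$ENV_PREFIX/etc/conda/deactivate.d/env_vars.sh"
```

After this, `MY_VAR` is exported every time the env is activated and cleaned up on deactivate, mirroring what editing `bin/activate` gives you in `virtualenv`.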