@ricklentz
ricklentz / stablediffusionwalk.py
Created August 16, 2022 07:09 — forked from karpathy/stablediffusionwalk.py
hacky stablediffusion code for generating videos
"""
draws many samples from a diffusion model by slerp'ing around
the noise space, and dumps frames to a directory. You can then
stitch up the frames with e.g.:
$ ffmpeg -r 10 -f image2 -s 512x512 -i out/frame%04d.jpg -vcodec libx264 -crf 10 -pix_fmt yuv420p test.mp4
THIS FILE IS HACKY AND NOT CONFIGURABLE READ THE CODE, MAKE EDITS TO PATHS AND SETTINGS YOU LIKE
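The docstring's "slerp'ing around the noise space" refers to spherical linear interpolation between latent noise draws. A minimal sketch of such a slerp, assuming unit-agnostic NumPy vectors (the function name and the eps threshold are my own, not taken verbatim from the gist):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-7):
    """Spherical linear interpolation between noise vectors v0 and v1, t in [0, 1]."""
    v0_unit = v0 / np.linalg.norm(v0)
    v1_unit = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    if np.abs(dot) > 1.0 - eps:
        # Vectors are nearly parallel; plain lerp avoids dividing by sin(theta) ~ 0.
        return (1.0 - t) * v0 + t * v1
    theta = np.arccos(dot)
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

Sampling frames at evenly spaced t values between two fixed noise draws, then decoding each interpolated latent, is what produces the smooth video that the ffmpeg line above stitches together.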
@ricklentz
ricklentz / 00_cqf_ml_elective.md
Created November 22, 2019 15:58 — forked from yhilpisch/00_cqf_ml_elective.md
Machine Learning for Finance | Dr. Yves J. Hilpisch | CQF Elective | London, 23. May 2017

Machine Learning for Finance

A CQF elective with Dr. Yves J. Hilpisch, The Python Quants GmbH

General resources:

'''
Created on Jul 2, 2013
@author: sdejonckheere
pyDes must be installed (pip install pydes)
'''
from pyDes import des, PAD_NORMAL, CBC
# `binary` is a project-local helper module (not a PyPI package) providing
# bit-rotation and DES key-parity utilities.
import binary
from binary import ror, set_odd_parity
import codecs
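The `binary` module imported above is local to the project, so its contents are not shown here. Judging by the names, `ror` is a bit rotation and `set_odd_parity` fixes DES key parity; a hedged reconstruction of what they might look like (my own sketch, not the original module):

```python
def ror(value, bits, width=8):
    """Rotate `value` right by `bits` positions within a `width`-bit word."""
    mask = (1 << width) - 1
    value &= mask
    return ((value >> bits) | (value << (width - bits))) & mask

def set_odd_parity(key):
    """Return `key` with every byte forced to odd parity.

    DES keys reserve the least-significant bit of each byte as a parity
    bit, so bit 0 is adjusted until the byte has an odd number of set bits.
    """
    out = bytearray()
    for byte in key:
        high_bit_ones = bin(byte >> 1).count("1")
        # Set bit 0 only when the upper 7 bits hold an even count of ones.
        out.append((byte & 0xFE) | (0 if high_bit_ones % 2 else 1))
    return bytes(out)
```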
Darren,
I like your Excel tool, and the side-by-side visual was helpful in understanding the changes in the power metric that the author conveys.
I was curious this past weekend and set out to replicate the data-gathering practice outlined in the Zhang paper. I found that, even as a registered Facebook developer, it is intentionally hard to pull public user data through the Graph API under the current Terms of Use (even for just brand pages).
I am struggling to connect the dots on how the paper's benchmarks integrate with a real company's marketing-strategy metrics. I see that there is value in the structure of a social graph and in methods to measure a node's relative power, but I feel that capturing and proving value directly from these metrics in a restricted data-collection ecosystem is no longer feasible.
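As a concrete illustration of "a node's relative power," degree centrality is about the simplest such metric: a node's share of its possible connections. A quick pure-Python sketch (the function name is mine, and this is not the specific metric from the Zhang paper):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality for an undirected edge list: degree / (n - 1)."""
    degree = defaultdict(int)
    nodes = set()
    for u, v in edges:
        nodes.update((u, v))
        degree[u] += 1
        degree[v] += 1
    n = len(nodes)
    return {node: degree[node] / (n - 1) for node in nodes}
```

In a star-shaped brand-page graph the hub scores 1.0 and every follower 1/(n-1), which is the kind of relative-power contrast that richer network metrics refine.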
SAP, Oracle, IBM, and Acxiom acquired data aggregators of historical, individual-level transaction data to help brands efficiently run marketing campaigns. Then as social network adoption grew, be
The spectrum of data available to Pepsi for targeting operations in both of these articles seems limited given the context of our past coursework. From readings in other DS courses, we learned that firms offering targeted marketing as a service:
1) Model the market to identify current customers and those that look like current customers (new targets)
2) Model the engagement level, geography, personality, and sociodemographics of both existing and new targets for comparison and segment generation
3) Generate targeted actions crafted to drive each segment toward the desired state of behavior
Both methodologies seem to provide network-analysis metrics useful for understanding a target customer or segment, and I would think Pepsi may want to augment its advertising campaign process with this information. There are companies with very detailed individual-level customer behavior data (IBM and Oracle). Pepsi may want to study whether their network data and these metrics can further improve the
Venmathi,
I like the principles you selected for this analysis. I find the Gestalt laws revealing, since correlation is often 'good enough' for some of the types of analysis we take on as data scientists; when data are no longer correlated, or are newly correlated, we are drawn to find out why. Rosling's talk inspired many similar time-series animations, including an internal request (back when I first started working in finance as a software engineer) to recreate this one (http://www.nytimes.com/interactive/2009/07/02/business/economy/20090705-cycles-graphic.html) with slightly different data.
I also took a lot from the Few readings. It is great to have these principles and laws as we work through the first assignment and leverage tools like plot.ly (https://plot.ly/python/) to build out our assignment product. The higher-level visualization concepts covered in Good Charts particularly resonated with my 'strategic planner' mindset. Great work!
Emanuel,
Excellent post; I like the relevance of the concepts around misinformation. These visualization concepts parallel familiar Information Operations and Information Warfare concepts. I recall taking a machine learning course (over 15 years ago) with Professor Michalski at GMU's Machine Learning and Inference Laboratory. He mentioned that information can be evaluated along three axes: specificity, relevance, and accuracy/truthfulness. I have used this rapid assessment tool for information in nearly every domain of my career since that course.
From what I have learned so far in this course, information-assessment tools are equally useful in the visualization domain. From a defensive stance, if we have 'gold copy' source data, we can evaluate the truthfulness of a visualization using the lie factor, and we can evaluate specificity by assessing scale, as in Wainer's Rule #2 that you referenced. Relevance is a bit harder, but it is likely the most important in online information sharing, since we see how seamlessly
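For reference, Tufte's lie factor is the ratio of the effect size shown in the graphic to the effect size present in the data; values far from 1 signal distortion. A tiny sketch of that computation (the function names are mine):

```python
def effect_size(first, last):
    """Relative change from the first value to the last."""
    return (last - first) / first

def lie_factor(graphic_first, graphic_last, data_first, data_last):
    """Tufte's lie factor: effect shown in the graphic / effect in the data."""
    return effect_size(graphic_first, graphic_last) / effect_size(data_first, data_last)
```

For example, a bar that triples while the underlying value merely doubles yields a lie factor of 2.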
# example: embedding a talk in a notebook cell
from IPython.display import YouTubeVideo
YouTubeVideo('Bqch2ptaAJ8', width=800, height=450)
Like others, I'm nearing the final stretch of this program (course 10/12). I've supplemented the material in our program with Udemy and Udacity courses and am preparing to compete in my first Kaggle competition as part of a Udacity capstone project. My dominant focus is on studying for a financial analysis credential that is required to advance at my current employer.
"I'm looking to increase information advantage by enriching analysis with machine learning workflows using current state of the art tools and methods."
I'm interested in unstructured-data analysis more than visualization, mainly because my organization is struggling to gain any value from it. My mentor has asked me to investigate methods that leverage unstructured data to narrow the risk premium for certain investment and security events. I'm searching for methods that aid in generating machine-encoded knowledge representations of text data. I've looked at text summarizers (e.g., https://www.youtube.com/watch?v=ogrJaOIuBx4) and how IBM'
@ricklentz
ricklentz / README-Template.md
Last active January 19, 2018 03:08 — forked from PurpleBooth/README-Template.md
A template to make good README.md