Manos Parzakonis (IronistM)
/*
 * Instructions: place *after* the Google Analytics snippet,
 * so this _gaq.push executes after the _gaq.push(["_trackPageview"]); call.
 */
_gaq.push(function() {
  if (window.history && history.replaceState && location.search.match(/utm_/)) {
    var query = {};
    location.search.replace(/([^?=&]+)(=([^&]*))?/g, function($0, $1, $2, $3) {
      // keep every query parameter except the utm_* campaign ones
      if (!$1.match(/^utm_/)) {
        query[$1] = $3;
      }
    });
    // rebuild the query string without the utm_ parameters and rewrite
    // the URL in the address bar without reloading the page
    var search = [];
    for (var q in query) search.push(q + '=' + query[q]);
    history.replaceState({}, '', location.pathname +
      (search.length ? '?' + search.join('&') : '') + location.hash);
  }
});
# *--------------------------------------------------------------------
# | FUNCTION: create_test_sets
# | Creates simple artificial marketing mix data for testing code and
# | techniques
# *--------------------------------------------------------------------
# | Version |Date |Programmer |Details of Change
# | 01 |29/11/2011|Simon Raper |first version.
# *--------------------------------------------------------------------
# | INPUTS: base_p Number of base sales
# |          trend_p    Increase in sales for every unit increase in time
# *--------------------------------------------------------------------
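## The function body is missing from this extract. A minimal sketch of what
## such a generator could look like, using only the two documented inputs
## (base_p, trend_p) plus a hypothetical n_weeks default; not Simon Raper's
## original implementation:
create_test_sets_sketch <- function(base_p, trend_p, n_weeks = 5 * 52) {
  week  <- seq_len(n_weeks)
  # base level plus a linear trend, with some Gaussian noise on top
  sales <- base_p + trend_p * week + rnorm(n_weeks, sd = 0.05 * base_p)
  data.frame(week = week, sales = sales)
}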
# | FUNCTION: visCorrel
# | Creates an MDS plot where the distance between variables represents
# | correlation between the variables (closer=more correlated)
# *--------------------------------------------------------------------
# | Version |Date |Programmer |Details of Change
# | 01 |05/01/2012|Simon Raper |first version.
# *--------------------------------------------------------------------
# | INPUTS: dataset A dataframe containing only the explanatory
# |                    variables. It should not contain missing values
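## The function body is also missing here. A sketch of the technique the
## header describes (not the original implementation): convert absolute
## correlation to a distance, then lay the variables out in 2-D with
## classical multidimensional scaling.
visCorrel_sketch <- function(dataset) {
  d   <- as.dist(1 - abs(cor(dataset)))  # closer = more correlated
  fit <- cmdscale(d, k = 2)
  plot(fit, type = "n", xlab = "", ylab = "", main = "Correlation MDS")
  text(fit, labels = colnames(dataset))
}
## e.g. visCorrel_sketch(mtcars)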
# TV now coincides with winter. Carry-over is dec, theta is dim, beta is ad_p.
# Five years of weekly data, with one six-week TV burst per year
tv_grps <- rep(0, 5 * 52)
burst <- c(390, 250, 100, 80, 120, 60)
for (start in c(40, 92, 144, 196, 248)) tv_grps[start:(start + 5)] <- burst
if (adstock_form == 2) {
  adstock <- adstock_calc_2(tv_grps, dec, dim)
} else {
  adstock <- adstock_calc_1(tv_grps, dec, dim)
}
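## adstock_calc_1/adstock_calc_2 are not defined in this extract. A sketch of
## one common formulation, assuming dec is the weekly carry-over rate and dim
## a diminishing-returns (saturation) parameter; hypothetical, not the
## original functions:
adstock_calc_sketch <- function(grps, dec, dim) {
  adstocked <- numeric(length(grps))
  adstocked[1] <- grps[1]
  for (t in 2:length(grps)) {
    # geometric carry-over of last week's adstock plus this week's GRPs
    adstocked[t] <- grps[t] + dec * adstocked[t - 1]
  }
  # negative-exponential saturation for diminishing returns
  1 - exp(-adstocked / dim)
}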
## Creating random sales data with columns CustomerId (repeating across
## purchases), Sales.Date and Sales.Value
sales <- data.frame(
  CustomerId  = sample(1000:1999, size = 10000, replace = TRUE),
  Sales.Value = abs(round(rnorm(10000, 28, 13)))
)
# generate 10,000 random, sorted purchase dates within ~700 days of 2010-01-01
sales$Sales.Date <- as.Date("2010/1/1") + 700 * sort(stats::runif(10000))
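## Example use of the simulated data (not in the original gist): total spend
## per customer, e.g. as a starting point for RFM-style analysis
spend.by.customer <- aggregate(Sales.Value ~ CustomerId, data = sales, FUN = sum)
head(spend.by.customer)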
#### Connecting to Google Analytics API via R
#### Uses OAuth 2.0
#### https://developers.google.com/analytics/devguides/reporting/core/v3/ for documentation
# Install the devtools package & rga - this only needs to be done once
install.packages("devtools")
library(devtools)
install_github("skardhamar/rga")
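## The snippets below call methods on a "ga" object; with the rga package
## that object is typically created as follows, assuming the default instance
## name. The first run opens a browser for the OAuth 2.0 consent flow:
library(rga)
rga.open(instance = "ga")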
###############################################
##
## Attempt no. 2 at building a Shiny web app
## for A/B testing, using global.R
##
## global.R - loading and defining variables for the global environment
##
###############################################
# Palette used in some charts as a general indicator color for better or worse than the control group
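## The palette itself is missing from this extract; a hypothetical stand-in
## (not the author's colours): green for better, red for worse than control
indicator.palette <- c(better = "#1A9641", worse = "#D7191C", control = "#808080")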
# Use a call to the Management API to query your Profiles and select among them
ga$getProfiles()
# Define the ids and web properties to work with. web.properties holds the
# display names of the ids, for labeling purposes
ids<-c()
web.properties<-c()
# Only needed for the first/test runs!
final_dataset<-NA
j<-1
# In the future we should only fetch data for incremental dates
# get.start.date<-min(final_dataset$date)
# Set up a filters vector to loop over the distinct categories of the blog
filters <- c(
  "ga:pagePath=~^/blog/category/measure/*;ga:pageLoadSample>0",
  "ga:pagePath=~^/blog/category/statistics/*;ga:pageLoadSample>0",
  "ga:pagePath=~^/blog/category/music/*;ga:pageLoadSample>0"
)
page.group <- c("measure", "statistics", "music")
# setwd('C:/Users/m.parzakonis/Google Drive/MyCodeRants/GA/data')
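## A hypothetical version of the collection loop this setup implies, assuming
## the rga "ga" instance from above, a single view id, and illustrative dates
## and metrics (none of these specifics are from the original gist):
for (j in seq_along(filters)) {
  chunk <- ga$getData(ids[1],
                      start.date = "2013-01-01", end.date = "2013-12-31",
                      metrics    = "ga:pageviews,ga:avgPageLoadTime",
                      dimensions = "ga:date",
                      filters    = filters[j])
  chunk$page.group <- page.group[j]
  final_dataset <- if (j == 1) chunk else rbind(final_dataset, chunk)
}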
### how_many_tweets.R
Source: Text Data Mining with Twitter and R (posted April 8, 2011) | http://heuristically.wordpress.com/2011/04/08/text-data-mining-twitter-r/
###
### Read tweets from Twitter using ATOM (XML) format
###
# loading the package is required once each session
require(XML)
# installation is only required once and is remembered across sessions. Uncomment the next line if you lack the XML package
# install.packages('XML')
# initialize a storage variable for the tweets, plus one for the search query
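## The gist is truncated here. The source post paginates over Twitter's (now
## retired) search ATOM endpoint with the XML package; a sketch with a
## placeholder query (the query string is not from this extract):
mydata.vectors <- character(0)
twitter_q <- URLencode('#rstats')
for (page in 1:15) {
  twitter_url <- paste0('http://search.twitter.com/search.atom?q=',
                        twitter_q, '&rpp=100&page=', page)
  # parse the feed and pull out each entry's title (the tweet text)
  mydata.xml    <- xmlParseDoc(twitter_url, asText = FALSE)
  mydata.vector <- xpathSApply(mydata.xml, '//s:entry/s:title', xmlValue,
                               namespaces = c('s' = 'http://www.w3.org/2005/Atom'))
  mydata.vectors <- c(mydata.vectors, mydata.vector)
}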