{
"metadata": {
"name": "",
"signature": "sha256:ef7ca5a14e7a01867f1d77880a532e12f70a22d5dc580b53ccfce1ba360ba3d1"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "heading",
"level": 1,
"metadata": {},
"source": [
"Mod\u00e8les pr\u00e9dictifs construits \u00e0 partir de features SA "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Sections\n",
"\n",
"* [Baseline \u00e0 partir des outils mass-check et fp-fn-statistics](#1)\n",
"* [Mod\u00e8les pr\u00e9dictifs bas\u00e9s sur les features SA](#2)\n",
" * [Corpus osbf-test](#2.1)\n",
" * [Corpus gold_std](#2.2)\n",
" * [Corpus errata-oscar](#2.3)\n",
" * [osbf-test + gold_std](#2.4)\n",
" * [osbf-test vs. gold_std](#2.5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"1\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Baseline \u00e0 partir des outils mass-check et fp-fn-statistics\n",
"\n",
"Apr\u00e8s avoir utilis\u00e9 l'outil [mass-check](https://wiki.apache.org/spamassassin/MassCheck) (MC) sur les trois corpus de test (\"osbf-test-corpus\", \"gold_std\" et \"errata-oscar\"), on peut utiliser l'outil [fp-fn-statistics](https://wiki.apache.org/spamassassin/MassesOverview) sur les fichiers de log qui en r\u00e9sultent pour mesurer la performance d'un syst\u00e8me SpamAssassin 3.3 en version \"vanilla\" (SA, SAV, sans option \"bayes\" ni \"netcheck\"), c'est-\u00e0-dire install\u00e9 \u00e0 partir du package Ubuntu de base, avec pour seule modification le fait que les r\u00e8gles ont \u00e9t\u00e9 mises \u00e0 jour avec l'outil \"sa-update\". La m\u00e9trique utilis\u00e9e dans ce document sera le F-score :\n",
"\n",
"$F = 2 \\cdot \\frac{precision \\cdot recall}{precision + recall}$\n",
"\n",
"On doit calculer cette m\u00e9trique nous-m\u00eame, car \"fp-fn-statistics\" ne le fait pas explicitement :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def f_score(precision, recall):\n",
" return 2 * precision * recall / (precision + recall)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 1
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" $ fp-fn-statistics --cffile=/var/lib/spamassassin/3.003002/updates_spamassassin_org --ham=results/osbf-test-corpus_ham.log --spam=results/osbf-test-corpus_spam.log\n",
" Reading scores from \"/var/lib/spamassassin/3.003002/updates_spamassassin_org\"...\n",
" [../build/parse-rules-for-masses -o tmp/rules_25502.pl -d \"/var/lib/spamassassin/3.003002/updates_spamassassin_org\" -s 0]\n",
" Reading per-message hit stat logs and scores...\n",
"\n",
" # SUMMARY for threshold 5.0:\n",
" # Correctly non-spam: 946 94.22%\n",
" # Correctly spam: 2503 80.92%\n",
" # False positives: 58 5.78%\n",
" # False negatives: 590 19.08%\n",
" # TCR(l=50): 0.886246 SpamRecall: 80.925% SpamPrec: 97.735%"
]
},
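{
"cell_type": "markdown",
"metadata": {},
"source": [
"(A quick sanity-check aside: the SpamPrec and SpamRecall values reported above can be recomputed from the raw counts in the summary, before being plugged into the F-score.)"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Sanity check: recompute precision/recall from the summary counts above\n",
"# (correctly spam = 2503, false positives = 58, false negatives = 590).\n",
"tp, fp, fn = 2503., 58., 590.\n",
"precision = tp / (tp + fp)\n",
"recall = tp / (tp + fn)\n",
"precision, recall, f_score(precision, recall)"
],
"language": "python",
"metadata": {},
"outputs": []
},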
{
"cell_type": "code",
"collapsed": false,
"input": [
"f_score(.97735, .80925)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 2,
"text": [
"0.8853917916713312"
]
}
],
"prompt_number": 2
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" $ fp-fn-statistics --cffile=/var/lib/spamassassin/3.003002/updates_spamassassin_org --ham=results/gold_std_ham.log --spam=results/gold_std_spam.log\n",
" Reading scores from \"/var/lib/spamassassin/3.003002/updates_spamassassin_org\"...\n",
" [../build/parse-rules-for-masses -o tmp/rules_25543.pl -d \"/var/lib/spamassassin/3.003002/updates_spamassassin_org\" -s 0]\n",
" Reading per-message hit stat logs and scores...\n",
"\n",
" # SUMMARY for threshold 5.0:\n",
" # Correctly non-spam: 1133 93.10%\n",
" # Correctly spam: 1546 78.40%\n",
" # False positives: 84 6.90%\n",
" # False negatives: 426 21.60%\n",
" # TCR(l=50): 0.426286 SpamRecall: 78.398% SpamPrec: 94.847%"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"f_score(.94847, .78398)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 3,
"text": [
"0.8584161281422263"
]
}
],
"prompt_number": 3
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" $ fp-fn-statistics --cffile=/var/lib/spamassassin/3.003002/updates_spamassassin_org --ham=results/errata-oscar_ham.log --spam=results/errata-oscar_spam.log\n",
" Reading scores from \"/var/lib/spamassassin/3.003002/updates_spamassassin_org\"...\n",
" [../build/parse-rules-for-masses -o tmp/rules_25550.pl -d \"/var/lib/spamassassin/3.003002/updates_spamassassin_org\" -s 0]\n",
" Reading per-message hit stat logs and scores...\n",
"\n",
" # SUMMARY for threshold 5.0:\n",
" # Correctly non-spam: 6977 90.10%\n",
" # Correctly spam: 2557 29.82%\n",
" # False positives: 767 9.90%\n",
" # False negatives: 6018 70.18%\n",
" # TCR(l=50): 0.193270 SpamRecall: 29.819% SpamPrec: 76.925%"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"f_score(.76925, .29819)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 4,
"text": [
"0.42978089166604216"
]
}
],
"prompt_number": 4
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Si on part de l'hypoth\u00e8se qu'un SAV fait un relativement bon travail \u00e0 priori (ce que semble indiquer sa performance raisonnable sur les deux premiers corpus), la performance sur le troisi\u00e8me corpus laisse entendre qu'il y a un probl\u00e8me avec celui-ci (\u00e0 moins qu'il ne soit particuli\u00e8rement plus difficile que les autres). Jacques a confirm\u00e9 le fait qu'il y a un probl\u00e8me avec ce corpus, et travaille \u00e0 le corriger."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"2\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mod\u00e8les pr\u00e9dictifs bas\u00e9s sur les features SA\n",
"\n",
"En se basant sur ces r\u00e9sultats, nous allons maintenant examiner l'id\u00e9e de remplacer la composante de SA qui classifie un message (en r\u00e9pondant essentiellement \u00e0 la question \"est-ce que son score est au-del\u00e0 d'un certain seuil?) par un mod\u00e8le pr\u00e9dictif, qui d\u00e9terminera plut\u00f4t la r\u00e9ponse \u00e0 partir des \"features\" du message, c'est-\u00e0-dire les r\u00e8gles SA qui s'y appliquent. \u00c0 la diff\u00e9rence de SA, qui utilise les r\u00e8gles de mani\u00e8re rigide et d\u00e9terministe, avec des poids pr\u00e9d\u00e9finis, notre mod\u00e8le devrait \u00eatre plus souple, en d\u00e9terminant lui-m\u00eame l'importance relative des r\u00e8gles, en la d\u00e9rivant tout d'abord sur un ensemble d'entrainement. Cette m\u00e9thodologie d'une phase d'entrainement suivie d'une phase de test est toutefois passablement diff\u00e9rente de celle d\u00e9crite plus haut avec MC, et les comparaisons qui suivent doivent \u00eatre intepr\u00e9t\u00e9es en en tenant compte. Quand on utilise MC pour \u00e9valuer la performance de SA sur un corpus, la phase d'entrainement est implicite, et pr\u00e9alablement accomplie avant m\u00eame de commencer, avec comme r\u00e9sultat les param\u00e8tres du syst\u00e8me (poids des r\u00e8gles, etc) tels qu'ils sont au d\u00e9part. Pour les mod\u00e8les pr\u00e9dictifs, nous utiliserons plut\u00f4t la technique du \"$k$-fold cross-validation\", qui proc\u00e8de en d\u00e9coupant tout d'abord le corpus en $k$ parties \u00e9gales, pour entrainer ensuite un mod\u00e8le sur $k-1$ parties distinctes, et tester finalement sur la $k$-i\u00e8me. En r\u00e9p\u00e9tant ce processus $k$ fois (en changeant \u00e0 chaque fois la partie avec laquelle on teste), on s'assure d'obtenir un estimateur passablement robuste de la \"vraie\" performance en test sur ce corpus.\n",
"\n",
"D\u00e9finissons tout d'abord un it\u00e9rateur qui parcourera les fichiers de log d'un corpus (produits par MC) pour en extraire les features. Cet it\u00e9rateur extraiera \u00e9galement des informations suppl\u00e9mentaires : la vraie classification des messages (i.e. les \"bonnes r\u00e9ponses\"), ainsi que leur classification et score selon SA."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"class MassCheckLogIterator:\n",
"\n",
" def __init__(self, fns):\n",
" self.fns = fns\n",
" self.golds = [] # is_really_spam[i] (bools)\n",
" self.preds = [] # is_spam_according_to_sa[i] (bools)\n",
" self.scores = [] # not used for the moment (ints)\n",
"\n",
" def __iter__(self):\n",
" for fn in self.fns:\n",
" is_really_spam = 'spam' in fn # watch out, not very robust!\n",
" with open(fn) as f:\n",
" for line in f:\n",
" if line.startswith('#'):\n",
" continue\n",
" tokens = line.split()\n",
" self.preds.append(tokens[0] == 'Y')\n",
" self.golds.append(is_really_spam)\n",
" self.scores.append(int(tokens[1]))\n",
" yield tokens[3:][0]\n",
" assert len(unique(self.preds)) == 2\n",
" assert len(unique(self.golds)) == 2"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 5
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"D\u00e9finissons ensuite une fonction qui utilisera cet it\u00e9rateur pour extraire les features des fichiers de log MC. Cette fonction a deux options : le fait d'inclure ou non les r\u00e8gles d\u00e9butant par un \"underscore\" (et qui sont reli\u00e9es aux m\u00e9ta-r\u00e8gles) et le fait d'inclure ou non la classification selon SA (qu'il n'est pas \"ill\u00e9gal\" d'utiliser soit dit en passant, car c'est une information fournie par SA, au m\u00eame titre que les r\u00e8gles, et qu'on peut donc traiter comme une feature si on veut)."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import scipy.sparse as ss\n",
"from sklearn.utils import shuffle\n",
"from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\n",
"\n",
"def extract_features_from_mc_logs(mc_log_fns, include_underscore_rules, include_sa_preds):\n",
" mcl_iter = MassCheckLogIterator(mc_log_fns)\n",
" token_pattern = r\"(?u)\\b\\w\\w+\\b\" if include_underscore_rules else r\"(?u)\\b[^_,]\\w+\\b\"\n",
" vec = CountVectorizer(token_pattern=token_pattern)\n",
" X = vec.fit_transform(mcl_iter)\n",
" if include_sa_preds:\n",
" sa_preds = np.array(mcl_iter.preds).reshape(X.shape[0], 1)\n",
" X = ss.hstack((X, sa_preds))\n",
" y = mcl_iter.golds\n",
" X, y = shuffle(X, y)\n",
" return X, y, vec.vocabulary_.keys()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 6
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"2.1\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Corpus osbf-test\n",
"\n",
"Examinons tout d'abord le corpus osbf-test."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mc_log_fns = ('mc_logs/osbf-test-corpus_ham.log',\n",
" 'mc_logs/osbf-test-corpus_spam.log')\n",
"X, y, rules = extract_features_from_mc_logs(mc_log_fns, \n",
" include_underscore_rules=False, \n",
" include_sa_preds=False)\n",
"\n",
"print('X:', repr(X))\n",
"print('Ham/spam distribution:', bincount(y))\n",
"rules[:10]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"X: <4097x350 sparse matrix of type '<type 'numpy.int64'>'\n",
"\twith 26726 stored elements in Compressed Sparse Row format>\n",
"Ham/spam distribution: [1004 3093]\n"
]
},
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 7,
"text": [
"[u'uri_novowel',\n",
" u'fsl_interia_abuse',\n",
" u'na_dollars',\n",
" u'mime_header_ctype_only',\n",
" u'subj_obfu_punct_many',\n",
" u'livefilestore',\n",
" u'body_empty',\n",
" u'part_cid_stock',\n",
" u'missing_headers',\n",
" u'all_trusted']"
]
}
],
"prompt_number": 7
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Voyons tout d'abord la performance d'un mod\u00e8le trivial qui consisterait \u00e0 toujours pr\u00e9dire la classe la plus fr\u00e9quente (ce qui reviendrait \u00e0 classer tous les messages en tant que spam dans ce cas). La m\u00e9trique utilis\u00e9e ici est \"l'exactitude\" (traduction de \"accuracy\", qu'il est sp\u00e9cialement important de ne pas confondre avec la *pr\u00e9cision* dans ce contexte), qui correspond au \"taux de bonnes r\u00e9ponses\" (spam **et** ham) du mod\u00e8le :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from sklearn.dummy import DummyClassifier\n",
"\n",
"dc = DummyClassifier(strategy='most_frequent').fit(X, y)\n",
"dc.score(X, y)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 8,
"text": [
"0.75494264095679764"
]
}
],
"prompt_number": 8
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sans surprise, cette m\u00e9trique est \u00e9gale \u00e0 la proportion de spam par rapport \u00e0 la taille du corpus :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sum(y) / len(y)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 9,
"text": [
"0.75494264095679764"
]
}
],
"prompt_number": 9
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Mais ce n'est pas l'exactitude qui nous int\u00e9resse en r\u00e9alit\u00e9, mais bien le F-score :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from sklearn.metrics import f1_score\n",
"\n",
"f1_score(y, dc.predict(X))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 10,
"text": [
"0.86036161335187755"
]
}
],
"prompt_number": 10
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ceci est un peu surprenant, car \u00e7a veut dire que la performance obtenue par le SAV sur osbf-corpus (F=0.885) n'est probablement pas aussi bonne qu'on ne l'aurait cru \u00e0 priori.. Quoi qu'il en soit, on doit absolument faire mieux que le \"mod\u00e8le trivial\", car sinon, \u00e7a veut dire que nos efforts ne valent pas mieux que de faire la chose la plus simple qui soit. En fait, si on consid\u00e8re les d\u00e9finitions de \"pr\u00e9cision\" et \"rappel\", il n'y a rien de surprenant : la pr\u00e9cision est la proportion de ce qu'on a classifi\u00e9 comme spam qui en \u00e9tait v\u00e9ritablement, et le rappel est la proportion du spam qu'on a r\u00e9ussi \u00e0 classifier comme tel par rapport \u00e0 la totalit\u00e9 du spam, donc :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"precision = sum(y) / len(y) # i.e. equivalent to accuracy in this context\n",
"recall = 1 # saying that everything is spam means that we'll catch it all\n",
"f_score(precision, recall)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 11,
"text": [
"0.86036161335187755"
]
}
],
"prompt_number": 11
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"On note que dans un cas plus extr\u00eame, o\u00f9 la proportion de spam \u00e0 priori serait de 90% par exemple, le F-score trivial serait encore plus extr\u00eame :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"f_score(precision=.9, recall=1)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 12,
"text": [
"0.9473684210526316"
]
}
],
"prompt_number": 12
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Mais dans les faits c'est plut\u00f4t l'inverse qui se produit, avec une proportion de spam \u00e0 priori beaucoup plus basse (environ 6%), ce qui aurait essentiellement pour effet d'interdire l'usage de toute strat\u00e9gie triviale, comme le montre ce calcul :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"f_score(precision=.94, recall=0)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 13,
"text": [
"0.0"
]
}
],
"prompt_number": 13
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Voyons maintenant la performance d'un \"vrai\" mod\u00e8le pr\u00e9dictif, un SVM (qui n'est pas trop loin d'une r\u00e9gression logistique dans ce contexte) :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from sklearn.svm import LinearSVC\n",
"from sklearn.cross_validation import cross_val_score\n",
"\n",
"# k=5 fold cross-validation\n",
"average(cross_val_score(LinearSVC(), X, y, cv=5, scoring='f1'))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 14,
"text": [
"0.95668626363156795"
]
}
],
"prompt_number": 14
},
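{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, `cross_val_score(..., cv=5, scoring='f1')` is roughly shorthand for the explicit $k$-fold loop described earlier. Below is a minimal sketch of that loop (assuming the `X`, `y`, `LinearSVC` and `f1_score` already defined above; scikit-learn's own fold assignment may differ slightly):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"from sklearn.cross_validation import StratifiedKFold\n",
"\n",
"# Sketch of the k-fold loop that cross_val_score wraps: train on k-1 folds,\n",
"# test on the held-out fold, repeat k times and average the F-scores.\n",
"y_arr = np.asarray(y)\n",
"fold_scores = []\n",
"for train_idx, test_idx in StratifiedKFold(y_arr, n_folds=5):\n",
"    fold_clf = LinearSVC().fit(X[train_idx], y_arr[train_idx])\n",
"    fold_scores.append(f1_score(y_arr[test_idx], fold_clf.predict(X[test_idx])))\n",
"np.average(fold_scores)"
],
"language": "python",
"metadata": {},
"outputs": []
},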
{
"cell_type": "markdown",
"metadata": {},
"source": [
"C'est d\u00e9j\u00e0 beaucoup mieux que le mod\u00e8le trivial. Voyons maintenant ce qui se passe si on inclut les r\u00e8gles qui commencent par un underscore :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"X, y, ot_rules = extract_features_from_mc_logs(mc_log_fns,\n",
" include_underscore_rules=True, \n",
" include_sa_preds=False)\n",
"\n",
"print('X:', repr(X))\n",
"print('Ham/spam distribution:', bincount(y))\n",
"rules[:10]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"X: <4097x1016 sparse matrix of type '<type 'numpy.int64'>'\n",
"\twith 260024 stored elements in Compressed Sparse Row format>\n",
"Ham/spam distribution: [1004 3093]\n"
]
},
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 15,
"text": [
"[u'uri_novowel',\n",
" u'fsl_interia_abuse',\n",
" u'na_dollars',\n",
" u'mime_header_ctype_only',\n",
" u'subj_obfu_punct_many',\n",
" u'livefilestore',\n",
" u'body_empty',\n",
" u'part_cid_stock',\n",
" u'missing_headers',\n",
" u'all_trusted']"
]
}
],
"prompt_number": 15
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"On voit que le nombre de r\u00e8gles augmente de mani\u00e8re significative. Voyons l'effet de ces features suppl\u00e9mentaires sur le mod\u00e8le :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"average(cross_val_score(LinearSVC(), X, y, cv=5, scoring='f1'))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 16,
"text": [
"0.98460766371408925"
]
}
],
"prompt_number": 16
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ces features suppl\u00e9mentaires sont clairement utiles. Voyons maintenant l'effet de l'ajout de la pr\u00e9diction de SA en tant que feature additionnelle :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"X, y, rules = extract_features_from_mc_logs(mc_log_fns,\n",
" include_underscore_rules=True, \n",
" include_sa_preds=True)\n",
"average(cross_val_score(LinearSVC(), X, y, cv=5, scoring='f1'))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 17,
"text": [
"0.98624133373300804"
]
}
],
"prompt_number": 17
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Il semble que cette feature n'am\u00e9liore pas vraiment le mod\u00e8le."
]
},
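{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on, a small aside: since the model derives the relative importance of the rules by itself (as discussed earlier), we can peek at the rules with the largest learned weights. This is only a sketch, using the `mc_log_fns` currently set to the osbf-test logs; it re-vectorizes them directly because `extract_features_from_mc_logs` returns the vocabulary keys in arbitrary order, not in column order."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"\n",
"# Sketch: fit a LinearSVC on the osbf-test logs and list the rules with the\n",
"# most negative (ham-like) and most positive (spam-like) coefficients.\n",
"mcl_iter = MassCheckLogIterator(mc_log_fns)\n",
"vec = CountVectorizer(token_pattern=r\"(?u)\\b\\w\\w+\\b\")\n",
"Xv = vec.fit_transform(mcl_iter)\n",
"clf = LinearSVC().fit(Xv, mcl_iter.golds)\n",
"names = np.array(vec.get_feature_names())\n",
"order = np.argsort(clf.coef_[0])\n",
"print('Most ham-like rules: ', list(names[order[:5]]))\n",
"print('Most spam-like rules:', list(names[order[-5:]]))"
],
"language": "python",
"metadata": {},
"outputs": []
},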
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"2.2\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Corpus gold_std\n",
"\n",
"Examinons maintenant le corpus gold_std :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mc_log_fns = ('mc_logs/gold_std_ham.log',\n",
" 'mc_logs/gold_std_spam.log')\n",
"X, y, gs_rules = extract_features_from_mc_logs(mc_log_fns, \n",
" include_underscore_rules=True, \n",
" include_sa_preds=False)\n",
"\n",
"print('X:', repr(X))\n",
"print('Ham/spam distribution:', bincount(y))\n",
"rules[:10]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"X: <3189x848 sparse matrix of type '<type 'numpy.int64'>'\n",
"\twith 200191 stored elements in Compressed Sparse Row format>\n",
"Ham/spam distribution: [1217 1972]\n"
]
},
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 18,
"text": [
"[u'__oldpeg',\n",
" u'tvd_rcvd_ip4',\n",
" u'__lucky_winner',\n",
" u'fsl_interia_abuse',\n",
" u'__fraud_vqe',\n",
" u'na_dollars',\n",
" u'to_malformed',\n",
" u'__money_fraud_3',\n",
" u'__from_ebay_loose',\n",
" u'__is_exch']"
]
}
],
"prompt_number": 18
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Examinons tout d'abord la performance d'un mod\u00e8le trivial :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"dc = DummyClassifier(strategy='most_frequent').fit(X, y)\n",
"f1_score(y, dc.predict(X))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 19,
"text": [
"0.76419298585545437"
]
}
],
"prompt_number": 19
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"On voit donc ici que la performance obtenue par MC (F=0.858) est plus acceptable, relativement \u00e0 ce corpus particulier."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Avec un SVM :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"average(cross_val_score(LinearSVC(), X, y, cv=5, scoring='f1'))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 20,
"text": [
"0.96668677183028695"
]
}
],
"prompt_number": 20
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"2.3\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Corpus errata-oscar\n",
"\n",
"M\u00eame s'il faut se m\u00e9fier du corpus errata-oscar \u00e9tant donn\u00e9 sa composition (comme on l'a vu plus haut), on peut tout de m\u00eame examiner ce qui se passerait si on lui appliquait un mod\u00e8le pr\u00e9dictif."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mc_log_fns = ('mc_logs/errata-oscar_ham.log',\n",
" 'mc_logs/errata-oscar_spam.log')\n",
"X, y, eo_rules = extract_features_from_mc_logs(mc_log_fns, \n",
" include_underscore_rules=True, \n",
" include_sa_preds=False)\n",
"\n",
"print('X:', repr(X))\n",
"print('Ham/spam distribution:', bincount(y))\n",
"rules[:10]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"X: <16319x1193 sparse matrix of type '<type 'numpy.int64'>'\n",
"\twith 943272 stored elements in Compressed Sparse Row format>\n",
"Ham/spam distribution: [7744 8575]\n"
]
},
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 21,
"text": [
"[u'__oldpeg',\n",
" u'tvd_rcvd_ip4',\n",
" u'__lucky_winner',\n",
" u'fsl_interia_abuse',\n",
" u'__fraud_vqe',\n",
" u'na_dollars',\n",
" u'to_malformed',\n",
" u'__money_fraud_3',\n",
" u'__from_ebay_loose',\n",
" u'__is_exch']"
]
}
],
"prompt_number": 21
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"average(cross_val_score(LinearSVC(), X, y, cv=5, scoring='f1'))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 22,
"text": [
"0.92730399897980909"
]
}
],
"prompt_number": 22
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ce r\u00e9sultat est \"inqui\u00e9tant\" dans la mesure o\u00f9 il montre bien comment un mod\u00e8le statistique peut \"apprendre\" n'importe quel signal (qu'il repr\u00e9sente quelque chose de r\u00e9el ou non), en autant qu'il ne soit pas compl\u00e8tement al\u00e9atoire (i.e. qu'il ait une certaine structure, donc qu'il v\u00e9hicule de l'information)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"2.4\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### osbf-test + gold_std\n",
"\n",
"Voyons voir ce qui se passe si on consid\u00e8re ces deux corpus \u00e0 la fois :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mc_log_fns = ('mc_logs/osbf-test-corpus_ham.log',\n",
" 'mc_logs/osbf-test-corpus_spam.log',\n",
" 'mc_logs/gold_std_ham.log',\n",
" 'mc_logs/gold_std_spam.log')\n",
"X, y, rules = extract_features_from_mc_logs(mc_log_fns, \n",
" include_underscore_rules=True, \n",
" include_sa_preds=False)\n",
"\n",
"print('X:', repr(X))\n",
"print('Ham/spam distribution:', bincount(y))\n",
"rules[:10]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"X: <7286x1086 sparse matrix of type '<type 'numpy.int64'>'\n",
"\twith 460215 stored elements in Compressed Sparse Row format>\n",
"Ham/spam distribution: [2221 5065]\n"
]
},
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 23,
"text": [
"[u'tvd_rcvd_ip4',\n",
" u'__is_exch',\n",
" u'__trusted_check',\n",
" u'body_empty',\n",
" u'__rdns_dynamic_ipaddr',\n",
" u'missing_headers',\n",
" u'__tvd_subj_num_obfu',\n",
" u'__for_sale_prc_10k',\n",
" u'all_trusted',\n",
" u'__xprio']"
]
}
],
"prompt_number": 23
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"average(cross_val_score(LinearSVC(), X, y, cv=5, scoring='f1'))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 24,
"text": [
"0.97557828494388588"
]
}
],
"prompt_number": 24
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"2.5\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### osbf-test vs. gold_std\n",
"\n",
"Voyons voir maintenant ce qui se passe si on entraine un mod\u00e8le sur osbf-test, mais qu'on le teste sur gold_str, et vice versa. Ceci n\u00e9cessite tout d'abord une nouvelle fonction d'extraction, qui encode les features de test \u00e0 partir de celles trouv\u00e9es pour le train :"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def extract_train_test_features_from_mc_logs(train_mc_log_fns, test_mc_logs_fn,\n",
" include_underscore_rules=True,\n",
" include_mc_preds=False):\n",
" train_mcl_iter = MassCheckLogIterator(train_mc_log_fns)\n",
" token_pattern = r\"(?u)\\b\\w\\w+\\b\" if include_underscore_rules else r\"(?u)\\b[^_,]\\w+\\b\"\n",
" vec = CountVectorizer(token_pattern=token_pattern, binary=True)\n",
" X_train = vec.fit_transform(train_mcl_iter)\n",
" if include_mc_preds:\n",
" X_train = ss.hstack((X_train, np.array(train_mcl_iter.preds).reshape(X_train.shape[0], 1)))\n",
" y_train = train_mcl_iter.golds\n",
" test_mcl_iter = MassCheckLogIterator(test_mc_log_fns)\n",
" X_test = vec.transform(test_mcl_iter) # the vectorizer is not fitted here, that's important!\n",
" if include_mc_preds:\n",
" X_test= ss.hstack((X_test, np.array(test_mcl_iter.preds).reshape(X_test.shape[0], 1))) \n",
" y_test = test_mcl_iter.golds\n",
" X_train, y_train = shuffle(X_train, y_train)\n",
" X_test, y_test = shuffle(X_test, y_test) \n",
" return X_train, y_train, X_test, y_test, vec.vocabulary_.keys()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 25
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"train_mc_log_fns = ('mc_logs/osbf-test-corpus_ham.log',\n",
" 'mc_logs/osbf-test-corpus_spam.log')\n",
"test_mc_log_fns = ('mc_logs/gold_std_ham.log',\n",
" 'mc_logs/gold_std_spam.log')\n",
"X_train, y_train, X_test, y_test, rules = extract_train_test_features_from_mc_logs(train_mc_log_fns, test_mc_log_fns)\n",
"\n",
"clf = LinearSVC()\n",
"\n",
"clf.fit(X_train, y_train)\n",
"print('Trained on OT, tested on GS:', f1_score(y_test, clf.predict(X_test)))\n",
"\n",
"clf.fit(X_test, y_test)\n",
"print('Trained on GS, tested on OT:', f1_score(y_train, clf.predict(X_train)))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"Trained on OT, tested on GS: 0.849945828819\n",
"Trained on GS, tested on OT:"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
" 0.869074889868\n"
]
}
],
"prompt_number": 26
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ces r\u00e9sultats (\u00e0 peu pr\u00e8s \u00e9quivalents aux baselines) laissent croire que les deux corpus sont trop diff\u00e9rents pour que l'un puisse extrapoler \u00e0 partir de l'autre, mais il me semblerait bon de les examiner plus en d\u00e9tail par contre.. \u00e0 suivre."
]
}
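,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One possible first step for that closer look (a sketch only, assuming the `ot_rules` and `gs_rules` vocabularies computed in the cells above are still in scope): measure how much the sets of SA rules firing on the two corpora actually overlap."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Hypothetical first step for the closer look suggested above: how much do the\n",
"# SA rule vocabularies of the two corpora overlap?\n",
"# (assumes ot_rules and gs_rules, computed in earlier cells, are still in scope)\n",
"ot, gs = set(ot_rules), set(gs_rules)\n",
"print('Rules firing in both corpora:', len(ot & gs))\n",
"print('Only in osbf-test:           ', len(ot - gs))\n",
"print('Only in gold_std:            ', len(gs - ot))"
],
"language": "python",
"metadata": {},
"outputs": []
}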
],
"metadata": {}
}
]
}