@gllamat
Created January 29, 2021 13:22
SentimentAnalysisFinance_RedditExample.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "SentimentAnalysisFinance_RedditExample.ipynb",
"provenance": [],
"collapsed_sections": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/gllamat/ae758bc22f2cb5de621d893ba7b6769e/sentimentanalysisfinance.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QroVmDntMELx"
},
"source": [
"# Sentiment Analysis in Finance\n",
"\n",
"A very hyped use of AI in Finance comes from the application of Sentiment Analysis as a factor to predict the future price of a security.\n",
"\n",
"Below I will explain some of the basics and caveats of using it in a financial context.\n",
"\n",
"Sentiment analysis can be defined as:\n",
"\n",
"\n",
"> \"*Sentiment analysis, or opinion mining, is an active area of\n",
"study in the field of natural language processing that analyzes\n",
"people's opinions, sentiments, evaluations, attitudes,\n",
"and emotions via the **computational treatment** of subjectivity\n",
"in text.*\"\n",
"\n",
"A simple introduction to it can be found in [\"How Quant Traders Use Sentiment To Get An Edge On The Market\"](https://www.forbes.com/sites/kumesharoomoogan/2015/08/06/how-quant-traders-use-sentiment-to-get-an-edge-on-the-market/#5a38018d4b5d), and typical academic papers explaining in full detail the process are [\"Twitter mood predicts the stock market.\"](https://arxiv.org/pdf/1010.3003.pdf) and [\"Stock Prediction Using Twitter Sentiment Analysis\"](http://cs229.stanford.edu/proj2011/GoelMittal-StockMarketPredictionUsingTwitterSentimentAnalysis.pdf).\n",
"\n",
"The basic idea is the following:\n",
"* convert a pipeline of text sources (news, tweets, posts) into one or many **quantitative** (numerical) values,\n",
"* feed the above values into a complex model (classical econometrics or newly developed neural networks) as input to predict the price of a security.\n",
"\n",
"If you opened one of the academic papers linked above, you were probably drowned in jargon, but by now\n",
"established companies like Bloomberg and newer ones like RavenPack have jumped on the bandwagon and provide Sentiment Analysis indices as utilities: [\"How you can get an edge by trading on news sentiment data\"](https://www.bloomberg.com/professional/blog/can-get-edge-trading-news-sentiment-data/) and [\"Abnormal Media Attention Impacts Stock Returns\"](https://www.ravenpack.com/research/abnormal-media-attention-impacts-stock-returns/).\n",
"\n",
"As in the previous posts, I prefer to produce [reproducible research](https://en.wikipedia.org/wiki/Reproducibility#Reproducible_research) in the form of Jupyter notebooks that actually run (instead of PDF papers like the links above), but in this case the complexity is such that I will only illustrate the basic concepts.\n",
"\n",
"Let's start by downloading an off-the-shelf Python sentiment analyzer ([\"VADER: A Parsimonious Rule-based Model for\n",
"Sentiment Analysis of Social Media Text\"](https://pdfs.semanticscholar.org/a6e4/a2532510369b8f55c68f049ff11a892fefeb.pdf)).\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "l8s1wFcMEv27",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "321af970-8a33-4a80-b6a5-d57dc1bd2fdc"
},
"source": [
"# Download the VADER lexicon and import the sentiment analyzer\n",
"import nltk\n",
"nltk.download('vader_lexicon')\n",
"from nltk.sentiment.vader import SentimentIntensityAnalyzer"
],
"execution_count": 1,
"outputs": [
{
"output_type": "stream",
"text": [
"[nltk_data] Downloading package vader_lexicon to /root/nltk_data...\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"/usr/local/lib/python3.6/dist-packages/nltk/twitter/__init__.py:20: UserWarning: The twython library has not been installed. Some functionality from the twitter package will not be available.\n",
" warnings.warn(\"The twython library has not been installed. \"\n"
],
"name": "stderr"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "LQIxsbZrFB18"
},
"source": [
"sid = SentimentIntensityAnalyzer()"
],
"execution_count": 2,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "Lb_lLZtJV96E"
},
"source": [
"That's it. The magic of open source collaboration lets you get the underlying tools free of charge, although if you are partial to corporate solutions (and like Jeopardy) you can use IBM's [Watson Natural Understanding kit](https://www.ibm.com/watson/services/natural-language-understanding/).\n",
"\n",
"If you have this notebook open in Google Colaboratory you can change the sentences below."
]
},
{
"cell_type": "code",
"metadata": {
"id": "LUOI27_nWz8J",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "41e24d8f-aa6a-45a2-f23b-f65e976162f9"
},
"source": [
"#@title Example of a negative sentence:\n",
"sentence = \"North Korea threatens to cancel Trump summit\" #@param {type:\"string\"}\n",
"ss = sid.polarity_scores(sentence)\n",
"print(ss)"
],
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"text": [
"{'neg': 0.479, 'neu': 0.521, 'pos': 0.0, 'compound': -0.5574}\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "HZ7fZdDdqdqQ",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "6f7f9e0a-fb24-4c58-9d10-c154a28f2bb7"
},
"source": [
"#@title Example of a positive sentence:\n",
"sentence = \"Colombia's ex-fighters taught skills for peace\" #@param {type:\"string\"}\n",
"ss = sid.polarity_scores(sentence)\n",
"print(ss)"
],
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"text": [
"{'neg': 0.0, 'neu': 0.588, 'pos': 0.412, 'compound': 0.5423}\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vVv6Ucb8Xj7I"
},
"source": [
"You can see how the tool converted the text into various numerical values.\n",
"\n",
"The last value ('compound') can be used as a measure (from -1 to 1) of how 'positive' or 'negative' the sentence is, which in turn can now be manipulated numerically:\n",
"\n",
"\n",
"* aggregate scores to create an index of total positive and negative sentiment,\n",
"* use metadata to identify geographic clusters of activity,\n",
"* separate text by likely subject (company) and track sentiment by company.\n",
"\n",
"However, how does this off-the-shelf tool work in a financial context? After all, it was developed as \"*a simple rule-based model for **general** sentiment analysis*\".\n",
"\n",
"Let's try it in another two examples:\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "hZ1Vz0s4qGo9",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "f41ef94e-db2c-4aa9-aac7-6d0b88fedcca"
},
"source": [
"#@title Example of a positive business sentence:\n",
"sentence = \"Paddy Power Betfair confirms it is in talks to buy FanDuel\" #@param {type:\"string\"}\n",
"ss = sid.polarity_scores(sentence)\n",
"print(ss)"
],
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"text": [
"{'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "WLkHvmZLGYQk",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "e00d7269-ca89-4726-be1b-b7244a1a3417"
},
"source": [
"#@title Example of a negative business sentence:\n",
"sentence = \"Court order threatens deal with Jio and sends shares down by a fifth\" #@param {type:\"string\"}\n",
"ss = sid.polarity_scores(sentence)\n",
"print(ss)"
],
"execution_count": 6,
"outputs": [
{
"output_type": "stream",
"text": [
"{'neg': 0.176, 'neu': 0.676, 'pos': 0.149, 'compound': -0.1027}\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "V_9CzYEgRoLp"
},
"source": [
"It does not do well in this very specific domain; a business analyst would have expected FanDuel shares to shoot up, while Jio's were already down 20% (a drop that does not correlate well with a compound score of only -0.1027 instead of one close to -1).\n",
"\n",
"(Update January 29 2021: See https://www.independent.co.uk/news/business/gamestop-share-price-reddit-hedge-fund-melvin-capital-b1793543.html)\n",
"\n",
"I will not spend time explaining the Reddit versus hedge fund saga above; read the story at the link. \n",
"\n",
"However, a sample conversation shows the pitfalls of using a non-domain sentiment analyser. If we run the following _stock positive_ message through the general-purpose analyser, it returns a _negative_ compound score. Be careful. \n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ug8um0M-Rq-A",
"outputId": "7c826930-5ae5-429d-d682-3803fd967dd4"
},
"source": [
"#@title Example of a Reddit Positive business sentence:\n",
"sentence = \"HA 10k, those are rookie numbers. This shit is going to $69,420 and BEYOND\" #@param {type:\"string\"}\n",
"ss = sid.polarity_scores(sentence)\n",
"print(ss)"
],
"execution_count": 11,
"outputs": [
{
"output_type": "stream",
"text": [
"{'neg': 0.192, 'neu': 0.641, 'pos': 0.167, 'compound': -0.1197}\n"
],
"name": "stdout"
}
]
},
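{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the 'aggregate them to create an index' idea above (reusing the example headlines from this notebook, not a real news feed), we can average the compound scores of a batch of headlines into a single sentiment number:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Sketch: average the compound scores of a batch of headlines into a\n",
"# single sentiment index (example headlines, not a live news feed).\n",
"headlines = [\n",
"    \"North Korea threatens to cancel Trump summit\",\n",
"    \"Colombia's ex-fighters taught skills for peace\",\n",
"    \"Paddy Power Betfair confirms it is in talks to buy FanDuel\"\n",
"]\n",
"scores = [sid.polarity_scores(h)['compound'] for h in headlines]\n",
"sentiment_index = sum(scores) / len(scores)\n",
"print(scores, sentiment_index)"
],
"execution_count": null,
"outputs": []
},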
{
"cell_type": "markdown",
"metadata": {
"id": "vodIyE9TsqJj"
},
"source": [
"\n",
"\n",
"## Some issues with Sentiment analysis in Finance\n",
"\n",
"It turns out that this off-the-shelf tool is not great for financial text (and that is after checking only two simple cases). Some additional problems:\n",
"\n",
"\n",
"* Sentiment analysis is very 'domain specific' (academic talk to say that a set of tools only works in a pre-defined context - so you need to find the correct one for your application).\n",
"* Regime changes can happen - if you keep using 'old' models on current data you will miss opportunities: think what happens if you use pre-2013 models (before [HODL](https://litecoinalliance.org/hodl-on-for-dear-life-the-history-and-meaning-of-hodl/) for cryptocurrencies entered the cybersphere) - in fact, if you look at VADER's [lexicon data](https://www.kaggle.com/nltkdata/vader-lexicon/data) you will not find 'hodl'.\n",
"* As the 'Jio' example shows, a piece of news can be very negative *but* the price impact can be muted (that particular piece of news is very negative, but it is also a lagging indicator, as the price had already tanked).\n",
"\n",
"Instead, we can:\n",
"\n",
"### Create our own sentiment tool\n",
"\n",
"The VADER authors:\n",
"> \"*collected\n",
"intensity ratings on each of our candidate lexical\n",
"features from ten independent human raters (for a total of\n",
"90,000+ ratings).*\"\n",
"\n",
"Instead of using their \"lexical features\", we would have to design and implement a system to collect something close to 90k ratings (the more the merrier) in a business context, which means building a user interface. I could not find a publicly available corpus of annotated financial news (financial news with a score).\n",
"\n",
"Also, if we are training our own tool we could change the numerical measure to reflect directly the impact on the stock.\n",
"\n",
"**Connecting to a relevant (and timely) news pipeline:** assuming the sentiment tool is OK, we would need to connect it to a set of reliable news and relevant comments (no fake news, or add a fake-news analyser).\n",
"\n",
"### Use Professional sentiment indicators \n",
"I mentioned some sentiment analysis providers above. The good thing is that these platforms handle the whole data connection. Unfortunately, their systems are proprietary (the blackest black box of all):\n",
"* we cannot test whether their sentiment tool is finely tuned to our specific financial topics (or to generic business topics),\n",
"* we cannot control how often they update their models,\n",
"* they are available to competitors, so any alpha they provide will diminish over time.\n",
"\n",
"\n"
]
},
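{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on, a cheap middle ground is worth a sketch: VADER exposes its lexicon as a plain Python dictionary, so we can patch domain slang into it ourselves. The valence below for 'hodl' is a made-up illustration, not a properly collected rating (VADER valences run roughly from -4 to +4)."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Sketch: patch domain slang into VADER's lexicon.\n",
"# The valence 2.5 for 'hodl' is a made-up illustration, not a rated value.\n",
"print(sid.polarity_scores(\"HODL the line\"))  # 'hodl' unknown: neutral\n",
"sid.lexicon.update({\"hodl\": 2.5})\n",
"print(sid.polarity_scores(\"HODL the line\"))  # now scored positive"
],
"execution_count": null,
"outputs": []
},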
{
"cell_type": "markdown",
"metadata": {
"id": "bMaodn4csn0K"
},
"source": [
"# Entity Recognition\n",
"\n",
"In financial news we would also like to separate text documents that correspond to different companies. We can do so by using additional modules that perform [Entity Recognition](https://en.wikipedia.org/wiki/Named-entity_recognition): \"*a subtask of information extraction that seeks to locate and classify named entities in text into pre-defined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc.*\"\n",
"\n",
"Applying the general-purpose ER module to:\n",
"\n",
"\n",
"> \"Court order threatens deal with Jio and sends shares down by a fifth\"\n",
"\n",
"We get two entities, \"Court\" and \"Jio\": if we were monitoring \"Jio\" we could now separate this message into the 'bad' news bucket and see how its sentiment would affect the price.\n",
"\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "6iR1EL06nVRH",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "111086b3-88c0-45b6-bf68-51a0ed9b8c1d"
},
"source": [
"# Download the models needed for POS tagging and named-entity chunking\n",
"nltk.download('maxent_ne_chunker')\n",
"nltk.download('words')\n",
"nltk.download('averaged_perceptron_tagger')\n",
"nltk.download('punkt')\n",
"\n",
"from nltk import ne_chunk, pos_tag, word_tokenize\n",
"from nltk.tree import Tree\n",
"\n",
"def get_continuous_chunks(text):\n",
"    \"\"\"Return the named entities found in `text` as a list of strings.\"\"\"\n",
"    chunked = ne_chunk(pos_tag(word_tokenize(text)))\n",
"    continuous_chunk = []\n",
"    current_chunk = []\n",
"    for i in chunked:\n",
"        if isinstance(i, Tree):\n",
"            # A Tree node is a named entity: print it and collect its tokens\n",
"            print(i)\n",
"            current_chunk.append(\" \".join([token for token, pos in i.leaves()]))\n",
"        elif current_chunk:\n",
"            # A non-entity token ends the current chunk\n",
"            named_entity = \" \".join(current_chunk)\n",
"            if named_entity not in continuous_chunk:\n",
"                continuous_chunk.append(named_entity)\n",
"            current_chunk = []\n",
"    # Flush a chunk left over when the sentence ends with an entity\n",
"    if current_chunk:\n",
"        named_entity = \" \".join(current_chunk)\n",
"        if named_entity not in continuous_chunk:\n",
"            continuous_chunk.append(named_entity)\n",
"    return continuous_chunk\n",
"\n",
"sentence = \"Court order threatens deal with Jio and sends shares down by a fifth\"\n",
"get_continuous_chunks(sentence)"
],
"execution_count": 8,
"outputs": [
{
"output_type": "stream",
"text": [
"[nltk_data] Downloading package maxent_ne_chunker to\n",
"[nltk_data] /root/nltk_data...\n",
"[nltk_data] Package maxent_ne_chunker is already up-to-date!\n",
"[nltk_data] Downloading package words to /root/nltk_data...\n",
"[nltk_data] Package words is already up-to-date!\n",
"[nltk_data] Downloading package averaged_perceptron_tagger to\n",
"[nltk_data] /root/nltk_data...\n",
"[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.\n",
"[nltk_data] Downloading package punkt to /root/nltk_data...\n",
"[nltk_data] Unzipping tokenizers/punkt.zip.\n",
"(GPE Court/NNP)\n",
"(PERSON Jio/NNP)\n"
],
"name": "stdout"
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"['Court', 'Jio']"
]
},
"metadata": {
"tags": []
},
"execution_count": 8
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nIikAOf5GpeE"
},
"source": [
"As with sentiment analysis, entity recognition can be fine-tuned to handle a specific financial domain (companies, equities, bonds, sovereigns, etc.), although that needs lots of data and a training system. It can also be used independently:\n",
"* as a measure of media impact (aggregation of news mentioning a company, regardless of whether the news is good or bad),\n",
"* to extract information and act on it -- for example in the news \"Paddy Power Betfair confirms it is in talks to buy FanDuel\" ER can be used to identify which company (Paddy Power Betfair) buys another (FanDuel) and act on it (e.g. [Risk arbitrage](https://en.wikipedia.org/wiki/Risk_arbitrage) - \"*a hedge fund investment strategy that speculates on the successful completion of mergers and acquisitions.*\")\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "KG1qVEL2GcMW",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "52e58288-8ecb-40f9-c55c-5c6614a43f62"
},
"source": [
"sentence = \"Paddy Power Betfair confirms it is in talks to buy FanDuel\" \n",
"get_continuous_chunks(sentence)"
],
"execution_count": 9,
"outputs": [
{
"output_type": "stream",
"text": [
"(PERSON Paddy/NNP)\n",
"(PERSON Power/NNP Betfair/NNP)\n",
"(ORGANIZATION FanDuel/NNP)\n"
],
"name": "stdout"
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"['Paddy Power Betfair']"
]
},
"metadata": {
"tags": []
},
"execution_count": 9
}
]
}
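,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, a minimal sketch of the 'sentiment by company' idea from the start of the post: combine the entity tagger with the sentiment analyser to bucket compound scores per detected entity (example headlines again; both tools are general-purpose, so take the numbers with a pinch of salt)."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Sketch: bucket compound sentiment scores by detected entity,\n",
"# reusing get_continuous_chunks and sid from the cells above.\n",
"headlines = [\n",
"    \"Court order threatens deal with Jio and sends shares down by a fifth\",\n",
"    \"Paddy Power Betfair confirms it is in talks to buy FanDuel\"\n",
"]\n",
"by_entity = {}\n",
"for h in headlines:\n",
"    score = sid.polarity_scores(h)['compound']\n",
"    for entity in get_continuous_chunks(h):\n",
"        by_entity.setdefault(entity, []).append(score)\n",
"print(by_entity)"
],
"execution_count": null,
"outputs": []
}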
]
}