@omarsar
Last active September 17, 2020 15:55
Title

Applied Deep Learning for NLP Applications

Abstract

Natural language processing (NLP) has become an important field, with interest from many sectors that leverage modern deep learning methods to approach NLP problems and tasks such as text summarization, question answering, and sentiment classification, to name a few. In this tutorial, we will introduce several fundamental NLP techniques along with more modern approaches (BERT, GPT-2, etc.) and show how they can be applied via transfer learning to many real-world NLP problems. We will focus on how to build an NLP pipeline using open-source tools such as Transformers, Tokenizers, spaCy, TensorFlow, and PyTorch, among others. Then we will learn how to use an NLP model to search over documents based on semantic relationships, building a proof of concept with open-source technologies such as BERT and Elasticsearch. In essence, the learner will take away the theoretical pieces needed to build practical NLP pipelines that address a wide variety of real-world problems.

Audience

This tutorial is ideally suited for participants with some exposure to the Python programming language who have used natural language processing tools such as spaCy or NLTK. Knowledge of machine learning tools and concepts is also beneficial, as we will use them throughout the tutorial. Beginners are welcome, but they should have at least some theoretical understanding of NLP and machine learning concepts.

Tentative Outline (for 3.5 hours)

Below we outline the tentative schedule and topics for this tutorial. The tutorial consists of three modules, each ending with an exercise. The purpose of the exercises is to give participants as much hands-on experience as possible with the methods and techniques taught in the modules. Each module and exercise will come in the form of an online Jupyter notebook, ensuring that participants make the most of the exercise time and focus on solving the assigned problem rather than troubleshooting setup issues. This will also leave time for short discussions and any questions the attendees may have.

Module 1 - Introduction to NLP Fundamentals (30 minutes)

In this module we will briefly introduce the fundamental NLP and deep learning concepts necessary to begin building more sophisticated modern NLP pipelines, which will eventually lead to models that can be applied to several natural language problems. For instance, we will cover how to properly tokenize text, create vocabularies, and train language models.
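As a rough illustration of the preprocessing steps covered here, the sketch below builds a toy whitespace tokenizer and vocabulary in plain Python. In the tutorial itself, libraries such as Tokenizers or spaCy handle these steps with subword methods; the `build_vocab` and `encode` helpers and the `<unk>` convention below are illustrative assumptions, not part of any specific library.

```python
from collections import Counter

def tokenize(text):
    # Naive lowercase whitespace tokenizer; real pipelines typically use
    # subword tokenizers (e.g. WordPiece or BPE) from the Tokenizers library.
    return text.lower().split()

def build_vocab(corpus, min_freq=1):
    # Map each token to an integer id; id 0 is reserved for unknown tokens.
    counts = Counter(tok for text in corpus for tok in tokenize(text))
    vocab = {"<unk>": 0}
    for tok, freq in counts.most_common():
        if freq >= min_freq:
            vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab):
    # Convert text into the integer ids a model consumes.
    return [vocab.get(tok, vocab["<unk>"]) for tok in tokenize(text)]

corpus = ["the cat sat", "the dog sat"]
vocab = build_vocab(corpus)
print(encode("the bird sat", vocab))  # "bird" maps to the <unk> id
```

The out-of-vocabulary token "bird" falls back to the `<unk>` id, which is the same behavior real tokenizers provide (or avoid entirely, in the case of subword tokenization).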

Exercise (30 minutes): Participants will be asked to perform exercises that reinforce some of the concepts discussed in this section, such as tokenization.

Coffee Break (15 mins)

Module 2 - Training and Fine-tuning an NLP model (30 minutes)

In this module we will build upon the previous one and begin to apply pretrained language models to NLP tasks such as text classification, text generation, and text summarization. Participants will learn how to apply the latest NLP transfer learning techniques to properly define, train, test, and evaluate NLP models using PyTorch or TensorFlow.
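The define/train/evaluate structure this module teaches can be sketched without any deep learning framework. The toy example below trains a perceptron-style bag-of-words sentiment classifier in plain Python so it stays dependency-free; in the tutorial itself the same loop structure is built with PyTorch or TensorFlow on top of pretrained models, and all names and the tiny dataset here are illustrative.

```python
# Toy labeled data: 1 = positive, 0 = negative sentiment.
train_data = [("good great fun", 1), ("bad awful boring", 0),
              ("great fun", 1), ("awful bad", 0)]
test_data = [("good fun", 1), ("boring awful", 0)]

def predict(weights, text):
    # Score is the sum of per-token weights; threshold at zero.
    score = sum(weights.get(tok, 0.0) for tok in text.split())
    return 1 if score > 0 else 0

def train(data, epochs=10, lr=1.0):
    # Perceptron updates: on a mistake, nudge token weights toward the label.
    weights = {}
    for _ in range(epochs):
        for text, label in data:
            if predict(weights, text) != label:
                delta = lr if label == 1 else -lr
                for tok in text.split():
                    weights[tok] = weights.get(tok, 0.0) + delta
    return weights

def evaluate(weights, data):
    # Fraction of held-out examples classified correctly.
    correct = sum(predict(weights, t) == y for t, y in data)
    return correct / len(data)

weights = train(train_data)
print("test accuracy:", evaluate(weights, test_data))
```

With transfer learning, `train` would instead fine-tune the weights of a pretrained model such as BERT rather than learn from scratch, but the surrounding train/test/evaluate scaffolding looks the same.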

Exercise (30 minutes): Participants will be asked to build their own NLP models leveraging transfer learning techniques.

Coffee Break (15 mins)

Module 3 - Towards Building Real-World NLP-powered Applications (30 minutes)

In this module we will put the learning acquired in the previous modules into practice and build and test a proof-of-concept NLP-powered application using purely open-source technologies such as BERT and Elasticsearch. The application will be centered around a semantic search capability, which can be used to further build powerful tools such as recommender systems and improved document exploration.
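The core idea of semantic search (embed documents and a query as vectors, then rank by similarity) can be sketched in plain Python. Note the assumptions: the `embed` function below is a deterministic hashed bag-of-words stand-in for a real BERT sentence embedding, and the in-memory `search` ranking stands in for an Elasticsearch vector index; both are illustrative only.

```python
import math
from collections import Counter

def embed(text, dim=16):
    # Stand-in for a BERT sentence embedding: a hashed bag-of-words vector
    # (toy deterministic hash via character-code sums). In the real
    # application, vectors come from a model such as BERT and are stored
    # in an Elasticsearch index instead of a Python list.
    vec = [0.0] * dim
    for tok, freq in Counter(text.lower().split()).items():
        vec[sum(map(ord, tok)) % dim] += freq
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query, docs):
    # Rank documents by similarity between query and document vectors.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = ["deep learning for NLP", "cooking pasta at home",
        "transfer learning with BERT"]
print(search("NLP deep learning", docs)[0])  # → "deep learning for NLP"
```

The Elasticsearch part of the proof of concept replaces the brute-force `sorted` call with an index that can score similarity over millions of documents.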

IMPORTANT: To complete the exercise for this segment, participants are required to have the latest versions of Python and Jupyter installed on their computers. Participants will be asked to install other libraries and applications during the session.

Exercise (30 minutes): Participants will be guided on how to leverage the techniques and tools from the previous modules to brainstorm ideas and quickly create prototypes for real-world projects.
