Victor Dibia (victordibia)

@victordibia
victordibia / explainBertQA.py
Last active June 22, 2020 03:00
How to Explain HuggingFace BERT for Question Answering NLP Models with TF 2.0 GradientTape
def get_gradient(question, context, model, tokenizer):
    """Return gradient of input (question) w.r.t. model output span prediction.

    Args:
        question (str): text of input question
        context (str): text of question context/passage
        model (QA model): Hugging Face BERT model for QA, e.g.
            transformers.modeling_tf_distilbert.TFDistilBertForQuestionAnswering or
            transformers.modeling_tf_bert.TFBertForQuestionAnswering
        tokenizer (tokenizer): transformers.tokenization_bert.BertTokenizerFast
    Returns:
@victordibia
victordibia / handtrack.js
Created September 28, 2019 21:20
Handtrack.js (via NPM)
// npm install --save handtrackjs
import * as handTrack from 'handtrackjs';

// Grab the target image element.
const img = document.getElementById('img');

// Load the model, then run detection on the image.
handTrack.load().then(model => {
  model.detect(img).then(predictions => {
    console.log('Predictions: ', predictions); // bbox predictions
  });
});
@victordibia
victordibia / handtrackjs.html
Last active September 28, 2019 21:20
Load Handtrack.js (via JSdelivr)
<script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"> </script>
const img = document.getElementById('img');

// Load the model, then run detection on the image.
handTrack.load().then(model => {
  model.detect(img).then(predictions => {
    console.log('Predictions: ', predictions); // bbox predictions
  });
});