startVideo()

video.addEventListener('play', () => {
  // Create a canvas from the video element and overlay it in the DOM
  // (append it to the body or to whichever container you prefer).
  const canvas = faceapi.createCanvasFromMedia(video)
  document.body.append(canvas)

  // displaySize matches the canvas dimensions to the video element, so the
  // detections are drawn at the right scale over the streaming video.
  const displaySize = { width: video.width, height: video.height }
  faceapi.matchDimensions(canvas, displaySize)

  // Every 100 ms: detect all faces with the tiny face detector, along with
  // landmarks and expressions, then clear the canvas and draw the resized results.
  setInterval(async () => {
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()
    const resizedDetections = faceapi.resizeResults(detections, displaySize)
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
    faceapi.draw.drawDetections(canvas, resizedDetections)
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)
    faceapi.draw.drawFaceExpressions(canvas, resizedDetections)
  }, 100)
})
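For context, the snippet above assumes a video element with an explicit width and height, the face-api.js models already loaded, and a startVideo() helper that streams the webcam into that element. The sketch below shows one way to wire that up; the element id, the /models path, and the exact model set are assumptions for illustration, not part of the gist.

// Assumed markup: <video id="video" width="720" height="560" autoplay muted></video>
const video = document.getElementById('video')

// Load the networks the detection loop above relies on: the tiny face
// detector, the 68-point landmark model, and the expression classifier.
// '/models' is an assumed local folder holding the face-api.js weight files;
// loading must finish before detectAllFaces() runs, so startVideo() is
// chained after Promise.all here.
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
  faceapi.nets.faceExpressionNet.loadFromUri('/models')
]).then(startVideo)

// Stream the webcam into the video element; the 'play' listener above then
// kicks off the detection loop.
function startVideo() {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(stream => { video.srcObject = stream })
    .catch(err => console.error(err))
}

The 100 ms interval trades accuracy for responsiveness: TinyFaceDetectorOptions selects the lighter, faster detector, which is why the loop can run this frequently in the browser.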