@cyk
Last active July 28, 2020 17:35
Face Sentiments with Google Vision API via AWS API Gateway and Lambda


My notes (and rudimentary guide) from a research spike that delved into the Google Vision API, AWS API Gateway and Lambda, prototyping a "serverless" API endpoint that returns sentiments expressed by faces in an image.

Introduction

Google's been rockin' their cloud offerings hard lately. Among their latest releases is the Cloud Vision API (in beta), a service that analyzes the content of an image; detecting things like words, phrases, objects, faces and their emotions. Let's prototype a "serverless" face sentiments endpoint using only the Vision API, AWS API Gateway and a Lambda function.

A quick note on Cloud Vision API pricing. As of this writing, the free tier for face detection covers fewer than 1,000 operations per month, and this app can easily exceed that limit. To give you an idea: so far I've used 1,103 face detection operations, costing me $0.26. The good news is that Google is currently offering a free 60-day trial (with $300 in credit) for their Cloud Platform; I encourage you to sign up for it before proceeding.

A work-in-progress client app displaying sentiments in real-time from webcam:

This is what "JOY" looks like

When we're finished, we'll be able to send a payload of a base64 encoded image to a face-sentiments endpoint which will return the likely sentiment for the face found in it.

// Request
POST /api/face-sentiments
{ "image": "R0lGODlh9AH..." }

// Response
{ "likely_sentiment": "JOY" } // one of JOY, SORROW, ANGER, SURPRISE

Face Sentiments Detection (Google Vision API)

Google's Face Detection Tutorial gets us most of the way to face sentiment processing. It's short. I suggest you work your way thru the Python guide and then come back here for some quick adaptations to…

  1. Narrow the focus to retrieving sentiments for an encoded image containing a face
  2. Add a lambda handler that AWS Lambda will hook into

Detecting faces like a champ, now? Let's adapt the tutorial's final code.

See get-face-sentiments.py included at the bottom of this gist.

In short, the changes are:

  • Rename the main() function to lambda_handler(), and adjust its signature to expect the event and context arguments (more on this later). Don't forget to update references. For testing purposes, adjust the CLI arguments to take just a single file containing a base64 encoded image, then read its contents into the image value on the event object passed to lambda_handler().
  • Adjust detect_face() signature to receive image_content (instead of face_file), remove file reading and encoding, and return the faceAnnotations of the first face.
  • Rename the highlight_faces() function to likely_sentiment() and adjust its signature to expect one argument: face. Return the first sentiment ranked LIKELY or VERY_LIKELY.

You can try out sentiment detection by encoding a photo with a face, dumping it to a file, and then passing the file path to the command:

> python get-face-sentiments.py face.jpg.base64
{ "likely_sentiment": "JOY" }
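If you're wondering how to produce that encoded file, something like this works on most Unix-like systems (face.jpg is a placeholder filename):

```shell
# Base64-encode the photo; strip newlines so the payload is a single string
# (GNU base64 wraps output at 76 chars by default, macOS's does not)
base64 face.jpg | tr -d '\n' > face.jpg.base64
```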

Now we have a function to peer inside our deepest, darkest emotions. It still needs a happy home, and a public endpoint to boot.

Face Sentiments Endpoint (AWS API Gateway / Lambda function)

Okay, I'm not gonna lie: this part is a bit more involved than the last. Like before, we're going to build from an existing walkthru to warp us to the boss level. Before you start on the walkthru, you'll need an AWS account, an IAM user (with credentials downloaded), and that user granted access to API Gateway. Now you are ready to start creating an API Gateway for Lambda Functions.

Fast forward and by the end of that walkthru, you should have two methods for an endpoint driven by two Lambda functions: GetHelloWorld and GetHelloWithName, deployed to staging and gleefully tested.

Shall we kick this API into top gear with a GetFaceSentiments Lambda? Yes, we shall. This is a matter of bundling the sentiment detection Python module we created earlier, telling AWS Lambda to use it, and then creating a new API Gateway resource and method and wiring it to the function.

Do this:

  1. Save your Google app credentials in the same directory as the face detection code and update ENV var accordingly: os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = './PinHead-a5ac66c38f96.json'
  2. Bundle up your face detection code, along with app credentials, by creating a deployment package.
  3. Go to AWS Lambda service, and click Create a Lambda function
  4. Choose Skip on blueprint selection
  5. Name your function GetFaceSentiments, provide a description, choose Python for the runtime
  6. Our code requires custom libraries, so we'll choose Upload a ZIP file for code entry type. Upload the ZIP.
  7. Leave Handler as is; it corresponds to the function we renamed to lambda_handler earlier. Set Role to the same role you used during the walkthru. Click Next.

Finally, re-deploy your API, base64 encode an image with a face and initiate a POST request with the payload.
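For example, with curl (the endpoint host, stage, and resource path here are placeholders; substitute the invoke URL from your own API Gateway deployment):

```shell
# Build a JSON payload with the base64-encoded image and POST it
IMAGE=$(base64 face.jpg | tr -d '\n')
curl -X POST \
  -H 'Content-Type: application/json' \
  -d "{\"image\": \"$IMAGE\"}" \
  https://abc123.execute-api.us-east-1.amazonaws.com/stage/api/face-sentiments
```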

Note: From my own tests (and face), the most accurately detected emotion tends to be "joy".

# Returns likely sentiment for an image containing a face
# A crude adaptation of https://cloud.google.com/vision/docs/face-tutorial
# intended to be bundled for an AWS Lambda function
import argparse
import os

import httplib2
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = './PinHead-a5ac66c38f96.json'

# [START get_vision_service]
DISCOVERY_URL = 'https://{api}.googleapis.com/$discovery/rest?version={apiVersion}'


def get_vision_service():
    credentials = GoogleCredentials.get_application_default()
    return discovery.build('vision', 'v1', credentials=credentials,
                           discoveryServiceUrl=DISCOVERY_URL)
# [END get_vision_service]


# [START detect_face]
def detect_face(image_content, max_results=1):
    batch_request = [{
        'image': {
            'content': image_content
        },
        'features': [{
            'type': 'FACE_DETECTION',
            'maxResults': max_results,
        }]
    }]

    service = get_vision_service()
    request = service.images().annotate(body={
        'requests': batch_request,
    })
    response = request.execute()

    # Return the annotations for the first face found
    faceAnnotations = response['responses'][0]['faceAnnotations']
    return faceAnnotations[0]
# [END detect_face]


# [START likely_sentiment]
RATINGS = ['LIKELY', 'VERY_LIKELY']


def likely_sentiment(face):
    print(face)
    if face['joyLikelihood'] in RATINGS:
        return 'JOY'
    if face['sorrowLikelihood'] in RATINGS:
        return 'SORROW'
    if face['angerLikelihood'] in RATINGS:
        return 'ANGER'
    if face['surpriseLikelihood'] in RATINGS:
        return 'SURPRISE'
# [END likely_sentiment]


# [START lambda_handler]
def lambda_handler(event, context):
    face = detect_face(event['image'])
    return {
        'likely_sentiment': likely_sentiment(face)
    }
# [END lambda_handler]


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Detects faces in the given image.')
    parser.add_argument(
        'input_image',
        help='the base64 encoded image you\'d like to detect faces in.')
    args = parser.parse_args()

    with open(args.input_image, 'rb') as image:
        result = lambda_handler({'image': image.read()}, None)
        print(result)
@agentleo

Hi!
Thanks for this example and the explanations!
Could you kindly upload a ready-made ZIP file with the Google Vision modules for a blank AWS Lambda project?
(for Python 3.7)

Thanks!


ivalles commented Jan 8, 2020

@agentleo I would recommend following the steps in this article:
https://serverless.com/blog/serverless-python-packaging/

I've been using Serverless Framework for several projects, and I love it.
