Mapping an AWS ML real-time Predict entrypoint with API Gateway

Generate your ML Predict entrypoint

You should have an Amazon ML real-time Predict endpoint that can be called via POST with a payload like the following:

{
    "MLModelId": "model-id",
    "Record":{
        "key1": "value1",
        "key2": "value2"
    },
    "PredictEndpoint": "https://endpointUrl"
}
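
Before wiring up API Gateway, you can verify the endpoint by calling the Predict API directly. The sketch below uses boto3; the region, model ID, record keys, and endpoint URL are placeholders taken from the example above, so substitute your own values.

import boto3

# Hypothetical region; use the region where the real-time endpoint was created
client = boto3.client("machinelearning", region_name="us-east-1")

response = client.predict(
    MLModelId="model-id",
    Record={"key1": "value1", "key2": "value2"},
    PredictEndpoint="https://endpointUrl",
)
print(response["Prediction"])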

Prepare your API Gateway endpoint

Create a POST integration request with an AWS Service as the backend. Choose the region where you deployed your ML model, choose Machine Learning as the target service, set realtime as the subdomain, and Predict as the Action. Create a mapping template for the content type application/json (it's not mandatory...but it makes calling the API easier) as follows:

## Copy every key/value pair of the incoming JSON object into the Record map
#set($payload = $input.path('$'))
{
    "MLModelId": "<your-model-ID>",
    "Record":{
#foreach($mapEntry in $payload.entrySet())"$mapEntry.key": "$mapEntry.value"#if($foreach.hasNext),#end
#end
    },
    "PredictEndpoint": "<your predict endpoint>"
}
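
If you are not used to VTL, the template can be read as: take the flat JSON object sent by the client and wrap it into the Predict request body. A rough Python equivalent (the placeholders mirror the template above) is:

import json

def map_request(client_body: dict) -> str:
    # Wrap the client's flat JSON map into the Predict request body,
    # rendering keys and values as strings just as the template does
    return json.dumps({
        "MLModelId": "<your-model-ID>",
        "Record": {str(k): str(v) for k, v in client_body.items()},
        "PredictEndpoint": "<your predict endpoint>",
    })

print(map_request({"key1": "value1", "key2": "value2"}))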

Give the API Gateway integration an appropriate execution role (you can just use the AWS managed policy AmazonMachineLearningRealTimePredictionOnlyAccess).
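
If you prefer to script the role creation, a minimal boto3 sketch is below. The role name is hypothetical, and the managed policy ARN is assumed to follow the standard arn:aws:iam::aws:policy/ naming, so double-check it in your account.

import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing API Gateway to assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "apigateway.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="apigw-ml-predict",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="apigw-ml-predict",
    PolicyArn="arn:aws:iam::aws:policy/AmazonMachineLearningRealTimePredictionOnlyAccess",
)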

Profit!

You can now POST a flat JSON map of your record's features to the API Gateway endpoint and get back the prediction from your ML model in the response.
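
For example, once the API is deployed to a stage, a call could look like the sketch below. The invoke URL, stage, and resource path are placeholders from a hypothetical deployment.

import requests

resp = requests.post(
    "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/predict",
    json={"key1": "value1", "key2": "value2"},
)
print(resp.json())  # the body contains the Prediction returned by the ML model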
