The goal is an ML Predict entrypoint that can be called via POST with a body like:
{
"MLModelId": "model-id",
"Record":{
"key1": "value1",
"key2": "value2"
},
"PredictEndpoint": "https://endpointUrl"
}
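This is the same shape the Amazon ML real-time Predict API expects. As a quick sanity check, the body above can be assembled in plain Python (the model ID and endpoint below are placeholders, not real values):

```python
import json

# Placeholder values -- substitute your own model ID and realtime endpoint.
MODEL_ID = "model-id"
PREDICT_ENDPOINT = "https://endpointUrl"

def build_predict_request(model_id, record, endpoint):
    """Assemble the Predict request body in the shape shown above."""
    return {
        "MLModelId": model_id,
        "Record": record,
        "PredictEndpoint": endpoint,
    }

if __name__ == "__main__":
    body = build_predict_request(
        MODEL_ID, {"key1": "value1", "key2": "value2"}, PREDICT_ENDPOINT
    )
    print(json.dumps(body, indent=2))
    # With boto3 installed, the same dict maps directly onto the SDK call:
    #   import boto3
    #   boto3.client("machinelearning").predict(**body)
```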
Create a POST integration request with AWS Service as the backend. Choose the region where you deployed your ML model, choose Machine Learning as the target service, set realtime as the subdomain, and Predict as the Action. Then create a mapping template for the content type application/json (it's not mandatory, but it makes calling the API easier) as such:
#set($payload = $input.path('$'))
{
"MLModelId": "<your-model-ID>",
"Record":{
#foreach($mapEntry in $payload.entrySet())"$mapEntry.key": "$mapEntry.value"#if($foreach.hasNext),#end
#end
},
"PredictEndpoint": "<your predict endpoint>"
}
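The template iterates over the entries of the incoming JSON map and quotes each value, wrapping the flat record into the full Predict request. A rough Python equivalent of that transformation (the model ID and endpoint arguments stand in for the values hard-coded in the template) is:

```python
def map_record(payload, model_id, endpoint):
    """Python equivalent of the VTL mapping template above: wrap the
    flat input map into the Predict request shape, coercing every
    value to a string as the template's quoting does."""
    return {
        "MLModelId": model_id,
        "Record": {str(k): str(v) for k, v in payload.items()},
        "PredictEndpoint": endpoint,
    }
```

So a caller POSTs only `{"key1": "value1", "key2": "value2"}` and API Gateway forwards the fully wrapped Predict request to Amazon ML.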
Give the API GW entrypoint an appropriate role (you can simply attach the AWS managed policy AmazonMachineLearningRealTimePredictionOnlyAccess to it).
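For API Gateway to assume that role, its trust policy must name the API Gateway service principal. A standard trust policy for this (a sketch, assuming the default service principal) looks like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "apigateway.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```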
You can now call the API GW with a map of your record and get back the response containing the prediction from your ML model.
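A minimal client sketch, using only the standard library (the invoke URL below is a made-up example; substitute your stage's URL):

```python
import json
import urllib.request

# Hypothetical invoke URL of the deployed API Gateway stage.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/predict"

def build_request(url, record):
    """Build the POST request carrying the flat record map as JSON;
    the mapping template wraps it server-side into the Predict call."""
    body = json.dumps(record).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request(API_URL, {"key1": "value1", "key2": "value2"})
    # urllib.request.urlopen(req) sends the request; the response body
    # contains the Prediction object returned by Amazon ML.
```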