Goals: add links to reasonable, well-written explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod are eagerly sought.

aws cli
# Talk Python To Me
import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Location of the transactions file in S3
    bucket = 'aws-simplified-transactions'
    key = 'transactions.json'
#!/bin/bash
#
# This startup script replaces the normal neo4j startup process for cloud environments.
# The purpose of the script is to gather the machine IP and other settings, such as key/value
# pairs from the instance tags, and use them to configure neo4j.conf.
#
# In this way, neo4j does not need to know ahead of time what its IP will be, and
# can be controlled by tags put on the instance.
######################################################################################
echo "pre-neo4j.sh: Fetching AWS instance metadata"
AWS API Gateway can authenticate requests before they reach your endpoint by passing the authorizationToken
to a Lambda authorizer function. This has clear benefits: it simplifies endpoint security and reduces duplicated code. However, I found the AWS examples excessively complicated for what should be a very simple task.
So here's my example.
The main concern is that the Lambda authorizer must return a very specific response: if the response object is not exactly as expected,
API Gateway throws a 500 error with x-amzn-ErrorType: AuthorizerConfigurationException in the response headers.
I personally use to handle the publishing part of my Lambdas, but I'll include an image of the API Gateway config.
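Here is a minimal sketch of the response shape the authorizer has to return; the token check and principal ID are illustrative placeholders, not real validation logic.

# Minimal Lambda TOKEN authorizer sketch (token check and principalId are placeholders).
def lambda_handler(event, context):
    token = event.get('authorizationToken')

    # Replace this with your real token validation.
    effect = 'Allow' if token == 'expected-token' else 'Deny'

    # API Gateway requires exactly this shape; anything else triggers
    # a 500 with x-amzn-ErrorType: AuthorizerConfigurationException.
    return {
        'principalId': 'example-user',
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [
                {
                    'Action': 'execute-api:Invoke',
                    'Effect': effect,
                    'Resource': event['methodArn'],
                }
            ],
        },
    }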
This is how I quickly got an Apache Zeppelin notebook running against an AWS Glue dev endpoint. None of the guides out there seemed concise, and the custom Docker containers I found reimplement what you can do easily yourself. This approach gives you full control: it just sets up port forwarding and runs the official Docker image.

# Generate a key pair; the public key gets registered with the Glue dev endpoint
ssh-keygen

# Forward local port 9007 (the Zeppelin remote interpreter port) to the dev endpoint
ssh -i ~/.ssh/glue-dev -vnNT -L :9007:127.0.0.1:9007 glue@<ec2-endpoint>.<region>.compute.amazonaws.com
Feel free to contact me at robert.balicki@gmail.com or tweet at me @statisticsftw
This is a rough outline of how we utilize Next.js and S3/CloudFront. Hope it helps!
It assumes some knowledge of AWS.
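As a rough sketch of the deploy step this setup implies — pushing the exported static files to S3 and invalidating the CloudFront distribution with boto3 — where the bucket name, distribution ID, and output directory are assumptions, not our actual values:

import mimetypes
import time
from pathlib import Path

import boto3

# Illustrative values; substitute your own bucket, distribution, and export dir.
BUCKET = 'my-nextjs-site'           # assumption
DISTRIBUTION_ID = 'E1234567890ABC'  # assumption
EXPORT_DIR = Path('out')            # default `next export` output directory

s3 = boto3.client('s3')
cloudfront = boto3.client('cloudfront')

# Upload every exported file, preserving relative paths as S3 keys.
for path in EXPORT_DIR.rglob('*'):
    if path.is_file():
        key = str(path.relative_to(EXPORT_DIR))
        content_type = mimetypes.guess_type(str(path))[0] or 'binary/octet-stream'
        s3.upload_file(str(path), BUCKET, key, ExtraArgs={'ContentType': content_type})

# Invalidate everything so CloudFront serves the new build.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        'Paths': {'Quantity': 1, 'Items': ['/*']},
        'CallerReference': str(time.time()),
    },
)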
a4b.amazonaws.com
access-analyzer.amazonaws.com
account.amazonaws.com
acm-pca.amazonaws.com
acm.amazonaws.com
airflow-env.amazonaws.com
airflow.amazonaws.com
alexa-appkit.amazon.com
alexa-connectedhome.amazon.com
amazonmq.amazonaws.com
--
-- This will register the "planet" table within your AWS account
--
CREATE EXTERNAL TABLE planet (
  id BIGINT,
  type STRING,
  tags MAP<STRING,STRING>,
  lat DECIMAL(9,7),
  lon DECIMAL(10,7),
  nds ARRAY<STRUCT<ref: BIGINT>>,
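Once the table is registered, queries can be submitted from Python through boto3's Athena client. A minimal sketch, assuming an illustrative database name and results bucket:

import boto3

athena = boto3.client('athena')

# Database name and output bucket are assumptions; adjust to your setup.
response = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM planet WHERE type = 'node'",
    QueryExecutionContext={'Database': 'default'},
    ResultConfiguration={'OutputLocation': 's3://my-athena-results/'},
)
print(response['QueryExecutionId'])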
""" | |
Upsert gist | |
Requires at least postgres 9.5 and sqlalchemy 1.1 | |
Initial state: | |
[] | |
Initial upsert: |
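A minimal sketch of the upsert technique the gist refers to, using the PostgreSQL dialect's on_conflict_do_update from SQLAlchemy 1.1; the table and columns are illustrative, not the gist's own schema:

import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import insert

# Illustrative table; the real gist's schema is not shown in this excerpt.
metadata = sa.MetaData()
items = sa.Table(
    'items', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('value', sa.String),
)

def upsert_item(conn, id_, value):
    """Insert a row, or update `value` if a row with this id already exists."""
    stmt = insert(items).values(id=id_, value=value)
    stmt = stmt.on_conflict_do_update(
        index_elements=[items.c.id],
        set_={'value': stmt.excluded.value},
    )
    conn.execute(stmt)

# Usage (requires Postgres >= 9.5):
# engine = sa.create_engine('postgresql://user:pass@localhost/db')
# with engine.begin() as conn:
#     upsert_item(conn, 1, 'first')   # initial upsert inserts the row
#     upsert_item(conn, 1, 'second')  # second upsert updates it in place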