
@cwgem
Created July 20, 2017 00:22
Having Lambda pass off a CodePipeline task to an EC2 instance
import boto3
import os


def lambda_handler(event, context):
    # CodePipeline invokes the Lambda with the job id and the S3 location
    # of the input artifact
    job_id = event['CodePipeline.job']['id']
    s3_info = (event['CodePipeline.job']['data']
               ['inputArtifacts'][0]['location']['s3Location'])

    # Stash the job details in SSM Parameter Store so the EC2 instance
    # can pick them up once it boots
    client = boto3.client('ssm')
    client.put_parameter(
        Name='/codepipeline/jobid',
        Value=job_id,
        Type='String',
        Overwrite=True
    )
    client.put_parameter(
        Name='/codepipeline/artifact/zip',
        Value=s3_info['objectKey'],
        Type='String',
        Overwrite=True
    )
    client.put_parameter(
        Name='/codepipeline/artifact/bucket',
        Value=s3_info['bucketName'],
        Type='String',
        Overwrite=True
    )

    # Scale the Auto Scaling group up to a single instance to do the work
    client = boto3.client('autoscaling')
    client.update_auto_scaling_group(
        AutoScalingGroupName=os.getenv('ASG_NAME'),
        MinSize=1,
        MaxSize=1
    )
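Note that for this to run, the Lambda's execution role needs ssm:PutParameter and autoscaling:UpdateAutoScalingGroup permissions, and the target group's name is expected in the ASG_NAME environment variable.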

cwgem commented Jul 20, 2017

Some experimentation with the idea of an EC2 handoff from Lambda. Essentially, a Lambda invoked by CodePipeline passes off the job id, artifact zip key, and artifact bucket name to SSM Parameter Store, then sets an Auto Scaling group's min and max size to 1 so a single instance is launched.
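For illustration, a minimal sketch of what the instance-side pickup might look like. The parameter names match the Lambda above, but the download path and the structure of main() are assumptions about how the instance would finish the job; the key point is that it reports back to CodePipeline with put_job_success_result using the stored job id.

import boto3

ssm = boto3.client('ssm')
codepipeline = boto3.client('codepipeline')
s3 = boto3.client('s3')


def fetch_handoff():
    # Read the values the Lambda stored in Parameter Store
    job_id = ssm.get_parameter(Name='/codepipeline/jobid')['Parameter']['Value']
    key = ssm.get_parameter(Name='/codepipeline/artifact/zip')['Parameter']['Value']
    bucket = ssm.get_parameter(Name='/codepipeline/artifact/bucket')['Parameter']['Value']
    return job_id, bucket, key


def main():
    job_id, bucket, key = fetch_handoff()
    # Download the pipeline artifact for processing (path is illustrative)
    s3.download_file(bucket, key, '/tmp/artifact.zip')
    # ... do the actual work here ...
    # Tell CodePipeline the job finished so the stage can proceed
    codepipeline.put_job_success_result(jobId=job_id)


if __name__ == '__main__':
    main()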

The reason I use an ASG here is that it automatically handles which AZ to place the instance in, and can also spin up a replacement instance should an AZ failure occur. Since the basic data is in Parameter Store, the new, healthy instance can use it to pick up where the old one left off.
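One loose end not covered above is returning the group to zero once the work is done. A minimal sketch, assuming the instance (or a follow-up step) knows the ASG name:

import boto3


def scale_down(asg_name):
    # Return the Auto Scaling group to zero instances after the job completes
    client = boto3.client('autoscaling')
    client.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MinSize=0,
        MaxSize=0,
        DesiredCapacity=0
    )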
