
@giordanocardillo
Last active January 12, 2022 19:36
Codepipeline - Delete EKS pod

Lambda helper function to remove a running pod

Lambda function that removes a running pod from an EKS cluster so the matching deployment recreates it with the updated image.

This allows a microservice to be updated after a CodePipeline run.

It uses awscli because the get-token command is not available in the boto3 library.

Requirements

  • CodePipeline user parameters: you must put the deployment name in the action's user parameters field for this to work
  • Lambda layer
  • ENV variables
  • Permissions: the function must have the AWSLambdaBasicExecutionRole and AWSCodePipelineCustomActionAccess policies, and it must be able to reach the EKS cluster to read cluster info. Be careful to use the right VPC/subnet/security group for the Lambda, or you won't be able to access the cluster
  • Lambda configuration: at least 256 MB of memory and a 5-minute timeout
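
The deployment name set in the action's user parameters arrives inside the CodePipeline invocation event. A minimal sketch of the relevant slice of that event and how the handler reads it — the id and names below are made-up placeholders, not values from the gist:

```python
# Hypothetical CodePipeline Lambda-invoke event (only the fields the
# handler actually reads); the id and FunctionName values are made up.
event = {
    'CodePipeline.job': {
        'id': '11111111-2222-3333-4444-555555555555',  # hypothetical job id
        'data': {
            'actionConfiguration': {
                'configuration': {
                    'FunctionName': 'delete-eks-pod',   # hypothetical
                    'UserParameters': 'my-deployment',  # deployment name
                }
            }
        },
    }
}

# The handler pulls the deployment name out of UserParameters and uses it
# to build the label selector that matches the deployment's pods.
deployment = event['CodePipeline.job']['data']['actionConfiguration']['configuration']['UserParameters']
label_selector = f"app={deployment}"
```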

Lambda layer

This script requires a Lambda layer containing two Python packages. To build the layer you can do the following:

pip3 install awscli kubernetes -t ./python --no-cache
zip -r lambda-layer.zip python/

Then upload the zip file to AWS as a Lambda layer. It was tested on the Python 3.9 runtime on the x86_64 architecture.

ENV Variables

Required variables are:

  • CLUSTER_NAME: the cluster name on EKS
  • NAMESPACE: the namespace where the deployment is located
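
Since the function reads both variables straight from the environment, a missing one only surfaces as a KeyError at runtime. A small fail-fast check one might add before using them — the demo values below are hypothetical:

```python
import os

def read_required_env():
    """Fail fast with a clear message if a required variable is unset."""
    missing = [name for name in ('CLUSTER_NAME', 'NAMESPACE')
               if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
    return os.environ['CLUSTER_NAME'], os.environ['NAMESPACE']

# Demo values for local testing only (hypothetical names)
os.environ['CLUSTER_NAME'] = 'demo-cluster'
os.environ['NAMESPACE'] = 'default'
cluster, namespace = read_required_env()
```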
Lambda function

import base64
import os
import tempfile

import boto3
from botocore import session
from kubernetes import client
from awscli.customizations.eks.get_token import STSClientFactory, TokenGenerator

codePipeline = boto3.client('codepipeline')
eks = boto3.client('eks')

def get_eks_token(cluster_name):
    # Build a bearer token for the cluster, mirroring `aws eks get-token`,
    # which boto3 does not expose directly
    work_session = session.get_session()
    client_factory = STSClientFactory(work_session)
    sts_client = client_factory.get_sts_client(role_arn=None)
    token = TokenGenerator(sts_client).get_token(cluster_name)
    return token

def get_k8s_v1_api(cluster, token):
    # Write the cluster CA to a temp file so the kubernetes client can verify TLS
    ca_file = tempfile.NamedTemporaryFile(delete=False)
    ca_file.write(base64.b64decode(cluster['certificateAuthority']['data']))
    ca_file.flush()
    configuration = client.Configuration()
    configuration.host = cluster['endpoint']
    configuration.api_key_prefix['authorization'] = 'Bearer'
    configuration.api_key['authorization'] = token
    configuration.ssl_ca_cert = ca_file.name
    api_client = client.ApiClient(configuration=configuration)
    return client.CoreV1Api(api_client=api_client)

def lambda_handler(event, context):
    try:
        job_data = event['CodePipeline.job']['data']
        # UserParameters holds the deployment name (see Requirements)
        user_parameters = job_data['actionConfiguration']['configuration']['UserParameters']
        cluster = eks.describe_cluster(name=os.environ['CLUSTER_NAME'])['cluster']
        token = get_eks_token(os.environ['CLUSTER_NAME'])
        v1 = get_k8s_v1_api(cluster, token)
        # Delete every pod labelled app=<deployment>; the deployment's
        # ReplicaSet then recreates them with the updated image
        v1.delete_collection_namespaced_pod(
            namespace=os.environ['NAMESPACE'],
            label_selector=f"app={user_parameters}",
        )
        codePipeline.put_job_success_result(
            jobId=event["CodePipeline.job"]["id"]
        )
    except Exception as e:
        codePipeline.put_job_failure_result(
            jobId=event["CodePipeline.job"]["id"],
            failureDetails={
                'type': 'JobFailed',
                'message': str(e)
            }
        )
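
Every invocation must end with exactly one put_job_success_result or put_job_failure_result call, or the pipeline stage hangs until it times out. A minimal local sketch of that reporting pattern with stub clients — the stub class and helper function are hypothetical stand-ins, not part of the gist:

```python
# Hypothetical stub standing in for boto3.client('codepipeline'); it just
# records which result call was made for each job.
class StubCodePipeline:
    def __init__(self):
        self.calls = []

    def put_job_success_result(self, jobId):
        self.calls.append(('success', jobId))

    def put_job_failure_result(self, jobId, failureDetails):
        self.calls.append(('failure', jobId, failureDetails['message']))


def report_outcome(code_pipeline, event, delete_pods):
    """Mirror of the handler's try/except: whatever happens, report
    exactly one success or failure result back to CodePipeline."""
    job_id = event['CodePipeline.job']['id']
    try:
        delete_pods()
        code_pipeline.put_job_success_result(jobId=job_id)
    except Exception as e:
        code_pipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(e)},
        )


event = {'CodePipeline.job': {'id': 'job-123'}}  # hypothetical job id
cp = StubCodePipeline()
report_outcome(cp, event, delete_pods=lambda: None)   # happy path
report_outcome(cp, event, delete_pods=lambda: 1 / 0)  # simulated failure
```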