Philipp Schmid (philschmid)

@philschmid
philschmid / job.json
Last active September 6, 2019 05:49
Example job JSON for Talos
[{
  "userCompanyId": "c26ce5b0-9e3b-4146-9637-3047d5b92c56",
  "jobId": "General Electric-Purchase-2019-09-06T04:51:11.720Z-Siemens",
  "userId": "da957012-7329-4135-b278-e250810f5484",
  "userCompany": "General Electric",
  "employee": "Stefan",
  "department": "Purchase",
  "startDate": "2019-09-06T04:51:11.720Z",
  "endDate": "2020-09-06T04:51:11.720Z",
  "searchCompany": "Siemens",
@philschmid
philschmid / schema.gql
Last active September 8, 2019 14:39
job-amplify-schema
# @format
# cognito user needs
# - cognito:groups
# - cognito:name
# - cognito:userName || cognito:custom:id (uuid)
# - cognito:custom:userCompany
# - cognito:custom:department
# cognito groups (with userCompanyId) should be filled after sign-up, either with the existing companyId as group via a license key
# or via a newly generated license key for a newly created company
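The group assignment described in the comments above could be wired up with a Cognito post-confirmation Lambda trigger. A minimal sketch, assuming the license key arrives as a custom sign-up attribute and that resolve_company_id is a hypothetical lookup (e.g. against DynamoDB); none of these names come from the gist itself:

import boto3

cognito = boto3.client('cognito-idp')

def post_confirmation(event, context):
    # assumption: the license key was collected as a custom attribute at sign-up
    license_key = event['request']['userAttributes'].get('custom:licenseKey')
    user_company_id = resolve_company_id(license_key)  # hypothetical helper
    # add the user to a group named after the company id so it shows up in cognito:groups
    cognito.admin_add_user_to_group(
        UserPoolId=event['userPoolId'],
        Username=event['userName'],
        GroupName=user_company_id,
    )
    return event
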
import boto3

def get_messages_from_queue(queue_url):
    """Generates messages from an SQS queue.

    Note: this continues to generate messages until the queue is empty.
    Every message on the queue will be deleted.

    :param queue_url: URL of the SQS queue to drain.
    """
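    # NOTE (sketch): the gist preview is truncated above; the lines below are an
    # assumed implementation of the drain loop, not the original gist body.
    sqs = boto3.client('sqs')
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,  # assumption: fetch in batches of 10
        )
        messages = resp.get('Messages', [])
        if not messages:
            return  # queue is empty, stop generating
        for message in messages:
            yield message
            # every yielded message is deleted, as the docstring states
            sqs.delete_message(
                QueueUrl=queue_url,
                ReceiptHandle=message['ReceiptHandle'],
            )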
@philschmid
philschmid / scan.py
Created October 4, 2019 11:04
document_scanner_python
import numpy as np
import cv2

def scan(input_image="test.jpg"):
    # read image
    img = cv2.imread(input_image, 1)
    # resize image
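    # NOTE (sketch): the gist preview ends here; a common way such a document
    # scanner continues. The resize ratio and edge-detection thresholds below
    # are assumptions, not values from the gist.
    ratio = img.shape[0] / 500.0
    img = cv2.resize(img, (int(img.shape[1] / ratio), 500))
    # convert to grayscale, blur and detect edges
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(gray, 75, 200)
    # a full scanner would next find the largest 4-point contour and apply a
    # perspective transform; omitted here as it is not visible in the preview
    return edges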
@philschmid
philschmid / handler.py
Last active October 11, 2019 12:37
Scale your AWS Fargate tasks from 0 to 100 and back from 100 to 0
import boto3
import os

client = boto3.client('ecs', region_name='eu-central-1')

def autoscaler(event, context):
    message_number = count_sqs(queue_url)
    desired_count = evaluate_scale(message_number)
    response = client.update_service(
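        # NOTE (sketch): the preview is truncated here; update_service presumably
        # sets the desired task count. The cluster/service names are assumptions.
        cluster=os.environ.get('CLUSTER_NAME', 'default'),
        service=os.environ.get('SERVICE_NAME', 'worker'),
        desiredCount=desired_count,
    )
    return response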
@philschmid
philschmid / handler.py
Last active October 21, 2019 13:48
boto3: start EC2 instances
import boto3

client = boto3.client('ec2', region_name='us-west-2')

user_data = '''#!/bin/bash
echo 'test' > /tmp/hello'''

response = client.run_instances(
    BlockDeviceMappings=[
        {
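            # NOTE (sketch): the preview is truncated here; typical remaining
            # run_instances parameters. AMI id, instance type, volume size and
            # counts below are placeholder assumptions, not values from the gist.
            'DeviceName': '/dev/xvda',
            'Ebs': {'VolumeSize': 8, 'DeleteOnTermination': True},
        },
    ],
    ImageId='ami-xxxxxxxx',   # assumption: placeholder AMI id
    InstanceType='t2.micro',  # assumption
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)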
from PIL import Image
import os
import argparse

def rescale_images(directory, size):
    for img in os.listdir(directory):
        im = Image.open(directory + img)
        im_resized = im.resize(size, Image.ANTIALIAS)
        im_resized.save(directory + img)

if __name__ == '__main__':
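    # NOTE (sketch): the preview ends at the entry point; a plausible CLI built
    # on the argparse import above. The argument names are assumptions.
    parser = argparse.ArgumentParser(description='Rescale all images in a directory')
    parser.add_argument('-d', '--directory', type=str, required=True,
                        help='directory containing the images')
    parser.add_argument('-s', '--size', type=int, nargs=2, required=True,
                        metavar=('WIDTH', 'HEIGHT'), help='target size')
    args = parser.parse_args()
    rescale_images(args.directory, tuple(args.size))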
@philschmid
philschmid / _Lambda_ec2_cluster.md
Last active October 28, 2019 14:20
Concept for creating your own on-demand cluster with Lambda and EC2

Idea

  1. set vars for functionality -> get CF output stack (AMI-ID, SQS_QUEUE, S3_bucket)
  2. get number of messages in the queue
  3. evaluate required EC2 instances (at the moment message_count / 5)
  4. check how many instances are running, based on filter (module)
  5. calculate required instances (+2 means -> has to start 2 more, -2 means stop 2)
  6. if fewer instances are running, start instances
    6.1 start instance
    a. set min & max count to the difference between actually running instances and the evaluated number
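
A minimal sketch of how steps 2-6 above could look in a Lambda handler, assuming boto3; the queue URL and AMI id parameters, the instance type and the divisor of 5 are assumptions carried over from the list, not code from the gist:

import math
import boto3

sqs = boto3.client('sqs')
ec2 = boto3.client('ec2')

def evaluate_cluster(queue_url, ami_id):
    # 2. get number of messages in the queue
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=['ApproximateNumberOfMessages'])
    message_count = int(attrs['Attributes']['ApproximateNumberOfMessages'])
    # 3. evaluate required instances (message_count / 5)
    required = math.ceil(message_count / 5)
    # 4. check how many instances are running, filtered by the cluster AMI
    running = ec2.describe_instances(Filters=[
        {'Name': 'image-id', 'Values': [ami_id]},
        {'Name': 'instance-state-name', 'Values': ['running']},
    ])
    running_count = sum(len(r['Instances']) for r in running['Reservations'])
    # 5. positive diff -> start instances, negative diff -> stop instances
    diff = required - running_count
    if diff > 0:
        # 6. start the missing instances, min & max count set to the difference
        ec2.run_instances(ImageId=ami_id, InstanceType='t2.micro',  # assumption
                          MinCount=diff, MaxCount=diff)
    return diff
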
@philschmid
philschmid / _talog.py
Last active October 29, 2019 10:58
Talog CloudWatch Alarms
#!/usr/local/bin/python3
import boto3
import time

cw_log = boto3.client('logs')

LOG_GROUP = 'MODULE-NAME'
LOG_STREAM = 'LAMBDA/DOCKER-ID'
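
# NOTE (sketch): the preview ends here; writing to the stream typically looks
# like this. Creating the stream and the message payload are assumptions.
cw_log.create_log_stream(logGroupName=LOG_GROUP, logStreamName=LOG_STREAM)
response = cw_log.put_log_events(
    logGroupName=LOG_GROUP,
    logStreamName=LOG_STREAM,
    logEvents=[{
        'timestamp': int(round(time.time() * 1000)),  # milliseconds since epoch
        'message': 'example log line',                # assumption: payload
    }],
)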