Extracts permissions from the cloud-custodian repo, sanitizes the extracted data, and transforms the result into Terraform.

The dependencies are ripgrep and git, which can be installed with Homebrew:

brew install rg git

The code will extract the permission strings, sanitize them, and write the result out as Terraform.
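A minimal sketch of that pipeline, assuming the permissions appear as quoted "service:Action" strings somewhere in a local cloud-custodian checkout; the regex, paths, and policy name below are all assumptions, not the tool's actual implementation:

import json
import subprocess

# Hypothetical extraction pass: ripgrep for IAM-style "service:Action"
# strings in the checkout, then dedupe and sort them.
rg = subprocess.run(
    ["rg", "--only-matching", "--no-filename",
     r'"[a-z0-9-]+:[A-Za-z0-9*]+"', "cloud-custodian/"],
    capture_output=True, text=True)
perms = sorted({line.strip('"') for line in rg.stdout.splitlines()})

# Emit a Terraform aws_iam_policy resource wrapping the extracted actions.
policy = {"Version": "2012-10-17",
          "Statement": [{"Effect": "Allow", "Action": perms, "Resource": "*"}]}
with open("custodian-perms.tf", "w") as f:
    f.write('resource "aws_iam_policy" "custodian" {\n')
    f.write('  name   = "custodian-extracted"\n')
    f.write('  policy = <<EOF\n%s\nEOF\n}\n' % json.dumps(policy, indent=2))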
#!/bin/sh
# ---- website-checker.sh ----
# Pings a list of websites using cURL to see if they are up and
# there are no errors. If there are problems, we send an email using mailx
# to let ourselves know about the problem.
################################################################################
# This is a path to a plain-text list of URLs to check, one per line.
# Make sure the file uses proper Unix newline characters or you will get
# "400 Bad Request" errors when you curl the URLs.
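# The snippet ends with the header above; a plausible body is sketched
# below. The default list path and the alert address are placeholders.
URL_LIST="${1:-urls.txt}"
ALERT_EMAIL="ops@example.com"

while IFS= read -r url; do
  # -s: silent, -f: fail on HTTP error codes, cap each request at 10 seconds
  if ! curl -sf --max-time 10 "$url" > /dev/null; then
    echo "$(date): $url is down or returning errors" \
      | mailx -s "website-checker: problem with $url" "$ALERT_EMAIL"
  fi
done < "$URL_LIST"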
#set -o xtrace  # uncomment for verbose command tracing
echo "Please copy the output of this script and send it to the network team so they can diagnose your issue."
DESTINATION=destination.foo.com
PORT=8089
echo "Dumping local network data"
ifconfig
# get the routing table
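netstat -rn
# The fragment stops here; a plausible continuation that actually uses
# DESTINATION and PORT is sketched below (nc and traceroute are assumptions
# about the intended tools).
echo "Testing DNS resolution and reachability of $DESTINATION:$PORT"
nslookup "$DESTINATION"
# -z: probe without sending data, -v: verbose, -w 5: five-second timeout
nc -zv -w 5 "$DESTINATION" "$PORT"
traceroute "$DESTINATION"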
import sys
import time
from datetime import datetime, timedelta

import boto3
import pytz
from botocore.exceptions import ClientError

profile = "my_profile"
region = "us-west-2"
dry_run = True
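# The snippet stops after the configuration block; a plausible continuation
# is sketched below. The use of EC2 and the instance ID are assumptions.
session = boto3.Session(profile_name=profile, region_name=region)
ec2 = session.client("ec2")

try:
    # With DryRun=True the API performs permission checks only, then raises
    # DryRunOperation instead of actually stopping the instance.
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=dry_run)
except ClientError as e:
    if e.response["Error"]["Code"] == "DryRunOperation":
        print("Dry run succeeded; credentials and permissions look good.")
    else:
        raise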
# Example uses the GDELT dataset found here: https://aws.amazon.com/public-datasets/gdelt/
# Column headers found here: http://gdeltproject.org/data/lookups/CSV.header.dailyupdates.txt

# Load the RDD
lines = sc.textFile("s3://gdelt-open-data/events/2016*")  # Loads 73,385,698 records from 2016

# Split lines into columns; change the split() argument depending on the delimiter, e.g. '\t'
parts = lines.map(lambda l: l.split('\t'))

# Convert the RDD into a DataFrame: first fetch the column headers to use as the schema
from urllib.request import urlopen  # Python 3; on Python 2 use: from urllib import urlopen

html = urlopen("http://gdeltproject.org/data/lookups/CSV.header.dailyupdates.txt").read().decode().rstrip()
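# The snippet ends before the DataFrame is actually built; one way to finish
# the step, assuming a SparkSession is available and that the header file is
# tab-separated like the data itself:
schema = html.split('\t')  # one column name per field
df = parts.toDF(schema)    # every column comes back as a string
df.select("GLOBALEVENTID", "SQLDATE", "Actor1Name").show(5)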
Import-Module ADFS
Add-ADFSRelyingPartyTrust -Name "Amazon Web Services & AD Groups" -MetadataURL "https://signin.aws.amazon.com/static/saml-metadata.xml" -MonitoringEnabled:$true -AutoUpdateEnabled:$true
$ruleSet = New-AdfsClaimRuleSet -ClaimRuleFile ((pwd).Path + "\claims-AD-Groups.txt")
$authSet = New-AdfsClaimRuleSet -ClaimRuleFile ((pwd).Path + "\auth.txt")
Set-AdfsRelyingPartyTrust -TargetName "Amazon Web Services & AD Groups" -IssuanceTransformRules $ruleSet.ClaimRulesString -IssuanceAuthorizationRules $authSet.ClaimRulesString
import boto3
from botocore.exceptions import ClientError

try:
    iam = boto3.client('iam')
    user = iam.create_user(UserName='fred')
    print("Created user: %s" % user)
except ClientError as e:
    if e.response['Error']['Code'] == 'EntityAlreadyExists':
        print("User already exists")
    else:
        raise  # don't silently swallow unrelated API errors
We have remote developers who occasionally need access to AWS servers and to the QA and Staging databases (RDS MySQL instances). The AWS servers (EC2, Fargate) are in a private VPC. The RDS databases are in different VPCs; they have the "publicly accessible" attribute set, which means they get a public DNS name, but only a handful of IPs are whitelisted for that access; developers should get access over a VPN.

This is summarized as:

laptop --ClientVPN--> VPC _A_ --VPC Peer--> RDS in VPC _B_

I chose the Client VPN endpoint so that AWS would manage the remote side of the tunnel. I chose Viscosity (on a Mac) as our VPN client because it's easy to use and supports split DNS and split routing. It's affordable, but not free. Split DNS is important so that Amazon hostnames resolve to their internal IP addresses. Split routing is important so that only AWS-destined traffic goes over the VPN tunnel and all other internet traffic can go directly to the internet.
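For reference, split tunnel and DNS behavior are properties of the Client VPN endpoint itself, so they can be set at creation time. A minimal boto3 sketch follows; every ARN, CIDR, and ID in it is a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",  # addresses handed out to VPN clients
    ServerCertificateArn="arn:aws:acm:us-west-2:111122223333:certificate/EXAMPLE",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn":
                "arn:aws:acm:us-west-2:111122223333:certificate/EXAMPLE-CLIENT",
        },
    }],
    ConnectionLogOptions={"Enabled": False},
    DnsServers=["10.0.0.2"],  # VPC A's resolver, enabling split DNS
    SplitTunnel=True,         # only pushed routes go over the tunnel
)
endpoint_id = resp["ClientVpnEndpointId"]

# Associate the endpoint with a subnet in VPC A, route to the peered
# RDS VPC (VPC B), and authorize clients to reach it.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=endpoint_id, SubnetId="subnet-0123456789abcdef0")
ec2.create_client_vpn_route(
    ClientVpnEndpointId=endpoint_id,
    DestinationCidrBlock="10.1.0.0/16",  # VPC B's CIDR
    TargetVpcSubnetId="subnet-0123456789abcdef0")
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId=endpoint_id,
    TargetNetworkCidr="10.1.0.0/16",
    AuthorizeAllGroups=True)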