AWS Pathways

Certification Prep: https://aws.amazon.com/certification/certification-prep/

Solutions Architect:
https://aws.amazon.com/training/course-descriptions/architect/
https://aws.amazon.com/certification/certified-solutions-architect-associate/

Reading Materials:

Exam Tips

Interesting links:
Breaking the monolith: https://aws.amazon.com/getting-started/container-microservices-tutorial/
Make diagrams: https://draw.io

S3 FAQ: https://aws.amazon.com/s3/faqs/


cemeng commented Nov 28, 2017

4/12

Databases

Probably won't feature much on the exam.
RDS - for OLTP - SQL Server, MySQL etc.
DynamoDB - NoSQL
Redshift - OLAP
Elasticache - in-memory - Memcached / Redis
RDS Multi-AZ - off by default, can be turned on; in the event of the primary DB going down, it fails over to another instance - pretty cool.
I remember having to manage this sort of thing ourselves at Gruden - painful.
Read replicas.
Aurora scaling - 6 copies of the data (2 copies in each of a minimum of 3 AZs) automatically, but remember this is a copy of the data, NOT the instance - you'd want Multi-AZ for the instance, and it's gonna cost you.

Learn about storage sizes and such -> https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html

FOCUS on RDS - read the RDS FAQ, says Ryan.


cemeng commented Dec 4, 2017

When you have deployed an RDS database into multiple availability zones, can you use the secondary database as an independent read node?

So what's the primary difference between a read replica and Multi-AZ then?
Multi-AZ is something you can fail over to; a read replica is for speed / read performance.
So I guess the answer is no -> YAY, correct.
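
Felix note (not from the course): the difference is visible in the API - Multi-AZ is just a flag on the instance, while a read replica is a whole separate instance. A minimal boto3 sketch, with a hypothetical instance named my-db:

```python
import boto3

rds = boto3.client("rds", region_name="ap-southeast-2")

# Multi-AZ is a property of the instance itself: a synchronous standby,
# used only for failover - you can't read from it.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db",   # hypothetical instance name
    MultiAZ=True,
    ApplyImmediately=True,
)

# A read replica is a separate, independently addressable instance
# that you point read-only traffic at.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="my-db-replica",
    SourceDBInstanceIdentifier="my-db",
)
```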


cemeng commented Dec 7, 2017

07/12

Lecture ? Exam Tips EC2

Know the difference between: on demand, spot, reserved, dedicated hosts.
For example: spot for minimising cost - when you don't care if your instance gets terminated.
Steady-state website whose traffic doesn't spike much, and you want to minimise cost: use reserved.
Black Friday traffic spike -> use on demand.
Regulatory requirement that you cannot use multi-tenant compute -> dedicated host.

Spot instance -> if you terminate the instance, you pay for the hour; but if AWS terminates it, the hour in which it was terminated is free.
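
Felix note: to make the spot model concrete, a boto3 sketch of a spot request - the AMI ID and bid price are made up:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bid for capacity: cheap, but AWS can reclaim the instance when the
# spot price rises above your bid.
response = ec2.request_spot_instances(
    SpotPrice="0.05",                # max hourly price I'm willing to pay
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-12345678",   # hypothetical AMI
        "InstanceType": "m4.large",
    },
)
print(response["SpotInstanceRequests"][0]["State"])
```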

Remember DR. MC GIFT PX - you don't need to know the details.

  • D -> Dense storage, R -> Memory optimised (RAM)
  • M -> general purpose -> the all-rounder, this is what you should use in production - don't use T2 (tiny), they are just for toying around.
  • C -> Compute optimised
  • Two graphics-specific family types, G2 and P2: G is for video encoding, P (general purpose GPU) is for machine learning and bitcoin mining.

EBS consists of:

  • SSD general purpose (GP2) - up to 10K IOPS
  • SSD provisioned IOPS (IO1) - more than 10K IOPS
  • HDD throughput optimised (ST1) - used for data warehousing / log processing -> frequently accessed data
  • HDD cold (SC1) - less frequently accessed data
  • HDD magnetic (Standard) - cheap, for infrequently accessed storage -> this one can be used as a boot volume, but the two HDD types above cannot.
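
Felix note: these map directly onto the VolumeType parameter in the EC2 API. A quick boto3 sketch (sizes picked arbitrarily; io1 requires an explicit IOPS figure):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# General purpose SSD - IOPS scale with volume size, no Iops parameter.
gp2 = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")

# Provisioned IOPS SSD - you declare (and pay for) the IOPS you need.
io1 = ec2.create_volume(AvailabilityZone="us-east-1a", Size=400,
                        VolumeType="io1", Iops=12000)

# Throughput optimised / cold HDD use the same call with st1 / sc1.
st1 = ec2.create_volume(AvailabilityZone="us-east-1a", Size=500, VolumeType="st1")
```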

Termination protection - turned off by default.


cemeng commented Dec 8, 2017

08/12

Route 53: Weighted Routing Lab

I was a bit confused between ELB and routing with Route 53. With Route 53 you get more granular control, e.g. different routing strategies. You might even put routing logic in Route 53 that directs traffic to more than one ELB.

Some useful comments from: https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-KTWtu_Y5HscAAS8NCyc/elb-vs-route-53-routing

ELB's duty is to distribute traffic to instances while making sure the instances are healthy, so that your application is always available. Route 53 has several routing policies: simple, weighted, latency, failover and geolocation. Unlike ELB, with the Route 53 weighted policy you can manually set the traffic distribution for your applications, e.g. 20% of traffic routed to instance A and 80% to instance B. For failover, Route 53 and ELB have similar functionality in that both route traffic only to healthy applications or instances; but with Route 53, you use failover for active-passive failover.

In summary, I believe ELBs are intended to load balance across EC2 instances in a 'single' region. Whereas DNS load-balancing (Route 53) is intended to help balance traffic 'across' regions. Route53 policies like geolocation may help direct traffic to preferred regions, then ELBs route between instances within one region.

Functionally, another difference is that DNS-based routing (e.g. Route 53) only changes the address that your clients' requests resolve to. On the other hand, an ELB actually reroutes traffic.

One analogy: if you ask for the closest Walmart, you may get an address based on your location, but you could choose to go to another Walmart if you know one. That's Route 53; it just switches the address resolved based on some context. A policeman redirecting traffic because of construction, on the other hand, is more like an ELB: he/she is actually changing the traffic flow, not just suggesting.

A/B testing is a good use case for weighted routing with Route 53.
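
Felix note: what the weighted lab does, expressed as a boto3 sketch - the zone ID and IPs are hypothetical; the two records share a name and are told apart by SetIdentifier:

```python
import boto3

r53 = boto3.client("route53")

# Two weighted A records with the same name: Route 53 serves each answer
# in proportion to its weight (20 : 80).
for set_id, weight, ip in [("blue", 20, "203.0.113.10"), ("green", 80, "203.0.113.20")]:
    r53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",            # hypothetical hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.felixt.org.",
                "Type": "A",
                "SetIdentifier": set_id,     # distinguishes records sharing a name
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }]},
    )
```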

Route 53: Latency Based Routing

Latency-based routing with Route 53 responds to DNS queries with the resource that provides the best latency. This means Route 53 answers the DNS query based on which data centre gives your user the lowest latency.

Route 53: Failover Routing Policy

So I think this is similar to ELB. Here you specify a passive / active route - just like Hagrid has production and DR instances.
In the case of failure, the passive becomes active.

So just like ELB, you need to set up health checks too. Setting this up feels a little complicated. First of all you create health checks - a separate section in the Route 53 console. Then you go to Hosted Zones, set up an A record with the failover routing policy, and associate the record with a health check.
OK - so say I have a domain felixt.org - I can then create multiple A records (hm, interesting?) and each will have its own routing policy - ah okay, kinda get it - set these records as aliases and set the alias target, which can be EC2, S3, ELB - what else?

There are only primary and secondary options for failover - production and DR, I guess. You create a health check attached to the primary A record; the secondary record has no need for a health check, I guess? But what if you want a cascading failover, e.g. more than primary and secondary - can you do that?
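
Felix note: my understanding of the primary side as a boto3 sketch (zone ID and IP hypothetical); the secondary record is the same shape with Failover set to "SECONDARY" and, as noted above, no health check:

```python
import boto3

r53 = boto3.client("route53")

# Health check against the primary endpoint.
hc = r53.create_health_check(
    CallerReference="primary-hc-1",       # any unique string, for idempotency
    HealthCheckConfig={
        "Type": "HTTP",
        "IPAddress": "203.0.113.10",      # hypothetical primary IP
        "Port": 80,
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Primary failover record tied to the health check.
r53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "felixt.org.",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            "HealthCheckId": hc["HealthCheck"]["Id"],
        },
    }]},
)
```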

Geolocation is when you want Amazon Route 53 to respond to DNS queries based on the location of your users. Geolocation also lets you vary the content presented, e.g. the language, to suit the user.


cemeng commented Dec 8, 2017

What is an MX record? (A mail exchanger record - it specifies which mail servers accept email for a domain.)
There is a limit of 50 domains you can register with Route 53 - if you want more, contact Amazon sales.


cemeng commented Dec 10, 2017

10/12

VPC Lab

Create the VPC manually - don't use the wizard - you won't learn by doing that.
When a VPC is created, the following are created for you:

  • route table
  • NACL
  • security group
    Subnets and an internet gateway you need to create yourself.

When creating subnets, Ryan likes the following naming convention: 10.0.1.0-us-east-1a
Remember you get 5 fewer addresses per subnet - the first 4 and the last are reserved by AWS.
An internet gateway has a one-to-one relationship with a VPC. Once a VPC is attached to a gateway, you cannot attach it to another gateway.

Uhm, what are a route table and an internet gateway?

Then Ryan creates a separate route table that allows connection to the internet - he said this is a best practice: you don't want your default route table to be internet accessible.
OK, so create a route table, and under the Routes tab add an entry with:
destination: 0.0.0.0/0
target: igw-420a753b -> this is the internet gateway created in the previous step
That's for IPv4 - add another entry below for IPv6 internet connectivity, with destination:
::/0
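
Felix note: the same console steps as a boto3 sketch, so I remember what the clicks actually do (the CIDR matches the lab; the IDs come back from the API):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC and internet gateway (one-to-one attachment, as noted above).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                            VpcId=vpc["VpcId"])

# Separate public route table (leave the default/main route table private).
rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",        # IPv4 default route
                 GatewayId=igw["InternetGatewayId"])
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationIpv6CidrBlock="::/0",         # IPv6 default route
                 GatewayId=igw["InternetGatewayId"])
```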

Next in the lab: creating an EC2 instance in the public subnet of the VPC.
Note that security groups do not span VPCs - the WebDMZ one was created for the default VPC, so you won't be able to select it here. Create a new one for this VPC with the same WebDMZ rules.


cemeng commented Dec 11, 2017

11/12

NAT Instance and NAT Gateway

NAT - Network Address Translation - what is it? From AWS:
You can use a NAT device to enable instances in a private subnet to connect to the Internet (for example, for software updates) or other AWS services, but prevent the Internet from initiating connections with the instances.
Oh OK - I get it - so I guess NAT sits on a router / modem, for example: the modem has 1 public IP address but can have multiple private IPs behind it. The modem adds the private IP info to the header so it knows how to connect the public side to the private.

NAT devices are not supported for IPv6 traffic—use an egress-only Internet gateway instead. - uhm what?

So how do you use a NAT instance? A NAT instance is basically an EC2 instance - go to EC2 and pick a NAT AMI; it's in the community AMIs.

NAT Gateway is the preferred method of NATting - it is essentially managed NAT - it scales up to 10 Gbps automatically - while with a NAT instance you need to worry about scaling, redundancy, etc. I suspect NAT Gateway is more expensive though.
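
Felix note: a boto3 sketch of wiring up a NAT gateway (the subnet and route table IDs are made up) - it lives in the public subnet, and the private subnet's route table points at it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A NAT gateway needs an Elastic IP and sits in a PUBLIC subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId="subnet-11111111",      # hypothetical public subnet
                             AllocationId=eip["AllocationId"])

# The PRIVATE subnet's route table sends internet-bound traffic to it.
ec2.create_route(RouteTableId="rtb-22222222",                 # hypothetical route table
                 DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat["NatGateway"]["NatGatewayId"])
```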

NACL and security groups

OK so what is NACL?
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

So it's basically a firewall!

Important: one subnet can only be associated with one NACL. Also, NACL is related to security - duh! - it is listed under the Security heading in the VPC console, along with security groups.

By default a new NACL will deny everything.

OK, I don't get this - Ryan said NACL is stateless - what does that mean? It means return traffic is not automatically allowed: you need to specify inbound and outbound rules separately, while with a (stateful) security group you only need to specify the rule once.

Let's talk about ephemeral ports - uhm, what are those?
An ephemeral port is a short-lived endpoint that is created by the operating system when a program requests any available user port. The operating system selects the port number from a predefined range, typically between 1024 and 65535, and releases the port after the related TCP connection terminates.
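
Felix note: this is where statelessness bites. A boto3 sketch with a made-up NACL ID - allowing inbound HTTP is not enough, because the replies leave on an ephemeral port, so the outbound rule has to open that whole range:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
acl_id = "acl-33333333"   # hypothetical NACL

# Inbound: allow HTTP requests in on port 80.
ec2.create_network_acl_entry(NetworkAclId=acl_id, RuleNumber=100, Protocol="6",
                             RuleAction="allow", Egress=False,
                             CidrBlock="0.0.0.0/0",
                             PortRange={"From": 80, "To": 80})

# Outbound: the HTTP *responses* go back out on ephemeral ports,
# so the egress rule must open 1024-65535.
ec2.create_network_acl_entry(NetworkAclId=acl_id, RuleNumber=100, Protocol="6",
                             RuleAction="allow", Egress=True,
                             CidrBlock="0.0.0.0/0",
                             PortRange={"From": 1024, "To": 65535})
```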


cemeng commented Dec 15, 2017

Exams

24/11

Scored 58% - you answered 35 of 60 questions correctly in 59:19 -
but reflective of the fact that I'm only halfway through the course.

Design 58.3%
Data security 50%
Implementation / Deployment 83.3%
Troubleshooting 50%

29/11

Scored 62%

Design 69.4%
Data Security 50%
Implementation/Deployment 66.7%
Troubleshooting 33%

15/12

Scored 60% - discouraging result.

Design 61.1%
Data Security 66.7%
Implementation/Deployment 83.3%
Troubleshooting 16.7%

22/12

Scored 75% - aw yisss

Designing highly available, cost-efficient, fault-tolerant, scalable systems 80.6%
Data Security 66.7%
Implementation/Deployment 66.7%
Troubleshooting 66.7%

09/01

Scored 82% - awww yess

Designing highly available, cost-efficient, fault-tolerant, scalable systems 80.6%
Data Security 75%
Implementation/Deployment 83.3%
Troubleshooting 100.0%


cemeng commented Dec 16, 2017

16/12 - Saturday

Reviewing S3 FAQ

  • Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
  • S3 Standard is designed for 99.99% availability and Standard-IA is designed for 99.9% availability. Also, IA has a higher cost to retrieve data, but a lower storage cost.
  • Any publicly available data in Amazon S3 can be downloaded via the BitTorrent protocol, in addition to the default client/server delivery mechanism
  • By default, customers can provision up to 100 buckets per AWS account.
  • You can limit access to your bucket from a specific Amazon VPC Endpoint or a set of endpoints using Amazon S3 bucket policies. S3 bucket policies now support a condition, aws:sourceVpce, that you can use to restrict access. For more details and example policies, read Using VPC Endpoints.
  • There are two ways to get data into Standard-IA from within S3. You can directly PUT into Standard-IA by specifying STANDARD_IA in the x-amz-storage-class header. You can also set lifecycle policies to transition objects from Standard to Standard-IA (see the sketch after this list).
  • Standard - IA is designed for larger objects and has a minimum object size of 128KB. Objects smaller than 128KB in size will incur storage charges as if the object were 128KB.
  • To retrieve Amazon S3 data stored in Amazon Glacier, initiate a retrieval request using the Amazon S3 APIs or the Amazon S3 Management Console.
  • Deleting data that is archived to Amazon Glacier is free if the objects being deleted have been archived in Amazon Glacier for three months or longer. If an object archived in Amazon Glacier is deleted or overwritten within three months of being archived then there will be an early deletion fee. This fee is prorated.
  • How should I choose between Transfer Acceleration and Amazon CloudFront’s PUT/POST? Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1GB or if the data set is less than 1GB in size, you should consider using Amazon CloudFront's PUT/POST commands for optimal performance.
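
Felix note: the lifecycle transition mentioned above (Standard to Standard-IA) as a boto3 sketch - the bucket name and rule are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to Standard-IA after 30 days (remember the 128KB
# minimum object size noted above).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-little-pony",             # hypothetical bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "to-standard-ia",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},        # apply to the whole bucket
        "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
    }]},
)
```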

The following are not mentioned in the acloud.guru course - worth knowing what they are:

  • Query in Place - Amazon S3 allows customers to run sophisticated queries against data stored without the need to extract, transform, and load (ETL) into a separate analytics platform. S3 offers multiple query in place options, including S3 Select (currently in preview), Amazon Athena, and Amazon Redshift Spectrum, allowing you to choose one that best fits your use case. You can even use Amazon S3 Select with AWS Lambda to build serverless apps that can take advantage of the in-place processing capabilities provided by S3 Select.
  • S3 Analytics - Storage Class Analysis - provides analysis to your S3 usage. With storage class analysis, you can analyze storage access patterns and transition the right data to the right storage class.


cemeng commented Dec 17, 2017

17/12 - Sunday

VPC Summary and exam tips

  • NAT instance - disable the source/destination check on the instance; it must be in a public subnet. The traffic it can handle depends on the instance size. Create high availability using autoscaling groups, multiple subnets in different AZs, and a script to automate failover (hm?). It sits behind a security group.
  • NAT gateways - remember to update the route table - may take 15 minutes to provision.

19/12

Overview of security processes (part 2)

Make sure you read the security whitepaper - it's quite long at 95 pages though.
Encrypting data is generally good practice; you can encrypt EBS volumes and their snapshots with AES-256.
This means data moving between EC2 instances and EBS storage is secure.
However, this feature is only available on the more powerful instance types such as M3, C3, R3, G2.

ELB - SSL termination on the load balancer is supported -> why is this good? Because your web server then doesn't need to do the decryption, which saves it processing power. The ELB does pass the originating IP address through to your web server.

Direct Connect - bypasses internet service providers in your network path. You can buy rack space at an AWS Direct Connect location and deploy your equipment nearby.

You can conduct vulnerability scans on your own instances, but you must tell Amazon beforehand - failing to do so is a violation of the acceptable use policy. AWS conducts scans of their own systems, not customers' instances.

Compliance - AWS complies with a lot of standards, one of them being PCI DSS Level 1 - this is compliance at the infrastructure level though; you need to make sure your app is compliant too.

Storage options in cloud whitepaper

S3, Glacier, EBS, EC2 instance storage -> the last is ephemeral (gone when you terminate the instance).
AWS Import/Export -> a service that lets you send your data to an AWS Import/Export centre - not via the internet, but by mailing your physical storage device. Apparently Snowball is now preferred.
AWS Storage Gateway -> connects on-prem software with cloud storage; the purpose is to use the cloud for data storage. It is essentially (and interestingly) a VM that you install on-prem. Once installed, you can create gateway-cached or gateway-stored volumes that can be mounted as iSCSI devices by your on-prem apps.
Gateway-cached -> uses S3 as the primary data store, while retaining frequently accessed data locally in a cache.
You can create storage volumes up to 32TB in size.
Gateway-stored -> keeps your entire dataset on-prem and asynchronously backs it up to S3 as EBS snapshots.


cemeng commented Dec 21, 2017

21/12

Kinesis Firehose vs Kinesis Streams

Kinesis Streams - you must manually provision the appropriate number of shards for your stream to handle the volume of data you expect to process. Amazon helpfully provides a shard calculator when creating a stream to determine this number correctly. Once created, it is possible to dynamically scale the number of shards up or down to meet demand, but only with the AWS Streams API at this time. With Kinesis Streams, you work with the data yourself - you build and manage the consumers.

Kinesis Firehose is Amazon’s data-ingestion product offering for Kinesis. It is used to capture and load streaming data into other Amazon services such as S3 and Redshift. From there, you can load the streams into data processing and analysis tools like Elastic Map Reduce, and Amazon Elasticsearch Service. It is also possible to load the same data into S3 and Redshift at the same time using Firehose.
With Firehose, you don't have to worry about consumers.

You can check out this re:Invent recording on Kinesis: https://www.youtube.com/watch?v=SmcgiweeviY. The key difference I found is that when you flow data through Firehose, it doesn't store it: everything happens under the covers, streaming the data through to the configured destination where it is stored persistently, i.e. S3, Redshift or ES.
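
Felix note: a boto3 sketch of the producer side of Firehose (the delivery stream name is hypothetical and must already exist) - you just put records, and the buffering plus delivery to the destination happen under the covers:

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Producers put records; Firehose buffers and delivers them to the
# configured destination (e.g. S3) - no shards or consumers to manage.
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",   # hypothetical stream
    Record={"Data": b'{"event": "page_view", "path": "/"}\n'},
)
```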


cemeng commented Dec 22, 2017

22/12

OK - so I have finished all the major components in acloudguru - progress is 76% now - except for:

  • ch 10: hands on lab
  • ch 12: well architected frameworks
  • ch 13: additional exam tips

24/12

Wordpress lab

draw.io -> website for creating diagrams.
The architecture for this lab -> ELB, EC2 with auto scaling (2 instances), and RDS multi-AZ (2 RDS instances).
The EC2s & ELB are inside the VPC behind the WebDMZ security group; RDS sits in a private subnet.
IAM roles -> create a role that allows EC2 full access to S3.
Creating the security group for the EC2s -> done by going to the VPC console - so remember, security groups are part of a VPC.

At this stage I am a bit hazy about EC2 auto scaling - so I am taking a detour here and re-reading how to set it up.

When it got to setting up the security group, I stopped the lecture and tried to remember for myself how to set up a VPC for my EC2 - the VPC should allow:

  • public access to my EC2 / ELB (ingress) on ports 80, 443 and 22; egress on ports 80 and 443 - for fetching updates? I don't think it needs egress on port 22.

Steps:
Security groups and VPC

  • Create a VPC with CIDR 10.0.0.0/16 -> wordpress VPC - so I guess a VPC is per app in this case; if I had two apps / websites for different clients, I'd create different VPCs to isolate them.
  • Create a security group - WebDMZ - with inbound rules:
    HTTP (80) | TCP (6) | 80 | 0.0.0.0/0
    SSH (22) | TCP (6) | 22 | 0.0.0.0/0
    HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0
  • Create a security group for Aurora with 3306 in the inbound rules - but hang on, you can have multiple security groups in one VPC?
    See the inbound rule:
    MySQL/Aurora (3306) | TCP (6) | 3306 | sg-2133d255 -> the last one is my web security group (kinda weird specifying a security group as the allowed source - and we haven't touched on subnets yet). I guess it means: allow MySQL connections from any instance that has the web security group - for instance, you may have multiple EC2s with the web security group, and they will all be allowed to connect to this resource (see the sketch after this list).
  • Created 2 subnets for the wordpress VPC - specifying the CIDR ranges was a bit tricky - used both cidr.xyz and the example from the default VPC -> 10.0.0.0/20 and 10.0.16.0/20. What's confusing is that these subnets also have a route table - I am not sure what they are for - I forgot.
  • Created an internet gateway, attached it to the VPC, and added the internet gateway to the route table.
  • This route table seems to have been created automatically?
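
Felix note: the security-group-as-source rule as a boto3 sketch - the Aurora group ID is made up; sg-2133d255 is the web SG from the notes above:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Instead of a CIDR block, the source of this rule is another security
# group: any instance carrying the web SG may connect on 3306.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaa111",               # hypothetical Aurora security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-2133d255"}],   # the web DMZ SG
    }],
)
```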

ELB

Create an ELB - an Application one - I am not sure, should I let it listen on port 80 only?
For the ELB you need to specify a VPC - when I choose the wordpress VPC no AZ is shown!!! Which means I am missing some steps here - the error message says 2 subnets must be specified - OK, so that means I need to create at least 2 subnets, obviously in 2 different AZs? Added the subnets above.
OK, added 2 subnets - OMG - now it's complaining: You are creating an internet-facing Load Balancer, but there is no Internet Gateway attached to the subnets you have selected: subnet-5a3d7d3e, subnet-25f87f0a

Side notes:

Are you sure you want to delete this VPC? The following will be deleted too: subnets, security groups, network ACLs, VPN attachments, internet gateways, route tables, network interfaces, VPC peering connections.


cemeng commented Dec 27, 2017

27/12

Took a few days' break - it's Christmas, and spending time with the kids is kinda important.

WordPress lab - setting up EC2

Now on to setting up the EC2 - after I set it up, I found there was no public IP address on the EC2 - wooottt! Turns out I need to turn on the auto-assign public IP address setting on my subnet!!!!

30/12

Adding resilience and cloudfront lab

Ryan backs up the wordpress code inside /var/www/html into an S3 bucket. Felix note: I would probably use git for this.

aws s3 sync --delete /var/www/html/wp-content/uploads s3://my-little-pony (--dry-run) -> ah, pretty cool - rsync for S3 - I could use this for my blog later.

Then we do some URL-rewriting magic in WP so the files are served from CloudFront instead of EC2 / WP.
The next step is to automate the sync process using cron - which is basically pasting the command above into the crontab.

Setting up AMIs lab

Why bother creating an ELB for one EC2 instance? It's due to the public IP address: when the EC2 is restarted, it gets a new IP address.
Note to Felix: can't Route 53 automatically connect to that EC2 based on its ARN? No -> https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-alias.html - you can only set an ELB, Elastic Beanstalk environment, CloudFront distribution, or S3 bucket as an alias record destination.
According to Ryan: you could avoid this issue by using an ELB or an Elastic IP.

In this lab Ryan split the wordpress site in two - the production WP and the writer WP - and built an AMI for each. I don't really understand the practicality of this lab in the real world, hence I didn't do it.

Autoscaling and Load Testing

Didn't do it...

Exam tips based on students feedback

Kinesis - the way to consume big data / data streams and bring them into the cloud - like social media feeds.
Business intelligence -> Redshift
Big data processing -> EMR

EC2 - EBS-backed vs instance store: know the difference. EBS stores data long-term.

OpsWorks -> orchestration service that uses Chef.

SWF Actors: workflow starters (initiate workflow), deciders (control flow), activity workers (carry out activity tasks)

AWS Organisations & consolidated billing

This is a feature for larger organisations which may have a lot of AWS accounts. Consolidated billing is sort of superseded by AWS Organizations?
What is AWS Organizations?
One root account, and you can have multiple organisational units below it. Then you can apply granular control over policies.

OK - now for consolidated billing. In this setup the root account is the paying account, and we have several linked accounts under it, such as test, production and back office.
You receive one bill, with a breakdown for the individual linked accounts. The accounts are independent though: they cannot access other accounts' resources. The limit for consolidated billing is 20 linked accounts.
The good thing about this: you get volume pricing discounts.

Best practice -> the paying account should be used for billing purposes only - don't deploy your resources there.


cemeng commented Jan 1, 2018

01/18

Happy new year! :)

Cross Account Access

Cross-account access - what is it? From a post on the internet:
Today, we made it possible for you to enable a user to switch roles directly in the AWS Management Console to access resources across multiple AWS accounts—while using only one set of credentials.

I have actually experienced this at FFX - using my login, switching to the developer role, and then I was able to access dev-specific resources.

Not doing the lab - but taking the idea and applying it to the MEC-and-my-own-account scenario - almost got it working.

Also reading the IAM documentation on AWS, as well as the IAM best practices - this has solidified my understanding of users, groups, policies and roles.
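
Felix note: what the role switch looks like in code - a boto3 sketch with a hypothetical role ARN; you trade your own credentials for temporary ones in the other account:

```python
import boto3

sts = boto3.client("sts")

# Swap my credentials for temporary ones tied to a role in the other account.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::210987654321:role/developer",   # hypothetical role
    RoleSessionName="felix-cross-account",
)["Credentials"]

# A session acting inside the target account, using the temporary credentials.
dev = boto3.session.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(dev.client("s3").list_buckets()["Buckets"])
```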


cemeng commented Jan 2, 2018

02/18

acloud guru 88% completed - the end is near. I think I can finish this course before the holiday ends - I am stoked!

Learnt a bit about Docker and Docker in AWS.
ECS - Elastic Container Service - is the managed Docker service in AWS.
ECR - Elastic Container Registry - is the Docker image registry in AWS; AWS's version of DockerHub.

I had to read additional resources to wrap my head around the Docker stuff again.
Docker image -> a template for creating a Docker container (in my own words).
I am still a bit fuzzy on Task Definitions and Clusters.
In my own words again - a Task Definition defines how to run a Docker container in AWS. It is the container configuration.
A Cluster is region-specific and is the place you deploy task definitions to. An ECS cluster is basically autoscaling for Docker: it provisions the required number of EC2 instances to run your Docker image.

ECS quick tutorial from YouTube (not from acloudguru) https://www.youtube.com/watch?v=kQBGbmrdYO4:

  • push an image to ECR
  • create a task definition - here you specify the image URL from ECR, then configure the container by specifying CPU requirements etc.
  • then create a cluster - here you specify what EC2 instance type you want, the VPC config etc.
  • then create a service - uhm, what? I don't really get it (see the sketch below).

When you finish this, if you go to EC2 you'll see the instance that ECS created for the container. SSH into that box and you'll see Docker installed and provisioned for you; running docker images shows 2 images - one is the ECS agent, the other is your image.
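
Felix note: the task definition and the service from the tutorial, as a boto3 sketch (all the names and the ECR URL are hypothetical) - the service turned out to be the thing that keeps N copies of the task running on the cluster:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Task definition: HOW to run the container (image, CPU, memory, ports).
ecs.register_task_definition(
    family="my-web",                              # hypothetical names throughout
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web:latest",
        "cpu": 128,
        "memory": 256,
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
)

# Service: keeps desiredCount copies of the task running on the cluster -
# this is the "create a service" step that had me confused.
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-web-service",
    taskDefinition="my-web",
    desiredCount=2,
)
```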


cemeng commented Jan 3, 2018

03/18

Did the Whizlabs diagnostic exam and scored 85% (51 out of 60) - pretty stoked!
Areas to improve:

  • the details of things, I guess - for example: which DB engine doesn't support read replicas in RDS? Answer: Oracle.
  • how long can a message stay in SQS? Max is 14 days, default is 4 days.
  • autoscaling - what do you do if you want to change the instance type in your autoscaling group? Answer: create a new launch configuration and replace the autoscaling group's existing launch config with the new one.
  • Direct Connect is not a VPN.


cemeng commented Jan 29, 2018

29/01

A few more days before the exam.

Doing the acloud guru final exam; a few things to review:

  • Site-to-site VPN vs Direct Connect - what's required -> You need to ensure that the application in your custom VPC can communicate back to the on-premise data centre. You can do this with either a site-to-site VPN or Direct Connect. It will be using an internal IP address range, so you must make sure your internal IP addresses do not overlap.
  • CNAME vs A record
  • what is AWS WAF - what filters are available?
  • in auto scaling - how is it determined which instance gets terminated?
  • what services are offered by Trusted Advisor?
  • ECS - especially with regard to permissions; permissions can be applied to tasks and to the instances themselves?
  • SQS - what does DelaySeconds mean?
  • To establish a successful site-to-site VPN connection from your on-premise network to an AWS Virtual Private Cloud, which of the following must be configured? (Choose 3)
    You must have a VPC with Hardware VPN Access, an on-premise Customer Gateway, and a Virtual Private Gateway to make the VPN connection work.
  • what are a Virtual Private Gateway and a Customer Gateway?

Got 72%
