AWS Info
-----------Getting Started----------
* https://aws.amazon.com/
* https://aws.amazon.com/training/intro_series/
* https://www.youtube.com/watch?v=b-gwhQ6GPFQ&list=PLhr1KZpdzukcMmx04RbtWuQ0yYOp1vQi4&index=6
* Acloudguru.com -> Not free
* qwiklabs.com -> Not free
------------Account, Users, Roles, Permissions & Account level security------------
+ AWS Organizations
- Manage multiple AWS accounts
- Create Organizational Units (OUs), set policies for each OU, and attach accounts to OUs
- Tree-like structure
- Set policies per account or OU, much like Active Directory
+ Hierarchy Example:
  AWS Root Account user -> Access to all the AWS Services, Billing
  + Organizational Unit (Production Server)
    + AWS Account1 (Mercurio)
      - IAM Users -> Create more users in an account, groups, roles, policies etc.
        -> No access to Billing
    + AWS Account2 (PPM)
      - IAM Users -> Create more users in an account, groups, roles, policies etc.
        -> No access to Billing
  + Organizational Unit (Dev/Stage Server)
    + AWS Account3 (Mercurio)
      - IAM Users -> Create more users in an account, groups, roles, policies etc.
        -> No access to Billing
    + AWS Account4 (PPM)
      - IAM Users -> Create more users in an account, groups, roles, policies etc.
        -> No access to Billing
[ Ex: OUs: Production unit, Development unit, Stage unit; accounts under each OU: PPM, Mercurio, i.e. one account for each production-level project. A boto3 sketch follows below. ]
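[ A minimal sketch (boto3; the root ID and account ID are placeholders, not from these notes) of creating the OUs above and moving a member account into one: ]

    import boto3

    org = boto3.client('organizations')

    root_id = 'r-examplerootid1'  # placeholder: fetch the real one with org.list_roots()
    prod_ou = org.create_organizational_unit(ParentId=root_id, Name='Production')
    dev_ou = org.create_organizational_unit(ParentId=root_id, Name='Dev-Stage')

    # Move an existing member account under the Production OU
    org.move_account(AccountId='111111111111',
                     SourceParentId=root_id,
                     DestinationParentId=prod_ou['OrganizationalUnit']['Id'])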
+ IAM
- Securely control access to AWS resources for your users
- Use MFA for admins
- Never use your root account day to day; enable MFA and a hardware key and lock it down
- Users -> can have any number of users with different groups and policies
- Groups -> attach policies to the group
- Roles -> temporarily give a user extra permissions by creating a role and switching to it, instead of changing policies
- Policies -> create policies easily; policy generator, policy validator
- Password -> set your password and the password policy rules
- Report -> credential report of all the users in the account
+ Hierarchy:
  AWS Root Account user -> Access to all the AWS Services, Billing
  - IAM Users -> Create more users in an account, groups, roles, policies etc.
    -> No access to Billing
[ Ex: Admin group, Developers group, Testers group, with different access policies and different users ]
+ Conclusion:
- To perform an action, both the AWS Organizations policy and the IAM policy must allow it; otherwise permission is denied (group/user sketch below)
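[ A sketch of the group/user setup above with boto3; the group name, user name, and managed policy are placeholders: ]

    import boto3

    iam = boto3.client('iam')

    # Group with an AWS-managed policy attached
    iam.create_group(GroupName='Developers')
    iam.attach_group_policy(GroupName='Developers',
                            PolicyArn='arn:aws:iam::aws:policy/PowerUserAccess')

    # The user inherits permissions from the group's policies
    iam.create_user(UserName='alice')
    iam.add_user_to_group(GroupName='Developers', UserName='alice')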
+ AWS Key Management Service
- Managed encryption service
- create and control the encryption keys used to encrypt your data
- easily create and disable keys
- centralized control of your encryption keys
- Single view into all key usage in the organization
- Implement key rotation, create usage policies, and enable logging
- Where do we use the keys?
- integrated with S3, EBS, Redshift, RDS, Elastic Transcoder
- encrypts the data stored in these services
- API can be used to access keys
- KMS integrates with CloudTrail to provide logs of accesses
- AWS manages physical security, scalability, high availability
- $1 per key per month
- no one, including AWS, can retrieve your plaintext keys
- access to keys is controlled through IAM (encrypt/decrypt sketch below)
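[ A KMS sketch with boto3; the key description and payload are placeholders: ]

    import boto3

    kms = boto3.client('kms')

    # Create a master key and turn on yearly key rotation
    key_id = kms.create_key(Description='demo key')['KeyMetadata']['KeyId']
    kms.enable_key_rotation(KeyId=key_id)

    # Encrypt/decrypt small payloads (up to 4 KB) directly with the key
    blob = kms.encrypt(KeyId=key_id, Plaintext=b'secret')['CiphertextBlob']
    plain = kms.decrypt(CiphertextBlob=blob)['Plaintext']  # b'secret'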
------------Services------------
--------Compute-----
+ EC2
- Elastic Compute Cloud -> resizable compute capacity in the cloud
- Launch a VM/instance
- Create your own data center
- Customize your own network and security settings
- Increase the hardware capacity on the go
- CPU power, RAM size, storage, network capabilities
- AMI - Windows, Linux
- micro, small, ..., xlarge instance types -> depending on the requirements
- Security groups -> similar to firewalls
- Public DNS name -> can be used to access the instance over SSH (resolves like an IP, but URL-friendly); launch sketch below
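[ A launch sketch with boto3; the AMI, key pair, and security group IDs are placeholders: ]

    import boto3

    ec2 = boto3.client('ec2')

    resp = ec2.run_instances(ImageId='ami-0123456789abcdef0',
                             InstanceType='t2.micro',
                             KeyName='my-key-pair',
                             SecurityGroupIds=['sg-0123456789abcdef0'],
                             MinCount=1, MaxCount=1)
    print(resp['Instances'][0]['InstanceId'])  # then SSH in via its public DNS name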
+ EC2 Container Service
- Highly scalable, high-performance container management service
- Docker containers
- Run and manage Docker-enabled applications across a cluster of EC2 instances
- Helps you grow from 1 container to 1000s
- ECS supports Docker
- Define tasks using a JSON ECS task definition -> specify one or more containers, CPU, memory, links between containers; launch as many tasks as you want; version control; schedulers
- Monitor -> containers, cluster, CPU and memory utilization using CloudWatch; CloudWatch alarms to scale up/down
- Create a cluster with many containers
- Write an ECS task definition specifying the container names, images, instance specs, links etc.
- Run the task definition to automatically launch all the containers
- Make changes in the ECS task definition and update the cluster to see the changes
- Easy to create, manage and deploy containers (task-definition sketch below)
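[ A task-definition sketch with boto3; the family, image, and cluster names are placeholders: ]

    import boto3

    ecs = boto3.client('ecs')

    # One nginx container with CPU/memory limits and a port mapping
    ecs.register_task_definition(
        family='web',
        containerDefinitions=[{
            'name': 'web',
            'image': 'nginx:latest',
            'cpu': 128,
            'memory': 256,
            'essential': True,
            'portMappings': [{'containerPort': 80, 'hostPort': 80}],
        }])

    # Run one copy of the task on an existing cluster
    ecs.run_task(cluster='demo-cluster', taskDefinition='web', count=1)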
+ ECR
- Stores all your container images
- Push and pull images
- Versioning (tagging) of images
- Scalable and highly available
+ Elastic Load Balancer
- Works hand in hand with EC2
- Traffic manager
- Automatically distributes web traffic across different EC2 instances
- Re-routes traffic if an instance fails
- Add and remove EC2 instances without downtime for the application
- Minimizes overloading of an instance; monitors health and traffic; handles incoming requests; checks that load balancing functions properly
- Security groups can be attached to the ELB
- An ELB can route traffic to EC2 instances in a single Availability Zone or across multiple AZs
- AZ: a collection of data centers at one location
- Application Load Balancer
- Classic Load Balancer
- An Application Load Balancer requires at least two subnets, in different AZs (sketch below)
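[ An ALB sketch with boto3; subnet and security group IDs are placeholders, and a target group plus listener are still needed to actually route traffic: ]

    import boto3

    elbv2 = boto3.client('elbv2')

    # An Application Load Balancer needs subnets in at least two AZs
    lb = elbv2.create_load_balancer(Name='demo-alb',
                                    Subnets=['subnet-aaaa1111', 'subnet-bbbb2222'],
                                    SecurityGroups=['sg-0123456789abcdef0'],
                                    Type='application')
    print(lb['LoadBalancers'][0]['DNSName'])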
+ Lambda
- Can run code without servers
- Pay only for the number of times your code runs
- Can be triggered when data changes, a user takes an action, or by other AWS services such as S3, Kinesis, CloudWatch, DynamoDB etc.
- No servers to run and maintain, so it is cheap
- Scales easily based on the number of triggers
- Each trigger runs on a new instance in parallel, so there is no performance hit
- Java, Node.js etc.
- Example: save the file extension of a file uploaded to S3 (handler sketch below)
- Create a Lambda function in Lambda
- Create the S3 bucket
- Add the S3 bucket as an event source and also the action in the previously created Lambda function
- Upload a file to S3
- See the Monitoring tab in the Lambda for the history
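[ A minimal Python handler for the example above; the S3 event shape is standard, and the print output lands in the CloudWatch logs shown in the Monitoring tab: ]

    import os

    def lambda_handler(event, context):
        # S3 sends one or more records per upload event
        for record in event['Records']:
            key = record['s3']['object']['key']
            _, ext = os.path.splitext(key)
            print('Uploaded object %s has extension %s' % (key, ext))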
--------Storage and Content Delivery-----
+ S3
- Simple Storage Service
- Safe, secure, highly available object storage on the web
- Store as much data as you like
- Pay only for the storage and bandwidth you use
- Affordable solution for storing data
- Backup and storage, application or media hosting, hosting high-traffic websites, software delivery
- Website -> store static content for fast access and reduced cost
- Securely store info and back up data
- Versioning and rollback
- Reliable -> 99.999999999% durability, 99.99% availability
- Easy to use
- REST and SOAP web APIs to store and retrieve data
- CLI or AWS web console
- Stores data as objects (files) in buckets (folders)
- All objects are private once uploaded, unless made public
- Buckets are containers for objects -> control access at the bucket level
- A bucket name cannot be changed once set, and the bucket is addressed externally by its name
- Choose the region to optimize latency and meet regulatory requirements
- An object can be text, an image, a video, an application
- Can upload a whole folder, but the console needs a plugin for that (upload sketch below)
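[ A basic bucket/upload sketch with boto3; the bucket name is a placeholder and must be globally unique: ]

    import boto3

    s3 = boto3.client('s3')

    # Outside us-east-1, also pass CreateBucketConfiguration={'LocationConstraint': region}
    s3.create_bucket(Bucket='my-example-bucket-12345')
    s3.upload_file('report.pdf', 'my-example-bucket-12345', 'docs/report.pdf')

    # Objects are private by default; reading them back needs credentials or a public ACL
    s3.download_file('my-example-bucket-12345', 'docs/report.pdf', 'copy.pdf')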
- Versioning
- Once enabled, versioning can only be suspended, not disabled
- To revert to a previous version, delete the latest version
- Bucket size is the sum of all versions of all files combined, so for large files versioning increases cost unless you have lifecycle management
- Cannot retrieve a deleted version of a file (version deletes are permanent)
- Can retrieve a deleted object in a bucket, by removing its delete marker (sketch below)
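[ A versioning sketch with boto3; the bucket name is a placeholder: ]

    import boto3

    s3 = boto3.client('s3')
    bucket = 'my-example-bucket-12345'

    # Once enabled, versioning can later only be suspended, never fully disabled
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={'Status': 'Enabled'})

    # Every stored version of every key shows up here
    for v in s3.list_object_versions(Bucket=bucket).get('Versions', []):
        print(v['Key'], v['VersionId'], v['IsLatest'])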
+ Encryption
- Use S3 encryption
- KMS encryption
- Once encrypted at rest, the raw stored data cannot be read without the key (S3 decrypts transparently for authorized requests)
+ Cross region replication
- Prereq: Versioning must be enabled for this
- Replicates the data in a bucket in one region to a bucket in another region (configuration sketch below)
- Source and destination buckets must be in different regions
- Replication applies only to objects uploaded after it is enabled; previously stored objects are not replicated, so existing objects must first be copied over if they are needed in the destination
- Replicated:
- Object uploads
- Object updates (new versions)
- Delete markers (deleting an object without a version ID)
- Not replicated:
- Permanent deletes of a specific object version
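[ A replication-config sketch with boto3; both buckets must already exist with versioning on, and the IAM role (placeholder ARN) must allow S3 to replicate on your behalf: ]

    import boto3

    s3 = boto3.client('s3')

    s3.put_bucket_replication(
        Bucket='source-bucket',
        ReplicationConfiguration={
            'Role': 'arn:aws:iam::123456789012:role/replication-role',
            'Rules': [{
                'Prefix': '',  # empty prefix -> replicate every new object
                'Status': 'Enabled',
                'Destination': {'Bucket': 'arn:aws:s3:::dest-bucket'},
            }],
        })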
+ Life Cycle Management
- Can be used in conjunction with versioning
- Can be applied to current and previous versions
- Transition to S3-IA: objects must be at least 128 KB and 30 days past the creation date
- Transition to Glacier: 30 days after S3-IA
- Glacier: not available in all regions, so check availability
- Permanently delete (expiration); see the sketch below
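[ A lifecycle sketch with boto3 matching the timings above; the bucket name is a placeholder: ]

    import boto3

    s3 = boto3.client('s3')

    # 30 days -> S3-IA, 60 days -> Glacier (30 days after IA), delete after a year
    s3.put_bucket_lifecycle_configuration(
        Bucket='my-example-bucket-12345',
        LifecycleConfiguration={'Rules': [{
            'ID': 'archive-then-expire',
            'Prefix': '',
            'Status': 'Enabled',
            'Transitions': [
                {'Days': 30, 'StorageClass': 'STANDARD_IA'},
                {'Days': 60, 'StorageClass': 'GLACIER'},
            ],
            'Expiration': {'Days': 365},
        }]})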
+ MFA for deletion
- Prereq: Versioning must be enabled for this
+ Serve private content using CloudFront
- Use signed URLs or signed cookies when creating the CloudFront distribution
- Use signed URLs in the following cases:
- You want to use an RTMP distribution; signed cookies aren't supported for RTMP distributions
- You want to restrict access to individual files, for example an installation download for your application
- Your users are using a client (for example, a custom HTTP client) that doesn't support cookies
- Use signed cookies in the following cases:
- You want to provide access to multiple restricted files, for example all of the files for a video in HLS format or all of the files in the subscribers' area of a website
- You don't want to change your current URLs
- Use an Origin Access Identity (OAI) when creating the CloudFront distribution
- Create a CloudFront Origin Access Identity user and give it access to the bucket
- Remove read access from Everyone
- Cross-check in the "Bucket Policy" that the OAI user is granted access and Everyone read is disabled
- Now S3 can only be accessed through CloudFront and not directly (signed-URL sketch below)
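[ A signed-URL sketch using botocore's CloudFrontSigner; the key-pair ID, .pem path, domain, and expiry are placeholders (needs the `rsa` package): ]

    import datetime
    import rsa
    from botocore.signers import CloudFrontSigner

    def rsa_signer(message):
        # Sign with the private key of your CloudFront key pair
        with open('cloudfront-private-key.pem', 'rb') as f:
            private_key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, private_key, 'SHA-1')

    signer = CloudFrontSigner('APKAEXAMPLEKEYID', rsa_signer)
    url = signer.generate_presigned_url(
        'https://d111111abcdef8.cloudfront.net/private/file.jpg',
        date_less_than=datetime.datetime(2018, 12, 31))  # URL expires after this
    print(url)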
+ S3 Security
- By default, all newly created buckets are private
- Make a file/bucket public to access it anonymously
- Bucket policies help you manage permissions at the bucket level
- Access Control Lists -> control access per file
- Access logs can be delivered to another bucket or another AWS account
- Encryption
- In transit (client to bucket)
- SSL/TLS/HTTPS
- At rest
- Server-Side Encryption (SSE)
- S3-managed keys (SSE-S3)
- KMS keys (SSE-KMS)
- log of who used the keys and when
- create and manage the keys yourself
- Customer-provided keys (SSE-C), so Amazon does not store the key
- Client-Side Encryption
- encrypt the data on the client side before uploading (SSE sketch below)
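[ An SSE sketch with boto3; the bucket name and KMS key ID are placeholders: ]

    import boto3

    s3 = boto3.client('s3')
    bucket = 'my-example-bucket-12345'

    # Server-side encryption with S3-managed keys (SSE-S3)
    s3.put_object(Bucket=bucket, Key='a.txt', Body=b'data',
                  ServerSideEncryption='AES256')

    # Server-side encryption with a KMS key (SSE-KMS); key usage is logged
    s3.put_object(Bucket=bucket, Key='b.txt', Body=b'data',
                  ServerSideEncryption='aws:kms',
                  SSEKMSKeyId='1234abcd-12ab-34cd-56ef-1234567890ab')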
+ Storage Gateway
- Hybrid storage service connecting on-premises environments to AWS cloud storage
+ CloudFront
- Can set up an Origin Access Identity user
- Can set up signed URLs or cookies
- Can set a Time To Live (TTL) so an object is cached at the edge for that amount of time
- Avoid a large TTL: an object updated at the origin is only refreshed in the cache at TTL intervals
- A small TTL is preferred
- Can set up redirect pages for HTTP error codes
- Edge locations not only serve content (GET) but also forward PUT/POST requests to the origin
- Can restrict access at the country level
- Can get analytics of the accessed website
- Can use invalidation to remove cached objects from the edge locations, but it costs extra (sketch below)
- Alternative: upload a new version and update the old URL to the new one
- WAF (Web Application Firewall): layer-7 security; prevents SQL injection and cross-site scripting
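[ An invalidation sketch with boto3; the distribution ID and path are placeholders: ]

    import time
    import boto3

    cf = boto3.client('cloudfront')

    # Each listed path is removed from the edge caches (and billed per path)
    cf.create_invalidation(
        DistributionId='EDFDVBD6EXAMPLE',
        InvalidationBatch={
            'Paths': {'Quantity': 1, 'Items': ['/index.html']},
            'CallerReference': str(time.time()),  # must be unique per request
        })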
+ Bucket Policy Example
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",  // Allow or Deny
      "Principal": {
        "AWS": "arn:aws:iam::189506636723:user/Admin"  // who gets access: "*" -> everyone, or a specific IAM user ARN
      },
      "Action": "s3:ListBucket",  // all the actions the above Principal may perform on the Resource
      "Resource": "arn:aws:s3:::avinashreddyp.me"  // resource to be accessed
    },
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1F0V70EPAP349"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::avinashreddyp.me/*"
    }
  ]
}
+ Elastic Block Store (EBS)
- A virtual hard drive that can be attached to Amazon EC2 instances
- Data kept separate from the compute instance
- Like a physical hard drive
- Persists data independently and can be attached to any other instance
- Data is not lost if the EC2 instance fails
- 1 GB to 1 TB
- Created in a particular AZ
- Automatically replicated within the AZ to prevent data loss
- One EBS volume can only be attached to one instance at a time
- Many EBS volumes can be attached to one instance
- Like a hard drive
- Database-intensive applications
- Why not use the server's own hard drive?
- Because data on the instance store is deleted once the instance is stopped
- Ex: a web app serving many users, 2M pages and 4M transactions
- As the workload is I/O-intensive, use an EBS volume and leverage provisioned IOPS to reduce latency
- Walkthrough (sketch below):
- Create a standard EBS volume
- Attach it to an EC2 instance
- Shut down the instance and detach the volume
- Take a snapshot of the volume
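[ A volume/snapshot sketch for the walkthrough above with boto3; the AZ and instance ID are placeholders, and in practice you wait for each state change before the next call: ]

    import boto3

    ec2 = boto3.client('ec2')

    # The volume must be in the same AZ as the instance it attaches to
    vol = ec2.create_volume(AvailabilityZone='us-east-1a', Size=10, VolumeType='gp2')
    ec2.attach_volume(VolumeId=vol['VolumeId'],
                      InstanceId='i-0123456789abcdef0',
                      Device='/dev/sdf')

    # ...later: detach, then snapshot the volume
    ec2.detach_volume(VolumeId=vol['VolumeId'])
    ec2.create_snapshot(VolumeId=vol['VolumeId'], Description='backup of demo volume')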
+ CloudFront
- Traffic from all over the world: what is a good way to handle it?
- Simple, cost-effective way to improve performance, reliability and global reach of your entire website, for static content, dynamic content and media streaming
- Store the original version on the origin server
- Create a CloudFront distribution that references the origin location
- The edge location nearest the user serves the request with the least delay
- Detailed control over who can download
- Supports HTTP methods -> GET, PUT, POST etc.
- Works with dynamic websites
- CloudWatch monitoring to keep an eye on the CloudFront distribution
- Headers identify the device type so content can be generated per device
- Geotargeting -> find the country of the client
- Easy way to distribute content to clients and reduce latency
- Detailed cache statistics reports
- Near-real-time alarms
- Gives business and web application developers low latency and high data transfer speeds
- E-commerce and travel web apps that are highly customized
- Streams videos and live events
- News and sports applications with customized content
- Content can be cached at the edge locations
- Works with origins on EC2 or outside AWS
- Integrates with IAM, CloudWatch, CloudFormation etc.
- Free data transfer between AWS services and CloudFront
- Ex:
- Have an S3 bucket with a publicly available image file
- Create a CloudFront distribution and link it to that S3 bucket
- Then use the CloudFront DNS/image.jpg in your HTML to make CloudFront serve the request
- Delete the distribution if you no longer use it
--------Network-----
+ VPC
- Virtual Private Cloud
- How are EC2 instances exposed to the internet?
- Public-facing subnet -> reverse proxy server
- Private subnet -> all the internal services
- Can create a VPN from your company network to the VPC
- Security groups -> inbound and outbound filtering
- Network ACLs
- S3 -> can be configured to be accessible only from the VPC
- Route tables -> how your traffic is routed between subnets
- AWS VPC resources can be scaled up or down
- Manage via the VPC console, CLI, SDKs or Windows PowerShell
- Stop the instances running in the VPC
- Deleting the VPC deletes all the resources in it: instances, security groups, network ACLs etc. (setup sketch below)
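[ A VPC setup sketch with boto3: one VPC, one public subnet, an internet gateway, and a default route; the CIDR blocks are placeholders: ]

    import boto3

    ec2 = boto3.client('ec2')

    vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']
    subnet_id = ec2.create_subnet(VpcId=vpc_id,
                                  CidrBlock='10.0.1.0/24')['Subnet']['SubnetId']

    # An internet gateway plus a 0.0.0.0/0 route makes the subnet public
    igw_id = ec2.create_internet_gateway()['InternetGateway']['InternetGatewayId']
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    rt_id = ec2.create_route_table(VpcId=vpc_id)['RouteTable']['RouteTableId']
    ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock='0.0.0.0/0',
                     GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)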
+ Route53 (DNS Provider)
- DNS service, a reliable and cost-effective way to route end users
- Translates domain names -> IPs
- When a website name is typed, the DNS server resolves the name to an IP in order to talk to the server
- Route53 -> returns the IP used to load the website
- Globally distributed DNS
- Scales automatically to serve DNS requests
- Pay for what you use - hosted zones and the number of queries you get
- Use IAM to control Route53
- Purchase a domain name through Route53
- Create a hosted zone
- Create records within the hosted zone mapping the domain name to an IP (sketch below)
- Create health checks and alarms
- Failover routing policy - when the dynamic website is down (based on the health check), traffic is routed to a static website on S3
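[ A record-upsert sketch with boto3; the hosted zone ID, domain, and IP are placeholders: ]

    import boto3

    r53 = boto3.client('route53')

    # UPSERT creates the record if it is missing, updates it otherwise
    r53.change_resource_record_sets(
        HostedZoneId='Z1EXAMPLE',
        ChangeBatch={'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.example.com',
                'Type': 'A',
                'TTL': 300,
                'ResourceRecords': [{'Value': '203.0.113.10'}],
            },
        }]})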
+ API Gateway
----------Logs-----------
+ CloudWatch
- Monitoring service for AWS resources and applications: metrics, alarms, logs, dashboards
------------Highly Available------------
+ ELB
+ Autoscaling
+ Run your instances in multiple Availability Zones/regions for availability
-----------Security Measures----------
+ Create IAM users based on their role, with least-privilege permissions
+ If someone needs temporary admin access, create a role for them to switch to; don't mess with their policies
+ Use MFA for admin accounts; even if an admin account is compromised, you can use the root account to regain control
+ Never use your root account day to day; enable MFA and a hardware key and lock it down
+ Configure your AWS Organizations and IAM user roles and permissions properly
+ VPC, security groups, network ACLs; no ports open to the outside
+ A reverse proxy server at the front, hiding the other services from outside
+ SSL/HTTPS/encryption
+ AWS CloudTrail
- Records all API calls made in the AWS account as logs
- Enable CloudTrail in all regions
+ AWS Inspector
- Automatically assess your application for vulnerabilities
+ Netflix Security Monkey
- Security Monkey monitors your AWS and GCP accounts for policy changes and alerts on insecure configurations
+ aws-vault
- A vault for securely storing and accessing AWS credentials in development environments
+ Autoscaling & Route 53
- Mitigate the risk of DDoS attack
--------Backup-------------
+ CloudBerry Labs
- Back up S3, Glacier etc.
---------Login----------
+ SuperPutty
- Bookmark all the sessions and save them to an XML file
+ Pem files
- Save all the .pem files properly