Domain 1: Incident response
Domain 2: Logging and Monitoring
Domain 3: Infrastructure Security
Domain 4: Identity and Access Management
Domain 5: Data Protection
Domain 1: Incident response:
============================
*****Notes by trainer: https://docs.google.com/document/d/11_1lNSNMI7tRTmfBR74FOkaQbDfVZPZ7u0H4tFXRrGs/edit?usp=sharing
When you receive an abuse notification from AWS, it means one of the servers listed in the email has actually been hacked or was used to attempt an attack.
An abuse complaint can relate to the server itself, the server OS, or applications running on the server
Reasons for security incidents -
Improper firewall configuration
Lack of Web Application Firewall
Server hardening is a must
File Integrity Monitoring should always be in place (FIM tools)
Patch Management
Always scan the code with a web application scanner (vulnerability assessment tools)
Monitor for suddenly opened ports and changes in logs
And much more
Types of attacks:
------------------
> Malware - malicious files, often downloaded via pop-ups, that can damage the system
> Drive-by Downloads - a method of distributing malware; a website's index or PHP file gets infected, which triggers the download for every user who visits the site
> Phishing - presenting users with a link (or another channel such as a phone call) to trick them into disclosing sensitive information
> Brute-force attacks - repeatedly attacking a server with password guesses until the attacker finds the password that works
> SQL Injections - the attacker injects malicious SQL into a server to manipulate back-end databases. The goal is to reveal private data such as user lists, customer details, and credit card numbers.
> Man-in-the-Middle (MITM) attacks - the criminal positions themselves between your device and the server. They eavesdrop on, intercept, and manipulate communication between the two parties - this often happens on unsecured wireless networks such as public WiFi.
> Denial-of-Service (DoS) attacks - the attacker floods a website with an overwhelming amount of traffic, often using 'bots.' As a result, the system crashes and denies access to real users.
https://managewp.com/blog/security-attacks
AWS GuardDuty:
--------------
A threat intelligence service from AWS that monitors for malicious traffic or behaviour using the following data sources:
cloudtrail logs
vpc flow logs
dns logs
trusted IP list:
you can whitelist known-good IPs by adding them to a trusted IP list. For example, when you are running port scanning (a pen test) from a particular server, GuardDuty will generate a lot of alerts. To avoid this, add the IP address of the server from which you are running the pen test to the trusted IP list so that alerts are not generated for that IP address.
threat IP list:
this is to monitor all traffic coming from a known-bad IP and generate alerts (if you know the IP of the threat server)
Look into the AWS documentation if you want to know more about the GuardDuty finding types.
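As a rough CLI sketch of the two lists (the detector ID, bucket, and file names below are placeholders, not from these notes):
# Hypothetical example: register a trusted IP list so pen-test traffic does not raise findings
aws guardduty create-ip-set \
    --detector-id <detector-id> \
    --name pentest-trusted-ips \
    --format TXT \
    --location https://s3.amazonaws.com/<your-bucket>/trusted-ips.txt \
    --activate
# Hypothetical example: register a threat intel list of known-bad IPs
aws guardduty create-threat-intel-set \
    --detector-id <detector-id> \
    --name known-threat-ips \
    --format TXT \
    --location https://s3.amazonaws.com/<your-bucket>/threat-ips.txt \
    --activate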
Incident response:
-----------------
What happens when you find something in GuardDuty related to brute-force?
You can block the source IPs in a NACL (see the CLI sketch after this list)
Set up a password policy and account management to deactivate accounts after N incorrect attempts, etc.
Log monitoring
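A rough sketch of the first two steps with the AWS CLI (the NACL ID, rule number, and attacker IP are placeholders):
# Hypothetical example: deny SSH from the attacking IP in the subnet NACL
aws ec2 create-network-acl-entry \
    --network-acl-id <nacl-id> \
    --ingress \
    --rule-number 90 \
    --protocol tcp \
    --port-range From=22,To=22 \
    --rule-action deny \
    --cidr-block <attacker-ip>/32
# Hypothetical example: tighten the IAM account password policy
aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-symbols \
    --require-numbers \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --max-password-age 90 \
    --password-reuse-prevention 24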
Is your organization ready to:
Detect brute force attacks?
Detect when someone makes changes to critical files?
Identify if servers are sending traffic to unexpected locations?
Identify if someone is trying to attack web applications?
Track what exactly your users are doing in your network?
Identify when someone is scanning your network?
Incident response use cases:
---------------------------
Exposed AWS access and secret keys
Compromised EC2 instances
Exposed AWS access and secret keys:
Determine the access associated with those keys
Invalidate the credentials
Invalidate any temporary credentials issued with those exposed keys
Restore access with new credentials
Review your AWS accounts
Invalidate any temporary credentials issued with the exposed keys:
1) by adding an explicit deny policy for all services (see the policy sketch after this list)
2) Remove all permissions allocated to that user
3) Review the AWS account to see if anything else was done with the exposed keys (such as creating new IAM users, S3 buckets, etc.), based on the permissions they had
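A minimal sketch of such an explicit deny policy, using the aws:TokenIssueTime condition key to invalidate temporary credentials issued before the incident was handled (the timestamp is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "DateLessThan": { "aws:TokenIssueTime": "2024-01-01T00:00:00Z" }
      }
    }
  ]
}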
Compromised EC2 instance:
1) Lock the instance down (restrict its security groups)
2) Take an EBS snapshot
3) Take a memory dump
4) Perform forensic analysis
5) Terminate the instance
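A rough CLI sketch of the isolation and snapshot steps (the instance, volume, and isolation security group IDs are placeholders):
# Hypothetical example: swap the instance onto an "isolation" security group with no inbound/outbound rules
aws ec2 modify-instance-attribute \
    --instance-id <instance-id> \
    --groups <isolation-sg-id>
# Hypothetical example: snapshot the instance's EBS volume for forensic analysis
aws ec2 create-snapshot \
    --volume-id <volume-id> \
    --description "Forensic snapshot - compromised instance"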
Incident response plan: it will have several steps as below;
-----------------------
Preparation (make sure all controls are in place, such as all AWS security services, logging, Organizations to separate accounts, etc.)
Detection (CloudTrail logs, GuardDuty, WAF and CloudWatch alerts)
Containment (for example, if any security groups are altered, use the AWS CLI or other automation to restore those SG rules)
Investigation (CloudWatch, Config)
Recovery (once the issue is identified, automation can be used to launch a fresh server)
Lessons Learned
Penetration test in AWS:
------------------------
Vulnerability assessment refers to scanning a system to find known issues (e.g., Nessus)
Penetration testing refers to running exploits against a given system with the intention of compromising it (e.g., Metasploit)
No prior approval is needed for running pen tests or vulnerability tests against certain services; that approval clause used to exist, but AWS has since removed it for those services***
It is better not to run any VA or pen scanning against nano, micro, or small instance types
Logging and Monitoring:
=======================
***Notes: https://docs.google.com/document/d/1uv2T_huTXApm9Pu7fSz2D8Ir6gvHjC8wyiNy792LHTY/edit?usp=sharing
********************
Vulnerability, Exploit, Payload
------------------------------
Vulnerability - a flaw (bad code) in software
Exploit - a program that takes advantage of the vulnerable code to get inside
Payload - what the attacker does once inside: stealing data, ransomware, etc.
try these commands:
nmap -sV <IP>   (service/version detection scan against the target)
msfconsole      (launches the Metasploit Framework console)
Automated vulnerability scanners:
---------------------------------
Nessus is an automated vulnerability scanner
AWS Inspector
1) CVE - Common Vulnerabilities and Exposures
2) CVEs are publicly known information-security vulnerabilities
3) The primary source of CVEs is the National Vulnerability Database, managed by NIST (National Institute of Standards and Technology)
AWS Inspector:
--------------
Vulnerability scanner
Agent based scanner
AWS Inspector has pre-defined templates:
1. CVE
2. CIS Benchmark
3. Security Best Practices
4. Network Reachability
AWS Inspector Demo:
-------------------
install the agent on the EC2 instance (if the instance already has the aws-ssm agent running, you have the option to auto-install from the Inspector console)
wget https://inspector-agent.amazonaws.com/linux/latest/install
sudo bash install       (run the downloaded install script)
systemctl status awsagent
Enable the Inspector service
Configure targets
Assessment templates - select a target and the pre-defined templates
Go to assessment runs and check the status. Reports can be downloaded in HTML or PDF, or you can look into them in the findings tab.
There is a bug in Inspector: when you run an assessment for the first time, it completes within a minute and the report only contains information related to network reachability, not all the selected templates.
AWS Security Hub:
-----------------
Provides a comprehensive view of findings from the services below:
GuardDuty
Inspector
Macie
IAM Access Analyzer
Patch Manager
Firewall Manager
Apart from consolidating findings from those services, Security Hub also supports the following standards
- CIS AWS Foundation
- PCI DSS
**If you want to enable the PCI DSS standard, you need to enable the AWS Config service before enabling Security Hub.
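A rough CLI sketch for turning the service on (assuming AWS Config is already recording):
# Hypothetical example: enable Security Hub with the default standards (e.g. CIS AWS Foundations)
aws securityhub enable-security-hub --enable-default-standards
# list the standards currently enabled in the account
aws securityhub get-enabled-standards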
Web Application Firewall: (layer 7 - application layer)
-------------------------
The main aim of a firewall is "to block malicious and unauthorized traffic"
A firewall in general works on layer 3 (network) and layer 4 (transport) of the OSI model.
WAF is an application-level firewall for HTTP applications (layer 7), especially against the attacks defined in the OWASP Top 10
It applies a set of rules to HTTP-based conversations
***WAF association - ALB, CloudFront and API Gateway
***WAF cannot be associated directly with an EC2 instance
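A rough sketch of associating a web ACL with an ALB using WAFv2 (both ARNs are placeholders):
# Hypothetical example: attach an existing WAFv2 web ACL to an Application Load Balancer
aws wafv2 associate-web-acl \
    --web-acl-arn <web-acl-arn> \
    --resource-arn <alb-arn>
For CloudFront, the web ACL is attached on the distribution itself rather than via associate-web-acl.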
Systems Manager:
================
Run command
Session Manager
Parameter store
Patch manager
1) The SSM agent is installed on all EC2 instances that need to be managed by Systems Manager
2) An IAM role with SSM permissions is attached (e.g., the AmazonEC2RoleforSSM or the newer AmazonSSMManagedInstanceCore managed policy)
Benefits of Session Manager:
-----------------------------
1. Centralized access control using IAM policies
2. No inbound rules needed to be open
3. Logging and auditing of session activity to CloudWatch Logs
4. One-click access to instances from the AWS console or CLI (see the CLI sketch after this list)
5. No need for a VPN to connect to instances
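As a sketch of CLI access (assuming the Session Manager plugin for the AWS CLI is installed; the instance ID is a placeholder):
# Hypothetical example: open an interactive shell on a managed instance, no SSH and no inbound ports
aws ssm start-session --target <instance-id>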
Patch Manager:
--------------
Patch baseline - defines the rules that determine the list of missing patches that need to be installed on an EC2 instance
Maintenance Window - schedules the patching activity at a particular time (see the CLI sketch below)
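A rough sketch of creating a maintenance window with the CLI (the name and schedule are placeholders):
# Hypothetical example: weekly window, Sundays 02:00 UTC, 3 hours long with a 1 hour cutoff
aws ssm create-maintenance-window \
    --name "weekly-patching" \
    --schedule "cron(0 2 ? * SUN *)" \
    --duration 3 \
    --cutoff 1 \
    --allow-unassociated-targets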
CloudWatch Logs:
================
the CloudWatch Logs agent needs to be installed on the EC2 instance
awslogs package (awslogsd service)
Push EC2 logs to CloudWatch Logs:
---------------------------------
- create IAM role with appropriate policy
- install cloudwatch logs agent
yum install -y awslogs
- Configure
cd /etc/awslogs
awslogs.conf - at the end of this file, add all the log files to be pushed to CloudWatch Logs (see the sample stanza at the end of this section)
awscli.conf - configure the region you intend to use
systemctl start awslogsd
systemctl status awslogsd
log file for aws logs - /var/log/awslogs.log
- aws doc - https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
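A sample awslogs.conf stanza (the log group name and file path here are just illustrative; adjust to the files you actually want shipped):
[/var/log/secure]
datetime_format = %b %d %H:%M:%S
file = /var/log/secure
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = /var/log/secure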
AWS Athena:
-----------
Create a table (for example, over the CloudTrail logs stored in S3)
Run SELECT queries against it (see the sketch below)
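A rough sketch using the CLI, assuming a cloudtrail_logs table has already been created over the CloudTrail bucket (table name, database, and results bucket are placeholders):
# Hypothetical example: find recent console logins from the CloudTrail table
aws athena start-query-execution \
    --query-string "SELECT useridentity.username, sourceipaddress, eventtime FROM cloudtrail_logs WHERE eventname = 'ConsoleLogin' LIMIT 10" \
    --query-execution-context Database=default \
    --result-configuration OutputLocation=s3://<query-results-bucket>/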
AWS Config:
===========
- AWS Config is used to record resource configuration changes over time.
- Example: an EC2 instance has been hosting a website for the last 90 days. Suddenly, over the last week, there have been a lot of issues with requests. What was changed?
- These reports can be used for audit and compliance
Findings:
Root account login
MFA not enabled
Security group changes
cloudtrail enabled or not?
EC2 config changes
Container changes???
not using approved AMI?
AWS Managed rules
Conformance packs - a collection of AWS managed Config rules packaged as recommended templates (e.g., Operational Best Practices for PCI DSS)
Audit and Compliance rules:
-----------------------------
approved-amis-by-id (managed rule; see the deployment sketch below)
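A rough sketch of deploying this managed rule with the CLI (the AMI ID is a placeholder):
# Hypothetical example: flag EC2 instances that are not launched from the approved AMI list
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "approved-amis-by-id",
  "Scope": { "ComplianceResourceTypes": ["AWS::EC2::Instance"] },
  "Source": { "Owner": "AWS", "SourceIdentifier": "APPROVED_AMIS_BY_ID" },
  "InputParameters": "{\"amiIds\": \"<ami-id>\"}"
}'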
Trusted Advisor:
================
CloudTrail:
===========
Log file validation
aws cloudtrail describe-trails
aws cloudtrail validate-logs --trail-arn [ARN-HERE] --start-time 20190101T19:00:00Z
- CloudTrail adds digest files to the same S3 bucket containing the SHA-256 hashes of the log files
- when you run validate-logs, it compares the hashes in the digest files against each log file and gives you the result
- if you try to delete a digest file itself, CloudTrail validation will report an error, because when CloudTrail creates a new digest file it also includes information about the previous digest file
- CloudTrail digest files are delivered every hour
AWS Macie:
=========
- Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
- Macie automatically detects and classifies data such as PII, database backups, SSL private keys, and more.
- It works on top of S3 buckets and also uses CloudTrail logs
- If objects in an S3 bucket are encrypted with keys that Macie cannot access (for example client-side encryption or SSE-C), Macie will not be able to read those files.
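A rough CLI sketch (Macie 2 API):
# Hypothetical example: enable Macie for the account, then list any sensitive-data findings
aws macie2 enable-macie
aws macie2 list-findings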
Centralized Logging:
====================
Cross account log data: with log destination and kinesis stream:
-----------------------
########## Recipient Account ###############
Recipient Account ID: [FILL-HERE]
Sender Account ID: [FILL-HERE]
1) Create Kinesis Stream
2) Create IAM Role to allow CloudWatch to put data into Kinesis
Trust Relationship
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.region.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
IAM Policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "arn:aws:kinesis:region:999999999999:stream/RecipientStream"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::999999999999:role/CWLtoKinesisRole"
    }
  ]
}
3) Create CloudWatch Log Destination
aws logs put-destination \
--destination-name "testDestination" \
--target-arn "arn:aws:kinesis:ap-southeast-1:037742531108:stream/kplabs-demo-data-stream" \
--role-arn "arn:aws:iam::037742531108:role/DemoCWKinesis"
Output:
{
  "destination": {
    "targetArn": "arn:aws:kinesis:ap-southeast-1:037742531108:stream/kplabs-demo-data-stream",
    "roleArn": "arn:aws:iam::037742531108:role/DemoCWKinesis",
    "creationTime": 1548059004252,
    "arn": "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination",
    "destinationName": "testDestination"
  }
}
4) Associate Policy to the CloudWatch Log Destination
{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Sid" : "",
      "Effect" : "Allow",
      "Principal" : {
        "AWS" : "585831649909"
      },
      "Action" : "logs:PutSubscriptionFilter",
      "Resource" : "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination"
    }
  ]
}
aws logs put-destination-policy \
--destination-name "testDestination" \
--access-policy file://DestinationPolicy
######## Sender Account ###########
aws logs put-subscription-filter \
--log-group-name "CloudTrail/DefaultLogGroup" \
--filter-name "RecipientStream" \
--filter-pattern "{$.userIdentity.type = Root}" \
--destination-arn "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination"
Reference:
aws logs put-subscription-filter --log-group-name "CloudTrail/DefaultLogGroup" --filter-name "RecipientStream" --filter-pattern "{$.userIdentity.type = Root}" --destination-arn "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination" --profile aws2 --region ap-southeast-1
Kinesis Commands:
aws kinesis get-shard-iterator --stream-name kplabs-demo-data-stream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
aws kinesis get-records --shard-iterator "AAAAAAAAAAFLZtIq6kmA2juoKilfVONKUHYXRENk+9CSC/PAK3gBb9SAFYd5YzxMxisA/NLwMhOaj6WZVE6qzf0/d0op7Qk6M66317w4EKWiKLUbP/S2VWjvLk36Sn+KWmOVFmEzze7RE+whtaUIJuDgJsKmnNHDw8u1q28Ox6kj79Iplq3Mg1Chjv7zlv9uGeBeMaUkrmi/NAdJmQuLUGgnvtluu7KDJ6T1JP3M5GqwlO3HwK3gog=="
Domain 3: Infrastructure Security:
===================================
Bastion Host:
-------------
- Never store private keys on bastion hosts. You should always use SSH agent forwarding, so that the private key on your local machine is forwarded to further SSH connections made from the bastion host (see the example at the end of this section).
- In putty,
. Open PuTTY.
. Under “Connection” -> “SSH” -> “Auth”.
. Check the “Allow agent forwarding“.
- Keep very minimal packages installed on the bastion, mostly only SSH
- Enable security hardening
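On Linux/macOS the equivalent of the PuTTY setting is ssh -A with an ssh-agent; a rough sketch (key path, user names, and host addresses are placeholders):
# Hypothetical example: load the key locally and forward the agent through the bastion
ssh-add ~/.ssh/<your-key>.pem
ssh -A ec2-user@<bastion-public-ip>
# from the bastion, hop to the private instance without copying the key onto the bastion
ssh ec2-user@<private-instance-ip>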
VPN Server:
------------
Cyber Ghost
OpenVPN:
========
- Marketplace AMI
- the default SSH user is openvpnas
- the Admin GUI portal user name is openvpn
- You can use the key pair to log in via SSH and then change the password for the openvpn GUI user
- the OpenVPN Connect client is needed to connect to the VPN
OpenSWAN VPN server:
====================
- Install Openswan (yum install openswan); it is available by default in the Amazon Linux repositories
- /etc/ipsec.conf
- /etc/ipsec.secrets
# IPSEC Tunnels with OpenSwan
yum -y install openswan
/etc/ipsec.conf:
conn amazonec2
    # preshared key
    authby=secret
    # load connection and initiate it on startup
    auto=start
    forceencaps=yes
    # use %defaultroute to find our local IP, since it is dynamic
    left=%defaultroute
    leftsubnet=10.77.0.0/16
    # set our desired source IP to the Elastic IP. Openswan will create the interface address and route
    right=34.202.169.201
    rightsubnet=172.31.0.0/16
/etc/ipsec.secrets
35.160.152.84 0.0.0.0 %any: PSK "PpKB6SgRWwVvi9ZRNRdSF4lL0mJhLwn0"
Important Note
If you are using Amazon Linux 2, then the /etc/init.d/ipsec restart command will not work. For such cases, you can make use of systemctl.
systemctl status ipsec
systemctl restart ipsec
On the OpenSwan server:
-----------------------
When you configure a VPN between an AWS VPN and OpenSwan, you need to add the routes properly. On the side that uses the AWS-managed VPN, you just add the route and you are done. But on the EC2 instance acting as the OpenSwan VPN endpoint, you need to do a few extra steps in order to forward traffic (see the commands after this list):
- Disable source/destination checks
- Add a route entry to send traffic to the VPN EC2 instance
- Enable IP forwarding on the IPsec instance
- service network restart
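A rough sketch of those steps (the instance ID is a placeholder; the sysctl lines run on the OpenSwan instance itself):
# Hypothetical example: disable the source/destination check on the VPN EC2 instance
aws ec2 modify-instance-attribute --instance-id <vpn-instance-id> --no-source-dest-check
# enable IP forwarding on the OpenSwan instance and make it persistent
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
service network restart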
VPC Endpoints:
==============
Gateway endpoints - for S3 and DynamoDB, where you need to add routes to the route table
Interface endpoints - for all other services; routes are not required
*****For AWS services and AWS Marketplace Partner services, the private DNS option (turned on by default) associates a private hosted zone with your VPC. The hosted zone contains a record set for the default DNS name for the service (for example, ec2.us-east-1.amazonaws.com) that resolves to the private IP addresses of the endpoint network interfaces in your VPC. This allows you to make requests to the service using its default DNS hostname instead of the endpoint-specific DNS hostnames.
***** Default DNS name to the interface endpoint private IP*****
*****when you enable private DNS in interface endpoint*****
****To control endpoint traffic (for example, EC2-to-S3 traffic), you can use the IAM role attached to the EC2 instance as well as the endpoint policy
Gateway Endpoint Policy (resource-based policy)
# Full Allow Policy
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    }
  ]
}
# Restricted based on Bucket Names
{
  "Statement": [
    {
      "Sid": "Access-to-specific-bucket-only",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:List*",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::output-kplabs",
        "arn:aws:s3:::output-kplabs/*"
      ]
    }
  ]
}
**************************
Gateway endpoint Vs Interface endpoint:
**************************
Gateway endpoints are created in the AWS space, not attached inside your VPC
Hence, they require adding routes to the route table
An access (endpoint) policy controls gateway endpoint permissions
The route entries only work from within the VPC (i.e., from EC2 instances)
These endpoints are not reachable via VPN or Direct Connect (DX)
Interface endpoints are created in the VPC, inside subnets
They have an ENI attached
Access control is via security groups
Gateway Endpoints                              | Interface Endpoints
Supports only S3 and DynamoDB                  | Supports most AWS services
Must be used from inside the VPC               | Is an ENI with a security group attached
Uses the S3 public IP addresses                | Uses a private IP from your VPC to access S3
Uses the same S3 DNS names                     | Requires endpoint-specific S3 DNS names (unless private DNS is enabled)
No access from on-premises                     | Allows access from on-premises
Cross-region access not allowed                | Allows cross-region access via TGW or VPC peering
Not billed                                     | Billed
Association at the VPC level                   | Association at the subnet level
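A rough CLI sketch of creating each type (the VPC, route table, subnet, and security group IDs are placeholders; the region in the service names is just an example):
# Hypothetical example: gateway endpoint for S3 (routes are added to the specified route tables automatically)
aws ec2 create-vpc-endpoint \
    --vpc-id <vpc-id> \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids <rtb-id>
# Hypothetical example: interface endpoint for the EC2 API with private DNS enabled
aws ec2 create-vpc-endpoint \
    --vpc-id <vpc-id> \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ec2 \
    --subnet-ids <subnet-id> \
    --security-group-ids <sg-id> \
    --private-dns-enabled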
Stateless Vs Stateful Firewalls:
================================
AWS Security Groups are Stateful.
NACLs are stateless
Stateful: when a request is initiated to a server on port 22, a source IP and a source port are used. The source port is picked from the ephemeral range, roughly 1024 to 65535. With a stateful firewall, you do not have to care about the source port: the firewall tracks the connection and the response is returned to the client on the same connection.
A stateful firewall maintains connection state and knows which packets to allow outbound, even when outbound traffic is otherwise restricted.
A stateless firewall does not maintain connection state; for it, every packet traversing inbound or outbound is a new, separate packet.
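Because NACLs are stateless, the return traffic on the ephemeral ports has to be allowed explicitly; a rough sketch (the NACL ID and rule numbers are placeholders):
# Hypothetical example: allow inbound SSH
aws ec2 create-network-acl-entry --network-acl-id <nacl-id> --ingress \
    --rule-number 100 --protocol tcp --port-range From=22,To=22 \
    --rule-action allow --cidr-block 0.0.0.0/0
# ...and explicitly allow the outbound replies on the ephemeral port range
aws ec2 create-network-acl-entry --network-acl-id <nacl-id> --egress \
    --rule-number 100 --protocol tcp --port-range From=1024,To=65535 \
    --rule-action allow --cidr-block 0.0.0.0/0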
=========
IDS/IPS:
=========
A firewall checks whether a port is allowed or not, but it does not scan the actual payload/data coming in.
IDS/IPS looks into the data itself.
Install IDS/IPS agents on the EC2 instances; the IDS/IPS server will apply all the rules required to scan the data.
Denial of Service (DoS):
========================
How to mitigate DDoS:
----------------------
- Be ready to scale as traffic increases
- Minimize the attack surface area
- Know what is normal and abnormal
- Create a plan for attacks
AWS services where you can control DDoS attacks:
-------------------------------------------------
Shield (Layer 3 UDP Floods and Layer 4 SYN attacks)
CloudFront
Route53
WAF (Layer 7 attacks such as HTTP GET/POST floods)
ELB
VPC & Security Groups
API:
=====
AWS Organizations:
===================
- Consolidated billing
- Policy-based management
- When you create an Organization, you have two options: 1) Consolidated Billing only, or 2) All Features (policy-based control as well as consolidated billing); see the sketch below
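A rough CLI sketch (run from the account that will become the management account):
# Hypothetical example: create an organization with all features (SCPs plus consolidated billing)
aws organizations create-organization --feature-set ALL
# or with consolidated billing only
# aws organizations create-organization --feature-set CONSOLIDATED_BILLING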