DEMO Script
-----------
===S3 Bucket Policy===
!D! - Demo
- Website - create a static website by enabling website hosting on a bucket
- first upload the files using aws s3 sync from the cli
- aws s3 sync ~/aai/s3-demo/website s3://millwam.com
- bucket properties - enable static website hosting at the bottom
- test access
- this should fail
- reinforce the idea that you need a bucket policy (they will see this in the lab)
- add the access to allow the connection to succeed
--snippet: Bucket policy--
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::millwam.com/*"
        }
    ]
}
--snippet: Bucket policy--
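- the policy can also be applied from the cli instead of the console (assuming the snippet above is saved as policy.json):
aws s3api put-bucket-policy --bucket millwam.com --policy file://policy.json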
- test access is working
- demonstrate versioning by modifying an index.html file for an existing website, and showing how you can revert back to the original file version
- code . (in the ~/aai/s3-demo/website directory)
- either by re-uploading the original file - which creates a new object version that becomes the current one
- or by deleting the current (latest) version, which promotes the previous version back to current
!!! - N.B. do not delete the file - just the version (maybe demonstrate what happens when you delete the whole object - a delete marker is added and the site breaks until the marker is removed)
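- a rough cli sketch of the revert-by-delete approach (the version id is a placeholder - pull the real one from the first command):
aws s3api list-object-versions --bucket millwam.com --prefix index.html
aws s3api delete-object --bucket millwam.com --key index.html --version-id <latest-version-id>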
=======================================================================================================================================================
===S3 CORS===
!D! - Cors in action
- millwam-cors bucket already created
- verify the existing bucket policy with the class - only my IP should be permitted to access the bucket
- browse to an object using a path-style URL (the default listing is virtual-hosted-style, i.e. bucketname.s3.region.amazonaws.com)
- https://s3.af-south-1.amazonaws.com/millwam-cors/index.html
- this will fail - hit inspect on the page and show the cors errors
- fix this by either:
1. Reverting to a virtual-hosted-style URL (the default for the object)
2. adding a CORS policy:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "HEAD",
            "GET"
        ],
        "AllowedOrigins": [
            "https://s3.af-south-1.amazonaws.com"
        ],
        "ExposeHeaders": [
            "ETag",
            "x-amz-meta-custom-header"
        ]
    }
]
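- the CORS config can also be applied from the cli (assuming the JSON above is saved as cors.json):
aws s3api put-bucket-cors --bucket millwam-cors --cors-configuration file://cors.json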
===========================================================================================================
===S3 Glacier===
Pre-Work
- Create an sns topic called myTopic and subscribe to it using your email account
- Ensure that the SNS topic can be called by S3 Glacier
1. Retrieve the archive inventory
aws glacier initiate-job --account-id - --vault-name Backup --job-parameters '{"Type": "inventory-retrieval"}'
aws glacier get-job-output --account-id - --vault-name Backup --job-id <job-id> inventory.json
(the job id comes from the initiate-job output; wait for the job to complete before fetching)
2. Retrieve the archive
aws glacier initiate-job --account-id - --vault-name Backup --job-parameters file://job-archive-retrieval.json
{
    "Type": "archive-retrieval",
    "ArchiveId": "kKB7ymWJVpPSwhGP6ycSOAekp9ZYe_--zM_mw6k76ZFGEIWQX-ybtRDvc2VkPSDtfKmQrj0IRQLSGsNuDp-AJVlu2ccmDSyDUmZwKbwbpAdGATGDiB3hHO0bjbGehXTcApVud_wyDw",
    "Description": "Retrieve archive"
}
3. Download the completed archive
aws glacier get-job-output --account-id - --vault-name Backup --job-id <job-id> archive.tar.gz
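- while waiting for the SNS notification, job status can be checked with:
aws glacier list-jobs --account-id - --vault-name Backup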
================================================================================================================================================
===S3 MPU===
!D! - Demo - Multipart upload (advanced - not a good idea for non-technical groups)
- Prep (already done - but review it if needed)
- make sure the aws cli multipart settings are at their defaults (8MB threshold / 8MB chunksize) so the first copy triggers a multipart upload:
aws configure set default.s3.multipart_threshold 8MB
aws configure set default.s3.multipart_chunksize 8MB
- generate files with dd
dd if=/dev/urandom of=test25.txt bs=25MiB count=1
dd if=/dev/urandom of=test32.txt bs=32MiB count=1
- background - tmux with two panes
- ctrl-b " - creates a new pane below
- ctrl-b up and down arrow - move between panes
- Step 1 - kick off a packet capture and 25 MB file copy
- In terminal 1 - sudo tcpdump -i eth0 dst port 443 -s 65535 -w /tmp/s3_mp.cap
- In terminal 2 - aws s3 cp test25.txt s3://millwam-mpu/test25.txt
- Step 2 - open the capture file in wireshark
- path is \\wsl$\Ubuntu-20.04\tmp\s3_mp.cap
- Navigate to Statistics -> Conversations
- There should be 3 or 4 TCP connections (the default multipart threshold is around 8 MiB)
- Step 3 - change the threshold for multipart uploads
aws configure set default.s3.multipart_threshold 30MB
aws configure set default.s3.multipart_chunksize 10MB
- Step 4 - perform the file copy again - with the higher threshold there should be only one TCP connection
- In terminal 1 - sudo tcpdump -i eth0 dst port 443 -s 65535 -w /tmp/s3_no_mp.cap
- In terminal 2 - aws s3 rm s3://millwam-mpu/test25.txt
- In terminal 2 - aws s3 cp test25.txt s3://millwam-mpu/test25.txt
- Step 5 - validate the modified upload settings by copying a 32 MiB file - this exceeds the 30MB threshold, so multiple connections should appear again
- In terminal 1 - sudo tcpdump -i eth0 dst port 443 -s 65535 -w /tmp/s3_32_mp.cap
- In terminal 2 - aws s3 cp test32.txt s3://millwam-mpu/test32.txt
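- if a demo upload gets interrupted, the orphaned parts linger (and are billed) - check for them with:
aws s3api list-multipart-uploads --bucket millwam-mpu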
=================================================================================================================================================
===EC2 Instance Launch - Valheim Server===
!D! - User Data
- launch an Ubuntu 20.04, t3.medium EC2 instance (spot pricing applies) and point out the options we discussed so far
- User data will install docker and docker-compose, and copy the docker-compose file from S3 to create a Valheim server
- assign the IAM role to allow S3 copy operations (policy sketched at the end of this section)
- explain the installation of the aws cli
- also link the s3 commands that we discussed previously
- apply the following User Data to the instance:
---User-Data-snippet---
#!/bin/bash
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
apt install docker-compose awscli -y
aws s3 cp s3://millwam-valheim/docker-compose.yml . --region af-south-1
docker-compose up -d
---User-Data-snippet---
- Security Group - choose the existing Valheim Group - validate the source IP is allowed correctly
- Validate connection
- sudo tcpdump -i ens5 port 2456
- sudo docker logs default_valheim_1 (or docker exec -it default_valheim_1 to poke around /var/log/ inside the container)
- Consider looking at the outputs of
- /var/log/syslog
- /var/log/cloud-init.log
- /var/log/cloud-init-output.log
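- a minimal sketch of the instance role policy for the S3 copy (bucket name from above; exact policy contents are an assumption):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::millwam-valheim/*"
        }
    ]
}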
======================================================================================================================================================
===EC2 - Instance Metadata===
!D! - Metadata
- demo stepping through metadata on a newly launched EC2 instance from the previous User Data demo
- curl http://169.254.169.254/
- look at /latest/user-data and /latest/meta-data URIs and step through them
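- if the instance enforces IMDSv2, a session token is needed first - quick sketch of the token flow:
TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id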
=======================================================================================================================================================
===EC2 - Windows UserData===
This is one to start before a break, and then look at after the break - takes a while for the instance to spin up (do not emphasise this)
---User-Data Snippet - windows name change---
<powershell>
# Requires the AWS Tools for PowerShell (bundled in Windows AMIs) and an instance role allowing ec2:DescribeTags
$instanceId = (Invoke-WebRequest http://169.254.169.254/latest/meta-data/instance-id -UseBasicParsing).Content
$nameValue = (Get-EC2Tag -Filter @{Name="resource-id";Value=$instanceId},@{Name="key";Value="Name"}).Value
$pattern = "^(?![0-9]{1,15}$)[a-zA-Z0-9-]{1,15}$"
# Verify the Name value satisfies best practices for Windows hostnames
If ($nameValue -match $pattern) {
    Try {
        Rename-Computer -NewName $nameValue -Restart -ErrorAction Stop
    }
    Catch {
        $ErrorMessage = $_.Exception.Message
        Write-Output "Rename failed: $ErrorMessage"
    }
}
Else {
    Throw "Provided name not a valid hostname. Please ensure Name value is between 1 and 15 characters in length and contains only alphanumeric or hyphen characters"
}
</powershell>
---User-Data Snippet - windows name change---
=======================================================================================================================================================
===EC2 TAG Editor===
!D! - Tag Editor
- demo the tag editor - show how to locate resources and tag them
- mention resource groups - can be used to manage resources as a group based on tags
- search for everything in af-south-1
- tag buckets with Env:Demo
- point out that name tags can also be updated
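- the same lookup can be scripted with the tagging api if anyone asks (using the Env:Demo tag applied above):
aws resourcegroupstaggingapi get-resources --region af-south-1 --tag-filters Key=Env,Values=Demo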
===================================================================================================================================================
===DynamoDB Demo===
!D! DynamoDB Demo - Create and query a table, showing read capacity units in action (gonna cost R30 a month...eep)
- Create a Table in the console
- Note - had to create the service linked role manually - should be automatic
- aws iam create-service-linked-role --aws-service-name dynamodb.application-autoscaling.amazonaws.com
- Artist as partition key, song as sort key
- add random additional fields as you go - so for example album, rating, description
- point out that you are more likely to have unique song titles so that would probably be a better option for partition key
- but artist could be used as a pk for a global secondary index(gsi)
- Create a second table, but this time only use Artist as the partition key with no sort key
- try and create two entries for Depeche Mode - first one "Enjoy the Silence" and second "Never let me down again"
- should get an error - ConditionalCheckFailedException - because with no sort key the partition key alone must be unique
- Query the first table using NoSQL Workbench
- Setup the connection using the DynamoDB role
- aws sts assume-role --role-session-name NoSqlWorkbench --role-arn arn:aws:iam::203847053205:role/DynamoTestRole
- You can perform a SELECT * FROM myMusic
- Try and insert new items and update items
INSERT INTO myMusic VALUE {'artist' : 'Depeche Mode', 'songTitle' : 'Policy of Truth'}

UPDATE myMusic
SET Album='Music for the Masses'
WHERE artist='Depeche Mode' AND songTitle='Never let me down again'
=====================================================================================================================================================
===Dynamo DB - Global Table===
!D! - Demo - convert a table to a global table
aws dynamodb update-table --table-name myMusic --replica-updates ' [ { "Create": {"RegionName": "us-west-2"}}]' --region af-south-1
- may fail on first attempt - try it again
- will take a few minutes to create
- initially there will be an update to enable streams
- then the region will be enabled
- after it comes up query again, using the new region
aws dynamodb query --table-name myMusic --key-condition-expression "artist = :v1" --expression-attribute-values file://expression.json --return-consumed-capacity TOTAL --profile sandbox --region us-west-2
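- expression.json is not included in these notes; a plausible version for the artist used in the demo:
{
    ":v1": {"S": "Depeche Mode"}
}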
- insert a new record in one region and read it in the other
- Depeche Mode - World In My Eyes
- do this in the console or using the following CLI
aws dynamodb put-item --table-name myMusic --item file://item.json --region us-west-2 --profile sandbox
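- item.json likewise; a plausible version matching the record mentioned above:
{
    "artist": {"S": "Depeche Mode"},
    "songTitle": {"S": "World In My Eyes"}
}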
- read it in the other region using the console
=====================================================================================================================================================
===VPC Buildup - time permitting===
!D!- Revisit the launch of the EC2 instance by building out a brand new VPC
- create the vpc with private subnets
- figure out why the launch will not work
- change the settings to allocate a public IPv4 address
- still won't work without an internet gateway (IGW)
- create an IGW and attach it to the VPC, then add a new route table entry via the IGW
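- cli equivalent if the console is slow (ids are placeholders):
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx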
====================================================================================================================================================
===TGW===
Prep: Deploy the template tgw.yaml
- this includes steps to build out and launch three EC2 instances in 3 VPCs
- It also attaches a VPN/CGW to the TGW for testing on-prem scenarios
Demo 1: Allow all VPCs to communicate with each other
- setup three tabs, with an ssh session to each of the instances and pings to each instance's private IP address
- the public addresses should be available as stack outputs
- In the VPC Tab, add VPC attachments for each VPC to the TGW
- by default the main TGW route table will permit all the VPCs to see each other
- in each VPC's route table we do need to add a summary route - 10.64.0.0/14 via the TGW - to reach the other VPC CIDR ranges (cli version below)
- ping should start working once this is in place
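- the summary route can also be added from the cli (route table and TGW ids are placeholders):
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 10.64.0.0/14 --transit-gateway-id tgw-xxxxxxxx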
Demo 2 (Time-Permitting): Isolate the VPCs from each other, but allow them to each access the VPN on-site resource (10.67.0.101/24)
- Download the vpn client-side configuration from the CGW in the console
- Deploy the strongswan template below
- the on-site device is simulated with strongSwan deployed in the af-south-1 region
https://aws.amazon.com/blogs/networking-and-content-delivery/simulating-site-to-site-vpn-customer-gateways-strongswan/
https://raw.githubusercontent.com/aws-samples/vpn-gateway-strongswan/main/vpn-gateway-strongswan.yml
- note the usage of the AMI parameter:
aws ssm get-parameter --name '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-ebs' --query 'Parameter.Value'
====================================================================================================================================================
===ALB Demo===
!D! - ALB with two micro instances
- launch the alb-infra stack (in ~/aai/alb-demo with the command in create-stack.aws)
- create a load balancer and target group targeting the two instances
- change the listener to https
- emphasise that the backend is using http - we are performing TLS offload at the load balancer
- use an ACM generated certificate to lock the website down (we already have one, but we can create this on the fly for the class I think)
- create a Route 53 resource record pointing www.millwam.com to the load balancer (you can do this later when you discuss Route 53)
- Demonstrate TG sticky
- open the www.millwam.com site - hard refresh and you will see red and blue pages
- go to dev tools - applications - cookies
- there should not be an AWSALB cookie there yet (probably just an AWSALBCORS cookie)
- enable TG stickiness
- hard refresh again and you will notice that the page sticks with either red or blue
- Also note that the cookie is now present
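- stickiness can also be toggled from the cli rather than the console (the target group ARN is a placeholder):
aws elbv2 modify-target-group-attributes --target-group-arn <tg-arn> --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie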
- Cleanup
- remove the R53 record-set
- delete the ALB
- delete the TG
- delete the stack
================================================================================================================================================
===Route53 - alias for load balancer===
!D! - Create a record for a public load balancer if you demo'd this earlier (see above)
- modify or create an alias for millwam.com and point it to the load balancer
================================================================================================================================================
===IAM Users Demo===
!D! - Create a new user
- show sign in using the Console and demo access-keys (in a cloud9 environment probably)
- walk through policies and options to attach to users and groups
- walk through a policy document, and identify what can be achieved
- scoped permissions - which instance can be shut down based on tag for example
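- a minimal sketch of such a tag-scoped policy (the Env=Demo tag matches the tag editor demo; the rest is an assumption):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:StopInstances",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Env": "Demo"}
            }
        }
    ]
}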
================================================================================================================================================
===IAM Environment Variables/Role Switch===
- create a user - note the access-key and secret access-key values
AKIAS65R3F6KQODRWORS
mnbnbIUm3fXQnxghTjI/A7WewGJUltZqQ8t4fJXh
*Note the region must be us-east-1...probably because the STS regional endpoint for af-south-1 is not enabled
- aws sts assume-role --role-arn arn:aws:iam::203847053205:role/S3RO --role-session-name iamtest
- This will fail
- Create a group
- Create a policy and attach it to the group
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::203847053205:role/S3RO"
    }
}
- Associate the user with the group
- aws sts assume-role --role-arn arn:aws:iam::203847053205:role/S3RO --role-session-name iamtest
- Change the environment variables to the temporary credentials returned by assume-role
export AWS_ACCESS_KEY_ID=ASIAS65R3F6KXGORN5MK
export AWS_SECRET_ACCESS_KEY=e6lNmfEENJygw/KOx6aEO3N5hHWeOAwr/ZXcYN2Y
export AWS_DEFAULT_REGION=us-east-1
export AWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjEE4aCXVzLWVhc3QtMSJHMEUCIDTNDk9Vk1K7LTf2A/n96T2ZiUCJL0YMKDeO3s/FeDbkAiEAx1McQenbBv0W3jTIomFwrecQA5IwqHObvnHWYEzKb2sqlAIIdhADGgwyMDM4NDcwNTMyMDUiDGL4x6FIkUD9I3sT9SrxAT+EH5G2GhRCrzfdxsuqTQMBBsIaEEhGSAoy/cwVq2Pbr0mj4dX5NukiGsEYMoX9PDne4A5npVpjDJ7ku5i53kflyqjOnn3gzZmIujdIy7evYcOrhipVDdw4g/FloWenb+fguXxDm+S22wuyLqW12YmnOqOLAg72aBmGI8pYkh+Adrs+qKSH6edJKiR1MRMVmE+YWitRQ+UpbcQm/n1Z7TxqXHO9R/joydHWUpk33TS/lR5KbyklOswk2sNRSF8GyDrHL+T3F6rkXpa5HgQjumausgsvucOZh/zGOlHIsD1viBrdsRtnev7Eo2w7b6BeZ6kwxO7uiAY6nQFaijdLeNCy4gLSWNIRSqK23sVsj7wBtznVxPH1n3nhhG6rbotYt0Ppl+fV4YxiFCNIPGt2VgYOf6iXU5Iz3jLbaJho3y6lyCMznkqOW1pHpHCvfS7D9K0IlIM3VlK2wJCTAygU3UksMNAzXu6xh6xjI8Q9s6ZBQVey0GD6MpnlFXYO+0i/DGJXBaHEze/riMGRHUUg3kmMV1bhSQGj
- aws s3 ls
- Try and create a new bucket in the new terminal
- this should fail with access denied
- Create the bucket in the original terminal
- this will succeed
Note that env variables override the default config file - the full credential precedence order is:
1. CLI Options come first
2. Environment Variables
3. CLI credentials file
4. CLI configuration file
5. Container credentials (IAM role associated with ECS task definitions)
6. Instance profile credentials (IAM role attached to EC2 instance)
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence
================================================================================================================================================
===CloudFormation===
!D! - Template anatomy
- use the template to prep for the ALB demo
- Prep
- create a repository named cfn-template
- aws codecommit create-repository --repository-name cfn-template --region eu-west-1
- initialize the codecommit repo by publishing the branch
- create a new branch named test
- publish this branch
- point out template settings
- room for improvement - how can I make the template available in multiple regions?
- add region specific AMIs with a mapping
#Time - add conditions to ensure that the size can be set based on prod or testing
#Time - Add Outputs to provide URL of the servers to test
- improve the template by adding a mapping for the AMI ID, looked up with Fn::FindInMap keyed on the AWS::Region pseudo parameter (see the sketch below)
- also add in Conditions (which includes a new parameter, Condition statement and reference to instance type for each EC2 instance)
- also add in outputs to provide the Public IP address of the instance for a quick test
- save changes to the template once you are done editing it
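- a minimal sketch of the mapping plus lookup (the AMI IDs here are placeholders):
Mappings:
  RegionMap:
    af-south-1:
      AMI: ami-00000000000000000
    eu-west-1:
      AMI: ami-11111111111111111
# then on each EC2 instance:
    Properties:
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]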
======================================================================================================================================
===Elastic Beanstalk===
!D! - EB CLI to deploy a sample app
PRE-WORK
- the EB CLI is installed into a virtual environment
- activate it with:
- source ~/eb-ve/bin/activate
- deactivate when done with the command:
- deactivate
- unpack a sample python app into the project directory
- wget https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples/python.zip
- unzip python.zip -d ~/aai/eb-demo
- Get git up and running
- git init
- git add .
- git commit -m "Elastic Beanstalk Application"
- Initialize the EB application
- eb init
- Create a new environment
- eb create prod
- after the environment comes up, inspect the associated cfn stack and options in the gui
- discuss .ebextensions settings, and that these can customise environments
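- handy follow-up commands once it's up: eb status, eb open (launches the environment URL in the browser), and eb terminate prod for cleanup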
================================================================================================================================================
===Elasticache Redis===
- Based loosely on the aws sample tutorial:
https://aws.amazon.com/getting-started/hands-on/building-fast-session-caching-with-amazon-elasticache-for-redis/?refid=ps_a134p000006vj0caaa&trkcampaign=acq_paid_search
Step 1 - Create Base stack with ec2 instance:
aws cloudformation create-stack --stack-name flaskapp --template-body file://ec2-app.py --region af-south-1
Step 2 - Run the following to start the sample flask app:
git clone https://github.com/aws-samples/amazon-elasticache-samples/
cd amazon-elasticache-samples/session-store
virtualenv venv
source ./venv/bin/activate
pip3 install -r requirements.txt
export FLASK_APP=example-1.py
export SECRET_KEY=some_secret_string
flask run -h 0.0.0.0 -p 5000 --reload
Step 3 - Build out the Redis Cluster
- choose the same VPC deployed in Step 1
- no need to do a backup or data import
- t3.micro and 1 read replica
- multi-az (or leave this disabled and instead disable automatic failover)
Step 4 - Configure the application to use the redis endpoint obtained from the console:
export REDIS_URL="redis://redis-ro.qet8kv.ng.0001.afs1.cache.amazonaws.com:6379"
Step 5 - Go through a few test scenarios:
export FLASK_APP=example-1.py
- test it works
export FLASK_APP=example-2.py
- supports counter
export FLASK_APP=example-3.py
- resets counter after 10 seconds (TTL)
Test Connectivity
python
>>> import redis
>>> client = redis.Redis.from_url('redis://your_redis_endpoint:6379')
>>> client.ping()
Some ideas to try
- point the url directly to a node rather than the endpoint
- test app behavior after changing the primary node and checking against the cluster endpoint vs node endpoint
=================================================================================================================================================
===ECS Demo===
Demo 1 - Create an ECR Repo and run a fargate task from a template
- Step 1 - Create an ECR Repo
- aws ecr create-repository --repository-name matt-app --image-scanning-configuration scanOnPush=true --region af-south-1
- Step 2 - Authenticate local docker to ECR Repo, create and push an image (You could also view this step using the ECR console)
- prestep - rm ~/.docker/config.json - yup it's a cock-up - may need to do it every time - will test
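- authenticate docker to the registry first - this is the standard ECR login command for the account/region used below:
aws ecr get-login-password --region af-south-1 | docker login --username AWS --password-stdin 203847053205.dkr.ecr.af-south-1.amazonaws.com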
- view the Dockerfile in VSCode - code ~/aai/ecs-demo/app
- after reviewing the docker file, go ahead and build it using the command from the console (should take a minute or two)
- docker build -t matt-app .
- docker tag matt-app:latest 203847053205.dkr.ecr.af-south-1.amazonaws.com/matt-app:latest
- docker push 203847053205.dkr.ecr.af-south-1.amazonaws.com/matt-app:latest
- run the container locally to test that it works
- use docker desktop to make life easy - test it by opening http://localhost
- Optional Step 3 - Run through the getting started wizard to spin up a new fargate cluster and reference the image
- get the image uri by using
- aws ecr describe-repositories --region af-south-1 --repository-names matt-app
- expose port 80
- defaults for everything else
- point out you can introduce load balancers etc.
- also mention that this is a container orchestration option, with task definitions being the recipe for how the task is deployed
- Optional Step 4 (alternative to Step 3) - Deploy the app using a cloudformation template
- aws cloudformation create-stack --stack-name fargate-demo --template-body file://<template>
- verify that the service was deployed
- obtain the task's public IP address and connect to it in browser
- Cleanup
- delete the stack/delete the service and cluster
- delete the repository
- aws ecr delete-repository --repository-name matt-app --force
=====================================================================================================================================================
===Serverless===
Demo 2 - Deploy a sam sample application and investigate the API Gateway/Lambda function that it creates
- intro - what is SAM?
- cli driven tool to create/test/deploy serverless applications
- uses cloudformation templates with transforms
- transforms can be used to automate the creation of lambda/api gateway resources
- can include traditional resources
- allows you to also perform local testing using a docker container that simulates the backend lambda function and an API Gateway proxy
- point out the traffic flow for the test is a connection to an API Gateway endpoint which invokes the lambda function in the backend
- plenty of other examples at https://github.com/aws-samples/serverless-app-examples
- You can alternatively use the Runtime Interface Emulator docker containers
- use this for testing or as a base for creating your own
- navigate to the sam-demo/sam-app directory
- sam init
- explore the files, point out the application code, tests (unit and integration) and the template (cfn based, but a few differences)
- sam build
- look at the sub-directory .aws-sam/build
- should have the template.yaml as well as the HelloWorldFunction directories
- the new .aws-sam folder will appear automatically in vscode
- point out that the next step is to either go ahead and invoke the function locally, or deploy it
- sam local start-api
- pulls down the docker image for the appropriate runtime, fires up the container for that execution and then kills it (note that lambda does allow the container to persist for a short while for efficiency)
- allows you to simulate the lambda execution env by replicating a local REST endpoint
- set up 3 terminals (perhaps one split into two tmux panes and another for the docker inspect)
- term 1 - sam local start-api
- term 2 - curl http://127.0.0.1:3000/hello
- term 3 - docker inspect
- sam deploy --guided
- once testing has been completed, you can deploy it to AWS
- point out that initially an S3 bucket is created from which to deploy the sam application
- after the objects are uploaded to this bucket, the app stack can then be built
- show them the template and that the CodeURI points to the s3 bucket
- you can also view the processed template showing the transform in action
- view the created resources, and click through to them
- discuss the lambda/API Gateway configuration, monitor the logs/metrics for invocations
- in the stack output - click on the endpoint url for the API Gateway resource
- you can also get this by navigating to stages -> Prod
- test connectivity against the endpoint through the web browser
- point out the API-Gateway resources as the source trigger
- examine the settings for the lambda function -
- in the monitoring section of the lambda function, navigate to logs and metrics to display the invocations
- there may be a new log stream created depending on how quickly you invoke the function
- the micro-container does not get destroyed straight away; when it does, a new log stream is created
- a log stream is created per container
- point out that lambda determines when to create a new container
- cold start
- lambda supports layers that allow for code re-use, rather than repackaging the same components in every package
- Cleanup
- aws cloudformation delete-stack --stack-name sam-app
- in console empty bucket
- aws cloudformation delete-stack --stack-name aws-sam-cli-managed-default
- rm -r sam-app (do this from the demo directory sam-demo)
- delete the log group created for the lambda function
- common gotcha - creating log groups through lambda functions often results in left-over log groups
- remove this through a custom resource
==================================================================================================================================
===Storage Gateway - Volume Gateway ===
NOTE - for mutual authentication to work ensure that the TARGET's secret is entered in the configuration tab
- ensure that the INITIATOR's secret is entered in the session advanced settings and mutual authentication is selected
Step 1 - create a gateway - volume gateway
Step 2 - choose the host platform
- Microsoft Hyper-V
- download the image
- create the VM as per the instructions
- import the vm from E:\SG\unzippedSourceVM
Step 3 - attach the VM to the default virtual switch
Step 4 - create two new scsi disks for the VM
- one is for the cached data
- the other one is for the upload buffer
Step 5 - login to the appliance after starting it up
- identify the IP address to use in the Wizard
Step 6 - Create a new volume and configure CHAP secret
- copy the initiator identifier from the client
- run iscsicpl.exe on the client
- define the initiator and target secrets
Step 7 - in the iscsi client click on the discovery tab
- Discover Portal
- Enter the IP address of the storage gateway
Step 8 - Configure CHAP authentication using the target's secret
- Configuration tab
- CHAP
- type the TARGET!! secret
Step 9 - Configure the initiator session
- Targets
- Properties
- Add Session
- Advanced
- Target Portal IP - point to the target
- Enable CHAP log on
- Enter the INITIATOR!!! secret in the Target secret field
- Perform Mutual Authentication
- Click OK
Step 10 - Use disk management to format the disk with FAT32
Step 11 - create a file on the disk
Step 12 - Create a snapshot of the volume - should take a minute or so
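- cli alternative for the snapshot, if useful (the volume ARN is a placeholder):
aws storagegateway create-snapshot --volume-arn <volume-arn> --snapshot-description "demo snapshot"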
Step 13 - launch a windows instance
- add a new volume from the snapshot id that was created in Step 12
Step 14 - Disk Management in the EC2 instance - assign a drive to the volume
- access the file
=============================================================================================================================================