@jtreminio · Last active April 11, 2017
I've deployed to two regions: `us-east-1` and `us-west-1`.
I've configured an instance in each region as my strongSwan VPN gateway, and they can connect to the
other region's instance just fine through their EIPs.
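For context, each side's `ipsec.conf` conn is shaped roughly like the sketch below; this is not my exact config, and `EAST_EIP`/`WEST_EIP` are placeholders for the two instances' Elastic IPs:

# cat >>/etc/ipsec.conf <<EOL
conn east-to-west
    keyexchange=ikev2
    authby=secret
    left=%defaultroute
    leftid=EAST_EIP        # placeholder: this instance's EIP (it sits behind 1:1 NAT)
    leftsubnet=10.0.0.0/24
    right=WEST_EIP         # placeholder: the peer region's EIP
    rightsubnet=10.0.2.0/24
    auto=start
EOL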
Each region has a single VPC with a private and a public subnet:
* us-east-1
  * Private Subnet
    * CIDR: 10.0.0.0/25
    * Route Table
      * 10.0.0.0/24 local
      * 0.0.0.0/0 {AWS NAT}
      * 10.0.0.0/16 {EC2 VPN Instance}
  * Public Subnet
    * CIDR: 10.0.0.128/25
    * Route Table
      * 10.0.0.0/24 local
      * 0.0.0.0/0 {Gateway}
* us-west-1
  * Private Subnet
    * CIDR: 10.0.2.0/25
    * Route Table
      * 10.0.2.0/24 local
      * 0.0.0.0/0 {AWS NAT}
      * 10.0.0.0/16 {EC2 VPN Instance}
  * Public Subnet
    * CIDR: 10.0.2.128/25
    * Route Table
      * 10.0.2.0/24 local
      * 0.0.0.0/0 {Gateway}
When the two VPN instances use their EIPs to connect, and the private subnets' route tables point
`10.0.0.0/16` at the VPN instance's ID, traffic flows smoothly between both regions (or multiple regions!):
`10.0.0.30` can talk to `10.0.2.41` and vice versa. Life is good.
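That `10.0.0.0/16 {EC2 VPN Instance}` route entry above is created with something like this (a sketch; the `rtb-`/`i-` IDs are placeholders):

# aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 10.0.0.0/16 --instance-id i-VPNINSTANCE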
However, I want the private subnets to route their `10.0.0.0/16` traffic to an ENI instead, so that
when the VPN instance goes down, its autoscaling group spins up a replacement VPN instance that
attaches the ENI to itself.
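The flow I'm after is roughly this (a sketch; the `rtb-`/`eni-`/`i-` IDs are placeholders): point the route at the ENI once, then have each replacement instance grab the ENI on boot.

# aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 10.0.0.0/16 --network-interface-id eni-VPN
# aws ec2 attach-network-interface --network-interface-id eni-VPN --instance-id i-NEWVPN --device-index 1

Because the route targets the ENI itself rather than an instance ID, it survives the instance being replaced.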
On the us-east-1 VPN instance I do the following (after attaching the ENI to the instance using
`awscli`, of course):
# cat >/etc/network/interfaces.d/eth1.cfg <<EOL
auto eth1
iface eth1 inet dhcp
EOL
# ifup eth1
# GATEWAY=10.0.0.129                  # the public subnet's implied router (first host of 10.0.0.128/25)
# ETH1_PRIVATE_IP_ADDRESS=10.0.0.236  # the ENI's private IP
# PRIVATE_CIDR=10.0.0.0/25            # this region's private subnet
# ip route add default via $GATEWAY dev eth1 tab 2    # separate routing table for eth1
# ip rule add from $ETH1_PRIVATE_IP_ADDRESS/32 tab 2  # traffic sourced from the ENI IP uses table 2
# ip rule add to $ETH1_PRIVATE_IP_ADDRESS/32 tab 2    # traffic destined to the ENI IP uses table 2
# ip rule add from $PRIVATE_CIDR lookup 2 prio 1000   # forwarded private-subnet traffic uses table 2
# ip route flush cache
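(These `ip route`/`ip rule` settings don't survive a reboot; once this works I'd persist them as `post-up` hooks in the same `eth1.cfg`, something like the sketch below.)

# cat >/etc/network/interfaces.d/eth1.cfg <<EOL
auto eth1
iface eth1 inet dhcp
    post-up ip route add default via 10.0.0.129 dev eth1 tab 2
    post-up ip rule add from 10.0.0.236/32 tab 2
    post-up ip rule add to 10.0.0.236/32 tab 2
    post-up ip rule add from 10.0.0.0/25 lookup 2 prio 1000
EOL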
I verify the interfaces are working:
# ip route show
default via 10.0.0.129 dev eth0
10.0.0.128/25 dev eth0 proto kernel scope link src 10.0.0.218
10.0.0.128/25 dev eth1 proto kernel scope link src 10.0.0.236
# ip route show table 2
default via 10.0.0.129 dev eth1
# ip rule show
0: from all lookup local
218: from all to 10.0.0.236 lookup 2
219: from 10.0.0.236 lookup 2
220: from all lookup 220
1000: from 10.0.0.0/25 lookup 2
32766: from all lookup main
32767: from all lookup default
# curl --interface eth0 ifconfig.co
54.236.31.50
# curl --interface eth1 ifconfig.co
52.44.62.94
I do the above on both VPN instances, swapping out their variables, and connect them together.
They can ping each other's ENI private IP addresses:
`10.0.0.236 -> 10.0.2.211` and `10.0.2.211 -> 10.0.0.236`
However, the instances in each region's private subnets now can't talk to each other.
I _believe_ the issue is that the private subnets' traffic is leaving the VPN instances through eth0,
while the VPN config is set to talk through eth1's public IP address.
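One way I could check that (a diagnostic sketch, not part of my setup above) is to watch for IKE/ESP packets on each interface while generating traffic between the private subnets, and to ask the kernel which path forwarded packets take:

# tcpdump -ni eth0 'esp or udp port 500 or udp port 4500'
# tcpdump -ni eth1 'esp or udp port 500 or udp port 4500'
# ip route get 10.0.2.41 from 10.0.0.30 iif eth1

If the tunnel packets show up on eth0 while IKE is negotiated over eth1's public IP, that would confirm the asymmetric path.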
I'm completely out of ideas on how to solve this, and have spent hours and hours trying to figure
out where I'm going wrong. Unfortunately I'm still very new to networking and feel a bit in over my head.