aws ec2 attach-network-interface \
--region $AWS_REGION \
--instance-id $INSTANCE_ID \
--device-index 1 \
--network-interface-id $ONLINE_ENI_ID
We at iFixit needed a way to attach an Elastic Network Interface (ENI) to a running Fedora instance. Unfortunately, only Amazon Linux AMIs can "automatically" attach an ENI: the ec2-net-utils package handles this on Amazon Linux, but it has questionable support for other distributions. If you have found this page, you have seen that this is not a well-supported process. We have figured out a pretty reliable, general-purpose way to accomplish this that we want to share.
The best information we found at the time was this blog post: https://www.internetstaff.com/multiple-ec2-network-interfaces-on-red-hat-centos-7/
Unfortunately, while it shows the configuration changes we needed, it does not provide a solid code example. There are a few moving pieces.
An ENI is effectively a "virtual ethernet cable". Creating an ENI means allocating a VPC interface semi-permanently to this "cable". Additionally, VPC security groups can be applied individually to interfaces, and ENIs can have Elastic IPs associated with them, making them quite flexible.
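Allocating one of these "cables" is a single aws cli call. This is a sketch, not the script from this post; SUBNET_ID and SG_ID are placeholder variables you would set yourself:

```shell
# Sketch: allocate a new ENI in a subnet with a security group applied.
# SUBNET_ID and SG_ID are placeholders, not part of the post's script.
create_eni() {
    aws ec2 create-network-interface \
        --region "$AWS_REGION" \
        --subnet-id "$SUBNET_ID" \
        --groups "$SG_ID" \
        --description "failover ENI" \
        --query 'NetworkInterface.NetworkInterfaceId' \
        --output text
}
```

The `--query`/`--output text` pair makes the new ENI ID easy to capture into a variable.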
This is in contrast to the "default ENI" that is created with an instance: separately created ENIs are not destroyed on instance termination and can be detached and re-attached to other EC2 instances. One notable limitation is that an ENI attachment is bound to a single VPC subnet.
Our use case for this is a Route53 private zone with an A record pointed at that ENI. This means that even after a service caches that record for a name, we can still control which instance traffic is sent to: we "move around the cable" by reassigning the ENI. Further, Elastic IPs can be assigned to ENIs, meaning we can fail over public traffic in this same way. This is a very useful paradigm.
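Pointing the private A record at the ENI's address might look like the sketch below. HOSTED_ZONE_ID, RECORD_NAME, and ENI_PRIVATE_IP are placeholder names, not variables from our script:

```shell
# Sketch: UPSERT an A record in a private hosted zone to the ENI's
# private IP. All three variables here are illustrative placeholders.
update_private_record() {
    aws route53 change-resource-record-sets \
        --hosted-zone-id "$HOSTED_ZONE_ID" \
        --change-batch '{
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "'"$RECORD_NAME"'",
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "'"$ENI_PRIVATE_IP"'"}]
                }
            }]
        }'
}
```

A short TTL keeps the cached-record window small, though the whole point of the ENI approach is that even stale caches still reach the right instance.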
The aws cli gives us a lot of power to introspect and modify our running EC2 instances. There is also the metadata curl endpoint available to directly pull running instance metadata.
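Pulling that metadata looks something like this sketch (IMDSv1 style; it only works from on the instance itself, and the specific values gathered here are illustrative):

```shell
# Sketch: pull running-instance facts from the link-local metadata
# endpoint. Only works on an EC2 instance; paths are IMDSv1 style.
instance_metadata() {
    local base="http://169.254.169.254/latest/meta-data"
    INSTANCE_ID=$(curl -s "$base/instance-id")
    MAC=$(curl -s "$base/mac")
    # per-interface data is keyed by the interface's MAC address
    SUBNET_CIDR=$(curl -s "$base/network/interfaces/macs/$MAC/subnet-ipv4-cidr-block")
}
```

The per-interface paths under `network/interfaces/macs/` are where most of the routing parameters come from.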
We grab lots of information and munge it all together to get the list of parameters for the various network configs and further "action" aws calls.
This is the key. To get a dynamically attached network interface up and pushing traffic, we need:
- ifcfg rules for the newly attached interface
- Route rules for this new interface
- A Route Rule (PBR) to return traffic back through the correct interface
The last item, PBR, is a complex topic. Please check out this blog post about it for some more background.
We are attaching a second interface into the same subnet, so we need a way for routing to decide which interface to return traffic on. That's a pretty powerful idea.
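The three pieces above might be sketched like this. The function parameters (config directory, IP, gateway) and the choice of routing table 2 are illustrative assumptions, not lifted from our script:

```shell
# Sketch: write the three eth1 config pieces. The directory, IP, and
# gateway are parameters so this can be dry-run outside /etc/sysconfig.
write_eth1_config() {
    local cfg_dir="$1" eth1_ip="$2" gateway="$3"

    # ifcfg: bring eth1 up, but do not let it claim the default route
    cat > "$cfg_dir/ifcfg-eth1" <<EOF
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes
DEFROUTE=no
EOF

    # route-eth1: a dedicated routing table for traffic leaving eth1
    cat > "$cfg_dir/route-eth1" <<EOF
default via $gateway dev eth1 table 2
EOF

    # rule-eth1 (PBR): replies sourced from eth1's address use table 2
    cat > "$cfg_dir/rule-eth1" <<EOF
from $eth1_ip/32 table 2
EOF
}
```

The rule file is the PBR half of the story: any packet whose source address is eth1's address gets looked up in table 2, whose only route sends it back out eth1.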
This is actually the easy path: see the aws cli call from the introduction. But there is a hangup. We can only attach an ENI that is in the "available" state, which means detaching it from another instance if it is in the "in-use" state. Detaching an interface is done by attachment ID, so we need to dig that out of the output of the describe-network-interfaces command.
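That detach-then-attach flow might be sketched like this. The function wrapper and the waiter call are our additions for illustration, not the post's script verbatim:

```shell
# Sketch: "steal" an ENI. Dig out the attachment ID, detach if in use,
# wait for "available", then attach here at device index 1.
steal_eni() {
    local attachment_id
    attachment_id=$(aws ec2 describe-network-interfaces \
        --region "$AWS_REGION" \
        --network-interface-ids "$ONLINE_ENI_ID" \
        --query 'NetworkInterfaces[0].Attachment.AttachmentId' \
        --output text)

    # --output text prints "None" when the ENI has no attachment
    if [ "$attachment_id" != "None" ]; then
        aws ec2 detach-network-interface \
            --region "$AWS_REGION" \
            --attachment-id "$attachment_id"
    fi

    # block until the ENI reaches the "available" state
    aws ec2 wait network-interface-available \
        --region "$AWS_REGION" \
        --network-interface-ids "$ONLINE_ENI_ID"

    aws ec2 attach-network-interface \
        --region "$AWS_REGION" \
        --instance-id "$INSTANCE_ID" \
        --device-index 1 \
        --network-interface-id "$ONLINE_ENI_ID"
}
```

Without the wait, the attach call can race the detach and fail because the ENI is still "in-use".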
systemctl restart network
Fingers crossed? In testing this hasn't caused us any problems.
In actuality, this is less dangerous than it might seem. The network service only manages the configuration of the network stack, so restarting it does not actually take any interfaces down. In practice there is a small pause in the connection, but it does not drop established sessions.
The script to accomplish all this is here: Do the Thing!
I did my best to accomplish this in "Pure Bash". Of course we still need:
- curl
- jq
- the aws cli
But hopefully this is a reasonable list of required packages. Just in case, for the lazy among us, run this:
yum install -y curl jq awscli
For the sake of our scripts, most of our instance configuration is pulled from the environment. This should make it easy to extend for your own use case, as well as for CM and orchestration environments.
One of the key assumptions of this script is that the ENI to be attached becomes the new eth1 interface. If you need to attach more than one interface to a single instance, you will need to parameterize out the 1 everywhere to be programmable for your third, fourth, etc. interface.
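That parameterization might start with something as simple as deriving the config paths from an index. This helper is a hypothetical sketch, not part of the script:

```shell
# Sketch: derive per-interface config paths from a device index instead
# of hard-coding eth1. The routing table number could follow the same index.
eni_paths() {
    local index="$1"
    local dev="eth${index}"
    echo "/etc/sysconfig/network-scripts/ifcfg-${dev}"
    echo "/etc/sysconfig/network-scripts/route-${dev}"
    echo "/etc/sysconfig/network-scripts/rule-${dev}"
}
```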
Additionally, there is a "bug" that is not quite so easy to account for, in that this line:
echo 'GATEWAYDEV=eth0' >> /etc/sysconfig/network
does not seem to work exactly as advertised. When attaching the new eth1 interface, the default route still gets pulled over to eth1. You can verify this with the ip route command after stealing the ENI. You can change this back with a command like:
ip route add default via $LOCAL_GATEWAY dev eth0
With the ONLINE_ENI_ID environment variable set, running this script will "steal" and attach the ENI.
Perhaps this script should use a CLI argument instead of the environment? In our stack, we use environment variables for almost all server configuration tasks; our orchestration tooling is built around maintaining that environment per server instance role. Hence our use of the ONLINE_ENI_ID variable. In other use cases, you might want an export ONLINE_ENI_ID="$1" at the top of this script.
Additionally, as mentioned in the caveat, there are some assumptions around the new ENI attachment being assigned to eth1. This limits flexibility a bit, and the configuration is not necessarily idempotent.
Lastly, the script makes the assumption that the ENI ID specified exists. Perhaps some additional tooling could be built to create a new ENI from a specification if one does not already exist.
Also, we restart the network stack after we attach the ENI. This means there is a brief window where the new interface is attached, but not yet configured. We may want to restart the network service before we attach the ENI. That may reduce the time window where the interface is attached but not yet accepting connections. We should test this further.