https://blueprints.launchpad.net/kuryr/+spec/existing-neutron-network
The current Kuryr implementation assumes the Neutron networks, subnetpools, subnets and ports are created by Kuryr and their lifecycles are completely controlled by Kuryr. However, in the case where users need to mix VM instances and/or bare metal nodes with containers, the capability of reusing existing Neutron networks and attaching Kuryr to them is required.
Kuryr creates a Neutron network whose name is the NetworkID in the request from libnetwork when the docker network create command is issued and /NetworkDriver.CreateNetwork is called. If Kuryr is specified as the IPAM driver and the options for subnet information, --subnet, --gateway and --ip-range, are passed by users, Kuryr also creates the subnetpools and subnets. The created Neutron network is deleted by Kuryr when the docker network rm command is issued.
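The current naming behaviour can be sketched as a minimal handler. This is an illustration only: the handler name and the in-memory FakeNeutron stub are hypothetical stand-ins for Kuryr's real code path, which talks to Neutron through python-neutronclient.

```python
import uuid


class FakeNeutron:
    """In-memory stand-in for a Neutron client (illustration only)."""

    def __init__(self):
        self.networks = {}

    def create_network(self, body):
        net = dict(body["network"], id=str(uuid.uuid4()))
        self.networks[net["id"]] = net
        return {"network": net}

    def delete_network(self, net_id):
        del self.networks[net_id]


def create_network_handler(neutron, request):
    """Sketch of /NetworkDriver.CreateNetwork: the Neutron network is
    named after the Docker NetworkID taken from the request."""
    docker_net_id = request["NetworkID"]
    result = neutron.create_network({"network": {"name": docker_net_id}})
    return result["network"]


neutron = FakeNeutron()
net = create_network_handler(
    neutron,
    {"NetworkID": "286eddb51ebca09339cb17aaec05e48"
                  "ffe60659ced6f3fc41b020b0eb506d364"})
```

Because the Docker NetworkID ends up in the Neutron name attribute, the mapping between the two networks can later be recovered from Neutron data alone.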
This process itself and the network structure constructed in it are completely isolated from other networks, and Kuryr assumes only containers are connected to the network. However, if VM instances need to be connected to the same network and the network is created by the OpenStack administrators before Kuryr creates it, Kuryr is required to recognize and adapt to the existing network. The possible use cases and problems are described below:
- Kuryr needs to adapt to an existing network constructed by users
- Kuryr needs to create the network with a human readable name that can be specified by users
The mapping between libnetwork's network and Neutron's network was achieved by putting the Docker IDs of the networks in the name attributes of the Neutron networks. This is mainly because, in the old libnetwork spec and the CLI implementation, we couldn't pass arbitrary key-value pairs or labels to the remote driver. However, as of Docker 1.9.0, the capability to pass them has been implemented and the remote driver can legitimately receive Options, a JSON object containing arbitrary string key-value pairs, from users via libnetwork.
This specification proposes to use this Options object to manage the interoperability between existing Neutron networks already created by users and the networks managed by Kuryr. If a network with the name given in the request doesn't exist, Kuryr creates it and the OpenStack administrators can take over the management of the network.
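For illustration, a /NetworkDriver.CreateNetwork request carrying such user-supplied options could look like the following. The field layout follows the libnetwork remote-driver API, with the user's -o pairs nested under the com.docker.network.generic key (other request fields such as IPv4Data are omitted here, and the exact shape may vary by Docker version):

```json
{
  "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
  "Options": {
    "com.docker.network.generic": {
      "name": "foo",
      "id": "25495f6a-8eae-43ff-ad7b-77ba57ed0a04"
    }
  }
}
```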
A user creates a Docker network and binds it to the Neutron network named foo with -o name=foo:

$ sudo docker network create --driver=kuryr --ipam-driver=kuryr \
    --subnet 10.0.0.0/16 --gateway 10.0.0.1 --ip-range 10.0.0.0/24 \
    -o name=foo -o id=25495f6a-8eae-43ff-ad7b-77ba57ed0a04 \
    foo
286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364
This creates a Neutron network with the given name, foo in this case, if a network with that name doesn't exist. Otherwise Kuryr reuses the existing network with that name. If there are multiple networks named foo, Kuryr responds with an error and the command fails. To eliminate this ambiguity and avoid the error, the UUID of the Neutron network can be specified with -o id=25495f6a-8eae-43ff-ad7b-77ba57ed0a04.
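The lookup rules just described can be sketched as follows. The function name and data shapes are hypothetical; in reality Kuryr would query Neutron for the network list rather than receive it as an argument.

```python
class AmbiguousNetworkError(Exception):
    """Raised when a name matches more than one Neutron network."""


def resolve_network(networks, name=None, net_id=None):
    """Sketch of the lookup rules: an explicit UUID wins; otherwise match
    by name, failing when the name is ambiguous.  Returns the matching
    network dict, or None when Kuryr should create a new network."""
    if net_id is not None:
        for net in networks:
            if net["id"] == net_id:
                return net
        return None
    matches = [n for n in networks if n["name"] == name]
    if len(matches) > 1:
        raise AmbiguousNetworkError(
            "multiple Neutron networks named %r" % name)
    return matches[0] if matches else None
```

With two networks both named foo, resolve_network raises unless the UUID is supplied via -o id=..., mirroring the CLI behaviour above.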
If subnet information is specified with --subnet, --gateway and --ip-range as in the command above, the corresponding subnetpools and subnets are created, or the existing resources are appropriately reused based on their information such as CIDR. For instance, if the network with the name given in the command exists and that network has a subnet whose CIDR is the same as the one given by --subnet and possibly --ip-range, Kuryr doesn't create any subnet and just leaves the existing subnets as they are. Kuryr composes the response from the information of the created or reused subnet. If the gateway IP address of the reused Neutron subnet doesn't match the one given by --gateway, Kuryr nevertheless returns the IP address set in the Neutron subnet, and the command fails because of Docker's validation against the response. This makes libnetwork call /IpamDriver.RequestPool, /IpamDriver.RequestAddress if --gateway is specified as in the command above, and then /NetworkDriver.CreateNetwork.

A user inspects the created Docker network:
$ sudo docker network inspect foo
{
    "Name": "foo",
    "Id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
    "Scope": "global",
    "Driver": "kuryr",
    "IPAM": {
        "Driver": "kuryr",
        "Config": [{
            "Subnet": "10.0.0.0/16",
            "IPRange": "10.0.0.0/24",
            "Gateway": "10.0.0.1"
        }]
    },
    "Containers": {},
    "Options": {
        "com.docker.network.generic": {
            "name": "foo",
            "id": "25495f6a-8eae-43ff-ad7b-77ba57ed0a04"
        }
    }
}
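The subnet-reuse and gateway rules described earlier can be sketched as a small decision function. The name and data shapes are hypothetical simplifications of what Kuryr would derive from Neutron's subnet records:

```python
def plan_subnet(existing_subnets, cidr, requested_gateway):
    """Sketch of the subnet rules: reuse a subnet whose CIDR matches
    --subnet, otherwise create one.  The gateway in the response is
    always the one actually set on the Neutron subnet, so a mismatch
    with --gateway surfaces through Docker's own response validation."""
    for subnet in existing_subnets:
        if subnet["cidr"] == cidr:
            # Reuse: report the existing gateway, even if it differs
            # from the one the user asked for.
            return {"action": "reuse", "gateway": subnet["gateway_ip"]}
    return {"action": "create", "gateway": requested_gateway}
```

When the existing subnet's gateway is, say, 10.0.0.254 while the user passed --gateway 10.0.0.1, the response still carries 10.0.0.254 and the docker command fails, as described above.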
A user can see that the name and/or id given in the command are stored in Docker's storage.

A user launches a container and attaches it to the network:
$ CID=$(sudo docker run --net=foo -itd busybox)
This process is identical to the existing logic described in the Kuryr devref. libnetwork calls /IpamDriver.RequestAddress, /NetworkDriver.CreateEndpoint and then /NetworkDriver.Join. An appropriate available IP address is returned by Neutron through Kuryr, and a port with that IP address is created under the subnet on the network.

A user terminates the container:
$ sudo docker kill ${CID}
This process is identical to the existing logic described in the Kuryr devref as well. libnetwork calls /IpamDriver.ReleaseAddress, /NetworkDriver.Leave and then /NetworkDriver.DeleteEndpoint.

A user deletes the network:
$ sudo docker network rm foo
This makes libnetwork call /IpamDriver.ReleasePool and then /NetworkDriver.DeleteNetwork against Kuryr. Kuryr tries to delete the subnets and networks associated with the Docker network foo; however, if they have Neutron resources associated outside of Kuryr's context, Kuryr just leaves them and returns the response {} that indicates the success of the request. For instance, if VM instances are associated with ports whose fixed_ips attribute contains the ID of the subnet, the subnet is not deleted, and neither is the network whose ID is in the network_id field of the subnet.
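The teardown rule above can be sketched as a check over the ports' fixed_ips, before any deletion is attempted. The function name is hypothetical, and the port/subnet dicts mimic (in simplified form) what the Neutron API returns:

```python
def may_delete_network(ports, subnets):
    """Sketch of the teardown check: deletion is skipped when any
    port outside Kuryr's context (e.g. a VM port) still holds an IP
    on one of the Docker network's subnets."""
    subnet_ids = {subnet["id"] for subnet in subnets}
    for port in ports:
        for fixed_ip in port.get("fixed_ips", []):
            if fixed_ip["subnet_id"] in subnet_ids:
                # Still in use outside Kuryr: leave the subnet and the
                # network alone, but report success to libnetwork.
                return False
    return True
```

Either way Kuryr answers libnetwork with {}, so docker network rm succeeds from the user's point of view; only the Neutron-side cleanup is conditional.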
In step 3 of the workflow, libnetwork calls /NetworkDriver.CreateEndpoint and /NetworkDriver.Join with the NetworkID property. However, if a user passed the name with -o name=foo, the NetworkID is not put in the name attribute of the Neutron network, and therefore in the current state there is no mapping information we can obtain through Neutron data alone, because there's no field for metadata in Neutron resources.
There are three options to solve this problem at this moment:
- Wait for the tagging feature of Neutron to be implemented
  - This allows us to add arbitrary meta information to any Neutron resource; however, it's still a spec and it's unclear when it will be implemented and usable
- Request adding the HasStandardAttribute mixin to Neutron resources
  - There are no resources which have HasStandardAttribute at this moment
- Use docker-py for getting the meta information, i.e., name and/or id
  - This adds some additional complexity and overhead to the current implementation, but it's available today
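The third option could recover the mapping from Docker's own storage. A sketch of parsing the inspect payload follows; the dict itself would come from the Docker API (e.g. via docker-py's network-inspect call, whose availability depends on the docker-py version), and the helper name is hypothetical:

```python
def neutron_hints_from_inspect(inspect_result):
    """Pull the user-supplied Neutron name/id back out of the generic
    options that Docker stored for the network."""
    generic = (inspect_result.get("Options", {})
               .get("com.docker.network.generic", {}))
    return {"name": generic.get("name"), "id": generic.get("id")}


# `inspect_result` would normally come from the Docker API; here we
# reuse the payload shown in the workflow example of this spec.
inspect_result = {
    "Name": "foo",
    "Options": {
        "com.docker.network.generic": {
            "name": "foo",
            "id": "25495f6a-8eae-43ff-ad7b-77ba57ed0a04",
        }
    },
}
hints = neutron_hints_from_inspect(inspect_result)
```

The extra round trip to the Docker daemon on every endpoint operation is the overhead mentioned above.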