Opening slide - oZones architecture description (may need to pad this out)
* Hello dear OpenNebula users. Today in this screencast we are going to talk about the oZones component: how to manage several ONE instances across different administrative domains (what we call oZones), and how to create a Virtual Data Center within an oZone, in order to provide an isolated virtual environment where a Virtual Data Center administrator can manage virtual resources for her users.
* We start by browsing to the URL of our service provider, wearing the hat of the administrator of the oZones server, who has access to the different oZones.
* We enter the email and the password, and here is the oZones graphical user interface.
* We are going to create a new oZone. We input the name of the new zone, the endpoint of the XML-RPC server where the OpenNebula instance is listening, a valid username and password of an administrator of that OpenNebula instance, and the Sunstone endpoint (which can potentially live on another server).
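The same zone can also be registered from the oZones command line using a zone template. This is a sketch: the field names follow the oZones template format as we recall it, and the hostnames and credentials are purely illustrative.

```
# zone.template -- all values below are illustrative
NAME         = west-coast-dc
ONENAME      = oneadmin
ONEPASS      = opennebula
ENDPOINT     = http://one.example.com:2633/RPC2
SUNSENDPOINT = http://one.example.com:9869
```

Saving this as zone.template and running onezone create zone.template should register the zone, assuming the ozones-server is up and reachable.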
* We can now see the metadata of the oZone, and its resources (hosts, templates, virtual machines, virtual networks, images, users).
* Let's create another oZone; this time let's add our west coast US datacenter, which is running on non-standard ports. You can browse its resources as well.
* We can now see the aggregated view, which combines the resources of all zones (in this case, both): hosts, templates, virtual machines, virtual networks.
VDC creation
* Let's now create a Virtual Data Center (VDC) within a zone. We click the New button and a dialog opens up. We fill in the name of the VDC, the name of the VDC administrator and a password for her. We also choose the zone where the VDC will be created. A list of physical hosts is presented, so we can choose which hosts will be part of the VDC; we pick a couple of them. Note that we can force the sharing of physical hosts among VDCs.
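The dialog above is equivalent to a VDC template on the command line. This is only a sketch: the field names are our recollection of the oZones VDC template format, and every value is illustrative.

```
# vdc.template -- all values below are illustrative
NAME         = WebDevVDC
VDCADMINNAME = vdcadmin
VDCADMINPASS = password
ZONEID       = 1
HOSTS        = "3,7"
```

Running onevdc create vdc.template against the oZones server would then create the VDC with those two hosts.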
* Once created, we can browse the metadata of the VDC. We can see the links for Sunstone and for the command line interface. The ACLs created to enforce the isolation of the VDC are also displayed. It is worth noting that each VDC is associated with a group (created in the zone hosting the VDC), and the ID of this group is also displayed here.
First appearance of The Terminal - honey, it's full of chars!!
* Let's switch to the terminal to show how a VDC admin can manage her Virtual Data Center from the command line.
* First, she needs to change the environment variable the CLI commands use to locate the OpenNebula instance, pointing it to the reverse proxy that will resolve the URL and redirect requests to the correct zone.
* She also needs to set the credentials for her VDC admin role. With oneuser show she can get the information about her role within the OpenNebula instance.
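As a sketch of that setup: assuming the oZones reverse proxy answers at ozones.example.com and the zone is called west-coast-dc (both names illustrative, and the exact URL path depends on how the proxy is configured), the environment would look roughly like this:

```shell
# Point the CLI at the reverse proxy; the zone name in the path is what
# lets the proxy redirect the request to the right ONE instance
export ONE_XMLRPC="http://ozones.example.com:6121/west-coast-dc"

# Credentials of the VDC admin, in the usual username:password format
echo "vdcadmin:password" > ~/.one/one_auth
export ONE_AUTH=~/.one/one_auth

# Check the role: shows the user's ID and group within the instance
oneuser show
```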
* Now she is going to create storage resources for the VDC users, using pre-made templates. This is the template; now let's create the image with the oneimage command. We check that the image has been created with the "list" option. Afterwards, we publish the image and check that the operation worked.
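The sequence of oneimage calls would look roughly like this (the template file name and image name are illustrative):

```shell
oneimage create ttylinux.one   # register the image from the pre-made template
oneimage list                  # check that the new image shows up in the pool
oneimage publish ttylinux      # make it usable by the other users of the VDC
oneimage show ttylinux         # verify that the image is now public
```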
* The same process is repeated for the virtual networks. We edit the template to change the name of the virtual network. With the onevnet command we create the network, show the newly created resource, list the whole pool and publish it. We then check again that it has been published.
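The matching onevnet sequence, again with illustrative file and network names:

```shell
onevnet create public.net   # create the network from the edited template
onevnet show public         # display the newly created network
onevnet list                # list the whole virtual network pool
onevnet publish public      # share it with the users of the VDC
onevnet list                # confirm the network is now marked as public
```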
* Since the details of the infrastructure shouldn't matter to the user, we can't see host or user information outside the VDC. But we can create a user for the newly created VDC.
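From the VDC admin's shell this looks roughly as follows (username and password are illustrative):

```shell
onehost list                     # denied or empty: infrastructure is hidden
oneuser create vdcuser password  # create a user inside the VDC's group
oneuser list                     # only the users of this VDC are visible
```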
VDC user's Sunstone
* We now put on the hat of the VDC user and operate through Sunstone. We need to log in with the credentials provided by the VDC admin. Sunstone only shows the tabs that the user's functionality allows. There is one network and one image: the ones created and published by the VDC admin.
* Let's now make use of the VDC resources. First, we create a template for a virtual machine. We set the name, the memory, the CPU and 2 VCPUs. We add a lease from the public VDC network and an image from the VDC image repository, both with the default options.
* Now it is time to instantiate a virtual machine from the template. We can see its metadata, and now we just need to wait until the scheduler deploys the virtual machine onto a physical server, to start enjoying our virtualized services.
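The Sunstone dialogs above correspond to a VM template roughly like this one (template, image and network names are illustrative):

```
# vm.one -- all names below are illustrative
NAME   = web-server
MEMORY = 512
CPU    = 1
VCPU   = 2
DISK   = [ IMAGE = "ttylinux" ]
NIC    = [ NETWORK = "public" ]
```

onetemplate create vm.one registers it, onetemplate instantiate web-server asks for a virtual machine, and onevm list then shows the state moving from pending to running once the scheduler picks a physical host.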