Cloud Manual

The LRZ Cloud portal is a new service offering from LRZ for its customers. The Cloud addresses customer needs that cannot be satisfied through other LRZ services, yet are mission critical to research success. These needs may include, but are not limited to, the following scenarios:

  • quick tests (turnaround in minutes, not days) that require a clean slate and root access
  • usage of ready-made software packages in the form of virtual machine images
  • need for operating systems that are not supported by LRZ
  • a platform for students to have their own system without the need for physical hardware (diploma theses, practicals, classes, teaching)
  • platform for Big Data applications (e.g., Hadoop)

The idea is to offer a responsive service that allows customers to upload and use their own virtual machine images. This is also called Infrastructure as a Service (IaaS) and offers maximal flexibility to our customers.

While we strive to maintain a high level of availability, the LRZ Cloud is not meant to host long-running, critical services; that demand is already covered by the LRZ VMware infrastructure. The LRZ Cloud, on the other hand, may encounter announced or unannounced downtimes.

The LRZ Cloud is intended to enable your research to the fullest. However, due to limited resources, we must take care to avoid monopolisation of the Cloud resources by individual users. This is implemented through budgeting and accounting services on the Cloud, similar to those found in commercial Cloud offerings such as the Amazon Cloud.

After an internal evaluation, LRZ decided to base its Cloud offering on the European Cloud middleware OpenNebula, called ONE for short.

How to Get Access

The LRZ Cloud is coupled to LRZ’s central user management system, LRZ SIM. Thus you need an LRZ SIM account belonging to an LRZ SIM project in order to use the Cloud system. Account activation for Cloud services is performed by LRZ. If you want to use the LRZ Cloud, please send an email to cloud-support@lrz.de asking for Cloud activation, or submit a request to the LRZ Service Desk. Please state:

  • the account you would like to use (we will check if it is suitable) and your LRZ account ID;
  • the intended usage of the Cloud (research goal, and why you need the Cloud rather than another LRZ service such as SuperMUC or the Linux Cluster);
  • the anticipated resources you will need (CPU hours, RAM, disk usage, parallel or serial usage, number of VMs needed at any given time);
  • the minimal and maximal runtime of your individual jobs.

You will receive a confirmation email from us once we have enabled your account for the Cloud platform.

How to Access the Cloud Platform

The Cloud management GUI is available at https://www.cloud.mwn.de. Log in with your LRZ SIM username and password.

OpenNebula logon screen

How to work with OpenNebula

After a successful login, the user is presented with a dashboard (detailing the resources used by the current user)

OpenNebula dashboard

and the main menu, on the left part of the web interface, where one can become familiar with the main concepts of ONE. Please note that this document explains the usage of the ONE web-based Graphical User Interface (GUI), also known as Sunstone.

OpenNebula main menu

  • Budgeting: budget accounting. Data collection runs at one minute past the hour and records the consumption in the last time window (the last hour). This means that fractions of an hour can be accounted, which makes the system fairer towards the users. If a VM is launched, for instance, twenty minutes after the script has run, only the forty minutes of effective usage will be deducted at the next sampling time. In fact, the granularity of the accounting system is one second.
  • Instances of Virtual Machines: this view shows the user a summary of the running VMs, i.e., the instantiated templates, including all the details, such as the allocated IP(s). From here, it is also possible to open a VNC window to have direct access to a particular instance.
  • Instances of running Services: the list of multi-tier applications running in OneFlow. This is an advanced topic and it is not required for basic usage of the service. Please check the linked documentation for additional information.
  • Templates of Virtual Machines: this is where the user shapes the resources he/she wants to allocate. More details are given in the following sections.
  • Templates of Services: the list of multi-tier applications defined to work by means of OneFlow. This is an advanced topic and it is not required for basic usage of the service. Please check the linked documentation for additional information.
  • Datastores: the physical space hosting the images. Usually the user has no control over it and should refer to the instructions received from the administrators during account setup.
  • Images: the disk images for the VMs. An image can contain the operating system (OS) to boot a VM, a CD-ROM image to install a VM, or a datablock device providing spare space for data. An image is one of the items that the user should customize (with his/her OS, user environment, and applications), and this capability is the rationale behind the ONE service. More details on how to work with images will be given later. However, please keep in mind that only raw and qcow2 formats are supported.
  • Files: this section is used in some advanced cases, when the user wants to boot a machine using a kernel not contained on the disk image (currently discouraged by LRZ) or for customization purposes. For this topic, please refer to the official ONE documentation page.
  • Virtual Networks: the networks that can be accessed by the running VMs. Usually the LRZ provides access to a pool of public routable IPs and to the internal LRZ network (with a gateway for outgoing traffic to the general Internet), including the usual LRZ services such as NAS, TSM backup, and the MWN network.
  • Settings: a panel with some additional information. It is used to set the default SSH public key to be injected into the virtual machines, or the password for OpenNebula's EC2 interface (an advanced topic, not needed for normal usage of the LRZ Compute Cloud).

Working with images

The first step consists of creating an image to work with. The Storage > Images menu (on the left pane) offers a list of the images available to the user and the possibility to create a new one by clicking on the green button (denoted by +, in the top left corner of the interface).

one_img_list

The following screen is presented.

OpenNebula create image

In order to have a working image, it is necessary to specify a name and a type. The supported types, as per the respective drop-down menu, are

  • Operating System image: the image contains a bootable kernel;
  • Readonly CD-ROM: the image is a CD-ROM image, possibly bootable;
  • Generic storage datablock: the image is an empty container for a filesystem (to be created by the user after attaching the disk to a VM), for hosting data or a bootable kernel.

Usually only one datastore is available to host the image, and it is already selected; in case of doubt, contact the administrators of the service.

Most importantly, the user has to specify the location of the image, choosing among:

  • Provide a path: the corresponding text field should contain a path to a file on the frontend machine (i.e., the scratch folder named after the user's account in /media/scratch, in case the user copied the image there) or a URL pointing to a file on a webserver on the Internet (the typical case: the CD-ROM image of a Linux distribution). Please be aware that only raw and qcow2 images are supported.
  • Upload: in this case the user uploads a file residing on his/her machine to the frontend machine. Clicking on the Choose File button opens a window showing the content of the local machine. Please note that this is a computationally expensive operation and it could time out. This is an artifact of the user interface; after the upload has completed, check the size of the image in the corresponding list to make sure the upload did not abort prematurely. Please keep in mind that the image stays in the status Locked till the end of the upload. Some more details and caveats are available. Please be aware that only raw and qcow2 images are supported (a format check sketch follows this list). The direct upload through the browser of large (i.e., greater than 1 GB) files is highly discouraged; in this case, please use the previous method.
  • Empty disk image: this option is available only if the image type is Generic storage datablock. The user can create an empty datablock device, specifying its size. This is equivalent to a raw disk; it is up to the user to format the newly attached disk from inside the VM, creating the desired filesystem (by means of the mkfs utility, for example), before mounting the volume itself.
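
Both creation methods require files in raw or qcow2 format. If the format of a local file is in doubt, it can be checked and, if necessary, converted before the transfer; a minimal sketch using the qemu-img utility (the file names are illustrative):

    qemu-img info mydisk.img                            # report the detected format and the virtual size
    qemu-img convert -O qcow2 mydisk.vmdk mydisk.qcow2  # rewrite the image in a supported format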

Some advanced options are also available, though not mandatory:

  • BUS: the device bus ONE is going to use to create the disk device in the VM. It can be Parallel ATA for IDE disks, SCSI for SCSI disks, or Virtio for VIRTIO paravirtualised disks. The latter is used by default, for performance reasons, and is supported by recent kernels. If a different behaviour is requested, please keep in mind the extra effort needed to emulate the interface.
  • Target: the bus position (on the I/O bus of the VM) at which ONE is going to create the disk device. For example, it is possible to specify hda for a disk on the first IDE channel, sda for a disk on the first SCSI channel, or vda for a disk on the first Virtio channel. The last letter of the combination (in this case a) stands for the position on the channel: the second slot would be identified by b, the third by c, and so on. If the user specifies these parameters, he/she should take care that they are unique, otherwise an error will be thrown during instantiation.
  • Image mapping driver: the format of the disk image. The only choice here is qcow2, meaning that data block disks are created only as qcow2 compressed containers.

The Persistent flag means that the VM runs directly on the image in the datastore. Any modification is saved directly to the original disk file (the mere fact that the VM runs leads to a modification of the disk image, e.g., through the system log files). By default the image, once deployed, is non-persistent, that is, it is replicated and the copy is run by the worker node; no changes are saved into the original image. Please refer to this section to learn how to save disk images attached to virtual machines.

Marking an image as persistent is useful to set up and install the operating system of the VM at the beginning, or to perform tests and configuration. Once satisfied, the flag can be switched back to no and the image can be deployed multiple times.

A persistent image cannot be public: only the owner should be able to run it, so the permissions should be set accordingly.

Working with Virtual Networks

The purpose of virtual networks is to assign a MAC address to the NICs (network interface cards) of a VM, while IP addresses (and the network configuration) are issued by a DHCP server. This means that, when a VM is stopped or undeployed (see this section later on):

  • each NIC keeps its MAC address; the address is returned to the virtual network only once the VM is shut down (i.e., at the end of its lifetime, once removed from ONE);
  • the IP is returned to the DHCP server, unless a reservation has been granted. When the VM is resumed, the IP may change.

This behaviour is in line with what happens in real life with physical hardware: the MAC address sticks to the NIC for the whole lifetime of the VM, while the IP may vary from time to time, especially when the VM is disconnected from the network, since the address space is managed by a DHCP server. This may sound annoying, but:

  • it allows us to serve more users, offering the cloud as a "self service" without partitioning the network each time we accommodate a new group (i.e., less time to access the platform and no reconfiguration of the existing virtual networks);
  • as long as the VM is running, the IP will not change (whereas, for example, Internet providers usually assign new IPs to end users periodically).

There are two virtual networks available for each VM:

  • private (MWN_access_x in ONE, where x is the numerical group id): it is a subnet of the MWN (Münchner Wissenschaftsnetz) and it has a gateway to the public Internet. The VMs with such an interface:
    • can reach all the services provided by the LRZ and all external machines in the Internet (in order to fetch package repositories, access data or any kind of service available);
    • can neither reach nor be reached by VMs belonging to other groups. In other words, each group runs its VMs in a Private VLAN-like network setup. Packets are forbidden at the Link Layer (corresponding to the MAC address) to cross a "project boundary". In this way we try to prevent a wrong (or malicious) setup of a VM from hindering other groups of users;
    • cannot be reached from outside the MWN. The assigned IP is not publicly routable and the default gateway of the network segment acts as a NAT (Network Address Translator), masking the address of any VM wishing to cross the boundary of the MWN. This means that the VM cannot act as a server for the whole Internet;
    • can still be reached by any other machine in the MWN (no NAT inside this network). The setup of a virtual network interface of a VM (detailed later) offers a basic packet filtering capability; however, if a restriction of the incoming traffic on certain ports is desired, it is necessary to configure a proper firewall in the VM.
  • public (Internet_access_x in ONE, where x is the numerical group id): this is a pool of publicly routable IP addresses. A VM with such an interface can be reached from the public Internet, acting as a server for the general Internet, if needed. However, as in the previous case, communication with VMs belonging to other groups via this interface is still blocked.

By means of these two virtual networks and some flags, LRZ can offer four network provisioning models (four different security zones, so to speak), as shown in the following picture.

OpenNebula security zones

The inner, and most secure, zone consists of a completely isolated VM, without any network card attached (though accessibility is still granted via VNC). Such a setup could be a starting point for inexperienced users to experiment with VMs in a very safe environment. How to obtain this? As easy as it sounds: just exclude (or remove) any network card from the template (it can be added back at any time).

A step towards a connected virtual infrastructure is represented by the second zone: a LAN (Local Area Network) configuration. In this case, a VM can only reach (and be reached by) other VMs belonging to the same group. Any communication to or from the outside world (including both the public Internet and the MWN) is still forbidden. How to obtain this? Add a network card to the VM template (or attach it to a running VM), selecting one of the two available virtual networks (preferably MWN_access) and ticking the LAN mode flag in the Advanced options of the template's Network section. If wider network access is still needed, then one of the VMs in the group should belong to one of the other two security zones and act as a gateway for the remaining VMs, enabling the forwarding option in the kernel as sketched below.
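
For the gateway role just mentioned, a minimal sketch on a Linux VM (assuming eth0 is attached to MWN_access and eth1 to the LAN-mode network; interface names and firewall tooling vary by distribution):

    sysctl -w net.ipv4.ip_forward=1                       # enable IPv4 packet forwarding in the kernel
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  # NAT the LAN traffic leaving via eth0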

The third security zone corresponds to full bi-directional (incoming and outgoing traffic) access to the MWN. How to obtain this? In the template (or when attaching a network card), just select MWN_access as the network the VM is connected to. Of course, the LAN mode flag in the Advanced options of the template's Network section should be unselected. Thanks to the NAT-o-MAT in the MWN, your VM can already reach the whole Internet, but it can itself only be reached (or attacked) from within the MWN.

The outer ring allows full connectivity to the public Internet. How to obtain this? In the template (or when attaching a network card), just select Internet_access as the network the VM is connected to. Again, the LAN mode flag in the Advanced options of the template's Network section should be unselected.

Please be sure to read and understand the section dedicated to Security Considerations detailing what LRZ expects from a VM owner.

Regarding the network card setup, it is no longer necessary to install a contextualization script in the image of the VM: a DHCP server is present to accomplish this task. The user only has to take care that the network cards of the VM get their IP via DHCP. In fact, a static configuration of the IP inside the VM will not work, though a reservation can be granted in some special cases. Network traffic will be routed to the VM only according to the network parameters provided by DHCP. The default route and the default DNS are configured too, together with the hostname. The suggested hostname is vm-AAA-BBB-CCC-DDD, where AAA-BBB-CCC-DDD are the four octets of the given IP address. The domain is cloud.mwn.de. So, for example, if a VM is assigned the IP 10.11.12.13, the Fully Qualified Domain Name will be vm-10-11-12-13.cloud.mwn.de, which is already registered in our DNS. Usually the client accepts the DHCP-provided hostname if the local hostname is equal to localhost. More details are available in the FAQ section.
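
A quick way to verify this behaviour from inside a running VM (assuming a systemd-based Linux guest) might look like:

    hostname -f         # should print, e.g., vm-10-11-12-13.cloud.mwn.de once DHCP has run
    hostnamectl status  # the static hostname should be empty or "localhost" so that the DHCP name is accepted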

If you think this is all too complicated and too much work, then please do yourself and us a favour and do not use the LRZ Cloud service!

Working with Templates

A VM Template is the main configuration tool to shape a virtual machine. 

Clicking on VMs under Templates in the main ONE menu on the left, the list of available templates is presented.

OpenNebula templates view

From here it is possible to edit the permissions associated with the template by clicking on the name and then working on the grid labelled as Permissions in the new page.

In order to create a new template, the user has to click on the green button (+) in the top left corner. A new dialogue box will open. The same interface is presented when it is necessary to edit a template: select the template to modify and choose the Update option.

The create/modify dialogue box is made up of different sections:

General

OpenNebula template general section

Here it is possible to define:

  • the name to identify the template.

  • the number of physical CPU(s) to use (CPU field): this refers to the number of CPUs of the worker (hardware) node that should be assigned to the VM. This is what the project will be accounted for.

  • the number of virtual CPU(s) to use (VCPU field). This is the number of CPU(s) as seen by the guest OS; it only has meaning inside the VM. For convenience and clarity, it should be equal to the number of physical CPU(s) requested, but nothing forbids setting CPU to 1 and VCPU to 8: the VM will look like an 8-core machine (VCPU=8) from the inside, but the performance will be that of a single "real" core (since only one CPU of the worker node is dedicated to it, CPU=1). Conversely, with CPU=8 and VCPU=1, the VM will show logged-in users one core (VCPU=1) backed by 8 real cores (CPU=8). Just remember that VCPU is meaningless in terms of accounting; only CPU matters for this purpose (see the template sketch after this list).

  • the main memory (RAM) to reserve for the VM (Memory label), in gigabytes (GB).

  • the hypervisor: the only applicable choice, KVM, is already pre-selected.
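
For reference, the three capacity values end up as plain attributes in the raw ONE template. A minimal, illustrative sketch (note that MEMORY is stored internally in megabytes, while the GUI asks for gigabytes):

    CPU    = "2"     # physical cores booked on the worker node (what is accounted)
    VCPU   = "2"     # cores presented to the guest OS
    MEMORY = "4096"  # main memory in MB (here 4 GB)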

The number of CPUs and virtual CPUs, together with the quantity of memory, can be customised at deployment time. This is the role of the three modification fields on the right side of each of the three quantities mentioned earlier: the creator of the template can give the user(s) of the template (the author and the user can be different people) the possibility to adjust these parameters. A closer view is shown in the next picture.

OpenNebula template general modification section

The possible choices of the modification drop-down menus are:

  • fixed: no modification allowed;
  • range: the creator offers a limited set of values among which the user can choose when deploying the template;
  • any value: the user can assign any value within the VM's limits.

The resource limits for a single VM are:

Number of physical CPUs                  8
RAM (in gigabytes)                      64

If these thresholds are exceeded, the scheduler will not be able to dispatch the VM, which will remain in a pending state.

There are also limits that apply to the whole group, with no particular distinction among the members, i.e., all the resources can be booked by one of the accounts of the project:

Number of physical CPUs                 64
RAM (in gigabytes)                     720
Image storage space (in terabytes)       1
Volatile storage space (in terabytes)    5
Number of MWN IPs                      128
Number of public IPs                    50

These limits exist to prevent a single group from monopolising the whole infrastructure. Please refer to this page to learn how to check the group quota and the consumption. It is worth remembering here that the quota is a measure of capacity (i.e., how many resources a user or a group is entitled to use), whereas the budget, explained later on, is a time quantity (i.e., for how long the resources assigned to a user or group can be used). For any special need, please feel free to contact us.

Important note: the group quotas are updated when a VM is deployed. The number of physical CPUs requested by the VM is subtracted from the group quota when the VM starts to run. The group quota is not updated when the VM is stopped or undeployed: the number of physical CPUs of a VM hitting this state is not added back to the group quota. Unfortunately this mechanism is deeply rooted in the cloud middleware and a change is not foreseeable in the short run. The risk for the users is to fill up the group quota with paused VMs, if not paying attention. The only way to refill the group quota bucket is to remove the VM from ONE, via a shutdown or delete operation, which is the right thing to do when the VM is not needed anymore. Another option is to undeploy the VM and then resize it: on the one hand, the VM can be quickly woken up again (see the description of the various VM states here) without losing the disk's content; on the other hand, the resources assigned can be set to a minimum (i.e., 0.5 CPU) so that the freed capacity is available for new instances.

One final remark: if you have AdBlockPlus enabled for the LRZ Compute Cloud, it might block your ability to use the sliders. Please add our site to the exception list or disable AdBlockPlus in order to use this site.

Storage

OpenNebula template image selection

In this section the user can specify the disk(s) attached to the VM. First of all, it is possible to pick an image available in the datastore(s) and accessible to the user. To increase the number of disks, just click on the blue button with a + sign on the left. If the attach operation is successful but the new disk does not show up in the VM, please refer to this section. In case of errors, it is possible to remove a disk by clicking on the black circled X button in the tab, next to the DISK label. Some Advanced options are interesting:

OpenNebula template image advanced options

The Image ID, Image name, Image owner's user ID and Image owner's user name fields are mutually exclusive and filled automatically when an image is selected from the list. The Target device, Image mapping driver and BUS options have already been introduced. You must specify the Target device for the boot disk (hda for the IDE primary channel, sda for the first SCSI disk or vda for the first virtio disk, the latter being the default choice) in case the disk image is used for booting. In all other cases it is enough to pick something from the BUS drop-down (virtio is the default in case the field is left empty). The Cache and IO policy parameters, if not specified, are filled automatically during the deployment phase with the values none and native, respectively. This is a very reasonable choice in most cases and users are not encouraged to change them. Read only should not be set to yes for boot disks, otherwise ONE will refuse to start the VM. Discard is used to notify the block device that the filesystem does not use certain blocks anymore. This is the mechanism underlying the fstrim command used on SSD disks, and the goal is to give the now unused blocks back to the system. The net result on a qcow2 image is that the size of the container can be reduced. The option should be set to unmap, but at the moment it is unsupported. Finally, Size on instantiate allows the user to specify that the qcow2 container of the disk should be enlarged up to the size indicated in this field before instantiating the template. Of course the image can only be enlarged, not shrunk.

Alternatively, it is possible to create a volatile disk on the fly, selecting the corresponding option:

OpenNebula template image volatile disk

The new datablock device will be deployed directly on the worker node and it can be used (according to the Disk type field) as swap space or as a filesystem (FS). In the latter case, the only value for File system format is qcow2, meaning that the resulting disk will be just an unformatted container. It is up to the user to create a proper filesystem on it (usually by means of the mkfs tools on a Linux guest OS) from within the VM, before the mount operation. Supposing that the volatile disk has been associated with the device vdd and it should use an ext4 filesystem, typing mkfs -t ext4 /dev/vdd at the console of the resulting VM once it is running will produce the desired result.
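
Putting the volatile disk to use then amounts to the usual format-and-mount sequence; a minimal sketch (the device name and mount point are illustrative):

    mkfs -t ext4 /dev/vdd        # create the filesystem on the raw container
    mkdir -p /mnt/scratch        # prepare a mount point
    mount /dev/vdd /mnt/scratch  # mount the new volume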

Please note that volatile disks are the best choice to add swap and scratch (i.e., temporary data storage) partitions to the VMs. However, beware that it is not possible to save them as stand-alone images.

Network

This is the networking section. The available virtual networks for the VMs are listed.

OpenNebula template network

Once again, to add an interface, just use the blue + button on the left, or click on the black circled X to remove a specific interface. Important note: if you decide to add a network card, please do select a virtual network from the list, otherwise the NIC will not be attached to any network, leading to a failed deployment. In case the attach operation is successful but the new interface does not show up in the VM, please refer to this section. Particularly interesting are the Advanced options.

OpenNebula template network advanced opts

The Virtual Network ID, Virtual Network name, Virtual Network owner's user ID, and Virtual Network owner's user name fields are mutually exclusive and filled automatically when a network is selected from the list. The default virtual network card model is virtio. This is by far the best choice performance-wise. If virtio is not supported by the OS of the VM, then the user should pick a model to emulate, such as rtl8139, which is widely supported, also by old Linux kernels. Other possible entries are ne2k_isa, i82551, i82557b, i82559er, ne2k_pci, pcnet and e1000 (see the libvirt documentation for reference). The user should be aware that avoiding virtio could lead to performance penalties due to the emulation overhead. The text field labelled MAC should be left empty; it is assigned automatically, unless a reservation has been granted. All fields regarding IPv4 and IPv6 details are disabled, since everything is assigned via DHCP.

Finally, there is the LAN mode checkbox. If ticked, the VM will ignore all the packets coming from outside its local network. All the communications through the network gateway (thus including the Internet and the DNS server for name resolution) will be dropped. The only traffic allowed is that involving the other VMs of the group. This corresponds to the second security zone described in the Virtual Networks section. Please find more practical information on a usage scenario here.

IP reservation

The section dedicated to virtual networks already clarified that VMs keep the NICs' MAC addresses for their lifetime, while IPs may change when an action is performed. In some special cases, an IP reservation can be granted. A separate virtual network is created just for the applicant, named MWN_reservation_<user account> or Internet_reservation_<user account>, containing one (or more) MAC addresses taken from the MWN_access_<group id> or Internet_access_<group id> virtual networks, respectively. Each of these MAC addresses is statically bound to a certain IP, so the NIC that gets the given MAC address will always receive the same IP from the DHCP server. Please note the following:

  • a reservation is an expensive operation and a premium feature. The fixed IP cannot be issued to any other VM and it stays blocked, even if not used, till the reservation is revoked. For this reason, please apply only if really needed, being aware that requests such as "I want a fixed IP for all my VMs" will be denied;
  • this is not a "self service", i.e., a user can not lock an IP of a running VM, nor can we (a reservation always picks an IP among the unused ones). A reservation can be granted upon an application via the usual channels specifying the virtual network (Internet or MWN), the number of IPs, the reason why it is needed and the eventual DNS hostnames to be associated (in the cloud.mwn.de domain);
  • even if the IP is known in advance, the provisioning mechanism of the network configuration to the target VM does not change. The NIC of the VM that should use the reserved IP must not be configured manually; the interface should still get the network parameters via DHCP. In fact, the DHCP server is aware of the mapping and it will assign the right IP.

Once the reservation has been assigned, the MAC(s) and IP(s) can be seen in the original virtual network (MWN_access_<group id> or Internet_access_<group id>), under the Network > Virtual Networks menu (the yellow square in the following picture), selecting the Leases tab (in green). In fact, a reservation is nothing but a lease of the parent virtual network.

Virtual Network with a reservation

In the example above, the MAC address 02:01:00:67:00:00 belonging to the virtual network MWN_access_103 has been assigned to the virtual network 88 (MWN_reservation_<username>, as will become clear later) and bound to a specific IP address (masked).

In order to effectively use the reservation, the user has to edit the template of the target VM and go to the Network section (next picture, in green). Here, among the available virtual networks, MWN_reservation_<username>, whose numerical identifier is 88 (as expected according to the previous paragraph), is available and selected (see the Name field). If the reservation includes more than one MAC address, then the desired one can be entered in the MAC text input area (in red); otherwise, ONE will pick the first available one from the virtual network MWN_reservation_<username>.

Assign a reservation to the network tab of the template

Once the template is instantiated, the VM's NIC will be assigned the MAC address 02:01:00:67:00:00 and the DHCP server will always issue the same IP to it, as long as the reservation is not revoked. Clearly, a fixed MAC/IP pair can also be added on the fly to a VM, simply by adding a NIC connected to the ad-hoc virtual network, specifying the MAC address if needed.

Finally, a reservation can also be accessed through the EC2 interface as an Elastic IP.

OS Booting

The first tab, OS Booting, includes some details of the VM to boot:

OpenNebula template boot options

Assuming that the disk image used to boot the VM also contains the kernel, the most significant options are:

  • Arch: the CPU architecture of the VM, to be chosen between i686 (32-bit) and x86_64 (64-bit);
  • Boot order: it lists all the devices that can be used to boot, namely disk images, CD-ROM images and network cards. Simply use the arrows to modify the order, if needed. Please also remember that marking an image as CD-ROM can be done only when the image is created, and not later.

The option Kernel boot parameters allows the user to specify the parameters to pass to the kernel of the guest OS, while the Path to the bootloader executable again refers to the VM. Usually they are not needed, as the easiest way to install a guest OS is to put everything, including the kernel and the boot partition, on the same disk image as the operating system. Also libvirt machine type should be left empty; it is used if specific capabilities are needed from the libvirt daemon, which is usually not the case.

OpenNebula template boot features

The Features tab offers the possibility to synchronise the VM's clock with the worker node's (Localtime option). This is suggested since all physical hosts get the time from the same source and it could save future trouble: some services running in the VM may not work because of time skew. Note that this does not exempt the VM's administrator from setting up a proper NTP service to keep the clock in sync. The tab also allows enabling ACPI support, which is relevant in order to properly undeploy and power off a VM; please remember to install the needed software (daemons and kernel support) in the guest OS. PAE is used for 32-bit VMs, in order to address more than 4 GB of RAM; APIC enables the Advanced Programmable Interrupt Controller; HYPERV unlocks some Hyper-V features, meaningful only for Microsoft Windows-based guest OSes (see this link for more details). Finally, QEMU Guest Agent creates the virtual device needed to use the QEMU guest agent (which has to be installed and configured by the user) on the guest OS. It can be used to perform operations such as acquiring the system time or synchronising write operations.
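
Regarding the clock, a minimal sketch for keeping a Linux guest in sync (assuming a Debian/Ubuntu guest; package and service names vary by distribution):

    apt-get install -y chrony      # install an NTP client/daemon
    systemctl enable --now chrony  # start it now and at every boot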

Input/output

The VNC configuration is kept in this section:

OpenNebula template IO section

A VNC connection is added to the template by default; it is not possible to remove it. The field Listen IP will be filled with 0.0.0.0. It is not necessary to specify a port (the field is disabled), but a password and a keymap can be added. Regarding the keymap, please select your keyboard configuration from the list. For details on how to use the VNC connection, please refer to the previous section.

On the right is the Inputs section, used to specify additional input devices (together with their bus) besides the PS/2 mouse and keyboard emulated by default by the hypervisor. In particular, the tablet device using the USB bus is especially handy to solve the out-of-sync mouse problem. Please be sure to click on the Add button so that the new pair ends up in the list on the right, as shown in the picture.

Context

The contextualisation section of the template deals with the custom parameters that identify each single VM. The typical, and most relevant, examples are the IP address and the SSH key(s) authorised to log in as the root user via SSH (at least at the beginning, for the configuration and setup of the VM). Regarding the IP address, no user action is needed, since it is distributed, along with the other network parameters, by a DHCP server. On the other hand, the injection of the SSH key(s) into the disk image is more problematic and requires support from inside the VM. ONE gathers the contextualisation parameters in a CD-ROM image that is attached by default to the VM, so a script or a daemon is needed in order to read the entries and place them in the right spot (i.e., write the SSH keys into the file /root/.ssh/authorized_keys):

  • the ONE developers make available a set of scripts written (and packaged) ad hoc; the framework is explained here;
  • cloud-init, a collection of Python scripts (and related services) created to initialise VM images.

The images we provide as examples have been set up using cloud-init, since it is available in the default repositories of many Linux distributions and it supports multiple Cloud vendors beyond ONE.
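
For images prepared by the user, enabling the OpenNebula datasource of cloud-init can be as simple as dropping a configuration snippet in place; a minimal sketch (the file path and option set are illustrative, please consult the cloud-init documentation):

    # /etc/cloud/cloud.cfg.d/99-opennebula.cfg
    datasource_list: [ OpenNebula ]   # read the contextualisation CD-ROM provided by ONE
    disable_root: false               # allow key injection for the root user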

For the user, the whole operation consists only of opening the template's Context tab, choosing Configuration (first row on the left), pasting the SSH key(s) into the text field labelled SSH public Key, and selecting the button Add SSH contextualization, as shown in the picture.

OpenNebula Contextualisation section

Instead of pasting the SSH key into every template (or pasting all the keys of all the potential users of the template), it is also possible to define a default SSH key for the current account and have ONE use it in the template. First, in order to associate the SSH key with the profile, open the Settings panel, using the drop-down menu at the top, as shown in the picture.

OpenNebula user settings

Alternatively, the Settings panel can also be accessed by clicking on the corresponding link in the main panel on the left. In the first tab, Info, press the edit button (the red square in the next picture) of the Public SSH Key text box and paste the SSH key.

OpenNebula default SSH key

Now, going back to the Context section of the template: if the option Add SSH contextualization is checked but no text is present in the SSH public Key box, then ONE will use the user's default SSH key (if added) at deployment time.

Please note the following details:

  • SSH key(s) injection will not work out of the box on OS disk images provided by the users till the ONE contextualization package or cloud-init is installed and properly configured. This section is devoted to a brief overview of the installation and configuration of cloud-init;
  • it is possible to add multiple keys just using new line as separator (i.e., hit enter between two consecutive SSH keys);
  • the keys will be added to the root user only;
  • the contextualisation mechanism only deploys the keys in the expected place; it can neither set up the SSH daemon nor deal with SELinux (see the sketch after this list);
  • if the boot image is set to persistent, then the SSH keys will be saved permanently, since the whole actual content of the disk is kept at shutdown.
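
On guests where the daemon is disabled or SELinux relabelling is needed, the finishing touches might look like this (a sketch assuming a systemd-based Linux guest with SELinux enabled; the service may be called ssh on Debian-like systems):

    systemctl enable --now sshd  # make sure the SSH daemon runs now and at every boot
    restorecon -Rv /root/.ssh    # restore the SELinux labels on the injected key file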

An obvious prerequisite for SSH keys to work is a functioning SSH daemon. Please refer to this section for hints on some common misconfigurations.

Other

This section is used for special tasks, such as passing raw data to the hypervisor in order to expose the CPU of the worker node to the VM, or tagging the template for the EC2 interface.

Template instantiation

When instantiating a template, the following dialogue box is proposed.

OpenNebula Virtual machine details

First of all, the user has the possibility to modify the capacity (see the green square in the previous picture), i.e., the number of physical cores, the number of virtual CPUs and the quantity of RAM, according to what has been set up in the General section of the template. Also the size of the non-persistent and volatile disks can be modified before instantiation (see the yellow square in the previous screenshot): just move the slider or the selectors in the text fields. Persistent disks (such as number 0 in the picture) cannot be modified. Finally, please also note the option in the red square, Instantiate as persistent. In this case the template will be copied, as will all non-volatile disks, which will also be made persistent before instantiation. The VM will run on the duplicated template and on the duplicated persistent disks.

Working with Virtual Machines

The Virtual Machines view summarizes the running VMs owned by a user.

OpenNebula Virtual machines view

As soon as one of the running VMs is selected, the details of the selection are proposed

OpenNebula Virtual machine details

and the actions listed on the top right corner are activated (please check also the section about the possible statuses for more details):

OpenNebula go back icon Back: return to the previous view, that is the list of running VMs.
OpenNebula refresh Refresh: update the view.
OpenNebula VNC icon VNC: open a VNC connection to the VM (if the VNC facility has been set up in the template).
OpenNebula save as icon Save as template: please refer to this section.
OpenNebula play icon Play: resume a stopped/saved machine.
OpenNebula pause icon Power off: gracefully powers off the machine via ACPI, preserving its disk(s) but not its state (i.e., the RAM). When resumed, the VM is booted from scratch on the same node where it was before. It is used to save boot disks and to take disk snapshots. The hard version of this command does not use ACPI.
OpenNebula stop icon Undeploy: gracefully powers off the machine via ACPI, preserving its disk(s) but not its state (i.e., the RAM), and freeing the worker node, i.e., CPU(s) and RAM become available again for other users. When resumed, the VM will boot from scratch on the first available host. The hard version of the command does not use ACPI, hence the guest OS will be abruptly terminated, like pulling the power plug.
OpenNebula reboot icon Reboot: reboots the VM via ACPI. The hard version does not use ACPI.
OpenNebula other icon Hold: put a VM in the hold state, i.e., the VM is not deployed to the worker node.
OpenNebula other icon Release: exit from hold state, i.e., the scheduler will try to deploy the VM.
OpenNebula trash icon Label: please refer to this section.
OpenNebula trash icon Terminate: shutdown of the VM via ACPI. This means that the resources are freed and the VM is removed. Beware of possible data loss: the content of the volatile disks is lost and it is not possible to save them. Regarding non-persistent disks, the content added after the deployment of the VM is lost, unless the disk is saved; also the snapshots of non-persistent disks are lost, unless they are saved. Persistent disks are safe: nothing is lost when the VM is terminated. The hard version does not use ACPI.

Just below this set of icons, there is the tab selector:

  • Info: the details of the VM together with the permissions grid;

  • Capacity: a view of the CPU and memory usage (also used during the resizing of the VMs);

  • Storage: the disk(s) attached to the VM, with the possibility to attach a new disk, to save a disk or to take a disk snapshot;

  • Network: some details regarding the network usage, the interface(s) attached to the VM and the button to add another network card to the VM on the fly;

  • Actions: the scheduled actions, as offered by the list on top;

  • Conf: when the VM is powered off, use this section to add or remove features that are contained in the VM template sections OS Booting, Input/Output, Context or Other;

  • Log: if applicable, a view of the logfile created by ONE to monitor the VM.

Saving disks

This section deals with the management (saving to the datastore, creating a version history) of non-volatile disk images attached to VMs. There are two operations, namely Save as and Snapshot, which we will explore in more detail.

Save as

This operation leads to a new disk image in the image store, available permanently (i.e., until removed) for later use. The button to save a disk is available in the Storage tab (red square in the following picture), next to each eligible disk. Please note that the Save as action cannot be carried out on volatile disks. In fact, in the next picture, vdc is a volatile disk and the Save as option, near Detach, is missing.

The Storage tab of the Virtual Machines view

When clicked, the button Save as proposes the following dialogue box

The Save as dialogue box

where the user has to enter a unique name for the new disk image that will be created in the image store. The new image is added to the target datastore immediately and it appears in a LOCKED state till the original disk is completely dumped; then the state will switch to READY. Please note that:

  • while the VM is running, we advise saving only data disks that are not used for booting (i.e., not containing the OS, part of it, and/or the kernel). Moreover, the disk to be saved should be unmounted from within the VM (without detaching it) prior to attempting the operation. The goal is to avoid an inconsistent state of the saved disk, which would require a filesystem check of the newly created image when attached to a VM and mounted inside the guest OS. Most of the time this is not fatal, but it is certainly inconvenient;
  • it is of course possible to save a boot disk too. The user should take care to power off the VM and choose Save as only afterwards. Again, the rationale is to dump a disk without filesystem errors and avoid additional checks when it is used.

Snapshot

The Snapshot operation allows the user to create a version history of a non-volatile disk and to revert to a previous status (i.e., replacing the disk's content with that of the snapshot). The option is again available in the Storage tab (red square in the following picture), next to each eligible disk. Please note that disk snapshot operations cannot be carried out on volatile disks. In fact, in the next picture, vdc is a volatile disk and the Snapshot option, near Detach, is missing.

The Storage tab of the Virtual Machines view

In order to successfully take a snapshot, please take care that the VM has been powered off, otherwise the operation will silently fail. Please double check that the snapshot has been taken (as described later in this paragraph). It is possible to take a snapshot of both persistent and non-persistent disks, with the following remarks:

  • the snapshot of a persistent image impacts the usage of the image store, increasing the occupation and hence eroding space from the group quota. The operation is equivalent to adding another image to the datastore, which is a scarce resource. Please use it with care, since the available space cannot be increased indefinitely;
  • the snapshot of a non-persistent image is taken in the system store, where the VM runs. This means that the disk snapshot history is bound to the VM: when the VM is terminated, the disk snapshots will be removed as well. Hence, please save the snapshot(s) you need to the image store, as explained later.

When clicking on the Snapshot button, the following dialogue box will open

The snapshot dialogue box

where the user can simply enter a title for the snapshot. After the operation has completed, a drop-down menu is available on the left of the disk which has been versioned (see the green highlight in the next picture).

The added snapshot menu

The role of this menu is to show the history of the snapshots taken.

The snapshot menu opened

It is possible to see the date and time of the snapshot, plus the title, if any. The little "play" icon (in green in the previous picture) identifies the snapshot currently in use. Supposing another snapshot is taken after some time, the updated situation will look like the following:

The snapshot menu opened with 2 snapshots

We can notice how the "play" icon moved to the second snapshot (identified by the number 1 instead of 0), that is, the current one. Also 3 additional option are available here:

  • Save as: this dumps the selected snapshot into the image store as a new image. The procedure is identical to the one explained in the previous subsection and the same considerations hold. Please note that if your disk is non-persistent, then this is the only way to let the snapshot outlive the VM after a terminate operation. Of course, only the selected snapshot is saved, not the entire history. On the other hand, the whole snapshot history of a persistent disk is saved into the datastore (please beware of the disk occupation), so the Save as option does not have such a substantial role there;
  • Revert: it restores the disk to the content of the selected snapshot. Please beware of potential data losses;
  • Delete: the selected snapshot is removed from the history. A snapshot can not be deleted if it is in use and if it has children. Please beware of potential data losses.

There is one last peculiar aspect of persistent images with a snapshot history: the history can be recalled from the image tab.

The snapshot tab of the image file

Please note that the previous picture refers to the image menu (highlighted in green) and to the Snapshot tab of the persistent disk (in red). In the upper right corner there are additional options:

  • Flatten: this will delete the history, leaving only one image in the image store, based on the snapshot version that has been selected. This is a way to reduce the occupation of the image storage, but it can lead to potential data losses;
  • Revert: it restores the disk to the content of the selected snapshot. Please beware of potential data losses;
  • Delete: the selected snapshot is removed from the history. A snapshot can not be deleted if it is in use and if it has children. Please beware of potential data losses.

They can be performed when the persistent disk is not in use (i.e., not attached to a VM).

VNC

The most immediate way to access a VM is to use the VNC client integrated in ONE. Just click on the OpenNebula VNC client icon beside the desired VM. In order to be available, the facility has to be set up properly in the template.

The VNC client in the web interface is configured to use only the secure version of websockets. In case the VNC window does not show up, please open a new tab in your browser, go to https://www.cloud.mwn.de:600 and accept the server certificate as trusted (ignoring the Error Response with code 405 shown on the page). From then on, your browser should recognise it and the VNC window should open without problems. A few remarks:

  • The web GUI uses an embedded VNC client relying on a proxied connection. Regular VNC clients such as TightVNC, Vino, ... are not supported.
  • The web GUI VNC client uses HTML5 and websockets, so a recent browser is needed. The supported browsers, according to the developers, are Firefox and Chrome.
  • The VNC window replaces the current view. In order to go back, please do not close the browser or the tab; simply click on the red button marked with an X in the top right corner: OpenNebula close VNC window
  • The ESC key will close the VNC window. This is not a bug, but the default behaviour. In order to avoid this, for example when using the vi editor inside the VM, maximise the VNC window by clicking on the Open in a new window button (OpenNebula open VNC in a new window) in the top right corner.

Labels for Virtual Machines

Custom labels can be assigned to VMs in order to quickly filter them in the VM list. Once in the VM list view, just select one or more instances (the status does not matter) and click on the label button to assign a label (green square in the next picture). Let's suppose we want to reference all the VMs running a Debian image by means of the label Debian.

OpenNebula VM list with labels

A label can simply be created by adding the text into the field, as shown. Afterwards, we can see that the label appears in the list of VMs (green square in the next picture). If we select the remaining VM and open the label menu, we can see that it is possible to assign one of the existing tags (yellow square) or create a new one.

OpenNebula VM list with labels

In this case, we want to create a new identifier, let's say SuSE. It will also appear in the list of VMs.

OpenNebula VM list with labels

As is easy to imagine, when clicking on a label in the VM list (see the green square in the previous screenshot), only the corresponding VMs will be shown in the list. In order to go back to the normal view (i.e., all VMs), just click on the label again.

Multiple labels can be assigned to a VM and a label is removed automatically when no VMs are assigned to it.

Save a Virtual Machine as a template

When the VM is powered off, the button OpenNebula save as icon (available in the VM list view or in the detailed view of the VM itself) creates a new VM template based on the current status of the selected VM. Moreover, the non-volatile disks are also dumped into the image store with their actual content. They can optionally be made persistent on the fly, by selecting the corresponding option (in yellow in the following picture).

OpenNebula save VM as template

Statuses of a Virtual Machine

A full and detailed description of all the statuses a VM can have in its life cycle is available here. Please note that all the terms encountered, such as Boot, Running and so on, refer to the VM as a facility provided by the Cloud middleware to the users, and not to the Guest OS inside the VM (the final result the users are interested in).

For practical uses, it is sufficient to know that the typical sequence for a VM starting up is:

  • Pending: this is the first step after submission; ONE is still running the match-making algorithm to find the resources for the VM;
  • Prolog: ONE has found the resources for the VM, hence it is copying the VM image(s) in place, creating the volatile disk(s) and setting up the connection(s) to the chosen virtual network(s);
  • Boot: the hypervisor is creating the VM on the worker node (beware, this does not mean that the operating system in the VM, the Guest OS, is booting);
  • Running: the VM has been instantiated and the Guest OS is booting. From now on, the VNC connection is available, if included in the template, so that it is possible to follow the boot sequence of the VM. If there are errors in the configuration of the Guest OS (i.e., disk inconsistencies, wrong setup of some services ...), the VM can not provide the services it is intended to and needs debugging.

ONE hides all the details from the final users, so it is sometimes tricky to spot problems. As a rule of thumb:

  • if the VM is stuck in the Pending (pend, for short) state, then it is likely that there is no budget left to instantiate the VM. Please refer to the next section to check the budget left. If this is not the case, that is, the budget left is not 0, then it is possible that the cluster is fully booked or that there are not enough resources available, mainly RAM and/or CPUs, to deploy the VM. In principle it is always possible to modify the template and reduce the resources requested.
  • if the VM is stuck in the Prolog (prol, for short) state, then ONE is moving files from the image datastore to the system datastore. The time needed to complete this operation depends on the network traffic and the size of the disk file(s). To get an idea, please consider that the backbone network for this purpose is a dedicated 10 Gbit link and files are copied across an NFS share (see the example below).
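
As a back-of-the-envelope example under ideal conditions (ignoring NFS overhead and concurrent traffic): copying a 40 GB image over a 10 Gbit/s link takes at least 40 × 8 / 10 = 32 seconds, so a Prolog phase of several minutes for large images or a busy network is plausible.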

In case of errors, the VM switches to the status Failed: in this case, please contact the service administrators. Important: please think twice before terminating your VM. Once you do, there is no possibility to recover its disk(s). On the other hand, if the VM is in state Failed, chances are that we can make a copy of the disk(s). Do not rush and do not hesitate to contact us.

If ONE loses contact with the VM, being unable to monitor it, the status is Unknown. It is acceptable to have a VM in this state for a short period of time (i.e., a few minutes, maybe due to the load on the system), but if it persists, please contact the service administrators.

Another frequent source of confusion is the resulting status after performing one of the Undeploy or Power off operations from the menu available in the Virtual Machines view. For the sake of clarity, the statuses reached after hitting the corresponding action buttons listed here are:

  • Power off: the actual content of all disks, including the volatile ones, is preserved, but not the content of the RAM, so, when resumed, the VM will boot again. The IPs are also released. The CPUs on the worker node are still booked for the VM, even though not currently used, hence VMs persisting in this status for more than one hour will be switched to Undeploy. VMs should stay in this status only in order to take a snapshot of the disks or to save them; afterwards the user is supposed to resume or undeploy them;
  • Undeploy: the actual content of all disks, including the volatile ones, is preserved, but not the content of the RAM, so, when resumed, the VM will boot again. The IPs and the CPUs on the worker node are also released, so no charging will take place. When resumed, the VM will go through the match-making mechanism to find a new host and it will boot again;
  • Terminate: the VM is removed from ONE, so all the resources (CPUs, RAM, IPs) will be released and the non-persistent disks will be erased, unless they have been saved. This operation is graceful, meaning that an ACPI signal is sent to the guest OS so that it can stop the services and unmount the disks in a clean way.

The following table summarises what has been described so far:

Operation   Preserve disks   Preserve RAM   Preserve IPs   Preserve CPU allocation   Charging   When resumed
Power off   yes              no             no             yes                       no         reboot of the Guest OS
Undeploy    yes              no             no             no                        no         reboot of the Guest OS
Terminate   no               no             no             no                        no         no resume possible; an ACPI signal is sent to the guest OS

Some practical comments can be read in the FAQ section.

Finally, there are the transient states Hotplug, Save and Epilog, corresponding, respectively, to the attachment/detachment of a device (disk, network card, ...), the dump of one or more disks (i.e., disk snapshot, suspension, powering off, ...) and the final clean-up of the worker node after the user has decided to shut down or delete the VM.

Budgeting

A quick overview of the budget consumption is given in the dashboard, on the first row, top-left corner

OpenNebula dashboard budget view

Clicking on either one of the highlighted areas in the previous picture opens the following view

OpenNebula summary budget view

where the total CPU-hours consumption of the group is reported and compared against the assigned budget. This value should be seen as a ceiling: it is the maximum time a certain group can use the Cloud. This is the mechanism LRZ implements to ensure fair access for all registered projects, preventing anyone from monopolising the worker nodes regardless of other colleagues/competitors. Once the limit (budget value) is reached, no more VMs will be allowed to run and the current one(s) will be undeployed, without any data loss (the content of the disk(s) is preserved, but not the state, i.e., the content of the RAM; when resumed, the VM will boot from scratch) but also without any possibility of being resumed by the owner(s).
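
As an illustrative example of the accounting unit: a VM with CPU=4 that runs for 10 hours consumes 4 × 10 = 40 CPU-hours of the group's budget, regardless of the VCPU value and of how busy the guest actually is.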

Clicking on the group (the only row available), the window will switch to a more detailed perspective of the resource usage of the group

OpenNebula detail budget view

The total consumption is split among the users with an indication of the percentage of usage of each group member.

Please note that the budget is a time quantity (i.e., for how long the resources assigned to a user or group can be used), while the quota, mentioned above, is a measure of capacity (i.e., how many resources a user or a group is entitled to use).