
The second section of the tree view contains the storage objects of a flexVDI platform. There are three kinds of storage objects: Image Storages, Media Storages and Direct Storages.

Image Storage

The Image Storage objects represent a logical storage space for Guests' images, accessible by a set of Hosts. This space is further divided into Volumes, which store the virtual disks of the Guests. The typical use of an Image Storage is to map a shared disk array and its volumes to the set of Hosts connected to it. flexVDI supports the most common SAN technologies: SAS disk arrays, Fibre Channel, iSCSI, etc.

Selecting the "Image Storages" subsection will show a list of the existing Image Storages. Click on the "New Image Storage" button to add a new Image Storage. You will have to give it a name, and select which Hosts are able to reach it. When you are done, click "Save". Then, clicking on an Image Storage will show information about it in the details view:


Under the "Hosts with access" title, there is a list with the Hosts that can access this Image Storage. The Guests whose virtual disks are housed in this Image Storage may run on any of these Hosts. This enables high availability of Guests.

Within an Image Storage you can create three types of Volumes: OCFS2 Volumes, Gluster Volumes and External Volumes. Click on the "New Volume" dropdown button and select the type of Volume you want to create.


OCFS2 Volume

OCFS2 Volumes are created with a block device that is shared by all the Hosts of the Image Storage (e.g. an iSCSI LUN or SAS disk connected to all the Hosts) and formatted with the OCFS2 filesystem. OCFS2 is a cluster filesystem that can be mounted by all the Hosts simultaneously. In order to set up the cluster, please refer to the section about OCFS2.

When you create an OCFS2 Volume, you will be asked for a name and a dropdown list will let you select the backing device. This list consists of the SCSI identifiers of those block devices that are visible from all the Hosts of the Image Storage.
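As a sketch of the manual preparation behind this dialog: assuming the shared LUN is visible on every Host under /dev/disk/by-id (the device identifier below is a placeholder, not a real one), the device can be located and formatted with OCFS2 from one of the Hosts:

```shell
# Sketch only; the device path is a placeholder for the SCSI identifier
# shown in the dropdown. Run the format step on ONE Host only, as it
# destroys any existing data on the device.

# List the SCSI identifiers of the block devices visible on this Host:
ls -l /dev/disk/by-id/

# Format the shared device with the OCFS2 cluster filesystem:
mkfs.ocfs2 -L flexvdi-images /dev/disk/by-id/scsi-EXAMPLE-ID
```

The label passed with -L is arbitrary; what matters is that the same device is reachable from every Host of the Image Storage.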

Gluster Volume

Gluster Volumes use the GlusterFS distributed storage technology to aggregate several local disks into a single storage space. This solution does not need to access a shared volume in a SAN and uses cheaper, more common hardware, but its performance is lower. You need at least two hosts to create a Gluster Volume.

Gluster Volumes are an experimental feature. Stability and performance issues have been observed, so use them at your own risk.

When you create a Gluster Volume, you will first be asked for a name. If you want to use a GlusterFS volume that you have already created and that is accessible from the Hosts of the Image Storage, you can check the corresponding checkbox and write the name of the volume in the input box:

If you want to create a new GlusterFS volume, you must supply a list of brick paths and the replica ratio:

Brick paths are used on all the Hosts. So, if you provide m brick paths and there are n Hosts in the Image Storage, the GlusterFS volume will be created with m*n bricks: host1:brick1, host2:brick1, ..., hostn:brick1, host1:brick2, host2:brick2, ..., hostn:brickm. Brick paths must meet the following conditions:

  • They must be a directory that already exists in every Host.
  • They must be a directory that is a direct child of a mount point other than /. For instance, in /data/brick1/brick, /data/brick1 must be a mount point and brick must be a directory inside it. However, /usr, /var, /boot, etc. are not accepted (for obvious security reasons). The rationale behind this condition is that the brick will be stored in a block device other than the root device, and the child directory inside the mount point will only exist if the block device is correctly mounted, avoiding writing to the wrong location in case of mount problems.
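These conditions can be checked by hand before creating the volume. The following is a sketch using the mountpoint utility from util-linux; the helper name is our own, not a flexVDI command:

```shell
#!/bin/sh
# Sketch: validate a candidate brick path against the rules above.
# is_valid_brick_path is a hypothetical helper, not part of flexVDI.
is_valid_brick_path() {
    path=$1
    parent=$(dirname "$path")
    # Must be a directory that already exists on this Host
    [ -d "$path" ] || return 1
    # Its parent must not be the root directory (rules out /usr, /var, ...)
    [ "$parent" != "/" ] || return 1
    # Its parent must be a mount point, i.e. a separately mounted device
    mountpoint -q "$parent" || return 1
}

# Example: /data/brick1/brick is valid only if /data/brick1 is a mount point
if is_valid_brick_path /data/brick1/brick; then
    echo "valid brick path"
else
    echo "invalid brick path"
fi
```

Run the check on every Host of the Image Storage, since the same path must qualify on all of them.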

The replica ratio is selected from a list of possible values based on the number of Hosts in the Image Storage:

  • Replica 2: If there is an even number of Hosts, this option creates a (distributed) replica 2 GlusterFS volume. There are always 2 copies of each file.
  • Replica 3: If the number of Hosts is a multiple of 3, this option creates a (distributed) replica 3 GlusterFS volume. There are always 3 copies of each file.
  • Disperse n:m: If there are n+m Hosts, where n > m, this option creates a dispersed GlusterFS volume with n data bricks and m redundancy bricks. The configuration with the best performance for that number of Hosts is marked as optimal.

This is a simple yet effective way of creating Gluster Volumes. If you need additional flexibility, create the GlusterFS volume yourself and provide its name when creating the Gluster Volume.
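For reference, creating the volume yourself means running the GlusterFS CLI by hand. A sketch for four Hosts, one brick path and replica 2 (host names, volume name and paths are placeholders):

```shell
# Sketch: manually create a distributed-replicated GlusterFS volume.
# Run on one Host after peering them; all names are placeholders.
gluster volume create my-volume replica 2 \
    host1:/data/brick1/brick host2:/data/brick1/brick \
    host3:/data/brick1/brick host4:/data/brick1/brick
gluster volume start my-volume
```

Once started, provide "my-volume" as the existing volume name in the Gluster Volume creation dialog.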

External Volume

External Volumes are created and managed by the system administrators, which gives them total flexibility. A system administrator has to log in to the Hosts and, using the tools provided by the operating system or hardware manufacturer, connect, format, and mount the storage. Then, you can create an External Volume that simply points to the directory where the storage is mounted, allowing flexVDI to create virtual disks on it. If you are configuring a cluster so that Guests can run on more than one Host, all of those Hosts must be able to access the storage at the same path (mount point).

Read-write permissions

Make sure that the qemu user has read-write permissions on the External Volume's target directory. Otherwise, the Guests using images in that External Volume will not be able to start.
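For example, assuming the External Volume is mounted at /mnt/flexvdi-external (a placeholder path; substitute your own mount point), ownership and permissions can be granted and verified like this:

```shell
# Placeholder path; substitute your External Volume mount point.
chown -R qemu:qemu /mnt/flexvdi-external
chmod -R u+rwX /mnt/flexvdi-external

# Quick check: can the qemu user actually write there?
sudo -u qemu touch /mnt/flexvdi-external/.write-test \
    && rm /mnt/flexvdi-external/.write-test \
    && echo "qemu has read-write access"
```

Repeat the check on every Host that mounts the storage.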

Media Storages

Media Storages are repositories of ISO image files. These files can be mounted as optical disks by Guests, e.g. for the initial installation of the OS. When clicking on a Media Storage, information about it will be displayed in the details view. The following figure shows an example:

Like Volumes, they are storage space accessible from the Hosts, but they store:

  • Read-only images that will be seen as optical drives by the Guests (files in ISO format).
  • Files with full exported virtual machines in flexVDI format (files with the .fvm extension).

From a technical point of view, they are shared folders accessed with the CIFS/SMB protocol. The values in the figure, "IP address", "Path" and "User name"/"Password", are respectively the IP address or hostname of the machine serving the CIFS share, the path of the share, and the credentials that flexVDI uses to connect to it.
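To verify that a share is reachable from a Host before registering it, a manual mount can be attempted. This is a sketch; the IP address, share path and credentials below are placeholders for the values you would enter in the dialog:

```shell
# Sketch: manually test a CIFS share from one Host.
# Server, share name and credentials are placeholders.
mkdir -p /mnt/media-test
mount -t cifs //192.168.0.10/isos /mnt/media-test \
    -o username=flexvdi,password=secret,ro
ls /mnt/media-test     # the .iso and .fvm files should be listed
umount /mnt/media-test
```

If the manual mount fails, flexVDI will not be able to mount the Media Storage either, so fix connectivity or credentials first.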

Hosts listed under "Host list" are those which will mount the share and have access to the Media Storage.

When you select the "Media Storages" subsection, you will see a list of existing Media Storages. Click on the "New Media Storage" button to create a new Media Storage. You will be asked for the IP address, path, credentials and list of Hosts that can access it. Click on "Save" when you are done. Initially, the Media Storage will be in an unknown state while the shared resource is mounted on every Host; just wait a few seconds.

Direct Storages

Direct Storages give Guests direct access to the physical disks. Clicking on a Direct Storage will show information about it in the details view:

TODO: Screenshot

A Direct Storage provides improved performance by eliminating the middle layer of storage virtualization, at the cost of losing its additional functionality (easy copying of images, incremental disks, ...).

When you select the "Direct Storages" subsection, you will see a list of existing Direct Storages. Click on the "New Direct Storage" button to create a new Direct Storage. You will be asked for just a name and the list of Hosts that can access it. The list of disks that are available through this Direct Storage is calculated automatically from the information provided by the flexVDI Agent in every Host.






