This chapter describes features that are planned for CloudLab or under development: please contact us if you have any feedback or suggestions!
We plan to add versioning to profiles to capture their evolution over time. Updating a profile will produce a new version rather than (entirely) replacing the one being updated.
There will be two types of versions: working versions that should be
considered ephemeral, and published versions that are intended to be
long-term stable. For example, a user may generate many working versions as
they refine their software, fix bugs, etc. Then, when the profile is in a state
where it is appropriate to share with others, it can be published. Users will
be able to link to a specific version of a profile, so that such references remain stable even as the profile continues to evolve.
One limitation on this feature is CloudLab’s finite storage space: we will apply a quota system that limits the amount of storage a project can use to keep multiple versions of the same profile.
For the time being, the contents of all disks in CloudLab are considered ephemeral: the contents are lost whenever an experiment terminates. The only way to save data is to copy it off or to create a profile using the disk.
We plan to change this by adding persistent storage that is hosted on storage
servers within CloudLab. Users will be able to use the CloudLab web interface to create
and manage their persistent storage, and profiles will be able to reference
where these stores should be mounted in the experiment. When sharing profiles,
it will be possible to indicate that the persistent store may only be mounted read-only by others.
There will be two types of persistent storage: block stores, which are mounted using iSCSI and generally can be mounted by only one host at a time, and file stores, which are mounted over NFS and can be mounted read/write by many nodes simultaneously.
This feature will be based on Emulab’s block storage system. Underlying this system is ZFS, which supports snapshots. We intend to expose this snapshot functionality to users, and to allow profiles to mount specific snapshots (e.g., the version of a dataset used for a particular paper).
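To make the snapshot model concrete: in ZFS, a snapshot is a read-only, point-in-time view addressed as `dataset@name`, and a snapshot is exposed as a mountable dataset by cloning it. The sketch below only builds the corresponding `zfs` command lines; the dataset names are hypothetical, and the exact interface CloudLab will expose for this is not yet defined.

```python
def zfs_snapshot_cmd(dataset, snap_name):
    # A ZFS snapshot is addressed as <dataset>@<snapshot-name> and is an
    # immutable, point-in-time view of the dataset's contents.
    return ["zfs", "snapshot", f"{dataset}@{snap_name}"]

def zfs_clone_cmd(dataset, snap_name, clone_name):
    # Snapshots themselves are read-only; ZFS exposes one as a separate,
    # mountable dataset by creating a clone of it.
    return ["zfs", "clone", f"{dataset}@{snap_name}", clone_name]

# Hypothetical example: freeze the dataset version used for a paper, then
# expose that frozen version under its own dataset name.
freeze = zfs_snapshot_cmd("tank/datasets/traces", "paper2015")
expose = zfs_clone_cmd("tank/datasets/traces", "paper2015",
                       "tank/clones/traces-paper2015")
```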
Note that the performance of persistent stores will be neither guaranteed nor isolated from other users, since they will be implemented on shared storage servers that others may be accessing at the same time. Therefore, for experiments whose repeatability depends on I/O performance, all data should be copied to local disk before use.
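A staging step like the following is one way an experiment could copy a dataset from a shared store onto local disk before measurement begins; the paths involved are hypothetical, since mount locations will be profile-specific.

```python
import pathlib
import shutil

def stage_to_local(shared_path, local_dir):
    """Copy a dataset from a shared (e.g., NFS-mounted) store to local disk.

    Shared storage servers are multi-tenant, so their I/O performance can
    vary with other users' activity; reading from a local copy keeps an
    experiment's I/O behavior repeatable.
    """
    src = pathlib.Path(shared_path)
    dest = pathlib.Path(local_dir) / src.name
    shutil.copytree(src, dest)  # recursive copy; dest must not yet exist
    return dest

# Hypothetical usage inside an experiment's setup script:
#   data = stage_to_local("/nfs/mydataset", "/local")
```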
Currently, there are two ways to create profiles in CloudLab: cloning an existing profile or creating one from scratch by writing an RSpec by hand. We plan to add two more: a GUI for RSpec creation, and bindings to a programming language for generation of RSpecs.
The GUI will be based on Jacks, an embeddable RSpec editor currently in development for the GENI project. Jacks is already used in CloudLab to display topologies in the profile selection dialog and on the experiment page.
The programming language bindings will allow users to write programs in an existing, popular language (likely Python) to create RSpecs. This will allow users to use loops, conditionals, and other programming language constructs to create large, complicated RSpecs. We are evaluating geni-lib for this purpose.
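geni-lib will provide a higher-level API than this, but the underlying idea of programmatic RSpec generation can be sketched with just the Python standard library: a loop emits as many near-identical `<node>` stanzas as needed, which would be tedious and error-prone to write by hand. The element names follow the GENI RSpec v3 schema; the node and sliver-type values are illustrative.

```python
import xml.etree.ElementTree as ET

RSPEC_NS = "http://www.geni.net/resources/rspec/3"

def build_request(num_nodes):
    # Build a request RSpec containing num_nodes bare-metal nodes; the
    # loop replaces copy-pasting near-identical <node> stanzas by hand.
    ET.register_namespace("", RSPEC_NS)
    rspec = ET.Element(f"{{{RSPEC_NS}}}rspec", {"type": "request"})
    for i in range(num_nodes):
        node = ET.SubElement(
            rspec, f"{{{RSPEC_NS}}}node",
            {"client_id": f"node{i}", "exclusive": "true"})
        ET.SubElement(node, f"{{{RSPEC_NS}}}sliver_type", {"name": "raw-pc"})
    return ET.tostring(rspec, encoding="unicode")
```

Conditionals and helper functions extend naturally from here, e.g., giving every tenth node a different disk image or wiring the nodes into a particular topology.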
Sometimes, you just need one node running a particular disk image, without making a complicated profile to go with it. We plan to add a “quick profile” feature that will create a one-off experiment with a single node.
As part of the process of reserving resources on CloudLab, a type of RSpec called a manifest is created. The manifest gives a detailed description of the hardware allocated, including the specifics of network topology. Currently, CloudLab does not directly export this information to the user. In the interest of improving transparency and repeatable research, CloudLab will develop interfaces to expose, explore, and export this information.
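As an illustration of what such an interface could surface, the sketch below pulls per-node details out of a manifest with the standard library. The sample manifest is heavily simplified (real manifests carry far more detail, such as hardware types, links, and login services), and the component URN shown is a made-up example.

```python
import xml.etree.ElementTree as ET

NS = {"r": "http://www.geni.net/resources/rspec/3"}

# Simplified manifest fragment for illustration only.
SAMPLE_MANIFEST = """\
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="manifest">
  <node client_id="node0"
        component_id="urn:publicid:IDN+example.cloudlab.us+node+pc1">
    <interface client_id="node0:if0">
      <ip address="10.1.1.2" type="ipv4"/>
    </interface>
  </node>
</rspec>
"""

def summarize_nodes(manifest_xml):
    # For each node, report which physical machine was allocated
    # (component_id) and the addresses assigned to its interfaces.
    root = ET.fromstring(manifest_xml)
    summary = []
    for node in root.findall("r:node", NS):
        addrs = [ip.get("address")
                 for ip in node.findall("r:interface/r:ip", NS)]
        summary.append({"client_id": node.get("client_id"),
                        "component_id": node.get("component_id"),
                        "addresses": addrs})
    return summary
```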
Today, switches in CloudLab are treated as infrastructure; that is, they
are under CloudLab’s control and, while we provide a high degree of
transparency, we do not let users control them directly. We plan to
make at least some switches allocatable as experiment resources, giving users direct control over them.
All switches in CloudLab will be OpenFlow-capable. In the case of exclusive-access bare metal switches, users will get direct and complete OpenFlow access to the switches. In the case of shared switches, we are investigating the use of FlowSpace Firewall from the GRNOC and Internet2 for virtualization.
We plan to export many of the monitoring features available in
CloudLab’s infrastructure switches, such as port traffic counters and
flow-level statistics, to experimenters.
Some of the equipment in CloudLab will be able to take fine-grained measurements of power usage and other environmental data (such as temperature). CloudLab will provide experimenters with both logged and real-time access to this data.