Imagine a pool of virtual VS systems distributed over several sites. A New VS customer signs up and is allocated one of them for his exclusive use. His production system at his facility and the Cloud VS are configured to be networked by WSN, providing logon and file transfer services. Volume transfer can also be done at the Linux level.
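The Linux-level volume transfer mentioned above can be sketched with ordinary tools, on the assumption that a virtual VS volume is stored as a plain disk-image file on the host. Every path and file name below is a hypothetical example, not an actual New VS layout; the demo stages everything under /tmp so it can run anywhere.

```shell
#!/bin/sh
# Hedged sketch: nightly copy of a VS volume at the Linux level.
# ASSUMPTION: the volume is an ordinary disk-image file on the host.
# All paths are illustrative placeholders, not real New VS paths.
set -e

VOLUME=${VOLUME:-/tmp/vsdemo/PRODVOL.img}   # local volume image (demo path)
DEST=${DEST:-/tmp/vsdemo/cloud}             # stand-in for the Cloud VS target

mkdir -p "$(dirname "$VOLUME")" "$DEST"
# Create a small dummy 1 MB image if none exists, so the demo is self-contained
[ -f "$VOLUME" ] || head -c 1048576 /dev/zero > "$VOLUME"

# Keep a dated copy so several nights of backups can coexist side by side
cp "$VOLUME" "$DEST/PRODVOL.$(date +%Y%m%d).img"
ls "$DEST"
```

In a real deployment the local cp would be replaced by rsync or scp to the Cloud VS host, e.g. `rsync -av "$VOLUME" backup@cloudvs.example.com:/volumes/` (host name and account hypothetical), so that only the transfer mechanism changes, not the idea.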
Access to the Cloud VS could be provided via Lightspeed, WSN RLOGON, or WebVT (browser). If RSF proves workable over wide area networks, then CLOGON would also provide logon service.
The VS customer can immediately copy critical files to the Cloud VS, giving him offsite backups for business continuity. If he desires, he can transfer programs as well and set up the Cloud VS as a complete backup system, capable of serving his users simply by redirecting them to connect to the Cloud VS. Also, if desired, a second Cloud VS can be allocated and configured to receive copies of the backed-up files, providing business continuity at two different sites.
Next, let's say the VS customer has a burden of nightly reports that take several hours to run. Each night he transfers the data files to the Cloud VS and runs the reports there, freeing the production system at an earlier hour while the reports run independently in the Cloud. Completed report printfiles can be transferred back at his convenience, or the reports can be printed via Lightspeed directly from the Cloud VS.
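If the nightly hand-off is driven from the Linux side of the systems, it could be scheduled with ordinary cron. The entries below are purely illustrative: the script names, paths, and times are placeholders, not part of any VS or New VS product.

```shell
# Hypothetical crontab entries on the production host (all names are placeholders):
# 10:00 PM: push the day's data files to the Cloud VS
0 22 * * * /usr/local/bin/push-datafiles-to-cloudvs.sh
# 10:30 PM: trigger the report batch on the Cloud VS
30 22 * * * /usr/local/bin/start-cloud-reports.sh
```

The production system is thus free after the 10 PM transfer, while the reports run unattended in the Cloud.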
Whether or not the VS customer has sufficient nightly batch processing to warrant using the Cloud, the likelihood is that he has such a burden at month end, quarter end, and/or year end.
Next, consider the periodic massive processing that may have to be done when, for example, the VS customer acquires another company and must integrate the new data into his production files. Typically this is done by duplicating the production files, then repeatedly cleaning and filtering the new data and integrating it into the copies, until the process is judged clean enough for a final run into the production files themselves. For a VS sized for his normal production needs this can be a severe processing burden. It can all be done in the Cloud without impacting the production system.
Finally, a special case of offloading is hosting a development system in the VS Cloud instead of in the form of another New VS. Again, an important advantage is converting what would otherwise be a capital outlay into an expense, and an expense that need only occur when needed. Some development systems might only be needed for particular projects and could be idled in standby mode at other times.
So the obvious benefits are:
- Business continuity -- offsite copies of production files
- Offloading processing to the VS Cloud
- Separating project files from production files
- Development system in the VS Cloud
What we don't know at the moment is whether RSF will function over wide area networks. RSF clusters multiple VS systems with shared volumes, a shared Job Queue, a shared Print Queue, and a remote procedure call facility. RSF presently functions over short-distance gigabit Ethernet. It remains to be seen whether it will tolerate the timing variances of a wide area network, and whether we can adjust RSF to be so tolerant. The reason to consider RSF is that it couples the systems more closely than WSN does: one can simply copy a file to a remote volume instead of queueing it for file transfer.
What remains to be seen is whether these benefits will be of interest to the New VS community. TVS already has a pool of IBM x330 systems at two sites ready to serve as the VS Cloud. Compucom can also provide facilities, thus providing three trusted sites where the VS Cloud can operate.
We look forward to conducting a free trial with a suitable candidate customer, in which we can both learn how well this will work and about any issues that may arise. Interested customers should contact Thomas Junker.