Regardless of company size, user base or area of responsibility, IT administrators must admit that customer service is a significant part of their daily routine. The typical to-do list for any network team should always include user considerations along with overall system needs. From simple user issues to complicated corporate implementations, the needs of the users should be part of the planning phase of any project.
An excellent way to avoid disrupting users is to run virtualization software in a test lab. Installing software on a virtual operating system surfaces bugs before users ever encounter them and gives developers time to correct them. An isolated test environment, separate from the company infrastructure, gives system engineers a powerful tool for increasing network uptime and overall user confidence.
When creating a virtual test lab, the most important factor is hardware virtualization support in the CPU. In the past, buying the most powerful processor available guaranteed it could run any software, but this is no longer the case: running virtual OSes places a much heavier load on a computer. It is therefore important that the CPU supports either Intel's VT-x or AMD's AMD-V virtualization technology to ensure the most seamless virtual experience.
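On a Linux host, you can confirm these extensions by looking at /proc/cpuinfo: the `vmx` CPU flag indicates Intel's extension and `svm` indicates AMD's. A minimal sketch of that check in Python (the flag names and file path are standard on Linux; other platforms need the vendor's own detection tool):

```python
import os

def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if the CPU flags advertise Intel VT-x (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags or "svm" in flags:
                return True
    return False

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    # On a Linux host, read the real CPU description.
    with open("/proc/cpuinfo") as f:
        print("hardware virtualization supported:", has_hw_virt(f.read()))
```

Note that the flag can also be absent because virtualization support is disabled in the BIOS, so it is worth checking firmware settings before ruling a machine out.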
These technologies provide on-chip support that lets a guest operating system execute instructions directly on the processor instead of through software emulation. With hardware support in place, an operator sees the virtual machine run at close to native speed rather than being hampered by slow response times from the CPU, and the processor remains free to handle other tasks while the virtual OS runs.
When implementing a new virtual lab, remember that several operating systems can be installed and booted concurrently on a single machine. This makes it possible to create not just a single test platform, but an entire corporate network with which to test larger rollouts. Allow roughly 20 GB of disk space per OS, though the figure varies with the chosen software: Windows images take the most space, while Linux distributions typically need far less.
To run multiple instances on the same machine, install as much RAM as the system configuration will allow. All virtualization programs let you tweak RAM allocation, but a good rule of thumb is 1 GB per desktop operating system and 2 GB per virtual server install. Also consider a powerful graphics card if you expect to virtualize CAD or other 3D software.
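Taken together, the sizing guidance above (roughly 20 GB of disk per guest OS, 1 GB of RAM per desktop guest, 2 GB per virtual server) can be turned into a quick capacity estimate. A minimal sketch; the per-guest figures are this article's rules of thumb, and the 2 GB host reserve is an assumed extra, not a vendor requirement:

```python
def lab_requirements(desktop_guests: int, server_guests: int,
                     disk_per_guest_gb: int = 20,
                     host_reserve_gb: int = 2) -> dict:
    """Estimate disk and RAM needed for a virtual test lab.

    Rules of thumb: ~20 GB of disk per guest, 1 GB of RAM per desktop
    guest, 2 GB per virtual server, plus RAM reserved for the host itself.
    """
    total_guests = desktop_guests + server_guests
    return {
        "disk_gb": total_guests * disk_per_guest_gb,
        "ram_gb": desktop_guests * 1 + server_guests * 2 + host_reserve_gb,
    }

# Example: three desktop guests and two virtual servers.
print(lab_requirements(3, 2))  # {'disk_gb': 100, 'ram_gb': 9}
```

An estimate like this is a starting point for hardware purchasing, not a ceiling; real images grow with updates and test data.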
The market consists of several virtualization software platforms, but the most common are Microsoft Hyper-V, VMware Server, Citrix XenDesktop and Sun Microsystems VirtualBox. All of these offer a robust set of features to match nearly every scenario, but decision-makers must look deeper to find the exact fit for a particular environment.
Considerations should include broad USB support for any peripherals the lab might need during testing, and 3D acceleration for more advanced graphics work, which not every virtualization product supports. Cost also shapes the choice: a handful of options are open source and completely free, while others can be expensive. The trade-off is that commercial products typically come with vendor support from a capable team of experts, whereas open source may leave admins fending for themselves.
Once the hardware is in place and the virtual operating systems are installed, you will need to understand virtual image management. Beyond booting several images at once, engineers can save a snapshot of the entire OS before any testing begins. A snapshot is a copy of the image file, in its exact running state, that can be set aside and restored later if an error leaves the system unusable. In addition, if multiple users are involved in testing, an image file can be saved to a network location or USB device and passed between systems, preserving the continuity of the test environment.
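The snapshot workflow, copying the image aside before testing and rolling back if the guest becomes unusable, can be illustrated with plain file copies. This is a conceptual sketch only; a real hypervisor handles this for you (in VirtualBox, for example, via `VBoxManage snapshot <vm> take` and `VBoxManage snapshot <vm> restore`) and can also capture memory and device state:

```python
import shutil
from pathlib import Path

def take_snapshot(image: Path) -> Path:
    """Copy the disk image aside so it can be restored later."""
    snap = image.with_suffix(image.suffix + ".snap")
    shutil.copy2(image, snap)
    return snap

def restore_snapshot(image: Path, snap: Path) -> None:
    """Roll the image back to its snapshotted state."""
    shutil.copy2(snap, image)

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        img = Path(d) / "testlab.img"
        img.write_text("clean install")
        snap = take_snapshot(img)          # set aside before testing
        img.write_text("broken by a failed test")
        restore_snapshot(img, snap)        # roll back to the saved state
        print(img.read_text())             # clean install
```

The same copy-aside idea is what makes passing images between testers on a network share or USB device work: the file is the whole machine.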
Using a server as a virtualization platform also brings an enormous gain in stability, chiefly because virtualization makes redundancy straightforward. Quite often, a duplicate of the primary OS is kept as a backup; if the primary operating system faults, the secondary can take over serving users almost immediately, minimizing disruption. The primary operating system can then be repaired, or even rebuilt, and brought back online without users ever noticing the change.
Additionally, running virtual systems on server hardware yields significant cost savings. One physical server can conceivably host four to eight server installs, so a single powerful machine can take on work that once demanded a room of its own. This also reduces the carbon footprint compared with traditional server rooms full of racks of individual machines.
Above all, a test environment makes it possible for network engineers to offer users a trouble-free computing experience. Thorough testing extends uptime and keeps the network stable: software can be vetted, bugs resolved and rollout procedures created without ever touching the corporate network. New software installs bring a certain level of unpredictability that worries even the most confident IT admin, but with a virtualization lab in place, each piece of new software can be tested and retested until every last issue has been resolved.