This year’s Nonprofit Technology Conference offered a good chance to discuss one of the most important — but geeky — developments in the world of computers and networks: server virtualization.
Targeting a highly technical session to an NTEN audience is kind of like cooking a gourmet meal with one entrée for 1,000 randomly picked people. We knew our attendees would include as many people new to the concepts as tech-savvy types looking for tips on resolving cache conflicts between the SAN, DBMS and hypervisor. We aimed to start very broad, focus on use cases, and leave the real tech stuff to the Q&A. We’ll try to do the same in this article.
We’ve already summarized the view from the top in a quick, ignite-style presentation, available wherever fine NTC materials are found (and also on Slideshare). In a nutshell, virtualization technology allows many computers to run concurrently on one server, each believing it’s the sole occupant. This allows for energy and cost savings, greater efficiency, and some astounding improvements in the manageability of your networks and backups, as servers can be cloned or dragged, dropped and copied, allowing for far less downtime when maintenance is required and easy access to test environments. It accomplishes this by making the communication between an operating system, like Windows or Linux, generic and hardware-independent.
Most of the discussion related to virtualization has been centered on large data centers and enterprise implementations, but a small network can also take advantage of the benefits that virtualization has to offer. Here are three common scenarios:
- Using a new server running a virtualization hypervisor to migrate an existing server
- Using a new server to consolidate 3-4 physical servers to save on electric & warranty expenses
- Using a storage area network (SAN) to add flexibility and expandability to the infrastructure
In the first scenario, an existing server is converted into a virtual server running on new physical hardware. Tools from VMware and other vendors allow disks to be resized, additional processor cores to be assigned and RAM to be added. The benefit of this process is that the physical server now exists on a new hardware platform with additional resources. End users are shielded from major disruptions, and IT staff are not required to make any changes to scripts or touch workstations.
The second scenario, much like the first, starts with the addition of new physical hardware to the network. Today’s servers are so powerful that it’s unlikely more than 5% of their total processing power is used. That excess capacity allows an organization to use virtualization to lower its hardware expenses by consolidating multiple servers onto one hardware platform. Ideal candidates are servers that run web & intranet applications, antivirus management, backup, directory services, or terminal services. Servers that do a lot of transactional processing, such as database & email servers, can also be virtualized but require a more thoughtful network architecture.
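That consolidation math is simple enough to sketch. The snippet below is a back-of-the-envelope check, not a capacity-planning tool; the utilization figures, the 25% headroom, and the `can_consolidate` helper are our own illustrative assumptions, not measurements from any particular network.

```python
def can_consolidate(host_cores, vm_loads, headroom=0.25):
    """Rough check: do the candidate servers' average CPU demands
    (expressed in cores) fit on the host, leaving `headroom` as a
    fraction of the host idle to absorb usage spikes?"""
    return sum(vm_loads) <= host_cores * (1 - headroom)

# Four lightly loaded servers, each averaging ~5% of a quad-core box
# (about 0.2 cores of real demand), fit comfortably on one 8-core host:
candidates = [0.2, 0.2, 0.2, 0.2]
print(can_consolidate(8, candidates))  # True
```

In practice you would also check RAM, disk I/O and peak (not just average) load before consolidating, especially for the transactional servers mentioned above.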
The final scenario involves taking the first step toward a more traditional enterprise implementation, incorporating two physical servers connected to a SAN. In this scenario, the hardware resources continue to be abstracted from the virtual servers. The SAN provides much more flexibility in adding storage capacity and assigning it to the virtual servers as required. Adding multiple server heads onto the SAN will also provide the capacity to take advantage of advanced features such as High Availability, live server migration, and Distributed Resource Scheduling.
The space for virtualization software is highly competitive. Vendors such as Microsoft, VMware, Citrix and Virtual Iron continue to lower their prices or provide their virtualization software for free. Using no-cost software, an organization can comfortably run a virtual server environment of 16 virtual servers on 3 physical machines.
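The 16-on-3 figure implies roughly eight virtual servers per host, with a third host's worth of spare capacity so the environment can survive a single hardware failure. A minimal sizing sketch, assuming that eight-VMs-per-host rule of thumb (the `hosts_needed` helper and its parameters are ours, not any vendor's guidance):

```python
import math

def hosts_needed(vm_count, vms_per_host, tolerate_host_failure=True):
    """Rule-of-thumb host count: enough hosts to carry all the virtual
    servers, plus one spare host's worth of capacity if the environment
    should keep running when a single physical machine fails."""
    n = math.ceil(vm_count / vms_per_host)
    return n + 1 if tolerate_host_failure else n

# 16 VMs at ~8 per host: two hosts carry the load, one is spare.
print(hosts_needed(16, 8))  # 3
```

The right `vms_per_host` figure depends on the workloads; lightly loaded utility servers pack far more densely than database or email servers.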
The session was followed by a healthy and engaging Q&A, and we were fortunate to have it all transcribed by the incredibly talented Jack Aponte. Scroll down to 10:12 in her NTC Live Blog for a full re-enactment of the session. We can also start a new Q&A, in the comments below.
And stay tuned for more! The biggest paradigm shift from virtualization is related to the process surrounding the backup and recovery of virtual servers. We’ll be writing an article for the November NTEN newsletter with some detailed scenarios related to backup & disaster recovery in the virtual environment.