Virtualization

As the number of workloads in the IT business grows, so does the question of allocating resources sensibly. Virtualization answers it: fewer hardware servers are required, they use less electricity, and they take up less space.

In small and medium-sized businesses, there is an opinion that such technology is in demand only by large companies. But this is just one of the myths about virtualization, and such myths prevent managers and specialists from taking full advantage of the solution. Let us examine the most common ones.

Virtualization is the same as cloud computing. In fact, these are two different concepts. Cloud computing is what becomes possible thanks to virtualization: the term implies access to shared computing resources, whether data or applications, over the Internet. Server virtualization can be used without any cloud technologies at all, and the cloud can be adopted later to expand the platform's capabilities.

Virtualization is only of interest to large companies. According to this myth, the solution does not pay off for small and medium-sized firms, which would have to deploy something complex and expensive. In reality, virtualization is profitable regardless of the size of the business. In a small company, it is entirely possible to host all services as virtual machines running on a single hardware platform. This avoids the purchase of additional servers, whose cost can be considerable for a small firm. In practice, consolidation already makes sense once there are two or more servers to virtualize.

Virtualization dramatically decreases overall system performance. In practice, modern processors rarely run at 100 percent of their hardware capacity; most of the time the equipment idles in a half-asleep mode. This is especially true of domain controllers, domain name services, and antivirus management servers, and it is simply irrational to dedicate a separate physical server to each such service. It therefore makes sense to move some of the less demanding services to virtual machines gathered on a single host system; there will be no drop in performance. But rash decisions should be avoided: any system has a performance ceiling that must be considered when virtualizing. It is worth running performance tests on a virtual machine first, and only then deploying a new service on that host. Keep in mind that each virtual machine needs up to 20 percent more resources than its workload alone consumes, just for its own upkeep, and the host system itself also needs free capacity; a rough capacity check is sketched below.
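To make the arithmetic concrete, here is a minimal, illustrative capacity check for consolidating several light services onto one host. The 20 percent per-VM overhead and the 10 percent host reserve are assumptions taken from the text above, not vendor figures, and the service names and sizes are invented for the example:

from typing import List, Dict

OVERHEAD = 1.20        # each VM needs ~20% on top of its own workload
HOST_RESERVE = 0.10    # keep ~10% of the host free for the hypervisor itself

def fits(host_cores: int, host_ram_gb: int, vms: List[Dict]) -> bool:
    """Return True if the candidate VMs fit on the host, counting
    per-VM overhead and the host's own reserve."""
    cores_needed = sum(vm["cores"] for vm in vms) * OVERHEAD
    ram_needed = sum(vm["ram_gb"] for vm in vms) * OVERHEAD
    return (cores_needed <= host_cores * (1 - HOST_RESERVE)
            and ram_needed <= host_ram_gb * (1 - HOST_RESERVE))

candidates = [
    {"name": "domain controller", "cores": 1, "ram_gb": 4},
    {"name": "DNS",               "cores": 1, "ram_gb": 2},
    {"name": "antivirus console", "cores": 2, "ram_gb": 4},
]

print(fits(host_cores=8, host_ram_gb=32, vms=candidates))  # True: ample headroom

A real sizing exercise would of course measure actual utilization over time rather than rely on nominal figures, but the principle is the same: leave headroom for both the VMs' overhead and the host.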

Virtualization will require specialized hardware. This myth is reinforced by words that sound intimidating to the uninitiated: blade systems, specialized servers, and so on. It owes its existence to the presentations and conferences held by manufacturers of expensive specialized equipment, such as HP or IBM, where hardware built for virtual solutions, those same blade systems, is demonstrated. But the myth rests on a false premise. Expensive, proven systems designed specifically for virtualization are certainly convenient. In fact, though, virtual services can be deployed on ordinary hardware, as long as it is powerful enough for the task. There are some limitations: modern hypervisors may not support certain hardware, so self-assembled servers are not always an option, and problems typically arise with non-standard RAID controllers and network cards. Even then there are workarounds, such as building the RAID in software or adding a network card the hypervisor can handle. Even an old HP G4 server can easily become a "home" for a couple of undemanding virtual machines without extra effort, saving rack space and the cost of a new server.

All quality virtualization software is paid and expensive. There is a saying that free cheese is found only in a mousetrap, but how does that apply to hypervisors? Here the picture is actually optimistic. There are free products on the market, such as VMware ESXi, Citrix XenServer, and the Hyper-V role of Windows Server 2008 Standard (64-bit), that solve the required tasks. These are the junior versions of powerful commercial solutions, but the engine is exactly the same as in the older paid counterparts, with the same working principles and the same virtual machine format. The developers expect that as a company and its infrastructure grow, it will move to the paid editions that extend the platform's functionality, and this transition can be made without reinstalling the virtual machines. Comparing the paid and free versions shows that the main functions are freely available: the hypervisor itself, conversion tools, work with different types of storage, and moving virtual machines between host servers without interrupting their work.

Virtualization systems are difficult to maintain. Today, almost all modern virtualization systems are managed through a graphical application, while lovers of fine-tuning can still work from the command line. The administrator no longer needs to go to the server room to increase the amount of RAM or disk space or to add a processor: all of this can be done right from the workplace, managing the production server's virtual environment from a console, as the sketch below illustrates.
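As one hedged illustration of "management from your workplace", the following sketch drives a Hyper-V host through its standard PowerShell cmdlets from a Python script. The VM name "web01" and the disk path are hypothetical; VMware and Citrix offer comparable command-line tools. Note that some of these changes require the VM to be powered off:

import subprocess

def ps(command: str) -> None:
    """Run one PowerShell command and raise if it fails."""
    subprocess.run(["powershell", "-Command", command], check=True)

# Give the VM more RAM, an extra virtual CPU, and a larger system disk --
# all without leaving the desk.
ps('Set-VMMemory -VMName "web01" -StartupBytes 8GB')
ps('Set-VMProcessor -VMName "web01" -Count 4')
ps(r'Resize-VHD -Path "C:\VMs\web01.vhdx" -SizeBytes 200GB')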

Virtualization is unreliable. This statement rests on the assumption that a failure of the host system will take down every virtual machine running on it. The risk is real, but it is offset by the speed of recovery when a backup of the virtual machine exists: since a virtual machine is just a set of files, recovery amounts to transferring those files to another server, as the sketch below shows, and on average a restore completes in about twenty minutes. Large industrial solutions go further and allow replication on the fly, so that even the failure of one hardware server will not stop the services involved.
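A minimal sketch of that recovery idea: because a VM is just files, "restoring" it amounts to copying those files onto a standby host's storage. The paths here are hypothetical, and a real setup would copy over the network and then re-register the VM with the hypervisor (for example, with Import-VM on Hyper-V) before powering it on:

import shutil
from pathlib import Path

BACKUP_DIR = Path("/backups/web01")            # where the VM backup lives
STANDBY_DIR = Path("/standby-host/vms/web01")  # storage on the spare server

def restore_vm(src: Path, dst: Path) -> None:
    """Copy every file of the backed-up VM onto the standby host's storage."""
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.iterdir():
        if f.is_file():
            shutil.copy2(f, dst / f.name)  # copy2 preserves timestamps

restore_vm(BACKUP_DIR, STANDBY_DIR)
# Next step, outside this sketch: register the copied VM on the standby
# hypervisor and power it on.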

Modern virtualization systems, such as Citrix XenServer and VMware, follow the bare-metal principle, that is, they are installed directly on the hardware. The core of such a system is a Unix-like OS, which is extremely reliable and well protected from malware. The system is lean and optimized in its code, stripped of everything unnecessary, so the hypervisor is not distracted by extraneous tasks. Hardware reliability can be ensured by purchasing dependable hardware, which the overall savings on servers make affordable, and this helps put hardware problems out of mind for a long time. The decision to use virtualization technology must still be carefully considered, but with careful planning the result promises to be far less troublesome than a handful of legacy, low-cost servers in a traditional configuration.

It is difficult to find a competent specialist to deploy a virtualization environment. Good IT specialists are always in demand, and virtualization is no exception. The good news is that the major products in this area, from Microsoft, Citrix, and VMware, are well documented, and meetings between specialists, company representatives, and system integrators are held regularly, where the most pressing questions get answered. So even an inexperienced specialist will not find himself in a vacuum. Of course, you should not entrust your infrastructure to a student moonlighting as an administrator: he will gain experience, but what will happen to the company? Fortunately, there are more and more professional system administrators with basic skills in building virtualization systems.

Virtualization is a panacea for all problems. Virtualization really can work wonders when it comes to improving manageability, efficiency, and energy conservation. But it will not do so by itself. Some IT professionals fail to study the problem from all angles, believing that moving to virtual solutions will solve everything. It is not a magic pill: without effective management and a deliberate focus on the benefits of virtualization, it will not bring the desired effect.

Virtualization is not suitable for I/O-intensive applications. This myth formed long ago, when the first hypervisors had just appeared and made irrational use of the host server's resources. But virtualization technology has made great strides since then: VMware, for example, has demonstrated a version of its ESX Server capable of more than one hundred thousand I/O operations per second on a single host.

To use virtual machines, you need to know Linux. In the first versions of hypervisors, VMware's included, some of the controls could only be reached from the Linux command line on the console. While that option is still available today, most administrators no longer use it: many hypervisors are now managed through a Windows-based GUI. Hypervisors are becoming simpler and clearer, helping professionals embrace the solution.

Virtualization is a software layer that slows down applications. This myth is only partially true. Some vendors, such as VMware and Microsoft, do offer solutions that run on top of Windows or Linux. But VMware ESX(i), for example, is a hypervisor that runs on bare metal, which allows server resources to be used to the fullest, without an operating system as an intermediate software layer.

You cannot virtualize Microsoft Exchange and SQL Server. A few years ago, when single-core processors were the standard, virtualizing such constantly loaded services was indeed not recommended. But modern platforms run on multiple processors with four and eight cores each, and now even the most demanding services can be successfully run in a virtual environment. The key to balancing the load is proper planning and an understanding of the technology.

Virtualization only works with servers. Many companies also benefit from desktop virtualization, which brings centralized management, a uniform approach, and improved disaster-recovery options. With a thin client or a client application, users can connect to their desktops from anywhere in the world, and disk-imaging technologies reduce data storage requirements by eliminating unnecessary duplicate copies.

Virtualization is insecure. Any software can be considered insecure. However, by applying best practices for networking, storage, and operating systems, you can build a genuinely secure environment. Virtualization lets you set your own security standards, customize policies, and run tests against minimum requirements.

