
Adaptive Scheme

We'll start by introducing the adaptive scheme because of its simplicity. The adaptive scheme involves creating a small number of larger LUNs for the storage of virtual machines. The adaptive scheme results in fewer requirements on the part of the SAN administrator, less effort when performing LUN masking, fewer datastores to manage, and better opportunities for virtual disk resizing.

The downside to the adaptive scheme is the increased contention for LUN access across all of the virtual machines in the datastore. For example, if a 500GB LUN holds the virtual machine disk files for 10 virtual machines, all 10 virtual machines will contend for access to the same LUN. This might not be an issue if the virtual machines stored on the LUN are not disk intensive, meaning they do not rely heavily on hard disk input/output (I/O). For the adaptive scheme to be a plausible and manageable solution, VI3 administrators must proactively monitor the virtual machines stored together on a LUN. When the performance of those virtual machines begins to reach unacceptable levels, administrators must create additional LUNs and make them available to new or existing virtual machines. Figure 4.7 shows an implementation of the adaptive scheme for storing virtual machines.
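As a rough way to put numbers behind that monitoring decision, the following minimal Python sketch estimates how many virtual machines a LUN can comfortably host. The snapshot-growth factor, the free-space reserve, and the approximation of the per-VM swap file as roughly equal to configured RAM are illustrative assumptions, not prescribed values:

    def vms_per_lun(lun_gb, avg_vmdk_gb, avg_ram_gb,
                    snapshot_factor=0.25, free_reserve=0.10):
        """Estimate how many VMs a LUN can hold while keeping headroom."""
        usable_gb = lun_gb * (1 - free_reserve)            # keep a free-space reserve
        per_vm_gb = (avg_vmdk_gb * (1 + snapshot_factor)   # virtual disk + snapshot growth
                     + avg_ram_gb)                         # swap file, roughly equal to RAM
        return int(usable_gb // per_vm_gb)

    # The 500GB LUN from the example above, hosting 40GB VMs with 2GB of RAM each:
    print(vms_per_lun(500, 40, 2))   # -> 8, so 10 such VMs would already be tight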

Predictive Scheme

The predictive scheme overcomes the limitations of the adaptive scheme but introduces administrative challenges of its own. The predictive scheme involves the additional administrative effort of customizing LUNs to the needs of individual virtual machines. Take the following example: when administrators deploy a new server to play host to a database application, it is common practice to enhance database performance by implementing multiple disks with characteristics specific to the data stored on each disk. On a database server, this often means a RAID 1 (mirror) volume for the operating system, a RAID 5 volume for the database files, and another RAID 1 volume for the database logs. Using the predictive scheme to architect a LUN solution for this database server would result in three SAN LUNs built on RAID arrays matched to the database server's needs. The sizes of the LUNs would depend on the estimated sizes of the operating system, database, and log files. Figure 4.8 shows this type of predictive approach to LUN design. Table 4.2 outlines the pros and cons of each LUN design strategy.
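To make the arithmetic behind this example concrete, the following minimal Python sketch computes the raw capacity each LUN's underlying RAID array must provide. The usable sizes and disk counts are hypothetical, chosen only to show the RAID 1 and RAID 5 overhead:

    def raw_capacity_gb(usable_gb, raid_level, disks):
        """Raw disk capacity a RAID array needs in order to expose usable_gb."""
        if raid_level == 1:
            return usable_gb * 2                    # mirror: every byte is written twice
        if raid_level == 5:
            return usable_gb * disks / (disks - 1)  # one disk's worth of parity
        raise ValueError("unsupported RAID level: %s" % raid_level)

    # Hypothetical sizes for the three LUNs in Figure 4.8:
    for name, usable, level, disks in [("OS (RAID 1)", 20, 1, 2),
                                       ("database (RAID 5)", 200, 5, 5),
                                       ("logs (RAID 1)", 40, 1, 2)]:
        print("%s: %dGB usable -> %.0fGB raw"
              % (name, usable, raw_capacity_gb(usable, level, disks)))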

Figure 4.7 The adaptive scheme involves creating a small number of LUNs that are larger in size and play host to virtual machine disk files for multiple virtual machines.

Figure 4.8 The predictive scheme, though administratively more involved, offers better performance measures for critical virtual machines. 

Table 4.2: Adaptive and Predictive Scheme Comparisons 

Adaptive
  Pros:
    - Less need for SAN administrator
    - Easy resizing of virtual disks
    - Easy snapshot management
    - Less volume management
  Cons:
    - Possible undersizing of a LUN, resulting in greater administrative effort to create new LUNs
    - Possible oversizing of a LUN, resulting in wasted storage space

Predictive
  Pros:
    - Less contention on each VMFS
    - More flexible share allocation and management
    - Less wasted space on SAN storage
    - RAID specificity for VMs
    - Greater multipathing capability
    - Support for Microsoft clusters
    - Greater backup policy flexibility
  Cons:
    - Greater administrative overhead for LUN masking
    - Greater administrative effort involved in VMotion, DRS, and HA planning

As we noted earlier in this section, the most appropriate solution will most likely involve a combination of the two design schemes. You may find a handful of virtual machines whose performance is unaffected by storing all of their disk files on the same LUN, and at the same time you will find virtual machines that require a strict nonsharing approach for their disk files. In between the two extremes, you will find the virtual machines that require specific RAID characteristics but can still share LUN access with multiple virtual machines. Figure 4.9 shows a LUN design strategy that incorporates both the adaptive and predictive schemes as well as a hybrid approach.
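One way to picture this triage is the following minimal Python sketch, which sorts virtual machines into the three tiers shown in Figure 4.9. The IOPS thresholds and the sample workloads are invented purely for illustration:

    def placement_tier(avg_iops, needs_specific_raid):
        """Classify a VM into one of the three tiers from Figure 4.9."""
        if avg_iops >= 1000:
            return "predictive"   # heavy I/O: dedicated, purpose-built LUNs
        if needs_specific_raid:
            return "hybrid"       # shared LUN whose RAID level matches the workload
        return "adaptive"         # light I/O: share a large general-purpose LUN

    for vm, iops, raid in [("file server", 80, False),
                           ("web application", 400, True),
                           ("SQL database", 2500, True)]:
        print(vm, "->", placement_tier(iops, raid))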

Figure 4.9 Neither the adaptive nor the predictive scheme will be the most appropriate solution in all cases, which means most environments will be built on a hybrid solution that involves both philosophies.

With all of the effort that will be put into designing the appropriate LUN structures, you will undoubtedly run into situations in which the design will require change. Luckily for the VI3 administrative community, the product is very flexible in the way virtual machine disk files are managed. In just a few short steps, a virtual machine's disk files can be moved from one LUN to another. The simplified nature of relocating disk files means that if you begin with one approach and discover it does not fit your environment, you can easily transition to a more suitable LUN structure. In Chapter 6, we'll detail the steps required to move a virtual machine from one datastore to another.

ESX Network Storage Architectures: Fibre Channel, iSCSI, and NAS

VMware Infrastructure 3 offers three shared storage options for locating virtual disk files, ISO files, and templates. Each storage technology presents its own benefits and challenges and requires careful attention. Despite their differences, there is often room for two or even all three of the technologies within a single virtualized environment.

Fibre Channel Storage

Despite its high cost, fibre channel storage serves many companies as the backbone for critical data storage and management. The speed and security of a dedicated fibre channel storage network are attractive assets to companies looking for reliable and efficient storage solutions.

Understanding Fibre Channel Storage Networks

Fibre channel SANs can run at either 2GFC or 4GFC speeds and can be constructed in three different topologies: point-to-point, arbitrated loop, or switched fabric. The point-to-point architecture involves a direct connection between the server and the fibre channel storage device. The arbitrated loop, as the name suggests, involves a loop created between the storage device and the connected servers. Neither of these topologies requires a fibre channel switch, but both limit the scalability of the architecture by limiting the number of nodes that can connect to the storage device. The switched fabric architecture is the most common and offers the most functionality, so we will focus on it for the duration of this chapter and throughout the book. A fibre channel switched fabric includes a fibre channel switch that manages the flow of SCSI-over-fibre-channel traffic between the servers and the storage device. Figure 4.10 displays the point-to-point and arbitrated loop architectures.

Figure 4.10 Fibre channel SANs can be constructed as point-to-point or arbitrated loop architectures.

The switched fabric architecture is more common because of its scalability and increased reliability. A fibre channel SAN is made up of several different components, including:

Logical unit numbers (LUNs) A logical configuration of disk space created from one or more underlying physical disks. LUNs are most commonly created on multiple disks in a RAID configuration appropriate for the disk usage. LUN design considerations and methodologies were covered earlier in this chapter.
