NIC teams should be built on physical network adapters located on separate bus architectures. For example, if an ESX Server host contains two on-board network adapters and a PCI-based quad-port network adapter, a NIC team should be constructed using one on-board network adapter and one network adapter on the PCI bus. This design eliminates a single point of failure.

Perform the following steps to create a NIC team using the VI Client: 

1. Use the VI Client to establish a connection to a VirtualCenter server or an ESX Server host.

2. Click the hostname in the inventory panel on the left, select the Configuration tab from the details pane on the right, and then select Networking from the Hardware menu list.

3. Click Properties for the virtual switch that will be assigned a NIC team, and then select the Network Adapters tab.

4. Click Add and select the appropriate adapter from the Unclaimed Adapters list, as shown in Figure 3.23.

Figure 3.23 Create a NIC team using unclaimed network adapters that belong to the same Layer 2 broadcast domain as the original adapter.

5. Adjust the Policy Failover Order as needed to support an Active/Standby configuration.

6. Review the summary of the virtual switch configuration, click Next, and then click Finish.

The load-balancing feature of NIC teaming does not function like the load-balancing feature of advanced routing protocols. Load balancing across a NIC team is not a product of identifying the amount of traffic transmitted through a network adapter and shifting traffic to equalize data flow through all available adapters. The load-balancing algorithm for NIC teams in a vSwitch is a balance of the number of connections — not the amount of traffic. NIC teams on a VI vSwitch can be configured with one of the following three load-balancing policies:

♦ vSwitch port-based load balancing (default)

♦ Source MAC-based load balancing

♦ IP hash-based load balancing

Outbound Load Balancing

The load-balancing feature of NIC teams on a vSwitch applies only to outbound traffic.

Virtual Switch Port Load Balancing 

The default vSwitch port-based load-balancing policy uses an algorithm that ties each virtual switch port to a specific uplink associated with the vSwitch. The algorithm maintains an equal number of port-to-uplink assignments across all uplinks to achieve load balancing. As shown in Figure 3.24, this policy setting ensures that traffic from a specific virtual network adapter connected to a virtual switch port will consistently use the same physical network adapter. In the event that one of the uplinks fails, the traffic from the failed uplink will fail over to another physical network adapter.

Figure 3.24 The vSwitch port-based load-balancing policy assigns each virtual switch port to a specific uplink. Failover to another uplink occurs when one of the physical network adapters experiences failure.

You can see how this policy does not provide load balancing of the amount of traffic because each virtual machine can access only one physical network adapter at any given time. Since the port to which a virtual machine is connected does not change, each virtual machine is tied to a physical network adapter until failover occurs. Looking at Figure 3.24, imagine that the Linux virtual machine and the Windows virtual machine on the far left and far right are the two most network-intensive virtual machines. In this case, the vSwitch port-based policy has assigned both of the ports used by these virtual machines to the same physical network adapter. Meanwhile, the Linux and Windows virtual machines in the middle, which both might be processing very little traffic, are connected to ports assigned to individual physical network adapters.

The physical switch passing the traffic learns the port association and therefore sends replies back through the same physical network adapter from which the request originated. The vSwitch port-based policy is best used when the number of virtual network adapters is greater than the number of physical network adapters. When there are fewer virtual network adapters than physical adapters, some physical adapters will not be used. For example, if five virtual machines are connected to a vSwitch with six uplinks, the five in-use vSwitch ports will be assigned to only five of the uplinks, leaving one uplink with no traffic to process.
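The port-to-uplink assignment can be sketched in Python. The round-robin rule and all names below are illustrative assumptions; the actual assignment algorithm is internal to ESX Server and not documented in this text.

```python
# Illustrative sketch (not the real ESX algorithm): tie each in-use
# vSwitch port to one uplink, keeping the number of ports per uplink
# as even as possible via round-robin.

def assign_uplinks(vswitch_ports, uplinks):
    """Map each virtual switch port to a single physical uplink."""
    assignment = {}
    for i, port in enumerate(vswitch_ports):
        # A port keeps its uplink until failover; traffic amount is ignored.
        assignment[port] = uplinks[i % len(uplinks)]
    return assignment

# Five VMs (five in-use ports) on a vSwitch with six uplinks:
ports = ["port0", "port1", "port2", "port3", "port4"]
uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3", "vmnic4", "vmnic5"]
table = assign_uplinks(ports, uplinks)
# Each port maps to a distinct uplink; vmnic5 is left with no traffic.
```

Note that the sketch balances only the count of port assignments, which mirrors the point above: a busy and an idle virtual machine count equally.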

Source MAC Load Balancing

The second load-balancing policy available for a NIC team is the source MAC-based policy, shown in Figure 3.25. This policy is susceptible to the same pitfalls as the vSwitch port-based policy simply because the static nature of the source MAC address is the same as the static nature of a vSwitch port assignment. Like the vSwitch port-based policy, the source MAC-based policy is best used when the number of virtual network adapters exceeds the number of physical network adapters. In addition, virtual machines are still not capable of using multiple physical adapters unless configured with multiple virtual network adapters. Multiple virtual network adapters inside the guest operating system of a virtual machine will provide multiple source MAC addresses and therefore offer an opportunity to use multiple physical network adapters. 
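The static MAC-to-uplink mapping can be sketched similarly. Hashing the last octet of the MAC address modulo the uplink count is an illustrative assumption, not the documented ESX hash; the MAC addresses below are also made up.

```python
# Illustrative sketch (not the real ESX hash): select an uplink from
# the source MAC address. Because a virtual NIC's MAC is static, the
# selected uplink never changes until failover.

def uplink_for_mac(mac, uplinks):
    last_octet = int(mac.split(":")[-1], 16)
    return uplinks[last_octet % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# A VM with a single virtual NIC always hashes to the same uplink:
first = uplink_for_mac("00:50:56:aa:bb:04", uplinks)
# Only a second virtual NIC (a second source MAC) inside the guest
# gives the VM a chance to use the other physical adapter:
second = uplink_for_mac("00:50:56:aa:bb:05", uplinks)
```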

Virtual Switch to Physical Switch 

To eliminate a single point of failure, the physical network adapters in NIC teams set to use the vSwitch port-based or source MAC-based load-balancing policies can be connected to different physical switches; however, the physical switches must belong to the same Layer 2 broadcast domain. Link aggregation using 802.3ad teaming is not supported with either of these load-balancing policies.

IP Hash Load Balancing 

The third load-balancing policy available for NIC teams is the IP hash-based policy, also called the out-IP policy. This policy, shown in Figure 3.26, addresses the limitation of the other two policies that prevents a virtual machine from accessing two physical network adapters without having two virtual network adapters. The IP hash-based policy uses the source and destination IP addresses to determine the physical network adapter for communication. This algorithm then allows a single virtual machine to communicate over different physical network adapters when communicating with different destinations.

Figure 3.25 The source MAC-based load-balancing policy, as the name suggests, ties a virtual network adapter to a physical network adapter based on the MAC address.

Balancing for Large Data Transfers

Although the IP hash-based load-balancing policy can more evenly spread the traffic for a single virtual machine, it does not provide a benefit for large data transfers occurring between the same source and destination systems. Since the source-destination hash remains the same for the duration of the transfer, the traffic flows through only a single physical network adapter.
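The behavior described above can be sketched as follows. XORing the source and destination addresses is an illustrative stand-in for the real hash, which is not specified here, and the addresses are hypothetical.

```python
# Illustrative sketch (not the real ESX hash): pick an uplink from the
# source/destination IP pair. Different destinations may map to
# different uplinks; a fixed pair always maps to the same one.

import ipaddress

def uplink_for_flow(src_ip, dst_ip, uplinks):
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[h % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# One VM talking to two destinations can use two different uplinks:
a = uplink_for_flow("10.0.0.5", "10.0.0.20", uplinks)
b = uplink_for_flow("10.0.0.5", "10.0.0.21", uplinks)
# ...but a single large transfer (fixed src/dst pair) hashes to the
# same uplink for its entire duration:
c = uplink_for_flow("10.0.0.5", "10.0.0.20", uplinks)
```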

A vSwitch with a NIC team set to use the IP hash-based load-balancing policy should have all physical network adapters connected to the same physical switch to support link aggregation. ESX Server supports standard 802.3ad teaming in static (manual) mode, but does not support the Link Aggregation Control Protocol (LACP) or Port Aggregation Protocol (PAgP) commonly found on switch devices. Link aggregation will increase throughput by combining the bandwidth of multiple physical network adapters for use by a single virtual network adapter of a virtual machine. 

Follow these steps to alter the load-balancing policy of a vSwitch with a NIC team:

1. Use the VI Client to establish a connection to a VirtualCenter server or an ESX Server host.