Network Interface Card Partitioning (NPAR): NPAR allows each 10GbE port on the NDC to be split with no specific configuration requirements on the switches. With NPAR, administrators can divide each 10GbE port of an NDC into four separate partitions, or physical functions, and allocate bandwidth and resources to each as needed. Each partition is enumerated as a PCI Express function that appears as a separate physical NIC to the server, the operating system, and the hypervisor. The Active System 1000v solution takes advantage of NPAR: partitions are created for the various traffic types and bandwidth is allocated to them, as described in the following section.
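As a simple illustration of how NPAR enumeration looks to the hypervisor, the sketch below models one dual-port 10GbE NDC split into four partitions per port. The traffic types and bandwidth weights shown are placeholder assumptions for this example only; the actual partitions and allocations used by the solution are those described in the following section.

# Illustrative model of NPAR on one dual-port 10GbE NDC.
# Traffic types and bandwidth weights are placeholder assumptions,
# not the allocations defined later in this document.

PORT_SPEED_GBPS = 10
PARTITIONS_PER_PORT = 4

# Hypothetical traffic types with relative minimum-bandwidth weights (percent).
EXAMPLE_ALLOCATION = [
    ("hypervisor-management", 10),
    ("vMotion", 20),
    ("iSCSI", 40),
    ("virtual-machine", 30),
]

def enumerate_partitions(ndc_ports=2):
    """Return one entry per PCI Express function the server would enumerate."""
    functions = []
    for port in range(ndc_ports):
        for index, (traffic, weight) in enumerate(EXAMPLE_ALLOCATION):
            functions.append({
                "pci_function": port * PARTITIONS_PER_PORT + index,
                "physical_port": port,
                "traffic_type": traffic,
                "min_bandwidth_gbps": PORT_SPEED_GBPS * weight / 100,
            })
    return functions

if __name__ == "__main__":
    for function in enumerate_partitions():
        print(function)   # eight functions in total, each seen as a separate NIC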
6.2 Network Connectivity
This section describes the network architecture of Active System 1000v.
Connectivity between hypervisor hosts and network switches: The compute cluster hypervisor hosts,
PowerEdge M620 blade servers, connect to the Dell Networking S4810 switches through the Dell I/O
Modules in the PowerEdge M1000e blade chassis. The management cluster hypervisor hosts, PowerEdge
R620 rack servers, directly connect to the Dell Networking S4810 switches.
Connectivity between the Dell PowerEdge M620 blade servers and Dell I/O modules: The
internal architecture of the PowerEdge M1000e chassis provides connectivity between the
Broadcom 57810-k Dual Port 10GbE KR Blade NDC in each PowerEdge M620 blade server and the
internal ports of the Dell I/O Modules. Each Dell I/O Module has 32 x 10GbE internal ports. With
one Broadcom 57810-k Dual Port 10GbE KR Blade NDC in each PowerEdge M620 blade, blade
servers 1-16 connect to internal ports 1-16 of each of the two Dell I/O Modules. Internal
ports 17-32 of each Dell I/O Module are disabled and not used.
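The slot-to-port mapping just described is fixed by the chassis; the short sketch below simply restates it so the mapping can be generated or verified programmatically. The I/O module names used here are assumptions for illustration.

# Blade-slot to internal-port mapping for one PowerEdge M1000e chassis.
# With one dual-port NDC per M620 blade, slot N lands on internal port N
# of each of the two I/O modules; internal ports 17-32 stay disabled.

BLADE_SLOTS = range(1, 17)         # 16 blade slots
IO_MODULES = ("IOM-A1", "IOM-A2")  # assumed names for the two I/O modules

def internal_port_map():
    """Return {blade_slot: {io_module: internal_port}}."""
    return {slot: {module: slot for module in IO_MODULES} for slot in BLADE_SLOTS}

UNUSED_INTERNAL_PORTS = list(range(17, 33))  # disabled on both modules

if __name__ == "__main__":
    ports = internal_port_map()
    print(ports[1])                 # blade 1 -> internal port 1 on each module
    print(UNUSED_INTERNAL_PORTS)    # ports 17-32 are not used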
Connectivity between the Dell I/O Module and Dell Networking S4810 switches: The two Dell
I/O modules are configured to operate either as a port aggregator, aggregating the 16 internal
ports to eight external ports, or as an MXL switch.
The two fixed 40GbE QSFP+ ports on each Dell I/O Module provide network connectivity to
the two Dell Networking S4810 switches. Each of these 40GbE ports is used with a
4 x 10Gb breakout cable, providing four 10Gb links for network traffic. Of the four
10Gb links from each 40GbE port, two connect to one Dell Networking S4810 switch and the
other two connect to the other Dell Networking S4810 switch. With this design, each
PowerEdge M1000e chassis with two Dell I/O Modules has a total of 16 x 10Gb links to the
two Dell Networking S4810 switches, ensuring load balancing while maintaining redundancy.
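To make the uplink count explicit, the following sketch works through the arithmetic of the paragraph above: two I/O Modules per chassis, two 40GbE QSFP+ ports per module, and a 4 x 10Gb breakout on each port, split evenly between the two S4810 switches.

# Uplink arithmetic for one PowerEdge M1000e chassis, as described above.

IO_MODULES_PER_CHASSIS = 2
QSFP_PORTS_PER_MODULE = 2     # fixed 40GbE QSFP+ ports on each I/O module
LINKS_PER_BREAKOUT = 4        # 4 x 10Gb breakout cable per 40GbE port
LINK_SPEED_GBPS = 10

links_per_module = QSFP_PORTS_PER_MODULE * LINKS_PER_BREAKOUT    # 8 links
total_links = IO_MODULES_PER_CHASSIS * links_per_module          # 16 links
links_per_s4810 = total_links // 2                               # 8 links to each switch
aggregate_bandwidth_gbps = total_links * LINK_SPEED_GBPS         # 160 Gbps per chassis

assert total_links == 16 and links_per_s4810 == 8

print(f"{total_links} x 10Gb uplinks per chassis: "
      f"{links_per_s4810} to each S4810, {aggregate_bandwidth_gbps} Gbps aggregate")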
Connectivity between the Dell PowerEdge R620 rack servers and Dell Networking S4810
switches: Each PowerEdge R620 server has two 10Gb connections to the Dell Networking S4810
switches through one Broadcom 57810 Dual Port 10Gb Network Adapter.
Connectivity between the two network switches: The two S4810 switches are connected through Inter
Switch Links (ISLs) consisting of two 40 Gbps QSFP+ links, and Virtual Link Trunking (VLT) is
configured between the two switches. This design eliminates the need for Spanning Tree-based
blocking of redundant links, providing redundancy as well as active-active utilization of the full
bandwidth on all links.
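The sketch below is a simplified model of the VLT pair rather than switch configuration: it shows the aggregate ISL bandwidth and why a device dual-homed to both switches can keep all of its links forwarding, instead of having the redundant path blocked by Spanning Tree.

# Simplified model of the two S4810 switches joined by the ISL/VLT interconnect.
# This illustrates the behaviour described above; it is not device configuration.

ISL_LINKS = 2
ISL_LINK_SPEED_GBPS = 40
ISL_BANDWIDTH_GBPS = ISL_LINKS * ISL_LINK_SPEED_GBPS   # 80 Gbps between the VLT peers

def forwarding_links(links_to_switch_a, links_to_switch_b, vlt_enabled=True):
    """Count the uplinks that carry traffic for a device connected to both switches.

    With VLT, the two switches act as one logical LACP peer, so the links to
    both switches forward (active-active). Without VLT, the redundant logical
    path would be blocked by Spanning Tree, leaving only the links toward one
    switch in use.
    """
    if vlt_enabled:
        return links_to_switch_a + links_to_switch_b
    return max(links_to_switch_a, links_to_switch_b)

if __name__ == "__main__":
    print("ISL bandwidth:", ISL_BANDWIDTH_GBPS, "Gbps")
    # Example: the 16 chassis uplinks (8 per switch) described earlier.
    print("Active links with VLT:", forwarding_links(8, 8))                         # 16
    print("Active links without VLT:", forwarding_links(8, 8, vlt_enabled=False))   # 8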