
Processor
The server used in this solution has two quad-core Intel Xeon 5500 series processors. The average
CPU load per desktop during the test is 9 percent of a core, so each core can run approximately
10 virtual machines and one host can run 2 × 4 × 10 = 80 virtual machines. The Intel Nehalem
architecture is efficient enough with hyper-threading to allow 50 to 80 percent more clients,
which means one host can run 1.5 × 80 = 120 to 1.8 × 80 = 144 virtual machines.
When using linked clones, up to eight hosts are allowed in a cluster. Leaving one node as
failover capacity, the remaining seven hosts can run 144 × 7 = 1,008 virtual machines, so one
cluster can host about 1,000 virtual desktops. Without the hyper-threading benefit, the cluster
can host only 80 × 7 = 560 virtual desktops. To host 2,000 virtual desktops, we therefore need
two to four clusters, or about 128 to 256 cores in total. In a non-VDI environment, deploying
2,000 desktops would require 2,000 processors.
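
The arithmetic above condenses to a few lines of code. The following Python sketch is
illustrative only: the constants are the figures quoted in this section, while the script
itself and its variable names are not part of the EMC solution.

CORES_PER_HOST = 2 * 4                # two quad-core Intel Xeon 5500 processors
VMS_PER_CORE = 10                     # derived from ~9 percent average CPU load
HOSTS_PER_CLUSTER = 8                 # linked-clone cluster limit
USABLE_HOSTS = HOSTS_PER_CLUSTER - 1  # one node reserved as failover capacity

base_vms_per_host = CORES_PER_HOST * VMS_PER_CORE  # 80
ht_low = round(base_vms_per_host * 1.5)            # 120 (50 percent uplift)
ht_high = round(base_vms_per_host * 1.8)           # 144 (80 percent uplift)

print(base_vms_per_host * USABLE_HOSTS)               # 560 per cluster, no HT
print(ht_low * USABLE_HOSTS, ht_high * USABLE_HOSTS)  # 840 to 1008 with HT
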
With hyper-threading, we can host about 1,000 virtual machines per cluster; without it, we
can host only about 500. Hyper-threading thus roughly doubles the number of virtual machines
per cluster. In our solution, we use hyper-threading with three clusters: one with 1,000
users and the other two with 500 users each. The 500-user clusters have extra room for
processor-intensive workloads.
Table 6 provides a summary of virtual machines per core, per host, and per cluster.

Table 6. Virtual machines per core

Configuration              VMs per core   VMs per host   Cluster with one node down
Without hyper-threading    10             80             560
With hyper-threading       15 to 18       120 to 144     840 to 1,008
Memory
One Windows 7 virtual machine is assigned 1.5 GB of memory. Without the memory-management
features of VMware vSphere 4.1, a host running 9 to 18 virtual machines per core would
require at least 9 × 8 × 1.5 = 108 GB to 18 × 8 × 1.5 = 216 GB of memory. VMware vSphere 4.1
provides features such as Transparent Page Sharing, ballooning, memory compression, and
recognition of zeroed pages that allow the memory to be over-committed for a better
consolidation ratio.
During the baseline workload, we observed about 540 MB of active memory per virtual machine.
The memory overhead was 179 MB per virtual machine, the hypervisor used 578 MB on the 48 GB
host and 990 MB on the 96 GB host, and the service console used 561 MB. Based on this
workload, each host requires:

(9 × 8 × (540 + 179) + 578 + 561) / 1024 ≈ 52 GB to (18 × 8 × (540 + 179) + 990 + 561) / 1024 ≈ 103 GB
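
This estimate can be reproduced with a short calculation. The following Python sketch is
again illustrative only: the figures are the measured values from the baseline workload
above, while the function name and parameter names are our own.

def host_memory_gb(vms_per_core, hypervisor_mb, cores=8,
                   active_mb=540, overhead_mb=179, console_mb=561):
    """Per-host memory requirement in GB for a given consolidation ratio."""
    vms = vms_per_core * cores
    total_mb = vms * (active_mb + overhead_mb) + hypervisor_mb + console_mb
    return total_mb / 1024

print(round(host_memory_gb(9, hypervisor_mb=578)))   # ~52 GB (48 GB host)
print(round(host_memory_gb(18, hypervisor_mb=990)))  # ~103 GB (96 GB host)
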
VMware vSphere uses the features mentioned above before it resorts to swapping memory. When
swapping does occur, FAST Cache on the EMC VNX series storage platform provides better
response time than swapping to SAS disks alone. Another option is to place the vSwap on an
SSD in each host, but this may impact vMotion and adds complexity to the environment. It is,
therefore, advantageous to have the swap served by FAST Cache on the EMC array.