"Netapp is putting 20 SAS spindles in the back end. My only concern is whether or not the flash pool will keep up with our workload." Flashpool only matters for the read part of the workload. Any solution that uses SSD as read cache will perform well provided their is sufficient SSD space to handle the working dataset. In your case either solution will do fine for the read portion of the workload. The write portion of the workload will be spindle bound on the netapp ~ 20 SAS * 150 IOPS per disk = ~3000 IOPs sustainable but able to handle higher bursts due to NVRAM. The nimble's write performance is CPU bound 2 serries can do about 15,000 sustainable and the 4 serries can do about 30,000 sustainable. All of this is assuming random writes at 4K size which is the appropriate assumption for VDI. So if you assuming 10 IOPS per desktop which is considered a medium workload then you need to have 500 Desktops @ 10 IOPS per desktop = 5000 IOPS for you solution and you should assume a 20/80 read/write workload just to be safe. Assuming 90% read cache then 20% read *90% hit rate * 5000 IOPS = 900 IOPS soaked up by the read cache. This means your solution needs 4100 Write IOPS to handle the load. The netapp seems a little underspeced for the potential write load unless your VDI machines are using a ligher IOP per desktop load than we assumed. If i went netapp 2240 i would want about 36 SAS drives backing up the solution unless i knew for sure my IOP per desktop load would be small.