We recently received an update about what is coming from Dell EqualLogic. A lot of the information is under NDA, so I can’t go into details or give you release dates.

What’s here already?

Last year Dell released 10GbE connections for the PS series. The 10GbE option exists mostly because marketing asked for it and because customers asked for it. There are rare cases where a 10GbE connection makes sense, but in 95% of all cases it doesn’t give you extra performance.

The bottleneck is often not the connection speed but the spindle speed limits of the disks themselves. Thanks to the adaptive load balancing of the PS series, the 1GbE connections are used very efficiently, and when combined with the Dell EqualLogic MPIO drivers you can get a lot of extra performance out of the network connections.
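
To make the spindle argument concrete, here is a rough back-of-envelope calculation as a Python sketch. The disk count, per-disk IOPS and I/O size are assumed, illustrative numbers, not Dell specifications.

```python
# Toy back-of-envelope: for a random-I/O workload, are the spindles or the
# network the bottleneck? All numbers are illustrative assumptions, not
# Dell specifications.

disks = 16              # assumed number of drives in one PS array
iops_per_disk = 180     # assumed random IOPS for a 15K SAS spindle
io_size_kb = 8          # assumed average I/O size in KB

usable_1gbe_mb_s = 1000 / 8 * 0.9     # ~112 MB/s usable on one 1GbE link
usable_10gbe_mb_s = 10000 / 8 * 0.9   # ~1125 MB/s usable on one 10GbE link

array_iops = disks * iops_per_disk            # spindle-limited IOPS
array_mb_s = array_iops * io_size_kb / 1000   # bandwidth at that limit

print(f"Spindle limit : {array_iops} IOPS (~{array_mb_s:.0f} MB/s)")
print(f"1GbE usable   : {usable_1gbe_mb_s:.0f} MB/s")
print(f"10GbE usable  : {usable_10gbe_mb_s:.0f} MB/s")

# ~2880 IOPS at 8 KB is only ~23 MB/s: a single 1GbE link is nowhere near
# saturated, so the extra headroom of 10GbE goes unused. MPIO across several
# 1GbE ports adds even more network headroom before 10GbE becomes necessary.
```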

So what is coming this year:

  • Support for Linux hosts.
  • Hyper-V CSV support.
  • A new version of SAN HQ that can not only monitor the load on the storage arrays but also tell you how much load you can add before the arrays start to suffer and drop below a certain performance level (a rough sketch of that headroom idea follows after this list).
  • Role-based management.
  • Strong VMware Storage API integration, offloading storage commands to the Dell storage array.
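
The headroom feature mentioned for the new SAN HQ comes down to a calculation along these lines. This is a minimal, hypothetical sketch: the per-disk IOPS figure and the safety margin are assumptions for illustration, not SAN HQ internals.

```python
# Minimal sketch of the "headroom" idea: how much extra load fits before
# the array drops below an acceptable performance level? The per-disk IOPS
# and safety margin are assumptions for illustration, not SAN HQ internals.

def estimate_headroom(current_iops: float,
                      disks: int,
                      iops_per_disk: float = 180,
                      safety_margin: float = 0.8) -> float:
    """Extra IOPS the array can absorb while staying under `safety_margin`
    of its theoretical spindle ceiling."""
    ceiling = disks * iops_per_disk      # theoretical spindle limit
    budget = ceiling * safety_margin     # stay below this to keep latency flat
    return max(0.0, budget - current_iops)

# Example: a 16-disk member currently serving 1,500 IOPS
extra = estimate_headroom(current_iops=1_500, disks=16)
print(f"Roughly {extra:.0f} additional IOPS of headroom")   # ~804 IOPS
```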


Dell VDI storage array

One of the most interesting new developments was the announcement of a new storage array that can give VDI environments a big boost. As you may already know, storage is a critical component of your VDI environment. When you put a high density of desktop workloads on your back end, things like boot storms, antivirus storms, patch storms, and application installation and update storms become new challenges.

What you want from your storage array is to absorb those storms. How can we do that? Simple: mix SSDs for the IOPS with SAS disks for the terabytes. Put them in one array and make it intelligent, so that when you need IOPS the SSDs are used and when you don’t the load shifts to the SAS disks, all transparent to the underlying hypervisor. With this kind of storage array you can roughly triple the number of supported desktops per SAS disk.
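
The placement logic behind such a hybrid array comes down to keeping the hottest data on SSD and letting colder data sit on SAS. Below is a minimal, hypothetical sketch of that idea; the page-based model, the SSD capacity and the promotion rule are assumptions for illustration, not the actual EqualLogic tiering algorithm.

```python
# Toy model of hybrid SSD/SAS tiering: keep the hottest pages on SSD,
# demote colder pages to SAS. Thresholds and capacities are illustrative
# assumptions, not the actual EqualLogic placement algorithm.

from collections import Counter

SSD_PAGES = 4          # assumed number of pages that fit on the SSD tier
access_counts = Counter()
placement = {}         # page id -> "SSD" or "SAS"

def record_access(page: int) -> None:
    """Count an access and re-evaluate placement for this page."""
    access_counts[page] += 1
    rebalance()

def rebalance() -> None:
    """Promote the most-accessed pages to SSD; everything else stays on SAS."""
    hottest = {p for p, _ in access_counts.most_common(SSD_PAGES)}
    for page in access_counts:
        placement[page] = "SSD" if page in hottest else "SAS"

# Simulate a boot storm hammering a few shared base-image pages (0-3)
for _ in range(100):
    for page in (0, 1, 2, 3):
        record_access(page)
record_access(42)   # a rarely touched user-data page

print(placement)    # pages 0-3 land on SSD, page 42 stays on SAS
```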


Data Center Bridging

Dell is also working to prepare the network stack for the industry standard DCB, which stands for Data Center Bridging. For Ethernet to carry LAN, SAN and IPC traffic together and achieve network convergence, some enhancements to the standard are required. These enhancement protocols are collectively known as the Data Center Bridging (DCB) protocols, also referred to as Enhanced Ethernet (EE), and are defined by the IEEE 802.1 Data Center Bridging task group.


iSCSI and iSCSI over DCB

iSCSI, an IETF standard since 2003, is the encapsulation of SCSI commands transported over a TCP/IP network via Ethernet, and is by nature a lossless storage fabric: recovery from dropped packets and from over-subscribed, heavy network traffic patterns is inherent in its design. So why would iSCSI need the assist of Data Center Bridging (DCB)? iSCSI over DCB reduces latency on over-subscribed networks and provides predictable application responsiveness by eliminating Ethernet’s dependence on TCP/IP (or SCTP) for the retransmission of dropped frames. iSCSI over DCB adds the reliability that enterprise customers need for their data center storage.
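
A rough way to see why this matters: with plain TCP, a dropped frame costs at least an extra round trip for a fast retransmit, or a full retransmission timeout, while Priority Flow Control simply pauses the sender for a moment instead of dropping. The Python toy below compares the two; all timings are assumed, illustrative values, not measurements.

```python
# Toy latency comparison: dropping a frame and recovering via TCP versus
# briefly pausing the sender with Priority Flow Control (PFC).
# All timings are illustrative assumptions, not measured values.

rtt_ms = 0.5          # assumed round-trip time on the SAN
tcp_rto_ms = 200      # assumed minimum TCP retransmission timeout
pfc_pause_ms = 0.05   # assumed duration of a PFC pause under congestion
base_ms = 1.0         # assumed base I/O service time

def io_latency_ms(drop: bool, fast_retransmit: bool) -> float:
    """Latency of one I/O: base service time plus any recovery penalty."""
    if not drop:
        return base_ms
    # A fast retransmit costs roughly one extra RTT; a timeout costs the RTO.
    return base_ms + (rtt_ms if fast_retransmit else tcp_rto_ms)

print("No congestion        :", io_latency_ms(drop=False, fast_retransmit=False), "ms")
print("Drop, fast retransmit:", io_latency_ms(drop=True, fast_retransmit=True), "ms")
print("Drop, TCP timeout    :", io_latency_ms(drop=True, fast_retransmit=False), "ms")
print("PFC pause instead    :", base_ms + pfc_pause_ms, "ms")
```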

Dell is supporting the Ethernet Alliance with a Data Center Bridging iSCSI solution. This includes a Dell EqualLogic PS Series iSCSI storage array featuring 10GbE, SFP+, Data Center Bridging, Priority Flow Control, DCBx protocol, and Enhanced Transmission Selection.