FCoE: Networks & Storage Convergence




Overview

Drivers for the NextGen Datacenter Infrastructure

Ethernet has achieved ubiquity in connecting millions of PCs to servers, and the resulting dramatic cost declines in Ethernet adapters and switches have led SMBs and midsize enterprises to gravitate toward Ethernet-based infrastructure for networking and storage equipment. But the lossy nature of Ethernet (dropping packets when network congestion occurs) has kept IT managers from running mission-critical business or high-performance technical applications in an all-Ethernet storage-to-server environment.

As a result, datacenters have resorted to three separate networks to meet the specific needs of different workloads: a Storage Area Network (SAN) using a lossless fabric based on the Fibre Channel (FC) protocol, which provides the robustness and performance needed for storage-to-server traffic; IP for network traffic; and InfiniBand (IB) for clustered high-performance computing (HPC) requiring low latency. Further, a protocol called FCIP (a point-to-point wide-area connection that tunnels between two FC switches over an extended IP link) allows FC frames to be transported over distance using IP-based networks.

On another front, the advent of technologies such as multicore processors, server and storage virtualization, high-density blade servers, and Web 2.0 applications accelerated the demand for higher server I/O performance (in both latency and bandwidth) and the adoption of multiple fabrics for network, storage I/O, and cluster-computing IPC traffic. Server virtualization in particular (supporting multiple virtual machines, applications, and operating systems on the same physical server) generates significant I/O traffic, putting severe demand on existing multiport 1GbE network infrastructures.

The demand is particularly high on 1GbE storage systems, which are expected to support higher I/O rates, greater capacity, faster provisioning, and non-disruptive application delivery. Multi-socket, multicore servers support higher workload levels that demand greater network throughput, while shared networked storage behind virtualized servers requires higher network bandwidth between servers and storage.

iSCSI-based networks, which encapsulate block-level SCSI data into IP packets and run end-to-end over Ethernet, became popular, as did the newer iWARP protocol, a low-latency derivative of Remote Direct Memory Access (RDMA) running over 10GbE for high-performance clustering applications. Further, the availability of free iSCSI initiator drivers embedded in Microsoft Windows and Linux, together with low-cost target drivers in storage disk systems, made iSCSI over IP-based Ethernet very popular in small enterprises, workgroups, and remote offices. A sketch of what this encapsulation looks like on the wire follows below.
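To make "SCSI encapsulated into IP packets" concrete, here is a minimal Python sketch (not from the report) that packs the 48-byte Basic Header Segment RFC 3720 defines for a SCSI Command PDU; the LUN, task tag, READ(10) parameters, and target address are illustrative assumptions, and a real session would first complete the iSCSI login phase.

```python
# Minimal sketch, not a complete initiator: pack an iSCSI SCSI-Command PDU's
# 48-byte Basic Header Segment (RFC 3720) to be sent over an ordinary TCP
# connection (well-known port 3260) -- which is all "SCSI over IP" means.
import struct

def build_scsi_command_bhs(lun: int, task_tag: int, cdb: bytes,
                           data_length: int, cmd_sn: int) -> bytes:
    """Pack the 48-byte Basic Header Segment of a SCSI Command PDU."""
    assert len(cdb) <= 16, "longer CDBs need an Additional Header Segment"
    bhs = bytearray(48)
    bhs[0] = 0x01                                 # opcode: SCSI Command
    bhs[1] = 0x80 | 0x40 | 0x01                   # F bit + R (read) + SIMPLE attr
    struct.pack_into(">Q", bhs, 8, lun << 48)     # LUN (peripheral addressing)
    struct.pack_into(">I", bhs, 16, task_tag)     # Initiator Task Tag
    struct.pack_into(">I", bhs, 20, data_length)  # Expected Data Transfer Length
    struct.pack_into(">I", bhs, 24, cmd_sn)       # CmdSN
    bhs[32:32 + len(cdb)] = cdb                   # SCSI CDB rides inside the PDU
    return bytes(bhs)

# Usage: a READ(10) of one 512-byte block at LBA 0 (all values assumed).
read10 = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 1, 0)
pdu = build_scsi_command_bhs(lun=0, task_tag=1, cdb=read10,
                             data_length=512, cmd_sn=1)
# socket.create_connection(("192.0.2.10", 3260)).sendall(pdu)  # after login
```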

Over the last 7-8 years, large enterprises have invested heavily in FC SANs, and the last thing they want to do is rip and replace these robust, albeit expensive, investments. Yet they face the conundrum of very high management costs: different IT staff, with different expertise and experience, must manage the networking and storage domains separately and completely independently of each other. But replacing everything in the datacenter with Ethernet and iSCSI IP SANs, despite the attractive cost, has its own issues. For one thing, IP-based Ethernet is subject to variable latency, called latency jitter; left unchecked, severe latency jitter can even bring down applications. IP networks can use TCP/IP's recovery and flow-control capabilities to contain latency effects up to a point, but TCP/IP overhead is an unacceptable burden for FC traffic.

The Holy Grail – Needs of the NextGen Datacenter

Having to deal with overlapping networks (IP, FC, IB), IT organizations incur costs in numerous ways. The vision of I/O consolidation and unification equates to creating and operating a single network instead of three: the ability of an adapter, switch, and/or storage system to use one physical infrastructure that can carry all traffic types. This means purchasing less equipment and fewer spares (host bus adapters, server adapters, cables, switches, and storage systems) while drastically reducing ongoing management and maintenance costs (power/cooling, administration, training, etc.). One of the primary enablers of I/O consolidation and unification is 10-Gigabit Ethernet, a technology with bandwidth and latency characteristics sufficient to support multiple traffic flows on the same network link.

But the holy grail for datacenter designers has always been a network that leverages the ubiquity of low-cost Ethernet: a single converged network that carries both network and storage I/O traffic with their very different characteristics, handles their differing requirements efficiently, lowers equipment costs, and, even more importantly, lowers management costs by using a single set of management tools that simplifies administrative tasks and training. Another requirement is support for block-based storage (vs. file-based storage), which provides the flexibility to trade latency against bandwidth to meet the performance and availability requirements of various workloads while remaining agnostic to the OS deployed.

Rise of a Converged Network

The solution to true convergence of these disparate networks lay in enhancing Ethernet so that it can transport mission-critical data by running FC over lossless Layer 2 Ethernet.

FCoE was devised to retain the benefit of low-cost Ethernet everywhere while preserving the strength of FC as the dominant storage interconnect in large datacenters for serious business continuity, backup, and disaster recovery operations. The Fibre Channel over Ethernet (FCoE) protocol specification maps Fibre Channel natively over Ethernet and is independent of the Ethernet forwarding scheme. It allows an evolutionary approach to I/O consolidation by preserving all Fibre Channel constructs; maintaining the latency, security, and traffic-management attributes of FC; and preserving investments in tools, training, and FC SANs.

In the wake of this anomaly of separate networks for different workloads (OLTP, business intelligence, high-performance computing, Web 2.0, etc.; for an incisive look see the IMEX Next Gen Data Center Industry Report), fierce debate has raged over the future of interconnects, creating major uncertainty and headaches for IT decision-makers: Will Fibre Channel adoption outpace iSCSI deployments, or will new IP SAN developments make iSCSI robust enough to become a viable alternative for high-end storage as well? Should Fibre Channel SANs be deployed throughout the organization now, on the belief that Fibre Channel has a long-term future, or will all storage-host interconnection eventually go to iSCSI? What is the chance that IB displaces FC in midsize organizations, with iSCSI and IB becoming the mainstays?

FCoE enables FC traffic to run over Ethernet with no performance degradation and without requiring any changes to the FC frame. It supports SAN management domains by maintaining logical FC SANs across Ethernet. Since FCoE directly maps FC over Ethernet, it enables FC to run over Ethernet with no (or minimal) changes to drivers, software stacks, and management tools; the encapsulation is sketched below.
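As an illustration of how little the FC frame changes, the following Python sketch wraps an FC frame in the FC-BB-5 style encapsulation; the MAC addresses are placeholders and the SOFi3/EOFt delimiter codes are the commonly cited values, so treat this as a hedged sketch rather than a reference implementation.

```python
# Minimal sketch of FCoE encapsulation: the unmodified FC frame is wrapped
# between a 14-byte FCoE header (version + reserved + SOF) and an EOF
# trailer, behind Ethertype 0x8906. MAC addresses are illustrative.
import struct

FCOE_ETHERTYPE = 0x8906
SOF_I3, EOF_T = 0x2E, 0x42   # start/end-of-frame codes as given in FC-BB-5

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    eth_header = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
    fcoe_header = bytes([0x00]) + bytes(12) + bytes([SOF_I3])  # ver=0, reserved, SOF
    trailer = bytes([EOF_T]) + bytes(3)                        # EOF + reserved
    return eth_header + fcoe_header + fc_frame + trailer       # NIC appends the FCS

# The FC frame (24-byte header + payload + FC CRC) crosses the wire untouched,
# which is why WWNs, zoning and existing FC tooling keep working unchanged.
```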

With Fibre Channel over Ethernet, IT organizations can incorporate FCoE-aware Ethernet switches into the access layer and converged network adapters or server adapters with an FCoE initiator at the host layer. This simplifies the network topology so that only a single pair of adapters and a single pair of network cables are needed to connect each server to both the Ethernet and the Fibre Channel networks. The FCoE-aware switches separate LAN and SAN traffic, providing seamless connectivity to existing storage systems.

FCoE is intended primarily as a way to unify FC and Ethernet networks, but it may also have an impact on other technologies being used in enterprise SANs and data centers.

 

A comparison of the major storage interconnect options:

Factor                  | iSCSI     | SAS             | FC         | FCoE       | IB-SRP | FCIP       | iFCP
------------------------|-----------|-----------------|------------|------------|--------|------------|-----
Best Suited Environment | SMB, ROBO | SMB, Enterprise | Enterprise | Enterprise | HPC    | Enterprise |
Cost                    | LOW       | LOW             | HIGH       | V.HIGH     | V.HIGH | V.HIGH     | HIGH
Performance             | MEDIUM    | HIGH            | HIGH       | HIGH       | HIGH   | LOW        | LOW
TCP Required            | YES       | NO              | NO         | NO         | NO     | YES        | YES
CPU Overhead            | HIGH      | LOW             | LOW        | LOW        | LOW    | HIGH       | HIGH
Gateway Overhead        | HIGH      | LOW             | LOW        | LOW        | HIGH   | HIGH       | HIGH
Security Robustness     | MEDIUM    | LOW             | HIGH       | HIGH       | LOW    | MEDIUM     | LOW
FC SW on Host Needed    | NO        | NO              | YES        | YES        | NO     | NO         | YES
Management Features     | LOW       | HIGH            | HIGH       | HIGH       | MEDIUM | NO         | YES
IP Routable             | YES       | NO              | NO         | NO         | NO     | YES        | YES

Meeting the Challenge with FCoE and CEE

Network-only 10GbE does not have the same data-qualification and interoperability requirements that storage does. Allowing FC and Ethernet storage traffic to run over a converged fabric, however, requires significant work to make 10GbE a lossless, native network layer. Although Ethernet already has some ability to isolate traffic using the Virtual LAN (VLAN) protocol and to differentiate service with Quality of Service (QoS) mechanisms, it is still prone to network congestion, latency, and frame dropping, an unacceptable state of affairs for a converged network. Ethernet's "pause" frame was designed early on for flow control but was rarely implemented, because it pauses all traffic on the link and thus fails the selective-pause criterion set for a converged network.

FCoE - Technology Overview

Key to the unification of storage and networking traffic in next-generation datacenters is an enhanced Ethernet standard called Converged Enhanced Ethernet (CEE), now being adopted by major suppliers; FCoE depends heavily on CEE. This new form of Ethernet includes enhancements that make it a viable transport for storage traffic and storage fabrics without requiring TCP/IP overhead. These enhancements include Priority-based Flow Control (PFC), Enhanced Transmission Selection (ETS), and Congestion Notification (CN).

Enhancements in CEE Specifications include:

  • Priority Flow Control (PFC - 802.1Qbb)
    • Creates priority groups for data classes using 802.1Q tags
    • Controls data flow by pausing low-priority data groups
    • Allows lossless storage (FCoE) traffic without affecting classical networking (Ethernet) traffic
      Classical Ethernet manages congestion by dropping packets, leaving high-level, connection-oriented protocols to recover from the losses. The IEEE 802.3x Pause mechanism transforms Ethernet into a lossless fabric, allowing it to emulate Fibre Channel operation, but it pauses an entire port; Priority Flow Control (802.1Qbb) instead operates per 802.1p priority class. PFC allows multiple traffic types to share a common, enhanced Ethernet fabric: virtual lanes are created within an Ethernet link, with each lane assigned a priority level. During periods of heavy congestion, lower-priority traffic can be paused, allowing higher-priority and latency-sensitive traffic such as VoIP and data storage to continue. With PFC, Ethernet can simultaneously carry FCoE, HPC clustering, and classical IP traffic (a sketch of a PFC control frame appears after this list).
  • Enhanced Transmission Selection (ETS - 802.1Qaz)
    • Allows bandwidth allocation based on priority groups
      Priority class groups allow finer quality-of-service (QoS) control among the different "lanes" on the same physical cable, letting storage traffic be managed as a group with configurable latency and bandwidth to support QoS SLAs.
  • Congestion Notification & Congestion Management (CN - 802.1Qau)
    • Allows throttling of Ethernet traffic at the edge of the network when congestion occurs within the network
      IEEE 802.1Qau supports end-to-end congestion notification, which is particularly needed in large networks. When congestion occurs in the interior of the network, traffic sources at the edges are instructed to throttle transmission, reducing the congestion.
  • Energy Efficient Ethernet (EEE)
    • Automatically reduces the power use of Ethernet components during periods of low utilization
      When completed, EEE is expected to offer further operational cost reductions for Ethernet-based storage relative to FC-based storage.
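To ground the PFC description above, here is a hedged Python sketch of the 802.1Qbb per-priority pause frame as commonly documented (MAC Control Ethertype 0x8808, opcode 0x0101, a priority-enable vector, and eight per-priority timers); the source MAC is a placeholder and the use of priority 3 for FCoE is a common convention, not a requirement.

```python
# Sketch of PFC's per-priority pause: unlike the 802.3x PAUSE (opcode 0x0001),
# the PFC control frame carries an enable vector plus one timer per 802.1p
# priority, so only the storage lane stalls while other traffic keeps flowing.
import struct

PFC_DST_MAC = bytes.fromhex("0180C2000001")  # MAC Control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101

def build_pfc_frame(src_mac: bytes, pause_quanta_by_priority: dict) -> bytes:
    enable_vector = 0
    timers = [0] * 8
    for priority, quanta in pause_quanta_by_priority.items():  # priorities 0..7
        enable_vector |= 1 << priority
        timers[priority] = quanta            # one quantum = 512 bit times
    body = struct.pack(">HH8H", PFC_OPCODE, enable_vector, *timers)
    return PFC_DST_MAC + src_mac + struct.pack(">H", MAC_CONTROL_ETHERTYPE) + body

# Pause only priority 3 (often used for FCoE) for the maximum interval,
# leaving LAN traffic on the other priorities untouched:
frame = build_pfc_frame(bytes(6), {3: 0xFFFF})
```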

FCoE hosted on 10 Gbps Enhanced Ethernet extends the reach of Fibre Channel (FC) storage networks, allowing them to connect virtually every datacenter server to a centralized pool of storage. Using the FCoE protocol, FC traffic can now be mapped directly onto Enhanced Ethernet. FCoE allows storage and network traffic to be converged onto one set of cables, switches, and adapters, reducing cable clutter, power consumption, and heat generation. FCoE is designed to use the same operational model as native Fibre Channel technology: services such as discovery, worldwide name (WWN) addressing, zoning, and LUN masking all operate the same way in FCoE as they do in native FC. Storage management using an FCoE interface has the same look and feel as storage management with traditional FC interfaces.

Mapping the FC Stack onto FCoE

FCoE provides the capability to carry the FC-2 layer over the Ethernet layer, allowing Ethernet to transmit the upper Fibre Channel layers (FC-3 and FC-4) over the IEEE 802.3 Ethernet layers. Using IEEE 802.1Q tags, Ethernet can be configured with multiple virtual LANs (VLANs) that partition the physical network into separate and secure virtual networks. Using VLANs, FCoE traffic can be separated from IP traffic so that the two domains are isolated and neither network can be used to view traffic on the other; the sketch below shows how such a tag is constructed.
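The following sketch, assuming illustrative VLAN IDs and priorities (they are not mandated values), shows how a single 802.1Q tag carries both the VLAN ID that isolates the FCoE domain and the 802.1p priority that PFC acts on.

```python
# Sketch of 802.1Q tagging: a 4-byte tag (TPID 0x8100 + TCI) inserted after
# the MAC addresses carries the VLAN ID and the 802.1p priority bits.
import struct

def tag_8021q(dst_mac: bytes, src_mac: bytes, pcp: int, vlan_id: int,
              ethertype: int, payload: bytes) -> bytes:
    tci = (pcp << 13) | (vlan_id & 0x0FFF)    # priority (3b) + DEI (1b=0) + VID (12b)
    return (dst_mac + src_mac
            + struct.pack(">HH", 0x8100, tci)        # the 802.1Q tag itself
            + struct.pack(">H", ethertype) + payload)

# e.g. FCoE on an assumed VLAN 100 at priority 3, IP traffic on VLAN 200:
# tag_8021q(dst, src, pcp=3, vlan_id=100, ethertype=0x8906, payload=fcoe_pdu)
# tag_8021q(dst, src, pcp=0, vlan_id=200, ethertype=0x0800, payload=ip_packet)
```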

Because FCoE preserves the complete Fibre Channel frame, all traditional FC management functions, such as zoning and LUN masking, are preserved as well. Zones and zone sets are created and managed by switches in the same fashion, and LUN masking is performed in the same manner as with traditional Fibre Channel fabrics. Application servers view storage LUNs in the same manner whether they are presented with standard FC technology or with FCoE technology; a toy model of these access checks follows.
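As a conceptual illustration only (the WWPNs, zone names, and LUN numbers below are invented), this toy Python model shows why zoning and LUN masking behave identically regardless of transport: both are checks on FC-level identities that FCoE leaves untouched.

```python
# Toy model of the two access controls preserved under FCoE: zoning (enforced
# by the fabric) gates which WWPNs may see each other; LUN masking (enforced
# by the array) gates which LUNs a permitted initiator may address.
ZONES = {
    "zone_db": {"10:00:00:00:c9:aa:bb:01",   # database server HBA/CNA (invented)
                "50:06:01:60:44:60:22:99"},  # storage array port (invented)
}
LUN_MASKS = {  # (array_port_wwpn, initiator_wwpn) -> set of visible LUNs
    ("50:06:01:60:44:60:22:99", "10:00:00:00:c9:aa:bb:01"): {0, 1},
}

def can_access(initiator: str, target: str, lun: int) -> bool:
    zoned = any({initiator, target} <= members for members in ZONES.values())
    masked_in = lun in LUN_MASKS.get((target, initiator), set())
    return zoned and masked_in   # identical checks over native FC and FCoE

assert can_access("10:00:00:00:c9:aa:bb:01", "50:06:01:60:44:60:22:99", 0)
assert not can_access("10:00:00:00:c9:aa:bb:01", "50:06:01:60:44:60:22:99", 7)
```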

As storage systems equipped with native FCoE interfaces come to market, IT organizations can integrate them into their data center networks by connecting them directly into the FCoE fabric; over time, data centers can migrate to an end-to-end converged fabric that uses FCoE-aware initiators, switches, and storage targets while maintaining their traditional Fibre Channel management tools.

FCoE: Pros & Cons

The business benefits of this improved topology include reduced cost and complexity, better and more flexible performance, and reduced power consumption, all while providing seamless connectivity with existing Ethernet and storage networks.

Reduced Cabling Spaghetti
Carrying FCoE and IP traffic over the same physical cables allows IT organizations to cut the number of network cables in each rack in half, reducing cabling costs and cabling errors, improving cooling efficiency through less restricted front-to-back airflow, and enabling new server racks to be configured and deployed more rapidly.
Unified Management
A new class of 10GbE converged network adapters (CNAs) helps simplify FCoE deployment, allowing it to be incorporated without disrupting current data center management practices, software, or the roles of network and storage administrators. CNAs present both an Ethernet interface and a Fibre Channel initiator to the server, so the OS sees two physical devices. CNAs implement FCoE in hardware, whereas server adapters implement it as a software initiator; either way, FCoE makes the existence of the converged network transparent to the OS and applications, allowing storage and network administrators to manage their respective domains just as they do today. Consistent management eases FCoE deployment while reducing operating expenses.
High Performance
With 10 Gbps Ethernet available today, and standards committees already having 40 and 100 Gbps Ethernet on the roadmap, FCoE products and their extensions have a long life ahead.
Power Savings
IT equipment consumes over 50% of the energy in a data center, of which networking and storage account for roughly 25%. The reduced power consumption from using fewer NICs and switches matters all the more because every watt saved at the equipment amounts to roughly two watts saved at the data center electrical source (a back-of-envelope calculation appears after the Management Costs note below).
High Availability
Existing heavy investment in FC notwithstanding, storing mission-critical data is first and foremost about reliability and data integrity, and only then about speed. FC's strengths stem from its ability to process and transport data quickly and effectively, based on its highly efficient four-layer architecture. While network speeds are important and development of faster FC will continue, it is fast Ethernet speeds combined with FCoE's robustness that will propel it further; witness the lineup of major vendors behind it.
Management Costs
The ability of FCoE and Fibre Channel to coexist allows administrators to use the same tools and techniques they use today to manage their storage. The ability of FCoE networks to connect either natively to FCoE targets or directly to Fibre Channel networks preserves the investments that organizations have made in their storage infrastructure.
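As a rough illustration of the Power Savings arithmetic above (the PUE of 2.0, the per-server adapter savings, and the server count are assumptions, not measurements):

```python
# Back-of-envelope sketch of the "every watt saved counts twice" claim: with
# a power usage effectiveness (PUE) near 2.0, each watt removed at the
# equipment also removes roughly one watt of cooling/distribution overhead.
def facility_watts_saved(equipment_watts_saved: float, pue: float = 2.0) -> float:
    """Scale an equipment-level saving up to the facility's electrical source."""
    return equipment_watts_saved * pue

# e.g. replacing separate NIC + HBA pairs with one CNA pair might save an
# assumed 10 W per server; across a hypothetical 500-server installation:
print(facility_watts_saved(10.0 * 500))   # -> 10000.0 W at the source
```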

Extending FC into the Future

The Fibre Channel over Ethernet (FCoE) protocol may still be in the early stages of development, but its capability to consolidate separate storage and data networks into a single unified data center architecture holds a great deal of potential and value.

SMBs, ROBOs, and midsize enterprises will also benefit strongly from the lossless Ethernet enhancements in CEE, adding to the strong economies of scale and management simplicity that many iSCSI SANs already provide.

Some of the leading storage system companies, such as Emulex, QLogic, Brocade, Cisco, EMC, IBM, HP, HDS, Sun, and others, having invested heavily in FC and enjoying excellent margins on their FC products, began to feel the heat and recognized the threat posed by the incursion of iSCSI products riding the popularity of Ethernet everywhere and the associated price erosion driven by volume economics. It is only natural that they all actively support the FCoE and CEE efforts in order to extend the life of FC. Their logic and motivation in providing extensions to classical Ethernet are understandable for financial (and technical) reasons: in spite of all of iSCSI's qualities (an all-IP environment, TOE/RDMA engines, low-cost implementations, etc.), it still rides on TCP/IP and is lossy.

Cisco, Emulex, and Brocade have taken an aggressive stance in promoting the FCoE effort and driving the development of the standard, and have a lot riding on it. Even QLogic, which seems to have a leg in both camps, will ultimately benefit, as will all players with strong FC products. But it is not only the FC vendors who stand to gain: in our view the CEE tide will raise both boats, FCoE and iSCSI, and in the end it is end users who benefit most from open standards.

Supplier Companies

Industry heavyweights with strong FC investments are at the heart of the movement, and startups such as Woven Systems are actively introducing related products into the market.

Components: Brocade, Fujitsu Limited, Intel, LSI, PMC-Sierra, Mellanox, Vitesse Semiconductor

Software: NeoScale Systems, VMware

Testers: Finisar

Storage Subsystems: EMC, Emulex, Hitachi Data Systems, NetApp, QLogic

Systems: Cisco, Fujitsu-Siemens, Hewlett Packard, IBM, Sun Microsystems

IMEX Industry Report – FCoE in NGDC 2009

Synopsis

IMEX Research's Industry Report on FCoE in the NexGen DataCenter 2009 addresses the need to fully understand the strategic and tactical directions IT industry vendors are taking in shaping the converged networking and storage fabric products that will be deployed in data center LANs, SANs, and WANs. It compares the existing FC and iSCSI SAN protocols with the upcoming FCoE (Fibre Channel over Ethernet) running on CEE (Converged Enhanced Ethernet over 10GbE) against various criteria for meeting CIOs' SLA needs and the migration strategies required to protect existing investments. It also provides detailed guidance for IT infrastructure vendors and IT decision-makers on how to optimally deploy these converged storage and network fabrics to best meet mission-critical operational needs while significantly improving the TCO/ROI of datacenter operations.

For technical and marketing managers, the report enumerates the technologies, architectures, and go-to-market solutions available to vendors. It compares the advantages of FCoE against other storage technologies, specifically iSCSI and InfiniBand, and explains if, when, and where FCoE is likely to make competitive market inroads against them. Based on interviews with end users from different verticals, a number of use-case solutions are explored for different environments, since FCoE can be deployed in a variety of topologies, either as an addition to existing storage and data infrastructure or as the basis of a completely new network in greenfield sites.
In the appendices, it explains what FCoE is, how it works, what its benefits are, and how vendors are likely to implement the protocol in new and existing storage technologies.

Target Audience

The report provides critical data and an incisive analysis for a range of industry participants, including:

  • Storage technology suppliers needing a deeper understanding of FCoE's potential as a challenger to iSCSI and InfiniBand
  • SAN operators and enterprises seeking a clear, independent analysis of FCoE's costs and benefits for upgrading infrastructure
  • Investors needing a better understanding of the scale of the opportunity FCoE presents

Subjects analyzed

  • Datacenter technology trends: what emergent technologies will have the most TCO impact on the next-gen datacenter?
  • Strategic vision: what issues, opportunities, and threats do datacenters foresee as they plan to implement new convergence technologies and solutions in the next 2-5 years?
  • Enterprise architecture: what are the datacenter requirements for infrastructure, system utilization, compute density, change-control processes, and alternate power and cooling plans?
  • Case studies: experiences of early adopters in implementing and managing data centers, with profiles of leading US data centers and service providers
  • Datacenter/infrastructure vendors: products, profiles, and strategies

Major chapters

  1. Executive Summary
  2. Market Drivers and Industry Dynamics
  3. Market Segments and Product Requirements
  4. Market Forecast and Market Shares
  5. Enabling Technologies and Standards
  6. Competitive Products and Positioning
  7. Major Suppliers Portfolio and Strategies
  8. Go-to-Market/Distribution Channels
  9. Recommendations for Vendors, Channel, End Users & Investors
  10. Methodology and References

 



   

   

IMEX Research, 1474 Camino Robles San Jose, CA 95120 (408) 268-0800 http://www.imexresearch.com